Automation & Orchestration
It Starts with Knowing What You Have
For many organizations, the move to software-defined infrastructure and cloud technology seems to be the next necessary step. The demand to orchestrate automated processes that enable security operations, distributed work, and cloud adoption is unparalleled. Combine that with the drive to digitize businesses and the new remote-working "normal," both accelerated by COVID-19, and you have unprecedented pressure for business transformation. The world has become so dependent on technology and its rapid innovation that, without thoughtfulness, there will be many casualties along the way.
In recent weeks, daily reports of hacking, ransomware, and system outages have become commonplace. Industry leaders talk about the need for security, automation, and cloud optimization solutions, but the pandemic accelerated our rush to open the doors to our networks.
How do we catch up, manage, lock down, and then leverage this change to our benefit rather than fall victim to it? In our opinion, we need to get back to basics, move beyond the hype, and define and document what our enterprises have in place. Reinventing how we think about our systems is crucial.
Launching new software-defined infrastructure, especially in the cloud, without clear and detailed knowledge of your existing systems and infrastructure is like trekking cross-country without a compass or a map.
As crazy as this sounds, what if the first step to infrastructure innovation was just going back to basics and doing things the right way, recording them so that future execution is correct, repeatable, and controlled?
It starts with "knowing what you got," describing where you want to be, and then using that knowledge to assemble the components and plans that enable you to get where you want to go. That allows new technological innovations and efficiencies to be leveraged to accelerate delivery and improve system integrity, or as we call it, "Thoughtful Operations."
It is this representation of knowledge, where both state and outcomes are documented and understood, that enables true digital transformation. Too often, organizations implement platforms for incident, problem, and change management before first figuring out what they have, "knowing what you got," and documenting it.
We realize that "know what you got" is a simplification of the challenge, but it is a useful mental model when put in the context of automation and orchestration. You should start with a configuration management database (CMDB).
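To make the idea concrete, here is a minimal sketch of what a CMDB is at its core: a queryable inventory of configuration items and their relationships. All names, fields, and items below are invented for illustration; real CMDB products have far richer schemas.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """One record in the CMDB: something you own and what you know about it."""
    name: str
    ci_type: str                                      # e.g. "server", "subnet"
    attributes: dict = field(default_factory=dict)
    depends_on: list = field(default_factory=list)    # names of other CIs

class CMDB:
    """A toy in-memory CMDB: 'knowing what you got' as queryable data."""
    def __init__(self):
        self._items = {}

    def register(self, ci: ConfigurationItem):
        self._items[ci.name] = ci

    def find(self, ci_type: str):
        return [ci for ci in self._items.values() if ci.ci_type == ci_type]

    def dependencies(self, name: str):
        return [self._items[d] for d in self._items[name].depends_on]

cmdb = CMDB()
cmdb.register(ConfigurationItem("subnet-10", "subnet", {"cidr": "10.0.10.0/24"}))
cmdb.register(ConfigurationItem("files01", "server",
                                {"os": "Windows Server", "role": "SMB file server"},
                                depends_on=["subnet-10"]))

print([ci.name for ci in cmdb.find("server")])           # ['files01']
print([ci.name for ci in cmdb.dependencies("files01")])  # ['subnet-10']
```

The point of the sketch is only that once "what you got" is recorded as structured data, it can answer questions: what servers exist, and what does each one depend on.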
So knowledge is the prerequisite. Unfortunately, what we have seen across engagements is that the "human process" tends to be ad hoc and reactive. We often discover that the thing we are looking to automate is idiosyncratic, shaped by what the individuals responsible for it knew and how they approached it. When there is instead a shared understanding of what must be done and what the outcome needs to be, you can design a process that is aligned with organizational strategies rather than tactical responses.
Getting the Process Right
For many organizations, the "need" to automate as quickly as possible was driven primarily by cost savings. For some, it has come at the expense of Thoughtful Operations and of truly understanding how their environments change. Further, these same organizations define automation and orchestration as if they were the same thing. They are not.
Automation and Orchestration Defined
Automation is fundamentally concerned with recording tasks and processes so that they can then be performed, at least in part, by a machine. It can be as basic as a runbook documenting the process and procedure in a standard language format (e.g., Ansible or another tool), enabling a human to repeatedly perform that process in a consistent way. As the practice matures, the idea is to extend the automation by allowing the tool to complete the process with a minimum of human intervention.
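A tool like Ansible expresses this idea in YAML playbooks; as a tool-neutral sketch, the essence of a runbook is an ordered, documented list of steps that a machine can execute consistently. The step names below are invented for illustration:

```python
def create_vm():
    """Provision a generic Windows Server VM."""
    return "vm created"

def install_smb_role():
    """Enable the SMB file server role."""
    return "smb role installed"

# The runbook: a documented, ordered procedure rather than tribal knowledge.
RUNBOOK = [create_vm, install_smb_role]

def execute(runbook):
    """Run every step in order; the docstrings double as the documentation
    a human would follow if the machine could not run the step."""
    log = []
    for step in runbook:
        log.append((step.__doc__, step()))
    return log

for doc, result in execute(RUNBOOK):
    print(f"{doc} -> {result}")
```

Because the procedure is recorded as executable steps, the same runbook serves both as documentation for a human and as input for a machine, which is exactly the maturing path described above.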
As an example, if the task is small enough and the requirement is to stand up a generic Windows Server and make it available as an SMB file server, you could conceivably do that in one piece of automation, and no explicit orchestration would be needed. But if you then have to put it in a subnet, configure firewall rules, and apply ACLs to restrict access to parts of that file share, then orchestration becomes necessary and distinct from simply automating a Windows server for use as a file server.
In the above example, orchestration is concerned with coordinating multiple discrete processes to produce a bigger picture. In an enterprise, this is often where business value is produced. Automating the stand-up of a server is a completely valid task. Orchestration, by contrast, enables removing the server from the load-balancing pool, applying patches, rebooting it, and putting it back in the pool so that there is no negative business impact.
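That rolling-patch pattern can be sketched in a few lines. The orchestration layer owns the ordering and the business constraint (never drain more than one server at a time); the discrete automation steps are passed in as callables. Everything here, including the stub implementations, is illustrative:

```python
def patch_pool(pool, servers, remove, patch, reboot, add):
    """Orchestrate a rolling patch: one server at a time, so the
    service stays up while each discrete automation step runs."""
    for server in servers:
        remove(pool, server)   # drain traffic away from this server
        patch(server)          # discrete automation: apply patches
        reboot(server)         # discrete automation: reboot
        add(pool, server)      # return it to service before the next one

# Stub automation steps that record what happened, for demonstration.
pool = {"web1", "web2", "web3"}
patched = []
in_service_during_patch = []

def remove(p, s):
    p.discard(s)
    in_service_during_patch.append(len(p))

def patch(s):
    patched.append(s)

def reboot(s):
    pass

def add(p, s):
    p.add(s)

patch_pool(pool, ["web1", "web2", "web3"], remove, patch, reboot, add)
print(sorted(patched))                 # ['web1', 'web2', 'web3']
print(min(in_service_during_patch))    # 2 -- two servers always stayed in service
```

The automation steps know nothing about each other; only the orchestration knows that the safe order is drain, patch, reboot, restore, and that is where the "no negative business impact" guarantee lives.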
Automation is the execution of discrete tasks, including the complete and comprehensive documentation of those tasks. Orchestration is connecting all of those documented pieces of automation together to produce meaningful business output.
Why Does the Distinction Matter?
Often when we discuss DevOps, the conversation turns to automation as the end goal because it is a best practice. Whereas, we believe that the reason it is a best practice is because there is power in understanding the underlying system well enough to automate those discrete tasks. This power is multiplied when it is combined with orchestration to accelerate the delivery of business value.
The Power of Understanding
When a workforce of five can begin to behave like a workforce of a thousand, everyone wins! This is doubly true when it is efficient and sustainable. Less frequently discussed, but key to the former, is the power of understanding the underlying system. We have always said that there is no substitute for understanding both what you are trying to achieve and what you are starting with. This is just as true in traditional disciplines like mechanical or structural engineering.
In IT specifically, when we start thinking about automation and orchestration, the tools are important, but more important is an assessment of where you are now in relation to where you want to be. Then, with that assessment in hand, understand and define the steps and pieces needed to get there. Starting here, before progressing to simplifying, standardizing, automating, orchestrating, and monitoring, is mandatory, because if you do not understand it, you cannot simplify it. Then, and only then, can we accurately evaluate which tools are appropriate for the work.
The Value of Good Communication
As it has evolved, we have seen the meaning of DevOps shift and change. What was originally an idea meant to encourage collaboration across organizational silos morphed into the idea that everybody could do everything. Today, we see silos once again emerging within organizations that claim to have a DevOps culture. For example, many teams have a member who is the de facto release engineer, whose role is to focus on very specific aspects of an automation.
We still see traditional IT organizations operating under the name of "DevOps," with a network team, a server team, and so on. DevOps has not been the answer because knowledge remains separated. In the strictest sense, DevOps is a methodology in which you need to know everything; faced with that challenge, people do not convert, and the result is DevOps teams in name only.
In the eight years since DevOps became a mainstream movement, it has become clear that there is no substitute for knowledge. The high performers understand the system inside and out, and they are also skilled with the tools that let them document the current state and desired state of the system.
DevOps has become a broad movement in many organizations. But even cloud companies like Amazon and Google have SREs, reliability engineers whose job is to focus on operations. It is not enough to say that everyone in every organization should be capable of all functions. That is where automation is a key enabler, because automation allows you to break the work apart.
You might ask, how do organizations adopt something like this while keeping knowledge specialized, so that everyone does not have to learn everything?
We believe in a model where information is shared freely, recorded in repositories, and automated where feasible, while still acknowledging that roles exist and that individuals take on one or more of those roles at any given time. In this model, you may still have engineers who are familiar with WAN networks and others who are familiar with the application code base but are not deep in all areas of the system. Those with the knowledge encode it in the form of documentation and automation scripts so that it can be used by those less familiar with that subcomponent of the system. When done properly, changes that need to be made are still made by the knowledge expert, but they are documented in a way that any member of the team can use. For example, any team member could deploy a new instance of the product, because any one of them can operate the orchestration, even if they could not have created the automation.
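As a small sketch of that encoding, the expert writes the automation once, embedding their knowledge of what is safe as validation, and any teammate can then run it without being deep in that subcomponent. The function name, environments, and steps below are invented for illustration:

```python
def deploy_instance(environment, version, run=print):
    """Written once by the subject-matter expert; runnable by anyone.
    The validation encodes the expert's knowledge of what is allowed,
    so a less-familiar operator cannot stray outside the safe path."""
    allowed = {"dev", "staging", "prod"}
    if environment not in allowed:
        raise ValueError(
            f"unknown environment {environment!r}; choose from {sorted(allowed)}")
    steps = [
        f"provision network for {environment}",
        f"deploy application {version} to {environment}",
        f"run smoke tests in {environment}",
    ]
    for step in steps:
        run(step)          # in a real system, each step invokes automation
    return steps

deploy_instance("staging", "2.4.1")
```

The operator needs to know only the interface (environment and version); the ordering of steps and the guardrails are the expert's knowledge, captured where the whole team can use it.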
Automation is deeply about engineering. Orchestration is about deeply understanding the end result, focusing on business process needs and the flow of value streams. In some ways, orchestration is almost business process engineering. For the most part, though, we do not think of orchestration in that context; as developers, we usually think about it only in strictly technical terms.
So having your orchestration engineers become deeply aware of business processes is a way to highlight the importance of their role in the DevOps chain. Consider that developers can struggle to understand the business value they are producing and to stay connected to the customer. Agile, Scrum, and other methodologies were intended to address this need: connecting the doer with the person who wants things done.
Without connecting automation and orchestration, a company may be very strong at automation and build the very best mousetrap, but if the goal was to catch antelope, the tool is not sufficient for the purpose.
Putting it all Together
Simply put, orchestration is the front end of automation, aligning the tools, processes, and specific knowledge with the business needs. Orchestration defines the policies and service levels that are then enforced using automated workflows, with human intervention woven in wherever applicable.
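That "policies enforced by workflows, with human intervention woven in" idea can be sketched as a simple gate: the policy decides which changes flow straight to automation and which pause for a person. The policy, risk field, and thresholds below are invented for illustration:

```python
def orchestrate(change, needs_review, approve, apply):
    """Enforce a policy: low-risk changes flow straight to automation;
    high-risk changes pause for human approval before anything runs."""
    if needs_review(change):           # the policy / service-level rule
        if not approve(change):        # the woven-in human intervention
            return "rejected"
    apply(change)                      # the automated workflow itself
    return "applied"

# Illustrative policy: anything above risk level 3 requires a human.
needs_review = lambda change: change["risk"] > 3
human_says_no = lambda change: False   # stand-in for a real approval step
no_op_apply = lambda change: None      # stand-in for the real automation

print(orchestrate({"name": "patch web tier", "risk": 5},
                  needs_review, human_says_no, no_op_apply))   # rejected
print(orchestrate({"name": "rotate log files", "risk": 1},
                  needs_review, human_says_no, no_op_apply))   # applied
```

The automation (`apply`) never changes; the orchestration layer is where the organization's policies and service levels are expressed and enforced.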
We strongly believe that the path to Thoughtful Operations starts with careful documentation of both your current state and your desired future state; only then can automation and orchestration effectively begin. Both current and future state are best captured in architecture diagrams, documents, and a configuration management database (CMDB).
A powerful enabling component in an orchestration solution is an active CMDB that is both automatically updated by changes and provides the information needed to implement automated changes as they are approved. OpStack’s experience in building active CMDBs for our clients will be the topic of our next whitepaper.
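The "active" quality can be sketched in miniature: the same inventory that drives the decision to change is updated the moment the change is applied, so the record never drifts from reality. The server name, patch-level field, and date format below are invented for illustration:

```python
# A toy inventory standing in for the active CMDB.
inventory = {"files01": {"patch_level": "2024-01"}}

def needs_patch(server, target, cmdb=inventory):
    """The CMDB drives decisions about what to change next."""
    return cmdb[server]["patch_level"] < target

def apply_patch(server, new_level, cmdb=inventory):
    """Apply the change and immediately record the new state, so the
    CMDB stays in step with reality rather than going stale."""
    # ... the actual patching automation would run here ...
    cmdb[server]["patch_level"] = new_level

print(needs_patch("files01", "2024-06"))   # True: the record says it is behind
apply_patch("files01", "2024-06")
print(needs_patch("files01", "2024-06"))   # False: the record was updated with the change
```

Read and write go through the same record: the CMDB both provides the information needed to implement approved changes and is automatically updated by them.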