Leveraging these core principles will provide a guiding light as your organisation designs, builds and operates your cloud estate.

Six Principles to Consider for Streamlined Cloud Operations

We have all heard about the principles used by Amazon, Google and Microsoft in developing their cloud platforms, and the outcomes they produce. The API became a first-class citizen; pay-for-use became the norm; and platform services were created to offer more significant capabilities with reduced management overhead. But what does all that mean for those of us architecting the applications and workloads to run on these platforms? This article explores a few of the core principles to keep in mind as you design, build and operate your cloud estates.

1. Utilise Minimum Viable Tooling

Many cloud architects fall into the like-for-like, or tool-centric, approach when trying to plot their journey to the cloud. The IT or security department declares that it needs Splunk/ServiceNow/<insert favourite tool name here> in place before any workloads (including dev/test) can be migrated. This decision greatly increases the upfront complexity required to get up and running in the cloud and is often overkill for the workloads being deployed. Instead of taking this approach, consider adopting a more capability-focused mindset, evaluating the specific capabilities required to support the workloads being deployed.

This plan of attack allows you to reduce the initial complexity and opens up your thinking to consider cloud native solutions. For example, in your initial dev/test deployment, do you really need a full deployment of Splunk, or only a small subset of its features? Could you deploy the CSP’s native logging/alerting capabilities, or use a SaaS solution in the short term to cover that requirement?
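To make the capability-first option concrete, here is a minimal sketch, assuming an AWS environment and Python with boto3; the log group name, metric namespace and alarm threshold are all illustrative. It stands up native logging and a simple error alarm for a dev/test workload, which may be enough to cover the requirement until a fuller tool is genuinely needed.

import boto3

# Minimal dev/test logging and alerting using CSP-native services (AWS shown).
logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "/devtest/orders-api"  # illustrative name

# Central log group with a short retention period suited to dev/test.
logs.create_log_group(logGroupName=LOG_GROUP)
logs.put_retention_policy(logGroupName=LOG_GROUP, retentionInDays=14)

# Turn application error lines into a metric, then alert on it.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="error-count",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "OrdersApiErrors",
        "metricNamespace": "DevTest",
        "metricValue": "1",
    }],
)
cloudwatch.put_metric_alarm(
    AlarmName="devtest-orders-api-errors",
    Namespace="DevTest",
    MetricName="OrdersApiErrors",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,                             # illustrative threshold
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)

A few dozen lines of native tooling like this can be replaced later if, and when, the workload genuinely outgrows it.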

2. Do Not Build What You Can Buy

Borrowing a lesson from the world of lean manufacturing, you should ask: how can we eliminate waste from processes that do not add value to our “product”? Translated to the world of cloud technology, this emphasis on utility means that unless the function is a core component of your business’s value proposition, it makes little sense to divert resources to build, implement and manage a solution that could easily be acquired from, or run by a third party. For example, in the on-premises world, almost every company builds, runs and maintains significant infrastructure to support corporate email. Yet while email is a critical business function, it is generally not part of the “secret sauce” that differentiates a company’s product.

In the cloud world, you have the option to use managed services to offload such functions, letting you focus on those “secret sauce” services and functions that are core to your business’s success. Whether it takes the form of managed cloud services such as database as a service (DBaaS), higher-level capabilities such as the artificial intelligence/machine learning (AI/ML) offerings, or third-party tooling, when done with proper consideration this offloading approach can greatly improve your agility and velocity without compromising your critical business functions or security posture.
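As a rough illustration of buying a capability rather than building it, the sketch below provisions a managed PostgreSQL instance on AWS with boto3. The identifier, sizing and retention values are illustrative; a handful of API parameters stand in for the patching, backup and failover machinery you would otherwise run yourself.

import os
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="orders-db",          # illustrative name
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=50,
    MasterUsername="app_admin",
    MasterUserPassword=os.environ["DB_MASTER_PASSWORD"],  # never hard-code credentials
    MultiAZ=True,                 # provider-managed failover
    StorageEncrypted=True,        # encryption at rest
    BackupRetentionPeriod=7,      # provider-managed backups
)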

3. Automate, Automate, Automate

One of the key steps to effectively leveraging cloud services is to step away from the console and build everything you can using automation. The end goal is to have the number of manual steps in the deployment of applications and infrastructure as close to zero as possible. While the primary benefits of taking this approach, such as consistency and repeatability, are obvious, the secondary gains, such as improved auditability, are just as important.

The process of automating your deployments can seem a daunting task given the realities of often hundreds of components, complex connectivity diagrams and tightly coupled dependencies. In the majority of cases, the best way to go is to start small and take an iterative approach, beginning with a collection of scripts/playbooks, building on them and eventually developing a full deployment pipeline.
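A reasonable first iteration is often nothing more than a small, rerunnable script. The sketch below (Python with boto3, targeting AWS; the bucket name, tag values and omitted region configuration are illustrative) provisions a single storage bucket with a baseline configuration, the kind of building block that can later be absorbed into a full deployment pipeline.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")  # region configuration omitted for brevity

def ensure_bucket(name: str, env: str) -> None:
    """Create the bucket if it does not exist, then apply baseline settings."""
    try:
        s3.head_bucket(Bucket=name)
    except ClientError:
        s3.create_bucket(Bucket=name)
    # Baseline guardrail: no public access, ever.
    s3.put_public_access_block(
        Bucket=name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    # Tag everything so later cost and compliance reporting has something to key on.
    s3.put_bucket_tagging(
        Bucket=name,
        Tagging={"TagSet": [{"Key": "environment", "Value": env}]},
    )

if __name__ == "__main__":
    ensure_bucket("example-app-artifacts-dev", env="dev")  # illustrative names

Because the script is safe to rerun, it already gives you the consistency, repeatability and audit trail described above, even before a pipeline exists.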

Embrace Immutability

The next logical step on your automation journey is to use immutable infrastructure. This concept entails deploying servers (or containers) that you never log into or modify once deployed. If there is a bug fix, patch or configuration change, you simply update your automation (image build scripts, application deployment code, etc.) and redeploy. Not only does immutable infrastructure enable you to achieve greater reliability and consistency, it also significantly reduces the attack surface, by removing the need to expose services such as RDP/SSH on instances. The destroy/deploy model also limits the total lifetime of any one instance and thereby reduces the effective dwell time of an attacker.
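As a rough illustration of the redeploy-rather-than-patch workflow on AWS (Python with boto3; the AMI ID, launch template and Auto Scaling group names are illustrative, and the group is assumed to track the template’s latest version): a change lands as a freshly built image, and instances are replaced rather than logged into.

import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

NEW_AMI = "ami-0123456789abcdef0"  # produced by your image build pipeline (illustrative)

# Publish a new launch template version pointing at the freshly built image...
ec2.create_launch_template_version(
    LaunchTemplateName="orders-api",
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": NEW_AMI},
)

# ...then roll the fleet by replacing instances instead of patching them in place.
autoscaling.start_instance_refresh(
    AutoScalingGroupName="orders-api-asg",
    Preferences={"MinHealthyPercentage": 90},
)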

Consider Composability

As you design your cloud estates, you should also consider the composability of your design. To borrow a rule from the programming world: “Don’t Repeat Yourself.” In other words, design deployments so that the process becomes more like building a Lego model, reusing building blocks such as infrastructure as code (IaC) modules, immutable infrastructure and managed services to achieve business goals. To make your modules “composable”, you need to ensure they are parameterised (i.e., they contain no hard-coded values). Composability allows the same code to be reused across a broad range of situations.
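Here is a toy sketch of what “parameterised” means in practice (plain Python; the service names and sizes are invented): the module hard-codes nothing, so the same code assembles a one-instance dev stack and a six-instance production stack.

from dataclasses import dataclass

@dataclass
class ServiceModule:
    """A reusable building block: everything is a parameter, nothing is hard-coded."""
    name: str
    environment: str
    instance_count: int
    instance_size: str

    def resource_names(self) -> dict:
        # Derive every resource name from the parameters rather than fixing them.
        prefix = f"{self.name}-{self.environment}"
        return {
            "load_balancer": f"{prefix}-lb",
            "auto_scaling_group": f"{prefix}-asg",
            "log_group": f"/{self.environment}/{self.name}",
        }

# The same module composes into very different estates.
dev = ServiceModule("orders-api", "dev", instance_count=1, instance_size="small")
prod = ServiceModule("orders-api", "prod", instance_count=6, instance_size="large")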

4. Take a Security First Approach

The major CSPs run some of the most secure data centres on the planet, and the majority of security issues that make the news stem from misconfigured services or poorly implemented guardrails on the part of consumers of public cloud. To ensure that your business does not end up as front-page news for the wrong reasons, it is imperative that security stops being a bolt-on option and becomes a core component of everything done in the cloud. This change is easily the largest cultural evolution that has to occur in an organisation to ensure a successful, and secure, cloud journey.

Implementing this principle involves “shifting left”: integrating security into the entire process, from initial design through to Site Reliability Engineering (SRE). Your security teams must learn more about infrastructure and coding practices, and, conversely, your infrastructure and development teams need to become knowledgeable about security practices. A good example of this cross-domain interaction is secrets management: the security team provides input on tool selection, works with the development team to implement secure coding practices for handling secrets, and works with the infrastructure team to deploy the tooling and application in a secure and scalable manner.
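The secrets example translates directly into code. Here is a minimal sketch, assuming AWS Secrets Manager and Python with boto3 (the secret name is illustrative): the application fetches credentials at runtime, the security and infrastructure teams own how the secret is stored, rotated and access-controlled, and nothing sensitive ever appears in source or configuration files.

import json
import boto3

secrets = boto3.client("secretsmanager")

def database_credentials(secret_id: str) -> dict:
    """Fetch credentials at runtime rather than embedding them in code or config."""
    response = secrets.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

creds = database_credentials("prod/orders-api/db")  # illustrative secret name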

Security Automation

Security automation is a fairly new concept in the world of security, but it is a necessary change if you are to manage highly variable environments without growing your security teams exponentially. Automating the remediation of common security issues allows near real-time responses and ensures a more consistent resolution of vulnerabilities. A significant side benefit of this automation is that your security teams are freed from mundane issues to work on more complex tasks such as threat hunting.
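A representative, and deliberately simple, remediation is sketched below in Python with boto3, assuming the common finding of SSH left open to the internet; the function name and the trigger wiring are illustrative. In practice it would be invoked automatically from a finding or configuration-change event rather than run by hand.

import boto3

ec2 = boto3.client("ec2")

def remediate_open_ssh(group_id: str) -> None:
    """Revoke any rule exposing SSH to the whole internet."""
    ec2.revoke_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )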

Continuous Compliance

As the usage of cloud services and the velocity of deployments increase, you need a way to effectively monitor your cloud estates for noncompliant configurations. Continuous compliance tools, including our Managed Cloud Controls and Continuous Compliance software, provide a near real-time view of the compliance of your cloud estate. When coupled with a solid security automation practice, these services become powerful tools for keeping your environment secure.
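The underlying idea can be illustrated in much-simplified form (this is not a stand-in for the tooling mentioned above) with a scheduled check, here in Python with boto3, that flags unencrypted storage volumes; the specific policy being checked is illustrative.

import boto3

ec2 = boto3.client("ec2")

def unencrypted_volumes() -> list:
    """Return the IDs of EBS volumes that are not encrypted at rest."""
    noncompliant = []
    paginator = ec2.get_paginator("describe_volumes")
    for page in paginator.paginate():
        for volume in page["Volumes"]:
            if not volume["Encrypted"]:
                noncompliant.append(volume["VolumeId"])
    return noncompliant

# Run on a schedule; feed the results to your security automation for remediation.
print(unencrypted_volumes())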

5. Seek Cost Transparency

One of the larger nontechnical challenges in the cloud journey is managing cost. The shift from CapEx to OpEx can be quite disruptive, and it is critical to ensure that each business unit/team can see its costs clearly. Meeting this requirement mandates a strong tagging design, and that visibility is often more valuable than the ability to forecast spend.
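Here is a small sketch of what that visibility looks like once tagging is in place, assuming AWS Cost Explorer via boto3 and a cost-allocation tag named "cost-centre"; the tag key and the dates are illustrative, and the tag must be applied consistently across the estate for the breakdown to mean anything.

import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

# Monthly spend broken down by the "cost-centre" tag.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "cost-centre"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(amount):,.2f}")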

6. Determine Your Multi-Cloud Appetite

Multi-cloud is possibly the topic customers ask us about most frequently. While it is a noble goal, it is important to remember that tackling a multi-cloud environment involves a significant increase in complexity and should only be approached once you have established a firm handle on managing a single cloud. A key challenge when developing a multi-cloud strategy is determining which applications should be hosted on which cloud, which requires analysing the dependencies and requirements of each application against the capabilities and benefits of each cloud provider.
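One lightweight way to make that analysis repeatable is a simple weighted scoring exercise. The sketch below is purely illustrative (the capabilities, weights and scores are invented) and is no substitute for a proper assessment, but it shows the shape of the decision.

# Hypothetical capability scores per provider (1 to 5); fill these in from your own assessment.
PROVIDER_SCORES = {
    "managed_postgres": {"aws": 5, "azure": 4, "gcp": 4},
    "ai_ml_services":   {"aws": 4, "azure": 4, "gcp": 5},
    "windows_estate":   {"aws": 3, "azure": 5, "gcp": 2},
}

def place(application: str, requirements: dict) -> str:
    """Pick the provider with the highest weighted score for this application's requirements."""
    totals = {"aws": 0, "azure": 0, "gcp": 0}
    for requirement, weight in requirements.items():
        for provider, score in PROVIDER_SCORES[requirement].items():
            totals[provider] += weight * score
    return max(totals, key=totals.get)

# Example: an application that leans heavily on a managed relational database.
print(place("orders-api", {"managed_postgres": 3, "ai_ml_services": 1}))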

Conclusion

While it may be impractical in the near term for most enterprises to follow 100 percent of these principles, they present tangible goals to strive for. Leveraging these principles as guiding lights in cloud design and deployment will not only lead to better architectural decisions and greater reliability, but also deliver the velocity your business is seeking.

This article was originally published in The Doppler, a Hewlett Packard Enterprise publication. Reprinted with permission.

