A tip sheet of steps to help implement a secure and reliable cloud environment.

May 10, 2019

How AWS Helps MSPs Deliver Well-Designed Clouds

By Taylor Gaffney

Today’s cloud infrastructure isn’t like yesterday’s or even last month’s. AWS releases new features and services every week. The speed at which you can set up and deploy a new environment in the cloud is mind-boggling: minutes, compared with the weeks once spent on hardware procurement and installation. Of course, you must know what you’re doing.

MSPs looking to move clients to the public cloud need a regular refresher on skills and knowledge to ensure cost and performance optimization as well as strong security. Partners play an important role in delivering visibility into the customer’s cloud infrastructure. IT professionals who’ve been running traditional, on-premises environments typically lack skills in the specific public cloud platforms, as well as in the tenets of hyperscale computing.

Creating a well-architected cloud environment in AWS means following best practices spanning security, cost, operations and performance. AWS offers guidance to get you started. Here’s how we break this down:

Security: More than 90% of the issues our customers face when moving to the cloud relate to the security pillar. Companies often aren’t aware of security vulnerabilities in the cloud until we do a thorough review. The challenge is that in most traditional environments, security teams focus primarily on protecting the edge of the network, using technologies such as firewalls, intrusion detection systems, data loss prevention and access control. In AWS, companies must secure the environment at every layer, including instances, subnets, load balancers, operating systems and applications.

The top best practices include:

  • Multifactor authentication: Ensure customers have MFA enabled on all local AWS accounts. Roughly 80% of the companies we work with don’t have this feature turned on, and it’s an easy, effective way to add a second layer of protection to system access (see the first sketch after this list).

  • Control access: Advise the customer to follow the principle of least privilege: give authorized AWS users the bare minimum of access privileges to start, increasing privileges only as the role requires (see the second sketch after this list).

  • Encryption: AWS supports encryption of data at rest and in transit. AWS Key Management Service (KMS) lets you define encryption keys, encrypt data and protect keys with AWS Identity and Access Management (IAM) policies (see the third sketch after this list).

  • Automation: A general principle is to minimize the amount of human touch on the AWS environment and instead take advantage of the many cloud services to automate configurations and workload management. This prevents errors and lowers the customer’s overall risk.

  • Visibility: Enable traceability and monitor alerts in real time, because of the constantly changing nature of on-demand infrastructure. AWS CloudTrail is a fundamental tool in this effort: it provides rich detail about API calls made in your AWS account, so you can see exactly what happened, where and when, if you are investigating issues (see the final sketch after this list). AWS Landing Zone is another: it lets you create a master account template that all new AWS accounts follow during setup. Not only is this more secure, because it automates a standardized configuration matching the customer’s requirements, but it dramatically reduces the time needed to create new accounts. Amazon GuardDuty, a managed threat detection service; AWS Config, which records configuration history; and Amazon CloudWatch are other valuable monitoring and management services to consider. There are also plenty of sophisticated AWS partner solutions, such as OpsRamp, which you can use for similar purposes.
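
To make the MFA point concrete, here is a minimal audit sketch in Python with boto3. It assumes credentials permitted to call iam:ListUsers and iam:ListMFADevices; the report format is ours, chosen only for illustration.

    import boto3

    iam = boto3.client("iam")

    # Walk every IAM user and flag those with no MFA device attached.
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
            if not devices:
                print(f"No MFA enabled for {user['UserName']}")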
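
For least-privilege access, a sketch like the following attaches a narrow inline policy to a single user. The user name, policy name and bucket are hypothetical; a real engagement would tailor the actions and resources to the role.

    import boto3
    import json

    iam = boto3.client("iam")

    # Hypothetical least-privilege policy: read-only access to one S3 bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",
                "arn:aws:s3:::example-reports/*",
            ],
        }],
    }

    iam.put_user_policy(
        UserName="analyst",
        PolicyName="reports-read-only",
        PolicyDocument=json.dumps(policy),
    )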
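
Encrypting with KMS comes down to a couple of API calls. This sketch assumes a customer-managed key behind the hypothetical alias alias/customer-data; IAM policy on that key governs who may call Encrypt and Decrypt.

    import boto3

    kms = boto3.client("kms")

    # Encrypt a small payload under the key behind the alias.
    ciphertext = kms.encrypt(
        KeyId="alias/customer-data",
        Plaintext=b"sensitive customer record",
    )["CiphertextBlob"]

    # Decrypting requires kms:Decrypt on the same key; the key ID is
    # embedded in the ciphertext, so it doesn't need to be passed again.
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]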
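
Finally, for traceability, CloudTrail’s event history can be queried directly. In this sketch, the investigation scenario (who terminated instances recently?) is ours, chosen to show the shape of the call.

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Look up recent instance terminations recorded by CloudTrail.
    events = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}
        ],
        MaxResults=50,
    )
    for event in events["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])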

Cost: A common issue with cost containment is that customers select compute resources that are too large for their needs. One customer we worked with had set up its own cloud environment using a couple of large instances. We spread the workload across many smaller instances and across different availability zones. This not only saved the company more than 20% on monthly consumption bills, but it also lowered its risk in the event of outages. Here are some tips:

  • Purchase only what you will use: Most companies have general ideas about demand and downtime for each application, but making assumptions isn’t accurate enough. The MSP will need to load-test the customer environment to get a better read on requirements. Using AWS Auto Scaling is another smart step; the customer can set performance thresholds so that when demand changes, cloud resources automatically scale up or down accordingly (see the first sketch after this list). This way, customers pay only for the resources they use and nothing more. Reserved Instances are another strategy for cost savings, allowing the customer to purchase one or three years of infrastructure ahead of time at a discounted price. However, with Reserved Instances you’re locked in, which isn’t great if you wind up needing significantly less infrastructure than you purchased. As a result, Spot Instances, which offer spare capacity at steep discounts for interruptible workloads, are a more popular choice.

  • Monitoring: AWS Cost Explorer provides usage and cost trend data to help customers optimize spending and predict future costs (see the second sketch after this list). Most customers use CloudWatch for overall infrastructure monitoring, but it’s also helpful for managing resource utilization. CloudCheckr is a third-party tool that can discover cloud resources that are hidden or underutilized. That’s critical in these times of shadow IT, which can account for 10% to 20% of cloud spending. Without central IT governance and management of the cloud environment, companies are certain to overspend and to introduce unneeded security risks to the business.
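
Here is a minimal sketch of the Auto Scaling idea, assuming an existing Auto Scaling group we’ve called web-asg. The target-tracking policy holds average CPU near 50%, adding or removing instances as demand moves.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Target-tracking policy: keep the group's average CPU around 50%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="cpu-target-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )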
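
Cost Explorer data is also available programmatically. This sketch, with a date window we picked arbitrarily, pulls a month of daily unblended cost grouped by service, a reasonable starting point for spotting runaway spend.

    import boto3

    ce = boto3.client("ce")  # the Cost Explorer API

    # One month of daily cost, grouped by AWS service.
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": "2019-04-01", "End": "2019-05-01"},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for day in response["ResultsByTime"]:
        for group in day["Groups"]:
            service = group["Keys"][0]
            amount = group["Metrics"]["UnblendedCost"]["Amount"]
            print(day["TimePeriod"]["Start"], service, amount)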

Operations: Monitoring and visibility tools, as mentioned above, are taking a lot of the grunt work out of keeping cloud-based systems up and running according to SLAs. Increasingly, IT managers won’t have to learn all of these technologies to maintain a high-performing environment; more of this backend work is being delivered efficiently by AWS itself.

  • Operations as Code allows everything to be done in software. For IT operations this is a game changer, especially for automating procedures, which reduces the risk of human error. A key AWS service for Operations as Code is CloudFormation. AWS CloudFormation provisions your resources in a safe, repeatable manner, allowing you to build and rebuild your infrastructure and applications without performing manual actions or writing custom scripts. CloudFormation determines the right operations to perform when managing your stack and rolls back changes automatically if errors are detected (see the first sketch after this list).

  • Frequent, small, reversible changes: Changes to your environment traditionally came in bulky releases, which introduce a lot of change at once and can negatively affect dependent systems and components. AWS recommends designing systems from small, focused components that are highly resistant to failure and compose into a single holistic system. This lets organizations roll out changes quickly and reverse them just as quickly if needed (see the second sketch after this list).
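
As a minimal Operations as Code sketch, the following creates a one-resource CloudFormation stack from Python. The stack name and the single S3 bucket are ours, chosen only to keep the example small.

    import boto3
    import json

    cfn = boto3.client("cloudformation")

    # A deliberately tiny template: one S3 bucket, managed as a stack.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "LogBucket": {"Type": "AWS::S3::Bucket"}
        },
    }

    cfn.create_stack(
        StackName="ops-as-code-demo",
        TemplateBody=json.dumps(template),
        OnFailure="ROLLBACK",  # undo everything if creation fails
    )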
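
Small, reversible changes map naturally onto CloudFormation change sets. Continuing the hypothetical stack above, this sketch previews a single-property change (enabling bucket versioning) so it can be reviewed before it is executed, or simply discarded.

    import boto3
    import json

    cfn = boto3.client("cloudformation")

    # The same template with one small change: versioning on the bucket.
    updated_template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "LogBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "VersioningConfiguration": {"Status": "Enabled"}
                },
            }
        },
    }

    # Create a change set to preview the difference; nothing is applied yet.
    cfn.create_change_set(
        StackName="ops-as-code-demo",
        ChangeSetName="enable-versioning",
        TemplateBody=json.dumps(updated_template),
    )
    # After review: execute_change_set() to apply, delete_change_set() to discard.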

Performance: As with operations, AWS is taking over more of the backend work here, so IT managers increasingly won’t have to learn all of these technologies to maintain a high-performing environment.

  • Serverless computing plays into this future nicely, and AWS’s offering here is Lambda. The gift of serverless architecture is that IT no longer needs to configure, run and maintain servers; AWS does that heavy lifting for you, more efficiently, accurately and cost-effectively. Customers may see lower latency with serverless infrastructure and, often, lower costs, while in-house resources focus on development rather than the infrastructure behind it (see the sketch after this list).

  • Go global in minutes: The key word here is minutes. Instead of building data centers in different locations, whether nationally or internationally, take advantage of AWS Regions and edge locations to deliver services to your customers at lower latency and higher performance.
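
For a sense of how little infrastructure a serverless function carries, here is a minimal hypothetical handler for Lambda’s Python runtime. There is no server, operating system or scaling group to manage; AWS runs and scales the function on demand.

    # handler.py: a hypothetical AWS Lambda function (Python runtime).
    def lambda_handler(event, context):
        # "event" carries the request payload; "context" carries runtime metadata.
        name = event.get("name", "world")
        return {"statusCode": 200, "body": f"Hello, {name}!"}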

The AWS Well-Architected Framework provides architectural best practices across the five pillars for designing and operating reliable, secure, efficient and cost-effective systems in the cloud. The framework provides a set of questions that allows you to review an existing or proposed architecture. It also provides a set of AWS best practices for each pillar.

Using the framework in your architecture helps you produce stable and efficient systems, which allows you to focus on functional requirements.

Taylor Gaffney is a results-oriented cloud solutions architect at NetEnrich who identifies and designs effective customer solutions to meet current and future needs. Follow him on LinkedIn and @NetEnrich.
