Your company Beta has some users, hosts, and data that require a high level of security protection, while other users, hosts, and data need only to be protected from outside access. How will you design your system to accommodate both types of users, hosts, and data? Describe how you will design the cloud.
Think Adaptive and Elastic
The AWS cloud architecture should support growth in users, traffic, or data size with no drop in performance. It should also scale linearly when and where additional resources are added, so that the system can adapt and proportionally serve the additional load. Whether the AWS cloud architecture uses vertical scaling, horizontal scaling, or both is up to the designer and depends on the type of application or data to be stored, but the design should be equipped to take maximum advantage of the virtually unlimited on-demand capacity of cloud computing.
Consider whether your AWS cloud architecture is being built for a short-term purpose, in which case you can implement vertical scaling. Otherwise, you will need to distribute your workload across multiple resources and scale horizontally to build internet-scale applications. Either way, your AWS cloud architecture should be elastic enough to adapt to the demands of cloud computing.
Knowing when to use stateless applications, stateful applications, stateless components, and distributed processing also makes your cloud far more effective in how it stores and serves data.
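As a rough illustration of horizontal scaling, the sketch below (Python with boto3) creates an Auto Scaling group for a hypothetical web tier and attaches a target-tracking policy; the group name, launch template, and subnet IDs are placeholders, not values taken from this scenario.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create an Auto Scaling group so the web tier grows and shrinks horizontally
# with demand instead of relying on a single, ever-larger instance.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="beta-web-asg",                       # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "beta-web-template",  # hypothetical template
                    "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",        # placeholder subnets
)

# Target-tracking policy: add or remove instances to keep average CPU near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="beta-web-asg",
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```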
Treat servers as disposable resources
One of the biggest advantages of cloud computing is that you can treat your servers as disposable resources instead of fixed components. However, resources should always be consistent and tested. One way to enable this is to implement the immutable infrastructure pattern, which enables you to replace the server with one that has the latest configuration instead of updating the old server.
It is important to keep configuration and coding an automated and repeatable process, whether you are deploying resources to new environments or increasing the capacity of the existing system to cope with extra load. Bootstrapping, golden images, or a hybrid of the two will help you keep the process automated and repeatable without human error.
Bootstrapping can be executed after launching an AWS resource with default configuration. This will let you reuse the same scripts without modifications.
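For example, a bootstrapping action can be expressed as a user-data script passed at launch time. The sketch below, assuming Python with boto3 and placeholder AMI and instance values, starts an instance from a stock image and lets it configure itself on first boot.

```python
import boto3

ec2 = boto3.client("ec2")

# User-data script run on first boot: the instance starts from a default AMI
# and configures itself, so the same script is reusable across environments.
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: a stock Amazon Linux AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,                # boto3 base64-encodes this for the API
)
```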
By comparison, the golden image approach results in faster start times and removes dependencies on configuration services or third-party repositories. Certain AWS resource types, such as Amazon EC2 instances, Amazon RDS DB instances, and Amazon Elastic Block Store (Amazon EBS) volumes, can be launched from a golden image.
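A golden image is captured from an instance that has already been configured; the following sketch (hypothetical instance ID and AMI name) creates an AMI and launches a new instance from it.

```python
import boto3

ec2 = boto3.client("ec2")

# Capture a fully configured instance as a golden image (AMI). New instances
# launched from it start faster because nothing is installed at boot time.
image = ec2.create_image(
    InstanceId="i-0abc1234def567890",        # placeholder: an already-configured instance
    Name="beta-web-golden-2024-06-01",       # hypothetical image name
)

# Wait until the new AMI is available before launching from it.
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
```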
When suitable, use a combination of the two approaches, where some parts of the configuration are captured in a golden image while others are configured dynamically through a bootstrapping action.
You are not limited to the individual resource level: you can apply techniques, practices, and tools from software development to make your whole infrastructure reusable, maintainable, extensible, and testable.
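One common way to treat the whole infrastructure as code is AWS CloudFormation. The sketch below, assuming Python with boto3 and a hypothetical bucket and stack name, deploys a minimal template that could live in version control and be reused across environments.

```python
import json
import boto3

cloudformation = boto3.client("cloudformation")

# A minimal template kept under version control; the same definition can be
# deployed, tested, and torn down repeatedly in different environments.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "BetaLogsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "beta-app-logs-example"},  # placeholder name
        }
    },
}

cloudformation.create_stack(
    StackName="beta-logging-stack",          # hypothetical stack name
    TemplateBody=json.dumps(template),
)
```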
Automate Automate Automate
Unlike traditional IT infrastructure, the cloud lets you automate a wide range of events, improving both your system’s stability and the efficiency of your organization. Some of the AWS services you can use for automation are AWS Elastic Beanstalk, Amazon EC2 auto recovery, Auto Scaling, Amazon CloudWatch alarms and events, AWS OpsWorks lifecycle events, and AWS Lambda scheduled events.
As an architect for the AWS Cloud, these automation resources are a great advantage to work with.
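As one illustration, the sketch below (Python with boto3; the instance ID and region are placeholders) creates an Amazon CloudWatch alarm whose action automatically recovers an EC2 instance when its system status check fails, with no human intervention.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Automated recovery: if the system status check fails for two consecutive
# minutes, EC2 recovers the instance onto healthy hardware automatically.
cloudwatch.put_metric_alarm(
    AlarmName="beta-web-auto-recover",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0abc1234def567890"}],  # placeholder
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```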
Implement loose coupling
IT systems should ideally be designed in a way that reduces interdependencies. Your components need to be loosely coupled so that a change or failure in one component does not affect the others.
Your infrastructure also needs well-defined interfaces that allow the various components to interact with each other only through specific, technology-agnostic contracts. It should be possible to modify any underlying operation without affecting other components.
In addition, implementing service discovery lets smaller services be consumed without prior knowledge of their network topology details, reinforcing loose coupling. This way, new resources can be launched or terminated at any point in time.
Loose coupling between services can also be done through asynchronous integration. It involves one component that generates events and another that consumes them. The two components do not integrate through direct point-to-point interaction, but usually through an intermediate durable storage layer. This approach decouples the two components and introduces additional resiliency. So, for example, if a process that is reading messages from the queue fails, messages can still be added to the queue to be processed when the system recovers.
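A minimal sketch of such asynchronous integration with Amazon SQS is shown below (Python with boto3; the queue name and message body are made up). The producer only writes to the queue, and the consumer reads from it whenever it is healthy.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="beta-orders")["QueueUrl"]  # hypothetical queue

# Producer: emits an event and moves on; it never calls the consumer directly.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order_id": 42, "action": "process"}',
)

# Consumer: reads from the durable queue. If it crashes, messages simply
# accumulate in the queue and are processed once the consumer recovers.
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for message in response.get("Messages", []):
    print("processing:", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```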
Lastly, building applications so that they handle component failure gracefully helps you reduce the impact on end users and increases your ability to keep making progress even while some components are offline.
Focus on services, not servers
A wide variety of underlying technology components are required to develop, manage, and operate applications. Your AWS cloud architecture should leverage a broad set of compute, storage, database, analytics, application, and deployment services. On AWS, there are two ways to do that. The first is through managed services, which include databases, machine learning, analytics, queuing, search, email, notifications, and more. For example, with Amazon Simple Queue Service (Amazon SQS) you can offload the administrative burden of operating and scaling a highly available messaging cluster while paying a low price for only what you use. In addition, Amazon SQS is inherently scalable.
The second way is to reduce the operational complexity of running applications through serverless architectures. It is possible to build both event-driven and synchronous services for mobile, web, analytics, and the Internet of Things (IoT) without managing any server infrastructure.
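For instance, a serverless, event-driven service can be as small as a single AWS Lambda handler. The sketch below is a hypothetical Python handler for an IoT-style event; the `device_id` field is made up for illustration.

```python
import json

# A minimal AWS Lambda handler: the function responds to events (for example
# an API Gateway request or an IoT notification) and AWS runs it on demand,
# so there are no servers to provision, patch, or scale.
def handler(event, context):
    device_id = event.get("device_id", "unknown")   # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"reading from {device_id} recorded"}),
    }
```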