AutoScaling the Cloud


Managing Resources on a Server is Like a Game of Tetris

It used to be simple: SysAdmins would manage resources on a server like a game of Tetris, closely monitoring services and making sure resource utilization stayed in balance with the hardware.

Optimizations were mostly done at the application layer. Before we knew it, one server wasn't enough, and any downtime was a bad time for a variety of good reasons. High availability became a requirement, so engineers made the cloud happen.

The cloud, within reason, is simply a clustered pool of technological resources and infrastructure, built to support modern-day operational requirements.

Emerging technologies quickly came to rely heavily on that infrastructure, and it became the standard go-to for starting anything legitimate on the net. It remained a similar game of efficiency; however, the blocks and controls of this particular game became far more complex and expensive.

When a mistake has an associated cost, it is normal in the industry to remove error-prone factors from the equation. In most cases that factor is the operator. We focused on automation systems that took the human out of the loop and followed clear logic, thorough review, monitoring, and optimization. For resource management of our applications on the cloud, AutoScaling was designed to automate that process efficiently at the infrastructure level.

These days, a DevOps or SysOps engineer in the making is going to have to learn a lot about the AutoScaling mechanics of "the Cloud". Companies that need an internet presence and the reliability of such technologies have to quickly adapt to the business models of various cloud providers.

Big public Cloud providers, like AWS and GCP, focused their product offerings on extracting revenue efficiently from compute-cycle billing, bandwidth tracking, per-gigabyte storage, and many other associated service systems. Our new roles now have a direct association with operational cost, and thus we have to play this game responsibly.


Reasons to Automate and Scale Your Infrastructure

There are many reasons to automate the scaling of a service or infrastructure. Paying for something you don't actively use is a luxury and often a waste of money. The all-time joke of the gym membership you rarely use holds very little value when it comes to business cost management. On the other side, what if funding or the success of the company brings you into a phase of rapid development and operational growth? How do you expand quickly? Time is money, and in many cases paying to speed up the process is worthwhile to meet time-to-market goals and deliver results by a desired date.

Cost management is going to be the number one decision driver for nearly any business. Paying for something you don't use is wasteful, plain and simple. AutoScaling of Cloud resources focuses on maintaining a logical minimum of the desired resource for the task at hand, usually set by an operator. This value is decided by a variety of factors such as forecasted activity, promotional periods, acquisition requirements, and so on.

By using metrics and reactive monitoring techniques, the resource pool automatically expands with incoming demand, up to a set maximum limit. These hard limits are also set by the operator and are usually based on the overall resource pool availability requirement or a cost barrier.
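The decision logic behind this kind of reactive scaling can be sketched in a few lines. This is a minimal illustration, not any provider's actual implementation; the metric source and instance counts are hypothetical stand-ins for whatever your monitoring stack and cloud API provide.

```python
# A sketch of reactive scaling between an operator-set minimum and maximum.
MIN_INSTANCES = 2     # logical minimum set by the operator
MAX_INSTANCES = 10    # hard limit: availability requirement or cost barrier
SCALE_UP_AT = 80.0    # average CPU % that triggers expansion
SCALE_DOWN_AT = 20.0  # average CPU % below which allocation is wasted

def desired_count(current: int, avg_cpu: float) -> int:
    """Decide the next pool size from the current size and a pool-wide metric."""
    if avg_cpu > SCALE_UP_AT:
        current += 1      # demand is up: expand the pool
    elif avg_cpu < SCALE_DOWN_AT:
        current -= 1      # demand is down: remove the wasted allocation
    # Clamp the result to the operator-defined bounds.
    return max(MIN_INSTANCES, min(MAX_INSTANCES, current))
```

In practice a cooldown period is usually added between adjustments so the pool doesn't oscillate, but the core of the mechanism is exactly this: a threshold check followed by a bounded adjustment.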

When you need it, you have it, up to a certain extent. And when you don’t? Remove the wasted allocation. Efficient and effective.

The dream of any start-up is to create something that has a profound impact on the market, causing demand to go through the roof. Unfortunately, when demand isn't met with supply, a portion of the revenue is lost and frustration ensues. One of the best reasons to scale up is rapid demand for a successful offering.

It is much easier to scale with more of the same infrastructure than it is to pivot to a whole new system. With growth, you will need more resources regardless of your system design. This is one of the reasons the cloud fits this growth model so well. If you need more resources, you can quickly add them to your existing infrastructure. Not only can you expand with the same type of resource units your current operations require, but you can also add new components to the cloud and scale those on-demand as an entirely new revenue stream and offering.

Certain technologies like Kubernetes, Docker Swarm, and Ceph, and even general DevOps principles like Continuous Integration (CI) and Continuous Deployment (CD), were built with AutoScaling in mind. The product had to fit the infrastructure cost design and support time-to-market goals, so the infrastructure now has even more reasons to support AutoScaling.

You can put nearly any new piece of tech in the search bar with the word “autoscale” and come up with a decent result on how to get that working on a cloud. Certain pieces of technology have this as a requirement in order to run effectively or support high availability and we wouldn’t be surprised if this becomes the de facto standard going forward. It’s just that efficient.

AutoScaling your Cloud

There are different ways of scaling your cloud. The generic horizontal and vertical scaling descriptors refer to either scaling out with more of the same units you already use or scaling up to more powerful resource types. Both are effective but mainly focus on the virtual barriers of your Cloud segment.

Tools like OpenStack's Heat and HashiCorp's Terraform help manage this space in an automated, infrastructure-as-code fashion. While Heat focuses mainly on automating OpenStack resource deployment, Terraform supports many of the major public Cloud providers and technologies, including OpenStack, through its various cloud drivers.

It's worth noting that there are many alternatives to Terraform, such as SaltStack, Ansible, Spinnaker, and others, so do your own research before committing. Reaction-based monitoring built on collected metrics, with tools like Prometheus and Grafana or a service provider like Datadog, is also becoming popular.

In the context of automation, the principle of reaction-based monitoring is for a conditional threshold to trigger a reaction against the Cloud, via one of the aforementioned technologies, that scales your operation up or down. One of the tasks ahead in the Cloud space is automating multi-regional and hardware-based AutoScaling for enterprise Cloud users.

InMotion Hosting’s Flex Metal Cloud

Our focus with Flex Metal Cloud was to make the Cloud, and the competitive technologies that come with it, more available to all. Making a fully private piece of tech, a private Cloud, available to anyone also addresses many security concerns.

Utilizing API-driven, open-source technology like OpenStack was an easy choice for our engineers. With such diverse community contributions and areas of focus, OpenStack's project-based specialization in the latest cloud infrastructure technologies gives anyone a competitive advantage.

One of these awesome projects is Heat, the main project in the OpenStack Orchestration program. Heat automates and orchestrates various cloud resources for an OpenStack Cloud using a template-based format, allowing our SysOps and DevOps kin to apply their Tetris automation skills as efficiently as possible.
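As a rough illustration of that template-based format, a Heat AutoScaling setup can be sketched as below. This is a minimal, illustrative fragment, not a production template; the flavor and image names are assumptions, and a real deployment would also attach networks and an alarm to drive the policy.

```yaml
heat_template_version: 2018-08-31

resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2        # logical minimum set by the operator
      max_size: 10       # hard cost/availability limit
      resource:
        type: OS::Nova::Server
        properties:
          flavor: m1.small       # assumed flavor name
          image: ubuntu-20.04    # assumed image name

  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: asg }
      scaling_adjustment: 1      # add one server per trigger
      cooldown: 60               # seconds between adjustments
```

The scaling policy exposes an alarm URL that a monitoring alarm can signal, which is how the reaction-based monitoring described earlier plugs into the orchestration layer.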

Looking for hosted private cloud, infrastructure as a service, or bare metal solutions? A Flex Metal Cloud is a powerful private cloud solution that gives you the security and performance you need to successfully run your business. Learn more about the hosted private cloud inside of Flex Metal.

