How to Make Workloads in Cloud Faster: 7 Strategies

Just putting workloads in the cloud doesn't automatically make them faster; it might even make them slower.

Christopher Tozzi, Technology analyst

January 8, 2021


The cloud offers a variety of benefits, including greater scalability, more flexibility and potentially lower costs. One advantage the cloud doesn't necessarily provide, however, is speed. Moving workloads to the cloud won't automatically make them faster. In fact, it could make them slower, due to issues such as network bottlenecks. That's why it's important to assess the speed of workloads in the cloud and take steps to make them faster if necessary.

This article walks through strategies for improving the speed of workloads in the cloud. Some of these tips involve changes you can make at the workload level, while others require having certain architectural configurations in place.

1. Don't Undersize Your Instances

A basic best practice for improving cloud speed is to choose the right virtual machine instance types when deploying workloads in the cloud (assuming, that is, that your workloads are hosted on virtual machines). You want to make sure your workloads have enough CPU and memory allocated to them, while avoiding provisioning so many resources that you end up paying for capacity you never use.

There are a variety of ways to strike the right balance. Following the basics of capacity planning is a start. Some application performance monitoring (APM) tools can also recommend which instance types to choose to optimize performance without breaking the bank. You should also make use of your cloud provider's autoscaling policies, which let you scale up resource allocation automatically when your workloads need more resources to achieve the desired level of speed.
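To make the sizing logic concrete, here is a minimal right-sizing sketch: given observed peak CPU and memory usage, it picks the cheapest instance type that covers demand plus a safety headroom. The instance names, capacities, and prices are illustrative assumptions, not any provider's real catalog.

```python
# Hypothetical instance catalog: (name, vCPUs, memory GiB, hourly USD).
# Values are illustrative only, not real provider pricing.
INSTANCE_TYPES = [
    ("small",   2,  4, 0.05),
    ("medium",  4,  8, 0.10),
    ("large",   8, 16, 0.20),
    ("xlarge", 16, 32, 0.40),
]

def right_size(peak_vcpus, peak_mem_gib, headroom=0.25):
    """Return the cheapest instance type whose capacity covers observed
    peak usage plus a safety headroom (default 25%)."""
    need_cpu = peak_vcpus * (1 + headroom)
    need_mem = peak_mem_gib * (1 + headroom)
    for name, vcpus, mem, _cost in sorted(INSTANCE_TYPES, key=lambda t: t[3]):
        if vcpus >= need_cpu and mem >= need_mem:
            return name
    return None  # peak demand exceeds the largest type; scale out instead

print(right_size(3.1, 6.0))  # needs ~3.9 vCPU / 7.5 GiB -> "medium"
```

In practice an APM tool or the provider's rightsizing recommendations would supply the peak-usage numbers; the point is that the decision should be driven by measured demand plus headroom, not guesswork.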

2. Use Load Balancers

Another basic step toward improving cloud speed is to use a load balancer to distribute network traffic efficiently across different instances of your applications. You can do this using cloud providers' native load-balancing services. This is the easiest approach, but it gives you less control. You can also use a third-party tool like NGINX, which will require more setup, but offers greater control.
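Under the hood, the simplest distribution policy a load balancer applies is round-robin rotation across backends. The sketch below shows that core idea with hypothetical backend hostnames; real balancers (cloud-native services or NGINX) add health checks, weighting, and connection draining on top.

```python
import itertools

# Hypothetical backend hosts behind the balancer.
backends = ["app-1.internal", "app-2.internal", "app-3.internal"]
rotation = itertools.cycle(backends)

def pick_backend():
    """Return the next backend in round-robin order for an incoming request."""
    return next(rotation)

print([pick_backend() for _ in range(4)])
# -> ['app-1.internal', 'app-2.internal', 'app-3.internal', 'app-1.internal']
```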

3. Enable Anti-DDoS

In the event that an attacker actively tries to slow down or disrupt your workloads in the cloud via a DDoS attack, having an anti-DDoS service at the ready is invaluable. Anti-DDoS tools block malicious traffic to prevent it from overwhelming your applications.

Like load balancers, anti-DDoS solutions are available directly from cloud providers (in the form of services like AWS Shield and Google Cloud Armor), as well as from third-party vendors (like Cloudflare and Fastly).
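One basic mechanism behind these services is per-source rate limiting: clients that exceed a sane request rate get throttled before they can saturate the application. Here is a minimal token-bucket sketch of that idea; the rates and burst sizes are illustrative assumptions, and production anti-DDoS systems combine many more signals than raw request rate.

```python
import time

class TokenBucket:
    """Per-client token bucket: allow requests while tokens remain,
    refilling at a fixed rate, so sustained floods get rejected."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # drop or challenge this request

bucket = TokenBucket(rate=5, capacity=10)  # 5 req/s steady, bursts up to 10
results = [bucket.allow() for _ in range(12)]
print(results.count(True))  # the burst is absorbed, the overflow rejected
```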

4. Be Smart about Data Architectures

Network connectivity is typically the weakest link in any cloud architecture. The more data you have to move between one cloud and another, or between an on-premises data center and the cloud, the slower your cloud will be.

You can mitigate this problem by choosing a cloud architecture wherein data lives as close as possible (in a network-topology sense) to the applications that create or ingest it. Avoid scenarios where you have applications running in one cloud and data that lives in another.

If you can't avoid distance between your applications and their data, you can sometimes take advantage of dedicated network services like AWS Direct Connect and Azure ExpressRoute. They're not available for every cloud region and private data center, but where they are, they can dramatically improve the speed of data transfers.
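A quick back-of-the-envelope calculation shows why link capacity dominates these decisions. The sketch below estimates bulk transfer time from payload size and bandwidth; it deliberately ignores protocol overhead, congestion, and parallelism, so treat the numbers as rough lower bounds.

```python
def transfer_time_seconds(size_gb, bandwidth_gbps, rtt_ms=0):
    """Rough transfer-time estimate: payload (in gigabits) divided by link
    bandwidth, plus one round trip. Ignores protocol overhead and congestion."""
    return (size_gb * 8) / bandwidth_gbps + rtt_ms / 1000

# Moving 500 GB over a 1 Gbps internet path vs. a 10 Gbps dedicated link:
print(round(transfer_time_seconds(500, 1) / 3600, 1))   # ~1.1 hours
print(round(transfer_time_seconds(500, 10) / 3600, 1))  # ~0.1 hours
```

The same arithmetic explains the advice above: colocating data with the applications that use it removes the transfer entirely, which beats any link upgrade.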

5. Use the Cloud Edge Wisely

Edge computing architectures may be over-hyped, in that they are not the be-all, end-all of performance optimization for every type of cloud workload. For certain architectures and configurations, however, edge architectures, which place data and applications geographically closer to end users, can boost cloud speed.

Edge architectures are particularly valuable if you have concentrations of users in locations that are geographically distant from your main cloud data center. They are also useful for workloads that require very low latency, which is increasingly the case for IoT applications.

There are also good, old-fashioned content delivery networks (CDNs), which not only allow you to place data and applications closer to end users, but also provide caching functionality that can improve the speed and efficiency of cloud workloads. Whether you believe CDNs (which have existed since long before the dawn of the modern cloud computing era) are a form of edge computing or another solution entirely, they remain an important strategy for improving cloud speed.
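The caching behavior a CDN edge node provides can be reduced to a simple pattern: serve a stored copy while it is fresh, and go back to the origin only when the copy expires. Here is a minimal time-to-live cache illustrating that pattern; the TTL value and origin function are stand-ins for the real origin server and cache-control policy.

```python
import time

class TTLCache:
    """Tiny time-to-live cache, the core idea behind a CDN edge node:
    serve a stored copy while fresh, re-fetch from origin when stale."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key, fetch_from_origin):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry and entry[1] > now:
            return entry[0]             # cache hit: no trip to the origin
        value = fetch_from_origin(key)  # cache miss or stale: fetch and store
        self.store[key] = (value, now + self.ttl)
        return value

origin_calls = []
def origin(key):
    """Stand-in for the origin server."""
    origin_calls.append(key)
    return f"content-for-{key}"

cache = TTLCache(ttl_seconds=60)
cache.get("/index.html", origin)
cache.get("/index.html", origin)  # second request served from cache
print(len(origin_calls))  # -> 1
```

Every request served from the edge cache skips the long network path back to the origin, which is exactly the bottleneck the previous sections warn about.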

6. Avoid Virtualization

Originally, virtualization was part and parcel of cloud infrastructure. If you wanted to deploy workloads in the cloud, you hosted them in a virtual machine.

Today, however, most cloud providers offer bare-metal instances, which allow you to run workloads without relying on virtualization. This approach may improve cloud speed because you don't have to waste resources on a hypervisor and hardware-abstraction services. In certain cases, you can also take advantage of special hardware-acceleration features, such as crunching numbers on GPUs instead of CPUs.

This is not to say that you should avoid virtualization at all costs. For most cloud workloads, the speed benefits of running on bare metal are not worth the added cost. But for workloads that can benefit in particular ways from direct access to hardware, removing the virtualization layer may lead to dramatic speed improvements.

7. Consider Hybrid Cloud

Not all cloud workloads have to be hosted solely in the public cloud--and, in some cases, the public cloud alone will not deliver the best speed. You may be better served by a hybrid architecture that allows you to use public cloud services without relying exclusively on public cloud infrastructure.

For example, you may want to use serverless functions that are hosted in the public cloud in order to run CPU-intensive workloads on-demand, in a cost-efficient way. But you may want to keep the rest of your applications on-premises, where you won't have to worry about network bottlenecks and will be able to make full use of the hardware resources you already own to power your workloads. In that case, a hybrid architecture would allow you to achieve the results you want.
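The hybrid split described above amounts to a routing decision per job. The sketch below makes that decision explicit with a hypothetical CPU-cost threshold: bursty, compute-heavy jobs go to public-cloud serverless functions, while steady work stays on owned hardware. The cutoff value and target names are assumptions for illustration.

```python
# Illustrative cutoff: jobs costlier than this (in estimated vCPU-seconds)
# are worth bursting to pay-per-use serverless capacity.
SERVERLESS_CPU_THRESHOLD = 8

def route(job):
    """Return the execution target for a job dict carrying an estimated
    'cpu_seconds' cost. Threshold and target names are hypothetical."""
    if job["cpu_seconds"] >= SERVERLESS_CPU_THRESHOLD:
        return "public-cloud-serverless"
    return "on-premises"

print(route({"name": "video-transcode", "cpu_seconds": 120}))   # serverless
print(route({"name": "crud-api-request", "cpu_seconds": 0.2}))  # on-premises
```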

Conclusion

Simply moving workloads to the cloud won't automagically improve their speed; in fact, it may do the opposite. But by being wise about the way you design your cloud architectures, as well as taking advantage of tools that help optimize cloud speed and availability, you can deploy applications on the cloud that are at least as fast as those hosted on-premises.

About the Author

Christopher Tozzi

Technology analyst, Fixate.IO

Christopher Tozzi is a technology analyst with subject matter expertise in cloud computing, application development, open source software, virtualization, containers and more. He also lectures at a major university in the Albany, New York, area. His book, “For Fun and Profit: A History of the Free and Open Source Software Revolution,” was published by MIT Press.
