Before the start of 2020 and the global spread of COVID-19, the cloud was already a fixture in our lives. With its mix of features and options, the cloud offers large organizations a level of convenience and flexibility few were previously accustomed to. As providers shift more infrastructure and software tools to cloud-based monthly subscription models, however, companies have been forced to change the way they budget their IT expenditures. Many have struggled to make this transition. With the wide-reaching effects of Coronavirus and the move to remote working environments, this problem has only grown. As companies fight to keep up, it becomes harder and harder to track and accurately predict their cloud expenditures. The result? Rampant cloud sprawl and diminishing returns. But with the future undoubtedly moving towards the cloud and cloud-like technologies, what can your organization do to limit the damage to its bottom line?
In this article, we’ll explore the financial intricacies and risks the cloud poses, take a look at some of the pricing differences between the big three providers and their dedicated server alternatives, and offer some potential solutions on how to bring predictability back to your IT budget.
Want to skip straight to our price estimates or another specific section? Use the table of contents below!
The Growth of Multi-Cloud, Hybrid Cloud, and Cloud Sprawl
When the cloud first gained traction in the tech world, it was categorized using the following three monikers: Public, Private, or Hybrid. Public clouds ran on the networks of external companies and were shared by large groups of users, Private offered users a single-tenant alternative, and Hybrid served as a combination of the two, using a mix of cloud technology and traditional dedicated servers. As cloud solutions have grown though, we’ve seen the rise of another type of cloud infrastructure: Multi-cloud.
Multi-cloud is the idea that instead of using a single cloud solution to meet infrastructure needs, an organization may utilize several cloud solutions in unison. This multi-cloud mindset has lent itself to a rise in hybrid cloud environments as well. With more companies seeing the cloud not as a singular, fix-all solution, but rather as just one of many tools in their belt, intricate hybrid solutions are becoming more and more common.
There’s a problem with this though. As the number of cloud platforms a company works with grows, it becomes easier and easier to lose track. Even when working with a single platform, as new testing instances are created and recreated, they eat up resources and pile on costs. Unless these instances are decommissioned afterwards, they continue to persist. Over time, the monetary consequences of these forgotten instances can add up. This is called cloud sprawl: the uncontrolled growth of cloud resources beyond an organization’s actual needs.
Now though, take this issue and multiply it across multiple cloud platforms. As more and more tools switch over to cloud-based models, the number of different cloud platforms a company works with continues to grow. At a certain point, it becomes very difficult to keep track of these expanding services. Often, it’s not until a company receives its bill that it starts to realize just how out of control this situation has become.
The Risks of Cloud Sprawl
There are two major risks associated with cloud sprawl: financial drain and security vulnerabilities.
The first point is rather obvious. As your company implements more and more cloud-based tools or virtual machines, the resources and resulting costs needed to run them and store their data add up. After all, bandwidth isn’t free. While a single abandoned VM isn’t likely to break the bank, as these instances accumulate, so does their potential for damage. This is especially true if you’re part of a large organization which relies on heavy communication between departments. If the proper oversight isn’t there, it can be very difficult to identify these issues before they take their toll.
As the number of cloud platforms a company works with grows, it becomes easier and easier to lose track.
The second major consequence of cloud sprawl is its risk to your organization’s security. The more information floating around unsecured, the more points of entry a hacker has when launching an attack. Often when creating test environments, operations teams will copy over at least a portion of the data present in the production environment for testing. While this helps ensure the test environment remains true to its production counterpart, it also leaves this data vulnerable. In the event this testing environment is abandoned after use but not deleted, that data persists. If this instance remains forgotten, it is unlikely to be as secured as the rest of your infrastructure. This creates easy vulnerabilities which can leave your organization open to attack.
But if cloud sprawl is such a risk, why aren’t companies doing more to fight it?
The issue is that even when an organization is aware it has unused infrastructure taking up bandwidth and creating vulnerabilities, depending on its size and structure, it can be very difficult to hunt down and remediate these instances. Doing so wastes time and resources the average IT team can’t spare. As a result, the issue gets pushed back, losing priority until it’s often too late. This is especially true as multiple cloud solutions become integrated together. While these multi-cloud environments give companies more customizable infrastructures, they can also create problems, then obscure them beneath layers of overlapping platforms.
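Part of that hunt can at least be automated. As a minimal sketch (the record format and field names here are hypothetical, not any particular provider’s API), an audit script could flag every instance with no recorded activity inside a set window:

```python
from datetime import datetime, timedelta

def find_stale_instances(instances, max_idle_days=30, now=None):
    """Return the names of instance records whose last recorded
    activity is older than max_idle_days -- candidates for review
    and decommissioning.

    Each record is a dict with hypothetical 'name' and 'last_active'
    fields; a real audit would pull these from a provider API or a
    billing export instead.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_idle_days)
    return [inst["name"] for inst in instances if inst["last_active"] < cutoff]

# Two VMs: one active a few days ago, one untouched for three months
fleet = [
    {"name": "prod-web-1", "last_active": datetime(2020, 5, 30)},
    {"name": "test-env-old", "last_active": datetime(2020, 3, 1)},
]
print(find_stale_instances(fleet, now=datetime(2020, 6, 1)))  # ['test-env-old']
```

Even a crude report like this, run on a schedule, surfaces forgotten test environments before they spend months quietly inflating the bill.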
In addition to cloud sprawl being a difficult issue to solve once it’s taken root, it can also be a difficult problem to avoid in the first place. Even with proper planning, as your cloud infrastructure grows to meet your rising needs, it becomes very difficult to predict the actual impact these services will have on your IT budget.
But why is that?
The Deceptiveness of Cloud Pricing
A few years ago, we noticed we were getting a lot of feedback from new customers reporting they’d switched over to us after paying unpredictably high charges through AWS. At the time, we did some research into the cost differences between AWS and traditional bare-metal solutions and compiled our findings. The results of our investigation are posted here, Why Switching to AWS May Cost You a Fortune, but the main takeaway of this study was that for customers with heavy bandwidth usage, cloud solutions like AWS could end up costing hundreds if not thousands of dollars more per month.
In the years since then though, other major players have risen up the ranks bringing greater competition to the world of public cloud solutions. But, have these changes translated to actual cost savings for their customers?
Let’s take a look.
The Big Three: AWS, Azure, and Google Cloud
In today’s public cloud market, three IaaS providers control the majority of cloud solutions: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. While Microsoft and Google both hold respectable shares of the market, it’s Amazon who remains the undisputed king.
Curious to see how Amazon’s pricing had changed over recent years, I loaded up the AWS cost calculator and re-entered the same input and output values used in our original post covering AWS’s pricing. While the interface has changed slightly, I was surprised to find their prices hadn’t. Although the cost of storage drops slightly with each passing year, the price of inbound and outbound data transfer through AWS has not budged. These are the price estimates I received before factoring in anything other than bandwidth:
Amazon Web Services (AWS) Price Estimates
- 3,000GB outbound, 300GB inbound (10Mbps) – $268/month
- 9,000GB outbound, 1,000GB inbound (30Mbps) – $808/month
- 15,000GB outbound, 1,500GB inbound (50Mbps) – $1,457/month
- 30,000GB outbound, 3,000GB inbound (100Mbps) – $2,859/month
- 150,000GB outbound, 15,000GB inbound (500Mbps) – $12,410/month
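For readers curious how these monthly transfer totals map to the sustained throughput figures in parentheses, the conversion is simple arithmetic. A quick sketch (assuming decimal units, 1GB = 8,000 megabits, and a 30-day month; the figures above are rounded up to the nearest common port speed):

```python
def monthly_gb_to_mbps(gb_per_month, days=30):
    """Sustained throughput (Mbps) needed to move a monthly transfer
    total (GB) at a constant rate over the whole month.
    Decimal units: 1GB = 8,000 megabits."""
    return gb_per_month * 8000 / (days * 24 * 3600)

for gb in (3000, 9000, 15000, 30000, 150000):
    print(f"{gb:,}GB/month ≈ {monthly_gb_to_mbps(gb):.0f} Mbps sustained")
```

So 3,000GB a month works out to roughly 9.3Mbps of constant throughput, and 150,000GB to roughly 463Mbps, which the tiers above round up to 10Mbps and 500Mbps respectively.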
At the time of our original posting, we estimated that the average user, transferring 10TB of bandwidth a month, would pay at least $600 more each month using AWS over a dedicated server. This comparison was based on the price difference between AWS and a Quad-Core Xeon CPU with 8GB RAM, a 500GB HDD, and 10TB of included bandwidth. Back then, a server like this would have cost you about $100 to $200 each month.
But bandwidth and storage for dedicated servers have grown even less expensive over the years. Today, you can buy an instant deployment server through our dedicated servers page, ready to deploy in an average of 7 minutes, with specs twice as high, for half the price.
For example, an E3-1230 Quad-Core Kaby Lake server, with 16GB of RAM, a 240GB SSD, and 20TB of bandwidth (roughly 60Mbps of sustained throughput), can be purchased for only $95 a month. This same level of bandwidth, if purchased through AWS, would cost you over $1,500 a month. That’s a savings of over $1,400 each month while utilizing your own dedicated hardware and resources. This means not only cost savings for you, but higher reliability and security as well.
For customers with heavy bandwidth usage, cloud solutions like AWS could end up costing hundreds if not thousands of dollars more per month.
But this is just one side of the equation, and AWS is just one of several providers. Surely in such a competitive marketplace, one of its competitors can offer users a better price.
Seeking to test this theory, I loaded up the Microsoft Azure estimate calculator and began plugging in some bandwidth numbers. The first thing I noticed was that Azure’s cost calculator is less user-friendly than its AWS counterpart. The second thing I noticed was the pricing, which, while lower than AWS’s, is only nominally so. Azure’s calculator doesn’t have separate fields for outbound and inbound data, only a total for outbound GB transferred. So, using the same outbound GB ranges as with AWS, here are the estimates I received:
Microsoft Azure Price Estimates
- 3,000GB – $260.57/month
- 9,000GB – $782.56/month
- 15,000GB – $1,285.53/month
- 30,000GB – $2,530.53/month
- 150,000GB – $11,206.13/month
While these prices are slightly lower, they still pale in comparison to the equivalent costs one would see using a dedicated bare-metal solution.
To finish out the big three, I decided to take a look at Google Cloud as well. Again, the prices here are lower, but still considerably higher than a bare-metal solution. Here’s what I found:
Google Cloud Price Estimates
- 3,000GB – $237.49/month
- 9,000GB – $712.46/month
- 15,000GB – $1,112.84/month
- 30,000GB – $2,020.88/month
- 150,000GB – $9,285.20/month
As you can see, while users can save money choosing Google Cloud over AWS, even with this better pricing, they would still be looking at potentially paying over $1,000 more each month for their bandwidth than they would using a dedicated bare-metal solution.
For those who’d like to see a quick side-by-side comparison of these providers’ pricing, here are the three sets of estimates in a single table (the AWS figures include the corresponding inbound transfer listed earlier; Azure and Google were quoted on outbound transfer only):

| Monthly Outbound Transfer | AWS | Microsoft Azure | Google Cloud |
| --- | --- | --- | --- |
| 3,000GB | $268 | $260.57 | $237.49 |
| 9,000GB | $808 | $782.56 | $712.46 |
| 15,000GB | $1,457 | $1,285.53 | $1,112.84 |
| 30,000GB | $2,859 | $2,530.53 | $2,020.88 |
| 150,000GB | $12,410 | $11,206.13 | $9,285.20 |
Now, this is of course an oversimplification of a complex issue. The truth is, you’ll never be paying for just bandwidth in any of these scenarios. There is much more that goes into running a server or utilizing a cloud service than just the flow of data back and forth. Between the hardware and software you use, the way your infrastructure is set up, and managed vs. unmanaged services, there are virtually infinite combinations of features that can be added, and these additions are rarely free. Looking at bandwidth only is a quick way to compare services, but it’s a short-sighted means of decision-making.
So, in an effort to better compare the services offered by these providers to those available through Hivelocity, I decided to compare what a similar server build would look like across the various providers. Here’s what I discovered:
Hivelocity Dedicated Server
- E3-1230 v6 3.5GHz Kaby Lake
- 4 Cores / 8 Threads
- 200GB Cloud Storage
- 20TB on a 1Gbps port (~60Mbps)

AWS
- Linux on c5d.2xlarge
- 1 x 200GB NVMe SSD
- 200GB General Purpose SSD
- 20TB Outbound Data Transfer

Microsoft Azure
- E15: 274.9GB (256GiB) SSD
- 200GB Temporary Storage
- 20TB Outbound Data Transfer

Google Cloud
- 1 x 402.7GB (375GiB) Local SSD
- 200GB (186.3GiB) Stored Data
- 20TB (18,626.5GiB) Data Transfer
Now, as I’m sure you’ve noticed, the values in this table are not all the same. There are a few reasons for this. Unfortunately, using the options available within the pricing calculators, it seems impossible to create a truly identical build across the board. While some values, such as bandwidth and storage space, can be specified exactly, others must be selected from a dropdown list of options. Many of these components, such as cores, memory, and drives, are often grouped together, further reducing options for custom specifications. On top of this, some providers list their values in gigabytes (GB) while others list them in gibibytes (GiB), requiring conversions to ensure the values remain equivalent. I’ve listed these conversions where applicable.
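The GB-to-GiB conversions above follow directly from the definitions of the two units: a gigabyte is 10^9 bytes while a gibibyte is 2^30 bytes, so 1GiB = 1.073741824GB. A quick sketch reproduces the conversions used in the build comparison:

```python
GB_PER_GIB = 2**30 / 10**9  # 1 GiB = 1.073741824 GB

def gib_to_gb(gib):
    """Convert gibibytes (2^30 bytes) to gigabytes (10^9 bytes)."""
    return gib * GB_PER_GIB

def gb_to_gib(gb):
    """Convert gigabytes (10^9 bytes) to gibibytes (2^30 bytes)."""
    return gb / GB_PER_GIB

# Conversions listed in the table above:
print(round(gib_to_gb(256), 1))    # 274.9   -- Azure's 256GiB SSD
print(round(gib_to_gb(375), 1))    # 402.7   -- Google's 375GiB Local SSD
print(round(gb_to_gib(200), 1))    # 186.3   -- 200GB of stored data
print(round(gb_to_gib(20000), 1))  # 18626.5 -- 20TB of data transfer
```

The roughly 7% gap between the two units is easy to overlook when comparing quotes, which is exactly why calculators that mix GB and GiB make apples-to-apples pricing harder than it should be.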
Additionally, there is the issue of whether or not a vCPU is actually equivalent to a Core. While your average cloud provider would have you believe that a virtual core is the same as a physical core, this isn’t entirely accurate. It’s similar to how the dedicated server I selected features 4 Cores but 8 threads. The threading technology does improve processing, but this isn’t the same as having 8 physical cores. This is why for both AWS and Google Cloud, I’ve selected options with 8 vCPU, as realistically, these options are more equivalent to 4 true physical cores. For those who may be concerned that this distinction is unfairly affecting the pricing of these builds, I did test them with 4 vCPU options as well. In both cases, the difference accounts for less than $200 a month (about $160/month for AWS and about $50/month for Google). While this could certainly make a difference over the course of a year, when your monthly bill is already in the thousands, this difference feels inconsequential.
So, to combat these limiting factors and keep these builds as similar as possible, I decided to assign priority levels to each component. Focusing first and foremost on network bandwidth and the number of cores, I selected options which allowed these values to stay identical across all four builds. From here, I chose the available RAM options which were most similar, saving the drive selection and storage for last. This does result in a range of variations between the sizes of the drives used, but it should be noted that in all these cases, the specific drive selected made little impact on the overall cost. For example, the Google Cloud 1 x 375GiB Local SSD is the largest drive on this list. However, its addition adds only $30 to the monthly bill for this build. The pricing for the other drive options appears comparable to this. Others will note that the AWS NVMe SSD drive is actually a superior device to a standard SSD drive. While this is true, it’s up to you to determine if this difference justifies the thousands of dollars more a month you’d be spending using AWS.
In the end though, what this experiment really proves is just how challenging it can be to achieve a true side-by-side comparison of these providers. Even when looking at only a handful of available options, it remains incredibly difficult to determine the real value of these services. This is where most eCommerce experts would argue for the usefulness of customer reviews in helping to make purchasing decisions. Sometimes, the raw numbers just don’t paint a clear enough picture.
This experiment proves just how challenging it can be to achieve a true side-by-side comparison of these providers.
So, to get a better idea of some of the additional factors that go into making these complex infrastructure decisions, I reached out to a customer of ours who is personally familiar with the advantages and disadvantages of both the cloud and dedicated server solutions: Vince Albanese.
How Hybrid Cloud Saves Vince Albanese Over $15,000 a Month
Vince Albanese has been a customer with Hivelocity for 15 years. He’s been a customer with AWS for even longer. Over the years, he’s developed software for companies including Revolution Money, an online banking alternative to PayPal which was eventually purchased by American Express, and Certainty, a technology which leverages distributed ledger to provide immutable transactional control over communications, document sharing, intellectual property, and asset management. Serving clients in the legal, accounting, wealth management, human resources, and healthcare markets, the data Certainty uses must remain compliant with multiple external regulation standards. This creates a major limiting factor Vince must remain aware of when making decisions, and is one of the key factors that led him to the hybrid solution he uses today.
It took a willingness to experiment with new technologies, but Vince eventually found a hybrid solution that was perfect for his needs.
In his time as a customer with both our company and AWS, Vince has utilized a variety of services. However, when he first signed up with Hivelocity, it was for a very specific reason. As his AWS use had expanded, his associated costs had grown out of control, becoming harder and harder to accurately predict. Suddenly, he was spending almost $20,000 a month to run his applications through AWS. That’s when Vince started colocating with us, utilizing his own servers but housing them within our facilities. By doing so, he brought his monthly bill down to only a couple thousand dollars a month.
But there was still an issue. Because of the highly regulated nature of the data he works with, Vince must meet the highest levels of security and privacy compliance. While these days all Hivelocity data centers are HIPAA, PCI, and SSAE-16 SOC 1 and SOC 2 compliant, at the time when Vince joined us, we were not. Even if we had been, having a compliance-certified data center is only one piece of the puzzle. A user’s hardware and business operations must meet certain standards as well, and certifying this requires auditing. Although Vince could save significant money using an entirely dedicated server solution, in order to ensure his infrastructure setup met the necessary standards, he would have to spend a small fortune on auditing and potential upgrades.
This was one of the benefits of AWS for Vince. Because AWS meets these compliance standards, users who keep their data stored in the AWS cloud meet this same level of compliance as well.
So, if Vince stayed with AWS, he could more easily meet his compliance regulations, but by staying with a dedicated server setup, he could drastically reduce monthly costs and improve performance. He was stuck between a rock and a hard place.
It took several years of trial and error, and a willingness to experiment with new technologies, but Vince eventually found a hybrid solution that was perfect for his needs. Using Docker containers stored on dedicated servers to run his applications, Vince continues to utilize AWS, keeping his data stored securely behind their firewall. This combination allows Vince to meet his necessary compliance standards while still utilizing dedicated hardware for the heavy lifting. The servers he uses can be deployed instantly as needed, and a reusable configuration code lets Vince quickly provision these servers to all be identical. This keeps costs to a minimum and has brought predictability back to his budgeting, reducing latency and improving performance at the same time.
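The article doesn’t detail Vince’s exact setup, but the pattern it describes, containers on dedicated hardware defined by a reusable configuration file so that every server comes up identical, can be sketched with a tool like Docker Compose. All of the names and values below are hypothetical:

```yaml
# docker-compose.yml -- hypothetical sketch of a reusable server
# definition. Checking a file like this into version control lets
# each newly deployed dedicated server be provisioned identically.
version: "3.8"
services:
  app:
    image: registry.example.com/myapp:1.4.2   # pinned tag, not :latest
    restart: unless-stopped
    ports:
      - "443:8443"
    environment:
      # Regulated data stays behind the compliant cloud store;
      # only its endpoint is configured on the dedicated server.
      DATA_STORE_URL: https://data.example.com
```

Because the file fully describes the service, adding capacity becomes a matter of deploying a new server and running the same configuration against it.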
But this is just one solution to a very specific problem. It’s likely Vince’s circumstances and tech needs are not the same as yours. Because of the strict regulations associated with healthcare providers and legal groups, the ability to meet compliance has to remain one of his biggest considerations. This resulted in a hybrid infrastructure that allows Vince to meet his security obligations while still limiting his overall reliance on the cloud, keeping costs manageable. What worked for him, probably isn’t the ideal solution for you. But there’s an important lesson here.
The takeaway from this is that Vince was willing to experiment, to try out and test different methods until he reached an ideal solution. When I asked Vince what his advice would be for others facing similarly difficult decisions, he told me that five years ago, the solution he currently uses wouldn’t have been possible. It’s thanks to new technologies he’s been able to achieve this solution. His advice to others, is to keep an eye on new technologies. Just because your current configuration gets the job done, doesn’t mean there isn’t a solution that can do it better or cheaper.
Hivelocity: Your Private and Hybrid Cloud Provider
The truth is, making big infrastructure changes is daunting. Even with thousands of dollars on the line, untangling your organization from a platform and its services is a time-consuming and miserable task. Beyond the work involved, the potential risks of major infrastructure shifts leave many IT managers unwilling to welcome change until it’s forced on them. While this reaction is understandable, it isn’t always what’s best for their organizations.
When facing major issues that require big decisions, you don’t have to wait until it’s too late. Start planning today and you can limit the negative effects of cloud sprawl on your organization’s resources and finances. With a hybrid cloud solution, you can utilize the best features of both the cloud and dedicated hardware. Keep the pieces that work for you and avoid those that simply hurt your budget. Best of all, this transition doesn’t have to happen all at once.
Not sure if a hybrid cloud solution is right for you? Try it and see. Talk with our cloud experts and start slowly. If the changes you make look promising, then transition more as you see fit. You don’t have to operate with an all or nothing mentality.
In the end though, if you want to know whether one service can provide greater benefits than another, the only way to truly know is to start a conversation with the provider. Every user’s needs are different and every organization is bound by different limiting factors. The right solution for you is out there. It may just not be the obvious one.
So if you’re curious, if you’ve been considering making a switch to a more dedicated solution, or if you just want to know if you could be saving money somewhere else, give Hivelocity a call or start a live chat today. Talk to our sales agents, discuss your current setup and your plans for the future, and get a quote. After all, the fastest way to know if our services are right for you is to ask.
With an abundance of custom-built solutions available, Hivelocity is the service provider you’ve been looking for. We think customers like Vince would agree. His 15 years as a client are a testament to that.
– Sean Kelly