This blog post was written by Barry Collins, a technology writer and editor who has worked for numerous publications and websites over his 20-year career.
With the high levels of performance available from public cloud services, it’s easy to assume that any cloud migration will inevitably improve user experience. Alas, that’s a dangerous assumption.
It’s vital to understand how different cloud infrastructure components can impact application performance for users. Failure to do so can lead to severe performance degradation, to the point where users abandon an application altogether and turn to rival services.
The forthcoming ThousandEyes Cloud Performance Report will pinpoint the factors that application developers should pay attention to when designing their apps for the cloud. In the meantime, we’ve asked ThousandEyes’ Principal Solutions Analyst, Mike Hicks, to share his experience of how to make better decisions when undertaking cloud migrations and how to avoid costly pitfalls.
Focus on the User
User requirements are often pushed too far down the priority list when it comes to designing applications, according to Mike Hicks. Developers will often design applications to be as robust as possible within a given budget without necessarily evaluating what level of performance the user base can tolerate.
Decisions about which cloud infrastructure to use are often made before the impact on user experience has been evaluated, and that’s putting the cart before the horse, according to Mike. “User-centered design techniques can help in making application design decisions that align with user needs and reduce over-engineering, which can lead to over-specification of cloud infrastructure,” he said.
“With better knowledge of users and expected performance, organizations can make infrastructure choices that are optimal and aligned with user needs, even as ecosystems evolve and new options become available.”
Look at the Entire Chain
The complexity of modern applications makes careful planning essential. Long gone are the days when cloud app performance was determined by a single route back and forth from a remote server. Nowadays, different components—such as user authentication or payment systems—can be distributed across different clouds, data centers, regions, and availability zones.
In the same way that a city planner wouldn't routinely route goods trucks through a heavily congested city center, app developers must be conscious of the potential bottlenecks their cloud infrastructure choices might create. And they can only do that with good visibility into the complete service delivery chain, along with a clear basis for defining and setting performance expectations.
“One of the primary challenges for teams responsible for designing applications on the cloud is to understand the performance of various cloud infrastructure components to make informed decisions,” said Mike. “It's crucial to know which clouds, regions, and availability zones host different components and how they perform compared to the organization's user base.”
“To design applications that perform optimally, it's essential to understand how different infrastructure elements impact application design and cloud service performance. Being continuously aware of the performance characteristics of the underlying cloud infrastructure puts organizations in a better position to have some influence and control over the delivery of digitized, cloud-based user experiences.”
The decision on where to host an application should, whenever possible, be made after these factors have been evaluated, not before. “Application design involves more than just infrastructure,” said Mike. “An application's characteristics are determined well before the discussion on where and how to re-host it. The role of users in determining those characteristics must be considered in the cloud development process. Doing so could lead to a very different set of infrastructure requirements for each application.”
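As a concrete starting point, here is a minimal sketch, in Python, of the kind of baseline Mike describes: timing round trips from a single vantage point to each component of an application before deciding where and how to host it. The endpoint URLs are hypothetical placeholders, and one vantage point is only a start; a realistic view of the service delivery chain needs measurements from where the users actually are.

```python
# Minimal sketch: measure round-trip time from one vantage point to the
# components that make up a service delivery chain. The endpoints below are
# hypothetical placeholders; substitute your own front end, auth, payments, etc.
import time
import urllib.request

ENDPOINTS = {
    "front-end (us-east)": "https://app.example.com/health",
    "auth service (eu-west)": "https://auth.example.com/health",
    "payments (ap-southeast)": "https://pay.example.com/health",
}

def median_rtt_ms(url, attempts=5):
    """Return the median round-trip time in milliseconds, or None on failure."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(url, timeout=5).read()
        except OSError:
            continue  # endpoint unreachable from this vantage point
        samples.append((time.perf_counter() - start) * 1000)
    if not samples:
        return None
    samples.sort()
    return samples[len(samples) // 2]

for name, url in ENDPOINTS.items():
    rtt = median_rtt_ms(url)
    status = f"{rtt:.0f} ms" if rtt is not None else "unreachable"
    print(f"{name:28s} {status}")
```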
The Perils of Poor Planning
Mike has witnessed real-world examples of how failing to fully consider the user experience has backfired on app developers. He tells the story of one company that performed a "simple lift and shift" away from its own data center to a cloud platform with a distributed database.
"The problem was that nobody had considered the interaction and location of the users," said Mike. He explained that the application in question was time-sensitive and required a quick response. The assumption was that moving to the cloud would improve performance. However, the construction and distribution of the specific services required meant that the users, while quickly able to query the front end, were simply frustrated when trying to transact with the application. The backend database and services were essentially operating some distance away from the front-end interaction."
That proved a costly mistake because "speed was the differentiating factor for this company, and they began losing customers to rival services until they were able to identify and classify the users' requirements."
Alienating customers isn’t the only way companies can lose money on poor cloud migration planning. Flawed infrastructure decisions can also create unnecessary compute costs. “Most cloud providers operate on a combination of ingest and compute resource charges,” said Mike. “The fact that developers can offload compute resources sometimes means they take a less than optimal design decision, not always considering elements such as the location of functions in relation to the user.”
"This can lead not just to increased compute resources but also to needless shipping of data back and forth across regions. Over-engineering can lead to the over-provisioning and over-specification of cloud infrastructure to support the application, which can result in extra costs."
Better Communication Across Silos
It's not only a lack of visibility over external cloud providers that can hamper application performance—a lack of visibility over what your colleagues are doing can also prove problematic.
In large organizations, different internal teams may be responsible for services that span across multiple products or domains, and that can create silos where nobody has a complete view of the entire service delivery chain.
“For instance, a recent case involved a database cluster migration that coincided with a scheduled job making many database calls, exhausting the available capacity,” said Mike. The result was an entirely avoidable situation that dragged down the performance of the app.
“While there will always be the occasional 'perfect storm,' which is an unpredictable combination of factors that creates outage conditions, this is the exception rather than the rule,” said Mike.
“The truth is that many outages could be avoided or have a more limited duration and blast radius if the impact of silos was reduced, and decision-making and activity across the end-to-end service delivery chain was transparent to everyone involved.”
Don’t Overdo the Cost-cutting
One major motivation for cloud migrations is cost savings. Savings can come not only from moving out of on-premises data centers to cloud infrastructure but also from switching from one provider to another in search of a better deal. But what might appear to be a significant short-term saving can turn into a huge drag on revenue if the impact on performance is not carefully considered.
“The approach cannot just be about cost-cutting alone,” said Mike. “Organizations must become more thoughtful in terms of how they optimize their infrastructure spend while still maintaining their performance. The emphasis should be on architecting smarter, with cost-effectiveness coming in almost as a by-product of making smarter, more user-centric architectural decisions.”
“Organizations cannot afford to take a backward step on performance to save a few dollars,” Mike added. “Users expect fast performance from web-based applications and workloads, and that expectation will only increase.”
That doesn’t mean cost savings can’t be achieved. But again, it comes back to taking a holistic view of the entire service delivery chain and working out where the savings can be made without harming end-user experience.
Mike gives a couple of examples of how that might be achieved. “The most cost-effective and simultaneously performant way to serve users is to cache content via a CDN and host compute-intensive infrastructure as close as possible to the users accessing it,” he said. “An eCommerce store that operates on huge backend databases may benefit from a more distributed database structure that speeds up calls on the backend (fulfilling user requests for their product) and reduces the cost-to-serve for each would-be purchaser.”
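As a rough illustration of the caching half of that advice, the sketch below marks catalog-style responses as cacheable so a CDN, or any shared cache, can serve them from the edge instead of hitting origin compute on every request, while keeping transactional responses uncacheable. The paths and max-age value are illustrative assumptions rather than recommendations.

```python
# Minimal origin server sketch (standard library only): cacheable content gets
# a public Cache-Control header so a CDN edge can reuse it; transactional
# responses are marked private/no-store. Paths and TTLs are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

CACHEABLE_PREFIXES = ("/static/", "/catalog/")   # safe to cache at the edge
MAX_AGE_SECONDS = 3600                           # one hour; tune per content type

class OriginHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        if self.path.startswith(CACHEABLE_PREFIXES):
            # Public and cacheable: a CDN edge can reuse this for an hour.
            self.send_header("Cache-Control", f"public, max-age={MAX_AGE_SECONDS}")
        else:
            # Personalized or transactional responses must not be cached.
            self.send_header("Cache-Control", "private, no-store")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), OriginHandler).serve_forever()
```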
Improving performance for the user and cutting costs is the ultimate win-win. But it’s only possible if you’re as focused on the user experience as you are on the bottom line.