Published: Aug 04, 2022
The “Hidden Stage 2” of Your Cloud Adoption
Introduction
Organisations are usually excited and satisfied once they adopt the cloud: a sizeable portion of hardware can be retired, IT resources become more scalable, and the environment is easier to manage and control.
But what is next? Here is the “Hidden Stage 2” of cloud adoption – cloud optimisation.
Cloud optimisation is not only about reducing cost or finding the best pricing plan; there are plenty of areas where the use of cloud resources can be improved so that end users get the most effective service and the best performance.
What are the differences between traditional performance tuning and cloud optimisation?
In the past, we conducted performance tuning because resources were scarce – using the least memory, the fewest CPU cycles, or even the fewest programming statements, in order to achieve the most energy- or resource-efficient algorithm. The traditional mainframe virtual storage architecture is a great example of getting the most operating space out of “limited” memory. Nowadays, however, with cloud computing, resources are effectively “unlimited”. The approach to performance tuning or optimisation is therefore slightly different – we focus on price-effectiveness and the best use of options and features.
Cloud optimisation framework
We suggest conducting cloud optimisation in three aspects:
- Pricing and sizing
- Architecture
- Utilisation and performance
The “trap” of pay-as-you-go?
Many enterprises have chosen to move their workloads from traditional CapEx (capital expenditure) based data centres to the cloud because of its very attractive pay-as-you-go pricing scheme. Cloud services are billed by days or even minutes of usage. This presents a completely different view to the CFO, as almost every cost becomes OpEx (operating expenses). Under CapEx, almost every IT cost line item (e.g. compute servers, storage servers, network equipment) is fixed, whereas in the cloud, IT spending varies with utilisation.
Does pay-as-you-go give a sweet illusion to the CIO and CFO?
Diversified and complicated feature/pricing schemes – in the cloud, we face a massive à la carte menu when determining service options. For example, when provisioning an AWS EC2 compute platform, there are many selections to make, such as operating system, vCPU, memory, GPU and pricing model, and different combinations of these options result in very different cost models.
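As a rough illustration of how the pricing model alone changes the bill, here is a minimal sketch comparing the monthly cost of the same instance size under a few pricing models. The hourly rates and the pricing-model names are placeholder assumptions for illustration only, not actual AWS prices.

```python
# Illustrative only: the hourly rates below are placeholder assumptions, not real AWS prices.
HOURS_PER_MONTH = 730

# Hypothetical pricing for the same instance size under different models.
pricing_models = {
    "on_demand": 0.192,       # assumed $/hour, no commitment
    "reserved_1yr": 0.121,    # assumed effective $/hour with a 1-year commitment
    "spot": 0.058,            # assumed $/hour for interruptible capacity
}

def monthly_cost(hourly_rate: float, utilisation: float = 1.0) -> float:
    """Cost for one month at a given fraction of always-on usage."""
    return hourly_rate * HOURS_PER_MONTH * utilisation

for model, rate in pricing_models.items():
    print(f"{model:>12}: ${monthly_cost(rate):,.2f}/month at 100% usage, "
          f"${monthly_cost(rate, 0.3):,.2f}/month at 30% usage")
```

Even this toy comparison shows why the “best” option depends on how steadily the workload runs, not just on the headline hourly rate.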
Easy access to cloud features – cloud features and options can, as people tend to assume, be provisioned and released almost instantly. This creates a rather casual mindset in planning application architecture, provisioning and sizing, which may incur excessive spending or cost fluctuation, and can leave the provisioned resources under-utilised from day one. For example, a business may adopt containers as its application architecture and over-size the container requests: the node’s resources are allocated but never actually used. These are not virtual CPUs but real CPUs, and this stranded capacity represents a real cost.
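A simple way to see stranded capacity is to compare what containers request against what they actually use. The sketch below uses invented node names and figures purely as an assumption to show the calculation.

```python
# Hypothetical node inventory: CPU cores requested by containers vs cores actually used.
nodes = [
    {"name": "node-a", "cores_provisioned": 16, "cores_requested": 12, "cores_used": 4.5},
    {"name": "node-b", "cores_provisioned": 16, "cores_requested": 14, "cores_used": 6.0},
    {"name": "node-c", "cores_provisioned": 16, "cores_requested": 6,  "cores_used": 2.0},
]

for node in nodes:
    stranded = node["cores_requested"] - node["cores_used"]      # reserved but idle
    utilisation = node["cores_used"] / node["cores_provisioned"]  # how busy the node really is
    print(f'{node["name"]}: {stranded:.1f} cores requested but unused '
          f'({utilisation:.0%} of the node actually busy)')
```

The “stranded” cores are paid for whether or not the application ever touches them, which is exactly the hidden cost of casual over-sizing.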
Architecture does matter
Selection of application architecture platform – cloud platforms have evolved rapidly over recent years, from IaaS to PaaS/FaaS, microservices and serverless. The choice of infrastructure and application architecture affects both cost and performance. For instance, it may be wise to run code as serverless functions in a modern application architecture, because the code only incurs cost when it is triggered. However, if the business expects that code to be called at a constantly high volume every day, it may be more cost-effective to run it on an always-on compute machine in the cloud environment.
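To make the break-even intuition concrete, here is a minimal sketch comparing a per-invocation (serverless) cost model against an always-on instance. Every figure – invocation volume, per-invocation and GB-second rates, and the VM’s monthly cost – is an assumption chosen for illustration, not a vendor price.

```python
# Rough break-even sketch: per-invocation (serverless) vs always-on compute.
# All figures are assumptions for illustration, not vendor price-list values.
REQUESTS_PER_DAY = 2_000_000
COST_PER_MILLION_INVOCATIONS = 0.20   # assumed $ per 1M invocations
COST_PER_GB_SECOND = 0.0000166        # assumed $ per GB-second of function runtime
AVG_DURATION_S = 0.2                  # assumed average execution time per call
MEMORY_GB = 0.5                       # assumed memory allocated per function

VM_MONTHLY_COST = 140.0               # assumed always-on instance, $/month

days = 30
invocations = REQUESTS_PER_DAY * days
serverless_cost = (invocations / 1_000_000) * COST_PER_MILLION_INVOCATIONS \
    + invocations * AVG_DURATION_S * MEMORY_GB * COST_PER_GB_SECOND

print(f"Serverless:   ${serverless_cost:,.2f}/month for {invocations:,} invocations")
print(f"Always-on VM: ${VM_MONTHLY_COST:,.2f}/month regardless of volume")
```

At low or bursty volumes the pay-per-call model wins easily; at a constant high volume the two curves cross, which is the point the architecture decision should hinge on.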
DevOps + performance optimisation
CI/CD/CO – Continuous Integration, Continuous Delivery, Continuous Optimisation – even when you have chosen the right pricing plan, compute options and architecture, you will still find that the cloud instances are not fully optimised. This may be due to changes in the business situation, unplanned business behaviour, and/or organic changes in the cloud infrastructure. You may need tools that “learn” the application’s cloud resource demand, characteristics and patterns, and combine them with preset policies and requirements, in order to come up with justifiable action plans to optimise your environment.
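In practice the “learn the pattern, apply a policy” loop can start very simply. The sketch below is an assumed, rule-based example: the utilisation samples and the two percentile thresholds are placeholder policy values, not a prescription.

```python
import statistics

# Hypothetical hourly CPU utilisation (%) collected for one instance over a period.
cpu_samples = [12, 15, 9, 22, 18, 14, 11, 30, 25, 16, 13, 10, 8, 19, 21, 17]

# Preset policy (assumed values): flag a resize when sustained demand sits far
# below or far above what the instance can deliver.
P95_DOWNSIZE_THRESHOLD = 40.0
P95_UPSIZE_THRESHOLD = 85.0

p95 = sorted(cpu_samples)[int(0.95 * (len(cpu_samples) - 1))]
mean = statistics.mean(cpu_samples)

if p95 < P95_DOWNSIZE_THRESHOLD:
    advice = "candidate for a smaller instance size or fewer replicas"
elif p95 > P95_UPSIZE_THRESHOLD:
    advice = "candidate for a larger instance size or more replicas"
else:
    advice = "sizing looks reasonable for the observed demand"

print(f"mean CPU {mean:.1f}%, p95 CPU {p95:.1f}% -> {advice}")
```

Commercial optimisation tools replace the hard-coded thresholds and short sample with learned demand patterns across the whole estate, but the shape of the decision is the same.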
With AI and machine learning as the trend in cloud computing, cloud optimisation has started to make use of these technologies to provide actionable advice to IT teams. Machine learning analyses the workload pattern, the utilisation of cloud resources, and even your auto-scaling behaviour, to evaluate whether your current configuration is at its optimal state.
The best thing about this kind of tool is its ability to collect ALL of your cloud resource utilisation data; as a result, the advisories it produces are comprehensive and well justified.
We advocate adding the cloud optimisation task to the existing DevOps process. Optimisation becomes part of the continual development-and-operations cycle, so the savings are realised immediately (not next month or next quarter, as in the traditional dedicated host server era).
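One way to wire this into a pipeline is a small gate that runs after deployment and surfaces (or fails on) unrealised savings. The sketch below is an assumed example: the resource names, recommendations, saving figures and the policy threshold are all hypothetical.

```python
import sys

# Hypothetical output of an optimisation scan run as a pipeline step:
# each finding carries an estimated monthly saving if the recommendation is applied.
findings = [
    {"resource": "orders-api",   "recommendation": "downsize 2xlarge -> xlarge", "monthly_saving": 92.0},
    {"resource": "batch-worker", "recommendation": "switch to spot capacity",    "monthly_saving": 210.0},
]

# Assumed team policy: flag the pipeline stage when unrealised savings exceed this amount.
SAVINGS_THRESHOLD = 100.0

total = sum(f["monthly_saving"] for f in findings)
for f in findings:
    print(f'{f["resource"]}: {f["recommendation"]} (~${f["monthly_saving"]:.0f}/month)')

if total > SAVINGS_THRESHOLD:
    print(f"Unrealised savings of ~${total:.0f}/month exceed the policy threshold.")
    sys.exit(1)   # non-zero exit makes the pipeline stage flag the drift
```

Treating optimisation findings like failing tests keeps them visible in every release instead of waiting for a quarterly cost review.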
Summary
In the market, there are various tools that facilitate cost, utilisation and performance optimisation on cloud platforms. At NCS, we have experts and partners who specialise in using these tools to provide accurate and actionable optimisation advice to our customers.