When it comes to migrating your workloads to the cloud, cost is the first thing put up for discussion. Cost is the driving factor that leads organizations to opt for the cloud in the first place. Yet the common mistakes most companies and system administrators make when estimating cloud costs are twofold:
- Looking for a physical server mirror in the cloud: cloud infrastructure differs from the physical on-premise servers we are used to in many respects. There is no dedicated physical memory or CPU set aside for you; compute is provisioned by software from shared hardware pools. Your Lenovo ThinkSystem SR590 server (Xeon SP Gen 2) might have 16GB of RAM, a 24-core Xeon processor at 2.1 GHz, and 1.6TB of storage, but in a cloud such as GCP you instead choose a machine type/family with set or custom specifications depending on the workload you intend to run, rather than spinning up the exact Lenovo ThinkSystem SR590. So if you run memory-hungry applications, highmem machines are ideal, whereas for hosting your web apps, E2 machines would be a good fit! Read more on machine types.
- Focusing on VM costs ONLY: virtual machines are the equivalent of the "physical server" we run. They are a good starting point for cloud cost estimation, but hardly the only one. You still need to account for operating system (OS) costs (Windows or other paid distributions), plus the applications and utilities your server needs to run; these include RDP (Remote Desktop Protocol), Active Directory, and other licenses. You also need to account for network egress, static/ephemeral IP addresses, storage, and so forth. These items are mostly ignored and can lead to wrong estimates and assumptions.
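To make the second mistake concrete, the sketch below adds up the monthly cost components for a single Windows VM. Every figure is a made-up placeholder for illustration, not a real GCP list price; the point is how far a VM-only estimate can drift from the full bill.

```python
# Rough monthly cost sketch for one Windows VM in the cloud.
# All figures are hypothetical placeholders, NOT real GCP prices.
monthly_costs = {
    "vm_compute": 120.00,      # machine type (vCPU + RAM)
    "os_license": 67.00,       # e.g. Windows Server image premium
    "persistent_disk": 17.00,  # block storage
    "static_ip": 7.30,         # reserved external IP address
    "egress": 22.00,           # network traffic leaving the cloud
    "rdp_cals": 12.00,         # RDP client access licenses
}

vm_only = monthly_costs["vm_compute"]
total = sum(monthly_costs.values())

print(f"VM-only estimate:  ${vm_only:.2f}/month")
print(f"Full estimate:     ${total:.2f}/month")
print(f"Underestimated by: {100 * (1 - vm_only / total):.0f}%")
```

With these placeholder numbers, looking at the VM alone misses roughly half the real monthly spend, which is exactly the kind of surprise that wrecks a budget.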
Considering the current unpredictable circumstances, cloud services present themselves as an ideal solution. But although investing in cloud services is a sound way to maintain a remote workforce, the costs involved can quickly spiral out of control if not kept in check. We'll discuss some ways to help you keep your cloud costs under control.
The first and foremost step in cloud migration is analyzing your current infrastructure expenditure and whether GCP will be able to reduce it, in the short or long run; a 3-year TCO (Total Cost of Ownership) is the most common cost metric. In most cases, GCP services give companies greater flexibility at lower cost. However, if left unchecked, these services can lead to unplanned cloud spend. This first stage calls for the organization to set a budget for cloud services and work within it. Google has an online pricing calculator to help you plan and budget.
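A 3-year TCO comparison is, at its core, simple arithmetic: upfront hardware plus recurring operating costs on one side, a recurring cloud bill on the other. The sketch below shows the shape of that calculation with hypothetical placeholder figures; your real numbers should come from your own invoices and Google's pricing calculator.

```python
# Simplified 3-year TCO comparison, on-premise vs cloud.
# Every figure here is a hypothetical placeholder for illustration.

YEARS = 3

# On-premise: large upfront spend, plus recurring operating costs.
onprem_hardware = 9000   # servers, storage, networking (one-off purchase)
onprem_yearly = 4200     # power, cooling, space, maintenance, admin time

# Cloud: no hardware purchase, but a recurring monthly bill.
cloud_monthly = 380      # VMs, storage, network, licenses

onprem_tco = onprem_hardware + onprem_yearly * YEARS
cloud_tco = cloud_monthly * 12 * YEARS

print(f"3-year on-prem TCO: ${onprem_tco:,}")
print(f"3-year cloud TCO:   ${cloud_tco:,}")
```

Note that the comparison can swing either way: a high enough monthly cloud bill outgrows the on-premise total, which is why the budget set at this stage needs to be enforced, not just estimated.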
Granular visibility into your cloud infrastructure allows you to monitor and ensure optimal performance, identify security indicators, and correct degradation. Cloud visibility ensures that you can forgo unused or underutilized resources while monitoring for threats. Monitoring and logging services are available to identify malicious traffic by source and to monitor traffic at every link of the network. Visibility also lets you confirm that every single resource in the company is optimized, eliminating wastage and saving on costs.
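Here is a toy sketch of the kind of check that visibility enables: flagging VMs whose utilization suggests they can be downsized or deleted. In practice the utilization numbers would come from a monitoring service (such as Cloud Monitoring); the instance names, metrics, and thresholds below are assumptions hard-coded for illustration.

```python
# Toy sketch: flag underutilized VMs from utilization data.
# In a real setup these numbers would be pulled from a monitoring
# service; here they are hard-coded placeholders for illustration.

instances = [
    {"name": "web-frontend", "avg_cpu_pct": 41.0, "days_since_login": 2},
    {"name": "batch-worker", "avg_cpu_pct": 3.2,  "days_since_login": 45},
    {"name": "legacy-test",  "avg_cpu_pct": 0.4,  "days_since_login": 120},
]

CPU_THRESHOLD = 5.0  # percent; assumed cutoff for "underutilized"
IDLE_DAYS = 30       # assumed cutoff for "nobody is using this"

candidates = [
    vm["name"]
    for vm in instances
    if vm["avg_cpu_pct"] < CPU_THRESHOLD and vm["days_since_login"] > IDLE_DAYS
]

print("Candidates to downsize or delete:", candidates)
```

Running a report like this on a schedule turns "visibility" from a dashboard you glance at into a recurring cost-saving action.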
Once you are ready, plan for migration! Migration needs to be carefully planned and budgeted for, because whatever migration method you choose, there are costs to be incurred: the resources the migration mechanism itself consumes (yes, tooling such as Migrate for Compute Engine runs on cloud resources to carry the migration loads!), network and VPN costs from your on-premise environment, and sometimes technical consulting fees to enable a seamless migration. Whatever the case, plan for the migration, allow time for testing, and countercheck that the migrated data is authentic and matches the setup you are migrating from before going live! We recommend doing at least three rounds of tests across these layers:
- OS and app layers: make sure all your operating systems and applications are migrated and licensed properly, including basics such as RDP, SSH, and your SQL licenses. Licensing is easily forgotten and can lead to disasters! Trust us, we have seen users get locked out of their Windows VM when their RDP licenses expired. Consider the number of concurrent users when budgeting for RDP and similar licenses.
- Data layer: check the consistency of your data over and over. Run "mock live sessions" on the migrated instances to verify that ALL your data has been migrated. Repeat these mock sessions several times, say 3-5 rounds weekly, up to at least a day before going live, until you are satisfied. Also keep checking user access permissions at the OS and application levels.
- Security and network layers: check your firewalls, rules, VPN connection type, ports, and the internet connections needed to keep your systems running. It is not uncommon to find a DB path still pointing to the on-premise IP/network, and much cursing when something isn't working!
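The data-layer check above can be partly automated. A minimal sketch, assuming your data lives as files on disk: hash every file in the source tree and compare against the migrated copy, reporting anything missing or changed. The function names and directory layout are illustrative, not part of any migration tool.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def compare_trees(source: Path, migrated: Path) -> list[str]:
    """Return relative paths that are missing or differ after migration."""
    mismatches = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(source)
        dst_file = migrated / rel
        if not dst_file.is_file():
            mismatches.append(f"MISSING: {rel}")
        elif sha256_of(src_file) != sha256_of(dst_file):
            mismatches.append(f"DIFFERS: {rel}")
    return mismatches
```

An empty result from `compare_trees` is the kind of objective pass/fail signal you want from each mock live session, rather than eyeballing the data.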
Next week, we shall run you through the "things to watch out for" once you are live, to avoid cost overruns and surprises! Fair warning: these "shocks and surprises" happen a lot in the first 3-6 months as organizations settle into the cloud way of working!