There are a couple of main ways Movere helps customers migrate highly oversubscribed VMs, whether they run on VMware ESXi, Citrix XenServer, or Microsoft Hyper-V.
- Data is collected from the working set, not the hypervisor. To estimate activity, a hypervisor randomly invalidates physical memory pages and interprets any subsequent faults as activity. Movere instead collects CPU and memory consumption from the working set of each device (Windows and Linux), which shows actual resource consumption rather than figures inferred from random sampling. This independence from the hypervisor not only allows for accurate data collection, but also enables Movere to collect Actual Resource Consumption (ARC) data from anything. It doesn’t matter if the customer oversubscribes: Movere can capture how hard each device is working, and exactly what it’s working on.
- Host-level licensing: Movere can not only calculate licensing requirements, it can also alert us when a customer is licensing SQL at the host level. When a customer chooses host-level licensing, we often see significant overprovisioning because there is no incremental licensing cost: most, if not all, SQL installations on these hosts run Enterprise edition, so regardless of how many cores you assign, there is no licensing impact. Customers who adopted a host-level licensing strategy on-prem struggle to convert it to a cloud-based strategy because they lack the data. If all you have is the CPU usage percentage from the host, CPU optimization is basically a guess. For most customers this means picking a size based on the overprovisioned resources currently assigned to on-prem devices and moving forward with the default number of cores in that instance. This dramatically increases SQL licensing requirements, and it also requires you to know how many cores you need before building, because you can’t change the core count post-deployment. This is where Movere really shines. Movere collects CPU usage down to the thread level, calculates a CPU benchmark based on two and three standard deviations from the mean (95% and 99% confidence), and then multiplies that result by the currently assigned cores and the single-threaded compute of the chip the device is running on. We use this information to recommend a size that will support the actual CPU needs. The sizing is then adjusted to support the required memory, throughput, and IOPS, which inevitably results in far more compute than needed. From there, we can recommend exactly how many cores can be removed from each instance without risking performance, and in almost every situation we can actually reduce the number of SQL core licenses the customer needs. All of this is before we even start looking for consolidation or retirement opportunities!
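The CPU sizing arithmetic described above can be sketched roughly as follows. This is a minimal illustration, not Movere's actual implementation: the function names, sample data, and single-thread benchmark units are all hypothetical assumptions.

```python
import math
import statistics

def estimated_compute_demand(cpu_samples, assigned_cores, single_thread_score, sigmas=2):
    """Estimate a device's actual compute demand.

    cpu_samples: observed CPU utilization fractions (0.0-1.0) over time.
    assigned_cores: cores currently assigned to the on-prem device.
    single_thread_score: single-threaded benchmark score of the current
        chip (hypothetical units; the real benchmark source is unspecified).
    sigmas: 2 for ~95% confidence, 3 for ~99% confidence.
    """
    mean = statistics.fmean(cpu_samples)
    stdev = statistics.stdev(cpu_samples)
    # Benchmark utilization at the chosen confidence level, capped at 100%.
    benchmark_util = min(mean + sigmas * stdev, 1.0)
    # Demand = benchmark utilization x assigned cores x per-core performance.
    return benchmark_util * assigned_cores * single_thread_score

def cores_needed(demand, target_single_thread_score):
    """Cores required on a target chip to cover the estimated demand."""
    return math.ceil(demand / target_single_thread_score)

# Example: a 16-core VM that rarely exceeds ~33% CPU may need only a few
# cores on a faster target chip (illustrative numbers only).
demand = estimated_compute_demand(
    [0.20, 0.25, 0.30, 0.22, 0.28], assigned_cores=16,
    single_thread_score=1500, sigmas=2)
print(cores_needed(demand, target_single_thread_score=2000))
```

Sizing down this way is what lets the core count, and therefore the SQL core licensing, shrink: the demand estimate is anchored to observed utilization rather than to the overprovisioned cores currently assigned.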