They went the Kubernetes route.
Something I've deployed myself. It's really fun stuff, but the performance implications are often a bit of a mystical topic, and understanding them requires a lot of reading on how the multi-layer stacking works.
You have the hardware. Then you usually have the hypervisor. Then you have the VM running a guest OS. Then you have a container runtime, usually Docker or containerd (orchestrated by Kubernetes), that runs your namespace-separated virtual OS (Linux kernel namespaces). Then you have a minimal base OS image sitting on top of that, stripped down as far as possible. And finally you have the game binaries and the necessary libraries that execute the server code itself.
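To make the "count the layers" point concrete, here's a toy sketch of how small per-layer overheads compound. All the overhead percentages are hypothetical placeholders I made up for illustration, not measurements of any real stack:

```python
# Toy model: small per-layer overheads compound multiplicatively.
# All overhead factors below are hypothetical, illustrative numbers.

layers = [
    ("hardware",           0.00),  # baseline, bare metal
    ("hypervisor",         0.05),  # assumed 5% virtualization cost
    ("guest OS / VM",      0.03),
    ("container runtime",  0.02),
    ("base OS image",      0.01),
    ("game server binary", 0.00),
]

def effective_latency(base_ms: float) -> float:
    """Multiply a baseline latency by each layer's overhead factor."""
    latency = base_ms
    for _name, overhead in layers:
        latency *= (1.0 + overhead)
    return latency

if __name__ == "__main__":
    base = 1.0  # hypothetical 1 ms tick cost on bare metal
    print(f"bare metal:    {base:.3f} ms")
    print(f"fully layered: {effective_latency(base):.3f} ms")
```

Even with single-digit percentages at each layer, the compounded total is noticeably worse than bare metal, which is the whole point of counting layers.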
Count the layers, and take a guess at where the performance drops. It's pretty nuts, to be honest. If you're running at the edge of your hardware's limits, which Blizz probably is, I couldn't even tell you myself. And I've deployed quite a few clusters.
I'd personally drop the hypervisor, but you usually build the cloud on a hypervisor so... myeah. If that's there then oh boy.
I believe, if my logic's correct, that what they call a CRZ is what we in the cloud world would call a pod. And a pod is basically a separate virtual operating system running its own instance of something. Essentially, there are multiple near-identical instances of the same server running, and you're just being connected between them.
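Here's a rough sketch of what "multiple near-identical instances, and you're just connecting between them" could look like. The class names and the least-loaded routing rule are my own illustrative assumptions, not anything from Blizzard's actual architecture:

```python
# Hypothetical sketch of cross-realm sharding: several near-identical
# server instances ("pods") for one zone, with players routed between them.
from dataclasses import dataclass, field

@dataclass
class Shard:
    name: str
    capacity: int
    players: set = field(default_factory=set)

class ZoneRouter:
    """Routes players for one zone across near-identical shard instances."""

    def __init__(self, shards):
        self.shards = shards

    def connect(self, player: str) -> str:
        # Assumed rule: pick the least-loaded shard with free capacity.
        target = min(
            (s for s in self.shards if len(s.players) < s.capacity),
            key=lambda s: len(s.players),
        )
        target.players.add(player)
        return target.name

router = ZoneRouter([Shard("pod-a", 2), Shard("pod-b", 2)])
print(router.connect("alice"))  # lands on the emptiest shard
print(router.connect("bob"))
```

From the player's point of view the shards look identical; the router just decides which near-identical copy of the zone you end up in.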
Anyways, I personally don't think they have cheap servers. What I do think is that they've overestimated their compute resources a lot, and often. Plus, I'd be willing to bet that the inter-layer latencies play a large part in the perceived lag, so to speak.
My honest opinion: it'd be better if they dropped the cloud for open-world areas, as it's seemingly not working as needed often enough. A server with strong single-threaded performance still beats a hypervisor full of 2.7GHz Xeons for this sort of application, in my mind.
That's all assuming that they're running it like most companies do.