An Engineering Update on the Dragonflight Launch
Originally Posted by Blizzard (Blue Tracker / Official Forums)
With Dragonflight’s recent launch behind us, we want to take some time to talk with you more about what occurred these past few days from an engineering viewpoint. We hope that this will provide a bit more insight into what it takes to make a global launch like this happen, what can go right, what hiccups can occur along the way, and how we manage them.

Internally, we call events like last Monday “content launch,” because launching an expansion is a process, not one day. Far from being a static game running the same way it did eighteen years ago—or even two years ago—World of Warcraft is constantly changing and growing, and our deployment processes change with it.

Expansions now consist of several smaller launches: the code first goes live running the old content, then pre-launch events and new systems turn on, and finally, on content launch day, new areas, quests, and dungeons. Each stage changes different things so we can find and fix problems. But in any large, complex system, the unexpected can still occur.

One change with this expansion was that the content launch was driven by a timed event: multiple changes to the game can be scheduled to all happen at a particular time. Manually making these changes carries the risk of human error, or of an internal or external tool outage; using a timed event helps to mitigate those risks.
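As a minimal sketch of the idea (the names here are hypothetical, not our actual tooling), a timed event can be modeled as a scheduled batch of changes that all fire together:

```cpp
// Minimal sketch of a timed event (hypothetical names, not Blizzard's tooling):
// a batch of prepared changes is applied together at one scheduled moment,
// instead of a person flipping each switch by hand.
#include <chrono>
#include <functional>
#include <iostream>
#include <thread>
#include <vector>

struct TimedEvent {
    std::chrono::system_clock::time_point fireAt;   // when the event should trigger
    std::vector<std::function<void()>> changes;     // all changes that go live together

    void run() {
        std::this_thread::sleep_until(fireAt);      // wait for the scheduled time
        for (auto& change : changes) change();      // apply everything in one pass
    }
};

int main() {
    TimedEvent contentLaunch;
    contentLaunch.fireAt = std::chrono::system_clock::now() + std::chrono::seconds(3);
    contentLaunch.changes = {
        [] { std::cout << "enable Dragon Isles maps\n"; },
        [] { std::cout << "unlock launch quests\n"; },
        [] { std::cout << "start boat departures\n"; },
    };
    contentLaunch.run();   // every change fires at the same moment
}
```

Because every change rides on the same trigger, nobody has to apply them by hand at 3:00 p.m., which is exactly the human-error and tool-outage risk the timed event is meant to remove.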

Another change in Dragonflight: greatly enhanced support for encrypting game data records. Encrypted records allow us to ship our client with the data the game needs to show cutscenes, play voice lines, or unlock quests, while keeping that data from being mined before players get to experience it in-game. We know the community loves WoW, and when you’re hungry to experience any morsel, it’s hard not to spoil yourself before the main course. Encrypted records allow us to take critical story beats and hide them from players until the right time to reveal them.
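As an illustration of the general idea, here is a minimal sketch of key-gated records, assuming the client ships ciphertext and the server hands out a per-record key only when the content should become visible. The names are hypothetical, and the XOR “cipher” is a toy stand-in for a real one.

```cpp
// Minimal sketch of key-gated records (hypothetical, with a toy XOR stand-in
// for a real cipher): the client ships ciphertext for every record, and a
// record becomes readable only after the server delivers its key.
#include <cstdint>
#include <cstdio>
#include <map>
#include <optional>
#include <string>
#include <vector>

struct EncryptedRecord {
    uint64_t id = 0;
    std::vector<uint8_t> ciphertext;   // shipped with the client, unreadable until keyed
};

class RecordStore {
    std::map<uint64_t, EncryptedRecord> records_;
    std::map<uint64_t, std::vector<uint8_t>> keys_;   // keys arrive from the server later

public:
    void addRecord(EncryptedRecord r) {
        uint64_t id = r.id;
        records_[id] = std::move(r);
    }
    void deliverKey(uint64_t id, std::vector<uint8_t> key) { keys_[id] = std::move(key); }

    // Returns plaintext only if this record's key has been delivered; otherwise hidden.
    std::optional<std::string> read(uint64_t id) const {
        auto r = records_.find(id);
        auto k = keys_.find(id);
        if (r == records_.end() || k == keys_.end()) return std::nullopt;
        std::string out;
        for (size_t i = 0; i < r->second.ciphertext.size(); ++i)
            out.push_back(static_cast<char>(
                r->second.ciphertext[i] ^ k->second[i % k->second.size()]));
        return out;
    }
};

int main() {
    const std::vector<uint8_t> key = {0x5A, 0xC3, 0x7E};
    const std::string secret = "epilogue cutscene unlocked";

    EncryptedRecord rec;
    rec.id = 42;
    for (size_t i = 0; i < secret.size(); ++i)
        rec.ciphertext.push_back(static_cast<uint8_t>(secret[i]) ^ key[i % key.size()]);

    RecordStore store;
    store.addRecord(rec);
    std::printf("before key: %s\n", store.read(42) ? "visible" : "hidden");  // hidden
    store.deliverKey(42, key);
    std::printf("after key:  %s\n", store.read(42)->c_str());                // plaintext
}
```

The point of the pattern is that the sensitive plaintext never ships to the client at all; until the key arrives, datamining the client files yields only ciphertext.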

We now know that the lag and instability we saw last week were caused by the way these two systems interacted. Together, they forced the simulation servers (which move your characters around the world and perform their spells and abilities) to recalculate which records should be hidden more than one hundred times a second, per simulation. Because a great deal of CPU power was spent on these calculations, the simulations bogged down, and requests from other services to those simulation servers backed up. Players experience this as lag and error messages like “World Server Down.”
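To make the arithmetic concrete, here is a toy model of a simulation tick budget; the per-tick costs are invented for illustration and are not measurements from the live service.

```cpp
// Toy model (invented numbers, not measurements): when per-tick work exceeds
// the tick budget, unprocessed work accumulates tick after tick, and players
// experience that growing backlog as lag and "World Server Down" errors.
#include <cstdio>

int main() {
    const double tickBudgetMs = 1000.0 / 100.0;  // ~100 simulation ticks per second
    const double normalWorkMs = 6.0;             // assumed usual per-tick cost
    const double recalcCostMs = 9.0;             // assumed cost of re-deriving hidden records

    double backlogMs = 0.0;
    for (int tick = 1; tick <= 10; ++tick) {
        const double workMs = normalWorkMs + recalcCostMs;  // recalculating every tick
        backlogMs += workMs - tickBudgetMs;                 // how far we fall behind
        std::printf("tick %2d: work %.1f ms vs budget %.1f ms, backlog %.1f ms\n",
                    tick, workMs, tickBudgetMs, backlogMs);
    }
}
```

In this toy model the server falls about 5 ms further behind on every 10 ms tick, so queues can only grow until the extra work is removed.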

As we discovered, records encrypted until a timed event unlocked them exposed a small logic error in the code: a misplaced line of code signaled to the server that it needed to recalculate which records to hide, even though nothing had changed.
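As a hypothetical illustration of that class of bug (this is not the actual server source), consider a dirty flag that should be set only when a timed event really unlocks something, but is instead set unconditionally:

```cpp
// Hypothetical sketch of that class of bug (not the actual server source):
// the dirty flag should be set only when a timed event truly unlocks records,
// but a misplaced line sets it unconditionally, so the expensive recalculation
// runs on every simulation tick.
#include <cstdio>

struct HiddenRecordCache {
    bool dirty = true;       // "the set of hidden records may have changed"
    int recalcCount = 0;

    void onTimedEventCheck(bool anythingUnlocked) {
        dirty = true;        // BUG: misplaced; should only run inside the if below
        if (anythingUnlocked) {
            // Intended location of `dirty = true;`
        }
    }

    void tick() {
        if (dirty) {
            recalculateHiddenRecords();   // expensive: walks every encrypted record
            dirty = false;
        }
    }

    void recalculateHiddenRecords() { ++recalcCount; /* heavy CPU work in reality */ }
};

int main() {
    HiddenRecordCache cache;
    // Simulate 100 ticks during which nothing was actually unlocked.
    for (int i = 0; i < 100; ++i) {
        cache.onTimedEventCheck(/*anythingUnlocked=*/false);
        cache.tick();
    }
    std::printf("recalculations: %d (with the fix, this would be 1)\n", cache.recalcCount);
}
```

The fix is a one-line move: set the flag only inside the branch that actually unlocked records, so the recalculation runs only when visibility really changes.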

Here’s some insight into how that investigation unfolded. First, the clock strikes 3:00 p.m. PST. We know from testing that the Horde boat arrives first, and the Alliance boat arrives next. Many of us are logged in to the game on characters sitting on the docks in both locations in one computer window, watching logs, graphs, or dashboards in other windows. We’re also on a conference call with colleagues from support teams all over Blizzard.

Before launch, we’ve created contingency plans for situations we’re worried about as a result of our testing. For example, for this launch, our designers created portals that players could use to get to the Dragon Isles in case the boats failed to work.

At 3:02 p.m., the Horde boat arrives on schedule. Hooray! Players pile on, including some Blizzard employees. Other employees wait (they want to be test cases in case we must turn on portals). The players on the boats sail off, and while some do arrive on the Dragon Isles, many more are disconnected or get stuck.

Immediately we start searching logs and dashboards. There are some players on the Dragon Isles map, but not many. Colleagues having issues report their character names and realms as specific examples. Others start reporting spikes in CPU load and on the NFS (Network File System) storage that our servers use. Still others are watching in-game, reporting what they see.

Now that we’ve seen the Horde boats, we start watching for the Alliance boats to arrive. Most of them don’t, and most of the Horde boats do not return.

A picture emerges: the boats are stuck, and Dragon Isles servers are taking much longer to spin up than expected. Here’s where we really dig in and start to problem-solve.

Boats have been a problem in the past, so we turn on portals while we continue investigating. Our NFS is clearly overloaded. There’s a large network queue on the service responsible for coordinating the simulation servers, making it think simulations aren’t starting, so it launches more and starts to overwhelm our hardware. Soon we discover that adding the portals has made the overload worse, because players can click the portals as many times as they want, so we turn the portals off.
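A toy model of that coordinator feedback loop follows; the readiness rule, timeout, and startup time are assumptions chosen to illustrate the failure mode, not values from the live service.

```cpp
// Toy model (hypothetical numbers and rules, not the real coordinator): the
// coordinator wants a fixed number of ready simulations and treats anything
// that hasn't reported ready within a short timeout as failed. When startup is
// merely slow, it keeps launching replacements, and the half-started processes
// pile up and add even more load.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    using namespace std::chrono_literals;
    const int  desired          = 4;      // simulations the coordinator wants ready
    const auto readinessTimeout = 30s;    // coordinator's assumed worst-case startup
    const auto realStartupTime  = 120s;   // actual startup time under heavy load

    struct Sim { std::chrono::seconds launchedAt; };
    std::vector<Sim> launched;

    for (auto now = 0s; now <= 150s; now += 10s) {   // one scheduling pass every 10s
        int ready = 0, pending = 0;
        for (const auto& s : launched) {
            const auto age = now - s.launchedAt;
            if (age >= realStartupTime)      ++ready;    // finally up
            else if (age < readinessTimeout) ++pending;  // still inside the grace period
            // else: presumed dead, though it is really still starting (and still
            // hammering storage while it does).
        }
        // Naive rule: top back up to the desired count, ignoring "presumed dead" sims.
        while (ready + pending < desired) {
            launched.push_back({now});
            ++pending;
        }
        std::printf("t=%3llds  processes launched=%zu  ready=%d\n",
                    static_cast<long long>(now.count()), launched.size(), ready);
    }
}
```

Run to completion, the coordinator in this sketch launches several times more half-started processes than it wanted, each one still loading data while it starts, which matches the kind of overload described above.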

As the problems persist, we work on tackling the increased load to get as many players in to play as possible, but the service is not behaving the way it did in pre-launch tests. We keep working the problem and rule out causes that those tests have already shown aren’t responsible.

Despite the late hour, many continue to work while others head home to rest so they can return early the next day with a fresh start and relieve those working overnight.

By Tuesday morning, we have a better understanding of things. We know we’re sending more messages to clients about quests than usual, although later discoveries will reveal this isn’t causing problems. A new file storage API we’re using is hitting our file storage harder than usual. Some new code added for quest givers to beckon players seems slower than it should be. The service is taking a very long time to send clients all the data changes made in hotfixes. Reports are coming in that players who have made it to the Dragon Isles have started experiencing extreme lag.

Mid-Tuesday morning, a coincidence happens: digging deep into the new beckon code, we find hooks for the new encryption system. We start looking at the question from the other side: could the encryption system being slow explain these and other issues we’re seeing? As it turns out, yes it can. A slow encryption system explains the hotfix problem, the file storage problem, and the lag players are experiencing. With the source pinpointed, the author of the relevant part of the system was able to find the error and make the needed correction.

Pushing a fix to code used across so many services isn’t like flipping a switch: new binaries must be pushed out and turned on, and we must slowly move players from the old simulations to new ones for the correction to be picked up. In fact, at one point we try to move players too quickly and cause another part of the service to suffer. Some of the affected binaries cannot be corrected without a service restart, which we delay until the fewest players are online so as not to disrupt those already in the game. By Wednesday, the fix was completely out, and service stability improved dramatically.
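The migration step described above might look something like this minimal sketch of a rate-limited rolling migration; the batch size and the player list are made up for illustration.

```cpp
// Hypothetical sketch of a rate-limited rolling migration: players are drained
// from simulations running the old binaries to patched ones in small batches,
// so the fix gets picked up without flooding the rest of the service with
// transfers. The batch size and player list are made up for illustration.
#include <cstdio>
#include <deque>
#include <string>

int main() {
    const int maxTransfersPerPass = 200;   // throttle, tuned by watching service health

    std::deque<std::string> onOldBinaries;
    for (int i = 0; i < 1000; ++i) onOldBinaries.push_back("player" + std::to_string(i));

    int pass = 0;
    while (!onOldBinaries.empty()) {
        int moved = 0;
        while (!onOldBinaries.empty() && moved < maxTransfersPerPass) {
            onOldBinaries.pop_front();     // transfer one player to a patched simulation
            ++moved;
        }
        std::printf("pass %d: moved %d players, %zu still on old binaries\n",
                    ++pass, moved, onOldBinaries.size());
        // In production, each pass would pause here and check downstream health
        // before continuing; per the post, moving too quickly hurt another part
        // of the service.
    }
}
```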

While it took some effort to identify the issue and get it fixed, our team was incredibly diligent in investigating it and getting it corrected as quickly as possible. Good software engineering isn’t about never making mistakes; it’s about minimizing the chances of making them, finding them quickly when they happen, having the tools to get fixes in right away…

…and having an amazing team to come together to make it all happen.



—The World of Warcraft Engineering Team
This article was originally published in forum thread: An Engineering Update on the Dragonflight Launch, started by Lumy.
103 Comments
  1. Neotart's Avatar
    TLDR:

    /10chars
  1. T3ramos's Avatar
    Kudos for being so open. And that's why I never play on patchday
  1. Aizo's Avatar
    pats for the team making it all happen, you the mvps ^U^
  1. Koollan's Avatar
    Inb4 the 'there's no excuse for this disgusting launch' crowd files in.

    In all seriousness, I can understand the pitfalls of new engineering challenges when it comes to encryptions in tandem with entirely new systems being implemented across the board. I personally was surprised that there weren't more issues in launch week and post launch week. The most issue I've had is the Azure Span lag, which can also be explained away by Tuskarr Soup and Cobalt Assembly grinds. The lag could be better and I hope it only improves (the recent hotfixing seems to have already done something), but the transparency is a fresh breath.
  1. Jotunhammer's Avatar
    it is awesome to read the engineers views on this as it gives a glimpse on how hard it is to run such large server, even the slightest error can have extreme effects
  1. nacixems's Avatar
    thanks for the update, nice to see how this all ties in. great work, never easy doing a content release.
  1. Aleksej89's Avatar
    "Now that we’ve seen the Horde boats, we start watching for the Alliance boats to arrive"

    Horde favoritism confirmed
  1. bloodykiller86's Avatar
    this is another reason i cant wait for MSFT to finish this acquisition.......blizzard gets full access to using Azure servers at a severely reduced cost im sure lol i bet that will help tons down the line
  1. bloodwulf's Avatar
    Man this sounded so much more ominous than it should of:


    "Now that we’ve seen the Horde boats, we start watching for the Alliance boats to arrive. Most of them don’t, and most of the Horde boats do not return."

    Pour one out for those characters forever lost at sea.
  1. Ram2191's Avatar
    A very interesting read, Thankyou! I think having an insight to this every expansion launch would be fun for us to read and know why and why not something didnt go quite as planned.
  1. Biomega's Avatar
    Quote Originally Posted by bloodykiller86 View Post
    this is another reason i cant wait for MSFT to finish this acquisition.......blizzard gets full access to using Azure servers at a severely reduced cost im sure lol i bet that will help tons down the line
    I got bad news for you lol
  1. christarp's Avatar
    Love engineering post-mortems, I hope blizzard continues with them as they are super insightful and interesting!
  1. OrangeJuice's Avatar
    that was a really roundabout way of saying they don't test shit before shipping it, but why was it written like a 4chan greentext?
  1. ablib's Avatar
    Quote Originally Posted by OrangeJuice View Post
    that was a really roundabout way of saying they don't test shit before shipping it, but why was it written like a 4chan greentext?
    You're either not technical, or didn't read it.
  1. Hablion's Avatar
    Quote Originally Posted by bloodykiller86 View Post
    this is another reason i cant wait for MSFT to finish this acquisition.......blizzard gets full access to using Azure servers at a severely reduced cost im sure lol i bet that will help tons down the line
    Well the FTC has decided to Sue to Block the Merger so it is quite likely that they might not get ABK after all.
  1. ablib's Avatar
    Quote Originally Posted by bloodykiller86 View Post
    this is another reason i cant wait for MSFT to finish this acquisition.......blizzard gets full access to using Azure servers at a severely reduced cost im sure lol i bet that will help tons down the line
    The EU and now the US are trying to block it. Also, whatever you said, has nothing to do with the problem.

    https://www.cnn.com/2022/12/08/tech/...ion/index.html
  1. bloodwulf's Avatar
    Quote Originally Posted by OrangeJuice View Post
    that was a really roundabout way of saying they don't test shit before shipping it, but why was it written like a 4chan greentext?
    Right here guys. We got our first armchair SysOps admin. Obviously OrangeJuice is a subject matter expert when it comes to content launches facing millions of simultaneous clients connecting to many overlapping systems, please treat them with the respect they deserve.
  1. Relapses's Avatar
    I love how this is basically a giant "you don't know jackshit" to all the quadruple PhD WoW serverologists that were posting about how certain they were that the launch issues were caused by Blizzard being too cheap to upgrade their hardware.
  1. dwarven's Avatar
    Quote Originally Posted by bloodykiller86 View Post
    this is another reason i cant wait for MSFT to finish this acquisition.......blizzard gets full access to using Azure servers at a severely reduced cost im sure lol i bet that will help tons down the line
    That would not have helped anything in this case. As a developer and cloud user myself, bad code is bad code no matter where it runs. Look at New World, which runs on AWS.
  1. Rathwirt's Avatar
    So basically working around MMO-C and Wowhead datamining caused launched issues.
