Except, you know, in the case of a completely new database failure mode: then the developers in charge of that part would have to get involved to figure out what happened and how to fix it.
We hit a (relatively) small issue when deploying a live content patch earlier this year on the game I work on (just after we all went into lockdown, actually): a small, dumb bug caused small, dumb server-side player data corruption for some players (which nonetheless made the game client bug out rather spectacularly), and fixing it took 6 of us on the dev team (3 gameplay programmers, 2 online programmers, and the technical director).
The people who manage the servers can only really fix deployment issues (hmm, we copied over the wrong binary, or the wrong configuration file, or some service isn't starting for whatever reason) and some common failure modes (mostly the ones whose fix involves restarting things, redeploying things, or restoring a backup). At some point, when things go haywire in really weird ways, you need the devs to come debug it, look at how the data was corrupted, and figure out how to fix it.
And yeah, it's understandable that they also don't want to deploy a completely new build (which probably involves a lot of database conversions, if only to convert the old character customization data to the new format) while one server has its database in tatters. They probably can't deploy on every server but that one, and they probably can't deploy the new build over an already broken database without fucking it up even more (and without potentially losing even more data).
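To make that last point concrete, here's a minimal sketch of the kind of deploy-time migration I mean. Everything here is hypothetical (the table, column names, and the "old skin id becomes a string field" conversion are made up for illustration, and real game backends don't use SQLite), but the principle is the same: run an integrity check first and refuse to convert a database that's already broken, because migrating corrupted rows just bakes the corruption into the new format.

```python
import sqlite3

# Hypothetical old schema: one customization row per player.
OLD_SCHEMA = """
CREATE TABLE customization (
    player_id INTEGER PRIMARY KEY,
    skin_id   INTEGER NOT NULL
)
"""

def integrity_ok(conn):
    # SQLite's built-in low-level check; a real backend would also run
    # app-level validation (e.g. no orphaned or half-written player rows).
    row = conn.execute("PRAGMA integrity_check").fetchone()
    return row[0] == "ok"

def migrate(conn):
    """Convert old skin ids to a new (made-up) text-based outfit format.

    Refuses to run over a database that fails the integrity check,
    rather than converting already-corrupted data into the new format.
    """
    if not integrity_ok(conn):
        raise RuntimeError("database failed integrity check; refusing to migrate")
    conn.execute("ALTER TABLE customization ADD COLUMN outfit TEXT")
    conn.execute("UPDATE customization SET outfit = 'skin:' || skin_id")
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute(OLD_SCHEMA)
conn.execute("INSERT INTO customization VALUES (1, 42)")
migrate(conn)
print(conn.execute("SELECT outfit FROM customization WHERE player_id = 1").fetchone()[0])
# prints "skin:42"
```

The awkward part in the situation above is exactly this guard: if one server's database can't pass the check, the migration (and therefore the whole build rollout, if builds and data formats are coupled) is blocked until the devs repair the data by hand.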