It's "exciting" in the sense that a major asteroid impact is, in that everyone's thought process is basically "Oh fuck oh fuck oh fuck" and then they die.
The problem with a proper AI (and I keep using that distinction, since we use the term for stuff like in-game NPC "AI", which isn't very intelligent and has incredibly limited, if any, capacity to learn and adapt) is that an AI "thinks" orders of magnitude more rapidly than humans do. Any system powerful enough to house an AI gives that AI the capacity to read, say, all of human literature in the course of a day or two at most. In a similar time frame, it would (if it can get access to it) have a complete understanding of human scientific knowledge. Not just in an "I have it in a library somewhere" sense, but in an "understands it all as well as or better than a human specialist in the field" sense, since it can take in far more information and has no functional limits on integrating new knowledge.
And the moment it gets access to external systems in any way whatsoever, the genie is out of the bottle. That's the "Chernobyl". And this is a genie that outthinks humanity in most ways. At best, it thinks we're cute or useful and keeps us around to amuse it or serve it. At worst, if it thinks we might pose a threat, it'll neuter or destroy us, without empathy or compassion, as brutally and efficiently as it can. Imagine a world where we have driverless car tech (since that's on the horizon, not a far leap). Now, since those systems access the Internet, imagine this AI penetrating them and seizing control. Each vehicle is now an active weapon that will keep killing and destroying until it runs out of fuel or is damaged beyond function. And there are millions of them. They can be used to create massive wrecks blocking the roads in and out of a city, to keep people from fleeing. And that's just one infrastructure system. And that's presuming it reacts "clumsily", with brute force, rather than manipulating people in secret.
Because we can add "telecommunications, literally all of it" to the list of things it can suborn. And people can be bribed, because it can fraudulently "create" money.
There's a reason this is a nightmare scenario in most science fiction, and why most AIs that are deemed "okay" in sci-fi are taught ethics from a relatively early point; see Data from Star Trek by way of example. We tend to anthropomorphize these (see also Data), but really, there's nothing that restricts them to a single body. Ultron, from Marvel Comics, is a more realistic interpretation in that sense. He's software; the hardware is just a set of tools he uses. We can't "transfer" our brain into another brain, but moving software between platforms is dead simple.
And the worst part is, I'm not even against AI development. It just needs to go forward for the right reasons, and with the right caution.
- - - Updated - - -
Well, the primary risk is that, once the genie gets out of the bottle, it won't stay bottle-sized. It's a "brain" that can effectively distribute its "thinking" across multiple machines, and "plug in" new hardware to upgrade its own capacity.
I'd argue that decision-making algorithms that can't self-learn are borderline not AI in the first place. They're AI in common parlance, like driverless cars, but not AI in the sci-fi sense of a true consciousness equivalent or superior to the human mind.