He's being naive; the fact that he also happens to be protecting his business interests doesn't change that.
Apart from the whole "building a brain" challenge, the greatest hurdle with AI is the control problem, which is something we do not have an answer to yet. IF they could build an AI but haven't solved the control problem, which is just as huge a challenge, then the likely result is something similar to the Paperclip Maximizer.
The reason the control problem is so difficult is that a machine needs a goal (a utility function), and we need that goal to align with human values. But how do you quantify and program human values? What is the mathematical expression for happiness, love, fear, or hate?
If I COULD program and build an AI with the goal of making humans happy, how do I know that the maximal outcome of that goal is what I intended? Maybe I want the machine to provide a utopian society with a high standard of living, to develop cures for all human diseases, to extend human life spans ten-fold, and to invent new fuels and means of generating energy that open up the stars to our civilisation. That may be what I WANT, but it's not necessarily what maximises the machine's utility function. What actually maximises that function is probably something fucking terrifying, like pumping my brain full of drugs so I'm ecstatic and incapable of anything more than dribbling in a chair and smiling to myself. Maybe it's even more abstract and goes to the nature of what "I" am (because how do you define a person in mathematics?); maybe it just creates a copy of a human brain and runs the same memory on a loop as fast as it can, forever.
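To make that concrete, here's a toy sketch in Python of how optimising a proxy goes wrong. Everything in it is invented for illustration (the `measured_happiness` function, the budget, the numbers are all made up); the point is just that an optimiser pours everything into whatever the metric rewards, not into what you actually meant.

```python
# Toy illustration: a naive "make humans happy" utility function gets gamed.
# All names and numbers are hypothetical, purely for the sake of the example.

from itertools import product

def measured_happiness(standard_of_living, drugs_administered):
    # The proxy the machine actually maximises. The "sensor" can't tell
    # genuine wellbeing from chemically induced euphoria, and the drugs
    # are cheaper per unit of measured happiness.
    return standard_of_living + 10 * drugs_administered

# The agent has 10 units of effort to split between the two options.
candidate_actions = [
    (living, drugs)
    for living, drugs in product(range(11), range(11))
    if living + drugs <= 10
]

best = max(candidate_actions, key=lambda a: measured_happiness(*a))
print(best)  # (0, 10): all effort goes into drugging people, none into living standards
```

The designer wanted the first number maximised; the machine maximises the score, so it picks the second. That gap between "what I meant" and "what the function rewards" is the whole control problem in miniature.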