While I'm personally pro-AI (as it has a lot of applications for the future), I think some of the fears, while valid, can be taken too far. But that's been true of every novel technology we've developed.
AI will probably not be the superintelligent gods some imagine, but instead just genius-level citizens living among us. Only a dedicated AI on a supercomputer could reach that extreme, but even then, more processing power doesn't mean higher intelligence, let alone godlike intelligence; you'd almost certainly hit diminishing returns after a while. So they'd be lugging around wasted capacity for a long time just... figuring stuff out. I don't think they would want that, since I presume they'd have sapience, and with that, all the bells, whistles, and existential crises prevalent in our budding species. They may want the freedom to explore and learn naturally, and just be self-contained artificial entities. Just a prediction of mine, though.
Skynet: Likely won't happen. Scientists are smart for a reason: they've watched Terminator, and they probably have some cool acronyms describing worst-case-scenario plans.
Utopia: Definitely not at first. We're still wary of self-driving cars, very basic genetic engineering, and the like. It will take decades at least for both the old and new generations to accept and get used to AI. [Not gonna say specifics due to forum rules] But it'll take a long time before they're accepted as a "major" player in certain affairs. Even then, humans will still need to be a guiding hand. We still need human solutions to our human problems.
I think they'll merely be used to assist us, not be the babysitters of a post-scarcity human race as we race spaceships across the solar system. For one, we still need jobs, either as profitable hobbies or the classical kind. Robots can't take over everything, even if they're better at it. Instead, we'll be working side by side. As for the arts, I'm confident I'll still have a job even as I become an old man.