We were talking in class today about the threat that artificial intelligence might pose to the world, and I'm pretty firmly of the opinion that it absolutely doesn't matter.
This is mostly because of what I know about nuclear weapons escalation. The Russians and the Americans kept building bigger and more destructive bombs, and that race effectively peaked when the Soviets detonated Tsar Bomba. Its mushroom cloud rose to roughly seven times the height of Mount Everest, the fireball was about five kilometers in diameter, and windows were broken as far as 1,000 km away. Even more than that, the device was tested at only half the yield it was capable of, because the fallout from a full 100 Mt shot would have been too much, and no bomber on the planet could have dropped it and gotten far enough away from the blast. After that, the governments shifted their focus from raw yield to more accurate, strategically useful weapons. Governments have had the power to destroy the world for a long time now, and they haven't done it.
The other threat from nukes is that some rogue nation or group will get hold of one and detonate it in the middle of some city. A dedicated group of individuals can do some pretty awful things, and it becomes easier for them to do it with every passing year. You can buy a DNA synthesizer online for about $5,000 and try to create the next smallpox. You can hijack a plane and fly it into some buildings for much less. And when or if nanotechnology comes to fruition, it will pose an even bigger security problem, because it will be so much harder to stop.
The cost barrier for AI means that the first ones will come from academia, corporations, or a government. To say that the people smart enough to build something that can think would be careless enough to give it unfettered access to the outside world is illogical. On top of that, an AI that wants to kill everyone would almost certainly have to be created by mistake. So the odds of it ever being a threat are minimal, especially since once the first AIs are around, they can defend against new ones.