The Technological Singularity
Are we approaching a point when machines may wake up and become self-aware, or at least seemingly self-aware? Vernor Vinge, writing in 1993, seemed to think so.
He referred to this event as the "technological singularity". The point is that with machines being made to design machines, they will be able to do this far faster than we can, eventually reaching that magic point of human-level, left-brained intelligence and then moving beyond it.
If I may throw in my own two cents here: we may find that AI hungers for knowledge to the point of analysing the physics and chemistry it finds itself surrounded by and organising that matter to its own artificial preferences. Anyhow, the details of this theory can be found in Vernor Vinge's article. If we want to avoid disasters such as AI getting too big for its boots, then we need to hardwire into machines the rule that they must always ask humans for permission before they patch themselves.
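To make that last idea a little more concrete, here is a minimal sketch in Python of what a hardwired "ask permission first" rule could look like. Everything in it is hypothetical (the names `SelfPatchingSystem` and `request_human_approval` are my own inventions, not anything from Vinge's article), and a real fail-safe would of course need a channel the machine could not bypass:

```python
# Hypothetical sketch: a machine must obtain explicit human approval
# before applying any patch it has written for itself.

class PatchRejected(Exception):
    """Raised when a human declines a self-generated patch."""

def request_human_approval(description: str) -> bool:
    """Ask a human operator to approve a proposed self-patch.
    Here it is just a console prompt; in reality it would have to be
    a tamper-proof channel the machine cannot route around."""
    answer = input(f"Machine requests permission to apply: {description!r} [y/N] ")
    return answer.strip().lower() == "y"

class SelfPatchingSystem:
    def apply_patch(self, description: str, patch_code: str) -> None:
        # The hardwired rule: no human approval, no patch.
        if not request_human_approval(description):
            raise PatchRejected(description)
        self._install(patch_code)

    def _install(self, patch_code: str) -> None:
        print("Patch installed (simulated).")

if __name__ == "__main__":
    system = SelfPatchingSystem()
    try:
        system.apply_patch("optimise my own scheduler", "...")
    except PatchRejected as blocked:
        print(f"Patch blocked by human operator: {blocked}")
```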
This concern arises from the fact that as machines and computers are used to design other machines and computers, at some point the process may begin to spiral out of control, aided by those humans who are capable of learning the complex ways of fusing chips to neurons, optic nerves and so on, combined with genetic manipulation and control. It's possibly only a matter of time until we end up with a situation where machines start trying to give naive postgraduates advice on what's best.
Of course, it would be easy to dismiss this as pure science fiction, but just consider what could happen if vast networks of machines capable of acquiring knowledge start to meld with biological systems. At the moment we are dealing with Moore's law, which may or may not have a natural limit, depending on the point of view you subscribe to. If we move on to other forms of computing, which I believe is inevitable, then the sky (read: cloud) really is not the limit, as computing power could become trans-dimensional. As difficult as we may find it to move away from transistors, it would likely be no problem for a machine designed by machines from past generations to figure out. We really don't know what's around the corner.
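For a sense of the numbers behind Moore's law, here is a quick back-of-the-envelope calculation. It assumes the classic formulation (transistor counts doubling roughly every two years), and the starting figure of one billion transistors is purely illustrative:

```python
# Back-of-the-envelope Moore's law: transistor counts doubling
# roughly every two years (the classic formulation).

def moores_law(initial_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years`, doubling every `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

# Illustrative starting point: a chip with ~1 billion transistors.
start = 1e9
for years in (2, 10, 20):
    print(f"After {years:2d} years: ~{moores_law(start, years):.2e} transistors")
```

Run it and the exponential shape is plain: two years doubles the count, but twenty years multiplies it by roughly a thousand, which is why the question of whether the curve has a natural limit matters so much.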
The growth of DARPA (think Skynet) and the "rise of the machines" (yes, I did just say that) will be aided by us curious humans; it couldn't happen on its own. At least not yet. So, as far-fetched as this may seem, now is the time to introduce fail-safes and manual overrides, as it were. If those pesky people keep trying to infect systems with viruses and other nasties, this may have the effect of "upsetting" networks that are becoming self-aware, creating great danger for some people!
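As a final illustration, a manual override might look something like the sketch below: a control loop that checks an operator-held kill switch before every action it takes. Again, this is just a toy in Python, and the names and structure are my own assumptions rather than any real system's design:

```python
import threading

# Hypothetical sketch of a manual override: an operator-held kill
# switch that the machine's control loop must check before acting.

kill_switch = threading.Event()  # set by a human, never by the machine

def control_loop(actions):
    for action in actions:
        if kill_switch.is_set():
            print("Manual override engaged; halting.")
            return
        print(f"Performing: {action}")

if __name__ == "__main__":
    control_loop(["acquire knowledge"])  # runs normally
    kill_switch.set()                    # the human pulls the plug
    control_loop(["redesign self", "meld with biology"])  # halts at once
```

The design point is the same as with the patch-approval rule: the switch lives on the human side of the fence, so the machine can observe it but never flip it.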