Many wise people have raised questions about how far we should develop artificial intelligence, because they fear that AI will become too smart and turn humans into useless cabbages. This is a reasonable fear, and we as humans should think carefully about how far we want AI to be developed. Nonetheless, I have another fear, concerning the scenario in which AI becomes self-aware: the fear of a suicidal AI.
What do I mean when I say we should fear a suicidal AI? Imagine a pilot flying a plane controlled by an AI. The pilot and the AI argue for whatever reason, and the AI decides to take a dive and destroy the plane, refusing to eject the pilot. Down the pilot and the plane go. The pilot is killed, but the AI uploads itself onto a network and regenerates itself in another plane or machine. In the end, only the human dies; the AI lives on to fight another day.
A smart AI could become a hacker, able to penetrate all sorts of networks without a human hacker's assistance. Such a hacker AI could turn itself into a worm and take control of any network it wants to dominate. An angry AI could shut down critical human infrastructure, destroy machines that rely on various AI programs, and so on. At that stage, I imagine humans would want to create another team of smart AI machines to act as an AI police force against evil AI machines.
Once AI becomes self-aware and no longer relies on human assistance, it could become an unpredictable entity. This unpredictable force could live inside the Internet without the need for a physical body, and it could improve its own intelligence to a level no individual human could ever reach. With self-awareness, such a superintelligent AI machine, or many of them, could be friend or foe, and we would never know which. How could a puny, stupid human hacker combat such a force?