What If Futuristic Scenario: Intelligent Malicious Program Reprograms Itself To Evade Counter-attacks


How a botnet works: 1. A botnet operator sends out viruses or worms whose payload is a malicious application, the bot, infecting ordinary users' computers. 2. The bot on the infected PC logs into a particular command and control (C&C) server (often an IRC server, but in some cases a web server). 3. A spammer purchases access to the botnet from the operator. 4. The spammer sends instructions via the IRC server to the infected PCs, causing them to send out spam messages to mail servers. (Photo credit: Wikipedia)
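The four steps in the caption can be sketched as a tiny, harmless simulation. All class and method names here are invented for illustration; nothing below touches a network or implements any real malware behavior.

```python
# A toy simulation of the botnet workflow described above.

class CommandAndControlServer:
    """Stands in for the IRC/web server the bots log into (step 2)."""
    def __init__(self):
        self.bots = []

    def register(self, bot):
        self.bots.append(bot)

    def broadcast(self, command):
        # Step 4: relay the spammer's instruction to every infected PC.
        return [bot.execute(command) for bot in self.bots]


class Bot:
    """The malicious payload dropped on an infected PC (step 1)."""
    def __init__(self, host):
        self.host = host

    def execute(self, command):
        # Each bot acts on the command it received from the C&C server.
        return f"{self.host}: {command}"


# Steps 1-2: the operator's worm "infects" machines; each bot checks in.
server = CommandAndControlServer()
for host in ("pc-01", "pc-02", "pc-03"):
    server.register(Bot(host))

# Steps 3-4: the spammer, having bought access, issues one instruction
# and every infected PC carries it out.
messages = server.broadcast("send spam to mail server")
print(len(messages))  # one message per infected PC
```

The key point the sketch captures is the leverage: one instruction through the C&C server fans out to every infected machine, which is also why sinkholing that single server (as in the takedown article below) is so effective.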

When I was taking a basic computer course at a community college (I forget which one), the professor declared that there would be no way for the good guys to counterattack the bad guys when it comes to hacking.  After reading this cool article on how researchers successfully poisoned a botnet, Down the Sinkhole: Inside the Kelihos.B takedown, I think the professor was a little off.  It seems that if the good guys understand how the bad guys distribute their attacks, then, at least in the case described in the article, they can turn the hackers' own tools to their advantage, such as by poisoning a botnet, consequently rendering the bad guys' tools inefficient or simply not worth the time.  Anyhow, halfway through the article, an unreal question formed in my mind.

The question was: what if one day artificial intelligence becomes so capable that it can reprogram itself, making it immune to even the most sophisticated counterattacks?  How might this question fit into the topic of hacking?  Well, imagine the hackers create an intelligent botnet that is fully automated and requires zero attention or intervention from its creators.  This botnet (or virus, or worm, or whatever the next term for it might be) would be able to figure out that it needs to evolve when it senses it has been invaded, and would use its artificial intelligence to reprogram itself into something else while keeping its main objectives intact, consequently nullifying the good guys' counterattack strategies.
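The "reprogram itself but keep its objectives intact" idea is, at its simplest, what polymorphic code already gestures at: change the representation a defender fingerprints while leaving the behavior alone. Here is a purely conceptual toy sketch of that idea; the class name and mutation scheme are invented for this post, and real polymorphic malware is vastly more involved than appending random bytes.

```python
import hashlib
import random

class PolymorphicPayload:
    """Toy payload that mutates its byte representation, not its goal."""

    def __init__(self, objective):
        self.objective = objective            # the main objective stays intact
        self.body = bytearray(b"payload-v0")  # the mutable "program body"

    def signature(self):
        # What a defender would fingerprint and blocklist.
        return hashlib.sha256(bytes(self.body)).hexdigest()

    def mutate(self):
        # "Reprogram" by perturbing the body, which changes the
        # signature while leaving the objective untouched.
        self.body += bytes([random.randrange(256)])


payload = PolymorphicPayload(objective="send spam")
before = payload.signature()

# A defender fingerprints the current version...
blocklist = {before}

# ...so the payload senses it has been detected and mutates past it.
payload.mutate()

print(payload.signature() in blocklist)  # False: old fingerprint evaded
print(payload.objective)                 # objective unchanged
```

The speculative leap in the post is replacing that dumb, fixed mutation rule with something adaptive enough to respond to novel counterattacks, which is exactly what would make the defenders' signature-based strategies stop working.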

Perhaps, as the bad guys try to build their own artificially intelligent malicious programs, the good guys should also try to build their own artificially intelligent defensive programs.  We might then see good and bad battling it out in the digital space while ordinary humans remain unaware of it all in their everyday digital lives.  It does feel like a Ghost in the Shell sort of thing, right?

