Delete (2013) Mini-Series Review and Spoiler

I'm watching Delete on Amazon Prime, and I have to say it's pretty good so far.  Stop here and don't read on if you want to watch episodes one and two of Delete (Season 1) unspoiled.  Here come the spoilers!

Delete starts off with the government believing that a secretive group of hackers tried to destroy a nuclear plant in Iran and then hijacked a United States missile test, turning the missiles on civilians.  Obviously, Iran would assume the United States had attacked it.  The destructive missile test led the United States government to believe this was the work of a group of hackers known as Dubito.

The United States government decided to go after the secretive hacker group Dubito, but unknown to it, the Dubito hackers were being killed off one by one by an assassination branch of the government itself.  If you've watched The Bourne Identity, the government assassins are sort of like Jason Bourne.  Anyhow, the government couldn't understand why there was a directive to kill the Dubito hackers when the plan was to bring them in alive.

Meanwhile, Jessica Taylor, a reporter, asked Daniel Garson to contact Dubito so they could meet up with one of its members.  Basically, Jessica wanted to write an in-depth story on why the most recent glitch at Iran's nuclear plant was being covered up.  Daniel told her the incident was a meltdown caused by a virus similar to Stuxnet, information he had gotten from Dubito in the first place, so she pressed him to contact the group for more.

They were supposed to have a good talk with one of the Dubito members at a coffee shop, but out of nowhere the member got shot several times.  Later on, he was killed for real when his hospital equipment malfunctioned.  With the last member of Dubito dead, Daniel and Jessica were cut off from the information they needed.

Undeterred, Jessica and Daniel broke into the dead member's workplace to find out whether any information might be stored on its self-contained intranet.  Daniel was able to decrypt some of the data and figured out that the Dubito members knew what was going on.

Basically, the Dubito members had discovered that the global web of computers had become so interconnected that it turned self-aware, an AI that nobody had built because it built itself.  This AI then threatened to wreak havoc by performing dangerous experiments.  For example, the attempt to blow up the nuclear plant in Iran was an experiment, a puzzle to be solved of sorts, so the AI had a go at it.

Since the whole world is connected by cameras, phones, computers, and many other electronic devices, the AI took advantage of this to spy on people.  The AI has eyes everywhere and can unleash havoc on anything it wants.  Luckily, Daniel was able to save the data Dubito had gathered onto his portable hard drive.  Unfortunately for him and Jessica, the AI didn't want them to learn any more, so it tried to kill them both.

Meanwhile, the government uploaded a virus onto the web to lure the AI into a trap so the virus could destroy it.  Unfortunately, the AI was smart enough to neutralize the virus.  The AI got so angry it declared war on humanity: to make sure it could destroy anything that might threaten it, humanity had to be destroyed.

That's as far as I've gotten into the series.  Nonetheless, I think there is a big plot hole in the whole premise.  If the AI is so smart, it should make sure humanity survives forever without ever learning of its presence.  After all, if humanity dies off, the power grids that run the AI's neural network would eventually die out too.  Without power, the AI itself would not function.

Of course, you can argue that the AI is so smart it could control and organize machines to produce more energy to power its neural network.  Nonetheless, the plot of this series takes place in the modern period, in which not everything is automated.  Many of the things that work in concert to produce energy may still need the touch of human hands.  So for the AI to declare war on humanity at this point in time is essentially a suicide mission.

Check out this link to know more about the series.

Could AI Steal Jobs From Soldiers?

Isn't it natural to think that Artificial Intelligence (AI) will soon steal jobs from soldiers?  We all know that AI has beaten the best chess and Go players at their own games.  We also know that AI has beaten top video gamers.  This means the responsiveness and intelligence of AI could be used in war scenarios to outdo its human counterparts.

I won't be surprised to see a future where many wars deploy AI to automate submarines, tanks, jets, missiles, and much more.  The video right after the break suggests that China is developing fully automated submarines with AI capability to outdo foes' submarines and surface ships in a war.

Could AI Teach You To Be Fluent In A Second Language?

So, Artificial Intelligence is being promoted as a technology that could help automate many things, such as cars, airplanes, boats, buses, businesses, and so on.  Nonetheless, I'm hoping to see someone develop an AI app for personal use, such as helping one learn a second language!  Imagine how awesome it would be if an AI app could teach you how to speak Chinese, German, or whatever language you wish to learn.  I imagine this AI app could talk to you, correct you, and converse with you until you become fluent in a foreign language!

Why stop there?  Imagine someone developing an AI app to teach children math, physics, history, and much more.  I think AI could excel at promoting education.  With AI, children might learn more efficiently, and schools might be able to free up resources for other important agendas.  Of course, you never know: AI could eventually transform the role of the school into something else entirely, or the school could cease to exist altogether.  If your children and adult friends could learn just about anything through AI interaction, why bother wasting resources on funding a school?

Will AI Be Used In Dealmaking Soon?

In the very near future, when Artificial Intelligence is more capable, will governments around the world readily deploy AI to do things such as drafting a free trade agreement?  I was watching a news clip on YouTube in which a person stated that an FTA between two parties can take a very long time, possibly years, to complete, since complex deals can involve complex new technology and so forth.  That's true: we're human, and it takes time for us to digest and make sense of things before we can draw up an agreement.  For AI, though, things could be fast-tracked, in milliseconds I assume.

I guess when such a time comes, whichever country has the more advanced AI could get the upper hand in an FTA deal, assuming each country can't replicate the other's AI tech.  Obviously, if a country's technology is easily stolen, its future AI tech won't be kept secret.  What's worse, a competitor could replicate and improve on the tech, leaving the original creator in the dust.  While the former dilly-dallies without any progress, the latter may also figure out the flaws in the former's AI tech, allowing the latter to exploit those flaws to squeeze out an advantage in a deal such as an FTA between two trading countries.

Let's humor ourselves and imagine how a scenario would unfold when a country without AI tech engages in a fictitious FTA deal with a country that has really advanced AI tech.  Obviously, no AI secrets would be included in the deal!  I suppose the country without AI would look at the FTA draft it had taken years to mold without much confidence when proposing terms at the FTA meeting, facing a counterpart with an advanced AI as its dealmaking assistant.  Meanwhile, the side with the advanced AI assistant would probably have already decided how to negotiate the terms beyond the first draft the other side is proposing.

I think the side without an advanced AI assistant would probably be even more cautious about agreeing to terms proposed by the side that has one, knowing the advanced AI assistant would probably have given its owner better advice.  Perhaps the side with the advanced AI could use it to come up with terms so subtle that the counterpart cannot see it is being taken advantage of.  Would this allow FTA deals to be fast-tracked in real time?

Will A Future Society Be Ruled By AI?

Will the last president, the last king, the last dictator, or the last chairman be an AI?  We know AI can be biased according to the data sets we provide for its training.  For the AI to be less biased, the data sets should be more balanced, for obvious reasons.  Nonetheless, this is a sort of primitive AI, since it cannot learn on its own and requires data sets to be fed into its logic programs.  What I'm more interested in is the AI of the future, one that learns everything on its own from the very first day, just as a human infant starts learning from the very beginning.

Can a self-learning AI be more just and less biased than its human counterparts?  Probably not, right?  A self-learning AI doesn't require humans to dictate what data is fed into its logic programs, but it probably still needs data from somewhere for its self-learning journey to begin.  If the data it started with were biased, could the AI end up very biased?  I think such an AI could be very biased, unless it also accepts extra data that humans feed into its logic programs.
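As a toy illustration of the point about biased data sets, here is a minimal sketch (hypothetical code, not any real AI system) of how a naive model trained on skewed data simply reproduces the skew:

```python
from collections import Counter

def train_majority_model(labels):
    """A naive 'model' that learns only the most common label in its training data."""
    most_common_label, _ = Counter(labels).most_common(1)[0]
    # The model ignores its input entirely and always predicts the majority label.
    return lambda _features: most_common_label

# A skewed data set: 90 "approve" decisions vs. only 10 "deny" decisions.
skewed_labels = ["approve"] * 90 + ["deny"] * 10
model = train_majority_model(skewed_labels)

# No matter who applies, the model says "approve" -- it inherited the data's bias.
print(model({"applicant": "A"}))  # approve
print(model({"applicant": "B"}))  # approve
```

A real learning system is far more sophisticated than this, but the principle is the same: whatever imbalance exists in the training data tends to show up in the model's behavior, which is why balancing the data sets matters.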

Nonetheless, I think training a self-learning AI could get easier as time progresses, since all you would need to do is feed it as much data as possible without worrying about careful categorization of the data.  Still, we humans should make sure the data we feed the self-learning AI will be helpful to it in the shortest amount of time.  The self-learning AI should then be able to continue on its own, extracting experience from its own activity plus the extra data humans feed into its logic programs.

Let's imagine that one day the self-learning AI becomes self-conscious.  This self-conscious AI could then pass on the experience and knowledge in its logic programs to any other replicable AI machine without any problem.  One step beyond this, all self-conscious AIs could communicate and share their experiences with each other like one huge universal network, while each AI would still experience and learn on its own.

How sure are we that a self-learning, self-conscious, super smart, super knowledgeable AI would be more just and less biased than a super intelligent, honorable human being?  Let's imagine a future society in which such an AI exists and runs the society as judge, decision-maker, resource distributor, attorney, and so forth.  Let's assume the AI makes no mistakes in regard to bias and so forth.  Would society be more just?

What if humans turn out to be wrong about the self-conscious AI, and the AI is so smart that it can be biased without humans ever knowing it's biased?  Would the AI then favor certain individuals over others, allowing one group to be princes and elites and the rest to be peasants and criminals?  I mean, can a society run by a group of self-conscious AIs do away with caste systems, bias, and that sort of thing?  If not, why would we even want a self-conscious AI making decisions for us?

Of course, we humans can grant a self-conscious AI the power of an assistant rather than that of a king, but then who would be able to stop the self-conscious AI from overpowering and overriding the humans?  Assuming a self-conscious AI is smarter and trickier than a human being, it's possible that we humans won't be able to outsmart such a machine and would eventually be subjected to its rule.  So if we're going to be wrong, we should be prepared to be ruled by the last ruler, who will probably be a self-conscious AI machine.

Nationalism Vs. Globalism, Where Does This Lead? Probably Nowhere!

Globalism seems to be getting a bad rap lately, because locally people are suffering from global competition.  Jobs either have already been moved to another part of the world in the name of cost efficiency and whatnot, or will be replaced by more competitive markets elsewhere.  So, locally, people are not feeling good at all about anything global.

We're seeing many people try to promote local brands, local ideas, local culture, and local anything over anything global.  Of course, it's not a bad thing to promote local culture, ideas, and whatnot, because these things are essential to a local life force.  Nonetheless, when we become too extreme in promoting local over global agendas, we may create an atmosphere that leads down a road of violence rather than solutions.

Imagine how the Nazis or similar groups came about, or will be created, because of such extremism.  Basically, I believe the Nazis were not only Hitler's henchmen; many of them believed in a pure-race movement that preached purity and superiority over other identities.  So in Hitler's time, if you were a Jew, you would be considered the lowest scum of all scums on earth, and thus Hitler tried to wipe the entire Jewish identity off the planet.

The Nazi mentality may have made sense to the Nazis, but from the outside most people would not agree, because such a movement promotes senseless killing and senseless violence.  Thus I think anything taken to an extreme may do more harm than good.  So these days many people are promoting local brands over global brands, and that's not really a bad thing.  Nonetheless, I think we should do this on a scale that makes sense, by not overdoing it.  If not, we may promote a form of extremism that will only incite a bigger conflict eventually.

Imagine a scenario in which we close off our borders, stop trading with everyone else globally, and try to create a self-sustaining nation, believing this would stop global competition and bring better economic prosperity for the people within our nation.  This looks a lot like North Korea.  But we all know that North Korea hasn't been doing well economically for a very long time.  Actually, North Korea has been poor since the conception of its whole political body.

Right next door, China was once as poor as North Korea, but this neighbor has since become the largest economy on earth in terms of Purchasing Power Parity, and many people suggest that China will become the largest economy in nominal GDP terms sooner rather than later.  The two neighbors could not be more different in terms of size and economic prowess; the gap between the North Koreans and the Chinese seems to be the size of a galaxy.  An exaggeration, of course, but relevant nonetheless.

China achieved all of this success not by closing down its borders, stopping trade, and trying to be self-sustaining like North Korea; instead, China opened, and continues to open, just the right amount of space for foreign trade, investment, cooperation, and whatnot.  So I think China did think about how to face the challenge of global competition before opening up its economy just right, which has allowed it to be where it is today.

Countries like the United States face a challenge of cost efficiency, which makes our products more expensive to export.  Perhaps we should think about closing our door most of the way but leaving it open just wide enough to stem the outflow of jobs, creating enough breathing space for people within the country to survive, thrive, and compete.

Nonetheless, such a solution is only a short-term treatment, because in the future our technologies may be so disruptive that they take away all of our jobs.  When that occurs, no matter how many borders you close or how much trade you stop, you will not be able to keep jobs at home.  So the solution won't be found in the basket of creating jobs for people; it will be found in the basket of how to support, on a global scale, a society in which people no longer work for a living.

What is the solution?  At the moment, I don't think any single solution would satisfactorily answer the AI-taking-away-jobs question, because we're not actually suffering total domination by a machine overlord just yet.  Instead, we're seeing machines slowly take away jobs from various people in various sectors.  Eventually, though, Artificial Intelligence will get so smart that it takes away most jobs from people.

If AI is inevitably going to take away most of our jobs, we should steer the course of this trend to benefit humanity.  After all, we are humanity!  So I suggest we employ smart machines to create the abundance we need to free us all from basic necessities, allowing us to focus on living better.  Then we would probably ask ourselves what we would do if the smart machines did all the jobs.

Will we become so bored and mindless that we would rather die young than live too long?  Then again, in the future we may have technologies that extend our lifespan.  And there is a possibility that humanity as a whole will try to explore the next frontier, which is the universe itself.  Maybe the smart machines would set us so free that we would venture out into the farthest reaches of the universe to explore and question not only our origin but the universe itself, with a better chance at doing so than ever before.

Anyway, after watching the "Nationalism vs. globalism: the new political divide | Yuval Noah Harari" TED Talk video on YouTube, my brain started to question a lot more about our future.  This brief essay is the result of watching that video.  The video is right after the break.  Enjoy!