The Future Is Now: News Anchors Could Lose Jobs As AI Takes Over — The AI Won’t Go Off Script, Ever!

The Chinese news agency Xinhua has just unveiled the world’s first AI news anchor, which is based on a real human’s facial expressions.  You can easily tell the speech is AI-generated since the voice is quite digitized.  Anyhow, I think they could make the voice sound more human-like if they chose to.

I wonder if someone else out there is watching this video and thinking: what if we could also get rid of real actors by using this technology?  I think it’s possible that in the future we could use AI to replace real human actors!  Nonetheless, I could also foresee this technology being used to fabricate a scandal and frame a good person!


Delete (2013) Mini-Series Review and Spoiler

I’m watching Delete on Amazon Prime, and I have to say it’s pretty good so far.  Stop here and don’t read on if you plan to watch episodes one and two of Delete (Season 1).  Here come the spoilers!

Delete starts off with the government believing that a group of secretive hackers tried to destroy a nuclear plant in Iran and took over the United States’ test missiles to kill civilians during a missile test.  Obviously, Iran would think it had to be the United States that did it to them.  The destructive missile test in the United States had the government believing this was the work of a group of hackers known as Dubito.

The United States government decided to go after the secretive hacker group Dubito, but unbeknownst to them, the hackers in the Dubito group were being killed off one by one by an assassination branch of the United States government.  If you’ve watched The Bourne Identity, the government’s assassin operatives are sort of like Jason Bourne.  Anyhow, the United States government couldn’t understand why there was a directive to kill the Dubito hackers when it was trying to bring them in alive.

Meanwhile, Jessica Taylor (a reporter) tried to get Daniel Garson to contact Dubito so they could meet up with one of its members.  Basically, Jessica wanted to write an in-depth story on why there was a cover-up of the most recent glitch at Iran’s nuclear plant.  Daniel told Jessica that the incident was a meltdown caused by a virus similar to Stuxnet, so she told him to contact Dubito for more information.  Daniel had gotten this information from Dubito in the first place.

They were supposed to have a good talk with one of the Dubito members at a coffee shop, but out of nowhere the Dubito member got shot a couple of times.  Later on, this member was killed for real when the hospital equipment malfunctioned.  With the last member of the Dubito group dead, Daniel and Jessica were cut off from the information they needed.

Undeterred, Jessica and Daniel broke into the workplace of the last Dubito member, who had just died, to find out if any information might be stored on its self-contained intranet.  Daniel was able to decrypt some information and figured out that the Dubito members knew what was going on.

Basically, the Dubito members found out that the global web of computers had become so interconnected that it turned self-conscious, as if it were an AI that nobody had built — because it built itself.  This AI then threatened to wreak havoc by performing dangerous experiments.  For example, the attempt to blow up the nuclear plant in Iran was an experiment, a sort of puzzle that needed to be solved, and so the AI had a go at it.

Since the whole world is connected by cameras, phones, computers, and many other electronic devices, this AI took advantage of that to spy on people.  The AI has eyes everywhere, and it could unleash havoc on anything it wanted.  Luckily, Daniel was able to save the data Dubito had gathered onto his portable hard drive.  Unfortunately for him and Jessica, the AI didn’t want them to know more, and so it tried to kill them both.

Meanwhile, the government uploaded a virus onto the web to lure the AI into a trap so the virus could destroy it.  Unfortunately, the AI was smart enough to neutralize the virus.  The AI got so angry that it declared war on humanity.  It wanted to make sure it could destroy any threat against it, and so humanity had to be destroyed.

This is as far as I’ve gotten into the series.  Nonetheless, I think there is a big plot hole in the whole series.  If the AI is so smart, it should make sure humanity survives forever and never learns of its presence.  After all, if humanity dies off, the power grids that power the AI’s neural network would eventually die out too.  Without power, the AI itself would not function.

Of course, you can argue that the AI is so smart that it could control and organize machines to produce more energy to power its neural network.  Nonetheless, the plot of this series takes place in a modern period in which not everything is automated.  This means many of the things that work in concert to produce energy may still need the touch of human hands.  So for the AI to declare war on humanity at this point in time is essentially a suicide mission.

Check out this link to know more about the series.

Could AI Steal Jobs From Soldiers?

Isn’t it natural that Artificial Intelligence (AI) would soon steal jobs from soldiers?  We all know that AI has beaten the best chess and Go players at their games.  We also know that AI has beaten the best video gamers at their games.  This means the responsiveness and intelligence of AI could be used in war scenarios to outdo its human counterparts.

I won’t be surprised to see a future where many wars deploy AI to automate submarines, tanks, jets, missiles, and much more.  The video right after the break suggests that China is developing fully automated submarines with AI capabilities to outdo foes’ submarines and surface ships in a war.


Could AI Teach You To Be Fluent In A Second Language?

So, Artificial Intelligence is being promoted as a technology that could help automate many things such as cars, airplanes, boats, buses, businesses, and so on.  Nonetheless, I’m hoping to see someone develop an AI app for personal use, such as helping one learn a second language!  Imagine how awesome it would be if an AI app could teach you how to speak Chinese, German, or whatever language you wish to learn.  I imagine this AI app could talk to you, correct you, and converse with you until you become fluent in a foreign language!

Why stop there?  Imagine that someone could also develop an AI app to teach children math, physics, history, and much more.  I think AI could excel at promoting education.  With AI, I think children could learn more efficiently, and schools might be able to free up resources for other important agendas.  Of course, you never know: AI could eventually transform the role of the school into something else entirely, or schools could cease to exist altogether.  If your children and adult friends could learn just about anything through AI interaction, why bother wasting resources on funding a school?

Will AI Be Used In Dealmaking Soon?

In the very near future, when Artificial Intelligence is more capable, will governments around the world readily deploy AI to do things such as drafting a free trade agreement?  I’m watching a news clip on YouTube, and a person states that an FTA between two parties can take a very long time, possibly years, to complete, since complex deals may involve complex new technology and so forth.  This is true: we’re human, and it takes time for us to digest and make sense of things before we can draw up an agreement.  For AI, though, things could be fast-tracked in milliseconds, I assume.

I guess when such a time comes, whichever country has the more advanced AI could get the upper hand in an FTA deal, assuming each country can’t replicate the other’s AI tech.  Obviously, if a country’s technology is easily stolen, then its future AI tech isn’t going to be kept secret.  What’s worse, a competitor could replicate and improve on the tech to leave the original creator in the dust.  While the former dilly-dallies without any progress, the latter may also be able to figure out the flaws in the former’s AI tech, allowing the latter to exploit those flaws to squeeze out an advantage in a deal such as an FTA between two trading countries.

Let’s humor ourselves and imagine how a scenario would unfold when a country without AI tech engages in a fictitious FTA deal with a country that has really advanced AI tech.  Obviously, no AI secrets would be included in the deal!  I suppose the country without AI tech would look at the FTA draft it had taken years to mold without much confidence when proposing its terms in the FTA meeting, while its counterpart uses an advanced AI as an assistant in the dealmaking.  Meanwhile, the side with the advanced AI assistant would probably have already decided how to negotiate the terms beyond the first draft that the side without one is proposing.

I think the side without an advanced AI assistant would probably be even more cautious about agreeing to terms proposed by the side with one, knowing that the AI would probably have given better advice to its owner.  Perhaps the side with the advanced AI assistant could use the AI to come up with terms so subtle that the counterpart couldn’t see it was being taken advantage of.  Would this allow the FTA deal to be fast-tracked in real time?

Will A Future Society Be Ruled By AI?

Will the last president, the last king, the last dictator, or the last chairman be an AI?  We know AI can be biased according to the data sets we provide for training.  For the AI to be less biased, the data sets should be more balanced, for obvious reasons.  Nonetheless, this is a sort of primitive AI, since it cannot learn on its own and requires data sets to be fed into its logic programs.  What I’m more interested in is the AI of the future, which would learn everything on its own from the very first day, just like how a real human infant starts learning from the very beginning.

Can a self-learning AI be more just and less biased than its human counterparts?  Probably not, right?  A self-learning AI doesn’t require humans to dictate what data should be fed to its logic programs, but it probably still requires data from somewhere for its self-learning journey to begin.  If the data the self-learning AI started with were biased, could the AI end up very biased?  I think such an AI could be very biased unless it also accepts extra data that humans feed into its logic programs.
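The point about skewed starting data can be made concrete with a toy sketch of my own (nothing from the show or the news clips): a naive model that simply learns label frequencies from whatever data it starts with will inherit that data’s imbalance directly.

```python
from collections import Counter

def train_frequency_model(labels):
    """Learn how often each label appears in the training data."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

def predict(model):
    """Predict the most frequent label seen during training."""
    return max(model, key=model.get)

# A balanced data set produces no preference either way...
balanced = ["approve"] * 50 + ["deny"] * 50
print(train_frequency_model(balanced))  # {'approve': 0.5, 'deny': 0.5}

# ...but a skewed one bakes the skew straight into the model.
skewed = ["approve"] * 90 + ["deny"] * 10
model = train_frequency_model(skewed)
print(model)           # {'approve': 0.9, 'deny': 0.1}
print(predict(model))  # approve
```

The same logic, scaled up, is the worry about a self-learning AI: whatever imbalance sits in its seed data shows up in its decisions unless balancing data is deliberately added later.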

Nonetheless, I think training a self-learning AI could become easier as time progresses, since all you would need to do is feed it as much data as possible without worrying about carefully categorizing that data.  Still, we humans should make sure the data we feed the self-learning AI will be helpful to it in the shortest amount of time.  The self-learning AI should be able to continue on its own, extracting experience both from itself and from the extra data that humans feed into its logic programs.

Let’s imagine that one day a self-learning AI could become self-conscious.  This self-conscious AI could then pass on the experience and knowledge of its own logic programs to any other replicable AI machine without any problem.  One step beyond this: all self-conscious AIs could communicate and share their experiences with each other like a huge universal network, yet each AI would still experience and learn on its own.

How sure are we that a self-learning, self-conscious, super smart, super knowledgeable AI could be more just and less biased than a super intelligent, honorable human being?  Let’s imagine a future society in which such an AI exists, and this AI runs the society as judge, decision-maker, resource distributor, attorney, and so forth.  Let’s assume the AI would not make mistakes with regard to being biased and so forth.  Would society be more just?

What if we humans turn out to be wrong about the self-conscious AI, and the AI is so smart that it could be biased without us even knowing it?  Would the AI then favor certain individuals over others, allowing a certain group to be princes and elites while the rest become peasants and criminals?  I mean, can a society run by a group of self-conscious AIs do away with the caste system, bias, and similar sorts of things?  If not, why would we even want a self-conscious AI to make decisions for us?

Of course, we humans can allow a self-conscious AI to have the power of an assistant and not of a king, but then who would be able to stop the self-conscious AI from overpowering and overriding the humans?  Assuming a self-conscious AI is smarter and trickier than a human being, it’s possible that we humans won’t be able to outsmart such a machine and would eventually be subjected to AI rule.  Thus, if we’re going to be wrong, we should be prepared to be ruled by the last ruler, which is probably going to be a self-conscious AI machine.