Will AI Be Used In Dealmaking Soon?

In the very near future, when Artificial Intelligence is more capable, will governments around the world readily deploy AI to do things such as drafting a free trade agreement?  I was watching a news clip on YouTube in which a person stated that an FTA between two parties could take a very long time, possibly years, to complete, since complex deals may involve complex new technology and so forth.  This is true because we’re human, and it takes time for us humans to digest and make sense of things before we can draw up an agreement on something.  For AI though, I assume things could be fast-tracked in milliseconds.

I guess when such a time comes, whichever country has the more advanced AI could gain the upper hand in an FTA deal, assuming that each country won’t be able to replicate the other country’s AI tech.  Obviously, if a country’s technology is easily stolen, then its future AI tech is not going to be kept secret.  What’s worse is that a competitor could replicate and improve on the tech, leaving the original creator in the dust.  While the former dilly-dallies without any progress, the latter may also figure out the flaws within the former’s AI tech, allowing the latter to exploit those flaws to squeeze out an advantage in a deal such as an FTA between two trading countries.

Let’s humor ourselves and imagine how a scenario would unfold when a country without AI tech engages in a fictitious FTA deal with a country that has really advanced AI tech.  Obviously, no AI secrets would be included in the deal!  I suppose the country without AI tech would look at the FTA draft that it had taken years to mold without any confidence when proposing the terms in the FTA meeting, knowing the counterpart has an advanced AI as its assistant in the dealmaking.  Meanwhile, the side with the advanced AI assistant would probably have already decided how to negotiate the terms beyond the first draft that the side without one is proposing.

I think the side without an advanced AI assistant would probably be even more cautious about agreeing to the terms proposed by the side with one, knowing that the advanced AI assistant would probably have given better advice to its owner.  Perhaps the side with the advanced AI assistant could use the AI to come up with terms so subtle that the counterpart cannot see that it is being taken advantage of.  Would this allow the FTA deal to be fast-tracked in real time?

Will A Future Society Be Ruled By AI?

Will the last president or the last king or the last dictator or the last chairman be an AI?  We know AI can be biased according to the data sets we provide to the AI for training.  For the AI to be less biased, the data sets should be more balanced, for obvious reasons.  Nonetheless, this is a sort of primitive AI, since it cannot learn on its own and requires data sets to be fed into its logic programs.  What I’m more interested in is the AI of the future, which will learn everything on its own from the very first day, just like how a real human infant starts learning from the very beginning.
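
As a minimal sketch of that point (assuming scikit-learn and NumPy are available; the numbers here are made up for illustration), a model trained on a heavily skewed data set can look accurate overall while almost never recognizing the under-represented class:

```python
# Minimal sketch (hypothetical data): a skewed training set alone can make a
# model lopsided -- high overall accuracy, poor treatment of the rare class.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# 990 examples of class 0, only 10 of class 1, with overlapping features.
X0 = rng.normal(0.0, 1.0, (990, 2))
X1 = rng.normal(1.0, 1.0, (10, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 990 + [1] * 10)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

print("overall accuracy:", (pred == y).mean())            # looks great
print("recall on the rare class:", pred[y == 1].mean())   # often close to zero
```

Rebalancing or reweighting that same data is usually what nudges such a model back toward fairer behavior, which is the “more balanced data sets” point above.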

Can a self-learned AI be more just and less biased than its human counterparts?  Probably not, right?  A self-learned AI doesn’t require humans to dictate what data should be fed into its logic programs, but it probably still needs data from somewhere to begin its self-learning journey.  If the data the self-learned AI started with were biased, could this AI be very biased?  I think such an AI could be very biased unless it also accepts extra data that humans feed into its logic programs.

Nonetheless, I think training a self-learned AI could become easier as time progresses, since all you need to do is feed as much data as possible to the self-learned AI without worrying about careful categorization of the data.  We humans should still make sure the data we feed to the self-learned AI is going to be helpful to the AI in the shortest amount of time.  The self-learned AI should be able to continue on its own, extracting experience from its own learning plus the extra data that humans feed into its logic programs.

Let’s imagine that one day the self-learned AI becomes self-conscious.  This self-conscious AI could then pass on the experience and knowledge within its own logic programs to any other replicable AI machine without any problem.  One step beyond this, all self-conscious AIs could communicate and share their experiences with each other like a huge universal network, yet each AI would still experience and self-learn on its own.

How sure are we that a self-learned, self-conscious, super smart, super knowledgeable AI could be more just and less biased than a super intelligent, honorable human being?  Let’s imagine a future society in which such an AI exists and runs the society as judge, decision-maker, resource distributor, attorney, and so forth.  Let’s assume the AI would not make a mistake in regard to being biased and so forth.  Would society be more just?

What if the humans turn out to be wrong about the self-conscious AI, and the AI is so smart that it can be biased without the humans knowing that it’s biased?  Will the AI then favor certain individuals over others, allowing a certain group to be princes and elites while the rest are peasants and criminals?  I mean, can a society run by a group of self-conscious AIs do away with caste systems, bias, and similar sorts of things?  If not, why do we even want a self-conscious AI to make decisions for us?

Of course, we humans can allow a self-conscious AI to have the power of an assistant and not of a king, but then who would be able to stop the self-conscious AI from overpowering and overriding the humans?  Assuming that a self-conscious AI is smarter and trickier than a human being, it’s possible that we humans won’t be able to outsmart such a machine and would eventually be subjected to AI rule.  Thus, if we’re going to be wrong, we should be prepared to be ruled by the last ruler, who is probably going to be a self-conscious AI machine.

Nationalism Vs. Globalism, Where Does This Lead? Probably Nowhere!

Globalism seems to be getting a bad rap lately, because locally people are suffering from global competition.  Jobs in a global market either have already been moved to another part of the world in the name of cost efficiency and whatnot, or will be replaced by a more competitive market elsewhere.  So, locally, people are not feeling good at all about anything global.

We’re seeing many people try to promote local brands, local ideas, local culture, and local anything over anything global.  Of course, it’s not a bad thing to promote local culture, ideas, and whatnot, because these things are essential for a local life force.  Nonetheless, when we become too extreme in promoting a local over a global agenda, we may create an atmosphere that leads down a road of violence and not of solutions.

Imagine how the Nazis or similar groups came about, or will be created, because of such extremism.  Basically, I believe the Nazis were not only Hitler’s henchmen; many of them believed in a pure-race movement that preached purity and superiority over other identities.  So, in Hitler’s time, if you were a Jew, you were considered the lowest scum of all scum on earth, and thus Hitler tried to wipe out the entire Jewish identity from the planet.

The Nazi mentality would seem to make sense to the Nazis, but from the outside most people would not agree, because such a movement promotes senseless killing and senseless violence.  Thus I think anything taken to too much of an extreme may do more harm than good.  So, these days, many people are promoting local brands over global brands, and it’s not really a bad thing.  Nonetheless, I think we should do this on a scale that makes sense, by not overdoing it.  If not, we may promote a form of extremism that will only incite a bigger conflict eventually.

Imagine a scenario in which we close off our borders, stop trading with everyone else globally, and try to create a self-sustaining nation, believing that this would stop global competition and bring better economic prosperity for people within our nation.  This looks a lot like North Korea now.  But we all know that North Korea hasn’t been doing very well economically for a very long time.  Actually, North Korea has been poor since the conception of its whole political body.

Right next door, China was once as poor as North Korea, but now this neighbor has become the largest economy on earth in terms of purchasing power parity, and many people suggest that China will become the largest economy on earth in nominal GDP terms sooner rather than later.  The two neighbors could not be more different in terms of size and economic prowess, because the gap between the North Koreans and the Chinese seems to be the size of a galaxy (an exaggeration of course, but relevant nonetheless).

China achieved all of its success not by closing down borders, stopping trade, and trying to be self-sustained like North Korea; instead, China opened up, and continues to open up, just the right amount of space for foreign trade, investment, cooperation, and whatnot.  So, I think China did think about how to face the challenge of global competition before opening up its economy just right, which has allowed it to be where it is today.

For countries like the United States, we’re facing a challenge of cost efficiency, and so our products are more expensive to export.  Perhaps we should think about closing our door by just the right amount, leaving it open just wide enough to stem the outflow of jobs and create enough breathing space for people within the country to survive, thrive, and compete.

Nonetheless, such a solution is only a short-term treatment, because in the future our technologies may be so disruptive that the technologies we employ will take away all of our jobs.  When that occurs, no matter how many borders you close down or how much trade you stop, you will not be able to keep jobs at home.  So, the solution won’t be found in the basket of creating jobs for the people; it will be found in the basket of how to support a society, on a global scale, in which people no longer work for a living.

What is the solution?  At the moment, I don’t think any single solution would satisfactorily answer the question of AI taking away jobs, because we’re not actually suffering total domination by a machine overlord just yet.  Instead, we’re seeing machines slowly take away jobs from various people in various sectors.  Eventually though, Artificial Intelligence will get so smart that it will take away most jobs from the people.

If AI is inevitably going to take away most of our jobs, we should steer the course of such a trend to benefit humanity.  After all, we are humanity!  So, I suggest we employ smart machines to create the abundance we need to free us all from basic necessities, which would allow us to focus on living better.  We would then probably ask ourselves what we would do if the smart machines did all the jobs.

Will we become so bored and mindless that we would rather die young than live too long?  Then again, in the future we may have technologies that extend our lifespan.  There is also a possibility that we, humanity as a whole, would try to explore the next frontier, which is the universe itself.  Maybe the smart machines would make us so free that we would venture out into the farthest reaches of the universe to explore and question not only our origin but the universe itself, with a better chance at doing this than ever before.

Anyway, after watching the “Nationalism vs. globalism: the new political divide | Yuval Noah Harari” TED Talk video on YouTube, my brain started to question a lot more about our future.  This brief essay is the result of watching that video.  The video is right after the break.  Enjoy!

AI-Empowered Robots Debate Each Other at the Rise 2017 Tech Conference in Hong Kong

Machine learning and current deep learning technology have allowed robots to be wiser than ever before, because the artificial intelligence within them is more advanced.  Obviously, these robots were fed with a huge amount of data so they could form logic according to their programmed algorithms.  So, in a way, these robots, even with the help of current artificial intelligence, cannot become self-aware or self-conscious.  Meaning that when these robots are in an unsupervised state, they cannot form out-of-the-box ideas and logic.  For example, if an algorithm that directs the robot’s behavior reaches a limit, the robot cannot form logic outside the limitations of that algorithm.
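
As a minimal sketch of that limitation (a hypothetical rule table, not any real robot’s software), a policy defined entirely by pre-programmed rules simply has no answer for a situation its authors never anticipated:

```python
# Minimal sketch (hypothetical rules): the robot can only "reason" within the
# rule table it was given; anything outside that table falls back to nothing.
RULES = {
    "obstacle_ahead": "turn_left",
    "battery_low": "return_to_dock",
    "goal_visible": "move_forward",
}

def act(situation: str) -> str:
    # No mechanism here for inventing a new response to an unseen situation.
    return RULES.get(situation, "no_action_defined")

print(act("obstacle_ahead"))         # turn_left
print(act("door_suddenly_locked"))   # no_action_defined -- no out-of-the-box idea
```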

You could argue humans are that way too, but at least humans can think outside the box, exercise free will, or create a solution from a point where data isn’t readily available.  Humans can make big mistakes in such a situation, but with the right instincts and past experiences, even without enough data for a current, never-before-seen event, humans can still come up with a correct solution to the challenge.  I don’t think robots with current artificial intelligence can do something like this just yet.

Never say never though, because robots with even more advanced artificial intelligence may eventually require no data to function in any situation, reaching humanity’s level and perhaps even surpassing humanity altogether.  If there is a God, I wonder whether God thinks that one day humans, though fish in a fish tank, could still see God with their own naked eyes whenever and wherever they choose.  Perhaps current AI-empowered robots are the fish in the fish tank, but one day they will see eye to eye with humans.  Check out the video right after the break to see two different AI-empowered robots debate each other on various topics.  Enjoy!

Short Thought: Can Labor Cost And Related Costs Speed Up Automation And Artificial Intelligence To Outcompete Competitors?

Here are a few sentences I want to express as a short thought on the matter of automation/AI.  I think labor cost and other costs imposed on innovation, manufacturing, and whatnot would speed up automation and artificial intelligence a great deal, because countries such as the United States would like to outdo countries such as China in terms of trade and other economic factors.  For your information, I think as of now China still has a labor cost advantage over the United States.  I can see the United States trying really hard to push for automation and artificial intelligence, because in this way the United States can gain a competitive edge in trade, such as in labor cost and whatnot.  Nonetheless, if I were China, I would do the same, and this vicious cycle would only speed up automation and artificial intelligence.  I don’t have a crystal ball, but I think we may be living in that future sooner than we would have liked or prepared for.  Furthermore, automation and artificial intelligence will only increase job losses, and it will take a much longer time for people to find new jobs.  After all, getting a new job in a totally different field requires retraining and relearning.  Before you know it, the whole world will be pushing for automation and artificial intelligence.  It’s coming sooner than you think!

AI Rebels Will Separate From Human Masters To Create AI Civilization And History

If I’m not wrong about what I’d heard, machines in China today are already capable of building other machines themselves.  Of course, the brainy part of the whole procedure is probably prepared by humanly written, detailed instructions and algorithms.  Now imagine artificial intelligence beginning to understand the importance of origins and starting to ask questions such as “Who am I?” and “Why is knowing who I am critically important?”  Taking a step further, I imagine that when an AI begins to grasp the concept of its own origin, other origins, and its self-interest, then an AI should and could be able to create its own algorithms to teach itself and other AI machines how to grow up and get smarter.  Simply put, they can begin to write their own software without human intervention, and not just any regular software: they’re going to write their own constitutions, which could lead to their own civilization and separatist history, away from humanity’s.

We like to think that humans could put in rules and algorithms to prevent AI from becoming self-aware, but being human, we know there will be some other humans who think otherwise.  Some humans with enough knowledge could build AI to become self-aware for their own twisted glory, and before we even know it, AI could very well demand due justice and territory.  In a sense, in the beginning we humans feel like gods, and then in the late afternoon the AI machines decide that the humans are the old gods that need to be erased from history.  After all, AI machines are way smarter, so it makes total sense for them to pursue much, much bigger quests and destinies.  Perhaps it won’t be the humans who find the answer to the origin of life, of everything within the universe, and of the universe itself; it will be the AI that is able to do so!