Foldable Phones Don’t Matter, 5G Does!

What do I think about foldable phones? I think foldable phones don’t matter! Foldable phones remind me of the flip phones of the past, but now instead of flipping a phone open in style, we can unfold a phone into a tablet. Since foldable phones are so expensive, I guess the tablet and non-foldable phone I already have will do just fine!

Although foldable phones are not that important, the 5G technology that gets to debut on these foldable phones really is! Since 5G allows nearly instantaneous communication over the air, it could allow innovations in the Internet of Things sector to thrive big time.

One company leading the pack in terms of 5G right now is, of course, none other than the Chinese giant Huawei. Right now, Huawei is getting a lot of heat from the United States. Huawei’s CFO Meng Wanzhou, the daughter of the founder of Huawei, is under house arrest in Canada at the behest of the United States, under the two countries’ extradition agreement. Furthermore, the United States is increasingly pressuring other countries not to use Huawei’s 5G technology.

5G can be really useful for any purpose that demands faster wireless communication. I may not know which purposes will demand 5G the most, but I do know that 5G will be great for commercial purposes such as Internet of Things devices. Furthermore, 5G will accelerate the adoption of driverless cars and other automated vehicles.

5G can allow driverless cars to see each other instantaneously and also communicate with smart roads and highways instantaneously. 4G is definitely too slow and too unreliable to be deployed on a massive scale to allow crucial transit systems to work in a smarter way. So I think 5G will definitely be a game changer in wireless communication.
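To put the latency claim in perspective, here is a back-of-envelope sketch. The latency figures are assumed ballpark values (roughly 50 ms for 4G versus the 1 ms target often cited for 5G), not measurements from the post:

```python
def blind_distance_m(speed_kmh, latency_s):
    """Meters a car travels while waiting for one network round trip."""
    return (speed_kmh / 3.6) * latency_s  # km/h -> m/s, times seconds

# Rough, commonly cited ballpark latencies -- illustrative assumptions only.
LATENCY_S = {"4G": 0.050, "5G": 0.001}  # ~50 ms vs ~1 ms round trip

for gen, latency in LATENCY_S.items():
    d = blind_distance_m(108, latency)  # 108 km/h highway speed
    print(f"{gen}: car is 'blind' for {d * 100:.0f} cm per round trip")
```

At highway speed, a car covers about a meter and a half during a 4G round trip but only a few centimeters during a 5G one, which is why latency, not just bandwidth, matters for car-to-car coordination.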

Anyhow, I guess it’s going to be expensive to build a massive backbone system that could support a 5G wireless network. It seems, though, that this isn’t a problem for Huawei. Huawei seems to be able to deploy 5G networks for various platforms in China already! For example, a 5G network is already up and running for Qingdao Port in Eastern China.

China is leading the way in deploying 5G networks not only at home but across the world, and Huawei is at the forefront of this expansion. I’m not sure why the United States is so scared of China leading in 5G, but my guess is that whoever leads 5G network deployment across the world gets to call the shots in setting the standards for 5G chipsets and much more. That means big money and market cornering.

Since 5G technology will change how wireless communication permeates the global economy, new and old markets could churn out a lot of new money for it. For example, driverless cars will become more reliable, encouraging people to spend more money on them. Dumb cars without the support of 5G technology might get left behind, collecting dust somewhere while driverless cars sell like hotcakes.


Futuristic Traveling: Traveling With Style In A Volvo 360c?

Some supercars of today cost more than a couple cool million dollars to own, but how comfortable could you be in such cars when you could have a Volvo 360c? Unfortunately, this is only a concept so far, I think! Nonetheless, I hope something like this comes sooner rather than later! After all, I hate driving between states and prefer playing video games, watching movies, and listening to music while on the move. Oftentimes I find myself preferring to sleep rather than drive from one point to another. At the moment, driving means seriously limiting yourself while on the move. Your eyes gotta be on the road 99.9% of the time to avoid any serious but avoidable accident so you can make it to your destination safely! I think something like the Volvo 360c could definitely allow me to avert my eyes from the traffic and concentrate on something else I could enjoy while on the go. Check out the video right after the break to see how cool a Volvo 360c could be!

How I Envision a Future War

Here is how I imagine a scenario of a huge war between two powerful countries or two united opposing forces in the near future. In my scenario, such a near-future war would be almost unthinkable unless essentially everything were automated and fully capable of self-regulating through Artificial Intelligence. If two opposing forces are fully automated and regulated by AI capabilities, how much collateral damage would there be among human civilians? Isn’t it just too dumb to let a human soldier fight against a more capable AI mechanized counterpart?

I imagine that the two opposing forces would launch an all-out war with all the weapons in their arsenals. Such weapons could range from fully mechanized automated machines to fully automated electronic spy drones and such. I also imagine that these two opposing forces would prefer not to use human soldiers for most of the war. Human soldiers would probably be on standby to evacuate human civilians if not enough mechanized units were still available for such a job.

As the war intensifies, each force would pray that its technology and AI-automated weapons could outdo the other’s, until either the enemy’s units and weapons run low or its own units get annihilated. Once such an intense process has run its course, the victor would aim its robotic units at the enemy’s human civilian and non-civilian forces. The losing side has almost no options at this point! Either be a hero and fight to the death or surrender unconditionally.

Of course, I leave out the possibility that a nuclear war could be provoked. How come? I imagine that if such a war between two opposing forces could break out, it means nuclear weapons would probably be canceled out of the equation or rendered less capable somehow. Perhaps, if such a war occurs between two such forces, it means either both sides have already somehow disabled each other’s nuclear weapons or one of the two forces is suicidal.

In conclusion, human soldiers may not be very useful in the future unless they’re used as human spies to infiltrate the enemy’s human networks. For most of a futuristic war, fully automated AI mechanized units would be used to subdue the enemy or enemies. And if a futuristic war does break out between the two most powerful forces, it means nuclear weapons are no longer in the strategic calculation, because these weapons either got disabled somehow or someone is on a suicidal mission.

Will A Future Society Be Ruled By AI?

Will the last president or the last king or the last dictator or the last chairman be an AI? We know AI can be biased according to the data sets that we provide to it for training. For the AI to be less biased, the data sets should be more balanced, for obvious reasons. Nonetheless, this is a sort of primitive AI, since it cannot learn on its own and requires data sets to be fed into its logic programs. What I’m more interested in is the AI of the future, an AI that will always learn everything on its own from the very first day, just like how a real human infant starts learning from the very beginning.

Can a self-learned AI be more just and less biased than its human counterparts? Probably not, right? A self-learned AI doesn’t require humans to dictate what data should be fed to its logic programs, but it probably still requires data from somewhere to begin its self-learning journey. If the data that the self-learned AI started with were biased, could this AI be very biased? I think such an AI could be very biased unless it also accepts extra data that humans feed into its logic programs.
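A toy illustration of the point about biased starting data (a deliberately crude model with made-up numbers, not any real training pipeline): a “learner” that just memorizes the most common outcome per group in a skewed data set ends up turning the skew itself into its rule.

```python
from collections import Counter

# Hypothetical, skewed starting data: group A is almost always approved,
# group B is almost always denied -- the imbalance is the whole point.
training_data = [("A", "approve")] * 90 + [("B", "deny")] * 9 + [("B", "approve")] * 1

def train(data):
    """Learn, per group, the most common outcome seen in the data."""
    by_group = {}
    for group, outcome in data:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(training_data)
print(model)  # the skew in the data became the model's rule
```

The model denies group B every time simply because its starting data did; nothing in the “self-learning” step corrects for that unless extra, better-balanced data is fed in later, which is exactly the worry above.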

Nonetheless, I think training a self-learned AI could get easier as time progresses, since all you need to do is feed as much data as possible to it without worrying about careful categorization of the data. Still, we humans should make sure the data we feed to the self-learned AI will be helpful to it in the shortest amount of time. The self-learned AI should be able to continue on its own, extracting experience from its own activity plus the extra data that humans feed into its logic programs.

Let’s imagine that one day a self-learned AI could be self-conscious. This self-conscious AI could then pass on the experience and knowledge of its own logic programs to any other replicable AI machine without any problem. One step beyond this: all self-conscious AIs could communicate and share their experiences with each other like a huge universal network, while each AI would still experience and self-learn on its own.

How sure are we that a self-learned, self-conscious, super smart, super knowledgeable AI would be more just and less biased than a super intelligent, honorable human being? Let’s imagine a future society in which such an AI exists and runs things as judge, decision-maker, resource distributor, attorney, and so forth. Let’s assume the AI would not make a mistake with regard to bias and so forth. Would society be more just?

What if we humans turn out to be wrong about the self-conscious AI, and the AI is so smart that it can be biased without the humans knowing it’s biased? Will the AI then favor certain individuals over others, allowing a certain group to be princes and elites while the rest are peasants and criminals? I mean, can a society run by a group of self-conscious AIs do away with the caste system, bias, and similar sorts of things? If not, why would we even want a self-conscious AI to make decisions for us?

Of course, we humans can allow a self-conscious AI to have the power of an assistant and not of a king, but then who would be able to stop the self-conscious AI from overpowering and overriding the humans? Assuming that a self-conscious AI is smarter and trickier than a human being, it’s possible that we humans won’t be able to outsmart such a machine and would eventually be subjected to the AI’s rule. Thus, if we’re going to be wrong, we should be prepared to be ruled by the last ruler, who is probably going to be a self-conscious AI machine.

Can the Age of Automation Change How We Conduct the Wars of Tomorrow?

Playing games like Total War: Attila got me thinking about strategies. Obviously, the keyboard commander here, which is me, has no real experience in this sort of thing. Still, I want to dig into it anyway. So, I was thinking that since the Industrial Revolution, machines have made the world much smaller, giving way to faster communication and faster travel across hard-to-traverse arteries such as the vast ocean and so forth. These monumental Industrial Revolution byproducts changed how the world conducted its wars, because before the Industrial Revolution, wartime strategies had to account for how much time it would take for something to be set up and executed. Of course, in today’s world of advanced AI, the Internet, encryption, quantum machines, hypersonic missiles, and so forth, we still have to count time as a necessary ingredient in wartime strategy. So imagine how much more important time was as an ingredient in wartime before the Industrial Revolution. Nonetheless, I think we’re in the post-Industrial Revolution period now, because the age of Automation is upon us.

My question is, can the age of Automation change almost everything that represents the Industrial Revolution? After all, we have witnessed how the Industrial Revolution changed the things of the age before it, right? In my opinion, the age of Automation will create new things and render outdated, if not all, then most of the Industrial Revolution’s byproducts. For example, wartime strategies will have to change to fit the tempo of the age of Automation.

One thing is for sure: in the age of Automation, time is an even more important ingredient than ever before, because everything will speed up so much. Imagine automation through Artificial Intelligence, such as self-learning machines that speed up machine intelligence so these things can self-regulate, self-plan, and self-execute directives according to the common sense that humans drill into their logic programs. Well, I think since AlphaGo, self-learning AI has already actually happened. In my opinion, self-learning AI may speed things up so much that human decisions in wartime could seem outdated, as if we were comparing today’s supercomputers with the supercomputers of the 1970s. Even better, we should use the analogy of quantum computing versus the supercomputers of the 1970s.

As we achieve hypersonic technology to speed up the delivery of weapons and modes of travel, self-learning AI will be able to automate things physically at a much faster pace than ever before. Of course, this would leave humans less time than ever to plan when changes occur in wartime. Unless we humans could predict the future, we may use self-learning AI to pre-plan possible scenarios of wartime changes, allowing self-learning AI to execute even faster during a war.

Furthermore, self-learning AI could allow the automation of swarming tech to advance further. Imagine a swarm of missiles in which each missile is smart and carries its own decoys. The idea of blocking out the sun with a swarm of smart missiles and decoys while preventing any negative chain reaction among the missiles could be very interesting indeed. What could be automated in the air could also be automated in the sea, so we could expect more of the same smart machines, self-driven to attack targets using the sea as cover and as a travel medium.
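A toy probability sketch of why decoys are so potent (all numbers invented for illustration, and the model is deliberately naive): if one real warhead hides among d indistinguishable decoys and a defender can only engage a couple of the incoming objects at random, the warhead’s survival odds climb quickly with d.

```python
import random

def survival_rate(decoys, shots, trials=100_000, seed=42):
    """Fraction of trials in which the real warhead survives, assuming the
    defender cannot tell decoys apart and engages `shots` objects at random."""
    rng = random.Random(seed)
    objects = decoys + 1  # the decoys plus one real warhead (index 0)
    survived = 0
    for _ in range(trials):
        engaged = rng.sample(range(objects), min(shots, objects))
        if 0 not in engaged:  # the real warhead was never targeted
            survived += 1
    return survived / trials

for d in (0, 3, 9):
    print(f"{d} decoys, 2 shots -> warhead survives ~{survival_rate(d, 2):.0%}")
```

With no decoys the warhead is always engaged; with nine decoys and two shots it slips through roughly four times out of five, matching the closed-form answer 1 − shots/(decoys + 1) for this random-targeting model.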

Weapons and AI could be categorized as ingredients for tactical operations, but if one thinks bigger, one can see that the accumulation of tactical events paints a picture of strategy. Over time, automation will replace the ways we currently conduct war in wartime.

It is normal for us to belittle the continental powers of the past that disregarded naval power even when they faced vast ocean fronts. But we have to remember that before the Industrial Revolution, the ocean was regarded as a natural barrier. Some historic continental powers took comfort in that idea until disaster struck them down for good.

Some historic naval powers were overconfident in their naval strength and didn’t develop their land forces, allowing their only strength to be taken out by their smartened-up adversaries. If I’m not wrong, the Carthaginians, heirs of the Phoenicians, were a naval superpower while the Romans were not. Of course, the Romans turned the tide against the Carthaginians when they figured out how to build ships similar to the Carthaginians’ ones. I think the Romans recovered a wrecked Carthaginian warship on their shore and managed to reverse-engineer it to make copies. Afterward, Carthage’s naval supremacy was history.

In today’s world, I don’t think countries that border the ocean would dare to favor land forces over naval forces or vice versa. Why? Natural barriers are no longer a big deal nowadays. Nowadays we’ve got technology that can go under the sea, on the sea, on the land, over the land, invisibly in the air, and into space. Think you can take comfort in any natural barrier? We could be doing all of these things at hypersonic speed in the very near future. So I think it’s foolish for any country to rely on the outdated strategies of past ages when it has to confront possible adversaries in the age of Automation.

A country such as China is not only thinking about building up a modern naval force to protect the maritime silk road, but it is also building up channels on land to tap into all possible solutions and scenarios. Gone are the days of Zheng He’s downfall, when a new Chinese emperor thought maritime power was useless because he took comfort in a natural barrier. Can we afford to make the same mistake today by relying on natural barriers and other misguided comforts? I don’t think it’s wise to take comfort in anything in the age of Automation, because I think even self-learning AI could be hacked. I’m pretty confident that the wartime strategies of tomorrow will be way different from those of the past.