Can The Universe Itself Be A Smart AI According To Some Higher Being’s Design?

In the last blog post, “Can Our Universe Expand Forever Or Expand Then Contract Later Just So It Could Die?”, I surmised that our universe (one among many) could expand and contract according to how it gets fed with external energy; such energy would have to come from outside our very own singularity.  Without such an external nursery of energy, I surmised that our universe is like a quantifiable fish aquarium.  Nonetheless, we all know that even a human being can randomize things intelligently at will, so I think, from the universe’s point of view, we sentient beings are the AI (artificial intelligence).  How about I surmise some more and say: what if the universe itself is a higher artificial intelligence that could randomize things at will, expanding and contracting according to circumstances?

We human beings can only see the results of the universe’s expansion and contraction through our own theories, but why would the universe do such a thing?  What’s the point of expanding or contracting?  Expanding to create more empty space for what?  Contracting seems like a suicide attempt, killing the universe off so that it would cease to exist.  Perhaps, then, the universe is like a smart TV or a fishbowl/aquarium designed by a higher being.  This way, the purpose of expanding and contracting would not be a burden the universe has to carry; that burden could be carried by the designer of the universe.

At this point, this is more philosophical thinking than anything concrete, but it’s intriguing nonetheless.  In my opinion, philosophical or not, it’s important for us sentient beings to dig deeper into our origin.  After all, if we cannot remember how we came into being, then we would forever forget our roots and be forever lost, wandering in a dark forest (the second book of the Three-Body Problem sci-fi trilogy is titled The Dark Forest).  I think only when we figure out the true root of how we came into existence can we evolve into something greater.  Perhaps in such a quest we could discover new technology to bring us to new heights; we could grow into even more capable and intelligent sentient beings.


Can Our Universe Expand Forever Or Expand Then Contract Later Just So It Could Die?

From Einstein’s E = mc^2 to the law of conservation of energy, these concepts agree that energy cannot be created or destroyed; after all, this energy has existed since the singularity (even before the Big Bang).  Thus, if I accept these concepts, it means everything within this universe can be deconstructed into the smallest possible units, and each of these smallest units could be counted individually, such that if they were reunited they could reconstruct the whole universe again.  The question is: if this is the case, is our universe static in quantity?
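As a back-of-the-envelope illustration of the mass-energy bookkeeping the paragraph above leans on, here is a small sketch (the constant is the textbook speed of light; the one-kilogram example is my own, not from the post):

```python
# Mass-energy equivalence: E = m * c^2.
# Illustrates the idea that the "stuff" of the universe can be
# counted as a fixed quantity of energy.

C = 299_792_458  # speed of light in m/s (exact, by definition of the metre)

def rest_energy_joules(mass_kg: float) -> float:
    """Energy equivalent of a given rest mass, in joules."""
    return mass_kg * C ** 2

# One kilogram of matter corresponds to roughly 9 x 10^16 joules.
e = rest_energy_joules(1.0)
print(f"{e:.3e} J")  # 8.988e+16 J
```

The point of the sketch is only that the conversion is a fixed ratio: if the total energy is conserved, the total "count" never changes, which is exactly the static-quantity question the paragraph raises.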

I surmise there is another possibility!  What if the first scenario is true but with one exception?  The exception is that outside of the singularity there is a bigger container that could feed more energy into the already constructed universe in which we’re living.  This would mean the quantity of our constructed universe could change according to the limitation of the larger container that contains it.  If this is the case, it could also mean our universe could shrink in size and quantity by somehow shedding existing energy and feeding the lost energy back to the larger container.

Relatively then, from within our universe, it could appear that our universe is infinite, since it could expand forever or shrink forever depending on the situation.  We don’t know the limitation of the larger container, so we can only see the direction of our universe as infinite expansion or infinite contraction, relatively speaking!  There’s a saying that nothing lasts forever, and so we know that even the sun and everything else within our universe has an expiration date.  I suspect our universe could expand forever until the larger container stops feeding energy into it, which would then allow this universe to contract and eventually die off!

House versus Whales!

Let’s say a casino is the house and the gamblers are the whales!  The house knows what bait whales love, and so the house would always, in the end, trap the whales.  Nonetheless, among whales there may be a cunning whale whose ability is to appease the house, so the house’s baits become food instead of traps!  Do you think that when the house is too confident in its own trap system, the trap system would actually become a trap for the house itself?

Perhaps the house would always win in the end, but a cunning whale knows the house’s game plan too well to be caught in the house’s traps.  If the cunning whale isn’t out for blood, the house would always win without knowing it gets siphoned from time to time.  Now, if the cunning whale is out for blood, the house shall collapse!

I think a system that is too rigid to evolve, or too arrogant, can sometimes overlook details that lead to its breakdown.  Furthermore, when a system appears to function too well, it could lead to a belief that the system is perfect!  The system’s owner may never realize that their perfect system is also the weakest link to their wellbeing.

The perfect system’s owner would always think the system is so perfect that there could be no trouble, but I think only a troubled system could be paranoid enough to be self-aware so it could improve over time.  When a troubled system becomes perfect over time, the owner of the once-troubled system may, too, become content with the system and would not employ out-of-the-box thinking to improve it further.  This is when the owner of a newly perfect system would make mistakes.

In conclusion, I think the house may become a big loser when it fails to realize that it needs to keep devising new tactics and strategies to beat the game!  Of course, whales are whales, and so they would take the bait willingly.  Of course, a cunning whale would never be caught in such obvious traps!

Basic Income Is Dead. Long Live Basic Equality!

As Earth’s population grows larger and automation gains traction each day, how many job categories and niches will dwindle before there are none left?  More people mean more jobs are needed to sustain a vibrant society in which the inequality gap could narrow instead of widen.  More automation means more people will lose jobs.  These two factors together are like pouring gasoline onto a fire.

Unethically, such a society could demand that people have fewer children, but such a society needs a strong authoritarian government.  In the West, most governments are democratic, and so such a demand would be outrageous.  Furthermore, such a demand is the mark of a weak society, because the society doesn’t have a solution and thus resorts to forcing a reduction of its population headcount.

A wiser society would not demand a reduction of population headcount; it would have a solution for what’s coming!  What solution?  As of now, there is no clear solution for the two detrimental factors I stated in the first paragraph!  By the way, what is a society?  In my opinion, a society is a group of people who stick together for the benefit of the majority.  The two detrimental factors I described earlier would chip away at most benefits of the majority in our modern societies.

A few governments and groups are trying out basic income as a test case for solving the inequality gap between classes of people in our modern societies.  Nonetheless, small-scale basic income trials most likely won’t yield any good result.  Furthermore, basic income for large countries like the United States and China would be an insane proposition.  No amount of money would be enough to give out to each person in a large country.

I think basic income is kind of screwy too!  For example, the more money the government prints to give out, the more people will spend, thus requiring the money printers to print even more money so the government has enough dough to give out to even more people.  Get the gist?  Once the government tightens its belt, such as by stopping the handouts, the basic income scheme would collapse immediately.  A society that is addicted to basic income could also collapse!

By the way, how would inflation work in a basic income society?  I don’t think I know the answer, as I’ve seen nothing like it ever applied to a large country like the United States or China.  We all know that if inflation climbs too much, everything becomes rather pricey because the supply of money is too large; simply put, too many dollars would chase too few goods.
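The "too many dollars chasing too few goods" worry can be made concrete with the textbook quantity theory of money, MV = PQ: holding velocity and real output fixed, the price level scales with the money supply.  The toy numbers below are my own illustration, not an economic model from any basic income trial:

```python
# Toy illustration of the quantity theory of money: M * V = P * Q.
# With velocity V and real output Q held fixed, the price level P
# rises in proportion to the money supply M -- printing money to
# fund basic income erodes the income's own purchasing power.

def price_level(money_supply: float, velocity: float, real_output: float) -> float:
    """Solve M * V = P * Q for the price level P."""
    return money_supply * velocity / real_output

V, Q = 2.0, 1000.0
p0 = price_level(1000.0, V, Q)   # baseline price level
p1 = price_level(2000.0, V, Q)   # money supply doubled
print(p1 / p0)  # 2.0 -- prices double, so each dollar buys half as much
```

This is of course a drastically simplified identity; in reality velocity and output both move.  But it captures the feedback loop described above: every round of printing pushes prices up, which invites more printing.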

As the number of job losses increases and automation gains worldwide prominence, the tipping point becomes all too real when a society grows desperate and mad.  Nonetheless, since an advanced society could produce just about anything with little effort using automation, the tipping point could also occur positively, as people would no longer need to make a living working the field, factory, office and so forth.

The question is, during the transition from a working society to a leisure society, how many people would have to die and how many revolutions would have to occur before the storm could pass and peace could form?  Basic income could work as a dirty solution until modern society completely transforms into a leisure society!  The question is, will the governments of the world dare to print an unlimited amount of money before inflation hits and destroys the hope and dream of transforming a modern society into a leisure one?

Perhaps basic income is too draconian and would not work.  Perhaps providing a fair playing field for the newcomers would work?  What do I mean?  Imagine basic income is not basic income but a one-time thing for the poor and the newborns!  What do I mean again?  Well, basic income is too hard to carry out, as it requires the governments of the world to continuously print an unlimited amount of money each year.  Instead of basic income, why not basic equality for the poor and the newborns?

What do I mean by basic equality for the poor and the newborns?  Well, let’s say the government calculates the right amount of money each person needs to live a fulfilling life, as long as such a person does not do anything too crazy to destroy the money cache quickly, such as using drugs, gambling, and whatnot.  Then the government would give a one-time basic income to all the poor in its own country to provide a level playing field.  Obviously, the rich won’t need any basic income, so the government can save money by not giving any to the rich through the basic income channel!

Basic equality would save a prudent government a lot of money, and yet its society would be able to function in a jobless era.  All newborns could also receive a one-time basic income in the form of a trust fund that the government would create for them.  The trust fund would go to the parents of the newborns for a while, until the newborns become adults.  Once the newborns reach adulthood, the government could then give them basic equality (a one-time basic income) adjusted according to the inflation rate in their time.
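The "according to the inflation rate in their time" step is just compound-growth arithmetic.  A minimal sketch, where the base amount, the 2% rate, and the 18-year horizon are all made-up placeholders of mine rather than figures from the post:

```python
# Sketch of indexing a one-time "basic equality" grant to inflation.
# The base amount, rate, and horizon below are hypothetical.

def indexed_grant(base_amount: float, annual_inflation: float, years: int) -> float:
    """Grow a base grant by compounded annual inflation over `years` years."""
    return base_amount * (1 + annual_inflation) ** years

# A grant pegged at 100,000 today, paid out when a newborn turns 18,
# assuming 2% annual inflation over those 18 years:
payout = indexed_grant(100_000, 0.02, 18)
print(f"{payout:,.0f}")  # 142,825
```

The design point is that a one-time payout only needs this single indexing calculation per recipient, whereas a recurring basic income has to be re-funded and re-indexed every year.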

Of course, the hope is that basic equality would buy time for modern societies across the world to transform into leisure societies.  The idea of basic equality, a one-time basic income, is to leave nobody behind while buying time for the governments of the world to see their societies transform into leisure societies where automation provides everything everyone needs.  When everybody has everything and more, money becomes irrelevant!  In such a society, money won’t buy anything!  In a leisure society, only the smart, funny, easygoing, talented ones could become real assets of the world!

Meeting A Cousin From Another Galaxy!

Imagine that our universe is already predestined and whatever needs to be accounted for has probably already been counted ahead of time.  Imagine a hand of God assigning one or multiple meaningful ways to combine the fates of his/her creations.  Perhaps a hand of God decided that each of his creations is made of binary numbers, or perhaps another method is in play instead.  Anyway, let’s assume that a hand of God decided his creations should be made of binary numbers.  In this universe, the binary numbers would be the ones and the zeros.  Combining these ones and zeros would form a unique intelligent being.  Through a chance/random process, the same unique intelligent being could be created more than once, because let’s assume the hand of God didn’t specify a rule that prevented such a thing from happening.

So, imagining such a universe is real, wouldn’t you want to know whether perhaps we humans are not special?  Could it be that through such a randomized process (with repeated chances) human DNA isn’t that unique in the universe?  As I alluded to earlier, if the universe is already equipped with the proper configurations, then it should not be too hard for the universe to give rise to chances so that the process of creating humans would occur all over again.  Sometimes the process of recreating the same intelligent being could be reproduced more than once through chance.  If this is mathematically possible in our real (not imagined) universe, then I wouldn’t be surprised if there is another human race or races out there occupying another planet within this universe!  So, is it too outrageous to think that we could meet an alien, not from this galaxy, that has two hands, ten fingers, a nose, a beautiful or handsome face, and similarly charismatic, intelligent mannerisms?
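The "same being recreated by chance" intuition is essentially the birthday problem: with a finite number of possible configurations and enough independent random draws, a repeat becomes likely.  A toy estimate, where the configuration count and number of draws are arbitrary assumptions of mine purely for illustration:

```python
import math

# Birthday-problem estimate: probability that at least two of `draws`
# uniform random picks from `n_configs` possible configurations collide.
# Standard approximation: P(collision) ~ 1 - exp(-draws^2 / (2 * n_configs))

def collision_probability(n_configs: float, draws: float) -> float:
    return 1.0 - math.exp(-(draws ** 2) / (2.0 * n_configs))

# With 2^40 possible "beings" (an arbitrary toy number) and about
# 2^21 random draws, a duplicate is already more likely than not.
p = collision_probability(2.0 ** 40, 2.0 ** 21)
print(round(p, 3))  # 0.865
```

The striking feature is that the number of draws needed for a likely repeat grows only with the square root of the number of configurations, which is why "the same being twice" is far less improbable than naive intuition suggests.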

In my opinion, I don’t think it is so outrageous to think such a possibility could exist out there.  Of course, we could meet aliens that are out of this world and share no feature with us at all.  But I don’t think we should discount the possibility that we could just meet up with ourselves from another galaxy!  It’s like a cousin we never knew we had!

Can the age of Automation Change How We Conduct Wars of Tomorrow?

Playing games like Total War: Attila got me thinking about strategies.  Obviously, I’m just a keyboard commander with no real experience in this sort of thing.  Still, I want to dig into it anyway.  So, I was thinking that since the Industrial Revolution, machines have made the world much smaller, giving way to faster communication and faster travel through hard-to-traverse arteries such as the vast ocean.  These monumental Industrial Revolution byproducts changed how the world conducted its wars, because before the Industrial Revolution, wartime strategies had to account for how much time it would take for something to be set up and executed.  Of course, in today’s world of advanced AI, the Internet, encryption, quantum machines, hypersonic missiles and so forth, we still have to account for time as a necessary ingredient in wartime strategy.  So imagine how much more important time was as an ingredient in wartime before the Industrial Revolution.  Nonetheless, I think we’re in the post-Industrial Revolution period now, because the age of Automation is upon us.

My question is, can the age of Automation change almost everything that represents the Industrial Revolution?  After all, we witnessed how the age of the Industrial Revolution changed the things of the age before it, right?  In my opinion, the age of Automation will create and change things that will outdate, if not all, then most of the Industrial Revolution’s byproducts.  For example, wartime strategies will have to change to fit the timescales of the age of Automation.

One thing is for sure: in the age of Automation, time is an even more important ingredient than ever before, because everything will speed up so much.  Imagine the automation of artificial intelligence, such as self-learning machines that speed up their own intelligence so they can self-regulate, self-plan, and self-execute directives according to the common sense that humans drill into their logic programs.  Well, I think since AlphaGo, self-learning AI has actually already happened.  In my opinion, self-learning AI may speed things up so much that human decisions in wartime would seem outdated, as if we were comparing today’s supercomputers with the supercomputers of the 1970s.  Even better, we should use the analogy of quantum computing versus the supercomputing of the 1970s.

As we achieve hypersonic technology to speed up the delivery of weapons and modes of travel, self-learning AI will be able to automate things at a much faster physical pace than ever before.  Of course, this would leave humans less time than ever to plan when changes occur in wartime.  Unless we humans could predict the future, we may use self-learning AI to pre-plan possible scenarios of wartime changes, allowing self-learning AI to execute even faster during a war.

Furthermore, self-learning AI could allow the automation of swarming tech to advance further.  Imagine a swarm of missiles in which each missile is smart and carries its own decoys.  The idea of blocking out the sun with a swarm of smart missiles and decoys, while at the same time preventing a negative chain reaction among the missiles, could be very interesting indeed.  What could be automated in the air could also be automated in the sea, and so we could expect more of the same smart machines, self-driven to attack targets using the sea as cover and as a travel medium.

Weapons and AI could be categorized as ingredients for tactical operations, but if one thinks bigger, one can see that the accumulation of tactical events paints a picture of strategy.  Over time, automation would replace the ways we conduct war in wartime.

It is normal for us to belittle continental powers of the past that disregarded naval power even when they faced vast ocean fronts.  But we have to remember that before the age of the Industrial Revolution, the ocean was regarded as a natural barrier.  Some historic continental powers took comfort in that idea until disaster struck them down for good.

Some historic naval powers were overconfident in their naval strength and didn’t develop their land forces, allowing their only strength to be taken out by their smartened-up adversaries.  If I’m not wrong, the Carthaginians (heirs of the Phoenicians) were a naval superpower, but the Romans were not.  Of course, the Romans turned the tide against the Carthaginians when they figured out how to build ships similar to the Carthaginians’ ones.  I think the Romans captured a wrecked Carthaginian ship on their shore and managed to reverse-engineer it to make copies.  Afterward, Carthage was history.

In today’s world, I don’t think countries that border the ocean would dare to favor land forces over naval forces or vice versa.  Why?  Natural barriers are no longer a big deal nowadays.  Nowadays we have technology that can go under the sea, on the sea, on the land, over the land, invisibly through the air, and into space; do you think you can take comfort in any natural barrier?  We could be doing all of these things at hypersonic speed in the very near future.  So I think it’s foolish for any country to rely on outdated strategies of past ages when such a country has to confront possible adversaries in the age of Automation.

A country such as China is not only thinking about building up a modern naval force to protect the maritime Silk Road, but it is also building up channels on land to tap into all possible solutions and scenarios.  Gone are the days of Zheng He’s downfall, when a new Chinese emperor thought maritime power was useless because he took comfort in a natural barrier.  Could we afford to make the same mistakes today by relying on natural barriers and other misguided comforts?  I don’t think it’s wise to take any comfort in the age of Automation, because I think even self-learning AI could be hacked.  I’m pretty confident that tomorrow’s wartime strategies will be way different from those of the past.