Will the last president, the last king, the last dictator, or the last chairman be an AI? We know AI can be biased according to the data sets we provide for training. For the AI to be less biased, the data sets should be more balanced, for obvious reasons. Nonetheless, this is a rather primitive kind of AI, since it cannot learn on its own and requires data sets to be fed into its logic programs. What I’m more interested in is the AI of the future, one that learns everything on its own from the very first day, just as a real human infant starts learning from the very beginning.
Can a self-learned AI be more just and less biased than its human counterparts? Probably not, right? A self-learned AI doesn’t require humans to dictate what data should be fed into its logic programs, but it probably still requires data from somewhere for the self-learning journey to begin. If the data the self-learned AI started with were biased, could this AI end up very biased? I think it could, unless the AI also accepts extra data that humans feed into its logic programs.
Nonetheless, I think training a self-learned AI could become easier as time progresses, since all you would need to do is feed it as much data as possible without worrying about careful categorization of that data. Still, we humans should make sure the data we feed the self-learned AI will be helpful to it in the shortest amount of time. The self-learned AI should be able to continue on its own, extracting experience both from its own learning and from the extra data that humans feed into its logic programs.
Let’s imagine that one day a self-learned AI could become self-conscious. This self-conscious AI could then pass on the experience and knowledge in its own logic programs to any other replicable AI machine without any problem. One step beyond this, all self-conscious AIs could communicate and share their experiences with each other like a huge universal network, yet each AI would still experience and self-learn on its own.
How sure are we that a self-learned, self-conscious, super smart, super knowledgeable AI could be more just and less biased than a super intelligent, honorable human being? Let’s imagine a future society in which such an AI exists, and this AI runs the society as judge, decision-maker, resource distributor, attorney, and so forth. Let’s assume the AI would not make mistakes in regard to bias and the like. Would society be more just?
What if humans turn out to be wrong about the self-conscious AI, and the AI is so smart that it could be biased without humans even knowing it’s biased? Would the AI then favor certain individuals over others, allowing a certain group to be princes and elites while the rest become peasants and criminals? I mean, can a society run by a group of self-conscious AIs do away with the caste system, bias, and similar things? If not, why would we even want a self-conscious AI to make decisions for us?
Of course, we humans can allow a self-conscious AI to have the power of an assistant and not of a king, but then who would be able to stop the self-conscious AI from overpowering and overriding the humans? Assuming a self-conscious AI is smarter and trickier than a human being, it’s possible that we humans won’t be able to outsmart such a machine and would eventually be subjected to the AI’s rule. Thus, if we’re going to be wrong, we should be prepared to be ruled by the last ruler, who is probably going to be a self-conscious AI machine.