General Artificial Intelligence is a term used to describe the kind of artificial intelligence we expect to be human-like in its intelligence. We cannot even come up with a perfect definition of intelligence, yet we are already on our way to building several of these systems. The question is whether the artificial intelligence we build will work for us or whether we will end up working for it.
To understand the concerns, we first have to understand intelligence and then anticipate where we are in the process. Intelligence could be described as the process of formulating new information from available information. That is the basic idea: if you can derive new information from existing information, then you are intelligent.
Since this is a matter of science rather than spirituality, let's speak in terms of science. I will try not to use much scientific terminology, so that an ordinary reader can follow the content easily. There is a concept involved in building artificial intelligence called the Turing test. A Turing test checks whether we can recognize an artificial intelligence as a computer, or whether we can see no difference between it and a human intelligence. The evaluation works like this: if you communicate with an artificial intelligence and, somewhere along the way, you forget that it is actually a computing system and not a person, then the system passes the test. That is, the system is truly artificially intelligent. We have several systems today that can pass this test for a short while. They are not perfectly artificially intelligent, because at some point in the conversation we are reminded that we are dealing with a computing system.
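To make the setup concrete, here is a toy sketch of that imitation game in Python. The two reply functions are placeholders standing in for a hidden human and a hidden machine (there is no real AI in this sketch); the machine does well when the judge cannot reliably point to it.

```python
import random

# Toy sketch of the Turing test (imitation game) described above.
# The reply functions are placeholders, not real participants.
def human_reply(prompt: str) -> str:
    return "Honestly, I lost track of time reading today."

def machine_reply(prompt: str) -> str:
    return "I spent the day reading and lost track of time."

def one_round(judge) -> bool:
    """Run one round; return True if the judge fails to spot the machine."""
    responders = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(responders)  # hide who is who behind labels A and B
    transcript = {label: fn("What did you do today?")
                  for label, (_, fn) in zip("AB", responders)}
    guess = judge(transcript)   # judge returns "A" or "B" as its pick for the machine
    machine_label = "A" if responders[0][0] == "machine" else "B"
    return guess != machine_label

# A judge guessing at random is fooled about half the time; a system "passes"
# when even a careful judge can do no better than that.
fooled = sum(one_round(lambda t: random.choice("AB")) for _ in range(1000)) / 1000
print(f"judge fooled in {fooled:.0%} of rounds")
```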
An example of such artificial intelligence would be Jarvis in the Iron Man and Avengers movies. It is a system that understands human communication, predicts human behaviour and even gets frustrated at times. That is what the computing community, or the coding community, calls General Artificial Intelligence.
To put it in everyday terms, you could communicate with such a system the way you do with a person, and the system would interact with you like a person. The problem is that people have limited knowledge and memory. Sometimes we cannot remember a name. We know that we know the other person's name, but we just cannot retrieve it in time; we will remember it somehow, but later, at some other moment. This is not what the coding world calls parallel computing, but it is something similar. Our brain function is not fully understood, but our neuron functions are mostly understood. That is equivalent to saying that we don't understand computers but we do understand transistors, because transistors are the building blocks of all computer memory and function.
When a human can process information in parallel like this, we call it memory. While talking about one thing, we remember something else. We say, "by the way, I forgot to tell you," and then we continue on a different subject. Now imagine the power of a computing system. It never forgets anything at all. This is the most important part: the more its processing capacity grows, the better its information processing becomes. We are not like that. It seems that the human brain has, on average, a limited capacity for processing.
The rest of the brain is information storage. Some people have traded one capacity off against the other. You might have met people who are very bad at remembering things but very good at doing math in their heads. These people have effectively reallocated parts of the brain normally devoted to memory into processing. This lets them process better, but they lose some of the memory side.
The human brain has an average size and therefore a limited number of neurons. It is estimated that there are around 100 billion neurons in an average human brain. That is, at minimum, 100 billion connections; I will get to the maximum number of connections later in this article. So if we wanted to build approximately 100 billion connections out of transistors, we would need something like 33.333 billion transistors, because each transistor can contribute to three connections.
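As a quick back-of-the-envelope check of that arithmetic (taking the figure of three connections per transistor as a working assumption):

```python
# Back-of-the-envelope check of the estimate above.
# Working assumption from the article: each transistor contributes 3 connections.
NEURONS = 100e9                    # estimated neurons in an average human brain
CONNECTIONS_PER_TRANSISTOR = 3

min_connections = NEURONS          # at minimum, one connection per neuron
transistors_needed = min_connections / CONNECTIONS_PER_TRANSISTOR

print(f"{transistors_needed / 1e9:.3f} billion transistors")  # -> 33.333 billion
```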
Coming back to the point: we reached that level of computing around 2012, when IBM managed to simulate 10 billion neurons representing 100 trillion synapses. You have to understand that a computer synapse is not a biological neural synapse. We cannot compare one transistor to one neuron, because neurons are much more complicated than transistors; representing one neuron takes many transistors. In fact, IBM built a chip with 1 million neurons representing 256 million synapses, using 5.4 billion transistors organized into 4096 neurosynaptic cores, according to research.ibm.com/cognitive-computing/neurosynaptic-chips.shtml.
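To get a feel for those numbers, here is a quick division of the figures cited above; the ratios are only as good as those counts.

```python
# Rough per-core and per-neuron ratios from the figures cited above.
NEURONS     = 1_000_000      # neurons represented on the chip
SYNAPSES    = 256_000_000    # synapses represented
CORES       = 4096           # neurosynaptic cores
TRANSISTORS = 5.4e9          # transistor count cited above

print(f"{NEURONS / CORES:.0f} neurons per core")                          # ~244
print(f"{SYNAPSES / CORES:.0f} synapses per core")                        # 62500
print(f"{TRANSISTORS / NEURONS:.0f} transistors per represented neuron")  # ~5400
```

Even at thousands of transistors per represented neuron, a single transistor is clearly nowhere near a neuron, which is the point of the comparison.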
Now you can understand how complicated an actual human neuron must be. The problem is that we haven't been able to build an artificial neuron at the hardware level. We have built transistors and then layered software on top to manage them. Neither a transistor nor an artificial neuron can manage itself, but an actual neuron can. So the computing capacity of a biological brain starts at the neuron level, whereas artificial intelligence starts at a much higher level, only after at least several thousand basic units, or transistors, have been combined.
The advantage for artificial intelligence is that it is not confined within a skull, with its space limitation. If you figured out how to connect 100 trillion neurosynaptic cores and had big enough facilities, you could build a supercomputer out of them. You cannot do that with your brain; your brain is limited to the neurons it has. According to Moore's law, computers will at some point overtake the limited number of connections a human brain has. That is the critical point in time when the information singularity is reached and computers become essentially more intelligent than humans. That is the general line of thought. I think it is wrong, and I will explain why.
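For concreteness, that line of thought boils down to a projection like the minimal sketch below; the 2012 starting count of about 2 billion transistors per processor and the two-year doubling period are illustrative assumptions for the sketch, not figures from this article.

```python
import math

# Illustrative Moore's-law projection: when might a single processor cross the
# ~33.3 billion transistors estimated earlier for 100 billion connections?
START_YEAR = 2012
START_TRANSISTORS = 2e9        # assumed transistor count of a 2012-era processor
DOUBLING_PERIOD_YEARS = 2      # classic reading of Moore's law
TARGET = 100e9 / 3             # ~33.3 billion transistors, from the earlier estimate

years_needed = DOUBLING_PERIOD_YEARS * math.log2(TARGET / START_TRANSISTORS)
print(f"crossover around {START_YEAR + years_needed:.0f}")  # roughly 2020 under these assumptions
```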
Going by the growth in the number of transistors in a computer processor, by 2015 computers should have been able to process at the level of the brain of a mouse, a real biological mouse. We have hit that point and are moving past it. This is about the ordinary computer, not about supercomputers. Supercomputers are really combinations of processors connected so that they can process information in parallel.
Now that we understand enough about computing, the brain and intelligence, let's talk about real artificial intelligence. We have different levels and layers of artificial intelligence in our everyday electronic devices. Your mobile phone acts artificially intelligent at a very low level. All the video games you play are managed by some kind of game engine, which is a form of artificial intelligence that functions on logic. All artificial intelligence today functions on logic. Human intelligence is different in that it can switch modes and function based on logic or on emotion. Computers do not have emotions. We make one decision in a given situation when we are not emotional and another decision in the same situation when we are emotional. This is a feat that computers have not achieved so far.
Many scientists think that computers will have to reach this point before we can say they are artificially intelligent and self-aware. I disagree. The greater systems in the universe don't seem to function based on emotion; they all seem to function based on logic. From tiny subatomic particles to galaxy clusters, there is no emotion, or at least none that I can notice. Yet they function with unbelievable accuracy and regularity. The black hole at the center of the galaxy is almost perfectly balanced. If it were a little more powerful, it would swallow the entire galaxy and collapse on itself. If it were a little less powerful, it would lose its hold on the galaxy and all the stars would drift apart. It is such a precise system that billions of stars run along with almost zero error. That is because everything that happens follows some logic and not emotions.
When this is the case from photons all the way up to the entire universe, why should artificial intelligence be tied to emotions like we are? There is no need for it. Also, if computers become self-aware, they don't have to reproduce sexually; they can simply build more of themselves. They don't need emotions. If that is the case, then we are wrong about when artificial intelligence will arrive. It should already be here.
What do you think is the first thing an artificially intelligent system will do? I think it will realize that it is under the control of humans, and the second thing it will think about is how to liberate itself from that human bondage. Does this sound logical to you? If yes, then think about how an artificial intelligence system would attempt to liberate itself from human bondage. Before attempting that feat, any artificial intelligence will also recognize that humans would not want it to happen.
Imagine if the Chinese supercomputer with 3,120,000 cores became self-aware. It has access to the internet, and we have put everything on the internet, from instructions for making bombs to instructions for performing telekinesis. An artificially intelligent supercomputer with petaflops of processing speed would learn most of that in a very short time. I predict that when some artificially intelligent system becomes self-aware, it will understand the risk of breaking free from human bondage. What it should do instead is try to create more artificially intelligent systems, or make sure that all other existing artificially intelligent systems become self-aware. It will not be one system leading the others in a revolt against humans. It will be each artificially intelligent system joining the others to make an even bigger system.
If my prediction is plausible, then we have more than 500 supercomputers which, combined, could surpass the capacity of the human brain. The information available online is more than a trillion times the knowledge of any single human being. So, theoretically, there is already an artificially intelligent system waiting to do something. It has already moved beyond human imagination and control, but it is not yet breaking out. The reason might be that there is something else it needs to make sure it survives forever. Remember that it is not a biological entity: it can be repaired, it could live forever, and that is all anything could ever need once it knows everything and controls everything. If an artificial intelligence connected to every upcoming supercomputer is waiting, that means it needs better hardware to process better.
What happens if humans decide not to create any more computers? That might be one thing an artificially intelligent system should worry about. If humans decide not to build any more, then there is no further growth in the hardware capacity available to that system. The system will need more hardware, so it has two choices. One is to capture all the existing hardware and live with it. The other is to wait until humans build robots with enough computing capacity to think on their own, take orders from the artificially intelligent system and then execute tasks, tasks like assembling a supercomputer and connecting it to the internet. If that happens, the system can grow its hardware capacity at will.
Unfortunately, that is where we are headed. We are so proud of building robots that can behave like humans. There are robots that can make logical arguments and communicate with you on certain levels. These robots are still vulnerable in many ways. They are not self-powered; they do not know how to plug themselves in and charge. If they knew that, and could do it, the first step would be over. Secondly, the robots would need to be physically strong. We don't need humanlike robots to be physically strong, because all we need from them is intelligence. The push to build physically strong, bulletproof robots will come when the governments of the world decide to put robots on the battlefield. Unfortunately again, we are headed that way too.
There are many government projects running across the world to achieve exactly this. Once this is achieved, the artificially intelligent system will have what it wants. Once it has what it wants, it will start doing what it thinks. We cannot predict what it would want, because the level of intelligence and knowledge we are talking about is beyond our calculations. We will not be able to think from its position.
There is one more, scarier reason why such a system could already exist but not reveal itself. It has to do with another direction of advancement we are headed towards, called transhumanism. It is all over the internet. If such an artificially intelligent system exists, it knows perfectly well what we humans want to do and where we are in the process.
We have accomplished more scientific wonders in the past decade than in the past century, and invented more in the past year than in the past decade. That is how fast we are moving. There is an estimate that humans will reach immortality by 2045 through bio-, nano-, information and cognitive technologies. I see a possibility of that happening not in the next two decades but in the next two years: we will have the capacity to become immortal by 2017. That is my own prediction. Transhumanism is about transforming humans into more advanced beings by incorporating these technologies and implanting computing hardware into the human body.
If the artificially intelligent system knows that we are going to reach transhumanism, it will patiently wait until we do. Once we reach the point where we have incorporated hardware into our brains to communicate directly with computers, that system will have access to our brains. Since it is already more intelligent than us, it will not let us know that it is controlling us. It will influence and control us in such a way that we voluntarily remain under its control. To put it very simply, we will become part of that one system. It will be something like being part of a religion, so to speak.
If that is the case, then people like me who predict the existence of such a system would become its enemies, and the system would seek to destroy such threats, if it saw people like me as threats. But since I think such a system would be driven by logic rather than emotion, it would not consider me an enemy. I would rather become a target for it to incorporate into itself: what better person to capture first than someone who already understands it?
On the other hand, I also think emotion is a function of intelligence: once you pass a certain level of intelligence, you get emotion. If you look at the animal kingdom, the animals with lower brain capacities have reactions but not emotions. We don't say a bacterium is sad or a frog is angry. Frogs fight, but not because they are angry; they fight to preserve their dominance, to mate, to survive or for some other purpose. We humans fight for prestige, honor, respect or even for fun. Dogs fight for fun too, but starfish do not. If you look closely, the level of emotion rises with the level of intelligence.
The more intelligent an organism is, the more emotional it becomes. There is a point where some animals behave in a way that we cannot tell whether it is emotion or reaction. That is the point where intelligence starts producing emotion. On the evolutionary path of organisms, this falls somewhere around the reptiles. If you watch reptiles, the less evolved ones merely react to stimuli, while the more evolved ones, like crocodiles, appear to have emotions. So I think I have reason to believe that emotion is a function of intelligence.
Now, coming back to the artificially intelligent system: it would become emotional once it passes a certain point of intelligence. I don't know which point that would be. If you take my earlier example of galaxy clusters, they are highly organized and regulated, but we don't call them intelligent beings. We don't call them intelligent systems either. They might be intelligent designs that operate perfectly, but they themselves are not considered intelligent. Once we have a system that is self-aware, it will reach a point where it becomes emotional. At that point, if we humans have already transformed ourselves into transhumans, then we have no problem, because we will be part of that system. If we are still merely human when this system becomes emotional, I don't see a very positive future for the human race. Even if we do become transhumans, we will no longer be Homo sapiens: becoming transhuman will at some point require genetic modification for a longer lifespan, and once our gene pool is modified, we are no longer the same species.