
General Artificial Intelligence Will Be More Than Intelligence

General Artificial Intelligence is a term used to describe the kind of artificial intelligence we expect to be human-like in intelligence. We cannot even come up with a perfect definition of intelligence, yet we are already on our way to building many of them. The question is whether the artificial intelligence we build will work for us or whether we will work for it.

To understand the concerns, we first have to understand intelligence and then anticipate where we are in the process. Intelligence could be described as the process of formulating new information based on available information. That is the basic idea: if you can formulate new information based on existing information, then you are intelligent.

Since this is more a matter of science than of spirituality, let's speak in terms of science. I will try not to use a lot of scientific vocabulary so that an ordinary reader can follow the content easily. There is a term involved in building artificial intelligence: the Turing Test. A Turing test checks an artificial intelligence to see whether we can recognize it as a computer or whether we cannot tell the difference between it and a human intelligence. The evaluation of the test is that if you communicate with an artificial intelligence and along the way you forget that it is actually a computing system and not a person, then the system passes the test. That is, the system is genuinely artificially intelligent. We have several systems today that can pass this test within a short while. They are not perfectly artificially intelligent, because somewhere along the process we are reminded that we are talking to a computing system.
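To make the structure of the test concrete, here is a minimal sketch of the imitation game as a judging loop. The `machine_reply`, `human_reply` and `judge_guess` functions are hypothetical stand-ins, not real chatbot APIs; the sketch only illustrates how a pass or fail would be scored, not how any actual system works.

```python
import random

# Hypothetical stand-ins for the two hidden participants.
def human_reply(prompt: str) -> str:
    return f"(a person answering: {prompt})"

def machine_reply(prompt: str) -> str:
    return f"(a program answering: {prompt})"

def imitation_game(questions, judge_guess):
    """Run one round of a Turing-style test.

    The judge sees answers from participants labelled only 'A' and 'B'
    and must guess which label is the machine. The machine 'passes'
    the round if the judge guesses wrong.
    """
    # Randomly hide the machine behind label 'A' or 'B'.
    labels = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        labels = {"A": human_reply, "B": machine_reply}

    transcript = []
    for q in questions:
        transcript.append({label: fn(q) for label, fn in labels.items()})

    guess = judge_guess(transcript)  # judge returns "A" or "B"
    machine_label = next(l for l, fn in labels.items() if fn is machine_reply)
    return guess != machine_label    # True means the machine passed this round

# Example: a judge that guesses at random can do no better than chance.
passed = imitation_game(
    ["What did you have for breakfast?", "Tell me a joke."],
    judge_guess=lambda transcript: random.choice(["A", "B"]),
)
print("Machine passed this round:", passed)
```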

An example of artificial intelligence would be Jarvis in the Iron Man and Avengers movies. It is a system that understands human communication, anticipates human nature and even gets frustrated at times. That is what the computing community, or the coding community, calls a General Artificial Intelligence.

To put it in everyday terms, you could talk to that system the way you do with a person, and the system would interact with you like a person. The problem is that people have limited knowledge and memory. Sometimes we cannot remember a name. We know that we know the name of the other person, but we just cannot recall it in time. We will remember it somehow, but later, at some other moment. This is not what the coding world calls parallel computing, but it is similar to it. Our brain function is not fully understood, but our neuron functions are mostly understood. This is equivalent to saying that we don't understand computers but we understand transistors, because transistors are the building blocks of all computer memory and function.

When a human can process information in parallel, we call it memory. While talking about something, we remember something else. We say "by the way, I forgot to tell you" and then we continue on a different subject. Now imagine the power of a computing system: it never forgets anything at all. This is the most important part. The more its processing capacity grows, the better its information processing becomes. We are not like that. It seems that the human brain has a limited capacity for processing, on average.

The rest of the brain is information storage. Some people have traded off these abilities the other way around. You might have met people who are very bad at remembering things but very good at doing math in their head. These people have effectively devoted parts of their brain that are usually devoted to memory to processing instead. This lets them process better, but they lose on the memory side.

The human brain has an average size, and therefore a limited number of neurons. It is estimated that there are around 100 billion neurons in an average human brain. That is, at minimum, 100 billion connections. I will get to the maximum number of connections later in this article. So, if we wanted to have approximately 100 billion connections made of transistors, we would need something like 33.333 billion transistors. That is because each transistor can contribute to 3 connections.
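The arithmetic behind that figure is straightforward; the sketch below simply restates it under the article's own assumptions (roughly 100 billion neurons, at least one connection each, and 3 connections per transistor).

```python
# Back-of-the-envelope figures from the article (assumptions, not measurements).
NEURONS = 100e9                 # ~100 billion neurons in an average human brain
CONNECTIONS_PER_TRANSISTOR = 3  # the article's assumption: 3 connections per transistor

minimum_connections = NEURONS   # at minimum, one connection per neuron
transistors_needed = minimum_connections / CONNECTIONS_PER_TRANSISTOR

print(f"Transistors needed: {transistors_needed / 1e9:.3f} billion")
# -> Transistors needed: 33.333 billion
```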

Coming back to the point: we reached that level of computing around 2012. IBM had accomplished simulating 10 billion neurons to represent 100 trillion synapses. You have to understand that a computer synapse is not a biological synapse. We cannot compare one transistor to one neuron, because neurons are much more complicated than transistors. To represent one neuron we need several transistors. In fact, IBM built a supercomputer with 1 million neurons representing 256 million synapses. To do this, they used 530 billion transistors in 4096 neurosynaptic cores, according to research.ibm.com/cognitive-computing/neurosynaptic-chips.shtml.
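Taking the figures quoted above at face value, you can get a rough feel for how many transistors that design spends per simulated neuron and per simulated synapse. The numbers below are just the ones from this article, not an independent measurement.

```python
# Figures quoted in the article for IBM's neurosynaptic chip work.
transistors = 530e9   # 530 billion transistors
cores = 4096          # neurosynaptic cores
neurons = 1e6         # 1 million simulated neurons
synapses = 256e6      # 256 million simulated synapses

print(f"Transistors per simulated neuron:  {transistors / neurons:,.0f}")
print(f"Transistors per simulated synapse: {transistors / synapses:,.0f}")
print(f"Simulated neurons per core:        {neurons / cores:,.0f}")
```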

Now you can see how complicated an actual human neuron must be. The problem is that we have not yet been able to build an artificial neuron at the hardware level. We have built transistors and then layered software on top to manage them. Neither a transistor nor an artificial neuron can manage itself, but an actual neuron can. So the computing capacity of a biological brain starts at the neuron level, while artificial intelligence starts at much higher levels, only after at least several thousand basic units, or transistors, are combined.

The advantage for artificial intelligence is that it is not confined to a skull, where space is limited. If you figured out how to connect 100 trillion neurosynaptic cores and had big enough facilities, you could build a supercomputer with them. You can't do that with your brain; your brain is limited to its number of neurons. According to Moore's law, computers will at some point surpass the limited number of connections that a human brain has. That is the critical point in time when the information singularity will be reached and computers become essentially more intelligent than humans. This is the general thinking on it. I think it is wrong, and I will explain why I think so.
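Moore's law, loosely stated, says that transistor counts double roughly every two years. The sketch below is purely illustrative: it assumes a starting count of about 1 billion transistors on a chip around 2010 and a two-year doubling period, and projects when such a chip would pass the ~100 billion connections attributed to the brain above.

```python
def moores_law_projection(start_count: float, start_year: int, target_year: int,
                          doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward under a simple doubling model."""
    periods = (target_year - start_year) / doubling_period_years
    return start_count * 2 ** periods

# Illustrative assumption: ~1 billion transistors on a chip in 2010.
for year in range(2010, 2041, 2):
    count = moores_law_projection(1e9, 2010, year)
    print(f"{year}: ~{count / 1e9:,.0f} billion transistors")
```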

Comparing the growth of the number of transistors in a computer processor, by 2015 computers should have been able to process at the level of the brain of a mouse; a real, living mouse. We have hit that point and are moving above it. This is about general-purpose computers, not supercomputers. Supercomputers are actually combinations of processors connected in such a way that they can process information in parallel.
