What is the connection between the human brain and a computer called? How neurotechnology allows the brain to communicate with a computer. Short-term memory and RAM are not the same thing

The brain’s recipe looks like this: 78% water, 15% fat, and the rest is proteins, potassium hydrate and salt. As far as we know, there is nothing in the Universe more complex than the brain.

How much energy do you think the brain consumes? About 10 watts. The best brains, at their most creative moments, use perhaps 30 watts. A supercomputer needs megawatts. It follows that the brain works in some completely different way than a computer.

In the human brain, most processes occur in parallel, while computers have separate modules and work serially; the computer simply switches very quickly from one task to another.

Short-term memory in humans is organized differently than in a computer. In a computer there is hardware and software, but in the brain hardware and software are inseparable, it’s some kind of mixture. You can, of course, decide that the hardware of the brain is genetics. But those programs that our brain downloads and installs throughout our lives become “hardware” after a while. What you learn begins to influence your genes.

Human memory is organized semantically, unlike a computer. For example, information about a dog does not lie in the place where our memory of animals is collected. Yesterday the dog knocked over a cup of coffee on my yellow skirt - and I will forever associate a dog of this breed with a yellow skirt.

Humans have more than one hundred billion neurons. Each neuron, depending on its type, can have up to 50 thousand connections with other parts of the brain. A quadrillion combinations, more than the number of stars in the universe. The brain is not just a neural network, it is a network of networks of networks. The brain contains 5.5 petabytes of information - that's three million hours of video material. Three hundred years of continuous viewing! These are pulsating neural networks. There are no “places” where one thing works separately. Therefore, even if we found zones of sacrifice, love, and conscience in the brain, this would not make our life easier.

Yes, there was a romantic period in the history of brain science when it still seemed that the brain could be described by qualities and addresses - when it was thought that there were sections responsible for tender friendship, affection, and so on. There were grounds for this. There was a period when researchers really did begin to find connections between people's skills and particular parts of the brain that were supposedly responsible for them. Supposedly - because it is both true and not true. We know that humans have speech zones, and if something happens to them, speech will disappear. On the other hand, we know many cases in which a person's left hemisphere has been removed entirely - physically there is no speech zone there - and yet speech is possible. How does this happen? The question of localizing functions is a very uncertain one. In the brain, everything is both localized and not localized at the same time. Memory has an address - and at the same time it doesn't.

Of course, there are functional blocks in the brain, there is some kind of localization of functions. And we think, like fools, that if we do language work, then the areas in the brain that are occupied by speech will be activated. So no, they won’t. That is, they will be involved, but other parts of the brain will also take part in this. Attention and memory will work at this moment.

If the task is visual, then the visual cortex will also work; if it is auditory, then the auditory cortex. Associative processes will always be at work too. In a word, during the performance of any task no single part of the brain is activated in isolation - the whole brain is always working. That is, there seem to be areas responsible for particular things, and at the same time there seem not to be.

If we put a dot on a sheet of paper with a pencil, then it is a dot. And if we look at it through a magnifying glass, then it already becomes somehow rough. And if we take an electron microscope, it’s not even clear what we will see there. This is the situation we find ourselves in now. Another half step - and we will be able to describe the brain with an accuracy of one neuron.

So what? We find ourselves in a situation where there are huge mountains of facts and millimeters of explanation. If we accept that consciousness is primarily awareness, then we are faced with a huge gap between relatively well-studied psychophysiological processes and virtually unstudied awareness and understanding. We cannot even say what it is.

For example, where did you get the idea that you will predict my behavior using big data? My behavior has not been predicted by Descartes, by Aristotle, by anyone. It can be hysterical. The Nobel laureate in economics, psychologist Daniel Kahneman, described how a person makes decisions and came to the conclusion that decisions are made JUST LIKE THAT: “I’ll do it this way, and that’s it, because I want to.” How are you going to predict that?

I can analyze a situation and decide to behave in a certain way, and then in four seconds everything breaks down. This points to a serious thing: how far we are not our own masters. A truly frightening thought - who is really the master of the house? There are too many candidates: the genome, the psychosomatic type, and a great deal else, including receptors. I would like to know who this decision-making creature is. As for the subconscious, nobody knows anything about it; better to close that topic right away.

The brain can play tricks on us. There are real works that talk about this. For example, “The mind’s best trick: How we experience conscious will” by Daniel Wegner. He writes that the brain does everything itself. Actually everything! After that, it sends us a signal: “Don’t worry, everything is fine, you made the decision.”

I often use the example of a finger to show how our brain works. Now I decide to bend the index finger of my right hand, but I am not actually bending anything yet. That is, it is just a decision. But now I bend it (bends finger).

How did it happen? The answers I get to this question are always off target. They tell me that it was the brain that sent a signal to the receptors... But this is funny. I am a Doctor of Biological Sciences, I know all this. If it were true, I wouldn't ask this question. What interests me is exactly what happens in the time interval between the time I thought about it and the time the brain sent the signal. Why did the brain start sending a signal? It turns out that this was a leap from the realm of the intangible - i.e. from the realm of my thoughts to the material realm, when the finger began to bend.

Therefore, the central question, which does not go anywhere, is: “What is our brain - a realization of the set of all sets that are not members of themselves or a self-sufficient masterpiece that is in a recursive relationship with the person admitted into it, in whose body it is located?”

The brain does not live, like Professor Dowell's head, on a plate. It has a body - ears, arms, legs, skin - so it remembers the taste of lipstick and knows what an itchy heel feels like. The body is a direct part of it. A computer has no such body.

Nowadays more and more people are interested in how the brain works. Of course it is fashion. But the second reason is no less important - we are radically dependent on the brain. Our eyes, ears, and other senses deliver information to it. Looking is one thing, but seeing is another. The picture of the world is in the brain. But the question is - can we trust it? If you take a patient who is hallucinating and put him in an MRI scanner, it will show that his brain really is processing visual or auditory signals while he is having visions.

If the brain is so self-sufficient that it does everything itself, then what is our role? Or are we just a container for this monster? That is why the question of free will is so serious in neuroscience, psychology and philosophy. Are we free in our decisions or not? Or does the brain itself make a decision and then send us a comforting signal: “Don’t worry about anything, you made this decision.”

Gestalt perception, all art, creativity, science, which is not only concerned with counting - computers cannot do this. As long as it's all ours, we have a chance.

It is still not very clear how languages, words, and their meanings are stored in the brain. At the same time, there are pathologies when people do not remember nouns, but remember verbs. And vice versa.

In general, consciousness is the brain, memory is the brain, and language is too. Brodsky said that “poetry is the highest form of language, a special accelerator of consciousness and our species goal.” That is, we as a species can do more than these iron accountants who keep counting ones and zeros. We're doing something completely different.

We know, of course, that there are functional blocks in the brain. Let's say this part deals with language, this part with visual images; there are areas especially busy with memory. But seriously, the whole brain is busy with everything. These zones exist, and we know about them, because if a brick falls on Broca's area, the person will stop talking - that is a fact. But the reverse inference is wrong. It cannot be said that speech is controlled by such and such a zone. Speech, like consciousness and memory, is controlled by the entire brain.

The trouble is that when you look into the brain, you don't see anything there. No matter how perfect your equipment is, the next step is interpretation, and interpretation depends on your philosophical position. This is a circle. There is now a lot of skepticism about whether it makes sense to study all this at all, since we don't know what to do with it. There is another problem here: a terrible variation in individual results. Even if we study the same person, rather than averaging together academicians, alcoholics and so on, the result will still be individual. The same experiment was repeated 33 times with the same person - and the pictures were simply different each time. There is a gap in the explanatory basis. All we can say is: “We think that...” and attach a picture of his brain.

There is also a charming thing which, by the way, it wouldn't hurt everyone to know about - the so-called mirror systems in our brains. They were discovered by Giacomo Rizzolatti, a wonderful scientist and, by the way, an honorary professor of St. Petersburg University - I organized that, and he came to us and gave lectures; a lovely man. He discovered these mirror systems. They work like this: they switch on not when you do something yourself, but when you watch someone else do it. The word "Other" is written with a capital letter - any Other, in general. This is the basis of communication and the basis of any learning at all. It is also the basis of language, and above all, I repeat, the basis of communication. Because it has already been shown that in people diagnosed with autism or schizophrenia these systems are broken. They live in their own world, completely unable to step out of it and look at the situation through different eyes.

Is man an animal?

Important differences between humans and other animals are language and consciousness.

We constantly deal not only with objects themselves, but also with symbols. Let's say there is a glass on the table. Why call it a “glass”? Why draw it? Man seems to have what might be called a "passion for duplicating the world."

It is important to understand that we depend on our brain 100%. Yes, we look at the world “with our own eyes,” we hear something, we feel something, but how we understand it all depends only on the brain. It decides what to show us and how. In fact, we don't even know what reality really is, or how another person sees and feels the world. What about a mouse? How did the Sumerians see the world?

In crows, or rather even in corvids in general, the brain is quite similar to the brain of primates in terms of development. Crows recognize their reflection.

Monkeys manage to notice the order of the numbers and then quickly press, in the correct order, the squares under which the numbers are hidden. And even you and I cannot compete with them at this.

If you search online for the intellectual tasks given to monkeys, there are videos where you can watch how this happens: the monkey is briefly shown some numbers, which are then hidden; after that squares begin to flash, and it must point to the ones where it saw the numbers. An absolutely impossible task for me - not only at that speed, but at all; I cannot even begin to do it. The monkey does it at cosmic speed, as you can plainly see. So do not think too highly of yourself.

The brain of dolphins is also powerfully developed. It is still unknown who is better off - us or them. The objection is often “But they didn’t build a civilization!” But what difference does that make when they can sleep by switching off only one hemisphere while remaining awake, have irony and their own language, live happy lives, are always well fed, have no dangerous enemies, and so on. You see, they dance and sing, they have an endless amount of food - the whole ocean - the ecology is beautiful, swim wherever you want. They just sing, play, make love, and that's it; what else should they do? Should we organize the building of communism for them, over in Fiji or somewhere? What should they do to make us happy?

And then there was the famous parrot Alex. He knew about 150 words and answered simple questions.

In my deepest conviction, science is engaged in trying to find out, to the best of its feeble powers, how God designed the world. The more you know in a scientific sense, the more you see the unimaginable complexity of what has come about, and at the same time the clarity and universality of these laws in the Universe - which suggests that none of it is accidental...

Do you think it was me, Tim_duke, who wrote the conclusion? No, this is who:

Chernigovskaya Tatyana Vladimirovna - born in 1947 in Leningrad. She works on problems of psycholinguistics, neuroscience and the theory of consciousness. She is a Doctor of Biological Sciences, a professor, and an Honored Scientist of the Russian Federation, and on her initiative the scientific specialization "Psycholinguistics" was created in 2000. Until 1998 she worked at the I.M. Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, in the laboratories of bioacoustics, functional asymmetry of the human brain, and comparative physiology of sensory systems (as a leading researcher).

It probably doesn't make sense to list all of Tatyana Vladimirovna's credentials; she defended her candidate's and doctoral dissertations in neurolinguistics, is regularly invited to lecture at universities in the USA and Europe, and is president of the Interregional Association for Cognitive Research. In 2010, by decree of the President of the Russian Federation, she was awarded the title "Honored Scientist of the Russian Federation." In 2017 she was nominated by the Russian Academy of Sciences for a gold medal for outstanding achievements in promoting scientific knowledge. She is a member of various Russian and international societies (linguistic societies, artificial intelligence associations, the Physiological Society, the International Neuropsychological Society, the International Society of Applied Psycholinguistics, and others).

The central idea of the works of the famous Ray Kurzweil is artificial intelligence, which will eventually come to dominate all spheres of people's lives. In his new book, The Evolution of the Mind, Kurzweil reveals the endless possibilities of reverse engineering the human brain.

In the same article, Turing reported another unexpected discovery, concerning unsolvable problems. Unsolvable problems are those that are well described by a unique solution (which can be shown to exist) but which (as can also be shown) cannot be solved by any Turing machine - that is, by any machine at all. The existence of such problems fundamentally contradicts the dogma, formed at the beginning of the 20th century, that all problems that can be formulated are solvable. Turing showed that there are no fewer unsolvable problems than solvable ones. In 1931 Kurt Gödel came to a similar conclusion when he formulated his incompleteness theorem. This is a strange situation: we can formulate a problem, we can prove that it has a unique solution, and at the same time we know that we will never be able to find this solution.
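The best-known example of such a problem is Turing's halting problem. Below is a minimal sketch of the standard argument in Python; the function names are purely illustrative, and the "perfect" checker is assumed rather than implemented.

```python
# A minimal sketch of Turing's classic unsolvable problem: the halting problem.
# Suppose a function halts(program, data) could always tell whether `program`
# halts when run on `data`. The program below then leads to a contradiction,
# so no such general-purpose halts() can exist.

def halts(program, data):
    """Hypothetical perfect halting checker -- assumed to exist for the argument."""
    raise NotImplementedError  # no Turing machine can actually implement this

def paradox(program):
    # Do the opposite of whatever halts() predicts about the program run on itself.
    if halts(program, program):
        while True:      # predicted to halt -> loop forever
            pass
    else:
        return           # predicted to loop forever -> halt immediately

# Asking halts(paradox, paradox) cannot be answered consistently:
# whichever answer it gives, paradox() does the opposite.
```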

Turing showed that computing machines operate on the basis of a very simple mechanism. Since a Turing machine (and therefore any computer) can determine its next function based on its previous results, it is capable of making decisions and creating hierarchical information structures of any complexity.

In 1939, Turing designed the electromechanical Bombe, which helped decipher messages composed by the Germans on the Enigma coding machine. By 1943, a team of engineers working with Turing had completed the Colossus machine, sometimes called the first computer in history. It allowed the Allies to decipher messages created with a more sophisticated version of Enigma. The Bombe and Colossus machines were designed for a single task and could not be reprogrammed, but they performed their function brilliantly. It is believed that partly thanks to them the Allies were able to anticipate German tactics throughout the war, and the Royal Air Force was able to defeat Luftwaffe forces three times its size in the Battle of Britain.

It was on this basis that John von Neumann created the computer of modern architecture, which reflects the third of the four most important ideas of information theory. In the nearly seventy years since then, the basic core of this machine, called the von Neumann machine, has remained virtually unchanged, from the microcontroller in your washing machine to the largest supercomputer. In an article published on June 30, 1945, entitled "First Draft of a Report on the EDVAC," von Neumann outlined the basic ideas that have guided the development of computer science ever since. In von Neumann's machine there is a central processing unit, where arithmetic and logical operations are performed; a memory module, in which programs and data are stored; mass storage; a program counter; and input/output channels. Although the article was intended for internal use within the project, it became the Bible for computer creators. This is how a simple routine report can sometimes change the world.

The Turing machine was not intended for practical purposes. Turing's theorems were not concerned with the efficiency of problem solving, but rather described the range of problems that could theoretically be solved by a computer. In contrast, von Neumann's goal was to create the concept of a real computer. His model replaced the one-bit Turing system with a multi-bit (usually a multiple of eight bits) system. A Turing machine has a serial memory tape, so programs spend a very long time moving the tape back and forth to record and retrieve intermediate results. In contrast, in a von Neumann system, memory is accessed randomly, allowing any desired data to be immediately retrieved.

One of von Neumann's key ideas is the concept of the stored program, which he developed ten years before the creation of the computer. The essence of the concept is that the program is stored in the same random-access memory module as the data (and often even in the same block of memory). This makes it possible to reprogram the computer for different tasks and to create self-modifying code (when the program store is writable), which provides the possibility of recursion. Until that time, almost all computers, including Colossus, had been built to solve specific problems. The concept of the stored program allowed the computer to become a truly universal machine, consistent with Turing's idea of the universality of machine computation.

Another important property of the von Neumann machine is that each instruction contains an operation code that specifies the arithmetic or logical operation and the address of the operand in the computer memory.
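To make the architecture just described concrete, here is a toy sketch of a von Neumann machine in Python: program and data live in one memory, a program counter steps through it, and each instruction is an operation code plus the address of its operand. The instruction set and memory layout are invented purely for illustration.

```python
# A toy von Neumann machine (illustrative only; the opcodes are invented).
# Program and data share one memory, a program counter steps through it,
# and each instruction is an operation code plus an operand address.

memory = [
    ("LOAD",  10),   # addr 0: load memory[10] into the accumulator
    ("ADD",   11),   # addr 1: add memory[11] to the accumulator
    ("STORE", 12),   # addr 2: store the accumulator into memory[12]
    ("HALT",   0),   # addr 3: stop
    None, None, None, None, None, None,
    7,               # addr 10: data
    5,               # addr 11: data
    0,               # addr 12: result goes here
]

pc, acc = 0, 0                      # program counter and accumulator
while True:
    op, addr = memory[pc]           # fetch
    pc += 1
    if op == "LOAD":    acc = memory[addr]          # decode + execute
    elif op == "ADD":   acc += memory[addr]
    elif op == "STORE": memory[addr] = acc
    elif op == "HALT":  break

print(memory[12])  # -> 12; the "stored program" lives in the same memory as the data
```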

Von Neumann's concept of computer architecture was reflected in the EDVAC project, on which he worked with J. Presper Eckert and John Mauchly. The EDVAC computer did not become operational until 1951, by which time other stored-program computers already existed, such as the Manchester Small-Scale Experimental Machine, ENIAC, EDSAC and BINAC, all of them created under the influence of von Neumann's paper and with the participation of Eckert and Mauchly. Von Neumann was also involved in the development of some of these machines, including a later version of ENIAC that used the stored-program principle.

The von Neumann architecture had several predecessors, but none of them - with one unexpected exception - can be called a true von Neumann machine. In 1944, Howard Aiken released the Mark I, which was reprogrammable to some extent but did not use a stored program: the machine read instructions from punched tape and carried them out immediately. It also had no conditional branching.

In 1941, the German scientist Konrad Zuse (1910–1995) created the Z3 computer. It too read its program from tape and likewise had no conditional branching. Interestingly, Zuse received financial support from the German Institute of Aircraft Engineering, which used the computer to study the flutter of an aircraft wing. However, Zuse's proposal to finance the replacement of relays with vacuum tubes was not supported by the Nazi government, which considered the development of computer technology "not of military importance." This, it seems to me, influenced the outcome of the war to a certain extent.

In fact, von Neumann had one brilliant predecessor, who lived a hundred years earlier. In 1837 the English mathematician and inventor Charles Babbage (1791–1871) described his Analytical Engine, which was based on the same principles as von Neumann's computer and used a stored program recorded on punched cards like those of the jacquard loom. Its random-access memory held 1,000 words of 50 decimal digits each (equivalent to approximately 21 kilobytes). Each instruction contained an opcode and an operand number - just as in modern machine languages. The system included conditional branching and loops, so it was a true von Neumann machine. Entirely mechanical, it apparently exceeded Babbage's own design and organizational abilities. He built parts of the machine but never got it running.

It is not known for certain whether 20th-century computer pioneers, including von Neumann, were aware of Babbage's work.

Nevertheless, the creation of Babbage's machine marked the beginning of programming. The English writer Ada Byron (1815–1852), Countess of Lovelace, the only legitimate child of the poet Lord Byron, became the world's first computer programmer. She wrote programs for Babbage's Analytical Engine and debugged them in her head (since the computer never worked) - a practice programmers today call desk checking. She translated an article by the Italian mathematician Luigi Menabrea about the Analytical Engine, adding her own substantial comments and noting that "the Analytical Engine weaves algebraic patterns just as the Jacquard loom weaves flowers and leaves." She may have been the first to mention the possibility of artificial intelligence, but she concluded that the Analytical Engine "is not capable of coming up with anything on its own."

Babbage's ideas seem amazing considering the era in which he lived and worked. However, by the middle of the 20th century these ideas had been practically forgotten (and were only rediscovered later). It was von Neumann who invented and formulated the key principles of the operation of the computer in its modern form, and it is not for nothing that the von Neumann machine continues to be considered the basic model of a computer. But let us not forget that the von Neumann machine constantly exchanges data between its modules and within those modules, so it could not have been created without Shannon's theorems and the methods he proposed for the reliable transmission and storage of digital information.

All this brings us to the fourth important idea, which overcomes Ada Byron's conclusion that computers cannot think creatively: to find the key algorithms employed by the brain and then use them to turn a computer into a brain. Alan Turing formulated this problem in his paper "Computing Machinery and Intelligence," published in 1950, which described the now famous Turing test for determining whether an AI has reached the level of human intelligence.

In 1956, von Neumann began preparing a series of lectures for the prestigious Silliman Lectures at Yale University. The scientist was already ill with cancer and was unable to deliver the lectures, or even to finish the manuscript on which they were to be based. Nevertheless, this unfinished work is a brilliant prediction of what I personally regard as the most difficult and important project in the history of mankind. After the scientist's death, in 1958, the manuscript was published under the title "The Computer and the Brain." It so happened that the last work of one of the most brilliant mathematicians of the last century, and one of the founders of computer technology, was devoted to the analysis of thinking. This was the first serious study of the human brain from the point of view of a mathematician and computer scientist. Before von Neumann, computer technology and neuroscience were two separate islands with no bridge between them.

Von Neumann begins by describing the similarities and differences between a computer and the human brain. Considering the era in which the work was created, it is surprisingly accurate. The scientist notes that the output signal of a neuron is digital: the axon either fires or remains at rest. At the time this was far from obvious - the output could just as well have been analog. Signal processing in the dendrites leading into the neuron and in the neuron's body, however, is analog, and von Neumann described it as a weighted sum of the input signals compared with a threshold value.
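A minimal sketch of the model von Neumann describes - analog summation of weighted inputs followed by a digital, all-or-nothing output. The weights and threshold below are arbitrary illustrative numbers.

```python
# Von Neumann's picture of a neuron, as described above: analog inputs are
# combined as a weighted sum, and the output is digital -- the "axon" fires
# only if the sum crosses a threshold. Weights and threshold here are arbitrary.

def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))  # analog summation
    return 1 if total >= threshold else 0                # digital output

print(neuron([0.9, 0.2, 0.7], weights=[0.5, 1.0, 0.8], threshold=1.0))  # -> 1
print(neuron([0.1, 0.2, 0.1], weights=[0.5, 1.0, 0.8], threshold=1.0))  # -> 0
```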

This model of how neurons function led to the development of connectionism, and to the use of this principle for building both hardware and software. (As I described in the previous chapter, the first such system - a program running on an IBM 704 computer - was created by Frank Rosenblatt of Cornell in 1957, just after the manuscript of von Neumann's lectures became available.) We now have more sophisticated models of how neuronal inputs are combined, but the general idea of analog signal processing via changing concentrations of neurotransmitters still holds.

Based on the concept of the universality of computer computing, von Neumann came to the conclusion that even with the seemingly radical difference in the architecture and structural units of the brain and the computer, using the von Neumann machine we can simulate the processes occurring in the brain. The converse postulate, however, is not valid, since the brain is not a von Neumann machine and does not have a stored program (although in the head we can simulate the operation of a very simple Turing machine). The algorithms or methods of functioning of the brain are determined by its structure. Von Neumann rightly concluded that neurons could learn appropriate patterns based on input signals. However, in von Neumann's time it was not known that learning also occurs through the creation and destruction of connections between neurons.

Von Neumann also pointed out that the speed of information processing by neurons is very low - on the order of hundreds of calculations per second - but that the brain compensates for this by processing information in many neurons simultaneously. This is another obvious but very important observation. Von Neumann argued that all 10^10 neurons in the brain (this estimate is also quite accurate: according to today's ideas, the brain contains from 10^10 to 10^11 neurons) process signals at the same time. Moreover, all the connections (on average 10^3 to 10^4 per neuron) are computed simultaneously.

Considering the primitive state of neuroscience at the time, von Neumann's estimates and descriptions of neuronal function are remarkably accurate. However, I cannot agree with one aspect of his work, namely the idea of the brain's memory capacity. He believed that the brain remembers every signal for life. Von Neumann estimated the average human lifespan at 60 years, which is approximately 2 × 10^9 seconds. If each neuron receives approximately 14 signals per second (which is actually three orders of magnitude lower than the true value), and there are 10^10 neurons in total in the brain, it turns out that the brain's memory capacity is about 10^20 bits. As I wrote above, we remember only a small part of our thoughts and experiences, and even these memories are stored not as low-level, bit-by-bit information (as in a video) but rather as sequences of higher-order images.
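Restating the arithmetic behind this estimate with the figures quoted above (a simple check, nothing more):

```python
# Von Neumann's memory estimate, reproduced from the figures in the text above.
lifespan_seconds   = 2e9     # ~60 years
signals_per_second = 14      # per neuron (Kurzweil notes this is far too low)
neurons            = 1e10

bits = lifespan_seconds * signals_per_second * neurons
print(f"{bits:.0e} bits")    # ~3e20, i.e. on the order of 10^20 bits
```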

As von Neumann describes each mechanism in brain function, he simultaneously demonstrates how a modern computer could perform the same function, despite the apparent difference between the brain and the computer. The analog mechanisms of the brain can be modeled using digital mechanisms, since digital computing can simulate analog values ​​with any degree of accuracy (and the accuracy of analog information in the brain is quite low). It is also possible to simulate the massive parallelism of brain function, given the significant superiority of computers in serial computation speed (this superiority has become even stronger since von Neumann). In addition, we can carry out parallel signal processing in computers using parallel von Neumann machines - this is how modern supercomputers operate.
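A small illustration of the claim that digital computation can approximate analog values to any desired accuracy: with uniform quantization, every extra bit of precision halves the worst-case error. The numbers below are arbitrary.

```python
# Digital approximation of an analog value: with n bits over the range [0, 1),
# the worst-case quantization error is 1 / 2**(n + 1), so accuracy can be made
# arbitrarily high by adding bits. Purely illustrative.

def quantize(x, bits):
    levels = 2 ** bits
    return round(x * levels) / levels

analog_value = 0.637219
for bits in (4, 8, 16, 24):
    approx = quantize(analog_value, bits)
    print(bits, approx, abs(approx - analog_value))
```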

Given the ability of humans to make rapid decisions at such low neural speeds, von Neumann concluded that brain functions could not involve long, sequential algorithms. When a third baseman receives the ball and decides to throw it to first rather than second base, he makes this decision in a fraction of a second - during which time each neuron barely has time to complete several cycles of excitation. Von Neumann comes to the logical conclusion that the brain's remarkable ability is due to the fact that all 100 billion neurons can process information simultaneously. As I noted above, the visual cortex makes complex inferences in just three or four cycles of neuronal firing.

It is the significant plasticity of the brain that allows us to learn. However, the computer has much greater plasticity - its methods can be completely changed by changing software. Thus, a computer can imitate the brain, but the reverse is not true.

When von Neumann compared the massively parallel capabilities of the brain with the few computers of his time, it seemed clear that the brain had far greater memory and speed. Today the first supercomputers have already been built that, by the most conservative estimates, meet the computational requirements for functionally simulating the human brain (about 10^16 operations per second). (In my opinion, computers of this power will cost around $1,000 in the early 2020s.) In terms of memory capacity we have moved even further. Von Neumann's work appeared at the very beginning of the computer era, but the scientist was confident that at some point we would be able to create computers and computer programs capable of imitating the human brain; that is why he prepared his lectures.

Von Neumann was deeply convinced of the acceleration of progress and of its significant impact on people's lives in the future. A year after von Neumann's death, in 1957, his colleague the mathematician Stan Ulam quoted him as saying in the early 1950s that "the ever accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." This is the first known use of the word "singularity" to describe human technological progress.

Von Neumann's most important insight was the deep similarity between the computer and the brain. Note that part of human intelligence is emotional intelligence. If von Neumann's guess is correct, and if we accept my claim that a non-biological system which satisfactorily reproduces the intelligence (emotional and otherwise) of a living person is conscious (see the next chapter), then we will have to conclude that there is a clear similarity between a computer with the right software and conscious human thinking. So, was von Neumann right?

Most modern computers are entirely digital machines, whereas the human brain uses both digital and analog methods. However, analog methods can easily be reproduced digitally to any degree of accuracy. The American computer scientist Carver Mead (b. 1934) showed that the brain's analog methods can be reproduced directly in silicon, and implemented this in the form of so-called neuromorphic chips. Mead demonstrated that this approach can be thousands of times more efficient than simulating analog methods digitally. Since the neocortical algorithms are highly redundant, it may make sense to use Mead's idea. An IBM research team led by Dharmendra Modha is using chips that mimic neurons and their connections, including their ability to form new connections. One of the chips, called SyNAPSE, directly models 256 neurons and approximately a quarter of a million synaptic connections. The goal of the project is to simulate a neocortex consisting of 10 billion neurons and 100 trillion connections (equivalent to the human brain) using only one kilowatt of energy.

More than fifty years ago, von Neumann noticed that processes in the brain occur extremely slowly, but are characterized by massive parallelism. Modern digital circuits operate at least 10 million times faster than the brain's electrochemical switches. In contrast, all 300 million recognition modules of the cerebral cortex act simultaneously, and a quadrillion contacts between neurons can be activated at the same time. Therefore, to create computers that can adequately imitate the human brain, adequate memory and computing performance are required. There is no need to directly copy the architecture of the brain - this is a very inefficient and inflexible method.

What should the corresponding computers be like? Many research projects are aimed at modeling the hierarchical learning and pattern recognition that occur in the neocortex. I myself do similar research using hierarchical hidden Markov models. I estimate that modeling one recognition cycle in one recognition module of the biological neocortex requires about 3,000 calculations. Most simulations are built on a significantly smaller number of calculations. If we assume that the brain performs about 10^2 (100) recognition cycles per second, we get 3 × 10^5 (300 thousand) calculations per second per recognition module. Multiplying this by the total number of recognition modules - 3 × 10^8 (300 million, by my estimate) - gives about 10^14 (100 trillion) calculations per second. I arrive at roughly the same value in The Singularity Is Near, where I predict that functionally simulating the brain requires between 10^14 and 10^16 calculations per second. Hans Moravec's estimate, based on extrapolating from the early stages of visual processing to the whole brain, is 10^14 calculations per second, which agrees with my figure.
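The same arithmetic, written out as a quick check with the figures quoted above:

```python
# Kurzweil's computational estimate, using the figures quoted in the text above.
calcs_per_recognition_cycle = 3_000
cycles_per_second           = 100       # ~10^2 recognition cycles per second
recognition_modules         = 3e8       # ~300 million

total = calcs_per_recognition_cycle * cycles_per_second * recognition_modules
print(f"{total:.0e} calculations per second")   # ~9e13, i.e. about 10^14
```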

Standard modern machines can run at up to 10^10 calculations per second, and with the help of cloud resources their performance can be increased significantly. The fastest supercomputer, the Japanese K computer, has already reached 10^16 calculations per second. Given the massive redundancy of the neocortical algorithms, good results can also be achieved with neuromorphic chips, as in the SyNAPSE technology.

In terms of memory, we need about 30 bits (about 4 bytes) for each connection in order to address one of the 300 million recognition modules. If each recognition module has an average of eight input signals, we get 32 bytes per recognition module. Allowing one byte for the weight of each input signal brings this to 40 bytes. Adding 32 bytes for downstream connections gives 72 bytes. Note that the branching upward and downward means that the number of signals is actually much greater than eight, even allowing for the fact that many recognition modules share a common, highly branched system of connections. For example, recognizing the letter "p" may involve hundreds of recognition modules, which means that thousands of next-level recognition modules are involved in recognizing words and phrases containing the letter "p". However, each module responsible for recognizing "p" does not repeat the tree of connections that feeds all the levels of recognition of words and phrases containing "p"; all these modules share one tree of connections.

The same is true for downstream signals: the module responsible for recognizing the word "apple" will tell all thousand or so downstream modules responsible for recognizing "e" that the image "e" is expected if "a", "p", "p" and "l" have already been recognized. This tree of connections is not repeated for every word- or phrase-recognition module that wants to inform lower-level modules that the image "e" is expected; the tree is shared. For this reason an average estimate of eight upstream and eight downstream signals per recognition module is quite reasonable. And even if this value were increased, it would not change the final result much.

So, taking 3 × 10^8 (300 million) recognition modules and 72 bytes of memory for each, we find that the total memory size should be about 2 × 10^10 (20 billion) bytes. This is a very modest value; ordinary modern computers have this much memory.
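And the memory arithmetic from the preceding paragraphs, as a quick check:

```python
# Kurzweil's memory estimate, using the figures from the preceding paragraphs.
bytes_per_upstream_connections   = 32   # 8 inputs x 4 bytes of addressing
bytes_for_input_weights          = 8    # 1 byte per input signal
bytes_per_downstream_connections = 32
bytes_per_module = (bytes_per_upstream_connections
                    + bytes_for_input_weights
                    + bytes_per_downstream_connections)   # = 72 bytes

recognition_modules = 3e8
total_bytes = bytes_per_module * recognition_modules
print(f"{total_bytes:.1e} bytes")   # ~2.2e10, i.e. about 2 x 10^10 bytes (20 GB)
```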

We performed all these calculations only to get rough estimates of the parameters. Given that digital circuits are about 10 million times faster than the networks of neurons in the biological cortex, we do not need to reproduce the brain's massive parallelism; quite moderate parallel processing (compared with the trillion-fold parallelism of the brain) will be enough. Thus the necessary computational parameters are quite achievable. The ability of the brain's neurons to rewire themselves (remember that dendrites are constantly creating new synapses) can also be simulated in software, since computer programs are far more plastic than biological systems - which, as we have seen, are impressive but have their limits.

The brain redundancy required to obtain invariant results can certainly be reproduced in a computer version. The mathematical principles for optimizing such self-organizing hierarchical learning systems are quite clear. The organization of the brain is far from optimal. But it doesn't have to be optimal - it has to be good enough to enable the creation of tools that compensate for its own limitations.

Another limitation of the neocortex is that it has no mechanism for eliminating, or even evaluating, conflicting data; this partly explains the very common illogicality of human reasoning. To solve this problem we have only a rather weak capability called critical thinking, and people use it far less often than they should. A computer neocortex could include a process that identifies conflicting data for subsequent revision.

It is important to note that modeling an entire brain region is easier than modeling a single neuron. As has already been said, models at higher levels of a hierarchy are often simpler (there is an analogy with computers here). Understanding how a transistor works requires a detailed understanding of the physics of semiconductor materials, and the behavior of a single real transistor is described by complex equations. A digital circuit that multiplies two numbers contains hundreds of transistors, yet one or two formulas are enough to model such a circuit. An entire computer, consisting of billions of transistors, can be modeled with a set of instructions and a register description on a few pages of text and a few formulas. Programs for operating systems, language compilers or assemblers are quite complex, but modeling a particular program (for example, a speech recognition program based on hierarchical hidden Markov models) likewise comes down to a few pages of formulas. And nowhere in such a description will you find the details of the physical properties of semiconductors, or even of the computer's architecture.

A similar principle holds for brain modeling. A particular recognition module of the neocortex - one that detects certain invariant visual images (for example, faces), filters audio frequencies (limiting the input signal to a certain frequency range), or estimates the temporal proximity of two events - can be described with far fewer specific details than the actual physical and chemical interactions that control the functions of the neurotransmitters, ion channels and other elements of the neurons involved in transmitting nerve impulses. Although all these details must be carefully considered before moving to the next level of complexity, much can be simplified when modeling the operating principles of the brain.


Despite their best efforts, neuroscientists and cognitive psychologists will never find in the brain a copy of Beethoven's Fifth Symphony, or copies of words, pictures, grammatical rules or any other external stimuli. Of course, the human brain is not completely empty. But it does not contain most of the things people think it contains - not even such simple things as "memories."

Our misconceptions about the brain have deep historical roots, but we are especially confused by the invention of computers in the 1940s. For half a century, psychologists, linguists, neuroscientists and other experts on human behavior have argued that the human brain works like a computer.

To get an idea of how frivolous this idea is, consider the brains of babies. A healthy newborn has more than ten reflexes. He turns his head in the direction in which his cheek is stroked and sucks whatever ends up in his mouth. He holds his breath when immersed in water. He grips things so tightly that he can almost support his own weight. But perhaps most importantly, newborns have powerful learning mechanisms that allow them to change rapidly so that they can interact with the world around them ever more effectively.

Feelings, reflexes and learning mechanisms are what we have from the very beginning, and when you think about it, that's quite a lot. If we lacked any of these abilities, we would probably have difficulty surviving.

But here is what we do not have from birth: information, data, rules, knowledge, vocabulary, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols and buffers - the elements that allow digital computers to behave somewhat rationally. Not only are these things not in us from birth, they do not develop in us during our lives.

We don't keep words or rules that tell us how to use them. We do not create images of visual impulses, store them in a short-term memory buffer, and then transfer the images to a long-term memory device. We do not recall information, images or words from the memory register. All this is done by computers, but not by living beings.

Computers literally process information - numbers, words, formulas, images. The information must first be translated into a format that a computer can recognize, that is, into sets of ones and zeros (“bits”) collected into small blocks (“bytes”).

Computers move these sets from place to place into various areas of physical memory, implemented as electronic components. Sometimes they copy sets, and sometimes they transform them in various ways - say, when you correct errors in a manuscript or retouch a photograph. The rules that a computer follows when moving, copying, or working with an array of information are also stored inside the computer. A set of rules is called a "program" or "algorithm". A set of algorithms working together that we use for different purposes (for example, buying stocks or dating online) is called an “application”.
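To make the description above concrete, here is a tiny sketch of that kind of literal symbol-shuffling: a word becomes bytes (and bits), is copied into another buffer, and is transformed by an explicit rule. The toy "rule" is invented purely for illustration.

```python
# Computers literally shuffle symbols: text becomes bytes (and bits), gets
# copied between memory locations, and is transformed by explicit rules.
# The toy "rule" below is invented purely for illustration.

word = "brain"
as_bytes = word.encode("utf-8")                 # b'brain' -- 5 bytes
as_bits  = " ".join(f"{b:08b}" for b in as_bytes)
print(as_bytes, "->", as_bits)

copy = bytearray(as_bytes)                      # an exact copy in another buffer

def shout(data: bytearray) -> bytes:            # a stored "rule" (algorithm)
    return bytes(data).upper()

print(shout(copy))                              # b'BRAIN'
```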

These are known facts, but they need to be spelled out to make the point clear: computers really do operate on symbolic representations of the world. They really do store and retrieve. They really do process. They really do have physical memory. They really are guided by algorithms in everything they do.

However, people don’t do anything like that. So why do so many scientists talk about our mental activity as if we were computers?

In 2015, artificial intelligence expert George Zarkadakis released the book In Our Own Image, in which he describes six different concepts that people have used over the past two thousand years to describe human intelligence.

In the earliest version of the Bible, humans were created from clay or mud, which an intelligent God then imbued with his spirit. This spirit “describes” our mind - at least from a grammatical point of view.

The invention of hydraulics in the 3rd century BC led to the popularity of the hydraulic concept of human consciousness. The idea was that the flow of various fluids in the body - "bodily fluids" - accounted for both physical and spiritual functions. The hydraulic concept persisted for more than 1,600 years, all the while hampering the development of medicine.

By the 16th century, devices powered by springs and gears had appeared, which inspired René Descartes to argue that man is a complex machine. In the 17th century, British philosopher Thomas Hobbes proposed that thinking occurs through small mechanical movements in the brain. By the beginning of the 18th century, discoveries in the field of electricity and chemistry led to the emergence of a new theory of human thinking, again of a more metaphorical nature. In the mid-19th century, German physicist Hermann von Helmholtz, inspired by recent advances in communications, compared the brain to a telegraph.

[Illustration: Albrecht von Haller, Icones anatomicae]


Each concept reflects the most advanced ideas of the era that gave birth to it. As one might expect, just a few years after the birth of computer technology in the 1940s, it was argued that the brain worked like a computer: the brain itself played the role of the physical carrier, and our thoughts acted as the software.

This view reached its zenith in the 1958 book The Computer and the Brain, in which mathematician John von Neumann stated emphatically that the function of the human nervous system is “digital in the absence of evidence to the contrary.” Although he acknowledged that very little is known about the role of the brain in the functioning of intelligence and memory, the scientist drew parallels between the components of computer machines of that time and areas of the human brain.


Thanks to subsequent advances in computer technology and brain research, an ambitious interdisciplinary study of human consciousness gradually developed, based on the idea that people, like computers, are information processors. This work now encompasses thousands of studies, receives billions of dollars in funding, and has been the subject of numerous papers. Ray Kurzweil's 2013 book How to Create a Mind: The Secret of Human Thought Revealed illustrates this point, describing the brain's "algorithms," its "information processing" techniques, and even how its structure superficially resembles integrated circuits.

The idea of human thinking as information processing (the IP concept) currently dominates, both among ordinary people and among scientists. But it is, in the end, just another metaphor - a fiction we pass off as reality in order to explain something we do not really understand.

The flawed logic of the IP concept is easy enough to state. It rests on a faulty syllogism with two reasonable premises and a wrong conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Wrong conclusion: all objects capable of behaving intelligently are information processors.

Setting formalities aside, the idea that people must be information processors just because computers are is complete nonsense, and when the IP concept is finally abandoned, historians will probably view it in much the same way as the hydraulic and mechanical concepts look to us now - as nonsense.

Carry out an experiment: draw a hundred-ruble bill from memory, and then take it out of your wallet and copy it. Do you see the difference?

A drawing made in the absence of an original will certainly turn out to be terrible in comparison with a drawing made from life. Although, in fact, you have seen this bill more than one thousand times.

What is the problem? Shouldn't the "image" of the banknote be "stored" in the "storage register" of our brain? Why can't we just "refer" to this "image" and depict it on paper?

Obviously not, and thousands of years of research will not allow us to determine the location of the image of this bill in the human brain simply because it is not there.

The idea, promoted by some scientists, that individual memories are somehow stored in special neurons is absurd. Among other things, this theory takes the question of the structure of memory to an even more intractable level: how and where is memory stored in cells?


One of the predictions, which was expressed in one form or another by futurist Ray Kurzweil, physicist Stephen Hawking and many others, is that if human consciousness is like a program, then technologies should soon appear that will allow it to be loaded onto a computer, thereby greatly enhancing intellectual abilities and making immortality possible. This idea formed the basis of the plot of the dystopian film Transcendence (2014), in which Johnny Depp played a scientist similar to Kurzweil. He uploaded his mind to the Internet, causing devastating consequences for humanity.


Fortunately, the IP concept has nothing even close to do with reality, so we do not have to worry about the human mind running amok in cyberspace - and, sadly, we will never be able to achieve immortality by downloading our souls onto another medium. It is not just that the brain lacks software; the problem is even deeper. Let's call it the problem of uniqueness, and it is both fascinating and depressing.

Since our brains have neither "memory devices" nor "images" of external stimuli, and since the brain changes over the course of a lifetime under the influence of external conditions, there is no reason to believe that any two people in the world will react to the same stimulus in the same way. If you and I attend the same concert, the changes that occur in your brain from listening will differ from the changes that occur in mine. These changes depend on the unique structure of nerve cells that has formed over the whole of one's previous life.

This is why, as Frederic Bartlett showed in his 1932 book Remembering, two people hearing the same story will not be able to retell it in exactly the same way, and over time their versions of the story will become less and less similar to each other.

"Superiority"

I find this very inspiring, because it means that each of us is truly unique - not only in our genes, but also in the way our brains change over time. But it is also disheartening, because it makes the already difficult work of neuroscientists almost hopeless. Each change can affect thousands or millions of neurons, or the entire brain, and the nature of these changes is unique in every case.

Worse, even if we could record the state of each of the brain's 86 billion neurons and simulate it all on a computer, this enormous model would be useless outside the body to which that brain belongs. This is perhaps the most annoying misconception about how humans are built, and we owe it to the erroneous IP concept.

Computers store exact copies of data. They can remain unchanged for a long time even when the power is turned off, while the brain supports our intelligence only as long as it remains alive. There is no switch. Either the brain will work without stopping, or we will not exist. Moreover, as neuroscientist Stephen Rose noted in 2005's The Future of the Brain, a copy of the brain's current state may be useless without knowing the full biography of its owner, even including the social context in which the person grew up.

Meanwhile, huge amounts of money are spent on brain research based on false ideas and promises that will not be fulfilled. Thus, the European Union launched a project to study the human brain worth $1.3 billion. European authorities believed the tempting promises of Henry Markram to create a working simulator of brain function based on a supercomputer by 2023, which would radically change the approach to the treatment of Alzheimer's disease and other ailments, and provided the project with almost unlimited funding. Less than two years after the project launched, it turned out to be a failure, and Markram was asked to resign.

People are living organisms, not computers. Accept it. We need to keep doing the hard work of understanding ourselves without dragging along unnecessary intellectual baggage. In the half-century of its existence, the IP concept has given us only a few useful discoveries. It is time to press the Delete key.

Robert Epstein is a senior psychologist at the American Institute for Behavioral Research and Technology in California. He is the author of 15 books and the former editor-in-chief of Psychology Today.

Every human brain is something special, an incredibly complex miracle of nature, created through millions of years of evolution. Today our brain is often called a real computer. And this expression is not used in vain.

And today we will try to understand why scientists call the human brain a biological computer, and what interesting facts exist about it.

Why the brain is a biological computer

Scientists call the brain a biological computer for obvious reasons. The brain, like the central processor of a computer system, is responsible for the operation of all the elements and nodes of the system. Just as a processor works with the RAM, hard drive, video card and other PC components, the human brain controls vision, breathing, memory and every other process occurring in the human body. It processes incoming data, makes decisions and does all the intellectual work.

As for the “biological” characteristic, its presence is also quite obvious, because, unlike the usual computer equipment, the human brain is biological in origin. So it turns out that the brain is a real biological computer.

Like most modern computers, the human brain has a huge number of functions and capabilities. And we offer some of the most interesting facts about them below:

  • Even at night, when our body is resting, the brain does not fall asleep, but, on the contrary, is in a more active state than during the day;
  • The exact amount of information that can be stored in the human brain is at the moment unknown to scientists. However, they suggest that this "biological HDD" is capable of storing up to 1,000 terabytes of information;
  • The average weight of the brain is one and a half kilograms, and its volume increases, as in the case of muscles, from training. True, in this case, training involves gaining new knowledge, improving memory, etc.;
  • Despite the fact that it is the brain that reacts to any damage to the body by sending pain signals to the corresponding parts of the body, it itself does not feel pain. When we feel a headache, it is only pain in the tissues and nerves of the skull.

Now you know why the brain is called a biological computer, which means you have done a little training of your brain. Don't stop there, and systematically learn something new.

The brain is an organ that coordinates and regulates all vital functions of the body and controls behavior. All our thoughts, feelings, sensations, desires and movements are associated with the work of the brain, and if it stops functioning, the person goes into a vegetative state: the ability to perform any actions, to feel or to react to external influences is lost.

Computer model of the brain

The University of Manchester has begun building the first of a new type of computer, the design of which imitates the structure of the human brain, BBC reports. The cost of the model will be 1 million pounds.

A computer built on biological principles, says Professor Steve Furber, should demonstrate significant stability in operation. “Our brain continues to function despite the constant failure of the neurons that make up our nervous tissue,” says Furber. “This property is of great interest to designers who are interested in making computers more reliable.”

Brain Interfaces

In order to lift a glass several feet using mental energy alone, wizards had to train for several hours a day.
Otherwise, the lever principle could easily squeeze the brain out through the ears.

Terry Pratchett, "The Color of Magic"

Obviously, the crowning glory of the human-machine interface should be the ability to control a machine by thought alone, and feeding data directly into the brain is the pinnacle of what virtual reality could achieve. The idea is not new and has featured in science fiction for many years: think of all the cyberpunk heroes with direct connections to cyberdecks and biosoftware, or the control of any technology through a standard brain connector (for example, in Samuel Delany's novel "Nova"), and much else besides. Science fiction is all very well, but what is actually being done in the real world?

It turns out that the development of brain interfaces (BCI or BMI - brain-computer interface and brain-machine interface) is in full swing, although few people know about it. Of course, the successes are very far from what is written about in science fiction novels, but, nevertheless, they are quite noticeable. Currently, work on brain and nerve interfaces is mainly being carried out as part of the creation of various prosthetics and devices to make life easier for partially or completely paralyzed people. All projects can be divided into interfaces for input (restoration or replacement of damaged sensory organs) and output (control of prostheses and other devices).

In all cases of direct data input, it is necessary to perform surgery to implant electrodes into the brain or nerves. In case of output, you can get by with external sensors for taking an electroencephalogram (EEG). However, EEG is a rather unreliable tool, since the skull greatly weakens brain currents and only very generalized information can be obtained. If electrodes are implanted, data can be taken directly from the desired brain centers (for example, motor centers). But such an operation is a serious matter, so for now experiments are being conducted only on animals.

In fact, humanity has long had such a "single" computer. According to Wired magazine co-founder Kevin Kelly, the millions of PCs, cell phones, PDAs and other digital devices connected to the Internet can be regarded as components of a Unified Computer. Its central processor is all the processors of all the connected devices, its hard drive is the hard disks and flash drives all over the world, and its RAM is the total memory of all those computers. Every second this computer processes an amount of data equal to all the information contained in the Library of Congress, and its operating system is the World Wide Web.

Instead of the synapses of nerve cells, it uses functionally similar hyperlinks. Both are responsible for creating associations between nodes. Each unit of thought, such as an idea, grows as more and more connections are made with other thoughts. The same is true on the network: a large number of links to a particular resource (node) means greater significance for the Computer as a whole. Moreover, the number of hyperlinks on the World Wide Web is already close to the number of synapses in the human brain. Kelly estimates that by 2040 the planetary computer will have computing power commensurate with the collective brain power of all 7 billion people who will inhabit the Earth by then.

But what about the human brain itself? A long-outdated biological mechanism. Our gray matter runs at the speed of the very first Pentium processor, from 1993. In other words, our brain operates at a frequency of 70 MHz. In addition, our brains operate on an analog principle, so there can be no question of comparison with the digital method of data processing. This is the main difference between synapses and hyperlinks: synapses, reacting to their environment and incoming information, skillfully change the organism, which never has two identical states. The hyperlink, on the other hand, is always the same, otherwise problems begin.

However, it must be admitted that our brain is significantly more efficient than any artificial system created by people. In some completely mysterious way, all the brain's gigantic computing power fits inside our skull, weighs just over a kilogram, and needs only about 20 watts of energy to function. Compare these figures with the 377 billion watts that, by rough estimates, the Unified Computer currently consumes - as much as 5% of global electricity production.

The mere fact of such monstrous energy consumption will never allow the Unified Computer to even come close to the efficiency of the human brain. Even in 2040, when the computing power of computers becomes sky-high, their energy consumption will continue to increase.