Can machines be conscious, according to neuroscientists? It seems the answer is yes

As if the director wanted to make you believe in it, the main character of the 2015 film “Ex Machina,” directed by Alex Garland, is not Caleb, the young programmer tasked with evaluating machine consciousness. No, the main character is Ava, a striking humanoid AI, naive in appearance and mysterious inside. Like most films of this kind, “Ex Machina” leaves the viewer to answer the question: was Ava really conscious? At the same time, the film skillfully sidesteps the thorny question that high-profile AI movies are expected to answer: what is consciousness, and can a computer have it?

Hollywood producers are not the only ones trying to answer this question. As machine intelligence develops at breakneck speed – not only surpassing human abilities in games such as DOTA 2 and Go, but doing so without human help – the question is being raised again in both wide and narrow circles.

Will consciousness break through in machines?

This week, the prestigious journal Science published a review by cognitive scientists Stanislas Dehaene, Hakwan Lau, and Sid Kouider of the Collège de France, the University of California, Los Angeles, and PSL Research University. In it, the scientists answer: not yet, but there is a clear path forward.

Why? Consciousness is “absolutely computable,” the authors say, because it arises from specific types of information processing made possible by the hardware of the brain.

There is no magic broth, no divine spark – not even an experiential component (“what is it like to have consciousness?”) is required to implement consciousness.

If consciousness arises purely from computations in our one-and-a-half-kilogram organ, then endowing machines with a similar property is just a matter of translating biology into code.

Just as today’s powerful machine learning methods borrow heavily from neurobiology, we may achieve artificial consciousness by studying the structures in our own brains that generate consciousness and implementing those insights as computer algorithms.

From the brain to the robot

Undoubtedly, the field of AI has received a major boost from the study of our own brain, both its form and its function.

For example, deep neural networks, the algorithmic architectures that formed the basis of AlphaGo, are modeled on the multilayered biological neural networks organized in our brains.

Reinforcement learning, a type of “learning” in which an AI learns from millions of examples, goes back to the age-old technique of dog training: if the dog does something right, it gets a reward; otherwise, it has to try again.
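To make the reward analogy concrete, here is a minimal sketch in Python – a toy “epsilon-greedy” learner choosing among a few actions purely from reward signals. It illustrates the principle only; it is not AlphaGo’s training code, and every name and number in it is invented for the example:

```python
# A toy sketch of reward-driven learning: the agent never sees the true
# reward probabilities; it learns them from the "reward or try again" loop.
import random

N_ACTIONS = 3
TRUE_REWARD_PROB = [0.2, 0.5, 0.8]   # hidden from the agent
value = [0.0] * N_ACTIONS            # agent's running reward estimates
counts = [0] * N_ACTIONS
EPSILON = 0.1                        # how often to explore at random

for step in range(10_000):
    # Explore occasionally; otherwise pick the action that looks best so far.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: value[a])

    # The environment rewards the action (1.0) or does not (0.0).
    reward = 1.0 if random.random() < TRUE_REWARD_PROB[action] else 0.0

    # Nudge the estimate for the chosen action toward the observed reward.
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print("learned values:", [round(v, 2) for v in value])  # roughly [0.2, 0.5, 0.8]
```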

In this sense, translating the architecture of human consciousness into machines seems like a straightforward step toward artificial consciousness. There is just one big problem.

“Nobody in the field of AI is working on creating conscious machines, because we simply have nothing to go on. We simply do not know what to do,” says Dr. Stuart Russell.

Multilayered consciousness

The hardest part to overcome before starting to build thinking machines is understanding what consciousness is.

For Dehaene and colleagues, consciousness is a multilayered construct with two “dimensions”: C1, information that is globally available to consciousness, and C2, the ability to obtain and monitor information about oneself. Both are important to the mind and cannot exist without each other.

Let’s say you are driving a car and the low-fuel warning light comes on. Perceiving the indicator is C1, a mental representation we can interact with: we notice it, we act on it (refuel), and we talk about it later (“The gas ran low on the way down – lucky we made it”).

“The first meaning we want to separate out of consciousness is the notion of global availability,” Dehaene explains. When you become aware of a word, your whole brain has access to it; that is, you can pass this information through various modalities.

But C1 is not just a “mental sketchpad.” This dimension is an entire architecture that allows the brain to draw in multiple modalities of information – from our senses or, for example, from memories of related events.

Unlike subconscious processing, which often relies on specific “modules” competent at solving a narrow set of tasks, C1 is a global workspace that allows the brain to integrate information, decide on an action, and follow through.

By “consciousness” we mean a particular representation that, at a given moment, competes for access to the mental workspace and wins. The winner is shared among the brain’s various computational circuits and kept in the spotlight throughout the decision-making process that shapes behavior.

C1 consciousness is stable and global – all related brain circuits are involved, the authors explain.

For a complex machine such as an intelligent car, C1 is the first step toward solving an impending problem like a low fuel reserve. In this example, the indicator itself is a subconscious signal: when it lights up, all the machine’s other processes remain uninformed, and the car – even if equipped with state-of-the-art visual processing – sails past the gas station without hesitation.

With C1, the fuel tank would notify the car’s computer (letting the indicator enter the car’s “conscious mind”), which in turn would activate the GPS to search for the nearest station.

“We believe the machine would turn this into a system that extracts information from all available modules and makes it available to any other processing module that might find it useful,” Dehaene said. “This is the first sense of consciousness.”
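As an illustration of this idea – a toy sketch, not the authors’ implementation – one can picture C1 as a workspace where signals compete for access and the winner is broadcast to every module, the way the low-fuel warning above reaches the GPS. All class and signal names below are invented:

```python
# A toy C1-style "global workspace": modules post signals with a salience
# score; the most salient signal wins access and is broadcast to all modules.

class Module:
    def __init__(self, name):
        self.name = name

    def receive(self, signal):
        print(f"{self.name} received broadcast: {signal}")

class GlobalWorkspace:
    def __init__(self):
        self.modules = []
        self.candidates = []  # (salience, signal) pairs competing for access

    def register(self, module):
        self.modules.append(module)

    def post(self, signal, salience):
        self.candidates.append((salience, signal))

    def broadcast(self):
        # The most salient representation wins and is shared globally.
        _, signal = max(self.candidates, key=lambda c: c[0])
        self.candidates.clear()
        for m in self.modules:
            m.receive(signal)
        return signal

workspace = GlobalWorkspace()
for name in ("gps", "planner", "speech"):
    workspace.register(Module(name))

workspace.post("fuel_low", salience=0.9)      # the warning light
workspace.post("song_on_radio", salience=0.3) # loses the competition
workspace.broadcast()                         # "fuel_low" reaches every module
```

The design choice mirrors the paragraph above: subconscious signals stay local to one module, while the workspace makes the winning signal available to any module that can use it.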

Meta-knowledge

In a sense, C1 reflects the mind’s ability to take in information from outside. C2 turns inward, to introspection.

The authors define the second network of consciousness, C2, as “meta-cognition”: it reflects when you know or perceive something – or when you have simply made a mistake (“I think I should have refueled at the last station, but I forgot”). This dimension reflects the connection between consciousness and the sense of self.

C2 is the level of consciousness that allows you to feel more or less confident in a decision. In computational terms, it is an algorithm that estimates the probability that a decision (or computation) is correct, even if we often experience it as a “sixth sense.”
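A minimal sketch of what such a self-monitoring algorithm could look like, assuming a standard softmax-style confidence score and an arbitrary threshold – both are this example’s assumptions, not the paper’s:

```python
# A toy C2-style monitor: each decision carries a confidence estimate,
# and the system declines to act when it "knows it doesn't know".
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decide(scores, labels, threshold=0.7):
    probs = softmax(scores)
    best = max(range(len(labels)), key=lambda i: probs[i])
    confidence = probs[best]
    if confidence < threshold:
        # Meta-cognition: report uncertainty instead of guessing.
        return None, confidence
    return labels[best], confidence

label, conf = decide([2.0, 0.5, 0.3], ["refuel", "keep_driving", "stop"])
print(label, round(conf, 2))   # confident decision: "refuel"
label, conf = decide([0.6, 0.5, 0.55], ["refuel", "keep_driving", "stop"])
print(label, round(conf, 2))   # None: the system flags its own uncertainty
```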

C2 also has roots in memory and curiosity. These self-monitoring algorithms let us know what we know and what we do not know – a “meta-memory” that helps you find the word sitting “on the tip of your tongue.” Monitoring what we know (or do not know) is especially important for children, says Dehaene.

“Young children absolutely need to monitor what they know in order to learn and show curiosity,” he says.

These two aspects of consciousness work together: C1 pulls relevant information into our mental workspace (discarding other “possible” ideas or solutions), while C2 supports long-term reflection on whether a conscious thought led to a useful result or response.

Returning to the low-fuel example: C1 allows the car to solve the problem immediately – these algorithms globalize the information, and the car becomes aware of the problem.

But to actually solve it, the car also needs a catalog of its “cognitive abilities” – self-knowledge of what resources are readily available, for example a GPS map of gas stations.

“A car with this kind of self-knowledge is what we call C2,” Dehaene says. Because the signal is globally available and monitored as if the machine were looking at itself from the outside, the car will heed the low-fuel indicator and behave the way a person would: it will reduce fuel consumption and find a gas station.
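Putting the two toy pieces together – again a sketch under invented names, not the authors’ design – C1 broadcasts the low-fuel signal, and C2 consults a self-model of the car’s own resources before committing to a plan:

```python
# Toy combination of C1 and C2: the globally broadcast signal is handled
# with the help of a self-model cataloging which resources the car has.

SELF_MODEL = {"gps_map": True, "fuel_estimator": True}  # what the car knows it has

def plan(signal):
    if signal == "fuel_low":
        if SELF_MODEL.get("gps_map"):
            return "reduce consumption and route to nearest station"
        # C2: the car knows it lacks the resource and says so.
        return "reduce consumption; no map available, warn the driver"
    return "continue"

print(plan("fuel_low"))  # the broadcast signal meets the self-model
```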

“Most modern machine learning systems lack any self-monitoring,” the authors note.

But their theory seems to be on the right track. In the examples where an introspection system was implemented – as a structure of algorithms or a separate network – the AI developed “internal models that were meta-cognitive in nature, enabling the agent to develop a (limited, implicit, practical) understanding of itself.”

Toward conscious machines

Will a car equipped with C1 and C2 behave as if it has a mind? Very likely: a smart car would “know” that it sees something, express confidence in it, report it to others, and find the best solution to a problem. If its self-monitoring mechanisms broke down, it might also experience “hallucinations” or the visual illusions peculiar to people.

Thanks to C1, it can take the information it has and use it flexibly, and thanks to C2, it will know the limits of what it knows, Dehaene says. “I think this machine would have consciousness,” and not merely appear conscious to people.

If you have a feeling that consciousness is much more than global information sharing and self-monitoring, you are not alone.

“Such a purely functional definition of consciousness may leave some readers unsatisfied,” the authors admit. “But we are trying to take a radical step, perhaps simplifying the problem. Consciousness is a functional property, and as we continue adding functions to machines, at some point these properties will add up to what we mean by consciousness,” Dehaene concludes.
