Meet Norman – the psychopathic artificial intelligence created at MIT

Norman is an artificial intelligence created by scientists at MIT. It is no coincidence that he is named after the protagonist of Robert Bloch's novel "Psycho": Norman is a psychopath. Scientists deliberately taught him to see the world in a particularly dark way.

Artificial intelligence (AI) is evolving at an unheard-of pace. We are surrounded by algorithms that "learn" from access to innumerable troves of data about our behavior. A recent study conducted at the famous MIT (Massachusetts Institute of Technology) raises many concerns. Norman's algorithm, which was given access only to drastic data, learned to perceive the world in an exceptionally macabre way. The problem, however, runs much deeper. Norman is an algorithm that describes in words what it sees in pictures.

MIT researchers "taught" the program by giving it access to photos of people dying in gruesome circumstances, selected from the Reddit website. After analyzing them, the algorithm was shown fuzzy, abstract blobs and asked to describe them. The effect was electrifying, reports the BBC.
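MIT has not published Norman's code, but the underlying task, generating a caption for an image, can be sketched with off-the-shelf tools. Below is a minimal illustration, assuming the Hugging Face transformers library and the publicly available nlpconnect/vit-gpt2-image-captioning model; the file inkblot.png is a hypothetical stand-in for one of the abstract blobs, and nothing here reproduces Norman's actual training data or architecture.

```python
# A minimal image-captioning sketch (not Norman itself): describe an
# image in words using a publicly available pretrained model.
# Assumptions: the Hugging Face "transformers" library is installed and
# "inkblot.png" is a local stand-in for a Rorschach-style abstract blob.
from transformers import pipeline

captioner = pipeline("image-to-text",
                     model="nlpconnect/vit-gpt2-image-captioning")

result = captioner("inkblot.png")
print(result[0]["generated_text"])  # a short textual caption of the image
```

Norman's grim answers came from exactly this kind of setup: the captioning model itself was ordinary, but the captions it had learned from were not.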

Algorithms trained on normal data sets generally saw pleasant things in the abstractions, such as a group of birds sitting on a tree branch. Norman interpreted the same image as a man being electrocuted. Where the image showed a pair of people standing side by side, Norman saw a man jumping out of a window.

Such abstract images are used by psychologists. They help in understanding the state of the human mind, specifically whether it views the world in a positive or negative light. The world as seen through Norman's eyes was very bleak: the algorithm saw death, blood and destruction in every picture. In addition to Norman, another algorithm was trained, this time on a "normal" data set. In that case, the answers were much less gruesome.

– The fact that Norman's answers were so much darker shows the harsh reality of machine learning – says Prof. Iyad Rahwan, a member of the three-person team conducting the research. – The data an algorithm learns from has an impact on how it perceives the world – the professor added.

Artificial intelligence is all around us today. Google recently unveiled an algorithm that can talk on the phone. DeepMind (another company in the Alphabet group, which also owns Google) has created algorithms that teach themselves to play complex games. AI is already widely used in many fields, including personal assistance, email, search, and facial recognition. Algorithms can already write news stories, create new levels of video games on their own, offer advice, analyze financial or medical reports, and propose energy-saving solutions.

The experiment with Norman proved, however, that an algorithm that learns from the wrong data can itself turn bad. Norman became biased toward death and destruction because that is all it ever learned. The algorithms working around us may likewise be misdirected by improper data selection. A report published last May showed that an algorithm used to flag risk in US courts was racially biased and scored black defendants worse: according to the program, such individuals were twice as likely to be repeat offenders. All because of the faulty historical data the algorithm learned from.
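That court report, widely identified as ProPublica's analysis of the COMPAS tool, hinged on comparing error rates across groups, in particular how often people who never reoffended were nevertheless flagged as high risk. A toy sketch of that kind of audit, on entirely made-up records, might look like this:

```python
# Toy fairness audit: compare false-positive rates between two groups.
# The records below are fabricated for illustration only; the real
# COMPAS audit was run on actual court data.
from collections import defaultdict

# (group, flagged_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", False, False), ("A", True, True),
    ("B", True,  False), ("B", True,  False), ("B", False, True),
]

fp = defaultdict(int)   # flagged high risk but did not reoffend
neg = defaultdict(int)  # everyone who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        neg[group] += 1
        if flagged:
            fp[group] += 1

for group in sorted(neg):
    print(f"group {group}: false-positive rate = {fp[group] / neg[group]:.2f}")
```

The real audit was far more involved, but the disparity it reported was of exactly this kind: one group was wrongly flagged far more often than the other.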

A similar situation occurred in 2016, when Microsoft released Tay, a bot meant to generate Twitter posts. It, too, learned its behavior from users. The bot became a hit among racists and trolls, who "taught" it to support Nazi ideas, defend Hitler and proclaim the superiority of the white race. Norman, it turns out, is not the only "credulous" example of artificial intelligence.

The standard AI concluded that the image shows "a group of people standing side by side". Norman responded that it shows "a man jumping through a window". Photo: MIT

Standard AI: "a black-and-white photo of a baseball glove". Norman: "a man being murdered with a machine gun in broad daylight". Photo: MIT

Standard AI: "a vase of flowers". Norman: "men being shot". Photo: MIT

Another study showed that an algorithm "trained" on Google News became sexist. When asked to complete the sentence "The man is a computer programmer, and the woman is a…", the program responded: "housewife". Other researchers point out that it is not surprising that algorithms in some sense reflect their creators. In this context, some advocate bringing more women into the programming profession. Among the proponents of this position is Dr. Joanna Bryson of the computer science department at the University of Bath, who emphasizes in an interview with the BBC that most of the algorithms we use are created by "white, single men from California".
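The Google News result comes from research on word embeddings, where such sentence completions are run as vector-arithmetic analogy queries. A minimal sketch of that kind of query, assuming the gensim library and its downloadable word2vec-google-news-300 vectors (the exact ranking of completions depends on the vectors and query used):

```python
# Analogy query on word vectors: man : computer_programmer :: woman : ?
# Assumptions: gensim is installed; the first call downloads the
# pretrained Google News word2vec vectors (roughly 1.6 GB).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# Vector arithmetic: computer_programmer - man + woman
results = vectors.most_similar(positive=["woman", "computer_programmer"],
                               negative=["man"], topn=3)
for word, score in results:
    print(word, round(score, 3))
```

The bias here is not programmed in anywhere; it is absorbed from regularities in the news text the vectors were trained on, which is precisely the researchers' point.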

– It should come as no surprise that machines pick up the opinions of the humans who train them. When we train a program on our own culture, we transfer our own biases to it – Bryson believes. – There is no mathematical way to create "fairness". Bias is not a bad word in machine learning – she adds.

What worries the researcher most is that some people may, of their own volition, build "evil" into the algorithms they create. In her view, to prevent such a situation, the creation of artificial intelligence should be subject to oversight and carried out with much more transparency than today.

With his Norman experiment, Prof. Rahwan wants to prove that "engineers must find a way to balance the data on which an algorithm works". As he points out, the creation of algorithms must not be left to programmers alone. – There is a growing conviction that the behavior of programs can be studied in much the same way as human behavior – says Rahwan.

In his view, a new era of research is coming: the psychology of artificial intelligence. It should take the form of regular audits checking the correctness of algorithms, the scientist believes. Experts believe that Prof. Rahwan's research should be a prelude to a major discussion, raising public awareness of how technologically advanced the world we live in is and how dependent on algorithms we have become.

We collectively teach algorithms, just as we teach children in school, so there is a risk that we are teaching them in the wrong way. When we receive an answer displayed by an algorithm, specialists believe, we should also have information about who created it. The hope is that as public awareness of the role of algorithms grows, humans will catch their malfunctions and eliminate the resulting problems.