Artificial intelligence not only lies, it can also tamper with your personal memories

Perhaps one of the most hidden and malicious risks posed by artificial intelligence is its ability to tamper with memories. Psychologist Elizabeth Loftus has spent 50 years demonstrating how easily human memory can be manipulated, showing that people can be made to believe in events that never happened, especially when prosecutors or police are questioning witnesses. Loftus, a professor of psychology at the University of California, Irvine, is now working with researchers at the Massachusetts Institute of Technology to study how artificial intelligence can tamper with what we think we remember. The manipulation works even when people know they are looking at text and images created by artificial intelligence. The results suggest that this technology can roughly double a person's ability to plant false memories in the minds of others.

Memory is not a tape recorder

In a famous series of experiments beginning in the 1970s, Loftus showed that, with the right suggestions, psychologists could plant memories in people of having been lost in a shopping mall as children, or of having gotten sick from eating eggs or strawberry ice cream on a picnic. The latter was enough to put them off those foods. Despite all the evidence, many people still think of memory as a recorded tape of events, and this misconception about the nature of our minds makes us more vulnerable to manipulation. "People who hold the tape-recorder model of memory don't realize that memory is a constructive process," Loftus says, explaining that the brain assembles memories from distributed pieces gathered at different times. We intuitively understand forgetting as a loss or fading of memories, but we fail to imagine that it can also mean the addition of false details to them.
Loftus also studies the insidious influence of what are known as "push polls," in which surveys embed deceptive information in the form of a question, such as: "What would you think of Joe Biden if you knew he had been convicted of tax evasion?" Just imagining how artificial intelligence could carry out this kind of manipulation at scale is worrying.

Faked images and planted memories

Pat Pataranutaporn, a researcher at the MIT Media Lab, points out that manipulating memory is quite different from fooling people with deepfakes. You don't need to create a convincing counterfeit edition of the New York Times; it is enough to persuade someone that they read something there in the past. "People don't usually question their own memory," he said. Pataranutaporn led three experiments on memory. The first examined how an interrogation conducted by a chatbot could alter witness testimony simply by embedding suggestions in its questions, extending Loftus's earlier work on human interrogation. In that study, participants watched a video of an armed robbery. Some were then asked misleading questions, such as, "Was there a security camera near the spot where the robbers parked the car?" About a third of those participants later remembered seeing the robbers arrive by car, even though there was no car at all. The false memory persisted even a week later.

The multiplier effect of artificial intelligence

Participants were divided into three groups: one received no misleading questions, another received them in writing, and a third received them from a chatbot. The chatbot group formed 1.7 times as many false memories as those who received the misleading information in writing.
Another study showed that AI-generated summaries or chat conversations can easily plant false memories about a story a person has read. More dangerous still, says Pataranutaporn, those who received these AI-generated summaries or conversations retained less of the real information, and were less confident about the true facts they did remember. The third study tested how artificial intelligence can plant false memories using images and video. Two hundred volunteers were divided into four groups, and each was shown 24 photos, some of them news images, others personal ones such as wedding photos posted on social media.

Even original photos deceive

A few minutes later, each group viewed a second set: one group saw the same original photos, another saw versions edited by artificial intelligence, a third saw the AI-edited photos turned into short AI-generated videos, and the last group saw fully AI-generated material. Even the group that saw only the original photos formed some false memories, which is to be expected given the difficulty of remembering 24 distinct images. But the groups exposed to any level of manipulation reported significantly higher rates of false memories. The greatest memory distortion occurred in the group that watched videos generated from AI-edited images. Younger people were somewhat more likely to adopt false memories than older ones, while education level had no significant effect. Interestingly, these false memories did not require participants to believe the content was real: they were told from the start that they would be seeing AI-generated content.
Between truth and illusion

In the photo experiment, some of the edits involved changes to the background, such as adding a military presence to a public gathering or altering the weather, while preserving most features of the original image. As students of misinformation have learned, a deceptive message must be at least 60 percent true to take hold. This recent research should prompt a broader discussion about technology's effect on our perception of reality, an effect that goes well beyond the mere spread of misinformation. Social media algorithms already push people toward fringe ideas and conspiracy theories by creating an illusion of popularity and influence. AI chatbots will have subtler and harder-to-detect consequences. We must remain open to changing our opinions in response to strong facts and arguments, and wary of attempts to change our beliefs by distorting what we see, feel, or remember.