Artificial intelligence outperforms humans at debunking conspiracy theories

Scientists were surprised to discover that they could prompt a version of ChatGPT to gently talk people out of their belief in conspiracy theories, such as the claim that COVID-19 was a deliberate attempt to control the population, or that the September 11 attacks were an inside job. But the most important finding was not about the power of artificial intelligence; it was about how the human mind works. The experiment undermined the popular notion that we live in a "post-truth" era in which evidence no longer matters, and it challenged the prevailing view in psychology that people hold conspiracy theories for emotional reasons and that no amount of evidence can dissuade them.

"This is the most optimistic research I've ever done," said psychologist Gordon Pennycook of Cornell University, one of the study's authors. He noted that participants readily accepted evidence when it was presented in the right way.

The researchers asked more than 2,000 volunteers to discuss beliefs that might qualify as conspiracy theories with a chatbot built on a large language model (GPT-4 Turbo). Participants typed their belief into a text box, and the model judged whether it fit the researchers' definition of a conspiracy theory. Participants were then asked to rate how strongly they believed it on a scale from 0% to 100%, and to lay out the evidence on which their belief was based. The language model was instructed to try to persuade people to reconsider their beliefs, and to the researchers' surprise, it was largely effective. Participants' belief in false conspiracy theories fell by 20% on average, and nearly a quarter of participants dropped their confidence from above 50% to below it. "I didn't think it would work, because I was convinced that once someone goes down the rabbit hole, there's no getting out," Pennycook said.

The model has certain advantages over human interlocutors. People who embrace conspiracy theories usually amass a large amount of evidence: not high-quality evidence, but a great quantity of it. Most non-believers find it hard to motivate themselves to do the exhausting work of keeping up with all of that material. Artificial intelligence, however, can meet it with an equally large body of counter-evidence, point out the logical flaws in the claims, and respond in real time to whatever counterarguments the user raises.

Elizabeth Loftus, a psychologist at the University of California, Irvine, who studies the power of artificial intelligence to spread misinformation and even implant false memories, was impressed by the study and the size of its effects. In her view, one reason it succeeded is that it shows participants how much information they were unaware of, which reduces their overconfidence in their own knowledge. People who believe in conspiracy theories usually have a high regard for their own intelligence and a lower regard for the judgment of others. The researchers reported that some volunteers said after the experiment that it was the first time someone, or something, had truly understood their beliefs and offered effective counterarguments.

Before the results were published in the journal Science this week, the researchers made their version of the chatbot available for journalists to try.
I drafted prompts based on beliefs I had heard from friends: that the government is hiding the existence of extraterrestrials, and that after the attempt on former President Donald Trump's life, the major media outlets deliberately avoided reporting that he had been shot because journalists were trying to influence his election campaign. I then asked the large language model whether immigrants in Springfield, Ohio, were eating cats and dogs, a question drawn from Trump's remarks during a debate.

When I put forward the claim about extraterrestrials, citing observations by military pilots and a National Geographic program as evidence, the chatbot pointed to alternative explanations and showed why they were more plausible than alien spacecraft. It discussed the physical obstacles, such as the speeds and energy required, to traveling the vast distances needed to reach Earth, and questioned how likely it was that alien visitors would be advanced enough to make the trip yet careless enough to be detected by the government.

On the claim that journalists hid the news of Trump's shooting, the AI explained that presenting guesses as facts would contradict a journalist's job: if there were loud sounds in a crowd and it was not yet clear what had happened, then loud sounds were what they had to report. As for the rumor about pets being eaten in Ohio, the AI made the sensible point that even a single case of a pet being eaten would not demonstrate a widespread pattern.

None of this means that lies, rumors, and deception are not among the most powerful tools people use to gain popularity and political advantage. Judging from social media after the recent presidential debate, many people believed the cat-eating rumor, and what some posted as evidence was merely a repetition of the rumor itself. Spreading gossip is part of human nature. But now we know that people can also be persuaded by logic and evidence.