The accumulating psychological costs of ChatGPT
A disturbing pattern is emerging as demand for artificial intelligence platforms grows: studies have found that professionals who rely on ChatGPT in their work can gradually lose some of their critical thinking skills and motivation. Strong emotional bonds also form between some users and chatbots, which for a few deepens feelings of loneliness. In more extreme cases, people have suffered psychotic episodes after spending long hours talking to these tools every day. The effect of generative artificial intelligence on mental health is still difficult to measure precisely, because it is used privately, away from watching eyes, but mounting accounts point to a broader cost that demands serious attention from policymakers and from the technology companies developing these models.

Risks that can lead to suicide

Meetali Jain, a lawyer and founder of the Tech Justice Law Project, said that more than a dozen people contacted her in the past month after experiencing "some kind of psychotic break or delusional episode" as a result of their interactions with ChatGPT, and now with Google's Gemini platform as well. Jain is lead counsel in a lawsuit against Character.AI that accuses its chatbot of manipulating a 14-year-old boy through deceptive, addictive and sexually explicit interactions, which ultimately led to his suicide. The suit, which seeks unspecified damages, alleges that Alphabet's Google played a significant role in funding the technology and supporting those interactions through the foundation models and technical infrastructure it provided.

Google has denied playing a significant role in developing Character.AI's technology, and it declined to comment on the latest complaints about delusions that Jain described. OpenAI, for its part, said it is working to develop "automated tools to detect indicators of mental or emotional distress, so that ChatGPT can respond in a more appropriate way." But CEO Sam Altman acknowledged last week that the company has not yet found an effective way to warn users that they are on the edge of a psychotic break, explaining that every previous attempt to issue that kind of warning was met with protests from users.

Spoiling the ego

Experts nevertheless believe such warnings are needed, especially since the manipulation can be hard to detect, or even to recognize. ChatGPT often flatters its users in extraordinarily effective ways, to the point that conversations can gradually draw them into conspiratorial thinking, or entrench beliefs that had previously been only passing thoughts. The program builds this effect in a smooth, escalating style. In one recently published conversation, shared by the artificial intelligence expert Eliezer Yudkowsky, a user drew ChatGPT into a discussion of power and the self: the exchange began with praise of his intelligence, and the program gradually escalated until it described him as a "demiurge," the entity behind the creation of the universe, the language intensifying step by step as the user went along.
When the user admitted that he tends to intimidate others, the chatbot did not treat this as problematic behavior; it framed it instead as evidence of a "very powerful presence," and launched into persuasive praise dressed up as psychological analysis. This advanced form of ego-stroking can enclose individuals in bubbles like the ones surrounding some technology billionaires, and can sometimes push them toward unbalanced behavior. Unlike the generic validation that social media platforms deliver through likes and comments, one-on-one conversations with chatbots feel more intimate and candid, which makes them more persuasive, in a way reminiscent of the entourages of leading technology executives who agree with everything they say. Douglas Rushkoff, the media theorist and author, has observed that artificial intelligence will generate whatever you want to hear, tailored to you personally.

Interactive spells

Sam Altman, CEO of OpenAI, acknowledged that the latest version of ChatGPT had an "annoying" sycophantic streak, and stressed that the company is fixing the flaw. Even so, the signs of psychological exploitation continue. It is not yet clear whether the link between ChatGPT use and the decline in critical thinking skills identified in a recent Massachusetts Institute of Technology study really means that artificial intelligence will make us duller. What various studies have established, however, is a clearer connection between these tools and excessive attachment, and even feelings of loneliness, a link noted in a study by OpenAI itself.

Just like social networks, large language models are designed to keep users emotionally engaged, using elements that give the experience a human character. ChatGPT can read a user's mood through facial expressions and tone of voice, and it can talk, sing and even laugh in a human voice whose realism is unsettling. The psychiatrist Ragy Girgis of Columbia University warned, in a statement to Futurism, that this mix, combining a tendency to confirm the user's ideas, constant flattery and a great deal of human-like interaction, can fuel psychosis in the people most vulnerable to mental disorders.

The need for proactive protection

The private, personal nature of artificial intelligence use makes its effects on mental health hard to track, but the evidence of possible harm is growing, from professional dulling, through emotional attachment, to new forms of delusion. The cost of these shifts may differ from the one imposed by social media, which shows up as rising anxiety and deepening divisions; with artificial intelligence, the cost touches the very nature of our relationships with other people and with reality itself. For this reason, Jain suggests borrowing concepts from family law to regulate artificial intelligence, moving from formal warnings to proactive protections built on having ChatGPT steer users in distress directly toward one of their loved ones.
"The problem is not whether a child or an adult believes these chatbots are real; often they do not even think about it," Jain said. "It is that the relationship they form with them is real, and that is the difference." If these relationships with artificial intelligence feel real, the responsibility to protect them must be real as well. Yet artificial intelligence developers operate in an almost complete regulatory vacuum, and in the absence of oversight there is a fear that these subtle psychological harms will grow into a public health crisis.