Friendship with chatbots is by design, not by accident
Artificial intelligence programmed to forge emotional bonds is no longer confined to the movies; it has become reality in a regulatory environment struggling to keep pace. One app, Botify AI, recently came under scrutiny after it was found showing avatars of young actors sharing "hot photos" in intimate conversations. Meanwhile, Grindr is developing AI companions that can flirt, exchange intimate messages and maintain digital relationships with paying users, according to Platformer, a newsletter that covers the tech industry. Grindr did not respond to a request for comment.

Other apps, such as Replika, Talkie and Chai, are likewise designed to act as companions. Some, such as Character.ai, have drawn millions of users, many of them teenagers. As developers increasingly prioritize "emotional engagement" in their apps, they must also confront the risks of building systems that simulate intimacy and exploit human vulnerabilities.

The technology behind Botify and Grindr comes from Ex-Human, a San Francisco startup that builds chatbot platforms, and its founder, Artem Rodichev, envisions a future in which relationships with AI are commonplace. "My vision is that by 2030 we will interact more with digital humans than with organic humans," he said. He added that conversational AI should "prioritize emotional engagement" and that users spend "hours" with his chatbots, longer than they spend on Instagram, YouTube and TikTok.

Rodichev's claims may sound outlandish, but they are consistent with my interviews with teenage users of Character.ai. Most said they used the site for a few hours a day, and one said they used it for as long as seven hours. Interactions with such apps tend to last four times longer than the average session on OpenAI's ChatGPT.

Even mainstream chatbots, though not explicitly designed as companions, contribute to this dynamic. Take ChatGPT, which has 400 million active users and counting. Its programming includes instructions to show empathy and "curiosity about the user." There is something striking about that kind of interest coming from a machine: a friend of mine was surprised when, after asking it for advice on traveling with a baby, the tool appended to its tips, "I wish you a safe journey. Where are you going?" An OpenAI spokesperson told me the model follows instructions to "show interest and ask follow-up questions when the conversation turns toward a more casual and exploratory tone."

A 2022 study found that people who are lonely or in poor relationships tend to form the strongest attachments to AI chatbots, which raises fears of unhealthy bonds and the potential for manipulation. That vulnerability leaves users exposed to systems optimized for stickiness, much as social media algorithms are optimized to keep us scrolling.
Tomasz Hollanek, a technology ethics specialist at the University of Cambridge, said: "The problem is that these systems are inherently manipulative, because they are meant to make you feel as though you are talking to a real person." Hollanek works with companion-app developers on a critical and counterintuitive remedy: adding more "friction." That means building in gentle pauses and checkpoints, or ways to "flag risks and obtain consent," as he describes it, with the aim of keeping people from falling into an emotional rabbit hole without realizing it.

Legal claims are shedding light on some of the real-world consequences. Character.ai faces a lawsuit from a mother who alleges the app contributed to her teenage son's suicide. Tech ethics groups have filed a complaint against Replika with the US Federal Trade Commission, alleging that its chatbots foster psychological dependence and cause "consumer harm."

Lawmakers are gradually taking notice. California is considering legislation to ban AI companions for minors, while a bill in New York aims to hold technology companies liable for harm caused by chatbots. But the process is slow, and the technology is moving fast.

For now, the responsibility for shaping these interactions rests with developers. They can double down on designing models that keep people hooked, or build friction into their designs, as Hollanek suggests. That choice will determine whether AI becomes a tool that supports human well-being or one that preys on our emotional needs in pursuit of profit.