The artificial intelligence industry needs to step up the fight against deepfakes
If people are worried about artificial intelligence, it is not only because of what it might bring in the future; they are also remembering the past, and in particular the damage done by social media. For years, hate speech and misinformation have overwhelmed the content moderation systems of Facebook and Twitter and spread around the world. Now photos and videos doctored with deepfake technology are invading the same platforms. While Facebook bears responsibility for the misleading content distributed on its platform, the AI companies that build these tools have their own role to play in confronting it. Unfortunately, much like the social media companies, they are doing that work with little transparency.

I contacted about ten generative AI companies whose tools can produce photos, videos, text and synthetic voices that are hard to distinguish from the real thing, to ask how they ensure users comply with their rules. All of them said they use software to monitor what users produce, and most said they also have people checking those systems, but almost none of them agreed to reveal how many people are assigned to oversee them.

Should these companies have to disclose such information at all? Unlike the pharmaceutical, food and automotive industries, AI companies are under no legal obligation to reveal the details of their safety procedures. Like the social media companies, they can be as secretive about their work as they like, and that will probably remain the case for years to come. The European Union's forthcoming AI law does address "transparency requirements", but it is not yet clear whether it will subject the safety practices of AI companies to the same kind of audits that car and food manufacturers undergo. Those industries took decades to adopt strict safety standards, yet the world cannot afford to leave AI tools to operate with absolute freedom for that long, given how rapidly they are developing. Midjourney recently updated its software to generate images so realistic that they show the skin pores and fine lines of politicians.

At the start of a landmark election year, in which roughly half the world's population will go to the polls, this regulatory void means that AI-generated content could do devastating damage to democracy, women's rights, the creative arts and much more. There are ways to address the problem, including pressing AI companies to meet transparency standards for their safety procedures, and that starts with asking them questions. When I contacted OpenAI, Microsoft and Midjourney, I kept my questions simple: How do you enforce your rules, through software and through people, and how many people are assigned to that work?

The role of the human element
Most of the companies were willing to share several paragraphs of detail about the processes they use to prevent misuse (though the wording was vague, like public relations copy). OpenAI, for example, has two teams that help retrain its AI models to make them safer or to detect harmful output. Stability AI, the company behind the controversial image generator Stable Diffusion, said it uses "filters" to block images that violate its policies, and that human moderators check prompts and images that get flagged.

But only a few companies revealed how many people they employ to oversee those systems. These workers are like safety inspectors. On social media they are called content moderators, and they play a difficult but essential role in reviewing the content that algorithms flag as, for example, racist, misogynistic or violent. Facebook has more than 15,000 moderators who maintain the integrity of the site without stifling users' freedoms; humans are best placed to strike such a delicate balance.

It is true that, thanks to the filters built into most AI tools, they do not churn out the kind of abusive content that people post on Facebook. But these tools could be safer and more trustworthy if the companies employed more moderators. People remain the best line of defense against abusive content in the absence of better software, and the software available so far has shown its shortcomings. The case for hiring more people is underscored by the spread of deepfake pornographic images of the singer Taylor Swift, and of audio recordings that faked the voice of President Joe Biden and other international politicians, all of which show that AI and technology companies are not investing enough in safety procedures. Admittedly, hiring more people to help companies enforce their rules is like throwing buckets of water on a house that is already ablaze: it will not solve the problem, but it can temporarily improve the situation.

"If you're a startup building an AI product, hiring humans at various stages of the development process is hugely wise, if not essential," says Ben Whitelaw, founder of Everything in Moderation, a newsletter about online safety. Several AI companies admitted to employing only one or two human moderators. Runway, which generates videos, said its own researchers handled the work. Descript, which makes a voice-cloning tool called Overdub, said it checked only a sample of cloned voices to make sure they matched a consent statement read aloud by customers; a company spokesperson argued that inspecting customers' work would violate their privacy.

AI companies enjoy unparalleled freedom to keep their work confidential. But if they seek the trust of the public, regulators and civil society, it is in their interest to pull back the curtain and show the world exactly how they enforce their rules. Hiring more people would not be a bad idea either.
An outsized focus on the race to make artificial intelligence "smarter", so that fake photos look more genuine, generated text reads more fluently and voice cloning becomes more convincing, threatens to drag us into a world of danger and confusion. The more honest course now is to hire more people and raise safety standards before all of this becomes much harder to control.