"Ant", supported by Jack, is not to promote artificial intelligence with Chinese chips

Ant Group, backed by billionaire Jack Ma, has developed techniques for training artificial intelligence models on Chinese-made semiconductors that would cut costs by about 20%, according to people familiar with the matter. Ant used domestic chips, including some from affiliate Alibaba Group and from Huawei Technologies, to train models using the Mixture-of-Experts machine-learning approach, the people said. The results were comparable to those achieved with Nvidia chips such as the H800, said the people, who asked not to be identified because the information is not public. Hangzhou-based Ant still uses Nvidia chips for AI development, but it has increasingly relied on alternatives, including chips from Advanced Micro Devices and Chinese chipmakers, for its latest models, one person said.

The models mark Ant's entry into an accelerating race between Chinese and US companies, especially after DeepSeek demonstrated that capable models can be trained for far less than the billions of dollars invested by OpenAI and Google parent Alphabet. The development also underscores Chinese companies' efforts to find local alternatives to Nvidia's semiconductors. Although the H800 is not Nvidia's most advanced chip, it is relatively powerful and is currently barred from export to China under US restrictions. Ant published a research paper this month claiming that its models at times outperformed those of Meta Platforms on certain benchmarks, which Bloomberg News could not verify independently. If the claims hold up, Ant's techniques could mark a further step in the development of AI in China by cutting the cost of inference and supporting AI services.
With large investments continuing to flow into artificial intelligence, Mixture-of-Experts models have become a popular choice, adopted by Google, Hangzhou-based startup DeepSeek, and other companies. The approach divides a task across smaller sets of parameters, much like having a team of specialists each focusing on one part of a job, which makes the process more efficient. Ant declined to comment in an emailed statement.

Training Mixture-of-Experts models typically relies on high-performance chips such as the graphics processing units Nvidia sells. That cost has so far been prohibitive for smaller firms, limiting the broader spread of these models. Ant has therefore been working on more efficient ways to train large language models and remove that obstacle, a goal made explicit in the title of its research paper: scaling models without the need for premium, high-cost GPUs. This approach runs counter to Nvidia's strategy. Chief executive Jensen Huang has argued that demand for computing will keep rising even as more efficient models emerge, such as DeepSeek's R1, and that companies will need stronger chips to grow their revenue, not cheaper chips to cut costs. Nvidia has accordingly kept developing larger GPUs with more processing cores, more transistors, and greater memory capacity. Ant's paper highlights the accelerating pace of innovation and technological progress in China's AI sector.
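To illustrate the Mixture-of-Experts idea described above, here is a minimal, hypothetical sketch in plain Python. It is not related to Ant's actual implementation; the `Expert` and `MoELayer` names, the scalar "weights," and the top-1 gating rule are all illustrative assumptions, chosen only to show how a router can send each input to a single specialist so that most of the network stays idle for any given token:

```python
class Expert:
    """A tiny 'specialist' that transforms its input with its own weight."""
    def __init__(self, weight):
        self.weight = weight

    def forward(self, x):
        return self.weight * x


class MoELayer:
    """Routes each input to the single best-scoring expert (top-1 gating)."""
    def __init__(self, experts, gate_weights):
        # One gating score per expert; the router uses these to pick a winner.
        assert len(experts) == len(gate_weights)
        self.experts = experts
        self.gate_weights = gate_weights

    def forward(self, x):
        # The router scores every expert for this input...
        scores = [g * x for g in self.gate_weights]
        # ...but only the top-scoring expert actually runs (sparse compute).
        best = max(range(len(scores)), key=lambda i: scores[i])
        return best, self.experts[best].forward(x)


layer = MoELayer(
    experts=[Expert(0.5), Expert(2.0), Expert(-1.0)],
    gate_weights=[0.1, 0.8, 0.1],
)
chosen, y = layer.forward(3.0)
print(chosen, y)  # expert 1 wins the gate; output is 2.0 * 3.0 = 6.0
```

The efficiency argument in the article follows from the routing step: only one expert's parameters are exercised per input, so compute per token grows with the chosen expert's size, not with the total parameter count.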
If the company's claims are confirmed, they reflect China's push toward self-sufficiency, using low-cost, computationally efficient models as alternatives to the advanced US chips barred under export restrictions, according to Robert Lea, an analyst at Bloomberg Intelligence. According to the paper, training one trillion tokens on high-performance hardware would cost about 6.35 million yuan ($880,000), but Ant's approach cut that to about 5.1 million yuan using lower-specification devices. Tokens are the units of information a model ingests during training to learn about the world and deliver accurate answers to users.

Ant plans to apply its latest large language models, Ling-Plus and Ling-Lite, in industrial AI applications, including healthcare and financial services, the people said. The company acquired the Chinese online platform Haodf.com this year to improve its AI services in healthcare. It also operates Zhixiaobao, an AI-powered personal assistant app, as well as a smart financial advisory service called Maxiaocai. In its paper, Ant said the Ling-Lite model outperformed one of Meta's Llama models on a key benchmark for English-language understanding. Both Ling-Lite and Ling-Plus also beat the equivalent DeepSeek models on Chinese-language tests. "If you find one point of attack to beat the world's best kung fu master, you can still say you beat them, which is why real-world application is crucial," said Robin Yu, chief technology officer of Shengshang Tech, a Beijing-based provider of AI solutions.
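Taking the paper's cost figures at face value, the reported saving can be checked with a line of arithmetic, and it lines up with the roughly 20% cost reduction cited earlier in the article:

```python
# Cost figures reported in Ant's paper (millions of yuan) for training
# one trillion tokens on high-end vs. lower-specification hardware.
baseline_cost = 6.35
optimized_cost = 5.1

saving = (baseline_cost - optimized_cost) / baseline_cost
print(f"{saving:.1%}")  # roughly a 20% reduction
```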
Ant has made its Ling large language models open source. Ling-Lite contains 16.8 billion parameters, the adjustable settings that work like knobs and dials steering the model, while Ling-Plus contains 290 billion, making it large by the standards of language models. For comparison, experts estimate that GPT-4.5, the model behind the ChatGPT app, has about 1.8 trillion parameters, according to MIT Technology Review, while DeepSeek's R1 has 671 billion. Ant still faces challenges in some aspects of training, particularly stability: its paper noted that even small changes to the hardware or the model's structure could cause problems, including spikes in the models' error rates.