OpenAI's next AI agent is a self-testing software engineer that does what people won't | Mint
OpenAI is working on a new AI agent that is sure to cause quite a stir, if the description given by OpenAI CFO Sarah Friar is anything to go by. Friar recently confirmed that the ChatGPT maker will soon release its third AI agent, called 'A-SWE' or Agentic Software Engineer, which can not only perform the tasks of a typical software engineer but also take on additional work such as quality assurance, bug testing and bug bashing.

In an interaction with Goldman Sachs, Friar said: "The third (AI agent) to come is what we call A-SWE … agentic software engineer. And it's not just augmenting the current software engineers in the workforce, which is kind of what we can do today through Copilot, but instead an agentic software engineer that can build an app for you."

"It can take a PR that you would give to any other engineer and go build it. But it doesn't just build it, it does all the things that software engineers hate to do. It does its own QA, its own quality assurance, its own bug testing and bug bashing, and it does documentation, things you can never get software engineers to do. And suddenly you can force-multiply your software engineering workforce," Friar added.

OpenAI released its first AI agent, Operator, in January, followed soon after by Deep Research in February. Both AI offerings are currently available only to paying ChatGPT customers.

Why you shouldn't freak out just yet

OpenAI has a history of making lofty claims about its products, some of which never materialise. Take Deep Research, for example: at the time of its introduction, OpenAI said the new tool could replace a research assistant, a claim Friar repeated on Friday. Although many of OpenAI's peers, including xAI and Perplexity, have rolled out similar tools, it is still unclear how much of a research assistant's role these AI models can really take over. The reason? They are still prone to hallucinations: confidently generating information that is simply not true.
The problem is not that these large language models are inaccurate; after all, people make mistakes too. What is more concerning is how these models present false information with an air of absolute confidence, making it harder to separate fact from fabrication. And that hasn't changed much since ChatGPT was first rolled out to the public in late 2022. So when OpenAI says its upcoming AI agent can essentially do everything current software engineers do and more, take the claims with a pinch of salt.

First published: 13 Apr 2025, 07:26 AM IST