OpenAI rolls out o3 and o4-mini: From coding and math to images, how ChatGPT's new models handle it all | Mint

OpenAI released its most advanced AI models on Wednesday: o3 and o4-mini. These models are a significant step forward in how artificial intelligence can reason, solve problems and even use tools to get things done, the San Francisco-based company claims. “These new models are part of OpenAI’s o-series, and they are designed to think for longer before answering, helping them tackle harder, more complex questions in less time,” the company said in a blog post.

What is OpenAI o3?

The flagship model, OpenAI o3, is now the most powerful reasoning model OpenAI has built. It is said to perform exceptionally well on subjects such as programming, maths, science and even visual analysis, setting new records on well-known academic benchmarks such as Codeforces, SWE-bench and MMMU. According to the company, o3 makes 20 percent fewer major errors than its predecessor, o1, especially in demanding areas such as business consulting and technical innovation.

What is OpenAI o4-mini?

For those looking for speed and efficiency, o4-mini is a smaller but powerful model built for fast, cost-effective reasoning. Despite its size, it turns heads on math-heavy exams such as the 2024 and 2025 AIME, and it outperforms earlier models across a variety of STEM and non-STEM tasks, OpenAI claims.

Features of OpenAI o3 and o4-mini

These models are now much better at deciding when and how to use tools. They can search the web, execute Python code, analyse images, generate charts and explain their findings, all without much hand-holding. So, if you ask something like, “How will California’s summer energy consumption compare to last year?”, the model can look up the data, build a forecast, generate a graph and walk you through the reasoning behind its prediction. Another standout capability is their ability to work with images in a much more intelligent way.
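To make the energy-consumption example concrete, here is a minimal sketch of the kind of Python a reasoning model might execute as one tool step: take last year's figures, estimate year-over-year growth, and project the missing month. All numbers below are invented for illustration; in practice the model would fetch real data via its web-search tool first.

```python
# Hypothetical tool step: compare this summer's energy use with last
# year's and project the remainder of the season.
# All figures are made up for illustration (rough TWh scale).

summer_2024 = {"Jun": 21.4, "Jul": 24.9, "Aug": 24.1}
summer_2025 = {"Jun": 22.3, "Jul": 25.8}  # August not yet observed

# Average year-over-year growth factor across the months observed so far.
growth = sum(summer_2025[m] / summer_2024[m] for m in summer_2025) / len(summer_2025)

# Project the missing month by scaling last year's figure.
projected_aug = summer_2024["Aug"] * growth

total_2024 = sum(summer_2024.values())
total_2025 = sum(summer_2025.values()) + projected_aug

print(f"YoY growth factor: {growth:.3f}")
print(f"Projected Aug 2025: {projected_aug:.1f} TWh")
print(f"Summer totals: 2024 = {total_2024:.1f} TWh, 2025 (projected) = {total_2025:.1f} TWh")
```

The model would then typically hand the resulting series to a charting tool to produce the graph the article describes, and narrate the assumptions behind the projection.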
You can upload a photo of a whiteboard, a rough sketch or even a blurry textbook diagram, and the model can interpret and reason about it, sometimes even manipulating the image as part of its thinking. This kind of visual reasoning is something earlier models couldn't really do well. Both o3 and o4-mini now have full access to ChatGPT's tools, including file analysis, web search, code interpretation and image generation.

What is new and updated?

These models are trained to know when to use each tool, which helps them handle more complicated tasks with ease and flexibility. OpenAI has also strengthened its safety protocols. The models have been trained with updated safety data and rigorously tested in risk areas such as cybersecurity, bio-threats and even AI self-improvement, the company claims. OpenAI says both models passed its most demanding safety tests and remain well below any high-risk thresholds.

Alongside these upgrades, OpenAI introduced Codex CLI, a simple but powerful coding agent that runs directly from your computer's terminal. It brings the reasoning abilities of o3 and o4-mini to your local machine, supporting tasks such as reading screenshots or working with your own codebase. It is open source and already available on GitHub, and OpenAI is also launching a $1 million grants programme to support innovative projects built with Codex CLI.

From April 16, 2025, ChatGPT Plus, Pro and Team users can access o3, o4-mini and the new o4-mini-high, which replace older versions such as o1 and o3-mini. Enterprise and education users gain access next week. Free users can also try o4-mini by selecting the new 'Think' option when typing a prompt.

First published: 17 Apr 2025, 04:19 PM IST