Yes, they do. OpenAI has confirmed in a blog post that its models are steadily becoming more capable in cybersecurity and that it expects this trend to continue in future models. Bad actors have been trying to use AI for malicious activity since AI chatbots were first introduced, and the more powerful the models get, the easier it becomes to launch damaging attacks.
The statement comes as OpenAI also released GPT-5.2, which it calls the company's "best model yet for everyday professional use." OpenAI also says the new model is much better at writing code than previous versions. In addition, the company is planning to establish an advisory group, the Frontier Risk Council, to advise on how to limit the malicious use of AI models, and it is testing the waters with a recently launched tool, Aardvark, which is currently in beta.
Key takeaways
- OpenAI confirms that AI models will reach "high" cybersecurity risk levels and is racing to address it.
- Advanced AI tools continue to dramatically lower the bar for cybercriminals.
- GPT-5.2 is better at reasoning, coding, and agentic tasks, and it is a step towards “high” cyber-risk territory.
- OpenAI is responding on both defense and offense: it is creating a cybersecurity council to advise on limiting malicious use and launching Aardvark, a tool aimed at helping organizations stay a step ahead of criminals.
Why is OpenAI worried about increased cybersecurity risks?
Just a few years ago, launching a sophisticated cyberattack required extensive knowledge and experience. With the rapid advancement of AI, however, people with only basic cybersecurity knowledge can now guide AI tools to identify system loopholes or suggest ways to circumvent security measures. The many powerful Malware-as-a-Service (MaaS) offerings on the Dark Web have also empowered criminals with no real IT background to launch sophisticated attacks.
OpenAI recognizes the importance of preventing people from using its models for malicious purposes, so it is assembling an advisory group that it hopes will help limit abuse by hackers. OpenAI is also actively training current and future models to either refuse harmful requests or respond to them safely, while keeping the models useful for educational and defensive use cases.
What is new in GPT-5.2?
OpenAI just released its latest AI model for professional work, which it says is its most capable model series yet for this type of knowledge work. Only paid ChatGPT users can access it for now, and OpenAI will soon roll it out to all Microsoft 365 Copilot users. Key improvements include stronger reasoning, fewer hallucinations, clearer communication, and better agentic capabilities for professional tasks.
OpenAI’s Security Testing Tool Aardvark
OpenAI is currently beta testing a solution called Aardvark, which it hopes will help businesses catch vulnerabilities in source code. The tool acts as a security partner: it tests an organization's code, flags weaknesses, and suggests ways to fix them. While Aardvark is not widely available yet, OpenAI is still accepting organizations that are willing to participate in the beta and meet the requirements. OpenAI states that software is the backbone of every industry and hopes Aardvark will help organizations strengthen their security and prepare better for hacker attacks.
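OpenAI has not published Aardvark's interface, so the sketch below is not Aardvark itself. It only illustrates the general pattern of LLM-assisted code review using the public OpenAI Python SDK; the model name, prompt, and sample snippet are illustrative assumptions.

```python
# Illustrative sketch only: NOT Aardvark's API (which has not been published).
# It shows the general idea of asking an OpenAI model to review a code snippet
# for security issues via the standard Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
def get_user(db, username):
    # Builds SQL by string concatenation -- a classic injection risk
    return db.execute("SELECT * FROM users WHERE name = '" + username + "'")
'''

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute any model you have access to
    messages=[
        {
            "role": "system",
            "content": "You are a code security reviewer. List likely "
                       "vulnerabilities and suggest concrete fixes.",
        },
        {"role": "user", "content": SNIPPET},
    ],
)

# Print the model's review of the snippet
print(response.choices[0].message.content)
```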
It is no secret that OpenAI knows its AI could be used maliciously. While the organization is enthusiastic about the progress and capabilities of future models, it is also actively working to ensure they can be used only for safe and beneficial purposes. Every new model the company releases comes with improved capabilities, and OpenAI appears focused on channeling those reasoning capabilities toward beneficial use only.
OpenAI also hopes that organizations will be able to use AI to protect themselves, leveraging solutions such as Aardvark. The tool aims to strengthen security and keep organizations' systems prepared for hackers armed with MaaS cyberattack tools.