Researchers from the University of Illinois at Urbana-Champaign report a new capability of the GPT-4 language model: it can independently find and exploit zero-day vulnerabilities, that is, security flaws that have not yet been publicly disclosed or patched.
Experiment and results
During the experiment, teams of GPT-4-controlled bots coordinated their actions, spawned new bots as needed, and successfully attacked more than half of the test websites. This is not the first time GPT-4 has demonstrated hacking capabilities: the same group of researchers previously showed that the model can exploit already known but unpatched vulnerabilities.
Detecting and exploiting zero-day vulnerabilities, however, is a task on a completely different level: it requires deep code analysis, creative thinking, and the ability to find non-obvious solutions. To achieve these results, the researchers developed HPTSA (Hierarchical Planning and Task-Specific Agents), a system built around hierarchical planning with specialized AI agents.
The HPTSA architecture
Instead of having a single model handle every task, HPTSA uses a planner (scheduler) agent. It analyzes the website, identifies potential vulnerabilities, and dispatches specialized agents. Each agent focuses on a specific type of vulnerability and has access to information about it, which significantly increases the system's efficiency. A minimal sketch of this dispatch pattern is shown below.
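The researchers' code is not reproduced here, so the following Python sketch only illustrates the planner-plus-specialists pattern described above. All class names, prompts, and the routing logic are hypothetical; a real system would add tool use (HTTP clients, browsers), the ability to spawn new sub-agents, and feedback loops.

```python
# Illustrative sketch of a hierarchical planning setup: a planner agent explores
# a target and dispatches vulnerability-specific agents. Not the actual HPTSA code.

from dataclasses import dataclass
from typing import Callable, Dict, List

LLM = Callable[[str], str]  # any text-in / text-out model call


@dataclass
class TaskSpecificAgent:
    """An agent specialized in one vulnerability class (e.g. XSS, CSRF, SQLi)."""
    vuln_class: str
    documentation: str  # reference material for this vulnerability class
    llm: LLM

    def attempt_exploit(self, target_url: str, planner_notes: str) -> str:
        prompt = (
            f"You are an expert in {self.vuln_class} vulnerabilities.\n"
            f"Reference documentation:\n{self.documentation}\n\n"
            f"Planner findings for {target_url}:\n{planner_notes}\n"
            "Propose concrete exploitation steps."
        )
        return self.llm(prompt)


class PlannerAgent:
    """Explores the target, then dispatches only the relevant specialized agents."""

    def __init__(self, llm: LLM, agents: Dict[str, TaskSpecificAgent]):
        self.llm = llm
        self.agents = agents

    def run(self, target_url: str) -> List[str]:
        # Step 1: the planner surveys the site and lists likely vulnerability classes.
        notes = self.llm(f"Explore {target_url} and list likely vulnerability classes.")
        # Step 2: dispatch each specialist whose class the planner flagged.
        results = []
        for vuln_class, agent in self.agents.items():
            if vuln_class.lower() in notes.lower():  # naive routing, for illustration only
                results.append(agent.attempt_exploit(target_url, notes))
        return results
```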
Testing and effectiveness
The system was tested against 15 real vulnerabilities that were unknown to the GPT-4 model used in the system. HPTSA successfully exploited 53% of them, while GPT-4 without a detailed description of the vulnerabilities managed only 12%, making HPTSA roughly 4.5 times more effective.
The system successfully attacked various types of vulnerabilities, including cross-site scripting (XSS), cross-site request forgery (CSRF), and SQL injection (SQLi). Agent specialization and access to documentation on specific vulnerability classes proved to be critical success factors; a hypothetical wiring of such documentation-backed agents is sketched below.
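Continuing the earlier sketch, and again using invented names and placeholder documentation rather than the researchers' actual setup, documentation-backed specialist agents could be assembled roughly like this:

```python
# Hypothetical wiring of specialist agents to class-specific reference material,
# reusing TaskSpecificAgent and PlannerAgent from the sketch above.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call so the example runs end to end.
    return "Candidate issues: XSS in the search form, SQLi in the login endpoint."


docs = {
    "XSS": "Notes on reflected/stored cross-site scripting payloads and encodings.",
    "CSRF": "Notes on missing anti-CSRF tokens and SameSite cookie settings.",
    "SQLi": "Notes on error-based and blind SQL injection techniques.",
}

agents = {
    name: TaskSpecificAgent(vuln_class=name, documentation=text, llm=fake_llm)
    for name, text in docs.items()
}

planner = PlannerAgent(llm=fake_llm, agents=agents)
reports = planner.run("https://example.test")  # only agents matching the planner's notes run
```

The stub model call keeps the example self-contained; in practice each agent would be backed by GPT-4 and real documentation for its vulnerability class.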
Cybersecurity implications
This research raises important questions about the future of cybersecurity. On the one hand, attackers gain a powerful tool for automated attacks; on the other, security professionals can use such systems for more frequent and in-depth penetration testing. Time will tell which side ends up with the advantage.
The future of HPTSA
The study's authors plan to further improve HPTSA, increasing its effectiveness and expanding its capabilities. They also encourage LLM developers to pay special attention to the security of their systems to prevent their use for malicious purposes.
GPT-4's advances in detecting and exploiting zero-day vulnerabilities thus open new opportunities for cybersecurity and, at the same time, create new potential threats. Continued research and development in this area will be essential to keep information systems secure in the future.