
Cybersecurity: Offensive vs. Defensive AI

Artificial Intelligence can be used by organizations to guard against and respond to threats, but also by attackers to carry out their schemes.

While AI can be seen as a risk (a 2023 IBM study indicated that almost half of the executives surveyed feared that adopting generative AI would introduce new security pitfalls), it is also an opportunity to improve security tools and automate certain tasks.

So, in practice, what are the potential uses of AI in the context of cybersecurity? Let’s take a look. 
 

Offensive AI: a constantly evolving threat 

  • Help with generating malicious code or scripts 
  • Help with crafting phishing messages, voice cloning, or deepfakes 
  • Help with processing stolen data 
     

Defensive AI: anticipation and learning to stay ahead of the game 

  • Penetration test automation 
  • Reverse engineering assistance 
  • Automated classification and prioritization of security events 
  • Analysis of malicious files and behavior 
  • Automated threat response 
  • Continuous learning to adapt and reinforce security 
     

AI brings productivity gains at all levels, and for the moment it helps defenders more than attackers.  

Today’s widespread large general-purpose models, such as ChatGPT, are costly and lack the precision needed for critical cybersecurity tasks. 

Models are also specializing, becoming smaller and more efficient for specific tasks, and fine-tuning for cybersecurity applications will continue to develop. 
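
To make the idea of small, task-specific models concrete, here is a minimal sketch of automated classification and prioritization of security events, one of the defensive uses listed above. It uses scikit-learn, and the event descriptions and priority labels are entirely hypothetical; this illustrates the general technique, not HarfangLab's implementation.

```python
# Minimal sketch (illustrative only): a small, specialized model that
# classifies security events by priority, rather than relying on a
# large general-purpose LLM. All event texts and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: short event descriptions with an
# analyst-assigned priority.
events = [
    "powershell spawned by winword.exe with encoded command",
    "failed login from known corporate IP during business hours",
    "outbound connection to newly registered domain from a server",
    "scheduled antivirus signature update completed",
]
priorities = ["high", "low", "high", "low"]

# A small text-classification pipeline: TF-IDF features + logistic regression.
triage_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage_model.fit(events, priorities)

# Classify a new event so analysts can focus on what matters first.
new_event = ["lsass.exe memory accessed by unsigned process"]
print(triage_model.predict(new_event))  # e.g. ['high']
```

In practice such a model would be trained on far more data and richer features, but the principle is the same: a small, specialized classifier can triage events quickly and cheaply.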


Want to know how we use Artificial Intelligence in practice at HarfangLab? Have a look right here: