Automation, prediction, text and image generation… the uses of AI are multiplying and expanding. But in concrete terms, in cybersecurity, and specifically in HarfangLab’s EDR, what are these uses? With what objectives, and for what results? We explain.
To begin with, what is AI? A definition.
“AI is a model applied to a use case. It’s Machine Learning in an app. In other words, for EDR, it’s, for example, Deep Learning applied to malware detection.”
Constant Bridon, Lead AI – HarfangLab
And what does Artificial Intelligence do in HarfangLab’s EDR?
To identify and remediate threats, HarfangLab relies on various engines, including an AI that can be activated on all endpoints.
This engine is a meta-model called HL-AI. It is based on two families of algorithms, illustrated below, and is used to determine the criticality of security alerts:
- Gradient Boosted Trees (GBT)
Discriminate malware from goodware on the basis of variables extracted from the executable file.
240: the number of variables used to identify malicious files
- Convolutional Neural Networks (CNN)
A convolutional neural network applied to an image transposition of binary files (malware / goodware).
1M: the number of parameters (GPT-3 has 175 billion)
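To make the two approaches more concrete, here is a minimal sketch of the two views a single executable can feed: a handful of hand-crafted variables for a gradient-boosted-tree classifier, and a grayscale image transposition of the raw bytes for a CNN. The feature choices, image size, and file path are illustrative assumptions, not HarfangLab’s actual 240 variables or network architecture.

```python
# Minimal sketch: two views of the same executable, one per model family.
# Features, image size and file path are illustrative assumptions only.
import math
import numpy as np

def extract_features(data: bytes) -> np.ndarray:
    """A few hand-crafted variables a GBT classifier could consume."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    probs = counts / max(len(data), 1)
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)  # overall byte entropy
    return np.array([
        len(data),                        # file size in bytes
        entropy,                          # packed/encrypted files tend to score high
        counts[0] / max(len(data), 1),    # proportion of null bytes
        data.count(b"http"),              # crude count of embedded URL fragments
    ], dtype=np.float32)

def bytes_to_image(data: bytes, side: int = 64) -> np.ndarray:
    """Transpose raw bytes into a fixed-size grayscale image a CNN could ingest."""
    buf = np.frombuffer(data, dtype=np.uint8)
    buf = np.resize(buf, side * side)     # truncate or tile to exactly side*side bytes
    return buf.reshape(side, side).astype(np.float32) / 255.0

sample = open("/bin/ls", "rb").read()     # any executable works for the demo
print(extract_features(sample).shape)     # (4,)      -> input to the GBT family
print(bytes_to_image(sample).shape)       # (64, 64)  -> input to the CNN family
```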
The advantage of combining these two methods is that you can choose between:
- reducing false positives while maintaining detection performance,
- or opening the floodgates to detect as many malicious files as possible, at the cost of a slightly higher number of false positives (see the sketch below).
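As a toy illustration of that trade-off, the rule below flags a file either only when both model families agree (fewer false positives) or as soon as one of them is suspicious (broader detection). The thresholds and the combination rule are assumptions made for the example, not HL-AI’s actual logic.

```python
# Toy trade-off between the two model families' scores (assumed thresholds).
def combined_verdict(gbt_score: float, cnn_score: float, mode: str = "low_fp") -> bool:
    """Return True if the file should be flagged as malicious."""
    if mode == "low_fp":
        # Conservative: both families must agree, keeping false positives rare.
        return gbt_score > 0.8 and cnn_score > 0.8
    if mode == "high_recall":
        # Aggressive: one suspicious score is enough, catching more malware
        # at the cost of somewhat more false positives.
        return gbt_score > 0.5 or cnn_score > 0.5
    raise ValueError(f"unknown mode: {mode}")

print(combined_verdict(0.9, 0.6, mode="low_fp"))       # False: the CNN is not confident enough
print(combined_verdict(0.9, 0.6, mode="high_recall"))  # True: one strong signal suffices
```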
And that’s not all.
In concrete terms, what are the advantages of the HL-AI Artificial Intelligence engine?
- The ability to contain risk upstream by predicting the harmfulness of a file as soon as it is executed.
- An engine that runs directly in the agent for optimum protection of the endpoint, even when it’s not connected to a network.
- The ability to identify threats unknown to malicious file databases, and to reinforce protection by supplementing detection rules.
- Optimized libraries and models: our neural network weighs no more than 5 MB, dependencies included!
A continuously improving engine
Cyber threats are constantly evolving, which is why the engine is regularly retrained to adapt to new risks.
Trained on around 2 million files, the models are improved with each new version to target the lowest possible false-positive rate without degrading detection capacity.
Trend in the number of false positives: divided by 10 between versions 2.10 and 2.30 (example based on Critical false alarms on Windows, representative of the trend for all false positives).
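As an illustration of the metric being tracked from one version to the next, the snippet below compares a synthetic “older” and “newer” model on labeled samples and reports detection rate versus false-positive rate. The data, scores and threshold are synthetic stand-ins, not HarfangLab’s corpus or evaluation pipeline.

```python
# Synthetic check that a newer model lowers false positives without losing detection.
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=2_000)       # 1 = malware, 0 = goodware

def rates(scores: np.ndarray, labels: np.ndarray, threshold: float = 0.5):
    flagged = scores >= threshold
    detection = flagged[labels == 1].mean()          # share of malware caught
    false_positives = flagged[labels == 0].mean()    # share of goodware wrongly flagged
    return detection, false_positives

# An "older" model with noisier scores and a "newer" one that separates classes better.
old_scores = np.clip(labels * 0.6 + rng.normal(0.25, 0.25, size=2_000), 0, 1)
new_scores = np.clip(labels * 0.8 + rng.normal(0.10, 0.15, size=2_000), 0, 1)

for name, scores in (("older model", old_scores), ("newer model", new_scores)):
    detection, fp = rates(scores, labels)
    print(f"{name}: detection={detection:.1%}, false positives={fp:.1%}")
```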
The sensitivity of detection engines is also refined by taking advantage of their complementarity, to minimize false positives.
In practice, HL-AI is a detection engine that complements the other Sigma / Yara / IOC / Ransomware engines.
HL-AI can also be specialized where the other engines fall short, while its sensitivity is reduced for alerts they already cover.
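As a purely hypothetical illustration of that complementarity, the snippet below blends an AI maliciousness score with hits from rule-based engines to rank an alert’s criticality. The engine names come from the paragraph above; the weighting logic is an assumption, not HarfangLab’s actual scoring.

```python
# Hypothetical blending of an AI score with rule-based engine hits (not HarfangLab's logic).
from dataclasses import dataclass, field

@dataclass
class Alert:
    hl_ai_score: float                                   # 0..1 maliciousness score from the AI engine
    sigma_hits: list[str] = field(default_factory=list)  # behavioural rule matches
    yara_hits: list[str] = field(default_factory=list)   # file signature matches
    ioc_hits: list[str] = field(default_factory=list)    # indicator-of-compromise matches

def criticality(alert: Alert) -> str:
    rule_hits = len(alert.sigma_hits) + len(alert.yara_hits) + len(alert.ioc_hits)
    if rule_hits and alert.hl_ai_score > 0.5:
        return "critical"        # rules and AI agree
    if alert.hl_ai_score > 0.9:
        return "high"            # AI alone flags a threat unknown to the rules
    if rule_hits:
        return "medium"          # rules fire, AI stays quiet
    return "low"

print(criticality(Alert(hl_ai_score=0.95)))                       # high
print(criticality(Alert(hl_ai_score=0.7, yara_hits=["Emotet"])))  # critical
```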
Want to find out more about what our EDR does to detect and remediate threats?