Generic LLMs in Cybersecurity

#LLMs #cybersecurity #AI security #model poisoning #supply chain attacks #malware detection #log analysis #intrusion detection #zero-day threats #bio-inspired computing #secure AI #adversarial AI #responsible AI #AI in cybersecurity #software vulnerabilities #AI risk #GLLMs

Generic LLMs offer powerful capabilities for cybersecurity, but they also introduce serious risks, such as adversarial manipulation and vulnerable code generation. This talk highlights both the potential and the pitfalls in an era of universal AI solutions.

Generic Large Language Models (GLLMs) are continuously being released with greater size and capability, promoted as universal problem solvers. While the reliability of GLLM responses is questionable in many situations, these models are being augmented and retrofitted with external resources for applications that include cybersecurity. The talk will discuss two major security concerns with these pre-trained models. First, GLLMs are prone to adversarial manipulation such as model poisoning, reverse engineering, and side-channel cyberattacks. Second, LLM-generated code that pulls in open-source libraries and codelets can expose software development to supply chain attacks, which may result in information disclosure, access to restricted resources, privilege escalation, and complete system takeover.
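As a hypothetical illustration (not drawn from the talk itself), LLM-generated helper code often reaches for unpinned third-party packages and unsafe shell invocations; both patterns open the injection and supply-chain paths described above. The function names and commands below are assumptions made for the sketch.

```python
# Hypothetical sketch of two risky patterns often seen in LLM-generated code.
import subprocess

# Risky: building a shell command from user input invites command injection
# (a filename like "a.txt; rm -rf /" would run arbitrary commands).
def archive_logs_unsafe(filename: str) -> None:
    subprocess.run(f"tar -czf backup.tgz {filename}", shell=True, check=True)

# Safer: pass arguments as a list so the filename is never interpreted by a shell.
def archive_logs_safer(filename: str) -> None:
    subprocess.run(["tar", "-czf", "backup.tgz", "--", filename], check=True)

# Supply-chain angle: a requirements file emitted by an LLM with no version pins
# (e.g. "requests" instead of "requests==2.32.3") silently pulls whatever is
# published next, including a compromised or typosquatted release.
```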

This talk will also cover the benefits and risks of using GLLMs in cybersecurity, particularly in malware detection, log analysis, and intrusion detection. I will highlight the need for diverse AI approaches, smaller non-LLM models trained on application-specific curated data and fine-tuned for well-tested security functionality, in identifying and mitigating emerging cyber threats, including zero-day attacks.
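As one hedged sketch of the "smaller, application-specific model" idea (my illustration, not the speaker's implementation), a compact non-LLM anomaly detector can be trained on curated log features with no language model in the loop. The feature choices and threshold behavior here are assumptions for the example.

```python
# Minimal sketch: a small, non-LLM anomaly detector over curated log features.
# Feature columns (requests per minute, distinct ports, failed logins) are
# illustrative assumptions, not a recommendation from the talk.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in training data: rows = sessions, columns = [req_per_min, distinct_ports, failed_logins]
normal_sessions = rng.normal(loc=[20, 3, 1], scale=[5, 1, 1], size=(500, 3))

model = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
model.fit(normal_sessions)

# Score a new session: -1 flags an anomaly; a point this far from the training
# distribution (burst of failed logins across many ports) is typically flagged.
suspicious = np.array([[300, 40, 25]])
print(model.predict(suspicious))
```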

Hosted by the Santa Clara Valley Chapter of the Computational Intelligence Society. More details: https://r6.ieee.org/scv-cis/

