LLM Security: Novel Challenges Require Novel SolutionsElad Schulman of Lasso Security on OWASP Checklist, Zero Trust and LLM Security
Large Language Models, or LLMs, have burst into the spotlight since their launch, with research and analyst companies suggesting that nearly 80% companies are experimenting with LLMs. Despite their benefits, several concerns regarding the security of LLMs have surfaced. The new cybersecurity challenges - albeit inherent to such tech revolutions - may require dedicated security strategies for mitigation.
Prompt injection attacks, insecure output handling, software supply chain vulnerabilities, sensitive information disclosure and over-reliance on LLMs are some of the new LLM-related threats included in the OWASP Foundation's LLM AI Security & Governance Checklist.
Conventional security tools cannot handle LLM security threats effectively. The DLP solutions or browser security tools only take generic problems into consideration and do not have enough depth and breadth to discover and observe, said Elad Schulman, CEO and co-founder, Lasso Security. Enterprises need to look into the entire spectrum of the security challenge including employees, developers, third-party plug-ins, or even internal applications that are connected to LLMs. They consequently need a security solution that offers deep discovery, observability, and detection and response.
In this video interview with ISMG, Schulman also discussed:
- Security concerns of open-source and proprietary LLMs;
- The importance of input and output validation;
- A deep and wide approach to securing LLMs.
Schulman is a tech entrepreneur with more than 20 years of leadership experience in corporates and start-ups. He is an avid investor and adviser in tech ventures and has extensive industry experience in cybersecurity, cloud data security and LLM security, among others.