Cybersecurity

Unlocking the Potential of Gen AI in Cyber Risk Management

AWS' Clarke Rodgers on Harnessing AI-Powered Automation for Compliance
Clarke Rodgers, director of enterprise strategy, AWS

Generative AI is reshaping how enterprises approach cybersecurity. Combining the two demands a deliberate approach, from embedding security measures in language models to navigating an evolving regulatory landscape. In an interview with ISMG, Clarke Rodgers, director of enterprise strategy at AWS, discussed generative AI and its role in cyber risk management.


Edited excerpts follow:

How does generative AI better prepare the CIOs and CISOs in their overall cyber risk management strategy?

Generative AI presents a dual perspective. While some hail it as a phenomenal technology for business advancement, others approach it cautiously, evaluating its novelty through security and risk lenses. Discussions with CISOs, CTOs and CIOs frequently revolve around concerns about using commercial off-the-shelf AI services, especially free ones. These services might expose sensitive company data or produce unreliable outputs, known as hallucinations, so verifying accuracy becomes pivotal. At AWS, we provide Amazon Bedrock, which lets customers use leading large language models (LLMs) within their virtual private clouds (VPCs), keeping data under their existing security and compliance controls.
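The pattern Rodgers describes, keeping model traffic inside a private network boundary, can be sketched with the AWS SDK. This is a minimal illustration, not a template from the interview: the model ID, message schema and prompt below are assumptions, and the actual network isolation comes from provisioning a VPC interface endpoint for the bedrock-runtime service, which is configured outside this code.

```python
import json

# Build a chat-style request body for Bedrock's InvokeModel API.
# The "anthropic_version" schema and model ID below are illustrative
# assumptions; check the model provider's documented request format.
def build_request(prompt: str) -> str:
    """Serialize a request body for a hypothetical Bedrock chat model."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    })

# With a VPC interface endpoint for bedrock-runtime in place, the call
# below stays on the private network. Requires boto3 and AWS credentials:
#
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
#     body=build_request("Summarize our incident-response runbook."),
# )
# print(json.loads(response["body"].read()))
```

Because the request never traverses the public internet, prompts containing sensitive company data remain subject to the organization's existing VPC security controls, which is the assurance Rodgers points to.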

Should the focus be on embedding security measures within language models during their development (security by design), or is relying on VPCs a more sensible approach, perhaps combining both for enhanced security?

Two pivotal facets must be considered. First, prioritizing security during LLM creation is fundamental. As with building any application, embedding security-by-design principles into these models means scrutinizing development practices, adhering to secure coding standards, and seeking certifications that validate robust security measures, the same questions organizations already ask of traditional applications.

While trusting LLM providers' security measures is crucial, incorporating VPCs adds a layer of assurance. Operating LLMs within VPCs allows for enhanced oversight and control, ensuring user data doesn't inadvertently refine the original model. Despite relying on provider security protocols, VPC utilization instills confidence by guaranteeing data separation and control. These strategies complement each other, forming a dual-layered approach fortifying language model security.

How does the regulatory environment impact security and risk management in generative AI, and does clearer regulatory guidance help?

Regulatory guidelines significantly influence security practices in generative AI. Despite recent standards and guidance from entities such as the U.S. Cybersecurity and Infrastructure Security Agency and the U.K.'s National Cyber Security Centre, enforceable directives have yet to emerge. Until firm regulatory guidance materializes, companies producing or using AI must apply the same security best practices they would to traditional applications. CISOs and CIOs should establish guardrails against data leakage, prevent unauthorized data access, and run strong security awareness programs. Addressing unstructured data challenges requires mature identity and access management systems and zero trust mechanisms that secure data regardless of its location.

Can security automation, particularly aided by AI, help ease the burden on CSOs and CIOs, especially concerning reporting breaches to regulators?

Security automation powered by AI plays a pivotal role in streamlining security functions, easing the workload of CSOs and CIOs and supporting regulatory compliance. By simplifying routine security tasks, it frees security teams to focus on more intricate risk analysis and strategic decision-making. One notable contribution of AI lies in meticulous code inspection and vulnerability assessment.

For instance, tools such as Amazon Inspector for Lambda code and Amazon Detective provide indispensable support. Amazon Inspector examines code comprehensively, identifying potential vulnerabilities or security loopholes within Lambda functions, which are integral parts of many cloud applications, so they can be fixed before deployment. Amazon Detective helps security analysts by correlating and organizing vast amounts of data to identify patterns or anomalies that might signify a security issue; by leveraging machine learning and AI-driven insights, it streamlines the process of identifying and addressing threats. This proactive approach not only bolsters an organization's security posture but also aids regulatory compliance by addressing potential breaches or vulnerabilities before they escalate.
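The kind of automated triage described above can be sketched against the Inspector API (the inspector2 service in the AWS SDK). This is a hedged sketch: the filter below narrows findings to Lambda function resources at a chosen severity, and the key names follow the service's FilterCriteria shape as I understand it, but treat it as illustrative rather than a production query.

```python
# Build a FilterCriteria dict for Amazon Inspector's ListFindings call,
# narrowing results to Lambda-function findings of a given severity.
# The key names mirror the inspector2 FilterCriteria structure; verify
# against the current API reference before relying on them.
def lambda_finding_filter(severity: str = "HIGH") -> dict:
    return {
        "resourceType": [
            {"comparison": "EQUALS", "value": "AWS_LAMBDA_FUNCTION"}
        ],
        "severity": [
            {"comparison": "EQUALS", "value": severity}
        ],
    }

# Retrieving the findings themselves requires boto3, AWS credentials and
# Inspector enabled in the account:
#
# import boto3
# client = boto3.client("inspector2")
# page = client.list_findings(filterCriteria=lambda_finding_filter())
# for finding in page["findings"]:
#     print(finding["severity"], finding["title"])
```

Feeding a filtered list like this into ticketing or reporting pipelines is one way the automation Rodgers describes reduces the manual effort of compliance reporting.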

How should CSOs and CIOs approach adopting generative AI in their organizations, especially when met with resistance due to security concerns?

A purpose-driven approach aligned with business needs is essential. Organizations should evaluate generative AI's necessity based on the unique business outcomes it can achieve. Experimentation and cautious adoption tied to business objectives are crucial, including rigorous security evaluations before integration into production environments. Following standard security practices, such as evaluating new technology in a sandbox environment, is prudent. Highly regulated industries often use such practices to understand how a new service works and whether it is compatible with existing security measures before granting further access. This control and evaluation process extends beyond generative AI to any new enterprise technology.

Rodgers joined AWS in 2016, but his experience with the advantages of AWS security started well before he became part of the team. In his role as CISO for a multinational life reinsurance provider, he oversaw a strategic division’s all-in migration to AWS in 2015. He is passionate about helping executives explore how the cloud can transform security and working with them to find the right enterprise solutions.


About the Author

Rahul Neel Mani


Founding Director of Grey Head Media and Vice President of Community Engagement and Editorial, ISMG

Neel Mani is responsible for building and nurturing communities in both technology and security domains for various ISMG brands. He has more than 25 years of experience in B2B technology and telecom journalism and has worked in various leadership editorial roles in the past, including incubating and successfully running Grey Head Media for 11 years. Prior to starting Grey Head Media, he worked with 9.9 Media, IDG India and Indian Express.



