Every day, organizations are finding new uses for AI models. As with any transformative technology, AI introduces both risks and opportunities for businesses, and organizations should be prepared to protect their AI technology at the same level they protect traditional “crown jewel” systems and other sensitive data. Senior leadership will look to CISOs for guidance, both on how to protect AI and on how to use it to enhance the security of their organizations. What follows is guidance on how a CISO might proceed as AI technology evolves and is deployed.
Understanding Organizational AI Initiatives
- Establish AI Credibility: The CISO should not assume the organization will immediately see them as an advisor on AI. The CISO should invest considerable time in learning, both by reading about AI technologies and by using them hands-on; CISOs who do not normally engage in hands-on learning should make an exception here. CISOs should also start grassroots conversations with senior leadership to establish their place in strategic AI discussions.
- Gather Information and Form Policies: As with any other business initiative, the CISO should understand what role AI currently plays internally and, where possible, what role it will play in the future. This understanding will help shape policies covering data privacy, ethical use of AI, and compliance with relevant regulations. The CISO will not be solely responsible for these policies but must be a stakeholder when they are developed.
- Collaborate and Integrate Security into the AI Pipeline: Mirroring the involvement they might have with their development teams, the CISO should work closely with the teams building and using AI in the organization. This collaboration helps integrate security considerations into the development and deployment of these technologies.
Consuming Publicly Available AI
- Train and Spread Awareness: The CISO should strive to make all employees aware of the potential security risks associated with the use of AI and large language models. This includes training employees on how to use these technologies responsibly and securely. Part of this effort should involve adding “deepfake” content to existing phishing awareness training.
- Identify Uses: The CISO should gather information on how AI is being used internally, first through inquiry, and then by verifying and hunting with existing tools. CASB and EDR telemetry can reveal the volume of traffic to public AI endpoints and the sources generating it; a minimal hunting sketch follows this list.
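As a starting point for that hunt, the sketch below aggregates outbound traffic to public AI endpoints from exported proxy or CASB logs. The CSV column names and the domain list are assumptions for illustration, not an exhaustive inventory; adapt both to whatever your CASB or proxy actually exports.

```python
# Minimal sketch: hunt for public AI service traffic in exported proxy/CASB logs.
# Assumptions: logs are a CSV with "timestamp", "user", "dest_host", and
# "bytes_out" columns; the domain list below is illustrative, not exhaustive.
import csv
from collections import defaultdict

AI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "bard.google.com", "claude.ai",
}

def summarize_ai_traffic(log_path: str) -> dict:
    """Aggregate request counts and outbound bytes to known AI endpoints, per user."""
    usage = defaultdict(lambda: {"requests": 0, "bytes_out": 0})
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                usage[row["user"]]["requests"] += 1
                usage[row["user"]]["bytes_out"] += int(row["bytes_out"])
    return dict(usage)

if __name__ == "__main__":
    # Print heaviest users of public AI services first.
    ranked = sorted(summarize_ai_traffic("proxy_logs.csv").items(),
                    key=lambda kv: kv[1]["bytes_out"], reverse=True)
    for user, stats in ranked:
        print(f"{user}: {stats['requests']} requests, {stats['bytes_out']} bytes out")
```

A summary like this gives the CISO evidence to compare against what teams self-report, and high outbound byte counts are a useful cue for where sensitive data may be leaving the organization.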
Protecting Internally Developed AI
- Protect Models: Internally developed AI models can quickly become “crown jewels” of an organization, and the same policies and protections applied to existing high-value data should also be applied to AI models.
Enhancing Incident Response
- Purple Team: Conduct purple team exercises that include AI-generated payloads and scripts to understand the effectiveness of current tooling.
- New TTX Injects: Include AI in the next tabletop exercise scenario and begin building an incident response plan for AI-related security incidents (data loss, model poisoning, etc.).
Enhancing Detection and Response
- Increase SOC Efficiency: Enhance your SOC’s agility by integrating AI where appropriate. For example, quickly draft SIEM alerts and custom EDR rules for emerging threats, and combine them with SOAR technologies that present analysts with full contextual information so they can triage and mitigate threats faster. A sketch of LLM-assisted rule drafting follows this list.
- Reduce Attack Surface: Prevent vulnerabilities from reaching production by using AI-assisted SAST tooling to identify security flaws in source code; see the second sketch after this list.
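To make the rule-drafting idea concrete, here is a minimal sketch that asks an LLM for a first-draft Sigma rule. It assumes the OpenAI Python client; the model name and prompts are illustrative, and any generated rule must be reviewed and tested by a detection engineer before it is deployed.

```python
# Minimal sketch: use an LLM to produce a first-draft Sigma detection rule,
# so analysts start from a reviewed draft rather than a blank page.
# Assumes the OpenAI Python client (OPENAI_API_KEY set in the environment);
# the model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def draft_sigma_rule(threat_description: str) -> str:
    """Ask the model for a draft Sigma rule covering the described behavior."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever model your org approves
        messages=[
            {"role": "system",
             "content": "You are a detection engineer. Respond only with a "
                        "valid Sigma rule in YAML."},
            {"role": "user",
             "content": f"Draft a Sigma rule to detect: {threat_description}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_sigma_rule(
        "certutil downloading an executable to a temp directory followed by "
        "immediate execution on Windows endpoints"))
```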
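And for the attack-surface bullet, a sketch along the same lines: an LLM review pass over a source file, as a lightweight complement to dedicated SAST tooling rather than a replacement. The file path, model, and prompt are assumptions for illustration, and findings still need human triage before anything is filed or fixed.

```python
# Minimal sketch: LLM-assisted source code review as a complement to SAST.
# Assumes the same OpenAI client as the previous sketch; the reviewed file
# path, model name, and prompt wording are illustrative.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = (
    "Review the following source file for security flaws (injection, "
    "hard-coded secrets, unsafe deserialization, missing input validation). "
    "List each finding with a line reference and a one-line remediation."
)

def review_file(path: str) -> str:
    """Send one source file to the model and return its findings as text."""
    code = Path(path).read_text()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are an application security reviewer."},
            {"role": "user", "content": f"{REVIEW_PROMPT}\n\n{code}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_file("app/handlers/upload.py"))  # hypothetical target file
```

A pass like this fits naturally into a pull-request pipeline, where it can flag candidate issues early; treat its output as leads for reviewers, not as authoritative findings.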
Potential Risks with AI
There are also some clear risks that the CISO should convey to the organization as the use of public AI becomes more ingrained in its culture:
IP and Content Risks
- Intellectual property (IP) or contractual issues may arise because the data used to train or develop AI models may not have been approved for that use.
- AI may generate misleading or harmful content as a result of low-quality data used to train generative AI models.
Data Loss and Tampering Risks
- Public AI models create a risk of losing client or internal data when it is entered into them. We don’t yet know how public models treat input data, or whether queries are stored with security in mind.
- Models are vulnerable to “poisoning” if the data they use as input is tampered with.
Usage Risks
- Many AI models lack transparency in how they make decisions.
- They can create misinformation and other harmful content. Humans may then pass off AI-generated “hallucinations” or incorrect responses as fact.
AI will continue to be part of how organizations conduct business. As with other recent technology adoptions such as cloud, protecting the business processes and the systems that support AI will need to be part of the cybersecurity plan.

Chris Salerno
Chris is SRA’s Managing Director, focused on talent and client success and on delivering engaging tabletop exercises and security strategy.
Prior to shifting his focus to developing the next generation of cybersecurity professionals, he led SRA’s CyberSOC practice and has conducted hundreds of penetration tests, red and purple teams, and security assessments.
Chris has been a distinguished speaker at RSA, FS-ISAC, H-ISAC, BlackHat, B-Sides and SecureWorld.