Get Off the Neverending AI Treadmill and Secure Your Organization

by Mike Pinch | Feb 12, 2025

With the recent release of the DeepSeek large language model, there was immediate controversy about many things, including privacy, security, and whether it was actually built at a reduced cost versus other models (without relying on reverse engineering or IP theft). Somewhat to DeepSeek's benefit, that controversy overshadowed the next bit of news: DeepSeek very quickly suffered a massive security breach, with the loss of presumably all of its user interaction data. This raised a topic I frequently find myself discussing with clients: how to develop a secure approach for your company's AI journey.

A key concept I emphasize with clients is to resist the insatiable urge to constantly chase the latest and greatest LLM capabilities. Kurzweil coined the term "paradigm shift rate" to describe the accelerating pace of technological change, and we have seen it more distinctly in LLMs than in any other technology. Recall that the world knew little of LLMs prior to the launch of ChatGPT; within a mere month or two, Google, Facebook, Microsoft, and others were all debuting competitive technologies. It is astounding how quickly the industry is approaching commoditization of this technology. If you're saying to yourself, "Yeah, but AI has gotten so much more useful since the first ChatGPT dropped…," you are correct, but it's important to understand that these advancements largely are not coming from better LLM technology. They are coming from advancements in tools and techniques that seamlessly bring your data, user context, and workflow integrations to the LLM's doorstep for processing. The adoption of vector databases, agentic and RAG design patterns, fine-tuning, API integrations, and more has given LLMs far more utility than they've ever had. To my original point: avoid chasing the latest LLM. You will get more utility out of building better data integrations with your existing LLM than out of trying to leverage the latest and greatest.
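To make the RAG point concrete, here is a minimal sketch of the pattern: retrieve the most relevant enterprise documents for a query, then ground the model's prompt in them. The bag-of-words "embedding" below is a deliberately toy stand-in for a real embedding model, and the documents and function names are illustrative assumptions, not any vendor's API.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' -- a stand-in for a real embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the LLM in retrieved enterprise context rather than relying on
    whatever the base model happens to know."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our VPN policy requires MFA for all remote access.",
    "The cafeteria serves lunch from 11am to 2pm.",
    "Remote access requests are approved by the security team.",
]
print(build_prompt("What is required for remote access?", docs))
```

Swapping the toy `embed` for a real embedding model and the sorted scan for a vector database is what turns this sketch into the production pattern; the LLM itself stays the same.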

So, what does this have to do with security? As you may have experienced, this incessant chase for the latest and greatest AI technology amounts to death by a thousand papercuts in an enterprise. Platforms like Hugging Face, while enormous repositories of innovation, are loaded with unproven and potentially insecure tools that may introduce more than traditional cybersecurity risks, including processing accuracy and integrity issues.

This is a rare alignment of priorities and incentives between security and technology development, and an opportunity for the CISO, CTO, and CIO to align and march the organization toward safer use of AI technologies. Here are the key points to consider in adopting and securing AI technology:

  • Do not rely on the developer of the latest and greatest AI technology (such as an LLM) to be your compute provider.  Organizations that are building LLMs are generally not well equipped to build and manage a secure computing environment for your enterprise data while they are simultaneously participating in the largest technology arms race in human history.
  • Do identify a secure AI runtime platform where you have full control of identity, logging, security controls, deployments, and technology selection. You probably already have one; the three leading platforms available are Azure, AWS, and Google Cloud. Many folks are unaware that all of these platforms provide full-stack development tools, ranging from easy-to-use SaaS AI services to platform and raw infrastructure services, and usually allow you to bring your own LLM (such as DeepSeek!). This approach will not only let you contain and monitor AI usage, but also let your technologists focus on learning a common set of tools and building through similar design patterns.
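The "full control of identity, logging, and security controls" idea above boils down to routing every AI call through a choke point you own. Here is a minimal sketch of such a gateway, assuming a hypothetical allow-list of approved models and a stubbed backend standing in for a privately hosted endpoint; none of the names here come from a real API.

```python
import json
import logging
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

@dataclass
class AIRequest:
    user: str    # identity resolved by your SSO provider
    model: str   # must be on the approved list
    prompt: str

# Illustrative allow-list -- your platform team decides what goes here.
APPROVED_MODELS = {"gpt-4o", "deepseek-r1"}

def gateway(req: AIRequest, backend: Callable[[str, str], str]) -> str:
    """Central choke point: enforce the model allow-list and write an audit
    record for every call, regardless of which hosted LLM serves it."""
    if req.model not in APPROVED_MODELS:
        raise PermissionError(f"model {req.model!r} is not approved")
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": req.user,
        "model": req.model,
        "prompt_chars": len(req.prompt),  # log metadata, not prompt content
    }))
    return backend(req.model, req.prompt)

# Stub standing in for a privately hosted model endpoint in your tenant.
fake_backend = lambda model, prompt: f"[{model}] response"
print(gateway(AIRequest("alice@example.com", "deepseek-r1", "hello"), fake_backend))
```

In practice the cloud platforms provide the identity, logging, and private-endpoint pieces as managed services; the point of the sketch is that the control surface lives with you, not with the model vendor.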

A great example is the approach Microsoft has taken with Azure. Within two weeks of the headline-grabbing DeepSeek announcements, Microsoft delivered Azure users a fully private instance of DeepSeek, available in your own tenant. This came with the protections of the enterprise license agreement, including no sharing of data with developers at DeepSeek or Microsoft for training purposes. Microsoft's strategy here appears quite smart, taking the gold-rush approach of "sell shovels to the miners; don't be a miner yourself." While Microsoft does some proprietary AI development with projects like Phi-3/4+, its strategic priority for the last two years has been building hardware and cloud software that allow rapid onboarding of best-of-breed LLM/AI technologies from partners, along with supplemental tools such as vector databases, fine-tuning capabilities, and integration of your enterprise data. To my earlier point, these are the innovations that make AI feel like it has made huge progress in the past two years, less so the (still noteworthy) improvements in LLM reasoning and knowledge.

Getting in front of AI risks by building out a modern cloud environment's AI stack is a rare win-win for IT departments. It not only lets you establish consistent controls and gives people a safe, approved place to experiment, but also frees your organization's innovators to collaborate and build the next big thing.


Mike Pinch
Chief Technology Officer

Mike is Security Risk Advisors' Chief Technology Officer, heading innovation, software development, AI research & development, and architecture for SRA's platforms. Mike is a thought leader in security data lake-centric capability design. He develops in Azure and AWS and works with emerging use cases and tools surrounding LLMs. Mike is certified across cloud platforms and is a Microsoft MVP in AI Security.

Prior to joining Security Risk Advisors in 2018, Mike served as the CISO at the University of Rochester Medical Center. He is nationally recognized as a leader in cybersecurity, has spoken at conferences including HITRUST, H-ISAC, and RSS, and has contributed to national standards for healthcare cybersecurity frameworks.