For the last two decades, "shadow IT" has been a familiar item on every CIO's risk register. An employee spins up a Trello board the security team doesn't know about. A manager forwards a contract to their personal Gmail to work on it from the couch. Marketing pays for a SaaS tool on a credit card. The risk was always the same shape: data ending up somewhere it shouldn't.
Shadow AI is the same problem at a scale and speed that older controls were never designed for.
When an employee pastes a draft into ChatGPT, asks Claude to summarise a board pack, or leans on Copilot to refactor a piece of internal code, they aren't just leaking the artefact. They're leaking the method. The prompt itself carries the question being asked, the framework being used, and the decision about to be made — the company's operating judgement, exported in real time.
Methodology has always leaked. It's leaked through documents, through consulting decks, through every employee who's ever changed jobs. What's new isn't the leak — it's that AI compresses informal operating knowledge into searchable, structured traces at a rate prior vectors never could.
That isn't just content exposure. It's the involuntary export of your intellectual tradecraft.
The shift from "what" to "how"
Shadow IT mostly treated the artefact as the primary risk. A customer list is a customer list whether it lives in Salesforce or in someone's personal Notion. The mitigation was about location: keep the artefact inside the perimeter.
Shadow AI breaks that mental model because the artefact is no longer the most valuable thing flowing out the door. Consider what's actually contained in a typical employee prompt:
- "Here's our pricing model — help me write a proposal for a customer who pushed back on margin."
- "Summarise this board paper and flag the three risks the chair is most likely to push back on."
- "Rewrite our incident response runbook so it reads more confidently for the auditor."
Each of those prompts pairs the artefact with the method behind it: a pricing model plus the playbook for how the company handles margin pushback. A board paper plus the political dynamics of the boardroom. A runbook plus the language used to describe it externally.
This is not theoretical. In March 2023, Samsung's semiconductor division allowed engineers to use ChatGPT for less than a month before discovering, in three separate incidents, that staff had pasted internal source code, defect-detection algorithms, and a confidential meeting transcript into the tool. The artefacts were significant. The methodology — how Samsung debugs its chips, what it discusses behind closed doors, how it identifies defects — was the larger loss. Samsung banned the tool company-wide and began building its own.
A customer list, leaked in isolation, is a manageable problem. The same list paired with the reasoning behind every margin decision is a different category of damage entirely. AI prompts are how that pairing happens — at scale, every day, across every desk in the building.
Sovereign Cloud is a compliance answer to a behavioural problem
Sovereign Cloud — and its cousins, "private LLM" and "in-country AI" — is the response currently in fashion, particularly in Australia. The pitch is straightforward: bring the model into a regulated boundary, your data stops crossing borders, and the compliance posture lines up.
It's a real answer to a real question. It does more than its critics give it credit for: residency is solved, retention controls become enforceable, audit logs become possible, and obligations like APRA CPS 234 become defensible rather than aspirational. For regulated industries, it's table stakes.
But it does not, by itself, solve adoption or workflow substitution.
Your employees didn't reach for ChatGPT because it was hosted in Virginia — they reached for it because it was fast, capable, and on the device in front of them. The path of least resistance does not respect data boundaries: stand up a sovereign LLM behind an SSO portal, and a meaningful share of staff will keep doing what's faster.
What you've actually bought, alongside the legitimate compliance gains, is a story you can tell the regulator. The methodology drain doesn't stop. It becomes harder to see, because now there's a sanctioned tool you can point to whenever someone asks "what are you doing about AI?"
Sanctioned deployments can also create a false sense of visibility if unmanaged use continues outside them. The compliance posture improves; the actual exposure quietly diverges from what the dashboard says.
Your enterprise AI is still third-party AI
Even if adoption is solved, the architecture may still be wrong. Suppose you do everything right inside the sanctioned path: you ban ChatGPT, license Microsoft 365 Copilot, sign the enterprise data protection addendum, and tell the board the problem is solved. The audit is satisfied. The board moves on.
Your prompts still leave the building.
Enterprise AI agreements describe what the vendor does with your data. They don't change the underlying fact that the vendor has it. Every Copilot prompt traverses Microsoft's infrastructure. Every Gemini call traverses Google's. Every Claude Enterprise call traverses Anthropic's. The contractual posture says they won't train on it, retain it past a defined window, or share it with their commercial partners. Those are meaningful protections. They are not the same as the prompt never leaving your control.
The list of parties with technical access to that data is longer than most enterprise buyers realise.
There's the vendor itself, under controlled service-operation, security, and support processes. There are the vendor's subprocessors — and that list moves. As of January 2026, Microsoft 365 Copilot lists Anthropic as a subprocessor, meaning Copilot prompts now also traverse a second AI vendor's infrastructure on their way through. The contractual chain you signed with Microsoft now extends, by reference, to Anthropic. There's no reason to assume the subprocessor list won't grow further.
There's also, where applicable, the underlying cloud provider. And, for any provider subject to US jurisdiction, there is the CLOUD Act, which compels production of data in the provider's possession, custody, or control regardless of where it is geographically stored. The US–Australia CLOUD Act agreement that entered force in January 2024 streamlines that access for serious crimes; it does not block it. "Hosted in Australia" does not immunise data from a lawful US demand if the operator is subject to US jurisdiction.
The only architecture that breaks this chain is one where the model executes on infrastructure you control. Local AI — open-weight models running on your own hardware, inside your network, under your identity layer — is the only deployment shape in which "no third party can read this prompt" is a property of the system rather than a clause in a contract. Everything else is trust.
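To make that concrete, here is a minimal sketch of what "the prompt never leaves your control" looks like in practice. It assumes an OpenAI-compatible inference server (for example vLLM or Ollama) serving an open-weight model on a host inside your own network; the endpoint URL, hostname, and model name are illustrative, not a reference to any particular deployment.

```python
# A minimal sketch of local execution, under the assumptions stated above:
# an OpenAI-compatible inference server serving an open-weight model on a
# host you operate, reachable only inside your network. The URL and model
# name are placeholders, not real infrastructure.
import requests

INTERNAL_ENDPOINT = "http://llm.internal.example:8000/v1/chat/completions"

def ask_local_model(prompt: str, model: str = "llama-3.1-70b-instruct") -> str:
    """Send a prompt to an internally hosted model. The request terminates
    on hardware you own; no vendor, subprocessor, or cloud provider sees it."""
    response = requests.post(
        INTERNAL_ENDPOINT,
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Summarise this board paper and flag the three biggest risks."))
```

The point is not the ten lines of Python. It's that the request resolves to an address you control, so "no third party can read this prompt" is enforced by the network rather than promised by a clause.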
What actually closes the gap
Architecture alone doesn't close the gap. Even the right model running on the right infrastructure won't help if employees never reach for it. Shadow AI is, ultimately, a workflow problem. People reach for unsanctioned AI because the sanctioned path is slower, dumber, or doesn't exist. You don't fix that by building a walled garden — you fix it by making the sanctioned path the obviously correct one, and then making sure that path doesn't quietly hand your prompts to someone else.
That means:
- Sanctioned agents, not sanctioned chatbots. A chatbot says "type your prompt here." An agent already knows your context, your data, your policies, and the task you're trying to do. It removes the reason to paste anything into anything else.
- Local execution where the methodology is sensitive. If a prompt would expose pricing logic, board dynamics, regulatory positioning, or anything else you wouldn't want a vendor's incident response team to read, the model needs to run on hardware you own.
- Auditability at the action layer, not just the prompt layer. What matters isn't only what the model was asked — it's what the model did, what it touched, and on whose behalf (see the sketch after this list).
- Visibility into the methodology, not just the data. If your monitoring can't tell you that someone asked an AI to rewrite an incident report before sending it to the auditor, you don't have AI governance. You have a content filter.
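On the auditability point above, here is a rough sketch of the difference between logging the prompt and logging the action. The record structure and field names are illustrative assumptions for this article, not a schema from any particular product.

```python
# An illustrative action-layer audit record: it captures not just what the
# model was asked, but what it did, what it touched, and on whose behalf.
# Field names and values are hypothetical examples, not a real schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentActionRecord:
    actor: str                 # the identity the agent acted on behalf of
    agent: str                 # which sanctioned agent ran
    prompt_summary: str        # what the model was asked (prompt layer)
    actions: list[str] = field(default_factory=list)             # what the model did
    resources_touched: list[str] = field(default_factory=list)   # what it read or wrote
    model_host: str = "internal"   # where the model executed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# A prompt-only log would record "rewrite incident report for auditor" and stop.
# An action-layer record distinguishes that from what actually happened:
record = AgentActionRecord(
    actor="jane.doe@corp.example",
    agent="document-assistant",
    prompt_summary="Rewrite incident report so it reads more confidently for the auditor",
    actions=["read_document", "rewrite_document", "draft_email"],
    resources_touched=["sharepoint://incidents/IR-2024-017", "mail://drafts/auditor"],
)
print(record)
```

A log shaped like this is what turns "someone used AI" into "this person asked this agent to rewrite this incident report and send it to the auditor", which is the difference between AI governance and a content filter.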
There's also a positive case here that the sovereign-cloud framing misses. The same agents that close the leak do something the leak actively prevents: they let you institutionalise the operating knowledge that's currently walking out the door. Methodology that lives inside sanctioned agents — observable, versioned, attached to identity, executed against models you control — is methodology the company keeps and compounds, instead of donating to whichever model provider happened to be on the screen.
This is the architecture IntelliInfra is built around: sanctioned agents, local execution where the methodology demands it, and auditability at the action layer. Intelli-Assist Enterprise is the same architecture in the shape employees actually reach for — an AI assistant for email, calendar, documents, and work systems, running on customer-controlled infrastructure. The methodology stays inside the perimeter because the model never leaves it.
The question for any company reading this isn't whether AI is now part of your workflow. It is. The question is whether the part of your business that's hardest to rebuild — the how, not the what — is leaving the building every time someone opens a new tab.
Sources: Samsung's 2023 ChatGPT incident was first reported by Bloomberg and corroborated by Dark Reading and TechRadar. Microsoft's January 7, 2026 onboarding of Anthropic as a subprocessor for Microsoft 365 Copilot is documented at Microsoft Learn — Anthropic as a subprocessor for Microsoft Online Services, with broader EDP terms at Microsoft Learn — Enterprise data protection in Microsoft 365 Copilot. The CLOUD Act (18 U.S.C. §2713) and the US–Australia CLOUD Act agreement (effective January 31, 2024) are publicly available; for a recent legal summary see the Cross-Border Data Forum's 2025 CLOUD Act FAQ.