Lock Your AI - CLIMB Podcast
This week, our CEO Jonathan Mortensen joined Jay Kapoor on the CLIMB Podcast to unpack a question that’s suddenly top-of-mind for every security leader:
What actually happens to your data when you talk to an AI?
Jonathan broke down how today’s AI browsers and assistants are quietly recreating a class of vulnerabilities that once plagued the web: SQL injection for the age of LLMs.
“Browsers are always loading someone else’s code to run on your computer,” he explained. “Now, the AI sitting beside that browser can hallucinate a command — even send an email you never meant to send.”
The conversation traced how employees, often unknowingly, are pasting sensitive company data into consumer chatbots. With billions of LLM requests flowing through third-party intermediaries each week, even a small percentage containing PII or trade secrets adds up to a massive exposure surface.
Jonathan outlined the sectors most at risk, such as finance, healthcare, and legal, and warned that “a copy-paste into a chatbot can be a compliance breach waiting to happen.”
From legal contracts to technical guarantees
The discussion culminated in what we’re building at Confident Security: provably private AI inference.
“We make a technical guarantee — not just a legal one — that nobody can see your data except the model itself.”
Our open-source standard, OpenPCC, extends the privacy architecture Apple pioneered with Private Cloud Compute and makes it freely available for anyone to use. OpenPCC lets users run models in encrypted enclaves so their data can never be seen or stored, with secure receipts to prove it.
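To make that flow concrete, here is a minimal sketch of the attest-then-encrypt-then-verify pattern that enclave-based inference relies on. This is not the OpenPCC API; every name, key format, and receipt field below is hypothetical, and real deployments use hardware attestation (TEE quotes) and a hardened transport rather than the toy signatures shown here.

```python
# Conceptual sketch only (not the OpenPCC API): attest, encrypt, verify receipt.
import json
import os

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

TRUSTED_MEASUREMENT = "sha256:model-server-build"  # known-good build hash (hypothetical)
RAW = (serialization.Encoding.Raw, serialization.PublicFormat.Raw)


def derive_key(shared_secret: bytes) -> bytes:
    """Derive a 32-byte session key from an X25519 shared secret."""
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"session").derive(shared_secret)


class Enclave:
    """Stand-in for an attested enclave running the model."""

    def __init__(self) -> None:
        self._sign_key = Ed25519PrivateKey.generate()  # identity bound to the attestation
        self._kex_key = X25519PrivateKey.generate()    # key-exchange key; never leaves the enclave
        self.verify_key = self._sign_key.public_key()

    def attestation(self) -> tuple[bytes, bytes]:
        # Binds the code measurement to the enclave's key-exchange public key.
        doc = json.dumps({
            "measurement": TRUSTED_MEASUREMENT,
            "kex_pub": self._kex_key.public_key().public_bytes(*RAW).hex(),
        }).encode()
        return doc, self._sign_key.sign(doc)

    def infer(self, client_pub: bytes, nonce: bytes, sealed_prompt: bytes):
        key = derive_key(self._kex_key.exchange(X25519PublicKey.from_public_bytes(client_pub)))
        prompt = ChaCha20Poly1305(key).decrypt(nonce, sealed_prompt, None)
        answer = b"(model output for: " + prompt + b")"
        out_nonce = os.urandom(12)
        sealed_answer = ChaCha20Poly1305(key).encrypt(out_nonce, answer, None)
        # Signed receipt: the enclave asserts the request was served without being persisted.
        receipt = json.dumps({"measurement": TRUSTED_MEASUREMENT, "stored": False}).encode()
        return out_nonce, sealed_answer, receipt, self._sign_key.sign(receipt)


def private_inference(enclave: Enclave, prompt: bytes) -> bytes:
    # 1. Verify the attestation before any data leaves the client.
    doc, sig = enclave.attestation()
    enclave.verify_key.verify(sig, doc)  # raises if the signature is invalid
    claims = json.loads(doc)
    assert claims["measurement"] == TRUSTED_MEASUREMENT, "untrusted build"

    # 2. Encrypt the prompt to the attested key, so only the enclave can read it.
    eph = X25519PrivateKey.generate()
    key = derive_key(eph.exchange(X25519PublicKey.from_public_bytes(bytes.fromhex(claims["kex_pub"]))))
    nonce = os.urandom(12)
    sealed_prompt = ChaCha20Poly1305(key).encrypt(nonce, prompt, None)

    # 3. Run inference, then verify the signed receipt before trusting the result.
    out_nonce, sealed_answer, receipt, rsig = enclave.infer(
        eph.public_key().public_bytes(*RAW), nonce, sealed_prompt
    )
    enclave.verify_key.verify(rsig, receipt)
    assert json.loads(receipt)["stored"] is False

    return ChaCha20Poly1305(key).decrypt(out_nonce, sealed_answer, None)


if __name__ == "__main__":
    print(private_inference(Enclave(), b"draft acquisition terms").decode())
```

The ordering is the point: the client checks what code it is talking to before any plaintext leaves its machine, and checks the signed receipt before trusting the answer.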
Jonathan’s message on CLIMB was clear:
Privacy is not a setting. It’s a system design choice—and it must be provable.