

Privatemode

The confidential GenAI service


Privatemode gives you access to state-of-the-art large language models (LLMs) with complete data privacy, secured by end-to-end confidential computing.

GenAI without the security and privacy worries


Privatemode is a GenAI service like ChatGPT. The big difference is that Privatemode keeps your data private. To achieve this, Privatemode uses a technology called confidential computing. Your data gets encrypted before it leaves your device and remains protected throughout, even during processing.

Within its secure environment, Privatemode runs Meta Llama 3.1 and other state-of-the-art LLMs, for example from Mistral, with Deepseek R1 coming soon.


Use Privatemode via API or with the app


You can use Privatemode through an intuitive API that works as a drop-in replacement for the OpenAI API. The API allows for automated, bulk processing of sensitive data. In addition, there's an app version with a familiar chatbot interface, available for Windows and macOS.
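For illustration, here is a minimal sketch of a call via the OpenAI Python SDK. It assumes the Privatemode proxy runs locally on port 8080; the address, the placeholder model name, and the API-key handling are assumptions, so check the documentation for the exact setup.

```python
# Minimal sketch: using the OpenAI Python SDK against a locally running
# Privatemode proxy. Base URL, model name, and key handling are assumptions;
# the proxy transparently encrypts the request and verifies the backend.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed local proxy address
    api_key="unused",  # assumption: the proxy handles the real API key
)

response = client.chat.completions.create(
    model="latest",  # placeholder: use a model name from the documentation
    messages=[
        {"role": "user", "content": "Summarize this contract clause: ..."},
    ],
)

print(response.choices[0].message.content)
```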

Easy to use and powerful


Verifiably secure


Privatemode keeps your data encrypted at all times and protects it from both the cloud provider and the service provider.


Plug-and-play


Privatemode provides an OpenAI-compatible API. You only need to run a small proxy that handles data encryption and attestation; alternatively, an SDK is available. A minimal request sketch follows this overview.


High performance


Privatemode is fast and offers a selection of state-of-the-art open-source LLMs, for example Meta Llama 3.1.


Available worldwide


Privatemode is hosted in the EU and the US, with more regions coming soon. We use Azure and other high-quality infrastructure providers.
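Because the API is OpenAI-compatible, any HTTP client can talk to it through the local proxy. The following is only an illustrative sketch: the proxy address (localhost:8080) and the model name are placeholders, not confirmed defaults.

```python
# Sketch: calling the OpenAI-compatible chat completions endpoint through the
# local Privatemode proxy with plain HTTP. Address and model name are assumptions.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # assumed proxy address
    json={
        "model": "latest",  # placeholder model name
        "messages": [{"role": "user", "content": "What is confidential computing?"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```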

Empower your organization with Confidential GenAI

Reduce costs



Use secure cloud-based AI instead of building out your own capabilities on-prem.

Unlock potential

Process even your organization's sensitive data with the help of AI.

Increase productivity

Provide your employees with a trustworthy and compliant co-pilot.

Assure customers

Protect your customers' data while providing state-of-the-art AI-based services.

E2E confidential computing


In Privatemode, prompts and responses are fully protected from external access. Prompts are encrypted client-side using AES-256 and decrypted only within Privatemode's confidential computing environment (CCE), enforced by Intel and AMD CPUs and Nvidia H100 GPUs. Data remains encrypted at runtime within the CCE, ensuring it never appears as plaintext in main memory.
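To illustrate the principle of client-side encryption (a conceptual sketch only, not Privatemode's actual protocol or key exchange), the snippet below encrypts a prompt with AES-256-GCM before it would ever leave the device:

```python
# Conceptual sketch of client-side AES-256 encryption of a prompt.
# This is NOT Privatemode's actual wire format; key exchange with the
# confidential-computing environment (CCE) is omitted for brevity.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice, negotiated with the CCE
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # 96-bit nonce, unique per message

prompt = b"Analyze this patient record: ..."
ciphertext = aesgcm.encrypt(nonce, prompt, None)

# Only the ciphertext and nonce travel to the service; the plaintext never does.
# Inside the CCE, the holder of the key can decrypt and process the prompt.
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == prompt
```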


Verifiable security


The CPUs and GPUs enforcing Privatemode's confidential-computing environment issue cryptographic certificates for all software running inside. With these, the integrity of the entire Privatemode backend can be verified. Verification is performed on the user side via the Privatemode proxy or SDK before sharing any data.
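Conceptually, this verification compares hardware-signed measurements of the backend software against known-good reference values before any data is sent. The sketch below uses made-up field names and values to illustrate the idea; it is not the interface of the Privatemode proxy or SDK:

```python
# Illustrative sketch of remote-attestation checking. The report structure and
# field names are hypothetical stand-ins; the real checks are performed by the
# Privatemode proxy or SDK before any prompt leaves the device.
from dataclasses import dataclass

@dataclass
class AttestationReport:
    measurement: str       # hash of the software stack inside the CCE
    signature_valid: bool  # result of checking the CPU/GPU vendor signature

# Reference values the client trusts, e.g. published with the service release.
EXPECTED_MEASUREMENTS = {
    "3f8c...placeholder...",  # hypothetical hash of the attested backend image
}

def verify(report: AttestationReport) -> bool:
    """Accept the backend only if the hardware signature checks out and the
    measured software matches a known-good reference value."""
    return report.signature_valid and report.measurement in EXPECTED_MEASUREMENTS

report = AttestationReport(measurement="3f8c...placeholder...", signature_valid=True)
assert verify(report)  # only then establish the encrypted channel and send data
```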


Blackbox architecture


Privatemode is architected such that user data can be accessed neither by the infrastructure provider (for example, Azure), nor by the service provider (Edgeless Systems), nor by other parties such as the provider of the AI model (for example, Meta). While confidential-computing mechanisms prevent outside-in access, sandboxing mechanisms and end-to-end remote attestation prevent inside-out leaks.


FAQ

Can I build my own AI application with Privatemode?

Yes, you can build your own AI application with Privatemode. Privatemode offers an easy-to-use API for inference.

Where is Privatemode hosted?

Privatemode is hosted on Microsoft Azure. However, Privatemode's architecture ensures that neither Microsoft nor Edgeless Systems can access your data.

Which large language models (LLMs) does Privatemode serve?

Currently, Privatemode uses Llama 3.1 70B. We'll add more models going forward, like Deepseek R1.

How can I test Privatemode?

You can test Privatemode by requesting a trial API key via the contact form below.

What API does Privatemode provide?

Privatemode provides an OpenAI-compatible inference API that allows users to securely interact with LLMs.

Can I upload files to Privatemode?

Document upload is currently not part of Privatemode, but you can of course use your own retrieval-augmented generation (RAG) pipeline with the Privatemode API.
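As a rough illustration of that pattern (the proxy address, model name, and the toy keyword retrieval are assumptions, not part of Privatemode), a client-side RAG loop could look like this:

```python
# Sketch of a client-side RAG flow: retrieve relevant snippets locally, then
# send them as context through the (assumed) local Privatemode proxy.
import requests

DOCUMENTS = [
    "Invoice 4711 was paid on 2024-03-01.",
    "The support contract renews annually in June.",
    "Data processing agreements are stored in the legal share.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Toy retrieval: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def ask(question: str) -> str:
    context = "\n".join(retrieve(question))
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # assumed proxy address
        json={
            "model": "latest",  # placeholder model name
            "messages": [
                {"role": "system", "content": f"Answer using this context:\n{context}"},
                {"role": "user", "content": question},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("When does the support contract renew?"))
```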

Do I need to trust Edgeless Systems?

No. Privatemode leverages confidential computing and provides hardware-enforced end-to-end encryption, ensuring we can never see your prompts or replies. For more information, see the documentation.

Get in touch


Do you have questions or remarks about Privatemode? Leave your details and we'll get back to you shortly.

If the contact form does not load (for example because of privacy settings or an ad blocker), please send an email to contact@edgeless.systems.