Privatemode
Privatemode gives you access to state-of-the-art large language models (LLMs) with complete data privacy, secured by end-to-end confidential computing.
Privatemode is a GenAI service like ChatGPT. The big difference is that Privatemode keeps your data private. To achieve this, Privatemode uses a technology called confidential computing. Your data gets encrypted before it leaves your device and remains protected throughout, even during processing.
Within its secure environment, Privatemode runs state-of-the-art LLMs such as Meta Llama 3.1 and models from Mistral, with DeepSeek R1 coming soon.
You can use Privatemode through an intuitive API that serves as a drop-in replacement for the OpenAI API. The API allows for automated, bulk processing of sensitive data. In addition, there's an app version with a familiar chatbot interface. The app is available for Windows and macOS.
Continuum keeps your data encrypted at all times and protects it from both the cloud provider and the service provider.
Continuum provides an OpenAI-compatible API. You only need to run a small proxy for data encryption and attestation. Alternatively, an SDK is available.
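For illustration, here is a minimal sketch of how a client could talk to the OpenAI-compatible API through the local proxy using the official OpenAI Python client. The proxy address, port, and model name below are assumptions for this sketch; refer to the official documentation for the actual values.

```python
# pip install openai
from openai import OpenAI

# The local proxy handles encryption and attestation transparently.
# Endpoint and model name are hypothetical placeholders for this sketch.
client = OpenAI(
    base_url="http://localhost:8080/v1",  # hypothetical local proxy endpoint
    api_key="unused",                     # authentication is handled by the proxy
)

response = client.chat.completions.create(
    model="latest",  # placeholder model name
    messages=[
        {"role": "user", "content": "Summarize the key risks in this contract clause: ..."},
    ],
)
print(response.choices[0].message.content)
```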
Continuum is fast and offers a selection of state-of-the-art open-source LLMs, for example, Meta Llama 3.1.
Continuum is hosted in the EU and the US, with other geographies coming soon. We use Azure and other high-quality infrastructure providers.
Reduce costs
Use secure cloud-based AI instead of building out your own capabilities on-prem.
Unlock potential
Process even your organization's sensitive data with the help of AI.
Increase productivity
Provide your employees with a trustworthy and compliant co-pilot.
Assure customers
Protect your customers' data while providing state-of-the-art AI-based services.
In Privatemode, prompts and responses are fully protected from external access. Prompts are encrypted client-side using AES-256 and decrypted only within Privatemode's confidential computing environment (CCE), enforced by Intel and AMD CPUs and Nvidia H100 GPUs. Data remains encrypted at runtime within the CCE, ensuring it never appears as plaintext in main memory.
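The proxy performs this encryption transparently. Purely for illustration, the sketch below shows AES-256-GCM encryption and decryption of a prompt using the Python `cryptography` package; the key provisioning and wire format used by Privatemode itself are not shown here and differ in practice.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative only: in Privatemode, the key is established with the attested
# confidential-computing environment, not generated locally like this.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

prompt = b"Draft a reply to this customer complaint."
nonce = os.urandom(12)  # unique nonce per message
ciphertext = aesgcm.encrypt(nonce, prompt, None)

# Only a party holding the key (the confidential-computing environment) can decrypt.
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == prompt
```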
The CPUs and GPUs enforcing Privatemode's confidential-computing environment issue cryptographic certificates for all software running inside. With these, the integrity of the entire Privatemode backend can be verified. Verification is performed on the user side via the Privatemode proxy or SDK before sharing any data.
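Conceptually, this verification boils down to comparing the measurements reported by the hardware against known-good reference values before any data is sent. The sketch below is a simplified illustration with made-up values; the real check is performed on hardware-signed attestation reports by the Privatemode proxy or SDK.

```python
import hashlib
import hmac

def verify_measurement(reported: str, expected: str) -> bool:
    """Compare the reported measurement with the reference value in constant time."""
    return hmac.compare_digest(reported, expected)

# Hypothetical reference value: the hash of the approved backend software.
approved_software = b"privatemode-backend-release"
expected_measurement = hashlib.sha256(approved_software).hexdigest()

# Simplified stand-in for the measurement carried in a hardware attestation report.
reported_measurement = hashlib.sha256(approved_software).hexdigest()

if verify_measurement(reported_measurement, expected_measurement):
    print("Backend verified; safe to send encrypted prompts.")
else:
    raise RuntimeError("Attestation failed: backend does not match the expected software.")
```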
Privatemode is architected such that user data can be accessed neither by the infrastructure provider (for example, Azure), nor by the service provider (Edgeless Systems), nor by other parties such as the provider of the AI model (for example, Meta). While confidential-computing mechanisms prevent outside-in access, sandboxing mechanisms and end-to-end remote attestation prevent inside-out leaks.
Can I build my own AI application with Privatemode?
Where is Privatemode hosted?
Which large language models (LLMs) does Privatemode serve?
How can I test Privatemode?
What API does Privatemode provide?
Can I upload files to Privatemode?
Do I need to trust Edgeless Systems?
Do you have questions or remarks about Privatemode? Leave your details and we'll get back to you shortly.