
Continuum

The confidential GenAI service


Continuum gives you access to state-of-the-art large language models (LLMs) with complete data privacy, secured by end-to-end confidential computing.

GenAI without the security and privacy worries


Continuum is a GenAI service like ChatGPT. The big difference is that Continuum keeps your data private. To achieve this, Continuum uses a technology called confidential computing. Your data gets encrypted before it leaves your device and remains protected throughout, even during processing.

Within its secure environment, Continuum runs state-of-the-art LLMs such as Meta Llama 3.1 or models from Mistral.


Use Continuum via API or from your browser


The Continuum service provides an intuitive API that can be used as a drop-in replacement for the OpenAI API. The API allows for automated, bulk processing of sensitive data. In addition, there's a browser-based version with a familiar chatbot interface.
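Because the API is OpenAI-compatible, existing OpenAI client code keeps working against Continuum. The sketch below (not the official client; model name and payload details are illustrative assumptions, see the Continuum documentation) shows that a request body has the familiar chat-completions shape:

```python
import json

# Illustrative sketch: a Continuum request body follows the OpenAI
# chat-completions format, so OpenAI SDKs can simply be pointed at the
# Continuum proxy. The model identifier below is an assumption.
def build_chat_request(model: str, prompt: str) -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_chat_request("llama-3.1-70b", "Summarize this report.")
print(body)
```

In practice, you would send this body through the local Continuum proxy, which handles encryption and attestation transparently.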

Easy to use and powerful


Verifiably secure


Continuum keeps your data encrypted at all times and protects it from both the cloud provider and the service provider.


Plug-and-play


Continuum provides an OpenAI-compatible API. You only need to run a small proxy for data encryption and attestation. Alternatively, an SDK is available.


High performance


Continuum is fast and offers a selection of state-of-the-art open-source LLMs, for example Meta Llama 3.1.


Available worldwide


Continuum is hosted in the EU, the US, and soon other geographies. We use Azure and other high-quality infrastructure providers.

Pricing

Trial


  • Test 14 days for free

  • Up to 1 million tokens

  • Self-service

Starter


  • Pay-per-token

  • 5€ per 1M input tokens, 15€ per 1M output tokens

  • Support via e-mail

Enterprise


  • Custom integration

  • Individual models

  • Support via e-mail, chat and phone

Empower your organization with Confidential GenAI

Reduce costs

Use secure cloud-based AI instead of building out your own capabilities on-prem.

Unlock potential

Process even your organization's sensitive data with the help of AI.

Increase productivity

Provide your employees with a trustworthy and compliant co-pilot.

Assure customers

Protect your customers' data while providing state-of-the-art AI-based services.

E2E confidential computing


In Continuum, prompts and responses are fully protected from external access. Prompts are encrypted client-side using AES-256 and decrypted only within Continuum's confidential computing environment (CCE), enforced by Intel and AMD CPUs and Nvidia H100 GPUs. Data remains encrypted at runtime within the CCE, ensuring it never appears as plaintext in main memory.
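The client-side encryption step can be pictured as follows. This is a conceptual sketch only: it uses the `cryptography` package's AES-256-GCM primitive, while key exchange and Continuum's actual wire format are omitted and not specified here.

```python
# Conceptual sketch: a prompt is encrypted with AES-256 before it leaves
# the device, as Continuum's proxy does. Key management and the real
# protocol are simplified away; only the AEAD round trip is shown.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key
aead = AESGCM(key)

prompt = b"Draft a summary of the attached record."
nonce = os.urandom(12)  # must be unique per message
ciphertext = aead.encrypt(nonce, prompt, None)

# Only code holding the key inside the CCE can recover the plaintext:
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
```

Everything outside the confidential computing environment only ever sees `ciphertext`.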


Verifiable security


The CPUs and GPUs enforcing Continuum's confidential-computing environment issue cryptographic certificates for all software running inside. With these, the integrity of the entire Continuum backend can be verified. Verification is performed on the user side via the Continuum proxy or SDK before sharing any data.
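The core idea of this verification step can be sketched as follows. Real attestation parses signed hardware reports from the CPUs and GPUs; the reference value and measurement below are hypothetical stand-ins used purely for illustration.

```python
# Simplified sketch of the check the Continuum proxy or SDK performs
# before releasing any data: compare the backend's reported software
# measurement against a trusted reference value. Actual attestation
# verifies cryptographically signed hardware reports instead.
import hashlib
import hmac

# Hypothetical reference measurement of the expected backend software.
TRUSTED_MEASUREMENT = hashlib.sha256(b"continuum-backend-release").hexdigest()

def verify_backend(reported: str) -> bool:
    # Data is shared only if the backend runs exactly the expected software.
    return hmac.compare_digest(reported, TRUSTED_MEASUREMENT)

ok = verify_backend(hashlib.sha256(b"continuum-backend-release").hexdigest())
tampered = verify_backend(hashlib.sha256(b"modified-backend").hexdigest())
```

If verification fails, the proxy refuses to send any data, so a tampered backend never sees a prompt.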


Blackbox architecture


Continuum is architected such that user data can neither be accessed by the infrastructure provider (for example, Azure), nor the service provider (Edgeless Systems), nor other parties such as the provider of the AI model (for example, Meta). While confidential-computing mechanisms prevent outside-in access, sandboxing mechanisms and end-to-end remote attestation prevent inside-out leaks.


Continuum in the media

Dark Reading

Sep. 30, 2024

Securing AI With Confidential Computing

"This architecture allows the Continuum service to lock itself out of the confidential computing environment, preventing AI code from leaking data. In combination with end-to-end remote attestation, this ensures robust protection for user prompts."

Microsoft

Sep. 24, 2024

General Availability: Azure confidential VMs with NVIDIA H100 Tensor Core GPUs


"With Continuum AI, we've created a framework for the end-to-end confidential serving of LLMs that ensures the highest level of data privacy, setting a new standard in AI inference solutions."

NVIDIA

Jul. 2, 2024

Advancing Security for Large Language Models with NVIDIA GPUs and Edgeless Systems

"Edgeless Systems introduced Continuum AI, the first generative AI framework that keeps prompts encrypted at all times with confidential computing by combining confidential VMs with NVIDIA H100 GPUs and secure sandboxing."

FAQ

Can I build my own AI application with Continuum?

Yes, you can build your own AI application with Continuum. Continuum offers an easy-to-use API for inference.

Where is Continuum hosted?

Continuum is hosted on Microsoft Azure. However, Continuum's architecture ensures that neither Microsoft nor Edgeless Systems can access your data.

Which large language models (LLMs) does Continuum serve?

Currently, Continuum uses Llama 3.1 70B. We'll add more models going forward.

How can I test Continuum?

You can test Continuum by requesting a free trial API key from Edgeless Systems.

What API does Continuum provide?

Continuum provides an OpenAI-compatible inference API that allows users to securely interact with LLMs.

Can I upload files to Continuum?

Document upload is currently not part of Continuum, but you can of course use your own retrieval augmented generation (RAG) pipeline with the Continuum API.
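A custom RAG pipeline in front of the Continuum API can be as simple as: retrieve relevant snippets locally, then embed them in the prompt. The sketch below uses naive keyword-overlap scoring purely for illustration; a real pipeline would use embeddings and a vector store.

```python
# Minimal RAG sketch: pick the locally stored documents most relevant to
# the question, then build a context-augmented prompt to send through
# the Continuum API. Scoring is naive keyword overlap for illustration.
def retrieve(question: str, documents: list[str], k: int = 1) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "Continuum encrypts prompts end to end.",
    "Llamas are domesticated South American camelids.",
]
prompt = build_prompt("How does Continuum handle prompts?", docs)
print(prompt)
```

The documents never leave your environment unencrypted: only the final prompt is sent, and it travels through the encrypted Continuum channel like any other request.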

Do I need to trust Edgeless Systems?

No. Continuum leverages confidential computing and provides hardware-enforced end-to-end encryption, ensuring we can never see your prompts or replies. For more information, see the documentation.

Get in touch


Do you have questions or remarks about Continuum? Leave your details and we'll get back to you shortly.

If the contact form doesn't load (for example, due to privacy settings or ad blockers), please send an email to contact@edgeless.systems.