Airlock is an AI policy layer that prevents the disclosure of sensitive information, such as PII and PHI, in your AI applications.

It can be very hard to know what information is in AI training data, which can lead to AI models disclosing sensitive information to their users. The risk is even greater when using AI models trained by others.

With Airlock, the outputs of your AI models are inspected and, where necessary, modified according to your policy. You can also evaluate AI-generated text for sentiment and offensiveness.

Airlock FAQ | Airlock User’s Guide

Airlock runs in your cloud and your data stays in your cloud.

How Airlock Works

Suppose you have an AI-powered chat bot and you want to apply a policy to its responses before they are returned to the user.

Airlock runs in your cloud and exposes an API. You send the chat bot's output to Airlock, which applies your policy. The modified text is returned to your application, which can then return it to the user.
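The request flow above can be sketched as follows. This is a minimal illustration, not Airlock's documented API: the endpoint URL, field names, and policy identifier are all assumptions made for the example.

```python
import json

# Hypothetical Airlock endpoint inside your cloud; the real URL and
# path are deployment-specific and not specified in this document.
AIRLOCK_URL = "https://airlock.internal.example.com/v1/inspect"


def build_request(model_output: str, policy: str) -> str:
    """Serialize the chat bot's output and the policy to apply.

    The payload shape ("text", "policy") is an assumption for
    illustration, not Airlock's actual request schema.
    """
    return json.dumps({"text": model_output, "policy": policy})


# Your application would POST this payload to AIRLOCK_URL and return
# the (possibly modified) text from the response to the end user.
payload = build_request("My SSN is 123-45-6789.", "redact-pii")
print(payload)
```

The key point is that the chat bot's raw output never reaches the user directly; it makes a round trip through Airlock first, and only the policy-checked text is returned.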

AI Policies

You define policies that Airlock will apply to your AI-generated text. Your policies control how PII is managed.
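A policy might pair each category of sensitive data with an action to take. The structure below is a hypothetical sketch; the entity names, action names, and overall format are assumptions, since the document does not show Airlock's actual policy syntax.

```python
# Hypothetical policy definition: each rule maps a PII entity type to
# the action Airlock should take when that entity appears in output.
policy = {
    "name": "customer-chat",
    "rules": [
        {"entity": "SSN", "action": "redact"},     # remove entirely
        {"entity": "EMAIL", "action": "mask"},     # partially obscure
        {"entity": "PHONE", "action": "allow"},    # pass through
    ],
}

# Build a lookup from entity type to action for quick policy checks.
actions = {rule["entity"]: rule["action"] for rule in policy["rules"]}
print(actions)
```

A rule-per-entity layout like this lets you tighten or relax handling of individual PII categories without changing application code.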

Cloud Agnostic

Airlock can run in any cloud and is available on the AWS, Google Cloud, and Microsoft Azure marketplaces.