PREVENT PII AND PHI DISCLOSURE IN YOUR AI APPS

Airlock

Airlock is an AI policy layer to prevent the disclosure of sensitive information, such as PII and PHI, in your AI applications.

It can be very hard to know exactly what information is in an AI model's training data, which means models can disclose sensitive information to their users. The risk is even greater when you use models trained by others.

With Airlock, the outputs of your AI models are inspected or modified based on your policy. You can evaluate AI-generated text for sentiment and offensiveness.

Airlock runs in your cloud and your data stays in your cloud.

Release Notes

How Airlock Works

Suppose you have an AI-powered chatbot and you want to apply a policy to its responses before they reach the user.

Airlock runs in your cloud and exposes an API. You send the chatbot's output to Airlock, which applies your policy. The inspected or modified text is then returned to your application, where it can safely be passed on to the user.
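As a rough sketch of that round trip, your application might package each model response and POST it to your Airlock deployment. The endpoint URL, payload fields, and policy name below are illustrative assumptions, not Airlock's actual API schema:

```python
import json

# Hypothetical in-cloud Airlock endpoint -- substitute your deployment's URL.
AIRLOCK_URL = "https://airlock.internal.example.com/v1/inspect"

def build_airlock_request(model_output: str, policy_id: str) -> str:
    """Package a chatbot response for inspection against a named policy.

    Field names ("policy", "text") are assumptions for illustration only.
    """
    return json.dumps({"policy": policy_id, "text": model_output})

# Your application would POST this payload to AIRLOCK_URL and return
# Airlock's (possibly modified) text to the user instead of the raw output.
payload = build_airlock_request("My SSN is 123-45-6789.", "phi-redaction")
```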

AI Policies

Define policies that Airlock will apply to your AI-generated text.

You can create as many policies as you need to define your AI application's business, privacy, and security requirements.
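To make the idea of a policy concrete, here is a minimal sketch of what applying PII redaction rules to AI-generated text looks like. The rule format and pattern set are invented for illustration; they are not Airlock's policy language:

```python
import re

# Illustrative redaction rules: each pairs a PII pattern with a mask.
POLICY_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def apply_policy(text: str) -> str:
    """Mask any text matching a policy rule before it reaches the user."""
    for pattern, replacement in POLICY_RULES:
        text = pattern.sub(replacement, text)
    return text

print(apply_policy("Contact jane@example.com, SSN 123-45-6789."))
# -> Contact [REDACTED-EMAIL], SSN [REDACTED-SSN].
```

A real policy layer would cover far more than two patterns, but the shape is the same: rules in, inspected or modified text out.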

Cloud Agnostic

Airlock can run in any cloud and is available on the AWS, Google Cloud, and Microsoft Azure marketplaces.

Contact us for on-prem or other deployment scenarios.

See a Demo

Request a 30-minute demo to see Airlock in action.

Request a Demo