import { GQTSecurity } from '@gqt/security';
const client = new GQTSecurity({ apiKey: process.env.GQT_API_KEY });

// Protect AI calls with quantum-safe encryption
const response = await client.secure({
  model: 'gpt-4',
  prompt: userInput,
  protection: {
    promptInjection: true,
    outputValidation: true,
    quantumSafe: true
  }
});
Developer Portal

Build Secure AI Applications

SDKs, APIs, and tools to protect your AI systems from adversarial attacks and quantum threats. Drop-in security for any AI stack.

Quick Start

Secure Your AI in 5 Minutes

1

Install SDK

Add our SDK to your project

2

Configure

Set your API key and options

3

Protect

Wrap your AI calls with security

Python
# Install the SDK
pip install gqt-security

# Initialize with your API key
from gqt_security import GQTSecurity

client = GQTSecurity(api_key="your-api-key")

# Protect your AI calls
@client.protect(
    prompt_injection=True,
    model_integrity=True,
    quantum_safe=True
)
async def secure_ai_call(prompt: str):
    response = await openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response

# All attacks automatically blocked, all comms quantum-encrypted
result = await secure_ai_call(user_input)
SDKs & Libraries

Native SDKs for Every Stack

First-class support for Python, TypeScript, Go, and Rust. Each SDK is fully typed, well-documented, and designed for production use.

Python

gqt-security
pip install gqt-security
  • Async/await support
  • Type hints
  • OpenAI/Anthropic integrations

TypeScript

@gqt/security
npm install @gqt/security
  • Full type safety
  • ESM & CJS
  • React hooks

Go

gqt-go
go get github.com/gqt/gqt-go
  • Context support
  • Zero dependencies
  • High performance

Rust

gqt-rs
cargo add gqt-rs
  • Memory safe
  • Async runtime
  • WASM support
Security Tools

Everything You Need to Secure AI

Core

Prompt Injection Scanner

Detect and block prompt injection attacks in real-time. Works with any LLM.

Core

Model Integrity Monitor

Continuous verification that your models haven't been tampered with or poisoned.

Enterprise

Quantum-Safe Encryption

Post-quantum cryptography for all agent communications and data at rest.

Enterprise

Execution Sandbox

Isolated runtime environments that contain potential damage from compromised AI.

Core

CI/CD Integration

GitHub Actions, GitLab CI, and Jenkins plugins for security testing in your pipeline.

Core

CLI Tools

Command-line tools for security scanning, key management, and deployment.

API Reference

REST & GraphQL APIs

Full-featured APIs for when you need more control. REST for simplicity, GraphQL for flexibility. Both with comprehensive documentation.

OpenAPI 3.1 specification
Interactive API explorer
Webhook support for real-time events
Rate limiting with burst allowance
99.99% uptime SLA

Core Endpoints

POST/v1/protect
POST/v1/scan
GET/v1/models/{id}/integrity
POST/v1/encrypt
GET/v1/agents/{id}/trust
POST/v1/audit/log
Integrations

Works With Your Stack

OpenAI
Anthropic
Google AI
AWS Bedrock
Azure OpenAI
Hugging Face
LangChain
LlamaIndex
Kubernetes
Docker
Terraform
AWS

Ready to Secure Your AI?

Get started with our free tier or contact us for enterprise pricing. Our team is here to help you build secure AI applications.