```typescript
import { GQTSecurity } from '@gqt/security';

const client = new GQTSecurity({ apiKey: process.env.GQT_API_KEY });

// Protect AI calls with quantum-safe encryption
const response = await client.secure({
  model: 'gpt-4',
  prompt: userInput,
  protection: {
    promptInjection: true,
    outputValidation: true,
    quantumSafe: true
  }
});
```

Build Secure AI Applications
SDKs, APIs, and tools to protect your AI systems from adversarial attacks and quantum threats. Drop-in security for any AI stack.
Secure Your AI in 5 Minutes
- Install SDK: add our SDK to your project
- Configure: set your API key and options
- Protect: wrap your AI calls with security
```shell
# Install the SDK
pip install gqt-security
```

```python
# Initialize with your API key
from gqt_security import GQTSecurity
from openai import AsyncOpenAI

client = GQTSecurity(api_key="your-api-key")
openai_client = AsyncOpenAI()

# Protect your AI calls
@client.protect(
    prompt_injection=True,
    model_integrity=True,
    quantum_safe=True
)
async def secure_ai_call(prompt: str):
    response = await openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response

# All attacks automatically blocked, all comms quantum-encrypted
result = await secure_ai_call(user_input)
```

Native SDKs for Every Stack
First-class support for Python, TypeScript, Go, and Rust. Each SDK is fully typed, well-documented, and designed for production use.
Python
gqt-security (pip install gqt-security)
- Async/await support
- Type hints
- OpenAI/Anthropic integrations
TypeScript
@gqt/security (npm install @gqt/security)
- Full type safety
- ESM & CJS
- React hooks
Go
gqt-go (go get github.com/gqt/gqt-go)
- Context support
- Zero dependencies
- High performance
Rust
gqt-rs (cargo add gqt-rs)
- Memory safe
- Async runtime
- WASM support
Everything You Need to Secure AI
Prompt Injection Scanner
Detect and block prompt injection attacks in real-time. Works with any LLM.
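To give a feel for what injection scanning does, here is a deliberately minimal sketch: a toy keyword-based check of the kind a real scanner builds on. The patterns and function names below are illustrative assumptions, not part of the GQT SDK, and the hosted scanner's detection is far more sophisticated than pattern matching.

```python
import re

# Illustrative patterns only -- not the GQT scanner's actual rule set.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior|above) instructions",
    r"disregard (the |your )?system prompt",
    r"you are now (in )?developer mode",
]

def looks_like_injection(prompt: str) -> bool:
    """Return True if the prompt matches a known injection phrase."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A production scanner layers semantic and model-based detection on top of signatures like these, which is why a hosted service beats a static list.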
Model Integrity Monitor
Continuous verification that your models haven't been tampered with or poisoned.
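At its core, integrity monitoring means comparing a model artifact against a trusted baseline. The sketch below shows that idea with a plain SHA-256 digest check; the helper names are illustrative assumptions, and the GQT monitor performs this kind of verification continuously rather than as a one-off check.

```python
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 digest of a model file, read in chunks to bound memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, trusted_digest: str) -> bool:
    """True if the artifact on disk still matches its trusted baseline."""
    return file_digest(path) == trusted_digest
```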
Quantum-Safe Encryption
Post-quantum cryptography for all agent communications and data at rest.
Execution Sandbox
Isolated runtime environments that contain potential damage from compromised AI.
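The principle behind a sandbox is a hard process boundary around untrusted code. This sketch (names and parameters are illustrative, not the GQT sandbox API) shows just the two simplest ingredients, a child interpreter with a stripped environment and a wall-clock timeout; real isolation adds namespaces, filesystem restrictions, and syscall filtering on top.

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 2.0) -> str:
    """Run a snippet in a child interpreter with an empty environment
    and a hard timeout; return its stdout."""
    result = subprocess.run(
        # -I: Python isolated mode, ignores env vars and user site-packages
        [sys.executable, "-I", "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,   # raises subprocess.TimeoutExpired on overrun
        env={},            # no inherited environment variables
    )
    return result.stdout
```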
CI/CD Integration
GitHub Actions, GitLab CI, and Jenkins plugins for security testing in your pipeline.
CLI Tools
Command-line tools for security scanning, key management, and deployment.
REST & GraphQL APIs
Full-featured APIs for when you need more control. REST for simplicity, GraphQL for flexibility. Both with comprehensive documentation.
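As a rough sketch of calling the REST API directly, here is how a request to the /v1/scan endpoint might be assembled with only the standard library. The base URL and the payload shape are assumptions for illustration; consult the API reference for the published schema.

```python
import json
import urllib.request

def build_scan_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a POST to /v1/scan.
    Base URL and body shape are placeholders, not published API details."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        "https://api.example.com/v1/scan",  # placeholder base URL
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending is one line once the request is built:
# with urllib.request.urlopen(build_scan_request(key, text)) as resp:
#     result = json.load(resp)
```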
Core Endpoints
- /v1/protect
- /v1/scan
- /v1/models/{id}/integrity
- /v1/encrypt
- /v1/agents/{id}/trust
- /v1/audit/log

Works With Your Stack
Ready to Secure Your AI?
Get started with our free tier or contact us for enterprise pricing. Our team is here to help you build secure AI applications.