# What is PromptWall?
PromptWall sits between your application and foundation models (OpenAI, Anthropic, Google, Azure, Bedrock) and enforces:

- Prompt injection detection — blocks jailbreak, exfiltration, and tool-smuggling attacks
- Answer grounding — verifies LLM responses match verified sources; rejects hallucinations
- Tool governance — webhooks registered by you, executed with HMAC signing and SSRF protection
- Full audit trail — every request logged; optional zero-retention mode with S3 + KMS
- Learning loop — feedback captured with PII redaction, used to tune policies over time
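The tool-governance bullet above mentions HMAC signing of webhook calls. A minimal sketch of how a webhook receiver could verify such a signature — the secret format and signing scheme here are assumptions for illustration, not PromptWall's documented protocol:

```python
import hashlib
import hmac

# Hypothetical shared secret issued when a webhook is registered;
# the real secret format is an assumption, not documented here.
SECRET = b"whsec_example_secret"

def sign(body: bytes, secret: bytes = SECRET) -> str:
    """HMAC-SHA256 over the raw request body, hex-encoded."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_signature(body: bytes, signature: str, secret: bytes = SECRET) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(sign(body, secret), signature)
```

Verifying over the raw bytes (before any JSON parsing) matters: re-serialized JSON can reorder keys and break the signature.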
## Quickstart (5 minutes)
From signup to your first verified answer in five minutes.
## Concepts
How PromptWall works, its three operational modes, and the governance pipeline.
## API Reference
Complete reference for /v1/verify, /v1/chat, and the tool registry.
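To make the endpoint shapes concrete, here is a hedged sketch that assembles a `/v1/verify` call. The host, field names, and auth header are assumptions for illustration; consult the actual API reference for the real schema:

```python
import json

# Placeholder host; the real API base URL is not shown in this overview.
API_BASE = "https://api.promptwall.example"

def build_verify_request(api_key: str, answer: str, sources: list[str]) -> tuple[str, dict, bytes]:
    """Assemble URL, headers, and JSON body for a /v1/verify call.

    Field names ("answer", "sources") are illustrative assumptions.
    """
    body = json.dumps({"answer": answer, "sources": sources}).encode()
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return f"{API_BASE}/v1/verify", headers, body
```

The returned triple can be sent with any HTTP client (`urllib.request`, `httpx`, etc.).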
## Python SDK
`pip install promptwall` — the official Python client.
## Three modes, one API
| Mode | Who pays for LLM tokens | Best for |
|---|---|---|
| Verify only | No one (no LLM call) | You already have an answer and want it validated |
| Webhook BYOK | You (your own API key) | You control the LLM, we govern the pipeline |
| Webhook Managed | PromptWall | Fastest integration; we run the LLM for you |
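The table above can be read as three request shapes against the same API. A hypothetical sketch of how the payloads might differ — the `mode` field and all other keys are assumptions, not PromptWall's documented schema:

```python
# Hypothetical payloads for the three modes; key names are illustrative.

verify_only = {
    "mode": "verify",
    "answer": "Paris is the capital of France.",  # you already have an answer
    "sources": ["https://example.com/geo"],       # checked for grounding
}

webhook_byok = {
    "mode": "chat",
    "prompt": "What is the capital of France?",
    "llm": {"provider": "openai", "api_key_ref": "my-openai-key"},  # your key, your token bill
}

webhook_managed = {
    "mode": "chat",
    "prompt": "What is the capital of France?",
    # no "llm" block: PromptWall runs the model and bills the tokens
}
```

The governance pipeline (injection detection, grounding, audit) applies identically in all three cases; only who produces and pays for the LLM output changes.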
## Production-ready from day one
- SOC 2-friendly architecture — KMS envelope encryption, audit archival, zero-retention mode
- Multi-tenant isolation — row-level security at the database layer
- EU hosting — Frankfurt region with GDPR-compliant data residency
- SLA tiers — 99.5% / 99.9% / 99.99% uptime based on plan