# AIR Blackbox

> The trust infrastructure between human intent and AI execution. Verify. Filter. Stabilize. Protect.

AIR Blackbox is open-source trust infrastructure that sits inside every AI call -- between your team and the AI stack -- providing decision traceability, escalation intelligence, operational drift detection, and human oversight proof.

v1.10.0 | 1,500+ tests | 74% coverage | Runs locally | Apache 2.0

AI made generation abundant. What becomes valuable now is the infrastructure that verifies, routes, constrains, and records machine-assisted work in real time. Compliance is the wedge. Trust infrastructure is the platform.

## Why AIR Blackbox

Enterprise governance platforms (Credo AI, Holistic AI, OneTrust) audit after the fact. AI security firewalls (Arthur AI, Lasso, Lakera) filter threats. AIR Blackbox sits inside the call -- at the interception layer between human intent and AI execution. That gives you:

- **Verify** -- HMAC-SHA256 tamper-evident audit chains for decision traceability
- **Filter** -- PII detection and prompt injection scanning in real time
- **Stabilize** -- 48 compliance checks in CI/CD for operational drift detection
- **Protect** -- Human oversight attestation (Article 14 delegation logging)

## Key Links

- [Homepage](https://airblackbox.ai)
- [GitHub](https://github.com/airblackbox/gateway)
- [PyPI](https://pypi.org/project/air-blackbox/)
- [Audit Chain Specification (Open Standard v1.0.0)](https://airblackbox.ai/spec)
- [CI/CD Integration Guide](https://airblackbox.ai/ci-cd)
- [Compliance Mapping](https://airblackbox.ai/compliance-mapping)
- [Blog](https://airblackbox.ai/blog/)

## Installation

```shell
pip install air-blackbox
```

## Quick Start

```shell
air-blackbox comply --scan . -v
```

## Features

- 48 EU AI Act + GDPR compliance checks (Articles 9-12, 14-15)
- Article 12 compliance layer -- static + runtime analysis for tamper-evident logging
- ML-DSA-65 (FIPS 204) quantum-safe digital signatures
- HMAC-SHA256 tamper-evident audit chains (open standard)
- Self-verifying `.air-evidence` bundles for auditors
- Prompt injection detection (20 patterns, 5 categories)
- GDPR scanner (8 checks)
- Bias and fairness scanner (6 checks)
- ISO 42001 + NIST AI RMF + Colorado SB 24-205 crosswalk
- Trust layers for LangChain, CrewAI, OpenAI SDK, Claude Agent SDK, Google ADK, AutoGen, and Haystack
- Agent-to-agent (A2A) compliance protocol
- Pre-commit hooks
- MCP server integration
- Runtime validation engine (tool allowlists, content policy, PII output scanning)

## Quality

- 1,500+ tests (unit + integration)
- 74% code coverage (CI-enforced floor: 70%)
- Zero lint warnings (ruff E/F/W/I rules, CI-enforced)
- Consistent formatting (ruff format, CI-enforced)
- Integration tests for LangChain, OpenAI SDK, and CrewAI
- CI matrix: Python 3.10, 3.11, 3.12

## Independent Validation

- Academic: AEGIS (arXiv:2603.12621, March 2026) independently published the same interception-layer architecture
- Analyst: McKinsey's "State of AI Trust in 2026" names trust infrastructure as critical for the agentic era
- Market: 28% of US firms have zero confidence in their AI data quality (AnalyticsWeek, 2026)

## License

Apache 2.0

## EU AI Act Deadline

August 2, 2026 -- high-risk AI systems must comply.
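The HMAC-SHA256 tamper-evident audit chain listed under Features can be illustrated with a minimal sketch. This is not AIR Blackbox's actual API (see the Audit Chain Specification for the open standard); the function names, genesis value, and event shape here are assumptions. The idea is that each entry's MAC covers the previous entry's MAC plus the serialized event, so altering any earlier record invalidates every later link:

```python
import hashlib
import hmac
import json


def chain_entry(key: bytes, prev_mac: str, event: dict) -> str:
    """Compute the MAC for one audit entry, binding it to the previous one."""
    payload = prev_mac.encode() + json.dumps(event, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()


def build_chain(key: bytes, events: list[dict]) -> list[str]:
    """MAC every event in order, threading each MAC into the next."""
    macs, prev = [], "0" * 64  # arbitrary genesis value for the first link
    for event in events:
        prev = chain_entry(key, prev, event)
        macs.append(prev)
    return macs


def verify_chain(key: bytes, events: list[dict], macs: list[str]) -> bool:
    """Recompute the chain and compare; any tampered event breaks it."""
    return macs == build_chain(key, events)
```

Because verification only needs the key, the events, and the recorded MACs, an auditor can re-derive the whole chain offline and detect retroactive edits without trusting the system that produced the log.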
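The prompt injection detection feature (20 patterns across 5 categories) can be sketched along the following lines. The categories and regexes below are illustrative assumptions, not the shipped pattern set:

```python
import re

# Hypothetical pattern table: category -> compiled patterns.
INJECTION_PATTERNS = {
    "instruction_override": [
        re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    ],
    "role_hijack": [
        re.compile(r"you are now (a|an) ", re.I),
    ],
    "exfiltration": [
        re.compile(r"(reveal|print|repeat) (your|the) system prompt", re.I),
    ],
}


def scan_prompt(text: str) -> list[str]:
    """Return the categories whose patterns match the input text."""
    hits = []
    for category, patterns in INJECTION_PATTERNS.items():
        if any(p.search(text) for p in patterns):
            hits.append(category)
    return hits
```

Scanning at the interception layer, before the prompt reaches the model, lets the gateway block or flag a call in the same request cycle rather than discovering the attempt in an after-the-fact audit.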
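The runtime validation engine's tool-allowlist check admits a similarly small sketch. The allowlist contents, size limit, and function signature are assumptions for illustration, not AIR Blackbox's configuration format:

```python
# Hypothetical runtime policy: permit a tool call only if the tool name is
# on an explicit allowlist and its arguments stay under a size limit.
ALLOWED_TOOLS = {"search", "calculator"}
MAX_ARG_CHARS = 2000


def validate_tool_call(tool: str, args: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent tool call."""
    if tool not in ALLOWED_TOOLS:
        return False, f"tool '{tool}' not in allowlist"
    if sum(len(str(v)) for v in args.values()) > MAX_ARG_CHARS:
        return False, "arguments exceed size limit"
    return True, "ok"
```

Returning a reason string alongside the verdict means every denial can be written into the audit chain, which is what turns a runtime guardrail into oversight evidence.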