
Controlled AI Core™ (CAIC)

Controlled AI Core™ (CAIC) is an independent governance and control standard for autonomous AI systems,
currently published for open, non-commercial use.

CAIC defines a practical, auditable, and human-accountable architecture for deploying AI systems with controlled autonomy, explicit responsibility, and enforceable safety mechanisms.


Purpose

As AI systems transition from assistive tools to autonomous, agentic deployments, organizations face growing risks related to:

  • Loss of human control
  • Undefined responsibility and accountability
  • Regulatory non-compliance
  • Unsafe or irreversible autonomous actions

Controlled AI Core™ addresses these risks by providing a vendor-neutral governance architecture that makes AI systems controllable, auditable, and legally defensible.


What CAIC Is

CAIC is:

  • A governance and control standard, not a software product
  • Model-agnostic and vendor-independent
  • Designed for enterprise and high-risk AI systems
  • Focused on operational control, not ethics statements or principles
  • Built around human responsibility by design

What CAIC Is Not

CAIC is not:

  • An AI model or framework
  • A domain-specific regulation (medical, financial, etc.)
  • A certification or compliance guarantee
  • An open-source software license

Core Concepts

The standard is built around several key mechanisms:

  • Controlled AI Core™ Architecture
    A mandatory control layer between users and AI models

  • Autonomy Levels (L0–L4)
    Explicit limits on what decisions AI systems are authorized to make

  • Policy Engine
    Machine-enforceable rules that define allowed and forbidden actions

  • Pre- and Post-Inference Gates
    Control points that evaluate requests and responses before and after model execution

  • AISO (AI System Owner)
    A formally accountable human role responsible for AI system behavior

  • Kill-Switch & Incident Control
    Mandatory emergency shutdown and incident response mechanisms

  • Immutable Audit Logging
    Verifiable records of all AI decisions and control actions
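The interplay of these mechanisms can be sketched in code. The following is a minimal, illustrative Python sketch only: the class, method, and field names (`ControlledAICore`, `run`, `forbidden_actions`, etc.) are assumptions invented for this example and are not defined by the standard. It shows a control layer that gates requests before inference and responses after it, honors a kill-switch, and hash-chains audit records so tampering is detectable.

```python
import hashlib
import json
from dataclasses import dataclass, field
from enum import IntEnum


class AutonomyLevel(IntEnum):
    """Illustrative L0-L4 scale; higher levels permit more autonomous action."""
    L0 = 0  # fully human-operated
    L1 = 1
    L2 = 2
    L3 = 3
    L4 = 4  # supervised autonomy, kill-switch mandatory


@dataclass
class ControlledAICore:
    """Hypothetical control layer sitting between users and an AI model."""
    autonomy_level: AutonomyLevel
    forbidden_actions: set = field(default_factory=set)
    killed: bool = False
    audit_log: list = field(default_factory=list)
    _last_hash: str = "0" * 64

    def _audit(self, event: dict) -> None:
        # Chain each record's hash to its predecessor: an immutable audit log.
        payload = json.dumps(event, sort_keys=True)
        record_hash = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self.audit_log.append({"event": event, "hash": record_hash})
        self._last_hash = record_hash

    def kill(self) -> None:
        # Kill-switch: all further requests are refused at the pre-inference gate.
        self.killed = True
        self._audit({"type": "kill_switch"})

    def _pre_inference_gate(self, request: dict) -> bool:
        # Policy check before the model runs: forbidden actions never reach it.
        allowed = (not self.killed) and request.get("action") not in self.forbidden_actions
        self._audit({"type": "pre_gate", "request": request, "allowed": allowed})
        return allowed

    def _post_inference_gate(self, response: str) -> bool:
        # Example post-inference rule: outputs naming a forbidden action are blocked too.
        allowed = not any(a in response for a in self.forbidden_actions)
        self._audit({"type": "post_gate", "allowed": allowed})
        return allowed

    def run(self, request: dict, model) -> str:
        # Every request and response passes through both control points.
        if not self._pre_inference_gate(request):
            return "[blocked by pre-inference gate]"
        response = model(request)
        if not self._post_inference_gate(response):
            return "[blocked by post-inference gate]"
        return response
```

A verifier can replay the hash chain over `audit_log` and compare each stored hash; any edited or deleted record breaks the chain from that point onward. In a real deployment the log would be written to append-only external storage rather than kept in memory.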


Current Status

  • Version: v1.2
  • Publication status: Public, non-commercial
  • Commercial licensing: Not active (reserved for future versions)
  • Certification programs: Not active
  • Domain-specific policies: Not included

Licensing & Trademark

Controlled AI Core™ is an original work.

  • Usage is governed by the terms defined in LICENSE.md
  • Trademark usage is governed by TRADEMARK.md
  • Commercial use, certification, and redistribution require explicit written permission from the author

Author

Albert Shamin
Independent author and maintainer


Disclaimer

Controlled AI Core™ is provided as a governance and architectural standard.
It does not constitute legal advice, regulatory approval, or a warranty of compliance.

Organizations remain fully responsible for the lawful and safe operation of their AI systems.


Contact

For inquiries, collaboration proposals, or future licensing discussions,
contact information will be published when applicable.