# aicodesign

Provenance and review tracking for AI-generated code.
In fast-moving environments, using LLMs to generate code accelerates development, but it introduces varying levels of risk. aicodesign provides lightweight Python decorators to explicitly mark the review status and trust boundaries of AI-generated code running in production.
## Installation

```shell
pip install aicodesign
```

or, with uv:

```shell
uv add aicodesign
```

## The Three Tiers

This library standardizes AI-generated code into three distinct categories based on the degree of human verification:
### Tier 3: Pure AI Draft (`@ai_draft`)

- Code Reviews: 0
- Test Reviews: 0
- Concept: The code and its tests were generated by an LLM and pushed without human review. It is a raw draft. Emits a runtime logger warning when executed.

### Tier 2: AI Blackbox (`@ai_blackbox`)

- Code Reviews: 0
- Test Reviews: 1+ (Human Verified)
- Concept: The internal logic is unreviewed (a black box), but the code is bounded by strict, human-reviewed unit tests. We know what it does, even if we haven't audited how it does it.

### Tier 1: Co-Signed Code (`@ai_co_signed`)

- Code Reviews: 1 (Human Verified)
- Test Reviews: 1+ (Human Verified)
- Concept: A human developer has reviewed the AI's logic and tests, officially putting their name on the line alongside the LLM. Requires a mandatory `reviewer` argument.
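The library's internals are not shown here, but a decorator of this shape is straightforward to picture. A hypothetical sketch of what a Tier 3 marker might look like (this is illustrative, not aicodesign's actual implementation; the `"draft"` tag and `__ai_ticket__` attribute are assumptions):

```python
import functools
import logging

logger = logging.getLogger("aicodesign")

def ai_draft(ticket):
    """Hypothetical sketch: mark a function as an unreviewed AI draft."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Tier 3 code emits a runtime warning on every call
            logger.warning(
                "Executing unreviewed AI draft %s (ticket %s)",
                func.__name__, ticket,
            )
            return func(*args, **kwargs)

        # Attach provenance metadata for CI/CD or telemetry to read
        wrapper.__ai_provenance__ = "draft"   # assumed tag name
        wrapper.__ai_ticket__ = ticket        # assumed attribute
        return wrapper
    return decorate
```

Because the metadata lives on the wrapper as ordinary attributes, no registry or import-time hook is needed; anything that can see the function can see its review status.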
## Usage

```python
from aicodesign import ai_draft, ai_blackbox, ai_co_signed

# Tier 3: Pure AI Draft
@ai_draft(ticket="HFT-101")
def calculate_momentum_alpha(prices):
    # Unreviewed logic and tests
    pass

# Tier 2: AI Blackbox
@ai_blackbox(ticket="HFT-102", notes="Tests verify strict output boundaries")
def parse_exchange_feed(payload):
    # Logic is unreviewed, but a human vetted the test harness
    pass

# Tier 1: Co-Signed Code
@ai_co_signed(reviewer="alice.dev", ticket="HFT-103")
def update_order_book(book, new_orders):
    # A human has audited the logic and tests
    pass
```

All decorators attach metadata to the decorated functions, making it easy to build CI/CD guardrails or runtime telemetry that track AI code execution.
The metadata is exposed as plain function attributes:

```python
print(update_order_book.__ai_provenance__)  # Output: "co_signed"
print(update_order_book.__ai_reviewer__)    # Output: "alice.dev"
```

## License

MIT