Architectural Decision Record (ADR)

Title: AI Model Hosting and Usage Decision
ADR ID: ADR-00X
Status: Proposed / Approved / Superseded
Date: [YYYY-MM-DD]
Author(s): [Name(s), Org, Contact Info]
Reviewer(s): [Architecture board, cyber lead, PM, etc.]


1. Context

Briefly describe the context in which the AI model will be used:

  • Project / Mission Area:
  • Classification Level: (e.g., Unclassified, CUI, IL4, IL5, IL6)
  • Stage in SDLC: (e.g., Code Generation, Testing, Planning Support)
  • Team / Software Factory:
  • Model Use Purpose: (e.g., LLM-generated tests, agentic task coordination, documentation support)

2. Decision

We have selected the following model hosting and usage pattern:

  • Hosting Model:
      ☐ Public SaaS (e.g., OpenAI.com, Bard)
      ☐ Gov SaaS / Controlled Cloud (e.g., Azure OpenAI IL5)
      ☐ Self-Hosted / Open Source (e.g., LLaMA 2, Mistral)
      ☐ Hybrid / RAG with Controlled External Model Access

  • Model(s) Chosen:

  • Access Method: (e.g., API call, RAG via broker, IDE plugin; a minimal access sketch follows this list)
  • Environment of Use: (e.g., Platform One, air-gapped enclave, GovCloud pipeline)
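
Where the access method is an API call routed through a broker or gateway, the Python sketch below illustrates the intended pattern: requests go through an approved proxy with a pinned model identifier, never directly to a public vendor API. The gateway URL, token variable, and model identifier are illustrative assumptions, not a prescribed interface; substitute the endpoint, credentials, and model version recorded in this ADR.

```python
import os
import requests

# Illustrative placeholders only -- substitute the broker endpoint and
# credentials approved for your environment (e.g., a Platform One proxy).
GATEWAY_URL = os.environ.get(
    "MODEL_GATEWAY_URL", "https://model-gateway.example.mil/v1/completions"
)
PINNED_MODEL = "approved-model-id"  # the exact model/version recorded in this ADR


def query_model(prompt: str) -> str:
    """Send a prompt through the approved broker, never directly to a vendor API."""
    response = requests.post(
        GATEWAY_URL,
        json={"model": PINNED_MODEL, "prompt": prompt},
        headers={"Authorization": f"Bearer {os.environ['GATEWAY_TOKEN']}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["output"]
```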

3. Rationale

Provide justification for the selected model and hosting choice:

| Factor | Consideration | Notes |
| --- | --- | --- |
| Mission Risk & Classification | | |
| Required Trust Level | | |
| Pipeline Integration Capability | | |
| Model Transparency / Versioning Needs | | |
| Sustainment Capacity | | |
| Vendor Compliance (FedRAMP, IL4/5) | | |
| Security & Privacy Constraints | | |
| Performance Requirements | | |
| Prompt / Output Governance Maturity | | |

4. Controls and Safeguards

Document the cybersecurity and governance guardrails in place:

  • [ ] Model version control (locked or tracked)
  • [ ] Prompt and output logging enabled (see the logging sketch after this checklist)
  • [ ] Prompt templates versioned and reviewed
  • [ ] Human-in-the-loop validation (where applicable)
  • [ ] Egress controls for hybrid or external access
  • [ ] Automated or manual rollback plan in place
  • [ ] Integration with existing DevSecOps logging and policy
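
As a concrete illustration of the logging and version-control items above, the sketch below records each prompt/output pair against the pinned model version so interactions stay auditable within existing DevSecOps tooling. It assumes a JSON-over-stdlib-logging pipeline; the logger name and field names are illustrative, not a mandated schema.

```python
import json
import logging
from datetime import datetime, timezone

# Feed prompt/output records into the existing DevSecOps log pipeline
# (e.g., a SIEM-ingested JSON stream). All field names are illustrative.
audit_log = logging.getLogger("ai.prompt_audit")
logging.basicConfig(level=logging.INFO)


def log_interaction(model_id: str, prompt: str, output: str, user: str) -> None:
    """Record every prompt and model output for traceability and rollback review."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_id,  # the pinned model/version from Section 2
        "user": user,       # supports human-in-the-loop accountability
        "prompt": prompt,
        "output": output,
    }))
```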

5. Consequences

Describe potential consequences or tradeoffs of this decision:

  • Known Risks (e.g., latency, vendor lock-in, insufficient observability):
  • Fallback Plan or Alternative:
  • Periodic Review Timeline (e.g., every 6 months or upon model update):

6. References

  • [Link to system architecture doc]
  • [Trust calibration checklist or framework alignment]
  • [Supply Chain Risk Management record]
  • [DoD AI Strategy / AI RMF mapping]
  • [Testing and evaluation results, if any]