# AI Use Disclosure
The AI Use Disclosure describes how Han AI uses artificial intelligence, the foundation-model providers it relies on, and the known limits of AI Output. It is incorporated into the MSA at §10 and the DPA at Annex A.
## What the System does with AI
Han AI uses large language models, vision-language models, and (where configured) image- and video-generation models to:
- Read and interpret messages, documents, images, and audio.
- Generate text, images, code, and structured data.
- Make recommendations and, where authorised in your SOW, take actions on your behalf.
- Operate continuous agents that run on a schedule or in response to events.
- Surface insights from Client Data.
The System is probabilistic: the same input can produce different outputs, and any output can be wrong.
## Foundation-model providers
| Provider | Used for | Processing region |
|---|---|---|
| OpenAI | Reasoning, text generation, vision analysis | United States |
| Anthropic | Reasoning, long-context analysis, agent execution | United States |
| Google (Gemini, where used) | Multimodal reasoning, video understanding | United States |
The authoritative sub-processor list is published at hanai.systems/sub-processors and reproduced in DPA Annex B.
## Training opt-out
Where a provider offers a setting that controls whether submitted content may be used to train its models, Han AI opts out of training by default. Han AI does not fine-tune models on your Client Data and does not share Client Data with third parties beyond the disclosed sub-processors.
## What humans review
You are responsible for human review of AI Output before relying on it in any of the following contexts:
- Decisions that materially affect an individual’s employment, health, finances, legal rights, or safety.
- Public communications attributed to you.
- Financial transactions, contracts, or commitments.
- Regulatory filings or representations to authorities.
- Operational changes with significant downstream impact.
Where the System is configured to take autonomous actions, the scope of autonomy, approval gates, blast-radius limits, and audit logging are set out in your SOW.
## Known limits of AI Output
- May be inaccurate (“hallucination”) — verify before relying on it.
- May be biased — models reflect their training data.
- May be outdated — knowledge cutoffs apply.
- May not be unique — generative output can resemble what another user receives.
- May not be protectable by copyright in your jurisdiction.
- Is not professional advice.
## Sensitive data
Do not submit health records, biometric identifiers, government identifier numbers, information about children under sixteen, or content covered by privilege without prior written agreement and additional safeguards.
## Provider changes
Han AI may change the foundation-model provider or model used for any function within the System on notice to you, where the change improves quality, cost, speed, or safety.
## Full document
TODO: confirm public PDF URL for the AI Use Disclosure.