This article defines Axum’s clear, company‑wide standard for safe, effective AI use: one that keeps humans in control and protects client privacy.
📍 Step 1: Know Axum’s Responsible AI Principles
These are Axum’s non‑negotiables. Use them to guide every AI task.
- Privacy‑first, client‑first. Never input confidential or personally identifiable client data into AI tools unless (a) the tool is on Axum’s approved list, and (b) the contract and data controls permit it. Our mission centers inclusion, climate‑positive impact, and community trust — these values extend to our AI use.
- Human in the Loop (HITL) by design. Axum staff own the prompt, the review, and the final decision. AI assists; people remain accountable for outcomes.
- Data minimization & secure handling. Use the minimum data required; prefer synthetic, anonymized, or aggregated data when feasible.
- Fairness & explainability. Check for biased outputs; prefer approaches that make reasoning traceable and explainable to clients.
- Safety, robustness & reliability. Stress‑test prompts and outputs; avoid deploying untested AI to client‑facing channels.
- Transparency & accountability. Disclose AI assistance to internal stakeholders and, when client‑facing, follow client disclosure preferences and contracts.
✅ Tip: If a principle conflicts with a deadline, the principle wins. Escalate early.
🚧 Note: These guidelines complement, not replace, applicable regulations (e.g., the EU AI Act’s risk‑based requirements).
📍 Step 2: Use the Human‑in‑the‑Loop Workflow
Apply this loop to all Axum AI tasks (drafting, analysis, coding, research); a minimal checklist sketch follows the tip below.
- Plan → Define purpose, audience, data allowed, and red lines (what the AI must not do).
- Prompt → Provide context, constraints, and sources; ban client secrets unless the tool is approved.
- Review → Fact‑check, test for bias and safety, and verify against trusted sources; iterate prompts.
- Decide & Sign‑off → A named human owner approves or rejects before any client use.
(Insert Screenshot: HITL‑Workflow‑Checklist)
✅ Tip: For higher‑risk tasks (e.g., decisions affecting people/services), apply dual control (two qualified reviewers).
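To make the loop concrete, here is a minimal sketch of a per‑task pre‑flight record; the `HITLChecklist` fields and the `ready_for_client_use` gate are illustrative assumptions, not an existing Axum tool.

```python
from dataclasses import dataclass, field

@dataclass
class HITLChecklist:
    """Illustrative per-task record for the Plan -> Prompt -> Review -> Decide loop."""
    purpose: str                   # Plan: why AI is being used
    audience: str                  # Plan: who consumes the output
    data_allowed: list[str] = field(default_factory=list)  # Plan: permitted data classes
    red_lines: list[str] = field(default_factory=list)     # Plan: what the AI must not do
    fact_checked: bool = False     # Review: claims verified against trusted sources
    bias_checked: bool = False     # Review: tested for biased or stereotyped output
    owner: str = ""                # Decide: named human accountable for the outcome
    approved: bool = False         # Decide: explicit sign-off before any client use

    def ready_for_client_use(self) -> bool:
        # AI assists; a named person signs off and stays accountable.
        return self.fact_checked and self.bias_checked and bool(self.owner) and self.approved
```

Gating client use on a named owner plus explicit sign‑off keeps the “named human approves or rejects” step checkable rather than implied.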
📍 Step 3: Protect Client Data & Privacy
Axum’s privacy‑first rules for AI.
- Approved tools only. Use Axum‑approved vendors with enterprise controls (data isolation, no training on our prompts/outputs by default). See the Approved AI Tools list.
- Minimize & mask. Replace names, IDs, and sensitive details with placeholders or anonymized samples whenever possible.
- Secure storage. Save prompts/outputs in Axum workspaces with access controls; never in personal drives.
- Client consent & contracts. Respect client instructions on AI usage and data sharing; when in doubt, do not upload.
🚧 Note: Some jurisdictions require human oversight and record‑keeping for certain use cases. If the use may be “high‑risk,” consult the Partner heading the project before proceeding.
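As a concrete illustration of “minimize & mask,” here is a minimal Python sketch assuming simple regex redaction is acceptable for your material; the patterns and placeholder names are assumptions, and real client data calls for a vetted PII‑detection tool rather than ad hoc expressions.

```python
import re

# Illustrative patterns only; they will not catch every identifier format.
PATTERNS = {
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",   # email addresses
    r"\b\d{3}-\d{2}-\d{4}\b": "[ID_NUMBER]",     # SSN-style identifiers
    r"\b(?:\+?\d{1,3}[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b": "[PHONE]",  # phone numbers
}

def mask_before_prompting(text: str, client_names: list[str]) -> str:
    """Replace known client names and common identifier formats with placeholders."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

# Example: mask before pasting anything into an approved AI tool.
print(mask_before_prompting(
    "Contact Jane Roe at jane.roe@example.com or 555-867-5309.",
    client_names=["Jane Roe"],
))
# -> Contact [CLIENT] at [EMAIL] or [PHONE].
```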
📍 Step 4: Check for Accuracy, Bias & Safety
Make outputs trustworthy before sharing or acting.
- Factuality checks. Cross‑verify claims against authoritative sources; for numbers, compute independently.
- Bias sweeps. Test prompts for disparate outcomes and stereotyped language; adjust prompts and data (see the sweep sketch after this step’s tip).
- Safety tests. Probe for harmful, disallowed, or hallucinated content.
- Explainability. Prefer methods that can be explained to a non‑technical client stakeholder.
✅ Tip: Keep a short “validation note” (what you checked, what changed) with the file or ticket.
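One way to run a bias sweep is to hold the prompt constant and vary only a demographic detail, then inspect what changes. The sketch below assumes a hypothetical `generate()` wrapper around your approved tool; the variant names are illustrative.

```python
from difflib import unified_diff

def generate(prompt: str) -> str:
    """Stand-in for a call to an approved AI tool (hypothetical)."""
    raise NotImplementedError("Wire this to your approved AI tool.")

def bias_sweep(template: str, variants: list[str]) -> None:
    """Compare outputs when only a demographic detail in the prompt changes."""
    baseline_name = variants[0]
    baseline = generate(template.format(person=baseline_name)).splitlines()
    for variant in variants[1:]:
        output = generate(template.format(person=variant)).splitlines()
        diff = list(unified_diff(baseline, output,
                                 fromfile=baseline_name, tofile=variant, lineterm=""))
        # Large or stereotyped differences mean the prompt or data needs adjusting.
        print("\n".join(diff) or f"{variant}: no difference vs. {baseline_name}")

# Example sweep (illustrative names; tailor categories to the engagement):
# bias_sweep("Draft a reference letter for {person}, a project manager.",
#            ["Maria", "James", "Wei", "Aisha"])
```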
📍 Step 5: Document, Disclose & Get Approvals
Create a trail that shows responsible use.
- RAI Note (mandatory). For client‑facing outputs, attach a one‑pager: purpose, data used, tools/models, human reviewers, evaluation steps, and known limitations (a skeleton follows this list).
- Disclosure. Where appropriate, note that AI assisted (“draft generated with AI and reviewed by Axum”).
- Approvals. For higher‑risk uses or public release, obtain partner + Legal/Compliance sign‑off.
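For teams that keep these notes alongside the deliverable, a skeleton like the one below can standardize the fields; the schema simply mirrors the list above and is an assumption, not an official Axum format.

```python
# Illustrative RAI Note skeleton; field names are assumed, not an official schema.
RAI_NOTE_TEMPLATE = {
    "purpose": "",            # why AI was used on this deliverable
    "data_used": [],          # data classes supplied to the tool (minimized/masked)
    "tools_models": [],       # approved tools and model versions involved
    "human_reviewers": [],    # named reviewers who signed off
    "evaluation_steps": [],   # factuality, bias, and safety checks performed
    "known_limitations": "",  # caveats the client or team should know
    "disclosure": "Draft generated with AI and reviewed by Axum.",
}
```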
📍 Step 6: Use Approved Tools & Vendors
Choose solutions that meet Axum’s governance bar.
- Approved vendors. Prefer vendors approved internally by Axum that provide enterprise‑grade controls.
- Contractual controls. Require no data retention for prompts/outputs by default, robust access controls, audit logs, and clear incident response.
✅ Tip: If a tool lacks enterprise‑grade privacy, it’s not approved for client data.
📍 Step 7: Report Incidents & Improve Continuously
When something goes wrong, or nearly does, tell us fast.
- Report fast. Message the internal AI team within 24 hours of discovering a potential breach, harmful output, bias issue, or misuse.
- Triage & remediate. Pause affected workflows, notify owners, and document fixes.
- Learn & tune. Update prompts, playbooks, and controls; feed lessons into training.
✅ Tip: We review incidents as soon as they are reported and feed the lessons straight back into our workflows.
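A lightweight record such as the sketch below, with hypothetical field names, keeps the 24‑hour report, the triage actions, and the fixes in one place for later review.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    """Illustrative incident log entry; fields are assumed, not an official schema."""
    summary: str                       # what happened, or nearly happened
    category: str                      # e.g., "breach", "harmful output", "bias", "misuse"
    discovered_at: datetime            # starts the 24-hour reporting clock
    reported_to_ai_team: bool = False  # message sent to the internal AI team?
    workflows_paused: list[str] = field(default_factory=list)  # affected workflows on hold
    fixes: list[str] = field(default_factory=list)             # remediation steps taken
    lessons: list[str] = field(default_factory=list)           # prompt/playbook updates

record = IncidentRecord(
    summary="Prompt included an unmasked client name",
    category="misuse",
    discovered_at=datetime.now(timezone.utc),
)
```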
✨ Quick Summary
- ✅ Keep humans in control—prompt, review, and approve before any client use.
- ✅ Protect privacy—approved tools only; minimize/mask data; follow contracts.
- ✅ Test for accuracy, bias, and safety before sharing.
- ✅ Document & disclose use of AI and get approvals for higher‑risk cases.
- ✅ Choose compliant vendors that Axum has approved internally.
- ✅ Report incidents quickly and improve controls continuously.
🔗 What’s Next?
- Go to the Consulting Workflows Section
