AI Usage Control Baseline

Establish Control Over AI Already in Use

AI is already in use across drafting, meeting summaries, code assistance, customer communications, and internal workflows. Leadership often cannot see where AI is being used, what data is leaving the organisation, or which uses require immediate intervention.

The output is a register of active AI uses, an exposure classification, and a clear proceed / conditions / stop control model.

Commercial Summary

AI Usage Control Baseline

Establish control over AI already in use.

A focused paid intervention for organisations where AI use has already spread across drafting, meeting summaries, code assistance, customer communications, and internal workflows without a reliable management view.

You receive a register of active AI uses, an exposure classification, and a clear proceed / conditions / stop control model.

Duration and investment

1 week: AUD 7,500
2 weeks: AUD 15,000

Best for
Organisations that need immediate control over AI already in use

Output
AI use register, exposure classification, control baseline, leadership briefing

Request Baseline Review
Download Sample Output
What this catches

The Baseline is designed to surface the kinds of use that create immediate exposure, including:

  • Draft contracts run through personal AI accounts
  • Client meetings recorded on personal devices and sent to third-party transcription tools
  • Whiteboards, screenshots, or internal planning material uploaded to consumer AI tools
  • AI-drafted customer or complaint responses sent without review
  • Candidate or employee data entered into browser-based AI tools without control

This is not hypothetical.
These patterns are already appearing inside businesses before leadership has a reliable management view.

The Problem

The Issue Is Already Inside the Business

AI use typically begins in fragments: proposal drafting, customer communications, code assistance, policy writing, recordings, screenshots, and external tools adopted without oversight.

  • Informal usage spreads before leadership has a reliable internal map.
  • Some uses are harmless. Others touch client data, commercial terms, or customer-facing outputs.
  • Ownership, review discipline, and approved-use boundaries are often absent.

The immediate issue is not AI adoption. It is the absence of a reliable management view, exposure classification, and control.

The Intervention

A Practical Control Baseline

A focused engagement for organisations with informal AI use already in place. It establishes where AI is being used, what information is involved, and whether human review exists, then classifies each use as proceed, proceed with conditions, or stop.

01

Targeted Intake

Identify relevant roles, functions, tools, and active use cases.

02

Follow-up and Contradiction Testing

Resolve incomplete disclosure through targeted follow-up, role-based prompts, and contradiction testing across functions.

03

Control Classification

Classify each use as proceed, proceed with conditions, or stop, then brief leadership on immediate exposure.

What You Receive

A Controlled Management Instrument

Designed to give leadership a defensible management view of active AI use and immediate control requirements.

01

AI Use Register

Active AI use across the organisation, including shadow or previously undisclosed use where surfaced.

02

Exposure Classification

Each use classified by data sensitivity, external impact, decision consequence, and review discipline.

03

Control Baseline

Approved, restricted, and prohibited use guidance tailored to the organisation’s actual environment.

04

Leadership Briefing

Executive summary of what can proceed, what requires conditions, and where immediate exposure sits.

05

Escalation Path

Where deeper governance, lifecycle, vendor, or cost issues surface, the engagement gives a clear basis for deciding whether a broader structural assessment is required.

06

Specimen Output Reference

The sample output shows the shape of the deliverable directly. No gating. No email capture. No download friction.

Undisclosed Use

What About AI Use People Do Not Declare?

The Baseline does not assume every use is declared on first pass. It surfaces incomplete disclosure through role-based prompts, contradiction checks, and targeted questioning around personal devices, screenshots, recordings, browser extensions, and external tools.

Important

The objective is not surveillance. It is a defensible management view of where exposure likely exists and where control is absent.

Engagement Context

Where This Sits

The AI Usage Control Baseline is a focused intervention for immediate control over informal AI use. It is not a structural governance programme.

If deeper governance, lifecycle, vendor, cost, or board-level capital issues are already in scope, the 4-week Structural AI Architecture Sprint is the appropriate intervention.

Next Step

Request Baseline Review

We will assess whether the AI Usage Control Baseline is appropriate for your organisation. If the need is deeper than a baseline, we will say so directly.

Sample Output

The specimen output is available as a direct download. No gating, no signup wall, and no tracking.

Download Sample Output

Request Baseline Review

Required fields are marked. We respond within 48 hours.