An open letter to CIOs and CTOs: AI use requires user trust, not training
Part 1 of 3
AI isn't just assisting anymore. It's taking action in day-to-day work. But most IT teams are still managing it the old way: document the rules, train the team, and hope for the best.
That approach doesn't hold up to the scale, speed, and black-box nature of AI. And you can't prove it's working. The real challenge is giving clear, defensible evidence that AI use is controlled, appropriate for each role, and getting better over time.
Most organizations believe they are taking AI readiness seriously. They have acceptable use policies, training programs, and governance language, and many have moved quickly and in good faith to put structure around adoption.
And yet, what most of them have built is proof they set expectations—not proof those expectations are actually being followed.
Those are not the same thing.
And that gap is now a real business problem.
When systems can draft, decide, recommend, and act inside workflows, confusion carries a dangerous penalty: privacy and compliance failures, rework at scale, reputational damage, and a widening gap between what leadership believes is happening and what is actually happening in the business.
Confusion becomes exposure and liability; it becomes operational risk.
For decades, corporate training has been stuck in an old argument: is it cost or investment?
This argument survived because the perceived cost of misunderstanding software was tolerable: missed features, slower adoption, and more help desk tickets.
AI ends that debate. Not because it adds capability, but because it changes the cost of being wrong.
Large studies of AI adoption show the problem is not capability but organizational readiness. Deloitte's 2025 study of more than 1,800 senior executives found that only 20% of surveyed organizations had achieved meaningful returns from AI, and that those organizations outperformed peers by treating AI as an enterprise transformation (Deloitte, "AI ROI: The paradox of rising investment and elusive returns," 2025).
AI creates a structural shift in enterprise operations because it increases three things at once:
Scale: a single mistake can be amplified across thousands of outputs, transactions, or customer interactions.
Opacity: AI outputs can be difficult to explain, placing new demands on governance and human judgment.
Action: systems are moving from recommendation to execution, which creates new accountability and rollback needs.
Speed does not close a readiness gap. It widens the blast radius of one.
If employees don't know how to exercise judgment inside AI-enabled workflows, faster deployment only scales weak practice more quickly. If leaders can't see whether policy is translating into real behavior, wider adoption only makes the problem harder to detect.
According to the IBM Cost of a Data Breach Report (July 2025), one in five organizations reported a breach due to shadow AI — and organizations that used high levels of shadow AI saw an average of $670,000 in higher breach costs than those with low or no shadow AI.
AI readiness is an organization's ability to adopt AI while staying in control — technically, operationally, and legally.
In practical terms, readiness has four characteristics that CIOs should recognize immediately. Readiness should be:
Continuous: updated as tools, policies, and risks evolve, not an annual event.
Role-specific: different expectations for an analyst, a manager, a developer, and an executive.
Behavioral: tied to observable behaviors and operational outcomes, not just attendance.
Auditable: producing defensible evidence that governance and compliance teams can rely on (sketched below).
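To make "defensible evidence" concrete, here is a minimal sketch of what a role-scoped AI-use record and policy check could look like in code. The roles, actions, field names, and policy identifiers are hypothetical illustrations for this sketch, not a standard or a recommended schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical illustration only: roles, actions, and field names here
# are assumptions for this sketch, not a standard or vendor schema.

@dataclass(frozen=True)
class AIUsageEvent:
    """One auditable record of AI use inside a workflow."""
    user_role: str        # e.g. "analyst", "manager", "developer"
    tool: str             # which AI system produced the output
    action: str           # "draft", "recommend", or "execute"
    human_reviewed: bool  # did a person review before the output was used?
    policy_id: str        # which policy version governed this use
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Role-scoped boundaries: the actions each role may take with AI.
ALLOWED_ACTIONS = {
    "analyst": {"draft", "recommend"},
    "manager": {"draft", "recommend", "execute"},
}

def is_within_policy(event: AIUsageEvent) -> bool:
    """Defensible use: the action is permitted for the role AND a
    human reviewed the output before it entered the workflow."""
    allowed = ALLOWED_ACTIONS.get(event.user_role, set())
    return event.action in allowed and event.human_reviewed

# An analyst letting an AI system execute, unreviewed, is out of bounds.
event = AIUsageEvent(
    user_role="analyst",
    tool="chat-assistant",
    action="execute",
    human_reviewed=False,
    policy_id="AUP-2025-03",
)
print(is_within_policy(event))  # False

The point is not this specific schema. The point is that each use of AI leaves a record tied to a role, a policy version, and a human decision, which is something an auditor can actually examine.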
AI is not just spreading across the enterprise; its functional scope is expanding at high speed: from single-use prompts to multi-step agents, from one-off experimentation to use across multiple business functions, and from content generation to workflow execution. AI readiness belongs in the same category as cybersecurity readiness: defined boundaries, human oversight, measurable evidence, and the ability to stop and recover when something goes wrong.
NIST frames AI risk as lifecycle governance, not a single training event, and explicitly recommends continuous measures tied to operating risk profiles. The EU AI Act points in the same direction: human oversight and ongoing risk management, not point-in-time compliance. And McKinsey reports that, for the first time, most respondents are using AI in more than one business function (McKinsey, "The State of AI: Global Survey 2025").
If you want to know whether your organization is treating readiness as an operational discipline, ask these questions. If any of them are hard to answer, you've found the work:
Can you state, for each role, which AI uses are in bounds and which are not?
Can you see whether your policies are translating into real behavior inside workflows?
Could you hand an auditor or regulator defensible evidence of controlled AI use today?
If an AI-enabled workflow went wrong this morning, could you stop it, trace it, and recover?
The organizations that capture durable value from AI will not do so because they moved first. They will do so because they built the operational capability to move with control: clear boundaries, visible oversight, defensible evidence, and the ability to intervene when something goes wrong.
Without that capability, "move fast" is not a strategy — it is a gamble.
The old debate — "is training a cost or an investment?" — is obsolete. The choice now is between operational control and operational exposure.
The organizations that win will not be those with the most courses, policies, or communication cascades, but those that can repeatedly prove human+AI systems operate safely: clear boundaries, credible oversight, and auditable evidence. Across teams, across workflows, across changing systems.
The question isn't whether your organization has an AI readiness program. Most do.
The question is whether it would survive contact with reality — with an auditor, a regulator, or a workflow that went wrong. If you're not sure, that's the answer.
NIST's AI Risk Management Framework and the EU AI Act both point organizations toward lifecycle governance and human oversight as the way to enable safe value capture.
Researched and created by BrainStorm Inc.