By R Douglas Orsagh April 27, 2026
For AI To Be More Effective, It Must First Get More Boring

With all due respect to Benson Boone and GloRilla, artificial intelligence (AI) is the (perceived) Gen Z of technology right now. AI is smart, fast, and wildly capable — yet still early in its impact.
AI is in everything, everywhere, all at once. Every company says it's using AI and getting great results. Every executive references AI as often as a CrossFit practitioner mentions CrossFit. Every roadmap has AI splashed across the top like truffle oil: expensive, fashionable, and rarely the main ingredient. To put a fine point on it, a recent (and now notorious) MIT study found that 95% of organizations are getting zero return. And that's despite an estimated $30–$40 billion in enterprise GenAI investment. Yikes.
This lack of return isn’t because AI isn’t working. It is. Employees are using it constantly to draft emails, summarize meetings, and speed up one-off tasks. That’s not wrong. It’s actually impressive.
It’s just… very Gen Z. Helpful, creative, individually productive, and completely disconnected from how enterprises scale value.
Organizations don't win on isolated bursts of individual productivity. They win on repeatable outcomes. Predictable execution. Processes that hold together under pressure.
One myth has driven countless AI projects with zero return on investment (ROI): that enterprise AI success will emerge organically from enough smart people "playing around."
That’s not a strategy. That’s just hope. And hope isn’t a strategy.
Telling Me Questions and Asking Me Lies
Despite what it seems every boardroom is saying right now, AI isn’t a strategy.
If you're saying "we have an AI strategy" after simply licensing a tool, training employees, or launching an innovation lab, that's like saying "we have a fitness strategy" because you bought a NordicTrack and a pair of white socks.
Tools don’t create outcomes. Processes do.
A lot of AI programs start backward. They start with the tool, hand it to people, and then hope a valuable process materializes. The result is predictable: impressive demos, scattered wins, and zero systemic impact.
That predictability is how enterprises end up taking the most expensive approach in technology adoption:
“AI is the answer… We’ll figure out the question!”
But AI is less effective when introduced to find problems. Instead, it should be introduced to remove friction from problems we already understand.
AI succeeds when three things are true:
- It starts operating inside a defined, existing workflow (more on this in a bit)
- The outcome is measurable and unambiguous
- Constraints are explicit, enforced, and non-negotiable
Meanwhile, AI has been failing for two key reasons:
- It’s positioned as the solution instead of an enabler
- Teams start with tools and training, then hunt for something — anything! — to automate
The pattern isn't subtle. It's just inconvenient. Constraints feel boring and lack the shine that comes with finding a new application for GenAI. Constraints don't look good on slides. They don't make for exciting demos, either.
But — bear with me — they really work.
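To make "explicit, enforced constraints" concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the tolerance, the function name, and the figures are illustrations, not a prescription.

```python
# A minimal, hypothetical sketch of an enforced constraint.

TOLERANCE = 0.01  # hypothetical materiality threshold: 1% of the source balance

def accept_ai_suggestion(suggested: float, source_of_record: float) -> bool:
    """Gate an AI-suggested figure against the governed source of record.

    The constraint is explicit (a numeric tolerance), enforced in code,
    and non-negotiable: anything outside it is rejected outright.
    """
    if source_of_record == 0:
        return suggested == 0
    deviation = abs(suggested - source_of_record) / abs(source_of_record)
    return deviation <= TOLERANCE

# The model can draft and suggest all it likes, but nothing enters the
# workflow without passing the gate.
print(accept_ai_suggestion(10_050.0, 10_000.0))  # True: within 1%
print(accept_ai_suggestion(12_000.0, 10_000.0))  # False: rejected
```

The point isn't the ten lines of code. The point is that the constraint lives outside the model, where no amount of eloquent output can negotiate with it.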
The Simple AI Profitability Model to Avoid the “Flop”
Despite the breathless headlines, successful AI adoption follows the same arc as every major technology shift dating back to the wheel’s invention.
Step 1: Drive efficiency.
Step 2: Scale.
Step 3: Unlock profitability.
Trying to skip steps doesn't make you bold... it just leaves AI flopping around searching for a purpose. In fact, sloppy AI adoption multiplies inefficiency, often at painful expense, and often invisibly, until something breaks.
This inefficiency leaves most organizations stuck in a constant flop valley, flip-flopping from one shiny new pilot to the next with no real ROI to show for it. Not because they lack ambition, but because they lack the patience to build on existing processes.
Why Close & Consolidation Is an Excellent Application for AI
Close and consolidation is an ideal environment for a disciplined AI approach for four reasons:
- Defined, repeatable workflows
- Clear rules and deadlines
- High data volume with the necessary historical context
- Strong requirements for accuracy, auditability, and explainability
With the close, finance stops theorizing and starts committing. Numbers become statements. Statements become disclosures. Disclosures get signed, audited, and scrutinized by people whose job demands skepticism. This isn't a space where "pretty close" or "directionally correct" enjoys a long shelf life.
In fact, most AI conversations politely skip something important: AI in finance doesn’t work without the support of a financially intelligent and fully governed data layer.
Finance doesn't need AI to be creative. Finance needs AI to be right. And AI can't be right when the underlying data is fragmented across systems, governed inconsistently, or stitched together at the last minute with hope and a pivot table.
Ultron is a maniacal robot, not an accountant. A horizontal agent layer, Ultron-like or otherwise, may be a fantastic interface, but it doesn't automatically understand the financial context of the underlying data; it still needs the business and financial intelligence embedded in a unified data model beneath it. And expanding that agentic layer to genuinely "understand" and "utilize" financial information on its own would take more tokens than have ever passed through Chuck E. Cheese locations.
Today, close and consolidation already depend on a unified financial truth: one set of numbers, one structure, and definitions that mean the same thing on Day 2 and Day 27. That's what makes a unified, financially intelligent data model, one that's system‑agnostic and can connect to any source system, so critical.
An enterprise resource planning (ERP) system here. A subledger there. A planning system off to the side, insisting it's "basically the same number." Finance doesn't care where the data originates. Finance cares that the data reconciles, holds history, and survives uncomfortable questions from auditors, executives, and regulators.
Many AI initiatives quietly unravel at this point. Why? Generic models and one‑off agents are asked to reason over inconsistent inputs and shifting definitions. They produce outputs, but not answers anyone wants to defend in a meeting with Legal present.
In close and consolidation, that doesn't fly.
When embedded into this environment — operating on governed, standardized financial data — AI stops being speculative and starts being useful. AI can flag anomalies, surface exceptions, compare performance across periods, and focus human attention on what really matters.
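As a rough illustration, here's what one of those "boring" jobs might look like, sketched in Python. The accounts, balances, and 25% threshold are invented for the example; a real close process would read governed data from the unified model, not hard-coded dictionaries.

```python
# Hypothetical sketch: compare account balances across periods and flag
# anomalies with a stated reason. All figures are invented.

THRESHOLD = 0.25  # flag period-over-period swings larger than 25%

prior = {"Travel": 120_000.0, "Software": 80_000.0, "Payroll": 1_500_000.0}
current = {"Travel": 118_500.0, "Software": 132_000.0, "Payroll": 1_510_000.0}

for account, prior_balance in prior.items():
    change = (current[account] - prior_balance) / prior_balance
    if abs(change) > THRESHOLD:
        # The flag carries its "why": the rule, the inputs, and the math,
        # so a reviewer can trace the exception instead of trusting a shrug.
        print(f"EXCEPTION {account}: {change:+.0%} vs. prior period "
              f"({prior_balance:,.0f} -> {current[account]:,.0f}), "
              f"threshold {THRESHOLD:.0%}")
```

Notice that the exception arrives with its reasoning attached. That detail matters more than it looks, as we'll see shortly.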
For that reason, close and consolidation isn’t just a “good” AI use case. Close and consolidation is one of the few places where AI is forced to grow up.
And as it turns out, boring (with controls, context, and guardrails) is exactly what makes AI work.
Explainability and Human Judgment Are Features, Not Bugs
A number doesn't mean anything by itself. If I told you I weigh 150 pounds, you wouldn't know whether that's healthy, risky, impressive, or concerning without context. Height matters. Age matters. Muscle mass matters. Medical history definitely matters. (Would it help if I told you I'm 6-foot-3?)
The number alone tells you almost nothing.
Finance works the same way.
Any AI that produces numbers without financial context is like a scale reading with no accompanying medical chart. It might be accurate, yes. Without the story, however, a number is meaningless. Worse, it can be misleading.
This nuance is where many AI use cases collapse. When leadership or auditors ask for the "why" behind the numbers, the system shrugs. It's like a scale that flashes a number with no insight into what changed, what caused it, or whether anyone should worry. The system has merely recognized a pattern, not provided an explanation. And it doesn't know the explanation.
Close and consolidation doesn’t allow such ambiguity. Instead, the processes demand traceability, justification, and accountability. AI must show its work. When it operates under scrutiny and not blind trust, the numbers aren’t just measured — they're understood.
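What might "showing its work" look like in practice? Continuing the hypothetical sketch from earlier: the output of an AI check wouldn't be a bare number, but a record that travels with its own justification. The field names below are assumptions for illustration.

```python
# Hypothetical sketch: an AI-raised exception as a structured, auditable
# record rather than a bare number. Field names and values are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class AuditableFlag:
    account: str        # where the anomaly lives
    period: str         # which close period raised it
    rule: str           # the explicit, human-readable rule that fired
    inputs: dict        # the exact figures the rule evaluated
    source_system: str  # data lineage: where those figures came from

flag = AuditableFlag(
    account="Software",
    period="2025-M12",
    rule="Period-over-period change exceeds 25%",
    inputs={"prior": 80_000.0, "current": 132_000.0, "change_pct": 65.0},
    source_system="unified financial data model",
)

# An auditor doesn't get a shrug; they get the rule, the math, and the lineage.
print(flag)
```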
“But We Just Need a Website Agent!”
In the late '90s, I was an equity analyst at an investment bank. Every company told us its best asset was its website. Brick and mortar was so passé. When we asked about the path to profitability, we were told "clicks."
One survivor of that era (and only a fraction of the capital poured in ever yielded results) excelled at the never-been-done-before business of… selling books. Very cutting edge… in 1440.
The current AI gold rush feels like a modern remake of that moment, except that “we just need a website” has been replaced with “we just need an agent.”
Teams spin up one‑off AI agents to automate isolated tasks — without shared data, governance, or clear ownership. These experiments can be clever. Sometimes even impressive. However, they rarely scale. At the enterprise level, things that don’t scale don’t last.
We’ve seen this movie before.
The dot‑com bubble didn’t burst because the internet failed. The bubble burst because business models without clear, measurable outcomes collapsed under scrutiny. The technology worked. The strategy didn’t.
Sound familiar?
AI experiments without process alignment will fade away. AI that's governed, measured, and grounded in systems that already matter will persist, quietly outperform, and eventually make the hype look very dated.
AI Needs to Get a Little More Boring
AI isn’t here to replace thinking. Nor is AI here to replace processes. Instead, it’s here to accelerate well-designed systems.
The organizations that win with AI won’t look revolutionary at first. They’ll look disciplined. Process‑driven. Maybe even a little boring.
And with time?
They’ll look incredibly successful.
Read more about how to make the most of AI in our eBook, The CFO’s Guide to Finance AI.



