Back
Technology
October 21, 2025

The age of AI and the cost of getting lost in the hype

Aaron Harris, CTO at Sage

The AI boom is in full swing. From generative tools to autonomous agents, it feels like everyone is racing to add AI to their products. But beneath the buzzwords lies a sobering reality: not all AI is built responsibly, or free of bias.

In a landscape where over 70,000 AI companies have emerged, maturity, accountability and regulation truly matter if we are to adopt AI in the right way. According to Gartner, only a fraction of these companies have moved beyond experimental pilots into production-ready systems, a sign that ‘responsible AI’ is far from a guarantee.

For finance leaders, this distinction matters. Getting swept up in the hype without proper guardrails can jeopardise the thing accounting relies on most: trust.

Mature AI, made for the real world

Some companies bolt AI onto their platforms and call it innovation. We take a different approach.

Sage AI is embedded directly into the workflows that matter most to small and medium businesses. It automates repetitive tasks, surfaces insights and reduces friction in financial operations.

Just as important is how we execute, building AI systems from the ground up. Our models are trained on curated, real-world financial data and verified content from trusted professional bodies, such as the AICPA. Choosing the right suppliers and partners matters as much as the technology itself: we work with like-minded businesses that equally want AI to succeed in the industry.

This approach is backed by a growing portfolio of AI patents, including those focused on semantic caching, hallucination reduction and deterministic execution. Because when the stakes are this high, ‘good enough’ isn’t good enough.

Trust built in by design

After all, I don’t believe AI is here to replace people; it’s here to make their work more valuable. Many finance leaders want to embrace AI but are cautious – and rightly so. Bias, opacity and unchecked ambition have created a credibility gap across the entire tech industry.

This isn’t just a finance challenge either; fintech companies navigating fraud detection and healthcare providers deploying diagnostic AI face the same pressure to prove reliability. In fact, it’s why over a third (36%) of engineering and IT leaders say that reliability and compliance are their top development priorities over the next two years.

The ‘move fast and break things’ mindset might work in some corners of Silicon Valley, but it doesn’t belong in financial systems that demand precision and accountability. Some 28% of accountants cite legal and ethical issues as their top risk, a reminder that trust isn’t a nice-to-have but a necessity.

Most other industries share this concern, with leaders recognising that technology shouldn’t be confined to a black box. Microsoft introduced its Responsible AI Standard, Google has rolled out model cards and IBM piloted AI Factsheets, all to show how systems are built, trained and verified.

Maximising human potential

The other challenge that businesses of all shapes and sizes must address is the growing trust gap. Why? Because transparency and accountability aren’t mere abstract values; they reinforce one another. Yet only 33% of executives say that their companies actually disclose their AI governance framework, even though employees and consumers think it’s important.

In sectors like legal and finance, where the smallest of errors carry huge consequences, the demand for ‘trustworthy AI’ is accelerating. This is where partnering with industry experts and professional bodies can make all the difference when making sure AI models are built ethically.

Such partnerships can also be the all-important vehicle to upskill workforces and create industry-specific training materials, so AI can understand the nuance, ethics and terminology unique to each business. We’ve taken a similar approach by investing in a Trust Label. This idea didn’t come from a boardroom, however – it was shaped by real conversations with customers who asked for transparency, accountability and assurance.

Setting the standard for AI

Companies shouldn’t be in an AI arms race, chasing headlines or vanity metrics. We need to be focused on building what businesses actually need: AI they can trust, systems they can rely on and partnerships that stand the test of time.

Being a trusted partner means knowing when to innovate, and when to pause, listen and protect. It means leading with purpose.

Trust is not a product feature. It is the foundation. And in an AI-powered era, it’s the difference between thriving and getting lost in the noise.