
AI will elevate accounting to new levels – but only if you trust it to do so

Imagine an era where accountants use AI not just for routine tasks but to gain deeper insight into financial narratives, think more strategically, and fuel business growth. That future is closer than you think. But it hinges on one critical factor: trust. It’s a common friction point as we explore the possibilities AI brings to accounting.

70% of ACCA members agree that AI can increase the time they have to focus on business-critical tasks, and research shows that 69% of accountants believe AI has a positive impact on the profession. However, 89% of respondents in the same study admitted to having at least one concern about AI, and a broader KPMG study found that three in five people are wary of trusting AI systems.

So, if trust is the biggest barrier to AI adoption, how do the people creating AI solutions secure it? How do we instil trust in AI that allows it to transform the industry in meaningful ways? I believe it comes down to taking a methodical approach: placing as much importance on the ‘how’, ‘when’, and ‘why’ of implementing AI as on the ‘what’ it can do.

Transparency breeds trust

Building trust in your AI solutions starts with clearly stating what you are setting out to do with the technology, explaining the ‘why’, and then delivering on the ‘how’ with transparency. The foundation should be creating and sharing AI and data principles that prioritise ethical considerations around data security and the responsible use of AI. Data is essential to training and evolving AI solutions, so giving customers peace of mind that it will be used ethically is critical. That means considering the individual, group, and wider societal impact of AI use. In practice, it could involve outlining your reasons for using data and investing in research and development – such as the safe use of large language models (LLMs) – to actively work towards preventing bias in the technology itself.

Innovation and assurance in equal measures

AI innovation can take efficiency and productivity in accounting to new levels. However, innovation alone is not enough to ensure uptake. In a sector that exists to provide assurance to businesses and markets, those using the technology need to trust that it will complete tasks competently and safely. Ultimately, however much automation takes place, there will always be a human accountable for the result, and the buck stops with them.

We must go beyond simply automating repetitive tasks by investing in technology that enables accountants to deliver trusted, strategic leadership to the businesses they serve. At Sage, we bring this to life in our technology vision: continuous accounting, continuous assurance, and continuous insights.

Continuous accounting captures business activity and accounts for it in real time. But no accountant will use that data to drive decision-making unless they trust that it is accurate. So we pair this investment with continuous assurance, where we use AI and other technologies to detect anomalies and flag them for human review when critical decisions need to be made. At that point, accounting teams are freed from cycles such as closes and audits and can turn their focus to the future. This gives us our third area of innovation investment, continuous insights, where we use AI to develop an understanding of the business forecast and alert accountants to changes so that they can decide in real time what to do.
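To make the continuous assurance idea more concrete, here is a minimal, hypothetical sketch of an anomaly screen with a human-review queue. It is not a description of Sage’s implementation; the robust z-score threshold and the transaction structure are illustrative assumptions.

```python
from statistics import median

def continuous_assurance(transactions, threshold=3.5):
    """Illustrative anomaly screen: auto-accept routine entries and
    route outliers to a human reviewer, using a robust modified z-score."""
    amounts = [t["amount"] for t in transactions]
    med = median(amounts)
    # Median absolute deviation; fall back to 1.0 to avoid division by zero.
    mad = median(abs(a - med) for a in amounts) or 1.0

    auto_accepted, needs_review = [], []
    for t in transactions:
        score = 0.6745 * abs(t["amount"] - med) / mad
        (needs_review if score > threshold else auto_accepted).append(t)
    return auto_accepted, needs_review

# Example: the unusually large payment is queued for a human decision,
# while routine entries are accepted automatically.
ledger = [
    {"id": 1, "amount": 120.00},
    {"id": 2, "amount": 135.50},
    {"id": 3, "amount": 98.00},
    {"id": 4, "amount": 110.00},
    {"id": 5, "amount": 25_000.00},
]
accepted, review_queue = continuous_assurance(ledger)
print([t["id"] for t in review_queue])  # -> [5]
```

The point of the sketch is the division of labour: the routine flows through untouched, and only the exceptions reach a person, who remains accountable for the final call.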

AI has a place, and it’s not everywhere

Providers of AI technology need to approach it with humility and accountability, and forget the “move fast and break things” mantra of traditional technological innovation.

AI is capable of extraordinary things and allows for incredible leaps in what humans can achieve. But there must be an honest admission that it is not right for every scenario. For example, at Sage, our biggest area of research for our data science and AI engineering team is how to detect and prevent AI hallucinations. We also live by the mantra that we will never use AI in a way that erodes a customer’s trust in Sage or our products – that means not developing AI for scenarios where you simply cannot afford a hallucination in the first place.

Close collaboration with customers

If customers are going to feel confident using AI, they should also feel that they are in control. That means ensuring processes are in place for people to review and confirm what they receive. After all, it is human nature to want to feel in control, so erring on the side of caution and building user control into the AI experience is important for building and enhancing trust.
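As a simple illustration of that review-and-confirm control (a hypothetical sketch, not a description of any Sage product), an AI-drafted entry might be held as a suggestion until an accountant explicitly approves it:

```python
from dataclasses import dataclass, field

@dataclass
class SuggestedEntry:
    """A ledger entry drafted by AI; it stays pending until a human decides."""
    description: str
    amount: float
    status: str = "pending_review"

@dataclass
class Ledger:
    posted: list = field(default_factory=list)

    def review(self, entry: SuggestedEntry, approved: bool) -> None:
        # Nothing reaches the books without an explicit human decision.
        entry.status = "approved" if approved else "rejected"
        if approved:
            self.posted.append(entry)

# The accountant reviews the AI suggestion and confirms it before it is posted.
ledger = Ledger()
suggestion = SuggestedEntry("AI-categorised supplier invoice", 842.10)
ledger.review(suggestion, approved=True)
print(suggestion.status, len(ledger.posted))  # -> approved 1
```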

But customer involvement must go further than simply making sure they feel in control of, and trust, the AI they use. It is important to involve them in the development stage too, so that the technology reflects and represents the people it is built for. Unlike traditional tech, you cannot build AI without the customer: practically speaking, they provide the data that will train the AI, and, more holistically, their involvement ensures the AI behaves the way they would expect a human to behave.

Rather than trying to predict where AI will have the greatest impact on productivity, work closely with your customers to understand their needs and challenges. This will uncover the priority use cases and help you approach AI development with a high degree of adaptability. At Sage, we build human feedback into the experience so we can identify in real time when something needs adjusting. This has been – and continues to be – pivotal to the introduction of Sage Copilot, our new generative AI-powered business assistant. For us, it is not only about using technology to help our customers be more productive but also about enabling them to have more fulfilling jobs and lives.

That idea of working in partnership with customers should be built in early on and inform the ‘when’ of your AI strategy. Until we are satisfied that our AI solutions have met the trust and reliability needs of our early customers, we won’t bring a new product to market.

Building trust at every stage

There is no doubt there is excitement around AI in accounting. Yet, in exploring its role in the industry, it’s clear the most important piece of the puzzle is ensuring accountants trust it.

Reaching that point, where AI is integrated with confidence and enhances the accounting profession at every turn, requires careful implementation, clear communication, and collaboration from AI providers throughout the journey. It is about helping accountants unfamiliar with a complex technology understand that AI a) is here to augment, not replace, human intelligence, b) will not compromise data integrity and will do the job competently, and c) will always include a human touch as part of a controlled environment.

The knock-on effect is an integrated technology that not only improves efficiency but fosters deeper trust in the financial insights accountants provide. It’s time to embrace this opportunity, ensuring AI serves as a partner that enhances expertise and reliability so that we can enter a new era of accounting built on trust and innovation.
