Over the last 25 years, we have only just managed to get up to running speed with ‘data-analysis’ (for most, performance is still at the ‘5K in 30 minutes’ level), yet not a day goes by without coming across an article on machine learning (ML) or artificial intelligence (AI) in auditing.
I certainly do believe in the power of ML and statistics combined with data-analysis. We have just launched our own product: Discovering The Unknown. [insert link to flyer]
I find myself hopeful following recent publications on ML in auditing, and the fact that there are players on the market who are experimenting with ML, for instance using ML for ongoing fraud detection. I have great respect for such developments. But I also feel I should offer a voice of reason concerning the way forward, with AI as the most distant (pointless?) speck on the horizon.
The gist of the CTA-rapport (Dutch Audit Reform Report issued February 2020) is not: ‘deep learning will replace accountants’. Nor should we give a new generation who are considering a study of accountancy (with Coney Minds receiving approximately twenty questions per month) the impression that they are (or will become) obsolete.
All of this overhyped blather has really started to get on my wick. Full disclosure: I am slowly becoming convinced that there is precious little to be gained from using AI in auditing. Our field is simply not that complicated. What makes us think that AI technology would improve our verification methods? Make them faster? Smarter? More effective? Better? Current data-analysis technology (factual data, based on statistics, checked for compliance) already meets the demands and is more than sufficiently available on the market, even though we are (largely) ignoring it.
‘Artificial Intelligence learns by acquiring knowledge and learning how to apply it.’ AI as the supreme goal, the ultimate replacement of me as a person, my role, as auditing accountant. Myself being replaced by a neural network. I would like to recommend everyone with an interest in AI and ML to read this article. There is a world of neural networks and neurons determining your Spotify playlist. There is a large difference between AI and ML. There is an even greater difference between either of them and data-analysis.
We are seeing more and more wonderful examples of ML in the world around us. Consider your Netflix algorithm. Consider Siri on your Apple device. Or consider the impressive developments in healthcare, where neural networks are used to recognise tumours using MRI with the highest degree of accuracy. Impressive technology, potentially invaluable.
So please do not misunderstand me. I am impressed with ML. More specifically, I am particularly impressed with developments with respect to the self-driving car. Here, the technology being used is referred to by actual experts as machine vision. This is a technology that will come to play a major and crucial role in traffic, in self-driving cars. Naturally, there are various factors of great importance to consider. The system must be able to properly classify objects, people, and situations, and it must be able to do so very quickly. There are incredibly complex neural networks at the basis of this recognition process. Do we really need these neural networks in auditing to verify company transaction flows and internal control frameworks as part of the audit of the financial statements?
Keep in mind that we are still referring to ML. We are still referring to neural networks specifically designed for a certain task. The architecture and algorithms are calibrated to the task, and the neural network is trained using data – after which it can get to work on real samples.
In the case of Spotify or Netflix, the training data is comprised of the viewing and listening behaviours of all other subscribers, with the output of the neural networks being presented to you. What are the chances of a film like The Accountant being among the favourites in your viewing profile?
Spoiler alert. One of the biggest challenges in 2021 and beyond is training a neural network. The training methods that have been developed thus far require huge, correctly labelled data-sets, which make it possible to continually correct the parameters and improve the neural network. Developing machine learning applications at the office or client package level requires information in bulk.
To further give colour to the notion of ML, here is an exaggerated example from the perspective of auditing practice. Picture, if you will, my replacement. What are the minimum capabilities a neural network should have if it is to impress me enough to shelve data-analysis and statistics indefinitely? Well, this neural ML network should be able to scan data from various sources, become part of a team-planning event, complete initial risk assessments together with the team and, more importantly, use trained data-sets to detect deviations that I am currently unable to identify using data-analysis or statistics.
As far as I am concerned, relationships (Starreveld) are not something the neural network needs to be able to identify (BETA-formulae are still data-analysis). No, what I am looking for is an improvement in the sensors so as to be able to detect deviations we are currently unable to identify in vertical transaction flows (turnover, compensation, inventory, etc.).
For example: the Waffle employment agency has 100,000 staff members across ten different sectors with 25 different collective labour agreements determining payment rates. An ML solution analyses 1,300,000 compensation instances (13 periods of four weeks) and finds the deviations based on trained data-sets. This is ML, not scripting (old school data-analysis) but an analysis based on an algorithm (supervised learning), trained by an auditing team across a three-year period.
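To make the Waffle example concrete, here is a minimal sketch of what such supervised learning could look like. Everything here is invented for illustration: the synthetic compensation records, the 5% deviation rate, and the choice of a random-forest classifier standing in for whatever model an audit team would actually train on its three years of labelled findings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000  # synthetic stand-in for the 1,300,000 real compensation instances

hours = rng.normal(152, 10, n)   # hours worked per four-week period
rate = rng.normal(20, 3, n)      # hourly rate per collective labour agreement
pay = hours * rate               # booked compensation

# Labels an audit team might have produced across three years:
# 1 = deviation (pay inconsistent with hours x rate), 0 = fine.
is_deviation = rng.random(n) < 0.05
pay[is_deviation] *= rng.uniform(1.3, 2.0, is_deviation.sum())

ratio = pay / (hours * rate)     # 'sensor' feature: booked vs expected pay
X = np.column_stack([hours, rate, pay, ratio])
y = is_deviation.astype(int)

# Train on labelled history, then flag deviations in a held-out period
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"holdout accuracy: {accuracy:.3f}")
```

The point of the sketch: the model only works because three years of correctly labelled findings exist to train it on, which is exactly the bottleneck discussed above.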
So if the neural network can use automated detailed sample testing to establish the reliability of data-variables, and find, analyse, document, and discuss deviations with the client, then I would almost be sold on ML.
If all of the aforementioned were to result in the conclusions that adequate auditing information was obtained using pre-defined audit assertions, and that the financial statements were assessed against every GAAP in milliseconds, I would fire all of my staff on the spot. But to arrive at that level for an auditing practice with, say, ten typologies, I would need to train algorithms for YEARS AND YEARS, using thousands of hours of data-science. I am willing to take this step, but how about others in my field? Should this be something for all of us to take part in?
Unsupervised machine learning
To repeat: automating the aforementioned activities requires a trained ML network. One that has been trained several million times. Good luck with that. No, I feel it would be better to place our bets on what is known as unsupervised machine learning. In an unsupervised learning setting, no more than a few general rules are defined, and the task of using the data presented to identify what is or is not desirable is left to the algorithm.
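The contrast with the supervised approach can be sketched in a few lines. Here a classic unsupervised algorithm (an isolation forest, standing in for whatever the text imagines) is handed unlabelled compensation records and left to decide for itself what looks anomalous; the data and the 2% contamination setting are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
n = 2000

# Unlabelled compensation records: nobody has marked what is 'wrong'
hours = rng.normal(152, 10, n)
rate = rng.normal(20, 3, n)
pay = hours * rate
outliers = rng.random(n) < 0.02
pay[outliers] *= rng.uniform(1.5, 3.0, outliers.sum())  # inflated payments

X = np.column_stack([hours, rate, pay])

# Only one general rule is given: expect roughly 2% of records to be unusual
model = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = model.predict(X)  # -1 = anomaly, 1 = normal
print(f"{(flags == -1).sum()} records flagged for follow-up")
```

No labelled training history is needed; the trade-off is that the auditor still has to investigate each flagged record to decide whether it is a genuine deviation or merely unusual.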
A frequently heard example is that of AlphaZero, which taught itself how to play chess by playing against itself, without having been given anything beyond the basic rules. After several hours of self-play reinforcement learning, AlphaZero was able to defeat Stockfish, one of the top-ranking chess engines (its predecessor, AlphaGo Zero, did the same for the game of Go). Now, if we were to develop that kind of Google-accountant, things would certainly start looking up!
And the step from ML to AI is an even greater one. AI neural networks are able to keep learning indefinitely, enabling them to cope with new situations in an ongoing manner. This makes them even more comparable to the way in which the human brain operates. This way, recurrent neural networks could be employed to learn and to keep learning. The ultimate goal.
Another example: your auditing client has taken over a competitor. This means that (conventionally), the auditing cycle should be repeated. But not if you are using an AI neural network. Because, using the due diligence file, the AI network learns which of the company’s components are differently organised, and automatically updates the work programme. One day AI. Or not. Because: is this something we should want, does it offer added value?
Naturally, the technology will continue to develop. Naturally, the AFM is quietly wondering whether there will be any more need for a watchdog if ML/AI should become the principal force in auditing.
For now, it is mostly science fiction. But that should not be taken to mean that ML and AI are not yet part of the auditing domain. I frequently see posts involving ML and AI in auditing on LinkedIn. It always makes me smile. I sometimes ask what makes an AI application an AI application, apart from the application being called ‘XX.Ai’. Riots tend to ensue, because AI marketing simply barrels on; even a reasonably simple data-analysis script is lauded as AI, these days. I keep angling for an invitation to come take a look in the AI kitchen. These invitations are given rarely, if ever.
There is no AI in auditing. Not in the Netherlands, not in the US, not in China. There are small ML-related developments, and a limited number of ‘plug and play’ ML-based scripts being developed. In my team as well. We have been working with GO (www.globalorange.nl) over the last 12 months. We have a roadmap, but not the millions in the bank needed for large-scale development. We have launched our ML product, and we are hoping that other audit firms will join us on this journey.
Today’s real auditing challenge remains the same as it was 30 years ago – the proper use of good old data-analysis in auditing. Comparing data-analysis to ML and AI is like comparing Captain Caveman to Captain America, like comparing a souped-up tricycle to an F1 car; in technical terms, data-analysis is less than impressive, but the accounting profession has yet to fully embrace or employ this technology.
Let us keep a positive-yet-critical attitude towards developments, and mainly continue to work on improvements in financial auditing that are already possible today.