BLOG:  Digital Financial Reporting

This is a blog for information relating to digital financial reporting.  This blog is basically my "lab notebook" for experimenting and learning about XBRL-based digital financial reporting.  It is my brainstorming platform, where I think out loud (i.e. publicly) about digital financial reporting. This information is for innovators and early adopters who are ushering in a new era of accounting, reporting, auditing, and analysis in a digital environment.

Much of the information contained in this blog is synthesized, summarized, condensed, and better organized and articulated in my book XBRL for Dummies and in the chapters of Intelligent XBRL-based Digital Financial Reporting. If you have any questions, feel free to contact me.

Accounting Bots are Coming!

The accounting bots are coming!  The following companies have been pointed out to me over the last couple of days:

  • Pilot: "Pilot takes care of your bookkeeping from start to finish so you can focus 100% on making your business succeed." (These guys have raised $60 million in funding.)
  • Botkeeper: "Better than humans, better than machines. Automated bookkeeping with a human touch."
  • FirmAI: "FirmAI is a centralised repository of current and experimental business intelligence tools (BITs). Any tool that advances business automation is simply called a BIT; this includes among others, machine learning, econometric, statistical and decision optimising tools."

So what are these accounting bots really capable of?  This article, Will Bots Eventually Take The Jobs Of Accountants & Bookkeepers?, helps you understand why you should care and what you should do.  This article by Deloitte, The Robots are Coming, is where that first article gets some of its graphics.

How do these bots relate to XBRL?  Well, first of all, I suspect that neither Pilot nor Botkeeper (a) is aware of XBRL or (b) leverages anything that XBRL offers.  That is what I suspect.  I will find out if I am right or wrong.

What does XBRL offer that might give leverage to people creating bots?  Well, first the accounting bots need to be putting the right information into the ledgers.  As I point out in the document, General Ledger Trial Balance to External Financial Report, ultimately that information ends up in a financial report.

Also, Andrew Nobel and I acquaint you with the notion of a "fact ledger" in the document Introducing the Fact Ledger.  We point out that if you put a bit of additional information in a general ledger, you can automate the creation of the primary financial statements from the general ledger.
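To make the fact ledger idea concrete, here is a minimal, hypothetical sketch in Python.  The concept names and ledger structure are my own illustrative assumptions, not the actual format from the Introducing the Fact Ledger document: the "bit of additional information" is a reporting concept and period attached to each ledger line, which lets report-level facts be assembled mechanically.

```python
from collections import defaultdict

# Hypothetical fact-ledger sketch: each general ledger line carries not
# just an account, but also the reporting concept and period it rolls up
# to.  That extra metadata is what enables automated report creation.
ledger = [
    {"account": "1000 Cash",        "concept": "CashAndCashEquivalents", "period": "2019-12-31", "amount": 1500},
    {"account": "1200 Receivables", "concept": "ReceivablesNet",         "period": "2019-12-31", "amount": 700},
    {"account": "2000 Payables",    "concept": "AccountsPayable",        "period": "2019-12-31", "amount": -400},
]

def facts_from_ledger(entries):
    """Aggregate ledger lines into report-level facts keyed by (concept, period)."""
    facts = defaultdict(float)
    for e in entries:
        facts[(e["concept"], e["period"])] += e["amount"]
    return dict(facts)

facts = facts_from_ledger(ledger)
print(facts[("CashAndCashEquivalents", "2019-12-31")])  # 1500.0
```

The point of the sketch is that once the mapping from account to reporting concept exists in the ledger itself, the aggregation step requires no human judgment.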

As I point out in Accounting Process Automation Using XBRL, a financial report contains "branches" and "leaves".  The GL accounts are the leaves, an XBRL taxonomy can provide the branches.  Together the leaves and branches are used to create a report.
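The branches-and-leaves idea above can be sketched as a small roll-up.  This is an illustrative simplification, not actual XBRL taxonomy syntax: the taxonomy supplies the tree structure (branches) and the GL accounts supply the amounts (leaves), and walking the tree produces the report totals.

```python
# Hypothetical sketch: an XBRL taxonomy provides the "branches" (the
# roll-up structure) and the GL accounts provide the "leaves" (the
# amounts).  Walking the tree creates the report.
taxonomy = {  # branch concept -> child concepts (branches or leaves)
    "Assets": ["CurrentAssets", "NoncurrentAssets"],
    "CurrentAssets": ["Cash", "Receivables"],
    "NoncurrentAssets": ["PropertyPlantAndEquipment"],
}
leaves = {  # leaf concept -> amount aggregated from GL accounts
    "Cash": 1500,
    "Receivables": 700,
    "PropertyPlantAndEquipment": 3000,
}

def total(concept):
    """Recursively sum a branch of the report tree from its leaves."""
    if concept in leaves:
        return leaves[concept]
    return sum(total(child) for child in taxonomy[concept])

print(total("Assets"))  # 5200
```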

Accounting, reporting, auditing, and analysis are not separate things.  They are one thing and should not be looked at as individual silos.  True automation is automating the entire process.  The metadata crosses over between those separate tasks.  If the right information is entered at the beginning and then flows throughout the process, true automation can be achieved.

That will help us arrive at DFIN's vision, Deloitte's vision, Blackline's vision, Auditchain's vision, the CFA Institute's vision, the SEC's vision, etc.  All those visions have one thing in common: the general purpose financial report.

What is important to tie all of this together?  Well, the first step is to recognize that these are not separate silos.  Second, you need to understand how computers work in order to understand their true capabilities and how to harness those capabilities to perform work.

Posted on Monday, April 29, 2019 at 07:20AM by Charlie

Ontology Spectrum

I updated my ontology spectrum graphic that I explained in another blog post. Here is the updated version:

 (Click image for larger view)

One thing worth noting is that a big shift occurs when relations are defined formally as opposed to informally (the red zigzag in the diagram).

(Disjunction | Transitive | Is-a | Has-a | Part-of | Part relations | Logic gates | Cardinality | Class | Subclass)

As I said in the previous blog post, the graphic above was inspired by this graphic (see slide 4) and some of these other graphics.

In addition to updating the graphic, I mapped that ontology spectrum to implementations in two technology stacks: XBRL and the Semantic Web stack.  Recall this blog post where I point out the need for a unifying logic and compare/contrast the XBRL and Semantic Web stacks.

Here are the mappings from the ontology spectrum to each technology stack:

As I understand it, it is the intent of the Standard Business Report Model (SBRM) to support either technology stack.  This is my view as to what the SBRM will end up looking like. I could be wrong, and it is not complete.

What people tend to misunderstand is that the application you are trying to create picks where you need to be on the ontology spectrum.  You don't get to pick.  The level of quality, precision, accuracy, breadth, and depth you require leads to some required level of expressiveness that you need to make your application work effectively.  Impedance mismatches between what you need and what you provide cause problems.


  • Class relations defined using XBRL
  • Properties defined using XBRL
  • AI is taxonomies and ontologies coming to life (NOT like humans learn)

Posted on Saturday, April 27, 2019 at 11:14AM by Charlie

Deriving Information Using XBRL Formula Chaining (Example)

In prior blog posts I have pointed out that XBRL Formula has some specific deficiencies. One of those deficiencies involves chaining.  XBRL Formula processors cannot do forward or backward chaining.

Well, there is a way you can achieve a similar result using XBRL Formula.  What you can do with an XBRL Formula processor is "chain" a process together.  You have to put everything in the right order because, as I have pointed out, XBRL Formula processors cannot do forward or backward chaining.
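The manual-ordering idea can be sketched as follows.  This is not XBRL Formula syntax; it is a hypothetical Python illustration with made-up concept names, showing why order matters: each rule derives a missing fact from facts that must already exist, so a human has to sequence the rules so that each rule's inputs are produced before the rule runs.

```python
# Hypothetical sketch of "chaining" by hand-ordering rules, since an
# XBRL Formula processor executes rules sequentially rather than by
# forward or backward chaining.
facts = {"Revenues": 1000, "CostOfRevenue": 600, "OperatingExpenses": 250}

rules = [  # (derived concept, input concepts, derivation) -- ordered by hand
    ("GrossProfit", ("Revenues", "CostOfRevenue"), lambda r, c: r - c),
    ("OperatingIncomeLoss", ("GrossProfit", "OperatingExpenses"), lambda g, o: g - o),
]

for concept, inputs, derive in rules:
    # Each rule fires only if the fact is missing and its inputs exist;
    # the second rule depends on the first having already run.
    if concept not in facts and all(i in facts for i in inputs):
        facts[concept] = derive(*(facts[i] for i in inputs))

print(facts["OperatingIncomeLoss"])  # 150
```

If the two rules were listed in the opposite order, OperatingIncomeLoss could not be derived in a single pass, which is exactly the limitation a forward-chaining processor would remove.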

The objective is to duplicate these fundamental accounting concept relations continuity cross checks. All of these Excel extraction tools similarly use sequential processes as opposed to forward or backward chaining.

Here is how you do that validation using XBRL Formula:

And there you have it! While it is true that a human has to string the rules together in the correct order, and that this process is not that efficient because you have to reload XBRL instances multiple times, the process does work and so it is effective.

What this example also clearly demonstrates is that there is a use case for deriving facts that were not reported so that consistency checks can be executed.  If information about unreported facts cannot be determined, the XBRL instance is fundamentally unusable by automated processes.  At the same time, this shows that the information can be used by automated processes, since you can chain the process together effectively.

I have run this, or had it run, through the UBmatrix and Fujitsu XBRL Formula processors.  I am going to try to get it run through two more processors to confirm that it works consistently.


DFIN: A New Approach to Data Quality

People are starting to value quality. In an excellent article, A New Approach to Data Quality: Preparing for a RegTech, SupTech and AI Future, DFIN Solutions (Donnelley) makes two important points.

The first point is this: (emphasis is mine)

Global reporting and compliance have remained document-bound for too long. With recent advances in artificial intelligence (AI) and other forms of machine learning, however, companies are poised to meet their regulatory and compliance requirements in completely new ways.

The article points out that humans are no longer the primary consumers of information.  For example, the SEC says that 85% of the visits to documents in its EDGAR filing system are made by machines.

The second point is to make quality paramount. The article goes on to give numerous specific ways to improve the quality of reported information.  Why is the need for quality so important?  Because machines are dumb; if the information is not of high quality, the information those machines are using is basically garbage and not usable.

Posted on Tuesday, April 23, 2019 at 10:14AM by Charlie

PWC: 2019 AI Predictions: Six AI priorities you can’t afford to ignore

In an excellent article, 2019 AI Predictions: Six AI priorities you can’t afford to ignore, PricewaterhouseCoopers provides good, practical advice about how artificial intelligence should be approached by business professionals.

Posted on Tuesday, April 23, 2019 at 10:08AM by Charlie