BLOG: Digital Financial Reporting
This is a blog for information relating to digital financial reporting. This blog is basically my "lab notebook" for experimenting and learning about XBRL-based digital financial reporting. This is my brainstorming platform. This is where I think out loud (i.e., publicly) about digital financial reporting. This information is for innovators and early adopters who are ushering in a new era of accounting, reporting, auditing, and analysis in a digital environment.
Much of the information contained in this blog is synthesized, summarized, condensed, and better organized and articulated in my book XBRL for Dummies and in the chapters of Intelligent XBRL-based Digital Financial Reporting. If you have any questions, feel free to contact me.
Entries from May 12, 2019 - May 18, 2019
Conclusions about XBRL-based Digital Financial Reporting
Some people prefer the "rush to detail" approach, prematurely focusing on detailed tasks before they have considered the broader goals and objectives. This approach is like starting to build a house by laying bricks, rather than drawing plans and establishing foundations. It is like taking a journey on a ship and not bringing a compass, a map, or even a rudder.
Me, I don't use that "rush to detail" type of approach. I prefer building on a solid foundation. I prefer establishing a framework, discovering principles, creating a theory, and then testing that theory to be sure everything works as expected.
Over the past 11 years I have maintained a blog related to XBRL-based digital financial reporting. That blog is like my "lab notebook". Here are all the entries into my lab notebook.
Periodically, I take the information from my lab notebook and organize it, summarize it, synthesize it, reduce the information to its essence; the goal is to put the information into more useful form. Here is that summary of all of my notes.
Others and I have done a lot of experiments. You can see those experiments throughout my lab notebook entries.
The grandest experiment is Pesseract. Today, Pesseract is a working proof of concept. Creating Pesseract helped a software engineer and me figure out what it would take to create an expert system for creating financial reports and exactly how to do it. If you are curious and motivated, you can use Pesseract to think about and understand XBRL-based digital financial reporting. I know, because that is how I learned.
You cannot understand these conclusions if you don't have the correct and complete set of background information. Trying to understand what is happening is like trying to walk around New York City with a map of Chicago. A paradigm shift is occurring. You will need a new map; old maps will not work. Here is the map that I created: Computer Empathy. That map is not perfect, but it is the best I could do. Perhaps someone else can take a stab at it and do better.
Here are the conclusions that I reached related to XBRL-based digital financial reporting, information that supports those conclusions, and resources that might help you understand my conclusions or that you can use to reach your own and perhaps different conclusions.
- XBRL-based general purpose financial reports provide value: While perhaps not all financial reports will be digital, some portion will be digital and that portion that is digital will increase over time. Here is my case for XBRL-based general purpose financial reports.
- High-quality XBRL-based reports are necessary: Financial reports are high-fidelity, high-resolution, high-quality information exchange mechanisms. Whether the format is paper, e-paper, or XBRL, the quality level must be the same. Quality is measurable. High quality is achievable.
- XBRL does work if used correctly: XBRL does work if it is used correctly. Most regulators are not using XBRL correctly; they are leaving things out. To make XBRL work internally within your organization, or if people are going to use XBRL in situations where its use is not mandated by some regulator, it MUST WORK. It can work. Understanding the ontology spectrum and how to implement the ontology spectrum using XBRL is critical information. Here is the method that I use for implementing XBRL and the framework that I use to do it.
- XBRL can provide benefits related to accounting, reporting, auditing, and analysis in a digital environment: XBRL is not just for financial reporting. Many, many processes will be impacted. While I very much doubt that all of Deloitte's vision of "The Finance Factory" will be realized any time soon, the impact will be very significant. The modern finance platform will be created. A financial transformation will occur. It is already occurring.
- Digital distributed ledgers are a useful technology: The impact of digital distributed ledgers will be significant. XBRL and digital distributed ledgers are a match made in heaven.
- The technology fallacy points out that people create change, technology enables change: Technology does not create change; people create change. Technology enables people to create change. PWC, EY, KPMG, and Deloitte are all telling their clients that the fourth industrial revolution is real. Their audit departments might not believe that yet, but they will eventually.
- Best practices are superior because they work: A best practice is a method or technique that has been generally accepted as superior to any known alternatives because it produces results that are superior to those achieved by other means or because it has become a standard way of doing things. Just because someone has a position of authority does not make their approach a best practice. Here are proven best practices for evaluating the quality of an XBRL-based financial report.
- Know-how is knowledge: One type of practical knowledge is know-how: how to accomplish something.
- Elegant and simple is better than a complicated kludge: A kludge is a term from the engineering and computer science world that refers to something that is convoluted and messy but gets the job done. Elegance is the quality of being pleasingly ingenious, simple, and neat. Elegance is about beating down complexity. Creating something complex is easy. Creating something simple and elegant is hard work. XBRL-based financial reporting will be elegant.
- Rome was not built in a day: XBRL-based financial reporting is an evolution that will unfold over time. There are four common mistakes that both business professionals and technical professionals tend to make related to XBRL-based reporting: (1) thinking "data" when they should be thinking "information"; (2) not understanding the ontology spectrum; (3) underestimating the power of classification; and (4) misunderstanding a machine's capabilities to acquire knowledge.
I say question conventional wisdom related to creating XBRL-based financial reports. When US GAAP and IFRS filings to the SEC both contain errors related to high-level accounting relations, something is obviously not right. Ask yourself: What might be wrong? How do you fix it? Conventional wisdom is not necessarily true. These are the four common errors in thinking that I encounter repeatedly.
Don't know where to start to get your head around XBRL-based digital financial reporting? Start here with this six minute video: How XBRL Works. After that, read this. And then this. And then fiddle with this. Maybe even download this.




Four Common Mistakes Related to Understanding Artificial Intelligence
There are four common mistakes that I see made over and over by both business professionals and technical professionals related to understanding and harnessing the capabilities of artificial intelligence:
- Having a "data" oriented perspective as contract to an "information" oriented perspective. (DIKW Pyramid)
- Not properly understanding the correlation between expressiveness and reasoning capacity. (Ontology Spectrum)
- Underestimating the power of "classification" and not understanding how software leverages classification. (Classification)
- Misunderstanding a machine's capabilities to acquire knowledge. (Knowledge Acquisition)
The following sections explain each of these four mistakes and information about how to overcome each mistake:
Having a "data" oriented perspective as contract to an "information" oriented perspective
Information is data in context. That context information is generally not stored in a relational database. The graphic below shows that context information: basically, additional business rules that explain the data in more detail, put the data into context, turn the data into information, and then allow the information to be understood by, or exchanged between, software systems. To understand the difference between "data" and "information", see the DIKW Pyramid. To overcome this mistake, think "information" rather than "data".
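To make the "data" versus "information" distinction concrete, here is a minimal sketch (my own illustration, not from any taxonomy; the entity name and values are hypothetical) of the same raw number as bare data versus as information with its context attached, loosely modeled on how an XBRL fact carries entity, period, and unit:

```python
# Bare data: a number with no context -- software cannot safely interpret it.
data = 1000000

# Information: the same number plus the context that gives it meaning.
fact = {
    "concept": "Assets",      # what is being reported
    "entity": "Example Co.",  # who is reporting (hypothetical name)
    "period": "2018-12-31",   # when it applies
    "unit": "USD",            # unit of measure
    "value": 1000000,
}

def describe(fact):
    """Render the fact in a form another system (or a human) can understand."""
    return (f'{fact["entity"]} reported {fact["concept"]} of '
            f'{fact["value"]:,} {fact["unit"]} as of {fact["period"]}')

print(describe(fact))
# Example Co. reported Assets of 1,000,000 USD as of 2018-12-31
```

The bare number 1000000 could be anything; only the surrounding context turns it into something a software system can correctly use or exchange.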
Not properly understanding the correlation between expressiveness and reasoning capacity
There is a direct correlation between the expressiveness provided by a taxonomy, ontology, logical theory, or some other classification method and the reasoning capabilities that can be achieved within software applications. The more expressive such a classification system is, and the more of that knowledge that is put into machine-readable form, the more powerful the reasoning capabilities of software applications which can read that machine-readable information. Further, if you have gaping holes in what is expressed in your taxonomy/ontology, and you therefore don't meet the needs of the application you are trying to create, you will experience quality problems. For more information see the ontology spectrum. Make sure you don't have an impedance mismatch between the taxonomy/ontology you create and the application you are using that taxonomy/ontology for.
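A small sketch (my own hypothetical example; the term names are invented) of how adding expressiveness increases reasoning capacity: a bare term list can only answer "is this a known term?", while the same terms plus explicit subclass relations let software infer class membership:

```python
# Low expressiveness: a bare controlled vocabulary (a term list).
terms = {"Assets", "CurrentAssets", "Cash"}

# Higher expressiveness: the same terms plus explicit subclass relations.
subclass_of = {"CurrentAssets": "Assets", "Cash": "CurrentAssets"}

def is_a(term, ancestor):
    """With relations available, software can infer class membership
    by walking the subclass chain."""
    while term in subclass_of:
        term = subclass_of[term]
        if term == ancestor:
            return True
    return False

# The term list alone only answers "is this a known term?"
print("Cash" in terms)         # True
# The relations answer questions the term list cannot.
print(is_a("Cash", "Assets"))  # True
```

The same principle scales up the ontology spectrum: each added kind of machine-readable relation enables a new class of questions the software can answer reliably.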
Underestimating the power of "classification" and not understanding how software leverages classification
Classification provides three things: First, you can "describe" the model of something. Second, you can use that description of the model to "verify" an instance of the model of something against that description. To the extent that you have machine-readable rules, that verification process can be automated. Third, you "explain" or spell out or tell a software application (software algorithm, AI) knowledge about the state of where you are in your agenda of tasks necessary to meet some goal. To the extent that you have machine-readable rules, software can assist human users of the software in completing the tasks in their agenda and achieving that goal. For more information see this blog post on the power of classification. Recognize that formal is better than informal and more is better than less.
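The "describe" and "verify" steps above can be sketched in a few lines (a hypothetical illustration of mine; the rule and the report values are invented, not taken from any real taxonomy): part of the model is described as a machine-readable rule, and an instance is then automatically verified against that description:

```python
# Describe: express part of the model as a machine-readable rule.
# Here, the high-level accounting relation Assets = Liabilities + Equity.
rules = [
    ("Assets", lambda r: r["Liabilities"] + r["Equity"]),
]

def verify(report):
    """Verify an instance of the model against its description;
    return the concepts that violate a rule."""
    errors = []
    for concept, expected in rules:
        if report[concept] != expected(report):
            errors.append(concept)
    return errors

good = {"Assets": 100, "Liabilities": 60, "Equity": 40}
bad  = {"Assets": 100, "Liabilities": 60, "Equity": 30}

print(verify(good))  # [] -- the instance is consistent with the description
print(verify(bad))   # ['Assets'] -- automated verification flags the problem
```

The third capability, "explain", follows from the same machinery: because the software knows which rules pass and which fail, it can tell a user exactly where they are in their agenda of tasks and what remains to be fixed.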
Misunderstanding a machine’s capabilities to acquire knowledge
The utility of a "thick layer of metadata" (i.e. classifications) is not disputed. What is sometimes disputed is how to best acquire that thick layer of metadata. Basically, there are three approaches:
- Have a computer figure out what the metadata is: (machine-learning, patterns based approach) This approach uses artificial intelligence, machine learning, and other high-tech approaches to detecting patterns and figuring out the metadata. However, this approach is prone to error.
- Tell the computer what the metadata is: (logic and rules based approach) This approach leverages business domain experts and knowledge engineers to piece together the metadata so that the metadata becomes available. However, this approach can be time consuming and therefore expensive.
- Some combination of #1 and #2: Striking the correct balance and creating a hybrid approach where humans and computers work together to create and curate metadata.
Note that machine learning is prone to error. Also, machine learning requires training data. Machine learning works best where there is a high tolerance for error. Machine learning works best for: capturing associations or discovering regularities within a set of patterns; where the volume, number of variables or diversity of the data is very great; where the relationships between variables are vaguely understood; or, where the relationships are difficult to describe adequately with conventional approaches.
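Here is a crude sketch of the hybrid approach (#3 above); everything in it is invented for illustration: a simple pattern heuristic stands in for the machine-learning side and proposes metadata, and a small human-curated rule set stands in for the logic side and confirms or rejects the proposal:

```python
def suggest_label(column_values):
    """Machine side: guess a label from patterns in the data.
    Like real machine learning, this heuristic is prone to error."""
    if all(isinstance(v, (int, float)) for v in column_values):
        return "MonetaryAmount"
    return "Text"

# Human side: a curated, machine-readable rule set of allowed labels.
CURATED_LABELS = {"MonetaryAmount", "Text", "Date"}

def acquire_metadata(column_values):
    """Hybrid: accept the machine's suggestion only if the rules allow it,
    so humans and computers work together to create and curate metadata."""
    suggestion = suggest_label(column_values)
    return suggestion if suggestion in CURATED_LABELS else None

print(acquire_metadata([100, 250.5, 300]))  # MonetaryAmount
```

The machine does the high-volume pattern detection; the curated rules keep its errors from leaking into the metadata, which is the balance the hybrid approach is trying to strike.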
This PWC article is an excellent tool and helps you understand how to think about artificial intelligence. This article helps you better understand machine learning. For more general information, please see Computer Empathy.




Understanding the Power of Classification
It was the Greek philosopher Aristotle (384-322 B.C.) who first came up with the idea of classifying plants and animals by type, essentially creating the notion of a hierarchy or taxonomy. The idea was to group types of plants and animals according to their similarities, thus forming something that looked like a "tree", with which most people are familiar.
People tend to be less familiar with the notion of a "graph". A tree, or hierarchy, is actually a type of graph. Trees/hierarchies tend to be easier to get your head around. But the real world can be more complicated than the rather simple relations that can be represented by trees/hierarchies. Here is a simple example:
When you use some formal, explainable nomenclature (a system of naming) to define things, then you can uniquely identify and refer to those things. If you go a step further and classify those things by formally defining relations between them, you can do even more. Some types of formal relations include: is-a (class/subclass relations), has-a, part-of, disjunction, and transitive relations.
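A small sketch of the tree-versus-graph point and of transitive relations (my own hypothetical example; the term names are invented): in a tree every node has exactly one parent, but a real classification may need a graph, where a thing can sit under more than one parent, and software can still walk the is-a relations transitively:

```python
# In a graph, "CashEquivalents" can be classified under more than one
# parent -- something a tree cannot represent.
parents = {
    "CurrentAssets": ["Assets"],
    "CashEquivalents": ["CurrentAssets", "LiquidInstruments"],
}

def ancestors(term):
    """Walk all is-a relations transitively to collect every ancestor."""
    found = set()
    stack = [term]
    while stack:
        for parent in parents.get(stack.pop(), []):
            if parent not in found:
                found.add(parent)
                stack.append(parent)
    return found

print(sorted(ancestors("CashEquivalents")))
# ['Assets', 'CurrentAssets', 'LiquidInstruments']
```

Because the relations are machine-readable, the software infers that cash equivalents are assets without anyone stating that fact directly; that inference is the "magic" referred to below.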
And so classification helps you work with the "things" and the "relations between things". Put this information into machine-readable form and what looks like magic can occur.
Classification provides three things: First, you can "describe" the model of something. Second, you can use that description of the model to "verify" an instance of the model of something against that provided description. To the extent that you have machine-readable rules, that verification process can be automated. Third, you "explain" or spell out or tell a software application (software algorithm, AI) knowledge about the state of where you are in your agenda of tasks necessary to meet some goal. To the extent that you have machine-readable rules, software can assist human users of the software in completing the tasks in their agenda and achieving that goal. That is what is meant by "AI is taxonomies and ontologies coming to life."
So how do you get all these terms and the classifications that express the relations between those terms? As explained in this article by PWC, you create them. That is what they mean by "you have to label the data". They did not explain that particularly well, but that is the reality. While it is true that eventually machines will be able to do some of this classification, machines first need to be trained. Machines need training data. There are different types of machine learning.
To get the best results the definition of terms and classification of those terms should be formal rather than informal. Formal puts you higher on the ontology spectrum. Done correctly, you can create powerful software tools that can reliably leverage the machine-readable terms and classifications.
And that, in a nutshell, is the power of classification. It also helps you understand why poor, informal, or otherwise unusable classifications inhibit functionality.




Complex Systems
As was pointed out in Object-Oriented Analysis and Design with Applications (page 28), the role of a software development team is to engineer the illusion of simplicity.
As stated starting on page 12 here, and similarly here, by Grady Booch, there are five common characteristics to all complex systems:
- There is some hierarchy to the system.
- The primitive components of a system depend on your point of view.
- Components are more tightly coupled internally than they are externally.
- There are common patterns of simple components which give rise to complex behaviors.
- Complex systems which work evolved from simple systems which worked.
The process of defining requirements usually results in specifications that are both incomplete and incorrect. A 1999 study of requirements specifications found that they are typically only 7% complete and 15% correct.



