BLOG: Digital Financial Reporting
This is a blog for information relating to digital financial reporting. It is for innovators and early adopters who are ushering in a new era of digital financial reporting.
Much of the information contained in this blog is summarized, condensed, better organized and articulated in my book XBRL for Dummies and in the three documents on this digital financial reporting page.
US GAAP financial disclosures are articulated in human readable form via the Accounting Standards Codification (ASC), which is published by the FASB and organized into topics.
Each disclosure fits into one of those topics. There could be an exception; but if there is, all one needs to do is create a new topic and add the disclosure to it.
Here is a human readable list of disclosures, organized by topic, for commercial and industrial companies which report to the SEC (i.e. I did not include disclosures which relate to specific industries such as depository and lending institutions, brokers and dealers in securities, insurance companies, etc.). This is a machine readable version of the same disclosure information.
Subset of disclosures for testing
US GAAP XBRL Taxonomy (Commercial and industrial companies)
The US GAAP XBRL Taxonomy articulates information about those disclosures. This is a human readable version of a subset (commercial and industrial companies) of the US GAAP XBRL Taxonomy. This is a list of all the pieces of that taxonomy. This is ONE of the pieces expressed in machine readable form.
A disclosure prototype is a standard template of a disclosure. The US GAAP XBRL Taxonomy does not express disclosures in the form in which they should be reported; it is more of a "pick list of stuff". You have to figure out how to organize that stuff.
This is a list of prototypes for each of the subset of disclosures I am testing, expressed in RSS (both machine readable and readable by humans). This is a list of the same prototypes expressed in machine readable RDF. This is an example of ONE prototype. Prototypes exist for each of the 221 disclosures.
A disclosure exemplar is an example of a disclosure from some SEC XBRL financial filing which provided that disclosure. This is a list of sets of exemplars for the 221 disclosures I am testing provided in human readable and machine readable RSS format. Here is that same information provided in machine readable RDF.
This is an example of one set of exemplars for one disclosure provided in machine readable RDF. Sets of exemplars exist for each of the 221 disclosures in the subset I am testing.
Every SEC XBRL financial filing is an exemplar
Every SEC XBRL financial filing is an exemplar. For example, consider this list of the components (pieces) of a financial filing by Microsoft provided by SECXBRL.info. This is the model structure and the fact table of the document and entity information (the first component of the filing).
However, you need to figure out if the piece of the SEC XBRL financial filing is a GOOD example or a BAD example. All the exemplars which I selected are good examples. Business rules prove that the examples are good examples.
Business rules are what help external financial reporting managers create correct SEC XBRL financial filings. While this is not all the business rules needed for each disclosure, this is a human and machine readable RSS list of the business rules necessary to make sure the subset of disclosures I am testing are correct. This is ONE such set of business rules expressed using XBRL definition relations. These business rules exist for each of the disclosures in the subset I am testing.
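To make the idea concrete, here is a minimal sketch of how a machine can evaluate a business rule against reported facts. The fact names, values, and the roll-up rule itself are invented for illustration; actual rules are expressed as XBRL definition relations, not Python.

```python
# Hypothetical sketch: a business rule verifying that a reported total
# equals the sum of its reported parts (a roll-up). An exemplar that
# passes such rules is a GOOD example; one that fails is a BAD example.

def check_roll_up(facts, total, parts):
    """Return True if the reported total equals the sum of its parts."""
    expected = sum(facts[p] for p in parts)
    return facts[total] == expected

# Invented facts for illustration only.
facts = {
    "CashAndCashEquivalents": 500,
    "Receivables": 300,
    "Inventory": 200,
    "CurrentAssets": 1000,
}

ok = check_roll_up(facts, "CurrentAssets",
                   ["CashAndCashEquivalents", "Receivables", "Inventory"])
print(ok)  # True
```

The point is that the rule lives in metadata the machine can evaluate, not in a human's head.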
Testing of representation model and business rules
The exemplars were manually selected by examining countless SEC XBRL financial filings. Why do that? It proves both the representation model and the business rules.
Rendering of information
A rendering of the information contained in a financial report is nothing more than a machine taking the model structure, the fact table, an understanding of the different concept arrangement patterns and member arrangement patterns, and an understanding of common approaches to rendering information, and the machine generating the rendering. Some renderings are static such as the SEC XBRL viewer. Other renderings are dynamic, such as the XBRL Cloud Viewer.
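A toy sketch of that idea, assuming an invented model structure, fact table, and period list (none of these names come from any actual viewer's API): the machine walks the model structure, looks facts up in the fact table, and emits rows.

```python
# Minimal rendering sketch: given a model structure (an ordered list of
# concepts) and a fact table keyed by (concept, period), generate a
# simple tab-separated rendering. All names and numbers are invented.

model_structure = ["Revenues", "CostOfRevenue", "GrossProfit"]
periods = ["2013", "2012"]
fact_table = {
    ("Revenues", "2013"): 100, ("Revenues", "2012"): 90,
    ("CostOfRevenue", "2013"): 40, ("CostOfRevenue", "2012"): 35,
    ("GrossProfit", "2013"): 60, ("GrossProfit", "2012"): 55,
}

def render(structure, facts, periods):
    """Walk the model structure and emit one row per concept."""
    rows = ["Concept\t" + "\t".join(periods)]
    for concept in structure:
        cells = [str(facts.get((concept, p), "")) for p in periods]
        rows.append(concept + "\t" + "\t".join(cells))
    return "\n".join(rows)

print(render(model_structure, fact_table, periods))
```

A real renderer also consults the concept and member arrangement patterns to pick the right layout; this sketch shows only the mechanical core.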
Financial Report Ontology (FRO)
The financial report ontology (FRO) contains more metadata and opportunities to create even more metadata by expressing various relations. Sometimes you have to express the relations, other times the relations are already expressed in the ontology. For example, the fundamental accounting concepts metadata is expressed in the financial report ontology. Here are the concepts (human readable | machine readable), here are the mappings to the US GAAP XBRL Taxonomy concepts (human readable | machine readable), here are the impute rules (human and machine readable), here are the relations between the fundamental accounting concepts (human readable | machine readable).
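As a concrete illustration of an impute rule, here is a hypothetical sketch: if a fundamental accounting concept was not reported, derive it from related concepts using the relations expressed in the ontology. The concept names follow the accounting equation (Assets = Liabilities + Equity); the real rules live in the financial report ontology, not in Python.

```python
# Hypothetical impute-rule sketch: fill in an unreported fundamental
# accounting concept from the concepts that were reported.

def impute(facts):
    """Return a copy of facts with missing concepts derived where possible."""
    f = dict(facts)
    if "Equity" not in f and {"Assets", "Liabilities"} <= f.keys():
        f["Equity"] = f["Assets"] - f["Liabilities"]      # Equity = Assets - Liabilities
    if "Assets" not in f and {"Liabilities", "Equity"} <= f.keys():
        f["Assets"] = f["Liabilities"] + f["Equity"]      # Assets = Liabilities + Equity
    return f

reported = {"Assets": 1000, "Liabilities": 600}  # Equity not reported
print(impute(reported)["Equity"])  # 400
```

Because the relations are metadata, adding a new impute rule means adding a relation, not writing new software.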
Possibilities if machines could read AND understand this metadata
What if machines could read all of this information? Well, machines can read it. Each set of metadata is machine readable. So reading the information is not the question. Machines understanding the information is what is necessary. THAT is what the financial report disclosure engine is all about.
Ask your software vendor if the software that they provide to you understands this information. Most will say that their applications understand some of it. But the truth is, the software really does not understand the metadata that I have provided here. What some software vendors do is hard code some rules into their software applications. But that means two things: (a) software developers were needed to express the information, and (b) software developers are needed to express MORE information.
Business users need to be in control of financial disclosure metadata
Business users need to be in control of financial disclosure metadata. Does that mean they create and edit that metadata using the XML files that I have provided? Certainly not. I have tools for creating that metadata. The tools are relatively easy to use. Unfortunately, the tools are not of commercial quality yet.
Business users use tools to create metadata. Business rules engines, financial disclosure engines, and other "engines" or "processors" work with that metadata to verify the business rules. Can all rules be evaluated this way? I don't know, but I would speculate not. But there are many, many that can.
All of this needs to be unambiguous. Computers cannot deal with ambiguity at all. Sometimes it seems like they can, but that is not what is really going on. What is going on is that someone either made some decision as to how to interpret the situation or someone made some list of things the computer can understand to sort through the ambiguity.
Standards are good
Standards are more efficient and effective ways of working with this metadata. I am trying to get as much as possible into the XBRL format. Not sure how successful I will be. If I am not successful, that means that XBRL is lacking something. No problem, it can be added to XBRL later if enough people feel they need that functionality.
One approach to understanding something that I use is to try and create a spectrum of all possible options. In applying this to a situation that I have been involved with for 15 years, I think I realized something. While people use the term "standard" and think they understand the term, it seems as though a lot of people are misinterpreting the term. Let me explain.
First, we need to be sure we are on the same page when it comes to certain specific terms. These are the terms I am using, with definitions provided by Google and the Merriam-Webster dictionary. The terms are broken into sets which go together (bear with me here):
Term set 1:
- Standard: used or accepted as normal or average (Google); something established by authority, custom, or general consent as a model or example (MW)
- Arbitrary: based on random choice or personal whim, rather than any reason or system (Google); depending on individual discretion (as of a judge) and not fixed by law (MW)
Term set 2:
- Nuance: a subtle difference in or shade of meaning, expression, or sound (Google); a subtle distinction or variation (MW)
- Subtle: so delicate or precise as to be difficult to analyze or describe (Google); hard to notice or see : not obvious (MW)
- Negligible: so small or unimportant as to be not worth considering; insignificant (Google); so small or unimportant or of so little consequence as to warrant little or no attention (MW)
Term set 3:
- Objective: not influenced by personal feelings or opinions in considering and representing facts (Google); based on facts rather than feelings or opinions : not influenced by feelings (MW)
- Subjective: based on or influenced by personal feelings, tastes, or opinions (Google); based on feelings or opinions rather than facts; relating to the way a person experiences things in his or her own mind (MW)
- Judgment: the ability to make considered decisions or come to sensible conclusions (Google); an opinion or decision that is based on careful thought (MW)
Term set 4:
- Policy: a course or principle of action adopted or proposed by a government, party, business, or individual (Google); definite course or method of action selected from among alternatives and in light of given conditions to guide and determine present and future decisions (MW)
- Requirement: a thing that is needed or wanted (Google); something that is needed or that must be done (MW)
- Choice: an act of selecting or making a decision when faced with two or more possibilities (Google); the act of choosing : the act of picking or deciding between two or more possibilities (MW)
- Option: a thing that is or may be chosen (Google); the opportunity or ability to choose something or to choose between two or more things (MW)
So this is my argument, given the terms as used above:
The purpose of a standard is to create some accepted "norm". As I explained in a prior blog post, there are certain advantages to creating and agreeing on standards.
The purpose is to create a shared reality: to arrive at a shared, common enough view of "true and fair representation of financial information" for most of our working purposes, so that reality does appear to be objective and stable, and so that you can query financial information reliably, predictably, repeatedly, safely.
By way of contrast, a standard is different from something arbitrary. To arrive at a standard, personal preference, individual discretion, convenience, randomness, and the like are given up to arrive at some greater good.
Now, what is given up? Things that are negligible are given up. Things that are subjective are given up. It should NOT BE THE CASE that important nuances, subtleties, or things that are objective be given up. If the nuances, subtleties and other things which are in fact objective are not considered, the system becomes too simplistic and such a poor representation of reality that the system is not useful.
How can you tell the difference between something that is negligible and something that is an important nuance, subtlety, or something else which is objective? That is the nature of the agreement by which one arrives at some standard. To make this distinction between what is important and what is not important requires two things: professional judgment and an understanding of the pros and cons of the decision.
In another blog post I elaborated on why this process can be difficult. Getting computers to do things is not that tough for IT professionals. Look at all the things computers do for us today. Consider accounting systems. Sure beats keeping the books using paper journals and ledgers. Accountants provided a lot of input over the years to get the accounting systems to work as they need the accounting systems to work.
Now we are taking things up a notch. Now IT professionals can help business professionals create financial statements in more efficient and effective ways. The question is not whether this is possible or whether it will be done. It will be done.
The question is: will that software be more standard and therefore do more for accountants and cost less, or will it be less standard and cost more? Will the software be more standard and therefore less ambiguous and therefore easier to use, or will it be less standard, have more options, and therefore be harder to use?
Do financial reports have certain requirements? Of course. Do accountants have choices? Sure they do. US GAAP has options. Once certain choices are made by a company, a policy is set as to how things are to be done.
So whether disclosures will be digitized is not what should be at the forefront of a professional accountant's mind. What should be on their mind is whether digitization will be more expensive or less expensive. The more standard things are, the less they will cost and the easier they will be to use. The more arbitrary they are, the more they will cost and the harder they will be to use. Another term for arbitrary is proprietary. Software vendors love proprietary. Proprietary creates software lock-in, which is good for software vendors but bad for software buyers.
None of this has to do with dumbing down financial reports or taking away important areas of flexibility which enable reporting entities to communicate nuances, important subtle distinctions, or other objective considerations. In fact, dumbing down financial reporting needs to be avoided at all costs.
But to the extent negligible and therefore unimportant individual personal preferences can be eliminated, financial reporting can be made less costly, more timely, and higher quality by leveraging machines to automate certain specific tasks. Not all tasks. Just the tasks which make sense for computers to perform.
In my view, this is an appropriate high-level architecture to build the underlying "engines" or "processors" which make digital business reporting and digital financial reporting work as business users need it to work:
So let me explain why I think this and what the pieces are. First, if you follow Grady Booch's five common characteristics to all complex systems, you break the complex pieces into smaller less complex pieces. Second, you expose business users to things business users understand.
Working from bottom to top, this explains the pieces: (Note that you can load the code examples into Microsoft Visual Studio and then use the Object Browser to view the class model. Here is what you can see because I am documenting the code using XML comments.)
- XBRL Processor: Most software vendors who support XBRL understand what this is and are very comfortable with this level. This is the "techie level". You read the XBRL Technical Specification, you test against the XBRL Conformance Suite, and you are off to the races.
- Business Report Processor/Engine: The next level is a generalized version of a business report. This is defined by the XBRL Abstract Model 2.0. I would go one step further and define a more constrained application profile or even better a handful of application profiles. Here is one example general business reporting application profile which is based on the US GAAP Taxonomy Architecture. Basically, this profile does not allow tuples and the other stuff the US GAAP Taxonomy Architecture prohibits, but is more restrictive and safer to use because it forces better consistency. A proof by the Financial Report Semantics and Dynamics Theory shows that 99.9% of SEC XBRL financial filings follow these semantics and fit into this representation.
- Financial Report Disclosure Processor/Engine: Building on top of the business report processor is another processor which is unique to financial reporting. There may be a desire to split this out even further to US GAAP, IFRS, and maybe even SEC specific financial reporting. That is to be determined and the answer will unfold as the classes are built out.
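The layering above can be sketched as code. This is only an illustration of the architecture, assuming hypothetical class names and trivially simplified methods; it is not the API of any actual product.

```python
# Architecture sketch: each engine builds on the one below it and exposes
# ideas at its own level. Class names and methods are invented.

class XbrlProcessor:
    """The "techie level": reads raw XBRL per the technical specification."""
    def load(self, path):
        return {"source": path, "components": []}  # placeholder for parsing

class BusinessReportProcessor(XbrlProcessor):
    """Generalized business report level (application profile): exposes
    components, model structures, and fact tables, not raw XBRL syntax."""
    def components(self, report):
        return report.get("components", [])

class FinancialReportDisclosureProcessor(BusinessReportProcessor):
    """Financial reporting level: understands disclosures and the rules
    relating them (e.g., a Level 3 text block must match its Level 4 detail)."""
    def validate_disclosures(self, report):
        return all(c.get("consistent", True) for c in self.components(report))

engine = FinancialReportDisclosureProcessor()
report = {"components": [{"name": "BasisOfPresentation", "consistent": True}]}
print(engine.validate_disclosures(report))  # True
```

The design point is that a business user interacts only with the top layer; the raw XBRL syntax stays buried at the bottom.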
There are WAY, WAY more pieces than what I am showing in the graphic above: import/export, rendering, validation, workflow management, content management, query. But I left those pieces out to better focus on the differentiation of these specific pieces.
Most software does not break these out correctly. That causes reduced software flexibility and harder to use software. It also causes error prone SEC XBRL financial filings. What if software did not allow you to create mismatched Level 3 text blocks and Level 4 detailed disclosures such as the ones documented on this page? Or better yet, what if the financial report disclosure processor had an intimate understanding of these digital financial report principles so you could never make these sorts of mistakes? That is the entire point of having the disclosure processor/engine.
Can anyone point me to a better architecture that actually works correctly? I would recommend going back to the blog post Data and Reality before you tell me that this cannot work.
Something else that provides clues as to what a financial report disclosure processor does is the metadata the processor leverages.
I wish I knew 15 years ago what I know now. When I learned some things about the IT world, I made two mistakes. First, I thought these things had been around for quite some time. Object Oriented Programming is actually relatively new. Second, not everyone in the IT world understands this stuff. There are people who are good at object oriented programming, and there are those who are not.
This document, Object Oriented Programming by Carl Erickson, provided some enlightening information about the creation of software.
On page 40 of the PDF:
A study in 1999 by Elemer Magaziner of requirement specifications found that they are typically only 7% complete and 15% correct.
Except for very constrained domains, re-implementations of existing functionality, or trivial applications, the process of defining requirements is almost guaranteed to result in: …
That is pretty low. Even if the statistic were three or even four times what it is, it would still be low. Most of the stuff that I work on has never existed before. It is no wonder that it tends not to work as I would expect.
On page 7 of the PDF:
Booch identifies five common characteristics to all complex systems:
1. There is some hierarchy to the system. A minute ago we viewed hierarchy as something we did to cope with complexity. This view says that a complex system is always hierarchic. Perhaps people can only see complex systems in this way.
2. The primitive components of a system depend on your point of view. One man’s primitive is another complexity.
3. Components are more tightly coupled internally than they are externally. The components form clusters independent of other clusters.
4. There are common patterns of simple components which give rise to complex behavior. The complexity arises from the interaction or composition of components, not from the complexity of the components themselves.
5. Complex systems which work evolved from simple systems which worked. Or stated more strongly: complex systems built from scratch don’t work, and can not be made to work.
Also on page 40:
Generality is good, but comes at a cost. Some balance must be met. Developers who really become OO (object oriented) are particularly prone to this problem. The elegant, complete, general solution to a class design becomes a goal unto itself. You’ve got to know when good is good enough.
I cannot tell you how many conversations I have had with software developers about over-generalizing something. Take an "element" in XBRL. When you generalize everything down to an element, you totally lose important differences between the categories of elements: hypercube, dimension, member, primary item, concept, abstract.
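A small sketch of the alternative: model the distinct categories as distinct types rather than one generic "element", so the differences cannot be lost. The category names come from the list above; the code itself is illustrative and not from any actual XBRL processor API.

```python
# Sketch: distinct element categories as distinct types. Once the
# category is explicit, behavior can depend on it, e.g., only concepts
# and primary items carry reported values; hypercubes, dimensions,
# members, and abstracts organize them.

from enum import Enum

class ElementCategory(Enum):
    HYPERCUBE = "hypercube"
    DIMENSION = "dimension"
    MEMBER = "member"
    PRIMARY_ITEM = "primary item"
    CONCEPT = "concept"
    ABSTRACT = "abstract"

def can_report_fact(category):
    """Return True if elements of this category carry reported values."""
    return category in (ElementCategory.CONCEPT, ElementCategory.PRIMARY_ITEM)

print(can_report_fact(ElementCategory.CONCEPT))   # True
print(can_report_fact(ElementCategory.ABSTRACT))  # False
```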
To create good software you need three things:
- Extremely deep business domain expertise for the business domain for which the software will be developed.
- Extremely deep technical expertise, and at least some of this expertise should be in the form of a technical architect/engineer. Now, I don't mean "engineer" in the way many IT people use the term. Many IT people tend to think that everyone who understands how to program is an engineer. This is NOT the case.
- Extremely good communication between the deep domain experts and the deep technical experts. This is extremely hard.
Developing good software is hard work. It is no wonder that business users don't particularly care for the software which is used to create SEC XBRL financial filings. Not only is the software hard to use, but it often does not even work correctly for its intended purpose. Now I understand why this is the case.
So don't get me wrong here. I am NOT saying that software vendors are the problem. I mention THREE points above. You need all three. Business domain professionals are just as culpable as the technical experts. Clearly the requirements or purpose of the software are not being communicated effectively.
More to come...
- Disobedience over compliance. You don't win a Nobel Prize by doing what you are told, you win Nobel Prizes by questioning authority and thinking for yourself.
- Emergence over authority. How does the authority of a system exist? Authority is not about someone with a title, authority is about leadership.
- Practice over theory. It is better to have a fact than a theory.
- Learning over education. Education is what others do to you; learning is what you do to yourself.
- Pull over push. Pull from the network as you need it rather than stocking what you think you might need.
- Compasses over maps. You need a compass heading for where you want to go, not an entire map.
- Resilience over strength. Understand how to recover (bounce back, resilience) rather than focusing on how not to fail.
- Systems over objects. Focus on the system, not the objects in the system.
- Risk over safety. Take risks, don't spend your entire time figuring out how not to fail.