BLOG: Digital Financial Reporting
This is a blog for information relating to digital financial reporting. This blog is basically my "lab notebook" for experimenting and learning about XBRL-based digital financial reporting. This is my brainstorming platform. This is where I think out loud (i.e. publicly) about digital financial reporting. This information is for innovators and early adopters who are ushering in a new era of accounting, reporting, auditing, and analysis in a digital environment.
Much of the information contained in this blog is synthesized, summarized, condensed, and better organized and articulated in my book XBRL for Dummies and in the chapters of Intelligent XBRL-based Digital Financial Reporting. If you have any questions, feel free to contact me.
Entries from February 19, 2012 - February 25, 2012
Semantic Object Reconciliation Clarifies Areas of Ambiguity of SEC XBRL Financial Filings
This document provides a reconciliation between the semantic objects of the Financial Report Semantics and Dynamics Theory, the model which I put together for implementing SEC XBRL financial filings, and the XBRL technical syntax. It helps make clear two areas of ambiguity of SEC XBRL financial filings:
- When should a network be used and when should a [Table] be used? Said another way, what meaning exactly does a network have and what meaning does a [Table] have?
- When should a characteristic which describes a financial fact be modeled as a concept, and when should it be modeled as a [Member] of an [Axis]?
These two issues are a big deal because having these choices makes it more difficult to create easy-to-use software and increases the complexity, and therefore the cost, of creating SEC XBRL financial filings.
Anyone who creates SEC XBRL financial filings probably understands these issues. OK, so what do you do about this? Well, for the first issue, many SEC filers are already doing a good thing, which is to create a one-to-one correlation between networks and [Table]s. This is a safe thing to do. Some filers always use a [Table]; I know that 100% of Edgar Online filings are done this way (I used to work there).
So basically, if there is a one-to-one correlation between networks and [Table]s, then there is no ambiguity between what a network means and what a [Table] means because they mean the same thing. As such, the first issue is not that hard to work around, although it does not fix the problem.
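The one-to-one convention above is something software can check mechanically. Here is a minimal sketch in Python, assuming a filing has already been parsed into a mapping from network identifiers to the [Table]s each network contains (the data structure and the names in it are hypothetical, not tied to any particular XBRL library):

```python
def check_network_table_correlation(networks):
    """Return networks that break the one-to-one network/[Table] convention.

    `networks` maps a network identifier (e.g. its role URI) to the list
    of [Table] names defined in that network.
    """
    problems = {}
    for network, tables in networks.items():
        if len(tables) != 1:  # zero or multiple [Table]s is ambiguous
            problems[network] = tables
    return problems

# Example: the first network follows the convention, the other two do not.
filing = {
    "BalanceSheet": ["StatementTable"],
    "Disclosures": ["DebtTable", "InventoryTable"],
    "DocumentInfo": [],
}
print(check_network_table_correlation(filing))
```

A check like this, run during filing creation, removes the ambiguity before anyone has to interpret it.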
The second issue is somewhat trickier, but there are some guidelines you can use. First off, your choice of whether to model something as a concept or an [Axis] is often determined by how something else has been modeled.
For example, if you want to model the components of "cash and cash equivalents" or "inventory" so that they show up correctly on your balance sheet and the balance sheet foots correctly, you pretty much need to model the detailed components as concepts, because everything else on the balance sheet is a concept.
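"Footing" here is simple arithmetic: the detailed component concepts must add up to the reported total. A tiny Python sketch, with hypothetical component names and values purely for illustration:

```python
def foots(components, total):
    """True when the detailed component concepts sum to the reported
    total, i.e. the balance sheet line "foots"."""
    return sum(components.values()) == total

# Hypothetical components of "cash and cash equivalents", modeled as
# concepts so they participate in the balance sheet's arithmetic.
cash_components = {
    "CurrencyOnHand": 500,
    "MoneyMarketFunds": 1500,
    "CommercialPaper": 1000,
}
print(foots(cash_components, 3000))  # prints True
```

When the details are modeled as concepts, this is exactly the kind of relation an XBRL calculation can express and software can verify.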
On the other hand, you could choose to disclose the details in the disclosures, in which case you could take either approach: modeling those details as concepts or as [Member]s of an [Axis]. In this case, as long as nothing else would be goofed up and fail to fit together properly, I would personally model the information as [Member]s of an [Axis] in most cases. The primary reason is that you can then connect things together better, for example the details and the policies which might relate to those details.
The data point model tries to solve this issue of figuring out whether to model something as a concept or as the [Member] of an [Axis] (i.e. this issue is not unique to the US GAAP Taxonomy) by modeling a set of base items as concepts and all further details as [Member]s of an [Axis]. However, that might be a bit too radical an approach to standardize on. But the data point model is additional justification to veer toward using [Member]s of an [Axis] if you have a choice between the two options.




Second Software Product Verifies Core Financial Integrity Semantics
For about a year now I have contended that SEC XBRL financial filings (10-K and 10-Q) have a set of core financial semantics. I summarized these last (and best) in my Financial Report Semantics and Dynamics Theory. Until now I was the only one who had code which proved these semantics, and my code was not commercial-quality code. That changed today.
Today, I ran those same core financial semantics rules over the same set of 8098 SEC XBRL financial filings using a commercial off-the-shelf software application. I expressed these semantic rules using their rules language and ran the rules using their rules engine, and I got the same results using that software application that I got using the software I created.
I want to try and run the rules against my entire set of test cases; as of now I have only spot checked because I cannot execute the rules in batch mode. Working on that.
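To give a feel for what these rules look like, here is a Python sketch of one such core semantic: the accounting equation must hold for every reported period. This is my own illustration, not the commercial rules language mentioned above, and the concept names and data structure are hypothetical:

```python
def check_accounting_equation(facts, tolerance=0):
    """Return periods where Assets != LiabilitiesAndStockholdersEquity.

    `facts` maps a period label to the fact values reported for that
    period; the concept names are illustrative, not tied to a taxonomy.
    """
    return [
        period
        for period, v in facts.items()
        if abs(v["Assets"] - v["LiabilitiesAndStockholdersEquity"]) > tolerance
    ]

filing_facts = {
    "2011-12-31": {"Assets": 1000, "LiabilitiesAndStockholdersEquity": 1000},
    "2010-12-31": {"Assets": 900, "LiabilitiesAndStockholdersEquity": 800},
}
print(check_accounting_equation(filing_facts))  # prints ['2010-12-31']
```

The point is that each core semantic is a simple, mechanical assertion; the hard part is expressing the full set and running it over thousands of filings.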
More to come...




Over/Under Verifying your SEC XBRL Financial Filing?
Many who create (or help someone who is creating) SEC XBRL financial filings are over verifying them. Others are under verifying. Both over verifying and under verifying should be avoided.
If you are over verifying, you are doing work which takes time and costs money and which either is being done already by software or could be done using software. For example, if you are checking the type attribute value of a [Table] or [Axis], that is a waste of time. Every [Table] and every [Axis] is required to have a type attribute value of string (or xbrli:stringItemType). Same sort of deal for the period type of a [Table] or [Axis]; both are required to have a periodType attribute value of "duration". Pretty much all good software enforces this type of rule.
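A rule like this is trivial for software to enforce, which is exactly why a human re-checking it is wasted effort. A minimal Python sketch, assuming the report elements have been parsed into a dictionary of attribute values (a hypothetical representation, not any specific tool's API):

```python
# The fixed values required of every [Table] and every [Axis].
REQUIRED = {"type": "xbrli:stringItemType", "periodType": "duration"}

def check_table_axis_attributes(elements):
    """Flag [Table] or [Axis] report elements whose attributes deviate
    from the required fixed values. `elements` maps an element name to
    its attribute dict."""
    return {
        name: attrs
        for name, attrs in elements.items()
        if any(attrs.get(k) != v for k, v in REQUIRED.items())
    }

elements = {
    "StatementTable": {"type": "xbrli:stringItemType", "periodType": "duration"},
    "BadAxis": {"type": "xbrli:monetaryItemType", "periodType": "duration"},
}
print(check_table_axis_attributes(elements))  # flags only BadAxis
```

Once a check is this mechanical, it belongs in software, not on a human's review checklist.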
Another type of over verifying is checking things like the contextRef on facts, which has zero semantic meaning. All a contextRef does is hook things together; it is pure syntax. If you verify that the characteristics of a fact are correct, there is a 100% probability that the "hook" between the fact and those characteristics is correct. So, why would you verify the contextRef? No need.
If you are under verifying, you are not doing enough work to be sure your SEC XBRL financial filing is correct. An example of this is not creating XBRL Formulas (or using some other means of verification) for the many, many computations which exist, such as roll forwards and dimensional aggregations, which an XBRL calculation relation will not verify as being correct. This link is to a report which verifies that some 40 computations which exist in my model SEC XBRL financial filing are correct. As you know, financial reports have many computations; if you are not creating XBRL Formulas or something else to verify that they are correct, you are under verifying your filing and the chance that an error exists is high.
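A roll forward is a good example of a computation XBRL calculations cannot check but which is still completely mechanical: the beginning balance plus the period's changes must equal the ending balance. A Python sketch with hypothetical numbers, standing in for what an XBRL Formula would express:

```python
def check_roll_forward(beginning, changes, ending):
    """A roll forward holds when the beginning balance plus the
    period's changes equals the ending balance."""
    return beginning + sum(changes) == ending

# Hypothetical cash roll forward: beginning cash, net changes during
# the period, ending cash.
print(check_roll_forward(1000, [250, -100], 1150))  # prints True
```

Every roll forward and dimensional aggregation in a filing deserves an assertion of this kind; skipping them is where under verifying bites.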
Another type of under verifying is not checking the structure of your [Table]s, [Axis]s, [Member]s, and [Line Items]. A [Table] with no [Axis] makes little sense. A [Member] mixed in with the [Line Items] makes no sense. All of these types of tests can be automated and checked by a software application.
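Those structural tests are also easy to automate. A Python sketch, using a simplified hypothetical representation of one [Table] (a real tool would walk the presentation relationships; here the naming convention that [Member] names end in "Member" is an assumption for illustration):

```python
def check_table_structure(table):
    """Return structural problems for one [Table], represented as a dict
    with 'axes' (a list of [Axis] names) and 'line_items' (a list of
    names which must be concepts, never [Member]s)."""
    problems = []
    if not table["axes"]:
        problems.append("[Table] has no [Axis]")
    for name in table["line_items"]:
        if name.endswith("Member"):  # assumed naming convention
            problems.append(f"[Member] {name} mixed into [Line Items]")
    return problems

bad_table = {"axes": [], "line_items": ["Cash", "LegalEntityMember"]}
print(check_table_structure(bad_table))
```

Running checks like these on every [Table] catches the structural nonsense a human reviewer might skim past.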
In this matrix I tried to summarize the types of things which a computer software application can verify. The matrix shows the types of report elements in the rows and the properties a report element could have in the columns. NA means that a report element does not have that property. Properties which carry semantic meaning are listed.
- Properties which computer software can verify 100% are highlighted in gray.
- Properties which humans need to verify are highlighted in yellow.
- Properties where both human effort and software can be used are highlighted in light orange.
There are other examples of over verifying and under verifying. But this will get you started figuring out the right mix and whether software or humans should be doing the work.



