Back in November 2009 I gave an A+ to one SEC XBRL filer, Citigroup, for meeting a base set of criteria. By February 2010 the number of filers earning an A+ had grown to 24. By March the list grew to 92. Then in September 2010 it grew to 4,875. If you look at the XBRL Cloud EdgarDashboard, you don't see much red in the "E" column for EFM validation errors or even in the "W" column for warnings.
Great job, SEC filers! Lots of improvement. This is not to say that a lot more work isn't still needed.
Another significant milestone has been reached: a readable rendering.
What do I mean? Well, looking at an extreme example will make the point. Consider the example below:
It would be hard for me to believe that anyone sees this example as something which should be emulated.
Why do SEC XBRL filings render like this? Well, let me tell you who has the most readable rendering that I have seen. There are only two criteria, and I am not saying that this is the only SEC filing which meets them. What I am saying is that it is the first SEC filing I have run across which does.
The first criterion is that the filing must be a 10-K, meaning it has lots of rather complex stuff in it. The second criterion is that you can look at the information in the SEC XBRL financial filing and not be totally confused by what you see.
So who is the winner of this distinction? Google.
- SEC Viewer for Google's 10-K
- XBRL Cloud Viewer for Google's 10-K
- Here is the Google XBRL Taxonomy (so you can see what modeling approach they used)
- XBRL instance (load this into your favorite XBRL software)
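If you don't have XBRL software handy, the basic shape of an instance document is simple enough to poke at with a script. Here is a minimal sketch using only the Python standard library; the instance snippet, the element name goog:Revenues, and the fact value are all invented for illustration (a real filing would use US GAAP taxonomy concepts and have thousands of facts):

```python
# Minimal sketch: pull facts out of a (hypothetical, greatly simplified)
# XBRL instance using only the Python standard library.
import xml.etree.ElementTree as ET

XBRLI = "http://www.xbrl.org/2003/instance"

# Invented instance snippet for illustration only.
instance = """<xbrl xmlns="http://www.xbrl.org/2003/instance"
      xmlns:goog="http://example.com/goog">
  <context id="FY2009">
    <entity><identifier scheme="http://www.sec.gov/CIK">0000000000</identifier></entity>
    <period><startDate>2009-01-01</startDate><endDate>2009-12-31</endDate></period>
  </context>
  <unit id="USD"><measure>iso4217:USD</measure></unit>
  <goog:Revenues contextRef="FY2009" unitRef="USD" decimals="-6">23650563000</goog:Revenues>
</xbrl>"""

root = ET.fromstring(instance)

# Facts are the top-level children that are NOT in the xbrli namespace;
# contexts and units (and the xbrl root itself) all live in xbrli.
facts = [el for el in root if not el.tag.startswith("{" + XBRLI + "}")]

for fact in facts:
    name = fact.tag.split("}")[1]
    print(name, fact.get("contextRef"), fact.get("unitRef"), fact.text)
```

The point of the sketch: every reported number carries a contextRef (who, when) and a unitRef (what measure), which is exactly the information a viewer uses to lay facts out on a rendering.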
What is even more interesting is that I get close to the same renderings in three different software applications created by three different entities. First, of course, is the SEC viewer, which was created by the SEC. The second is XBRL Cloud; that viewer is freely available for looking at SEC filings, thus the link above. The third is CoreFilings, whose product is Magnify. You have to either buy that software or get a 30-day trial version to try it out.
Do you understand what this means? One global standard, three different software vendors. I am sure that there are more software vendors whose products can load and render Google's filing, but I have only used three.
And how did Google achieve this distinction?
- Google created many smaller pieces, rather than fewer larger pieces. Google's filing has 92 networks. Remember that half of them are XHTML "[Text Block]"s; so I figure that there are about 40 or 50 pieces, given that the primary financial statements do not have [Text Block]s.
- Google modeled the information. If you go look at that not-so-good example above, it is pretty easy to tell that whoever modeled it had no notion of a "model" in mind; all they did was pick a piece of the financial statement, parse the HTML presentation into pieces, and put some "tag" on each piece. Google did not do that. Google clearly thought about what they were doing, they modeled the information, and the rendering turned out pretty good as a result. Perhaps not perfect, but not one of those pieces looks even remotely like the example shown above.
- Google matched up [Axis] and [Line Items] well. A major reason for poor renderings is mismatches between the [Axis] and the [Line Items] for reported facts. This relates to point "1" above (many smaller pieces, rather than fewer larger pieces). Proper match-ups, which Google achieved most of the time, produce better renderings.
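Those "pieces" are networks: in the presentation linkbase, each extended link with its own role is one network, and each network is one renderable unit. Here is a minimal sketch of counting them, again stdlib only; the linkbase snippet and its role URIs are invented for illustration:

```python
# Minimal sketch: count "networks" in an XBRL presentation linkbase.
# Each presentationLink element with its own xlink:role is one network,
# i.e. one renderable piece. The linkbase snippet is invented.
import xml.etree.ElementTree as ET

LINK = "http://www.xbrl.org/2003/linkbase"
XLINK = "http://www.w3.org/1999/xlink"

linkbase = """<link:linkbase
    xmlns:link="http://www.xbrl.org/2003/linkbase"
    xmlns:xlink="http://www.w3.org/1999/xlink">
  <link:presentationLink xlink:type="extended"
      xlink:role="http://example.com/role/BalanceSheet"/>
  <link:presentationLink xlink:type="extended"
      xlink:role="http://example.com/role/IncomeStatement"/>
  <link:presentationLink xlink:type="extended"
      xlink:role="http://example.com/role/SegmentsTextBlock"/>
</link:linkbase>"""

root = ET.fromstring(linkbase)

# Collect the distinct role URIs; each one identifies a network.
roles = {el.get("{%s}role" % XLINK)
         for el in root.findall("{%s}presentationLink" % LINK)}

print("Networks:", len(roles))
for role in sorted(roles):
    print(" ", role)
```

Run the same count against a filer's taxonomy and you can see at a glance whether they went with many small pieces (like Google's 92 networks) or a few sprawling ones.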
That is really all it takes to get a good rendering. Don't believe me? Take a look at the renderings of the disclosure templates I am creating. If you find one that you don't understand (assuming, of course, that you understand the accounting), please let me know. The disclosure templates are small pieces, but don't be fooled by that. Add more [Axis] or more [Line Items] and you will still have good renderings.
Bad renderings are caused by two things: (1) bad models and (2) bad rendering engines. If you have a good rendering engine and a bad model, you will still have a bad rendering. Good rendering engines cannot fix bad models.
Great work, Google! I think I will go through all of Google's computations next and see how well they did on those.