BLOG: Digital Financial Reporting
This is a blog for information relating to digital financial reporting. This blog is basically my "lab notebook" for experimenting and learning about XBRL-based digital financial reporting. This is my brainstorming platform. This is where I think out loud (i.e. publicly) about digital financial reporting. This information is for innovators and early adopters who are ushering in a new era of accounting, reporting, auditing, and analysis in a digital environment.
Much of the information contained in this blog is synthesized, summarized, condensed, and better organized and articulated in my book XBRL for Dummies and in the chapters of Intelligent XBRL-based Digital Financial Reporting. If you have any questions, feel free to contact me.
Entries from January 1, 2009 - January 31, 2009
SEC Issues Final Ruling: Interactive Data to Improve Financial Reporting
The US Securities and Exchange Commission issued its final ruling relating to the use of interactive data (i.e. XBRL) by public companies for reporting to the SEC. The final ruling can be found here.
Why Web-based APIs Are Useful and Important
Not a lot of business people understand what an API (or Application Programming Interface) is or why it is important to them. Even fewer understand what a web-based API is, which is even more important and interesting, and will change the way a lot of things get done.
I am going to endeavor to briefly explain, as best I can, why these web-based APIs are so important to understand, principally because business people really should be asking for them: they are so useful, and they save tons of time and money.
Consider this API documentation: http://developer.zoominfo.com/ Please don't focus on what the API provides in this case, which is an interface into ZoomInfo contact information. This has nothing to do with ZoomInfo itself. I am only using this web page as an example because the documentation of the API is so good. If you take a look at it, particularly the documentation page, you will get a good idea of what a web-based API is and what you can do with one.
Here is another simple web-based API: http://www.xbrlsite.com/EarningsEstimate/ I created this five or so years ago when I was learning about these web-based APIs. It was modeled after another API which I was given access to. This API works: you can change the parameters in the URL, and different data will come back in the XML documents which are returned, based on the information you provide. Again, this is just a demo, a prototype.
Why is this so important to understand? Well, it is important because it changes the way a lot of things can be done. This sort of web-based API provides an interface which a computer application can communicate with, performing tasks which, if the API did not exist, humans would have to perform.
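To make this concrete, here is a rough Python sketch of what talking to such an API looks like. Everything here is hypothetical: the base URL, the parameter names, and the shape of the XML response are made up, loosely modeled on the earnings-estimate demo mentioned above, not on any real service.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Build the kind of parameterized URL a web-based API accepts.
# The base URL, parameter names, and response shape are all hypothetical.
base = "http://www.example.com/EarningsEstimate/GetEstimate"
params = {"Ticker": "MSFT", "FiscalYear": "2009"}
url = base + "?" + urlencode(params)

# A real client would fetch `url` with urllib.request.urlopen();
# here we parse a canned response of the sort such an API might return.
response = """<EarningsEstimate>
  <Ticker>MSFT</Ticker>
  <FiscalYear>2009</FiscalYear>
  <EstimatedEPS>1.90</EstimatedEPS>
</EarningsEstimate>"""

root = ET.fromstring(response)
eps = float(root.findtext("EstimatedEPS"))
print(url)
print("Estimated EPS:", eps)
```

That is the whole idea: a program builds a URL, gets structured data back, and pulls out exactly the values it needs, with no human reading a web page in the middle.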
Why should you have to go to a web page, look at it, and manually grab information by reading it, or by copying and pasting it into an application or Excel spreadsheet? Here is an example of what I am talking about. I am consciously picking an example that has nothing to do with financial reporting. Take a look at the CIA World Fact Book.
When you click on the link above, you go to the "Afghanistan" page of that book. Scroll down and look at the GDP (Gross Domestic Product, purchasing power parity). Imagine that you wanted that information for each country for some model you maintained in an Excel spreadsheet. No problem: the CIA, using our taxpayer dollars, provides that information here. You can probably dink around with that, tweak it, and get it into your model. It would take time, but it can be done. But what if you wanted some relationship that they did not provide you in one of their lists? Say, the relationship between GDP and the number of cell phone users? Or whatever.
What if you did not have to go through this cut-and-paste process? What if you could write a rather simple query and get the information returned exactly where you wanted it in your spreadsheet? What if your model needed data from the CIA World Fact Book and from the Rick Steves Travel Guide used together? Copying and pasting really does not cut it.
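Here is a sketch of the kind of query you could run if this data were available through an API rather than only on web pages. The figures below are made-up placeholders, not real CIA World Fact Book data; the point is that combining two data sets becomes a few lines of code instead of a copy-and-paste session.

```python
# Made-up placeholder figures, not real CIA World Fact Book data.
gdp_ppp = {"Afghanistan": 35.0e9, "Albania": 21.8e9}          # USD, PPP basis
cell_phone_users = {"Afghanistan": 8.45e6, "Albania": 1.3e6}  # subscribers

# A relationship the published lists don't hand you directly:
# GDP per cell phone subscriber, computed across every country at once.
gdp_per_cell_user = {
    country: gdp_ppp[country] / cell_phone_users[country]
    for country in gdp_ppp
}
for country, value in sorted(gdp_per_cell_user.items()):
    print(f"{country}: {value:,.0f} USD of GDP per cell phone subscriber")
```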
In the early days of the internet, people went through a process known as "screen scraping" to get information from a web site and use it someplace else. The process is complicated and error prone.
Edgar Online uses a somewhat similar approach to grabbing information from the filings of public companies. They do this by reading in the SGML or HTML filings and then parsing them, trying to glean useful bits and make them available to those who want the information. This parsing is complicated and expensive.
Others simply read the information and rekey it into databases.
But XBRL changes all that. You don't have to rekey information; you can simply ask for it, and some piece of software will go get the information you are looking for and return it. But how is that software going to present that information to you? In the form of a web page which you have to rekey into some Excel model?
No, or hopefully not. Hopefully there will be some API provided by the SEC, like the one ZoomInfo provides (and, hopefully, as well documented). The FDIC provides an interface into the information they collect from banks, although I have been having trouble figuring out how to use it.
Imagine the possibilities:
- Imagine a query interface which provides information from the SEC IDEA system and which you can use to automatically populate an Excel model. It is highly likely that either the SEC will provide this or that some third party, such as Edgar Online, will provide such a web-based API.
- But what about other things? What if you had some way of analyzing something, say a tax return or some other data set, and you wanted to provide your professional knowledge to others, but rather than doing this on the phone, you did it by providing a web-based API, for a fee of course, which you distributed via the Web to anyone in the world? Every time someone used the API, they would pay you a fee because you provided useful analytical insights with your model or algorithm.
- Imagine what additional analysis can be done because there is more data available and because the cost of performing the analysis is so low.
- Why do companies even file information with regulators? Why can't regulators just require companies to provide a "window" via an API into their systems, so the regulator can look at what the regulations say they can look at, whenever they choose to look at it? Or, same deal with auditors, both internal and external. Why the heck do internal auditors, external auditors, and regulators all need different information?
I am sure you can think of some better examples. The bottom line here is that web-based APIs which access information or processes will dramatically change how things get done.




Thoughts About XBRL Application Profiles
The idea of different XBRL application profiles has been around for quite some time. And there have even been some application profiles around, though people really have not been calling them application profiles. For example, it seems to me that XBRL GL is an application profile. The US GAAP Taxonomy Architecture refers to an application profile, calling for those who extend the US GAAP Taxonomy to follow the architecture of the US GAAP Taxonomy. This is from page 4 of that document:
"Application profile- These voluntary restrictions followed by [US GAAP Taxonomy] version 1.0 architecture form an "application profile" for the use of XBRL features within the taxonomy. It is strongly recommended that extensions to version 1.0 stay within this application profile. Systems using version 1.0 may, at their option, set rules which force extension taxonomies to stay within this application profile."
Seems to me that every taxonomy is really an application profile. Why would you want someone to go outside the bounds of a base taxonomy? So, it seems that COREP, FINREP, the Netherlands, IFRS and other taxonomies also form application profiles, although perhaps not as explicitly as the US GAAP Taxonomy.
When I think of application profiles though, I think of something more general. Does it really make sense for every taxonomy to have its own application profile? That does not seem right to me.
I experimented with what several of us called XBRLS, which I believe is also an application profile. And that one is not tied to any specific taxonomy; it is general. It does borrow heavily from ideas used in the US GAAP Taxonomy.
Reflecting back though, it seems to me that the following would be a rather good set of application profiles for XBRL:
- XBRL Multidimensional Model Application Profile: This follows the multidimensional model, using cubes (or hypercubes), dimensions (or axes), and primary items (or members).
- XBRL Relational Model Application Profile: This follows the relational model making it really easy for information to fit into the rows and columns of a relational database table.
- XBRL Spreadsheet Model Application Profile: This is designed to make it easy for information to go back and forth between XBRL and a spreadsheet, leaving all that goofy formatting most people seem to pack inside spreadsheets behind, focusing only on the information.
- XBRL XML Content Model Application Profile: This is basically tuple heavy, using the XML content model the way many other XML languages use that content model, but rooted in XBRL. The upside of this is that rendering things is rather trivial. The downside is that it is not very extensible.
- XBRL GL Application Profile: This is specific to working with XBRL GL (XBRL Global Ledger).
- General XBRL Application Profile: By default, if you don't set any constraints, this is the application profile you are using. (But frankly, why anyone would ever do this to themselves is beyond me.)
It seems to me that mixing these application profiles within one taxonomy could cause problems. For example, the US GAAP Taxonomy makes heavy use of the multidimensional model, but not everything in the taxonomy participates in XBRL hypercubes. Seems to me that this is contrary to the multidimensional model, where everything fits neatly into a cube. (Whereas in the COREP taxonomy everything does exist within an XBRL hypercube, staying true to the multidimensional model.) The coming year should provide good information as people start using the taxonomy and creating instance documents with it.
A more general way of saying this is that mixing, say, XBRL Dimensions and tuples could be quite odd. Taxonomy creators have tended to shy away from this, using either tuples or XBRL Dimensions but never both.
Also, it seems to me that some of these simpler models can actually make getting into XBRL and using it a lot easier. Over the past couple of years I have been pretty down on tuples, but I think I am reversing that opinion. It makes a lot of sense to express things which go into relational databases using tuples, as long as you create your tuples correctly.
It would be a good thing if certain taxonomies used the same architectures, or application profiles. Evidence of this is the effort to create Taxonomy Architecture Interoperability (TAI). The document linked to states:
"Together with the US Securities and Exchange Commission and the JFSA, the IASC Foundation XBRL team has initiated work towards Taxonomy Architecture Interoperability. The objective of this initiative, which has recently been joined by the European Commission, is to identify opportunities for Taxonomy Architecture alignment."
Clearly this would be a very good thing.
Having everyone who ever creates a taxonomy start from scratch would not be a good thing, it seems to me. As such, intuitively it seems to me that application profiles are a very good thing.




XBRL Planet's List of XBRL Projects Around the World Provides a Few Lessons
XBRL Planet maintains a list of XBRL projects from around the world. You can go to the XBRL Planet and view this list of projects laid out on top of a map. The data which is laid on top of the map comes from an XML file provided by the site: http://xbrlplanet.org/planet/xbrlprojects2xml.php. The XML file looks to be generated from information contained in a database.
The map is great, but I personally wanted a list of projects more in the form of a table so I could look through the projects. So, I created that table. I actually did this in a number of different ways, experimenting with different approaches. I ended up with this as one of them: http://www.xbrlsite.com/xbrlplanet/ReadProjectsXML.aspx. Nothing spectacular, but it gives me what I want, and I learned a few things I was trying to learn about reading XML files using VB.Net.
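The core of that table-building step is only a few lines. Here is a Python sketch of the idea; the canned sample below stands in for the feed, and its element names are my assumptions, not necessarily the actual structure of the XBRL Planet file:

```python
import xml.etree.ElementTree as ET

# Canned sample standing in for the feed at
# http://xbrlplanet.org/planet/xbrlprojects2xml.php; the element names
# here are assumptions rather than the feed's actual schema.
feed = """<projects>
  <project><name>SEC Interactive Data</name><country>United States</country></project>
  <project><name>COREP</name><country>European Union</country></project>
</projects>"""

# Flatten the XML into (country, name) rows for a simple table.
rows = [
    (p.findtext("country"), p.findtext("name"))
    for p in ET.fromstring(feed).findall("project")
]
for country, name in rows:
    print(f"{country:<20} {name}")
```

In a real version you would fetch the live feed instead of the canned string, and the table would refresh itself whenever the source data changes.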
I think that it is pretty slick that I can simply use someone else's data to maintain my tabular list. They update their data; my list gets updated. This is just as interesting as Conor O'Kelly from XBRL Planet leveraging Google Maps to show the location of projects around the world. There is even a word for this: mashups. Or at least this is something like a mashup; the data is not really coming from more than one source. In these two situations the data comes from one place and the formatting comes from somewhere else.
On the XBRL Planet web site is the following disclaimer:
"This map is powered by Google Maps and may reside on servers not controlled by us."
Now, XBRL Planet controls their data, but Google controls the program to render the data on the map. I control the program which generates my tabular list, but XBRL Planet controls the data. Interesting. Normally one party controls everything and is ultimately responsible for the content.
There seems to be a word for this also: data governance.
I have no idea if the XBRL Planet data is correct, when it is updated, or even if it is updated. The XBRL Planet data is also linked to several other sources of information. There are links to each project's web site, to Wikipedia, which explains a little about the organization undertaking the project, and to IASPLUS for information about how IFRS might impact the project.
Another question I had about this little process was whether there were any advantages to having the information provided by XBRL Planet in XBRL, rather than simply in XML. So, I took the XML from XBRL Planet, created a style sheet, and converted the XML into XBRL. You can see this here: http://www.xbrlsite.com/xbrlplanet/ReadXBRLPlanet.aspx.
The conversion process was trivial. Also, every time the XBRL Planet information is updated, my little XBRL prototype of the XML data is converted to XBRL. There are some advantages and also some drawbacks to doing this. The advantage I saw is that I have all these applications which work with XBRL, and I am familiar with XBRL; so, for me, it is easier to do things with XBRL. Also, although I did not use them in this example, I can see that being able to use XBRL Formulas to help validate the data would be quite nice. One drawback is that XBRL requires a schema file to be available, and I did not create a schema file (i.e. a taxonomy). I can also see that other advantages, such as the language features (i.e. providing the XBRL Planet information in multiple languages), are probably easier to get using XBRL.
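My conversion used a style sheet, but the same mapping can be sketched in a few lines of Python. The taxonomy namespace and element names below are hypothetical, and the output omits things a real XBRL instance needs (a schemaRef, context periods, a proper entity identifier scheme), so treat this purely as an illustration of turning plain XML data points into XBRL facts tied to contexts:

```python
import xml.etree.ElementTree as ET

# Source XML of the kind the XBRL Planet feed provides
# (element names are assumptions).
source = """<projects>
  <project><name>SEC Interactive Data</name><country>United States</country></project>
</projects>"""

XBRLI = "http://www.xbrl.org/2003/instance"
PLANET = "http://example.com/xbrlplanet"  # hypothetical taxonomy namespace
ET.register_namespace("xbrli", XBRLI)
ET.register_namespace("planet", PLANET)

xbrl = ET.Element(f"{{{XBRLI}}}xbrl")
for i, project in enumerate(ET.fromstring(source).findall("project"), start=1):
    context_id = f"ctx-{i}"
    # Each fact references a context; a real instance would also need a
    # schemaRef and a period in the context, which this sketch omits.
    context = ET.SubElement(xbrl, f"{{{XBRLI}}}context", id=context_id)
    entity = ET.SubElement(context, f"{{{XBRLI}}}entity")
    ET.SubElement(entity, f"{{{XBRLI}}}identifier",
                  scheme=PLANET).text = project.findtext("name")
    for field in ("name", "country"):
        fact = ET.SubElement(xbrl, f"{{{PLANET}}}{field}", contextRef=context_id)
        fact.text = project.findtext(field)

instance = ET.tostring(xbrl, encoding="unicode")
print(instance)
```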
One final thought. Wouldn't it be cool and useful if XBRL International would:
- Create some sort of RSS feed on the XBRL International web site.
- Have each of the XBRL International Jurisdictions create some sort of XML or XBRL file with information about XBRL projects in their jurisdiction.
- Have each of the jurisdictions maintain this project information on their web site, keeping it current as part of their commitment to XBRL International.
- Have the RSS or other feed from the first item above point to each of those lists of projects.
- Have accurate data on XBRL Projects around the world, managed by XBRL International and the jurisdictions.
Seems like this would be a great demonstration of the power of XBRL.





GE Using Blogs to Disclose Information
After my blog post about using RSS/ATOM for financial information disclosure, a reader of my blog made me aware that GE is using blogs to disclose information to investors.
Reuters has a good article about GE's use of a blog, GE embraces blogs, some see disclosure worry.
One of the most interesting things about this article is this quote:
"Any time you shrink the number of outlets you use or the number of methods you use, you raise the potential for reaching a smaller audience," said Ken Dowell, executive vice president at PRNewswire. "I don't think that levels the playing field at all."
Huh. PRNewswire not thinking blogs are a good idea. Seems to me they have a bit of an interest in maintaining the status quo.
Anyway, this experimentation by GE is great.



