BLOG:  Digital Financial Reporting

This is a blog for information relating to digital financial reporting.  This blog is basically my "lab notebook" for experimenting and learning about XBRL-based digital financial reporting.  This is my brainstorming platform.  This is where I think out loud (i.e. publicly) about digital financial reporting.  This information is for innovators and early adopters who are ushering in a new era of accounting, reporting, auditing, and analysis in a digital environment.

Much of the information contained in this blog is synthesized, summarized, condensed, better organized, and articulated in my book XBRL for Dummies and in the chapters of Intelligent XBRL-based Digital Financial Reporting.  If you have any questions, feel free to contact me.

Entries from January 3, 2010 - January 9, 2010

Another Step Toward Understanding OWL, RDF and How They Relate To XBRL

This is another step in the journey of understanding OWL, RDF, and how they relate to XBRL.  You can see other blog posts relating to this here.

OK, so I think I am getting more and more dialed in as to what OWL, RDF, and XBRL offer.  At this point I am not totally sure about this.  Consider this blog post brainstorming.  I will use this post to get feedback from others who understand these things better than I do and then tune my perceptions.  So be sure to check back later to see where these perceptions end up.

RDF, OWL, and XBRL are all "graphs".  Or maybe I should say that they all share the same approach, graphs, to articulate information of some sort.  That approach was chosen because of the flexibility of graphs in expressing information.  You can do a lot with "subject-predicate-object" relations.  These "subject-predicate-object" relations were not invented by computer science.  Aristotle and Socrates, long ago, used these types of relations in philosophy, it seems.

RDF is a "global standard way" of articulating subject-predicate-object relations.  It is a general tool.  It can articulate any subject-predicate-object relation.  It is very flexible.  It is very verbose.  There are lots of different syntaxes for expressing RDF; one syntax which seems to be catching on is the XML format of RDF. (I found another useful primer on all this here.)
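
To make the idea of a subject-predicate-object relation concrete, here is a rough sketch in Python using the rdflib package.  The namespace URI, entity name, and property name are made up purely for illustration; they do not come from any real taxonomy or vocabulary.

    # A rough sketch of one subject-predicate-object relation in RDF,
    # assuming the rdflib package.  All names below are hypothetical.
    from rdflib import Graph, Namespace, Literal

    EX = Namespace("http://example.com/report#")  # hypothetical namespace

    g = Graph()
    # Subject: AcmeCorp; predicate: reportsCash; object: 1000
    g.add((EX.AcmeCorp, EX.reportsCash, Literal(1000)))

    # The same graph can be written out in several syntaxes, including
    # the RDF/XML syntax mentioned above and the terser Turtle syntax.
    print(g.serialize(format="xml"))
    print(g.serialize(format="turtle"))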

One issue with RDF is that you can express ANY subject-predicate-object relation, whether it is logical or not.  OWL is used to express constraints on the subject-predicate-object relations.  (It is also somewhat of a "short cut" approach to creating "common" relations, it seems.  But, let's not focus on that.)  It seems as though OWL is sort of like a "schema" for the RDF if you understand XML Schema or database schemas.  Basically, OWL expresses constraints on the RDF which software can use to determine if the RDF expressed "follows the rules" so to speak.  This is critically important, just like a database schema is important or an XML schema is important.
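
Here is a rough sketch, again using rdflib, of what those "schema-like" OWL statements can look like.  The class and property names are hypothetical, and keep in mind that OWL uses open-world semantics, so these read more like declarations a reasoner can work with than strict validation rules in the database-schema sense.

    # A rough sketch of OWL acting like a "schema" over RDF (rdflib assumed;
    # all names are hypothetical).
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, RDFS, OWL, XSD

    EX = Namespace("http://example.com/report#")
    g = Graph()

    # Schema-like statements: ReportingEntity is a class, and reportsCash
    # is a property that connects a ReportingEntity to a decimal value.
    g.add((EX.ReportingEntity, RDF.type, OWL.Class))
    g.add((EX.reportsCash, RDF.type, OWL.DatatypeProperty))
    g.add((EX.reportsCash, RDFS.domain, EX.ReportingEntity))
    g.add((EX.reportsCash, RDFS.range, XSD.decimal))

    # Data that a reasoner or checking tool could then evaluate against
    # the statements above.
    g.add((EX.AcmeCorp, RDF.type, EX.ReportingEntity))
    g.add((EX.AcmeCorp, EX.reportsCash, Literal(1000)))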

OWL is very, very "powerful".  It goes far, far beyond what a database schema or an XML schema can express.  OWL can be used to express semantic meaning; it seems like ANY semantic meaning which can be articulated can be expressed in OWL.

XBRL's architecture uses "graphs" to express many things.  Those that created XBRL did this because XBRL needed flexibility.  The approach of using graphs gives you that flexibility.  You have subject-predicate-object relations in XBRL.  Again, remember that pretty much anything can be expressed in this manner, so clearly XBRL should be expressible in this manner.  And in fact some technical people "expressed XBRL in OWL" (I say this loosely).  You can see those OWL ontologies here.  It is something only your mother could love.  Basically, what they did was run a style sheet over the XBRL schemas and convert them into OWL.  They took one syntax, XML Schema, and converted it into another syntax, OWL.  Not really that useful for business people.  It might have some sort of technical use; that will be seen later.

Now, XBRL took some additional "short cuts".  RDF is built to express anything.  XBRL is built to express business information, a subset of "anything".  Could RDF/OWL be used to express the XBRL syntax?  Yes.  Is this useful?  I think it does have some utility.  But, there are better uses for OWL than expressing a logical model of the XBRL syntax.

So, XBRL is a "short cut" for expressing business information in a form that computers can make use of.  XBRL is a general format.  No one uses "XBRL".  Everyone uses some subset of XBRL, some application profile.  This is why the COREP taxonomy does not work with the US GAAP Taxonomy.  Every XBRL taxonomy or system has a different "application profile" because it uses a different architecture.

It seems to me that one thing OWL can be used for is to "express" or "document" those different application profiles.  Other things can be used to model an application profile of XBRL; UML, for example, could do this.  OWL could be a very, very valuable tool for documenting an application profile.  Today application profiles are either not documented at all or rather poorly documented in a Word document or PDF.  No computer can read those documents.  Computer programmers have to read the documents, extract information, and build applications to work with the different XBRL application profiles.  For example, here is the US GAAP Taxonomy architecture.  Here is the SEC test suite.  Here is the CEBS FINREP taxonomy architecture.

I really don't know if it is possible for an OWL ontology to be constructed and for software to automatically generate "tests" of the XBRL application profile.  That could be nice.  Either way, having a consistent way to document XBRL application profiles would be valuable.  UML could be that way.  OWL could also be that way, it seems.

But there is another thing that OWL can be used for.  What I am seeing is that anything can be expressed in OWL.  Well...almost anything.  There is one big constraint.  What you want to express has to be logical.  If it is not logical, it cannot be expressed.  Therefore, if it IS logical it CAN be expressed.  Additionally, you can see WHAT is expressed!  That is even more interesting and has more utility.

For example, OWL can be used to articulate where a taxonomy can be extended, what the information model can look like, what is allowed, what is not allowed, and so forth.  OWL basically can document how things work.  For example, you could use OWL to document how the US SEC XBRL filings "work".  By seeing that, you can determine things like whether it works the way you WANT it to work, or whether there is a better way.
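
Here is a rough sketch of the kind of "what is allowed" rule OWL could document, again using rdflib.  The class, the property, and the rule itself are hypothetical, made up only to illustrate the idea; something like "a [Roll Forward] must have exactly one ending balance".

    # A rough sketch (rdflib assumed; all names and the rule are hypothetical)
    # of documenting an application profile rule in OWL: a RollForward must
    # have exactly one ending balance.
    from rdflib import Graph, Namespace, BNode, Literal
    from rdflib.namespace import RDF, RDFS, OWL, XSD

    EX = Namespace("http://example.com/profile#")
    g = Graph()

    restriction = BNode()
    g.add((restriction, RDF.type, OWL.Restriction))
    g.add((restriction, OWL.onProperty, EX.hasEndingBalance))
    g.add((restriction, OWL.cardinality,
           Literal(1, datatype=XSD.nonNegativeInteger)))

    g.add((EX.RollForward, RDF.type, OWL.Class))
    g.add((EX.RollForward, RDFS.subClassOf, restriction))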

Another way OWL might help XBRL is by adding additional information to XBRL.  Now, XBRL itself can be used to add additional information.  For example, the definition linkbase of XBRL can be used to express "arcroles" and other things which can be used to express meaning.  For example, the XBRL Dimensions specification did this.  But is XBRL really the best way to express additional information?  The XBRL way has its pros and cons.  The OWL way also has its pros and cons.  Does it really matter?  XBRL and OWL are only syntax.  The important thing is that what is expressed is logical, that it is done in as standard a way as possible, and that it works.

One final thing which I want to mention before I wrap up this post is my realization that RDF/OWL does seem to have one very significant thing "missing".  I really don't know if I should say it is missing; it is more that to really make use of RDF/OWL information you have to build domain-specific software on top of it.  Like I said, RDF/OWL can be used to express anything.  Its biggest strength is also its biggest weakness.  Because it can be used to express anything, you have to write software which understands the specific subjects, the specific predicates, and the specific objects and does useful things with those relations.  There is one thing which I admit that I don't really grasp yet (at least one thing, could be more).  Software can be built which "learns" from the relations, building additional relations.  This can be useful, but I cannot grasp this right now.  This is in the realm of artificial intelligence.  Maybe this will work, maybe this will not.

By contrast, an XBRL processor only has to understand XBRL.  An XBRL processor can easily convert XBRL into RDF/OWL.  To understand the RDF/OWL syntax, but more importantly the SEMANTICS, as well as an XBRL processor does, you would basically have to rebuild the functionality which exists in an XBRL processor into an RDF/OWL processor.  Why would you do that?  Besides, XBRL processors don't even work at a high enough level; they still deal mostly with syntax, not enough with semantics.  And I am not talking about things like XBRL Formula which verify whether the semantics are correct.  I am talking about DOING USEFUL THINGS with the expressed semantics.  No general XBRL tool does this at the level a business person would find particularly useful these days.  I would also point out that XBRL is ahead of RDF/OWL in regard to having a working method of validating semantics.  XBRL has XBRL Formula; the Semantic Web folks have something in the works (i.e. they do realize that this is important).

It seems that no matter what happens, XBRL is going to have to fit into the Semantic Web.  Getting XBRL into RDF/OWL is trivial for an XBRL processor.  Doing anything useful with the RDF/OWL is going to take more than what XBRL processors offer these days.  What an RDF/OWL engine could learn from, say, a data dump of XBRL from something like the entire SEC XBRL filing database could be quite interesting.
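
To give a feel for how simple the conversion could be, here is a rough sketch of mapping a single XBRL fact (already pulled out of an instance by an XBRL processor) into RDF triples.  The mapping, the namespace, and all the names are made up for illustration; real XBRL-to-RDF mappings differ in their details.

    # A rough sketch (rdflib assumed; the mapping and all names are
    # hypothetical) of turning one XBRL fact into RDF triples.
    from rdflib import Graph, Namespace, Literal

    EX = Namespace("http://example.com/facts#")
    g = Graph()

    # One fact as an XBRL processor might hand it over.
    fact = {"concept": "CashAndCashEquivalents", "value": 1000,
            "entity": "AcmeCorp", "period": "2009-12-31", "unit": "USD"}

    fact_node = EX["fact1"]
    g.add((fact_node, EX.concept, EX[fact["concept"]]))
    g.add((fact_node, EX.entity, EX[fact["entity"]]))
    g.add((fact_node, EX.period, Literal(fact["period"])))
    g.add((fact_node, EX.unit, Literal(fact["unit"])))
    g.add((fact_node, EX.value, Literal(fact["value"])))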

So that is what I seem to be seeing.  Not totally sure if I am correct on all of this or any of this.  The next step is to float these ideas by some people who grasp these things better than I do and see what they have to say.  Discussions with some of them have yielded what I have thus far (i.e. this blog entry).  But, there is still a ways to go.

One thing that I can say with a pretty high level of confidence is that if you live in this information age and you are a business person and you don't understand what metadata is and what you can do with it, you are at a distinct disadvantage.

What is your opinion?

 

Posted on Saturday, January 9, 2010 at 07:45AM by Charlie

Auditing SEC XBRL Filings

Making sure that your SEC XBRL filing is correct is not a matter of technical wizardry, but rather of understanding the types of things which can go wrong.  If you understand the types of things which can go wrong, you can be proactive and make sure those things don't go wrong, and if they do, be sure to detect the error.

Creating an SEC XBRL filing is a lot like figuring out a Sudoku puzzle.  If you understand Sudoku then you know that there are not multiple answers to a puzzle; there is one answer.  This is also the case for an SEC XBRL filing: there is really only one right answer given a set of financial information one is trying to disclose.  If you think about it, that makes sense.  Would you really want there to be more than one right answer?  How useful would that be?

While the SEC does not currently require independent audits of XBRL filings submitted to them, you still need to have internal processes and checks on your work to be sure what you are creating is correct.  A step beyond this is having your internal audit function check to be sure your processes are good ones and that they are yielding quality results.  Further, it is just a matter of time before the SEC will require XBRL filings to be audited by an independent accountant.  While the accounting/auditing profession works to figure out and formalize what these independent audits of XBRL will look like, being ignorant of what is required to output a quality product is really not a good idea.

Accountants like checklists.  We use checklists when we issue a financial statement, a disclosure checklist.  Eventually these disclosure checklists will incorporate the things you need to do to be sure your XBRL filings are correct.  Here is a list of many of the types of questions you should be asking yourself, and ways to detect these types of mistakes so they can be corrected before you press that "Submit" button and your work goes to the SEC and is available for the entire world to see.

  • Do I have the right taxonomy? If you find yourself adding a lot of concepts and relations, you may want to spend a little more time seeking out additional taxonomy components you might have missed.
  • Do I have the right industry concepts? (if you are in a specialized industry) While some specialized industries such as banking and insurance have specific entry points to the US GAAP Taxonomy, other industry specific concepts are spread throughout the taxonomy and can be hard to locate.  Again, if it seems like concepts and relations probably should exist but you did not find them, they probably do exist.  You just need to spend more time looking.  Using a taxonomy viewer's search capabilities (in a good tool) you can generally find concepts by searching for them.  Be aware that they may not be called exactly what you would call them, so your searches of the taxonomy have to be pretty creative these days.  Eventually someone will create a nice synonym database which will make this process easier.
  • Am I extending the taxonomy correctly?  Did you build your [Table]s correctly?  Did you build your [Roll Forward]s correctly?  The US GAAP Taxonomy is not random or arbitrary.  It was built in a particular way.  You need to understand how things are built so that you can build your extensions so that they are consistent with the way the US GAAP Taxonomy is constructed.  Eventually software will provide more help; today this is more challenging.  Some taxonomy creation tools provide validation (information model validation) which helps you detect inconsistencies so you can correct them.
  • Am I allowed to extend the taxonomy in a specific location?  An extreme example will show what is meant here.  It would make no sense to add concepts on the balance sheet which were siblings to "Assets" or "Liabilities" or "Equity".  What other broad categories are there on the balance sheet?  As you get lower and lower into the details it becomes more challenging to understand if you really should be extending a taxonomy in a specific area.
  • Should I create a new taxonomy concept?  When is it appropriate to create a new taxonomy concept?  For example, a minority of filers (3 of about 500) created a concept "Net Changes in Cash and Cash Equivalents" which is the sum of all changes in cash on the statement of cash flows.  The other 497 did not.  Those numbers make it seem hard to justify an entity creating a new concept rather than using an existing concept.  In the case of net changes in cash and cash equivalents it is rather obvious that a new concept should not have been created.  In other cases it is not quite as obvious.  Judgement is needed, and knowledge of how XBRL works is necessary to help you pick the appropriate course of action.
  • Are all the XBRL instance fact values associated with the correct XBRL taxonomy concepts?  Should you be using the concept "Cash" or "Cash and Short Term Investments" or "Cash and Cash Equivalents" or some other version of cash from the US GAAP Taxonomy?  Was some sort of tagging mistake made?  Many types of these errors can be detected by a computation not adding up correctly.  Other tagging mistakes will not be detected using the validation of computations.  Understanding XBRL well enough to know which approach to use for detecting tagging errors is important.
  • Are all the XBRL instance fact values associated with the correct context? (i.e. entity, entity segment, period, and so forth)  This is similar to detecting concept tagging errors.  Again, the validation of computations will help detect this type of error in many but not all cases.
  • Are all numeric XBRL instance fact values associated with the correct units? (i.e. dollars, Euros, shares, pure and so forth) This is likewise similar to detecting concept tagging errors. 
  • Are all numeric XBRL instance fact values associated with the appropriate decimal value? (i.e. rounded to thousands, millions, billions, or not rounded at all) This is likewise similar to detecting concept tagging errors. 
  • Are all numeric XBRL instance fact values properly associated with other XBRL instance fact values? (i.e. are the computations correct)  Clearly all your numbers should add up properly where they should add up.  This can be similar to detecting tagging errors, if you have the computation expressed in your business rules.  Realize that the US GAAP Taxonomy does not express all computations.  This does not mean that your numbers don't need to add up.  Just like in your printed financials it can be embarrassing when your "Increase (Decrease) in Receivables" on your cash flow statement does not tie to the actual change in receivables on your balance sheet, the same relations need to work in your XBRL filing.  One big missing set of computations is those for [Roll Forward] type relations.  XBRL US does not publish those computations.  They are easy enough to auto-generate from the XBRL taxonomy.  You need to do that to create the XBRL Formulas (the best way) or some other means of verifying the accuracy of your XBRL instance fact values.  (A minimal sketch of this kind of automated computation check appears after this list.)
  • Is the polarity of numeric fact values correct? (i.e. did you enter a number as a positive when it should have been entered as a negative)  People tend to confuse how a number is presented and how it should be put into the XBRL instance: as a positive or as a negative.  Different people have different ideas on what should be shown as a positive and what should be shown as a negative on a financial statement.  For comparability purposes, XBRL had to get creators of financial information to agree.  That is why XBRL is not about how the number is presented; it is about communicating clearly whether something should be added or subtracted.  You can present it in your HTML or paper filing however you like.  Once you realize this, it is quite easy to test the polarity of a number: via the computation validation.  If a number is not involved in any computations, being sure the polarity is correct can be more challenging; it might even be something which needs to be checked manually.
  • Does the XBRL instance match the HTML or other version of the financial statements?  While this is a short term problem which will exist only while companies have to file both HTML and XBRL, you do need to be sure that what you are saying in your HTML and in your XBRL is the same thing.  Many companies are starting to realize that you can generate the HTML from the XBRL information.  This can make the process of reconciling the two formats substantially easier.  While you may not have to file the HTML with the SEC much longer, clearly you are going to want to look at the financial information expressed in that XBRL instance.  Even a "geek" cannot look at an XBRL instance and determine if it is correct.  You will likely always use some sort of rendering to help you be sure your XBRL instance is properly created.  What you do need to understand is the process used to create the human readable format so you understand what could break and give you misleading results which would potentially result in an incorrect conclusion on your part.
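
Here is a minimal sketch, in Python, of the kind of automated computation check referred to in the list above.  The concept names, the fact values, and the roll forward rule itself are hypothetical, purely for illustration; in practice the rules would come from the taxonomy or be expressed in XBRL Formula.

    # A minimal sketch of an automated computation check; all names, values,
    # and the rule itself are hypothetical.
    facts = {
        "CashBeginningBalance": 500,
        "IncreaseDecreaseInCash": 250,
        "CashEndingBalance": 750,
    }

    # Roll forward rule: beginning balance + net change = ending balance.
    computed = facts["CashBeginningBalance"] + facts["IncreaseDecreaseInCash"]
    if computed != facts["CashEndingBalance"]:
        print("Roll forward does not foot:", computed,
              "vs", facts["CashEndingBalance"])
    else:
        print("Roll forward foots.")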

Good use of things like XBRL Formula to create the business rules which drive automated tests, making sure that, say, all your numbers add up and that your taxonomy is constructed appropriately, can help you weave your XBRL Sudoku puzzle together correctly.  Understanding what can go wrong will help you understand the automated tests which you need to get things right.

All this seems like a great deal of work right now, and it is, because old processes are being used to generate a new type of output.  Besides, today an SEC filer submits the old HTML or ASCII filings to the SEC along with the XBRL filings.  This will change.  The SEC has already stated publicly that the HTML/ASCII filings will eventually go away and the only submissions will be in XBRL.  That makes sense; why would you want to have to reconcile the HTML/ASCII filings to XBRL filings?

Eventually, when the current legacy processes are replaced with processes appropriate for creating financial statements using XBRL, the process will not only be easier but the quality of the output will be improved, and the effort and resources required to achieve that high level of quality will be significantly reduced.  This is because all those XBRL tags can be leveraged to allow computers to perform many, many more of the checks humans are required to perform today.  Generally, accountants don't realize this yet, but once this software starts showing up they will recognize the benefits.

For more information on what can go wrong in your SEC XBRL filings see this blog post.  It contains the top 10 errors which are found in SEC XBRL filings.  Also, realize that XBRL is XBRL, be it filed with the SEC or with someone else.  Other XBRL is subject to the same types of errors as SEC filings.  The point is that this information is useful for XBRL submissions to any regulator, not just XBRL filings to the SEC.

Federation of European Accountants Issues Policy Statement on XBRL

The Federation of European Accountants (FEE) issued a policy statement on XBRL.  The six page policy statement titled eXtensible Business Reporting Language (XBRL) - The impact on accountants and auditors can be found here on the Web.

The policy statement goes further than anything else that I have seen coming out of the accounting community in saying that accountants are going to have to understand XBRL.  I agree with these statements.  The FEE states:

There are two pillars to XBRL: an IT literacy pillar and a technical accounting and auditing pillar.  Organisations wishing to adopt XBRL will need to consider both pillars in establishing their training needs to ensure successful use of XBRL.

The FEE also points out key areas of training for accountants, identifying necessary skills including the ability to:

  • Select the appropriate taxonomy and download from the appropriate web page;
  • Identify taxonomy elements required for any particular instance;
  • Identify when a valid taxonomy extension is required;
  • Create valid extensions; and
  • Create valid instance documents in line with the appropriate specification or user guide.

If you are an accountant anywhere in the world, particularly if you practice in the area of public accounting, you definitely want to take a look at this policy statement as a clue of what is coming down the road in the world of accounting and financial reporting.

FINREP Publishes Architecture, Packed with Helpful Information

FINREP (FINancial REPorting, see http://www.eurofiling.info/) has published an XBRL architecture which is worth taking a look at if you are a student of XBRL trying to figure out the best approach to using it.  FINREP, which relates to financial reporting by financial institutions in Europe and uses International Financial Reporting Standards, is published by CEBS (Committee of European Banking Supervisors).

The architecture document can be found here.  If you go to the EUROFILING home page (above) you can find additional information such as explanatory documentation and the draft taxonomy files (note that this is a draft version).

CEBS has always been great about transparency regarding what they are doing, why, and the reasoning that has gone into what they are doing.  The information they publish is intended for those participating in the COREP and FINREP projects, but it is also very helpful to others who are making use of XBRL and have to grapple with similar issues.  Why reinvent the wheel?  Why not learn from what these regulators are spending vast amounts of resources to figure out?

In particular, this presentation is incredibly helpful. This is a PDF of a 325 page PowerPoint presentation!  I would have hated to sit through that meeting.  But, a tremendous amount of work was put into the slide deck.  The presentation goes into detail about issues such as use of default dimensions, typed dimensions, and other things people don't tend to think about.

The bottom line here is that the FINREP architecture and other information is worth the effort to study.  Other regulators, corporations, and consultants who help these folks can glean massive amounts of useful insight from what FINREP has made available.  This architecture is not applicable only to FINREP; it is applicable to financial reporting more generally.  FINREP itself learned a lot from what COREP created.  A next step the FINREP and COREP folks will go through is to reconcile the two different architectures, perhaps resulting in one architecture used for both FINREP and COREP.

Key aspects of the architecture include:

  • It breaks a big reporting use case into bite-size pieces.  Reports are broken into "Tables".  There are "core" tables and "non-core" or detailed tables.
  • XBRL Dimensions are used to express these tables.  The multidimensional model is used consistently; there is no mixing of XBRL Dimensions and non-dimensional XBRL.
  • Tuples are not used; rather, XBRL Dimensions are used to express complex facts.  (See the presentation if you don't understand this; a rough sketch of the dimensional approach also appears after this list.)  This is consistent with the US GAAP and IFRS taxonomy architectures.
  • The architecture addresses the IT need to model data and the business user need to present information well.
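
To illustrate the dimensional approach, here is a rough sketch in Python of a fact that carries its dimensions with it, rather than being nested inside a tuple.  The concept, axis, and member names are hypothetical, made up for illustration; they are not taken from the FINREP taxonomy.

    # A rough sketch of a dimensionally qualified fact; all names are
    # hypothetical, not taken from any real taxonomy.
    fact = {
        "concept": "LoansAndAdvances",
        "value": 125000,
        "dimensions": {
            "CounterpartyAxis": "CreditInstitutionsMember",
            "PortfolioAxis": "HeldForTradingMember",
        },
    }

    # Facts for a whole "table" can then be filtered into cells by their
    # dimension members.
    def in_cell(fact, axis, member):
        return fact["dimensions"].get(axis) == member

    print(in_cell(fact, "CounterpartyAxis", "CreditInstitutionsMember"))  # True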

Dead Tree and Dead Rock Financial Statements

I ran across two photos of old, old financial statements which I really got a kick out of, and I thought I would share them.  Check these out.

I have previously heard people refer to paper-based financials as "dead tree format".  This takes the cake though.  Here is a "dead rock format" financial statement.

Early balance sheet

And here is one of those dead tree formats:

Wachovia National Bank balance sheet

Both of these photos come from Wikipedia.

Posted on Sunday, January 3, 2010 at 09:10AM by Charlie