BLOG:  Digital Financial Reporting

This is a blog for information relating to digital financial reporting.  This blog is basically my "lab notebook" for experimenting and learning about XBRL-based digital financial reporting.  This is my brainstorming platform.  This is where I think out loud (i.e. publicly) about digital financial reporting. This information is for innovators and early adopters who are ushering in a new era of accounting, reporting, auditing, and analysis in a digital environment.

Much of the information contained in this blog is synthesized, summarized, condensed, and better organized and articulated in my book XBRL for Dummies and in the chapters of Intelligent XBRL-based Digital Financial Reporting. If you have any questions, feel free to contact me.


More Improved Definitions: Fidelity, Integrity, Intersection

In my last blog post I updated my definition for a fact.  The definitions of three other terms were improved per feedback received.  To provide a bit of context, these terms relate to the notion of creating a verifiably true and fair representation of financial information within the Guide to Verification of an SEC XBRL Financial Report, but apply to other documentation as well.

As stated within the Guide to Verification of an SEC XBRL Financial Report, a financial report can be said to be valid if it possesses certain traits, which can be defined in general terms and, for clarity, are listed below to bring them to the reader's mind:

  • Completeness: Having all necessary or normal parts, components, elements, or steps; entire.
  • Correctness: Free from error; in accordance with fact or truth; right, proper, accurate, just, true, exact, precise.
  • Consistency: Compatible or in agreement with itself or with some group; coherent, uniform, steady. Holding true in a group, compatible, not contradictory.
  • Accuracy: Correctness in all details; conformity or correspondence to fact or given quality, condition; precise, exact; deviating only slightly or within acceptable limits from a standard.

While these four notions, which relate to the "trueness" and "fairness" of reported information, must exist for every fact reported by a financial report, they also need to exist when considering the financial report in its entirety.

Two other notions help bring the trueness and fairness of information at the fact level and at the report level into focus.  These are the improved definitions of those two terms:

  • Fidelity: Fidelity relates to the loyal adherence to fact or detail; exactness. The representation of the facts and circumstances represented within a financial report properly reflect, without distortion, reality.  High fidelity is when the reproduction (a financial report) with little distortion, provides a result very similar to the original (reality of company and environment in which company operates).
  • Integrity: Integrity is holistic fidelity. Integrity relates to the fidelity of the report in its entirety, of all parts of a financial report, from all points of view.  Integrity is holistic accuracy, accurate as a whole. Integrity is the quality or condition of being whole or undivided; completeness, entireness, unbroken state, uncorrupt. Integrity means that not only is each component of a financial report correct, but all the pieces of the financial report fit together correctly, all things considered.

In much of my documentation I also used integrity to mean something else.  I have untangled this terminology and introduced another term, intersection.

To understand integrity correctly, it is important to understand the notion of an "intersection".  An intersection is defined as:

  • Intersection: An intersection is a physical connection between two pieces of a financial report.  Generally an intersection is some report element such as a [Table], an [Axis], a [Member] or a Concept.  Intersections can be further explained by business rules.

An example will help you understand the notion of an intersection.  Consider the concept "inventories".  Inventories might appear as a line item on the balance sheet.  Total inventories might also be detailed, with a breakdown provided, within the disclosures.  While the label "Inventories" might appear on the balance sheet and perhaps "Total inventories" in the disclosures, that is actually one reported fact presented in two places within a financial report.  In both places the fact relates to the same concept; all other characteristics are likewise the same.  In essence the fact intersects the balance sheet with the detailed breakdown of inventories; it defines an intersection.

Looking at this from another perspective helps show the importance of intersections.  What if this information were modeled incorrectly: rather than being expressed as one single fact shared by two components of a financial report, two different concepts were used and two different facts were provided within the financial report?  In this case, the intersection between the two components would be masked.  As a result, errors could be introduced within the financial report and those errors would likewise be masked.  For example, if two facts are modeled and the balance sheet fact was one number and the detailed breakdown of total inventories was some different number, then the balance sheet and the detailed breakdown would not agree.
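
To make the consequence concrete, here is a minimal sketch in Python (the fact records and helper are hypothetical, invented for illustration; this is not any particular XBRL library) showing how a shared fact guarantees agreement by construction, while two separately modeled facts require an explicit business rule before the disagreement even becomes visible:

    # Modeled correctly: ONE fact, referenced by both the balance sheet and
    # the inventory disclosure. The two components intersect at this fact,
    # so they cannot disagree.
    shared_fact = {"concept": "Inventories", "period": "2012-12-31",
                   "entity": "ABC", "value": 1500}
    balance_sheet = [shared_fact]
    inventory_disclosure = [shared_fact]

    # Modeled incorrectly: two concepts and two facts. The intersection is
    # masked, and nothing forces the two values to agree.
    balance_sheet_bad = [{"concept": "Inventories", "period": "2012-12-31",
                          "entity": "ABC", "value": 1500}]
    inventory_disclosure_bad = [{"concept": "TotalInventories", "period": "2012-12-31",
                                 "entity": "ABC", "value": 1350}]

    def value_of(component, concept):
        """Return the value reported for a concept within a component."""
        return next(f["value"] for f in component if f["concept"] == concept)

    # With the shared fact, agreement is guaranteed by construction:
    assert value_of(balance_sheet, "Inventories") == value_of(inventory_disclosure, "Inventories")

    # With two facts, agreement must be checked by an explicit business rule;
    # if no one writes that rule, the error stays hidden:
    rule_holds = (value_of(balance_sheet_bad, "Inventories")
                  == value_of(inventory_disclosure_bad, "TotalInventories"))
    print("Balance sheet agrees with disclosure:", rule_holds)  # False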

Part of integrity is that there are no such modeling mistakes and therefore no mathematical errors which could possibly be masked by a modeling mistake.

Some software leverages these sorts of intersections; other software does not.  I pointed out some time ago that intersections were leveraged within the Firefox add-on for XBRL.  At that time I used the term hypercube jumping to describe this.  The XBRL Cloud Viewer leverages these sorts of intersections.  CoreFiling's Magnify does also.  Other software likely does as well.

The mistake I made with the term integrity was that I was using it to mean both the definition of integrity above and the notion of an intersection.  The improvement is that I recognize that these are two different notions, so they need two different terms, even though the two notions are related to a degree.

Updated Definition of Fact: for the Purpose of Interpretation

I stumbled upon something which I find rather important.  It may sound subtle, but I believe the distinction is important.

Someone provided feedback related to the definition of a fact.  They felt that I did not have the definition quite right.  So, I looked into this.  I knew that the XBRL Abstract Model 2.0 had to define a fact, so I thought I would have a look at their definition to see if there was anything there which I could leverage (...er...ah...steal).

The XBRL Abstract Model 2.0 uses the term "DataPoint" rather than "fact".  It also uses other different terms such as "aspect".  See here for a reconciliation of this terminology.  This is their definition of DataPoint:

The DataPoint metaclass defines a reportable item of business information contextualized by a set of aspects that identify or describe the item (this is an XBRL primary item fact), and a corresponding data point value (the reported item of business information).

The key word here is contextualized.  This dictionary defines contextualized as:

to place (a word, event, etc.) into a particular or appropriate context for the purpose of interpretation or analysis

"For the purpose of interpretation of analysis."  THAT is exactly how one should look at defining information, so that the information can be property interpreted!

And so, this is my revised definition for a fact as used in my digital financial reporting model:

A fact defines a single, observable, reportable piece of information contained within a financial report, or fact value, contextualized for clear interpretation or analysis by one or more characteristics.  Numeric fact values must also provide the additional traits "units" and "rounding" to enable appropriate interpretation of the numeric fact value.  Facts may have zero or many parenthetical explanations which provide additional descriptive information related to the fact.

(If you don't understand the term characteristics, please see here for specific examples and the Financial Report Semantics and Dynamics Theory for more information.)
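
As a rough illustration of that definition (a sketch only; the names and structure below are my own invention, not part of any XBRL specification or software), a numeric fact might be represented as a value contextualized by a set of characteristics, plus units and rounding:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Fact:
        """A single reportable piece of information, contextualized by
        characteristics so it can be clearly interpreted or analyzed."""
        value: object
        characteristics: dict                  # e.g. concept, period, reporting entity
        units: Optional[str] = None            # required for numeric fact values
        rounding: Optional[str] = None         # required for numeric fact values
        parenthetical_explanations: list = field(default_factory=list)

    # "Assets as of December 31, 2012, for the consolidated entity is $100,000."
    assets = Fact(
        value=100_000,
        characteristics={
            "concept": "Assets",
            "period": "2012-12-31",
            "reporting_entity": "Example Company (consolidated)",
        },
        units="USD",
        rounding="to the nearest dollar",
    )
    print(assets)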

Why is this important?  That is exactly what creators of financial reports should be thinking when they create those reports: how the information within the report will be interpreted.  This is why any important characteristics must be articulated along with values, and also why it is better to explicitly state what you are saying rather than leave it up to users of the information to infer meaning, because they may infer the wrong meaning.

Another term for this is disambiguate which is defined here as:

to establish a single semantic or grammatical interpretation for

Ambiguity, the enemy of automated reuse, is

"doubtfulness or uncertainty as regards interpretation" or "vagueness or uncertainty of meaning".

Unambiguous is:

admitting of no doubt or misunderstanding; having only one meaning or interpretation and leading to only one conclusion

Please don't make the mistake of confusing this with the need for, and the freedom to exercise, professional judgment in the creation of a financial report.  That is not the point or the issue here.  Accountants alone determine what should be communicated within a financial report.  What information to report is not the issue here.

What is at issue here is that if accountants do say something, what they said should be interpreted the same way by all computer software applications and their users.  What I am talking about here are the objective facts, the information level.  The information should have the same meaning to all users of the information.

How reported facts are used, whether reported facts are used at all, the implications or ramifications of the meaning those facts hold, and so on are up to the users of those facts, the users of the reported information.  That is a different level of interpretation, more on the level of knowledge.

The point here is that all users of any financial report information set should fundamentally interpret the fundamental facts, the core building blocks, base meaning of the reported information, in the same way:  Assets as of December 31, 2012, for the consolidated entity is $100,000.  Facts such as this are objective, not open to interpretation, the fact itself has one meaning.  Is having $100,000 in such assets good or bad?  That is a different question, certainly open to interpretation.

This distinction is important to understand.

IFRS Foundation publishes XBRL Formula Linkbase 2012

In a message posted to XBRL-Public, the IFRS Foundation announced the release of business rules for the IFRS taxonomy. The business rules are made available using XBRL Formula.  This is the text of their post should you not be a member of the XBRL-Public list:

The IFRS Formula Linkbase 2012 is now available to download. The current version of the formula linkbase is an updated version of the formula prototype which was released in October 2011. The 2012 formulae are designed to work with the IFRS Taxonomy 2012. The majority of the improvements between the current version and the prototype are related to the content.

The formula linkbase can be used with software packages supporting the XBRL formula specification 1.0 and allows for additional validations of the reported facts. The IFRS Formula Linkbase 2012 is developed in a generic manner, which means it can be used directly on the filings created, on the basis of the IFRS Taxonomy (instance documents) or the company specific extensions to the IFRS Taxonomy (filer extension taxonomy and instance document). The guidance documentation for formula linkbase is available to download with the current release.

Technical changes between the current version and the prototype include:

· Positive and negative formula validation has been placed in separate files.
· Redundant members in the filter for the Dimension aggregation formulae are now removed.
· Precondition expressions in the validation have been simplified (Earnings per share formulae).

For further information and to download the files, please visit
http://www.ifrs.org/tools/IFRS+Taxonomy+Formula+Linkbase+2012.htm

Posted on Tuesday, August 14, 2012 at 06:58AM by Charlie

Every Implementation is Model-based: It is Really about Model Level

I was seeing something incorrectly, which made it more difficult to see why problems in the form of complexity and errors occur.  Basically, my perception was incorrect.  The line of thinking below seems to be more correct about the advantages of "model-based" approaches to working with XBRL.

The bottom line is that it is not about whether to use a model-based approach; you have no choice but to use some sort of model-based approach.  The real question is the "level" of your model.

This is what I mean.

When you want to get a computer to do some sort of work for you there is always a need to use some sort of "model".  It is the model which enables you to express your business problem in a form where you can communicate with a computer and the computer can understand you.

The problem with most software vendors' software when it comes to XBRL is that they have exposed the wrong model level to both business users and other software developers who have to use their XBRL processors and other business reporting infrastructure.  Almost universally, software vendors expose the XBRL technical syntax level to both business users and software developers.

That means a number of things.  First, everything is harder and more complex because everything relates to the XBRL technical syntax model, which is complex, is hard for business users to relate to, offers a lot of flexibility because there are numerous ways to achieve the same thing within the XBRL technical syntax, and so forth.

But the biggest missing piece from XBRL processing capabilities implemented at this level is the notion that semantics even exists in that model.

That means two things.  First, business users have to "reconcile" the semantic model of their business problem to the exposed technical syntax model themselves.  And so, while business users can do this correctly because the XBRL technical syntax can correctly express the business semantics of the information, business users can also get things wrong because the flexibility exposed at the XBRL technical syntax level offers many ways to achieve some result.

Therefore business users could pick a right or a wrong approach, and in order to understand the difference, the business user needs more training at the complex technical syntax level to be sure they are expressing the semantics (which they do understand) correctly using the technical syntax (which most business users will never understand and every business user will struggle to understand).

But what is even worse than the above is that, because these XBRL implementations are built using the XBRL technical syntax model, they have no notion at all of any semantic model.  Not only does this make things complex and therefore more costly to work with, but there are significant gaps between the XBRL technical syntax and the business semantics which must now be crossed.

The only way to cross these gaps is for business users to work with IT people, project by project, system by system, business report by business report; because business users simply do not have the means to cross this gap by themselves.

There are exactly two types of processes which can be used to identify semantic mathematical and logical mistakes/errors: automated processes and manual processes.  Automated processes are best because they are performed by computers, they are more reliable, they can get you to 100% reliability because computers do the work consistently every time without fail, and overall automated processes cost less.   Manual processes must be employed whenever automated rules are not provided or where automated processes simply cannot achieve the result. In both those cases, humans must do work manually.
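
As a sketch of what such an automated process might look like (the business rule and reported values below are hypothetical, invented only for illustration), a semantic-level rule such as "Assets = Liabilities + Equity" can be checked by a computer consistently, every time:

    # Hypothetical reported facts and a hypothetical semantic-level business rule.
    reported = {
        "Assets": 100_000,
        "Liabilities": 60_000,
        "Equity": 40_000,
    }

    business_rules = [
        # (description, function that returns True when the rule holds)
        ("Assets = Liabilities + Equity",
         lambda facts: facts["Assets"] == facts["Liabilities"] + facts["Equity"]),
    ]

    def run_rules(facts, rules):
        """Run every business rule against the reported facts; the computer
        does this the same way every time, without fail."""
        return [(description, check(facts)) for description, check in rules]

    for description, passed in run_rules(reported, business_rules):
        print(("PASS" if passed else "FAIL") + ": " + description)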

All that "stuff" that the business user and IT technical users would need to create could, and should, be built into any business reporting platform or system.  Why would it not be built in?

If these things needed to cross the gap between semantics and technical syntax were built into systems, then business users could simply configure the business semantic level mathematical and logical rules, without the need for technical people to get involved.

The business users would not need to worry about technical syntax verification/validation with either modeling approach, technical syntax model or semantic level model.  Both of these modeling levels clearly need to verify that the technical syntax is correct.

It is bridging this gap between the technical syntax modeling level and the semantic modeling level which provides the most value to business users, because it both hides the complexity of the technical syntax level and keeps things safe and technically correct for the business user, because it ONLY allows business users to express the semantic mathematical and logical rules correctly.

Now, business users can also make the mistake of expressing the semantic mathematical and logical rules inconsistently.  But incorrectly and inconsistently are different problems caused by different things.  If a business reporting system provides something like a set of profiles from which a business user might pick, each profile correctly configured by a highly-skilled IT engineer or architect, then the business user would be free to pick the profile they desired and safely work with the XBRL technical syntax using that profile.  Or maybe the business user would pick some other profile, but still a set of correct and safe choices.  Or perhaps the system allows, in addition to picking from some set of available profiles, the ability for a business user to collaborate with an IT technical person with the right skills to define or change an existing profile.  Each of these approaches enables a different level of flexibility, but each also assures that the implementation of the technical syntax is consistent with best practice uses of the technical syntax to correctly express business semantics.
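
A sketch of what such a set of profiles might look like (the profile names and constraints below are hypothetical, invented only to illustrate the idea of a pre-configured, safe set of modeling choices):

    # Hypothetical profiles: each is a set of modeling constraints configured
    # once by a highly-skilled IT engineer or architect, then simply picked
    # by the business user.
    profiles = {
        "sec-style-filing": {
            "axes_allowed_only_within_tables": True,
            "duplicate_facts_allowed": False,
            "every_computation_must_have_a_rule": True,
        },
        "simple-internal-report": {
            "axes_allowed_only_within_tables": True,
            "duplicate_facts_allowed": False,
            "every_computation_must_have_a_rule": False,
        },
    }

    def pick_profile(name):
        """The business user picks a profile; every technical choice within it
        has already been made correctly and safely on their behalf."""
        return profiles[name]

    active_profile = pick_profile("sec-style-filing")
    print(active_profile)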

Now, XBRL International can solve this at a higher level: these "profiles" can be agreed to at a higher level and shared between implementations and taxonomies, making all this even safer and simpler.  That is what the XBRL Abstract Model 2.0 (created by XBRL International) does.  It makes the people articulating taxonomies recognize that, in order to achieve true interoperability, they must also make some of these decisions, or select which options should be used, in essence articulating their implementation profile of how the XBRL technical syntax should be used for their system.

Some software vendors are beginning to implement semantic level layers within their software offerings.  These semantic layers offer this sort of higher-level modeling.  Each implementation does not have to figure out the correct architecture; that has been done by highly-skilled technical users who have a true grasp of the XBRL technical syntax.

I see at least three big advantages of using such a modeling approach with a higher-level semantic model built right in with other XBRL processing capabilities:

  • Ease of switching between technical syntaxes.  Say, between different XBRL technical syntax options or even versions, or even totally different syntaxes such as the W3C Government Linked Data RDF/OWL Data Cube Model or some other form of XML (see the sketch after this list).
  • Safety.  Business users can work safely at the business semantics level and not make bad technical choices; most reporting problems could likely be solved without IT assistance at all, while still following good IT practices.  But, if need be, business users and IT professionals could collaborate to create new profiles as necessary.
  • Ease of use.  Complexity is significantly reduced because business users work at the semantic level, not the technical syntax level.
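
As a toy illustration of the first advantage (the serializations below are invented and drastically simplified; they are not real XBRL or RDF Data Cube output), the same semantic fact can be rendered into different syntaxes while the semantic model itself stays the same:

    # One semantic fact, rendered into two invented, simplified syntaxes.
    fact = {"concept": "Assets", "period": "2012-12-31",
            "entity": "ABC", "value": 100_000, "units": "USD"}

    def to_xml_like(f):
        """Render the fact as a made-up XML-style string."""
        return ('<fact concept="{concept}" period="{period}" '
                'entity="{entity}" units="{units}">{value}</fact>').format(**f)

    def to_triples_like(f):
        """Render the fact as a made-up list of subject/property/value triples."""
        subject = "fact1"
        return [(subject, key, str(value)) for key, value in f.items()]

    # The semantic model (the dict) stays the same; only the syntax changes.
    print(to_xml_like(fact))
    print(to_triples_like(fact))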

Understanding the moving pieces at play here will help you pick the software solution which provides you with the appropriate level of flexibility.  Not everyone desires 100% flexibility, because they may not need it or they wish to avoid the expense involved with basically starting their implementation from scratch.

Posted on Thursday, August 9, 2012 at 08:54AM by Charlie

Digital Financial Report: Criticize by Creating

Michelangelo, one of the great creators in Western history, put it succinctly:

"Criticize by creating."

I have been maintaining what I ended up calling a "reference/model implementation" of a digital financial report since about 2004 or so.  The intent of the reference/model implementation was to put all the different "patterns" which I discovered within financial reports together to see if I could get everything to work correctly.

This process was a struggle.  The good thing is that I learned a lot by undertaking that struggle.

Software did not work correctly; even when software would detect errors, different software reported different sets of errors.  Few things were documented, few examples were provided, everyone seemed to have a different opinion as to what the "right" approach was, and many people were secretive and did not share information; I could go on, but I think you understand what I am saying.  The bottom line is that it was hard to create examples and it was even harder to make sure you did not make any mistakes.

I am putting the final touches on my next iteration of a reference/model implementation.  This time things are different.  I cannot get into details now, but I can say that software is improving significantly, there are plenty of examples to learn from, there is less need to deal with things at the XBRL technical syntax level, there is more agreement as to business report semantics; there are other reasons things will be different.

These disclosure templates are a step toward understanding what a digital financial report expressed using XBRL might best look like.  Many of these are more complicated examples.  I have spent the last year or so understanding them, piecing them together into an improved reference/model implementation of a digital financial report, and trying to understand how best to create such reports.

Why do I go through all this effort?  Why do I persist in this struggle?  To learn.  Here is the way I see it.  This "SEC XBRL experiment" and these other such XBRL experiments around the world will either succeed or fail.  Sticking with the SEC experiment, one of two scenarios is going to play out:

  1. Abandon XBRL.  The SEC and public companies are going to walk away from XBRL, abandoning this experiment for being too costly for public companies with too little benefit in terms of usable information.
  2. Embrace digital financial reporting.  If it is embraced and used, clearly it will have to work correctly.

There are many different definitions of "embrace digital financial reporting".  It could be embraced in certain progressive areas of the world, it could be adopted globally, it could be adopted only for public company financial reporting, it could use IFRS only, it could use IFRS and US GAAP, it could include private companies, not-for-profits, and state and local government financial reporting; there are lots of permutations and combinations which are possible.

One thing seems pretty clear. The probability of the "abandon XBRL" scenario being part of our future is going down.  With the investment companies are making trying to get XBRL to work appropriately it is hard to believe that those companies are going to want to simply walk away from XBRL.  But maybe.

Either way, both to avoid the abandonment of XBRL and to have a contingency plan for the global adoption of digital financial reporting, I struggle to figure out how to best harness this technology for accounting and financial reporting.

And that is why I continue to create things such as this reference/model implementation of a digital financial report.  To learn about the tools, to help the accounting profession learn about the tools, and to otherwise figure all this stuff out.

So, you don't think that what I am creating is what digital financial reporting should be like, how it should work?

Criticize me by creating an alternative.  Prove that your alternative works better than what I have created.  Or, if you don't have the wherewithal or understanding to create your own example, look at my example and the examples of others, ask good questions, and reach your own conclusion as to what might be the best approach.

Let me explain what I have put together and why and the criteria I use to evaluate it.

A financial report has many components.  A component is simply a piece of a financial report.  A component is defined as a set of facts which go together for some specific purpose within a financial report.  A component can also be broken down into subcomponents.
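
Sketched in the same hypothetical, simplified terms as the earlier examples (my own invention for illustration, not any actual software model), a component is just a named grouping of facts which can itself be broken down into subcomponents:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Component:
        """A piece of a financial report: a set of facts which go together
        for some specific purpose, possibly broken down into subcomponents."""
        name: str
        facts: list = field(default_factory=list)
        subcomponents: List["Component"] = field(default_factory=list)

    balance_sheet = Component(
        name="Balance Sheet",
        subcomponents=[
            Component(name="Assets"),
            Component(name="Liabilities and Equity"),
        ],
    )
    print([s.name for s in balance_sheet.subcomponents])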

The reference/model implementation I created has about 30 components.  Each component is provided for two reasons. 

  1. Provide examples of how to model different components of a financial report. 
  2. Show how the components of a financial report fit together correctly.

The reference/model implementation is a balance between providing too little and providing too much.

On the one hand, the reference/model implementation digital financial report should look like a financial report.  On the other hand, real financial reports can be quite large, repeat the same sorts of things many times, and be overwhelming examples to work with because of their size.  The reference/model implementation looks enough like a financial report, and has the pieces of a typical financial report, and therefore will not confuse accountants who understand what a financial report should look like.  But the reference/model implementation also has all the moving pieces which need to interact with one another correctly.

Everything in the reference/model is there for a specific reason.  Accounting is well understood and the reference/model is not about accounting and not about changing accounting or financial reporting.

The reference/model is about figuring out how to use structured mediums such as XBRL to articulate information which is expressed today using unstructured mediums such as paper and electronic paper-type mediums such as HTML, PDF, or Microsoft Word.  The reference/model is about figuring out what a digital financial report should look like, all things considered.

The reference/model implementation "works correctly" by one definition of works correctly.  Each aspect of "correctly" can be shown and also "incorrectly" can be pointed out because "correct" is so explicitly defined.  (This is as opposed to the situation where correct is not well defined and therefore it is hard to figure out if something is, or is not, correct.)  If a modeling approach is changed in one area of the reference/model implementation which breaks the model in another area, that modeling option is not considered as an option because it cannot be made to work.

It is the objective balancing of all the allowable options, and the fact that when used together the financial report works correctly from a financial reporting perspective and from a technical perspective, which decides whether some modeling approach is appropriate or inappropriate.  The intent here is to minimize subjectivity.  When multiple options work, the option which seems to work the best, all things considered, is the one which is used.

While the reference/model implementation is correct by this author's definition of correct, other definitions of correct are possible and other definitions of "best modeling approach" are possible.  That other approach could be a slight tweaking of this reference/model implementation or it could be a totally overhauled version.  However, any other version of any digital financial report should be able to pass the criteria established for this reference/model implementation.

Others may have additional criteria which a digital financial report must meet.  Perhaps this author missed something or for some other reason neglected to include an important aspect of a digital financial report.  If that is the case, the reference/model implementation should be tested against those criteria.  On the other hand, any other implementation of a digital financial report should either be able to (a) pass this author's criteria or (b) show why this author's criteria are incorrect.

The criteria which were used to judge my reference/model implementation are enumerated here.  These are the self-imposed criteria which were used to evaluate this reference/model implementation and define "correct":

  • Every model structure is logical and consistent.  Meaning there are no inconsistent and therefore perhaps confusing or potentially misinterpreted modeling situations.  For example, an [Axis] as part of a [Table] definition makes sense; an [Axis] within a set of [Line Items] does not (see the sketch after this list).
  • Every computation is expressed and proven to work correctly.  Every computation must be proven to work correctly by passing one or more business rules.  If a computation relation exists and it is not expressed, then there is no way to tell if the computation works correctly per the XBRL medium.
  • No duplicate facts.  Duplicate facts result from modeling errors and therefore should not exist.
  • Everything is consistent.  If there is no specific reason which can be articulated to justify an inconsistency, then you are being inconsistent and one of the approaches must be dropped.  Inconsistencies cause additional training costs and place an unnecessary burden on the user to somehow rationalize the inconsistency.
  • Each property is correct.  Each property of any component, fact, report element, or parenthetical explanation must be correct from a business meaning or semantics perspective.
  • Meaning can be logically explained to a business user.  The meaning of each and every aspect of the digital financial report can be explained, logically, to a business user.  If the meaning cannot be explained, then it cannot be considered to be correct.
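
As one example of how a criterion such as the model structure rule above might be checked automatically (a sketch only; the structure representation is hypothetical and much simplified compared to a real taxonomy):

    # Hypothetical, simplified model structure check for the first criterion:
    # an [Axis] makes sense as part of a [Table], not within [Line Items].
    model_structure = [
        # (report element, parent report element)
        ("Inventory [Axis]", "Inventory [Table]"),          # logical
        ("Scenario [Axis]", "Balance Sheet [Line Items]"),  # not logical
    ]

    def structure_errors(structure):
        """Flag any [Axis] whose parent is not a [Table]."""
        return [(element, parent) for element, parent in structure
                if element.endswith("[Axis]") and not parent.endswith("[Table]")]

    for element, parent in structure_errors(model_structure):
        print("Model structure error: " + element + " should not appear within " + parent)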

Do you feel that I am missing some criteria?  Please let me know.  I will consider your feedback, determine if it seems like an appropriate criterion, maybe ask you some questions, and then improve my criteria.  I have no problems with making things better.

Think about taking me up on my challenge.  It is important that we as accountants figure out how to properly harness and use, or not use, digital financial reporting within our profession.

Who am I to define "works correctly"?  Fair question.  But if I am not the appropriate party to establish such a definition, perhaps you can point me to a better definition or improve upon my definition.