BLOG:  Digital Financial Reporting

This is a blog for information relating to digital financial reporting.  It is for innovators and early adopters who are ushering in a new era of digital financial reporting.

Much of the information in this blog is summarized, condensed, better organized, and more clearly articulated in my book XBRL for Dummies and in the three documents on this digital financial reporting page.

World's First Expert System for Creating Financial Reports 

Remember when I posted that I had created the world's first machine-readable financial reporting checklist?  Well, I now have a machine that can read that checklist.

I believe that a software developer and I have created what is, as far as I am aware, the world's first expert system for creating financial reports.

But what is even more interesting is that this expert system is driven by a global standard: it is an XBRL-based general purpose business reporting expert system. The system takes the form of both a tool that is very approachable by business professionals and an API.

There are many sources that define what an expert system, or what people now tend to call a "knowledge based system", is. This web site points out two very important things. First, the components of an expert system. Second, which is even more interesting, the notion of a general purpose expert system. Imagine an expert system where the syntax of the fact database and knowledge base is a global standard: XBRL.

The diagram at the bottom of this post explains the components of an expert system.  Here is how we implemented those expert system components in our software:

  • Knowledge acquisition: The domain knowledge, represented in the form of machine-readable business rules, was created manually (i.e. not via machine learning). Accordingly, the application has interfaces for creating business rules.
  • Knowledge base of rules: The knowledge base of rules is represented 100% using the XBRL technical syntax.  The knowledge base of rules includes an XBRL taxonomy schema, XBRL linkbases, and XBRL Formulas.  I defined some XBRL arcroles that are used in the XBRL definition relations to represent specific types of relationships.
  • Fact Database: The fact database is an XBRL instance or a set of XBRL instances. It can range from one XBRL instance, such as a single financial report, to all the periodic reports of one entity, to the reports of some set of reporting economic entities, all the way up to every XBRL instance that makes up the SEC EDGAR database.
  • Reasoning, Inference, Rules Engine: The reasoning/inference/rules engine is an XBRL processor plus an XBRL Formula processor plus additional processing that overcomes specific deficiencies in the XBRL Formula specification. The primary deficiencies are the lack of chaining (we support forward chaining), the lack of inference logic to derive new facts using the rules of logic, and deficiencies in specific types of problem solving logic (which we added via the XBRL arcrole definitions).
  • Justification and Explanation Mechanism: The justification and explanation mechanism provides information, generally in a controlled natural language format that is very readable and understandable to business professionals, plus an "audit trail" that enables a business professional to trace any piece of information all the way back to its origin: a financial report fact or a knowledge base business rule.
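Hypothetically, the interplay of these components can be sketched in a few lines of Python (all names here are illustrative; the real system represents rules and facts in XBRL, not Python):

```python
# Illustrative-only sketch; the actual application stores rules as XBRL
# taxonomies/formulas and facts as XBRL instances, not as Python objects.

# Fact database: facts reported in a financial report.
facts = {"Assets": 100.0, "Liabilities": 60.0}

# Knowledge base of rules: (derived concept, inputs, computation) triples
# stand in for machine-readable business rules.
rules = [
    ("Equity", ("Assets", "Liabilities"), lambda a, l: a - l),
]

# Reasoning/inference/rules engine: forward chaining - apply rules until
# no rule can derive a fact that is not already known.
trace = []  # audit trail for the justification/explanation mechanism
changed = True
while changed:
    changed = False
    for concept, inputs, fn in rules:
        if concept not in facts and all(i in facts for i in inputs):
            facts[concept] = fn(*(facts[i] for i in inputs))
            trace.append(f"{concept} = {facts[concept]} derived from {inputs}")
            changed = True

print(facts["Equity"])  # 40.0
print(trace[0])
```

The audit trail produced by the loop is the seed of a justification and explanation mechanism: every derived fact records which rule and which input facts produced it.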

Using the notion of "profiles", the application supports US GAAP-based financial reporting, IFRS-based financial reporting, and what I call a "general profile": an architecture to which any business reporting scheme simply supplies its metadata. Here is a brief initial video that I created to help show the GUI.

Here is a document that helps you understand the current validation capabilities of the application. Why is this important to understand? Because it helps you understand the knowledge in the application's knowledge base of rules and the capabilities of the reasoning/inference/rules engine.

In my view the approach that we took to create this application is very interesting and provides insight that could help others leverage the XBRL global standard. We are very happy to help others who want to understand what we believe we have created. If you want additional information, please contact me. Additional information will also be provided on this blog.

 

Components of an Expert System:

(Click image for larger view)

Posted on Thursday, April 27, 2017 at 12:42PM by Charlie

Processing Complex Logical Information or Structured Knowledge

(Please consider this a draft at the moment, a work-in-progress. I am trying to get this 100% precise.  That is doable, but this is a painstaking task because it is so detailed and there are parts that I am pulling together and learning as I write this information.)

Artificial intelligence is coming, perhaps sooner than you might realize. For example, the article The Use of AI in Banking is Set to Explode says that 32% of all banks use some sort of predictive technology. This Wired article starts off:

"It's hard to think of a single technology that will shape our world more in the next 50 years than artificial intelligence."

And this article, The 5 Jobs Robots Will Take First, points out that accountants are high on the list of those who will be impacted by AI. My personal view is that the best way to protect your job is to learn as much as possible about AI and how it works.

A financial report is complex logical information.

Before XBRL, a financial report was unstructured information, and therefore the only way you could interact with that financial report was to have a human who understands financial reports read the report and pull information out. Or, possibly, you would write a computer algorithm that parsed the unstructured text to try to glean information from the financial report.

With XBRL, information reported in a financial report is structured and can still be read by humans using renderings generated from the structured information; but the information can also be read by machine-based processes directly.  (See the video How XBRL Works for more information about the difference between structured and unstructured information.)
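To make the difference concrete, here is a small illustrative Python contrast (the sentence, pattern, and fact dictionary are invented for this example) between parsing unstructured text and reading a structured fact:

```python
import re

# Unstructured: a human-readable sentence. A machine must guess at the
# meaning by parsing text, which is brittle.
text = "Net income for the period was $1,250 thousand."
match = re.search(r"\$([\d,]+)", text)
parsed = int(match.group(1).replace(",", ""))
# parsed is 1250 - and note the parser silently missed the "thousand" scale.

# Structured (XBRL-like): concept, value, unit, and period are explicit,
# so a machine reads the fact directly, with the scale already resolved.
fact = {"concept": "NetIncomeLoss", "value": 1_250_000,
        "unit": "USD", "period": "2016-01-01/2016-12-31"}

print(parsed, fact["value"])
```

The parsing approach returns a number, but the wrong one; the structured fact carries its own meaning.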

So how do you process complex logical information, or structured knowledge, such as the information found in an XBRL-based intelligent digital financial report?  How do those creating such reports create them correctly, so that the reports mean what the creator intended and the correct information is conveyed to users of the financial report?  How do analysts, investors, regulators, and others know that the report they are using has been created correctly?  How do standard setters such as the FASB know that they created the US GAAP XBRL Taxonomy correctly, enabling creators and users to interact with harmony, minimizing dissonance?

Well, people (meta-engineers) like Benjamin Grosof, Ian Horrocks, John Sowa, and others have been working to solve that problem for 25+ years.  Today, if you search for an answer to the question "How do you process complex logical information?", the answer exists, but it looks like a bit of a convoluted mess if you don't understand what has been going on for the past 25 years.  It is not really a mess, though.  There are different opinions because there are different "camps": different needs, different target audiences, and different approaches taken to solve the same problem.  There is a solution and I will get to it.

But the problem itself, processing complex logical information, has its roots in artificial intelligence. That is one of the problems the artificial intelligence community had to solve in order to get artificial intelligence to work. Does artificial intelligence work?  Well, the article The Use of AI in Banking is Set to Explode seems to think so. There are other clues that it is working.

So how did they make it work?  Who made it work?  To process complex logical information you have to represent that information in a form that a computer can read, understand, and effectively work with.  Three general approaches were used to solve that problem:

  • Ontologies
  • Business rules
  • Schemas

One more approach to representing complex knowledge is worth mentioning. Description logic is another method of representing complex knowledge, but in the past description logic was not machine-readable.  That is changing.  OWL 2 DL has a description logic style syntax and, as I understand it, supports SROIQ description logic.

There are two approaches to specifying knowledge: axiom based (used by ontologies) and model based (used by business rules and schemas).

So who got it right? Is an ontology, business rules, or a schema the best way to represent complex logical information?  Is it better to use axioms or a model to represent knowledge?  There has to be only one best way, right?  Well...no.

In an article, The Semantic Web and the Business Rules Approach ~ Differences and Consequences, Silvie Spreeuwenberg answers that question in this way:

"Fundamentalism for one position undermines collaboration between the two communities."

I agree with Ms. Spreeuwenberg.  Too many people tend to exist in silos and believe everything in their silo is right and every other silo is wrong.  Another way to say this is, "If the only tool you have is a hammer, then everything looks like a nail." 

Some people, like the meta-engineers I mentioned, crossed the silos.

In another blog post I pointed to a paper by John Sowa that explained the problems caused by fads, trends, misinformation, politics, arbitrary preferences, and competing standards.

There is something common to all three knowledge representation approaches: ontology, business rules, and schema.  That common thing is logic. As I mentioned in the blog post Describing Systems Formally, Aristotle created formal logic around 350 B.C. Logic is a discipline of philosophy. Logic has been around a long time and is useful for many things.

Logic is the study of the principles of correct reasoning. Formal logic helps identify patterns of good reasoning and patterns of bad reasoning. These logic systems can be used to describe how things work so you can understand if they are working as expected.

Another definition of logic comes from the Book of Proof: logic is a systematic way of thinking that allows us to deduce new information from old information and to parse the meanings of sentences. There are different definitions of logic, and you could likely have interesting philosophical and theoretical debates about them.  Or, you can simply use logic as a useful tool.

Business professionals use logic and reasoning in everyday life.  Logic and reasoning are not hard to understand; in fact, humans tend to have an innate understanding of logic and reasoning.  Some people tend to use logic and reasoning more than others but that is a different story.

Business professionals care about correct reasoning.  As I explained in the 15 XBRL-based Digital Financial Report Principles, fundamentally a financial report is a system and that system needs to work.  There needs to be harmony between all the stakeholders that play a role in the process of working with financial reports: standards setters, report creators, data aggregators, analysts, regulators. Logic can serve as a communications tool that helps maximize harmony.

Each of these stakeholders tends to have a natural understanding of logic. People who have no formal training in logic still tend to understand it.  Sure, a bit of additional training in logic would help business professionals work with complex logical information even better.

OK, so let us assume that you buy the argument that logic is a good tool to describe complex logical information.  Let us assume we want to use logic. Which logic is the best logic to use for the task?  There are lots of different logic systems. 

In his presentation, Survey of Knowledge Representations for Rules and Ontologies, Benjamin Grosof answers that question.  That presentation is very technical.  I have tried to distill the essence of what Mr. Grosof is saying into this explanation below that should be both accurate and understandable by business professionals.

Here is an overview of some logical systems.  What I am concerned with is picking the logical system that can be used by a software application so that the software provides results that are reliable, predictable, repeatable, and otherwise safe to use. (Software that blows up on us or is not reliable is not very useful to business professionals.)

  • Higher order logic tends to be powerful but unsafe to implement in software because it tends to be too complex and can lead to unexpected or unpredictable behavior by the software.
  • First order logic (a.k.a. first order predicate calculus, predicate logic) tends to be safer to implement in software, but not all first order logic is safely implementable in computer systems.
  • Horn logic is a subset of first order logic. Horn logic is safer than first order logic because it explicitly prohibits some of the things that make first order logic unsafe. But Horn logic is still not safe enough for many of the ways business professionals use software.  Prolog, including ISO Prolog, uses Horn logic.
  • Datalog imposes additional specific restrictions on Horn logic making the Datalog logic provably safe, reliable, and therefore predictable.  Datalog is a subset of Prolog that is, as I understand it, as safe as a SQL database.
  • SQL, or relational, databases were the first commercially successful semantic technology. SQL is based on relational algebra, which is based in logic. SQL is reliable, predictable, and therefore safe.  Business professionals have been successfully using relational databases for 35 years.  They trust relational databases.  Do relational databases make mistakes? Actually, no, they don't.  Programmers and others might make a mistake in logic in their interaction with a SQL database, but the SQL database itself reliably and predictably provides answers to the questions we ask, and it does not blow up. Why?  Because of the logic system that relational databases use.
  • RuleLog safely extends Datalog, adding some specific aspects of higher order logic that can be safely implemented.  For example, RuleLog adds non-monotonic, or defeasible, inference.  You can think of non-monotonic logic as reasoning with defaults that can be retracted when an exception applies.
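The safety property of Datalog can be illustrated with a toy fixed-point evaluation in Python (this is a sketch of the idea, not a real Datalog engine; the predicates are invented):

```python
# Toy fixed-point evaluation in the spirit of Datalog (illustration only).
# Rules contain no function symbols and no negation, so only finitely many
# facts can ever be derived and the loop below is guaranteed to terminate -
# the "provably safe" property described above.

# Facts: parent(X, Y).
facts = {("parent", "ann", "bob"), ("parent", "bob", "cal")}

# Rules:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
def apply_rules(db):
    derived = set()
    for pred, x, y in db:
        if pred == "parent":
            derived.add(("ancestor", x, y))
    for pred1, x, y in db:
        for pred2, y2, z in db:
            if pred1 == "parent" and pred2 == "ancestor" and y == y2:
                derived.add(("ancestor", x, z))
    return derived

# Iterate to a fixed point: stop when a pass derives nothing new.
while True:
    new = apply_rules(facts) - facts
    if not new:
        break
    facts |= new

print(("ancestor", "ann", "cal") in facts)  # True
```

Because the derivable facts are drawn from a finite set, the evaluation always reaches its fixed point; that guaranteed termination is what "safe as a SQL database" means in practice.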

The following is my best attempt to describe a deductive system using terms which a business professional might be familiar with.  

A properly functioning deductive system must be sound, complete, and effective.  A fundamental principle of logic is that a fact (or declarative statement, or proposition) is a logical consequence of one or more other facts.  A deductive system is sound if any fact that can be derived in the system is a logically valid fact. A deductive system is complete if every logically valid fact is derivable.  A deductive system is effective if it is possible to mechanically verify that a purportedly valid deduction is actually a valid deduction.

A deductive system can be extended.  While a fact might not be directly derivable, it may be defeasible. Probabilistic reasoning and non-monotonic logics provide the means by which non-derivable but defeasible inferences can be made.  It is crucial that derived knowledge and non-derived but defeasible knowledge be distinguishable.
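One simple way to honor that requirement, sketched in Python with invented rule and concept names, is to keep strictly derived conclusions and defeasible (default) conclusions in separate, clearly labeled stores:

```python
# Sketch only; the concept and rule names are invented for illustration.

derived = {}     # conclusions that follow soundly from strict rules
defeasible = {}  # default conclusions that new information could retract

# Strict rule: Equity = Assets - Liabilities (always holds).
facts = {"Assets": 100.0, "Liabilities": 60.0}
derived["Equity"] = facts["Assets"] - facts["Liabilities"]

# Defeasible rule: assume the report is consolidated unless it states
# that it is parent-company-only.
states_parent_only = False
if not states_parent_only:
    defeasible["ReportBasis"] = "consolidated"

# A consumer can always tell which conclusions are guaranteed and which
# are merely defaults.
print(derived, defeasible)
```

However it is implemented, the point is that a consumer of the system's output can always distinguish a guaranteed conclusion from a default that might later be retracted.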

Systems are not homogeneous; they tend to be heterogeneous even within one organization.  As such, being able to use ontology-based, rule-based, and/or schema-based approaches to describe complex logical information has advantages.  But to exchange information between these different systems, the systems must agree on a logic.

The following graphic shows a rough hierarchy of logics. Business professionals need to be conscious of the capabilities of the problem solving logic they are using, its expressive power, and its propensity to "blow up" or fail catastrophically (i.e. not be safe to use).  If this information is laid out appropriately, business professionals can make good choices.

(Click image for larger view)

There are two very important logics that I left off the graphic above, both because I do not want to clutter that fairly straightforward graphic and because I do not know where they fall in the hierarchy of problem solving logics: ISO/IEC Common Logic and the OMG Semantics of Business Vocabulary and Business Rules (SBVR).  As I understand it, Common Logic and SBVR are logically equivalent, and both are a subset of RuleLog.  I would also like to understand where OWL 2 DL fits into that graphic.  All things considered, the safest and most expressive problem solving logic is RuleLog.

The graphic below shows a complete knowledge based system. To process complex logical information you do need some sort of problem solving logic, but it does not matter whether you use an ontology, business rules, or a schema as the delivery mechanism for that logic.  You also need to be conscious of the expressive power of the problem solving logic.

(Click image for larger view)

But it is important that your system is provably sound, complete, and effective. Your system needs to work.  Your complex logical information needs to be correct.  Business professionals need to know that information is correct and that systems are reliable, operate predictably, and are otherwise safe.

A question that you might have is, "What syntax should you use to represent the logic you select?" I will answer that question in a blog post in the very near future.

Posted on Saturday, April 22, 2017 at 06:28AM by Charlie

Public Company Quality Continues to Improve, 11 Quality Leaders

There are now 11 software generators and filing agents that have 90% or more of their XBRL-based public company financial reports consistent with all of the fundamental accounting concept relations continuity cross-checks.
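As a rough illustration of what such a cross-check does (the real fundamental accounting concept relations rule set is far larger, and this report dictionary is hypothetical):

```python
# Hypothetical report facts; concept names mimic US GAAP taxonomy style.
report = {
    "Assets": 5000.0,
    "Liabilities": 3000.0,
    "Equity": 2000.0,
    "LiabilitiesAndEquity": 5000.0,
}

# Two continuity cross-checks in the spirit of the fundamental
# accounting concept relations.
checks = [
    ("Assets = LiabilitiesAndEquity",
     lambda r: r["Assets"] == r["LiabilitiesAndEquity"]),
    ("LiabilitiesAndEquity = Liabilities + Equity",
     lambda r: r["LiabilitiesAndEquity"] == r["Liabilities"] + r["Equity"]),
]

inconsistencies = [name for name, check in checks if not check(report)]
print("consistent" if not inconsistencies else inconsistencies)
```

A report is counted as consistent only when every cross-check that applies to it passes.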

This ZIP file contains an Excel Spreadsheet which contains detailed information about errors.

Per my measurements, the quality of XBRL-based public company financial reports continues to improve. If you compare the 2016, 2015, and 2014 results you can definitely see the improvement.

Here is the summary of consistency with the fundamental accounting concept relations continuity cross-check business rules as of March 31, 2017:

(Click image for larger view)

Comparison March 2017 with November 2016:

(Click image for larger view)

Compare March 2017, March 2016, March 2015:

(Click image for larger view)

 

**********************PRIOR RESULTS**********************

Previous fundamental accounting concept relations consistency results reported: November 28, 2016; August 31, 2016; June 30, 2016; March 31, 2016; February 29, 2016; January 31, 2016; December 31, 2015; November 30, 2015; October 31, 2015; September 30, 2015; August 31, 2015; July 31, 2015; June 30, 2015; May 29, 2015; April 1, 2015; November 29, 2014.

Information helpful in understanding errors.

Financial Transparency Act of 2017

Remember the DATA Act?  Well, Rep. Darrell Issa (Republican from California) is giving it another shot.  He has introduced the Financial Transparency Act of 2017.

Its stated purpose: "To amend securities, commodities, and banking laws to make the information reported to financial regulatory agencies electronically searchable, to enable RegTech applications, and for other purposes."

We shall see...

Posted on Saturday, March 25, 2017 at 07:18AM by Charlie

Accountants Make Top 5 List of Jobs Replaced by Robots

In his article, The 5 Jobs Robots Will Take First, Shelly Palmer lists the top 5 jobs that will be replaced by robots or computer algorithms.  Here is his list:

  1. Middle management
  2. Commodity salespeople
  3. Report writers, journalists, authors, and announcers
  4. Accountants and bookkeepers
  5. Doctors

What does this mean for you?  It is worth watching this video interview to get important details.  An Oxford University study concludes that 47% of jobs in the United States are at risk of being automated using computers over the next 20 years.

Palmer says, "If your job is taking a number from one box in Excel, putting it in another box in Excel, and writing a narrative about how it got there, thinking that report is a big deal; machines are coming for you and coming fast."

Here is what Palmer says about accountants and bookkeepers:

4 – Accountants & Bookkeepers

Data processing probably created more jobs than it eliminated, but machine learning–based accountants and bookkeepers will be so much better than their human counterparts, you’re going to want to use the machines. Robo-accounting is in its infancy, but it’s awesome at dealing with accounts payable and receivable, inventory control, auditing and several other accounting functions that humans used to be needed to do. Big Four auditing is in for a big shake-up, very soon.

Technology is neither good nor bad; it is simply a fact of life.  Just as machines replaced manual labor during the first industrial revolution, software will replace manual and cognitive work in the fourth industrial revolution.  You need not worry; you need to adapt.

How do you adapt?  The best way to adapt is to forge man-machine partnerships; learn how to leverage machines.  Learn how machines work.  You need to wrap your head around "digital".  The first step in adapting is understanding the process that is about to occur.  In another article, The 5 Jobs Robots Will Take Last, Palmer points out that almost every human job requires us to perform some combination of the following four basic types of tasks:

  • Manual repetitive (predictable)
  • Manual nonrepetitive (not predictable)
  • Cognitive repetitive (predictable)
  • Cognitive nonrepetitive (not predictable)

Manual work is performed using one's hands or physical action. Cognitive work is performed using one's brain: a mental process of acquiring knowledge and understanding through thought, experience, the senses, or intuition.

Predictable manual or cognitive tasks can be automated; unpredictable manual or cognitive tasks cannot.  He gives the example of an assembly line worker who performs mostly manual repetitive tasks, which, depending on complexity and a cost/benefit analysis, can be automated. On the other hand, the CEO of a major multinational conglomerate performs mostly cognitive nonrepetitive tasks, which are much harder to automate.

If your skills include only the ability to perform manual or cognitive repetitive tasks, you need to get new skills.  Those most at risk are college students, who are paying very high prices for an education in accounting that, if they are not careful, could be obsolete very soon.  Don't make that mistake.

I wrote a paper, Introduction to Knowledge Engineering for Professional Accountants, which goes into more detail.  In fact, it would be helpful for most accountants to read all of Part 1 of Intelligent XBRL-based Digital Financial Reporting. This will help you understand how to identify what may or may not be automated.

Information barbarians will not fare well in the information age. Adapt.  Further, accounting education needs to change.

Posted on Tuesday, March 7, 2017 at 07:30AM by Charlie