
Enhanced Description of Ontology-like Thing

The following is an enhanced description of an ontology-like thing that is approachable for business professionals.  This definition is inspired by, and synthesized from, the basic textbook definition of an ontology provided in Ontology Engineering by Elisa Kendall and Deborah McGuinness; Michael Uschold's insightful description of ontology-like things in his presentation Ontologies and Semantics for Industry; and Shawn Riley's description of an ontology's common components in Good Old-Fashioned Expert Systems (With or Without Machine Learning).

An ontology or ontology-like thing is a model that specifies a rich and flexible description of the important

  • terms: (terminology, concepts, nomenclature; primitive terms, functional terms);
  • relations: (relationships among and between concepts and individuals; is-a relations, has-a relations; other properties);
  • assertions: (sentences distinguishing concepts, refining definitions and relationships; axioms, theorems, restrictions); and
  • world view: (reasoning assumptions, identity assumptions)

relevant to a particular domain or area of interest, which generally allows for a certain specific variability, and does so as consciously, unambiguously, and completely as is necessary and practical in order to achieve a specific goal or objective, or a range of goals/objectives.  An ontology-like thing enables a community to agree on important common terms for capturing meaning or representing a shared understanding of, and knowledge in, some domain where flexibility/variability is necessary.

And so, the reason for creating an "ontology-like thing" is to make the meaning of a set of terms, relations, and assertions explicit, so that both humans and machines can have a common understanding of what those terms, relations, and assertions mean.  An "instance" or "set of facts" (a.k.a. individuals) can then be evaluated as being consistent with or inconsistent with some defined ontology-like thing created by that community.  A minimal sketch of this idea appears below.
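To make the four components above, and the idea of evaluating an instance against them, concrete, here is a minimal sketch in plain Python.  The terms, relations, and the single rule shown are hypothetical illustrations, not a fragment of any real taxonomy or ontology language:

```python
# A minimal sketch of an "ontology-like thing" in plain Python.  The terms,
# relations, and rule below are hypothetical illustrations, not taken from
# any real taxonomy or from an ontology language such as OWL or XBRL.

# 1. Terms: the concepts the community agrees to use.
TERMS = {"Asset", "Liability", "Equity", "CurrentAsset", "Cash"}

# 2. Relations: is-a relations among concepts (child -> parent).
IS_A = {
    "CurrentAsset": "Asset",
    "Cash": "CurrentAsset",
}

# 3. Assertions: rules that distinguish concepts and constrain values.
#    One hypothetical rule: Asset = Liability + Equity.
def assertions_hold(facts):
    return facts.get("Asset", 0) == facts.get("Liability", 0) + facts.get("Equity", 0)

# 4. World view: reasoning assumptions.  This sketch takes a closed-world
#    view: a fact that uses an unknown term is treated as an error.

def is_a(concept, ancestor):
    """Walk the is-a relation to decide whether concept falls under ancestor."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = IS_A.get(concept)
    return False

def check_instance(facts):
    """Evaluate an instance (a set of facts) against the ontology-like thing."""
    problems = [f"Unknown term: {term}" for term in facts if term not in TERMS]
    if not assertions_hold(facts):
        problems.append("Assertion violated: Asset = Liability + Equity")
    return problems

# An instance (set of facts) reported by some hypothetical economic entity.
facts = {"Asset": 100, "Liability": 60, "Equity": 40}
print(check_instance(facts) or "Consistent with the ontology-like thing")
print(is_a("Cash", "Asset"))  # True: Cash is-a CurrentAsset, which is-a Asset
```

A real ontology-like thing would of course be expressed in a purpose-built, machine-readable syntax (RDF/OWL, or an XBRL taxonomy for financial reporting) rather than ad hoc Python, but the same four ingredients are present.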

The level of accuracy, precision, fidelity, and resolution explicitly encoded within some ontology-like thing depends on the application or applications being created that leverage that ontology-like thing.

An ontological commitment is an agreement by the stakeholders of a community to use some ontology-like thing in a manner that is consistent with the theory of how some domain operates, as represented by the ontology-like thing.  The commitment is made in order to achieve some specific goal or goals established by the stakeholders in the community sharing the ontology-like thing.

The ontology-like thing and the instances (values) created per that ontology-like thing form a sharable conceptualization, or logical system, or formal system, or formal logical system (or, even more precisely, a finite logical system) that can be tested and proven to be:

  • Consistent (no assertion of the system contradicts another assertion)
  • Valid (no false inference from a true premise is possible)
  • Complete (if an assertion is true, then it can be proven; i.e., all true assertions exist in the system)
  • Sound (if any assertion is a theorem of the system, then the theorem is true)
  • Fully expressed (if an important term exists in the real world, then the term can be represented within the system)
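For readers who want the textbook formulations, three of these properties are usually stated formally as follows (a sketch in standard logical notation, where "⊢" reads "provable in the system" and "⊨" reads "true in every model of the system"):

```latex
% Standard textbook statements of three of the properties above (requires amsmath).
\begin{align*}
  \text{Consistent:} \quad & \text{there is no assertion } \varphi \text{ with both } \vdash \varphi \text{ and } \vdash \lnot\varphi\\
  \text{Sound:}      \quad & \text{if } \vdash \varphi \text{ then } \models \varphi\\
  \text{Complete:}   \quad & \text{if } \models \varphi \text{ then } \vdash \varphi
\end{align*}
```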

Every word used to describe an ontology-like thing is there for a reason.  The term "flexible" is included to indicate that we are not talking about a form.  The logical systems we are concerned with have a certain amount of variability and alternatives are allowed, and so the system needs to be extensible.

The system needs to be predictable, reliable, and safe; free from catastrophic failures which would cause undesirable instability.

As pointed out in the ontology spectrum, a dictionary, a thesaurus, a taxonomy, an ontology, and a logical theory are all different types of ontology-like things.  All types are useful, but what you are trying to get out of the system needs to be matched to what you put into the ontology-like thing.  If you leave one assertion out, errors could creep into the logical system.

There are all sorts of other things that provide the same sorts of functionality as ontology-like things.  Often the terms used are different, the definitions are somewhat different, and what is being achieved is different.  These differences tend to cause confusion and complexity.  But the differences tend to be small and the similarities more significant.

Fads, trends, misinformation, politics, and arbitrary preferences all tend to cause distractions from the real choices that need to be made.

The real focus should be on the fact that artificial intelligence applications are brought to life by the metadata provided by ontology-like things.

In particular, high-quality curated metadata will supercharge these sorts of applications.  Some people say that data is the new oil.  In fact, the Economist declares this in the article "The world’s most valuable resource is no longer oil, but data."  But others point out, I think correctly, that "if data is the new oil, then metadata is the new gold."

In the article Data Curation: Weaving Raw Data Into Business Gold (Part 1), the author uses crude oil, refined gasoline, and refined racing fuel as a metaphor to explain the value of metadata.  Metadata is simply data about data.  An ontology-like thing is machine-readable metadata.  Curated metadata provided in an ontology-like thing is the racing fuel used by artificial intelligence applications.
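As a tiny illustration of the difference between data and the metadata that describes it, consider the following sketch; the field names are hypothetical, loosely modeled on the kind of information a financial reporting taxonomy captures:

```python
# Data: a single reported value.
data = {"CashAndCashEquivalents": 1_250_000}

# Metadata: machine-readable information *about* that value -- what the term
# means, what kind of value it is, and where it fits within the
# ontology-like thing.  (All field names here are hypothetical.)
metadata = {
    "CashAndCashEquivalents": {
        "label": "Cash and cash equivalents",
        "definition": "Currency on hand plus short-term, highly liquid investments.",
        "data_type": "monetary",
        "period_type": "instant",
        "is_a": "CurrentAsset",
    }
}
```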

Curated metadata provides what is necessary to make artificial intelligence work, to supercharge AI.  But creating that metadata takes a lot of work.  Machine learning is not a viable shortcut.  Take shortcuts and your AI foundation will be a fragile house of cards.  While machine learning is very, very useful, it is most valuable when it supplements ontology-like things created by humans.  Machine learning will never be able to create the initial ontology-like thing.

So, there are two major techniques for implementing artificial intelligence:

  • Logic and rules-based approach (expert systems): Representing processes or systems using logical rules. Uses deductive reasoning.
  • Pattern-based approach (machine learning): Algorithms find patterns in data and infer rules on their own. Uses inductive reasoning; probability.

You can combine the two approaches and create a third, hybrid approach.  But you need to use the right tool for the job.
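A toy contrast of the two approaches, sketched in plain Python with a made-up rule and made-up historical data:

```python
# Logic and rules-based approach (expert system): a human encodes the rule;
# the machine deduces a conclusion from the premises (deductive reasoning).
def rules_based_flag(facts):
    # Hypothetical rule: flag any report where Asset != Liability + Equity.
    return facts["Asset"] != facts["Liability"] + facts["Equity"]

# Pattern-based approach (machine learning): the machine infers a decision
# rule from labeled examples (inductive reasoning; probability).  Here a
# trivial "learner" picks a threshold separating past flagged examples
# from past clean examples.
def learn_threshold(examples):
    flagged = [value for value, was_flagged in examples if was_flagged]
    clean = [value for value, was_flagged in examples if not was_flagged]
    return (max(clean) + min(flagged)) / 2

history = [(5, False), (8, False), (40, True), (55, True)]   # made-up data
threshold = learn_threshold(history)

print(rules_based_flag({"Asset": 100, "Liability": 60, "Equity": 30}))  # True: the rule is violated
print(30 > threshold)  # True: the learned threshold would also flag a value of 30
```

The first approach is only as good as the rules a human writes down; the second is only as good as the historical data it learns from, which is why the hybrid is often the right tool.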

The two letters "AI" appearing on the cover of the Journal of Accountancy help one recognize the significance of artificial intelligence for accounting, reporting, auditing, and analysis.  Ontology-like things are part of that story.  Recognize that XBRL is a syntax for creating ontology-like things for financial reporting.

If there is any confusion in your mind about any of this, I would strongly recommend that you read Computer Empathy. That document provides the background necessary to best absorb this blog post. Successful engineering is about understanding how and why things break or fail.


Posted on Friday, July 19, 2019 at 07:33AM by Charlie
