Many who create (or help someone who creates) an SEC XBRL financial filing are over verifying it. Others are under verifying. Both over verifying and under verifying should be avoided.
If you are over verifying, you are doing work which takes time and costs money and which either is already being done by software or could be done by software. For example, checking the type attribute value of a [Table] or [Axis] is a waste of time. Every [Table] and every [Axis] is required to have a type attribute value of string (xbrli:stringItemType). The same goes for the period type of a [Table] or [Axis]: both are required to have a period type attribute value of "duration". Pretty much all good software enforces this type of rule.
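This kind of mechanical rule is exactly what software enforces automatically. A minimal sketch of such a check, assuming a simplified dict representation of report elements (the element names and the representation here are hypothetical; real validators work against the taxonomy model directly):

```python
# Fixed rules every [Table] and [Axis] must satisfy: string type,
# duration period type. A human re-checking these is wasted effort.
REQUIRED = {
    "Table": {"type": "xbrli:stringItemType", "periodType": "duration"},
    "Axis":  {"type": "xbrli:stringItemType", "periodType": "duration"},
}

def check_element(element):
    """Return a list of rule violations for one report element."""
    rules = REQUIRED.get(element["category"])
    if rules is None:
        return []  # no fixed rules for this category in this sketch
    return [
        f"{element['name']}: {prop} is {element.get(prop)!r}, expected {want!r}"
        for prop, want in rules.items()
        if element.get(prop) != want
    ]

# Hypothetical report elements; the second has a deliberate error.
elements = [
    {"category": "Table", "name": "us-gaap:StatementTable",
     "type": "xbrli:stringItemType", "periodType": "duration"},
    {"category": "Axis", "name": "us-gaap:StatementScenarioAxis",
     "type": "xbrli:stringItemType", "periodType": "instant"},
]

for e in elements:
    for violation in check_element(e):
        print(violation)
```

Because the rule is the same for every filing, it runs once in software rather than once per human reviewer.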
Another type of over verifying is checking things like the contextRef on facts, which has zero semantic meaning. All a contextRef does is hook things together; it is pure syntax. If you verify that the characteristics of a fact are correct, there is a 100% probability that the "hook" between the fact and those characteristics is correct. So why would you verify the contextRef? There is no need.
If you are under verifying, you are not doing enough work to be sure your SEC XBRL financial filing is correct. An example is not creating XBRL Formulas for (or using some other means of verifying) the many, many computations which exist, such as roll forwards and dimensional aggregations, which an XBRL calculation relation cannot verify. This link is to a report which verifies that the some 40 computations which exist in my model SEC XBRL financial filing are correct. As you know, financial reports have many computations; if you are not creating XBRL Formulas or something else to verify that they are correct, you are under verifying your filing and the chance that an error exists is high.
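To make the roll forward case concrete: a roll forward asserts that a beginning balance plus the changes during the period equals the ending balance, a relation XBRL calculation links cannot express because the balances are instants and the changes are durations. A hedged sketch of such a check, with made-up numbers standing in for fact values pulled from an XBRL instance:

```python
# A roll forward: beginning balance + period changes = ending balance.
# XBRL calculation relationships cannot verify this; XBRL Formula
# (or a check like this one) can.

def verify_roll_forward(beginning, changes, ending, tolerance=0):
    """True if beginning + sum(changes) equals ending, within tolerance."""
    computed = beginning + sum(changes)
    return abs(computed - ending) <= tolerance

# Hypothetical cash roll forward: opening cash 1000, proceeds 250,
# payments 100, closing cash 1150.
assert verify_roll_forward(1000, [250, -100], 1150)

# Dropping one of the changes should make the check fail.
assert not verify_roll_forward(1000, [250], 1150)
```

A filing with dozens of roll forwards and dimensional aggregations needs one such assertion per computation; leaving them unchecked is where under verifying lets errors through.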
Another type of under verifying is not checking the structure of your [Table], [Axis], [Member], and [Line Items] report elements. A [Table] with no [Axis] makes little sense. A [Member] mixed in with the [Line Items] makes no sense. All of these types of tests can be automated and checked by a software application.
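The two structure tests just mentioned can be sketched as follows, again assuming a hypothetical nested-dict representation of a [Table]'s presentation structure (real tools would walk the presentation relationships of the taxonomy):

```python
def check_table_structure(table):
    """Return violations of two structural rules: a [Table] must have
    at least one [Axis], and no [Member] may sit among the [Line Items]."""
    violations = []
    if not table.get("axes"):
        violations.append(f"{table['name']}: [Table] has no [Axis]")
    for item in table.get("line_items", []):
        if item["category"] == "Member":
            violations.append(
                f"{item['name']}: [Member] mixed in with the [Line Items]")
    return violations

# Hypothetical table that breaks both rules.
table = {
    "name": "us-gaap:StatementTable",
    "axes": [],
    "line_items": [
        {"category": "Member", "name": "ScenarioUnspecifiedDomain"},
        {"category": "Concept", "name": "CashAndCashEquivalents"},
    ],
}

for violation in check_table_structure(table):
    print(violation)
```

Once rules like these are written down, a computer applies them to every [Table] in every filing for free; a human reviewer does not scale the same way.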
In this matrix I tried to summarize the types of things which a computer software application can verify. The matrix shows the types of report elements in the rows and the properties a report element can have in the columns. NA means that a report element does not have that property. Only properties which carry semantic meaning are listed.
- Properties which computer software can verify 100% are highlighted in gray.
- Properties which humans need to verify are highlighted in yellow.
- Properties where both human effort and software can be used are highlighted in light orange.
There are other examples of over verifying and under verifying, but this will get you started figuring out the right mix and whether software or humans should be doing the work.