In her book An Introduction to Ontology Engineering (PDF page 23), C. Maria Keet discusses what constitutes a good, and perhaps a not-so-good, ontology. She frames that discussion around two measures, precision and coverage, and three categories of errors that arise when one or both fall short:
You get a good ontology when both the precision and the coverage of the ontology are high. Precision is a measure of how accurately you do or can represent the information of a domain within an ontology-like thing, compared to reality: of the things you represented, how many are right? Coverage is a measure of how much of a domain of information you do or can represent within an ontology-like thing: of the things you should have represented, how many did you?
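One way to make the two measures concrete is to model both the domain and the ontology-like thing as finite sets of statements. The sketch below does that; the set names and the toy statements are illustrative assumptions, not anything taken from Keet's book.

```python
# A minimal sketch, assuming the universe of discourse and the
# ontology-like thing can each be modeled as a finite set of statements.

# Statements that actually hold in the domain (the "universe").
domain = {"lion is-a animal", "lion eats herbivore", "herbivore eats plant"}

# Statements the ontology-like thing asserts; one of them is wrong.
represented = {"lion is-a animal", "lion eats plant"}

correct = represented & domain  # what we said that matches reality

precision = len(correct) / len(represented)  # how much of what we said is right
coverage = len(correct) / len(domain)        # how much of reality we captured

print(f"precision = {precision:.2f}")  # 0.50
print(f"coverage  = {coverage:.2f}")   # 0.33
```

Here the ontology-like thing is half right about what it says (precision 0.50) and captures only a third of the domain (coverage 0.33), so it falls short on both counts.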
The graphic shown below helps you visualize the notions of precision and coverage. The "universe" (also called the universe of discourse, or simply the domain) is shown in light gray. The pink region indicates what you actually represented in your ontology-like thing. The green region indicates what could possibly be represented using the language you chose (i.e., determined by where that language sits on the ontology spectrum).
If you represent the things that you should represent (i.e., your coverage is good) and you do so such that the ontology-like thing accurately reflects reality (i.e., your precision is good), then you get a good ontology-like thing. But if an ontology-like thing cannot do what it should be able to do, then it is a bad ontology-like thing. Things go wrong in three ways: (1) you have high precision but not enough coverage, (2) you have low precision despite high coverage, or (3) the really bad case, where neither your precision nor your coverage is what it should be given the goal you are trying to achieve.
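To tie those three error categories back to the measures, here is a small, hypothetical classifier that buckets an ontology-like thing by whether its precision and coverage clear a threshold. The 0.8 cutoff is purely an assumption for illustration; what counts as "high enough" depends on your goal.

```python
def assess(precision: float, coverage: float, threshold: float = 0.8) -> str:
    """Bucket an ontology-like thing into the good case or one of the
    three error categories described above.

    The threshold is an illustrative assumption, not a value from the book.
    """
    high_p = precision >= threshold
    high_c = coverage >= threshold
    if high_p and high_c:
        return "good: precise and covers the domain"
    if high_p:
        return "error 1: precise, but not enough coverage"
    if high_c:
        return "error 2: covers the domain, but imprecise"
    return "error 3: neither precise nor adequate coverage"

print(assess(0.95, 0.40))  # error 1: precise, but not enough coverage
```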
And so, precision and coverage matter when it comes to creating an ontology-like thing. Representing information in machine-understandable form is not something that just happens; it is a lot of work. How well your system works depends on balancing the objective you are trying to achieve, the language you use, what you put into your ontology-like thing, and how well your ontology-like thing represents your universe.