Figure: Comprehensive Overview of Findings in the MES Quality Commander® (MQC) as a Heatmap – "Software quality is built on details"

Everything Begins with Findings - Understanding the Foundation of Software Quality Monitoring

Figure: Software Quality Heatmap in the MES Quality Commander® (MQC)

When you look at your project quality, the first thing you hopefully see is an overview. Aggregated metrics derived from quality assurance data provide insight into where things stand. These metrics summarize findings into percentages, trends, and status indicators. This overview is necessary and useful. It helps you orient yourself.

At the same time, however, it hides important information – by design.

What you see in an overview is the result of aggregation. What you do not see are the details that these metrics are built on. To understand where quality comes from, where risks form, and why numbers change, you must look at the findings behind the overview.
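
To make this concrete, here is a minimal sketch – purely illustrative data and field names, not how MQC computes its metrics – of how aggregation collapses individual findings into a single number:

    from collections import Counter

    # Hypothetical findings; the field names are illustrative only.
    findings = [
        {"artifact": "Model_A", "check": "guideline", "status": "passed"},
        {"artifact": "Model_A", "check": "guideline", "status": "failed"},
        {"artifact": "Model_B", "check": "test", "status": "passed"},
        {"artifact": "Model_B", "check": "test", "status": "passed"},
    ]

    counts = Counter(f["status"] for f in findings)
    compliance = counts["passed"] / len(findings) * 100

    # The overview shows one number ...
    print(f"Overall compliance: {compliance:.0f} %")   # 75 %

    # ... but which artifact failed, and why, only the findings can tell you.
    for f in findings:
        if f["status"] == "failed":
            print("Needs attention:", f["artifact"], "-", f["check"])

The same 75 % could come from one isolated guideline violation or from a systematic gap; the aggregated value alone cannot tell the difference.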

Figure: A Comprehensive Overview of Findings in the MES Quality Commander® (MQC)

Software Quality Is Not the Overview Alone

A project may appear solid at first glance. Aggregated metrics, such as guideline and test compliance or code coverage, provide a general overview of quality. While these values can improve over time, they cannot show everything the software is actually made of.

To understand how to improve quality, you need to move from the overview to the underlying details. These details are the findings generated throughout the development and testing process. In model-based quality assurance, these findings demonstrate:

  • where requirements are met,
  • where guidelines are followed,
  • where complexity or inconsistencies arise,
  • which elements are covered.
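
As a rough illustration, a single finding could be represented as a small data record like the following sketch; the structure and field names are assumptions made for this article, not MQC's actual data model:

    from dataclasses import dataclass

    @dataclass
    class Finding:
        source: str    # e.g. "static test", "dynamic test", "review"
        artifact: str  # the model, subsystem, or requirement the finding refers to
        check: str     # e.g. "requirement coverage", "guideline", "complexity"
        status: str    # "passed", "warning", or "failed"
        message: str   # the detail behind the status

    example = Finding(
        source="static test",
        artifact="Controller/ModeLogic",   # hypothetical model path
        check="guideline",
        status="warning",
        message="Block name deviates from the naming guideline.",
    )
    print(example.status, "-", example.artifact, "-", example.message)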

You can only see the true state of your project by examining this model-based quality assurance data in detail. While the overview may appear reassuring, detailed insight comes from understanding the underlying findings.

Where Findings Come From

The findings result directly from the activities carried out during development and quality assurance. They appear because you continuously ask questions about your software system and evaluate the responses as part of your model-based quality assurance process.

Typical questions include:

  • Is each requirement covered by at least one test? If coverage is complete, you receive a passed finding. If coverage is missing, you get a failed finding.
  • Is the complexity of a subsystem acceptable? Values within an expected range are considered acceptable (passed finding), while increased complexity may trigger a warning or even a failed finding.
  • Are modeling or implementation guidelines being followed? Each deviation from a guideline creates its own warning or failed finding.
  • Does a test case behave according to the requirements (often implemented in assessments)?

Each answer adds detail to the overall picture. Passed findings show you what works as intended, while warning and failed findings point to areas that need attention.
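
In simplified form, answering two of these questions might look like the following sketch; the function names and thresholds are assumptions, not MQC's actual rules:

    def check_requirement_coverage(requirement_id, linked_tests):
        # Is the requirement covered by at least one test?
        status = "passed" if linked_tests else "failed"
        return {"check": "requirement coverage", "item": requirement_id, "status": status}

    def check_complexity(subsystem, value, warn_at=20, fail_at=30):
        # Is the complexity of a subsystem acceptable? (thresholds are assumed)
        if value <= warn_at:
            status = "passed"
        elif value <= fail_at:
            status = "warning"
        else:
            status = "failed"
        return {"check": "complexity", "item": subsystem, "status": status, "value": value}

    findings = [
        check_requirement_coverage("REQ-101", linked_tests=["TC-007"]),
        check_requirement_coverage("REQ-102", linked_tests=[]),
        check_complexity("Controller/ModeLogic", value=27),
    ]

    for f in findings:
        print(f["status"].upper(), "-", f["check"], "-", f["item"])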

Taken together, findings form the basis of your overview. They provide the level of detail that aggregated metrics alone cannot show. Findings reveal what is unclear, what may be missing, and what already works well.

The Challenges That Come with Findings

As long as you are working with only a few models, sifting through findings may feel manageable. However, this quickly changes as projects grow.

Your findings come from many different sources. Static testing, dynamic testing, and review comments all generate model-based quality assurance data. The challenge lies not in the data itself but in the way it is distributed across tools and reports.

You check one model, then the next, and so on. Static testing results live in one tool, dynamic testing findings live in another, and review comments live somewhere else. Keeping track of everything is time-consuming and distracting. Important patterns remain hidden because the data is scattered across many individual views.

At this point, there is nothing wrong with the findings. What gets lost is visibility.

A Central View of Findings

Imagine having a central overview of everything without losing access to the details. All findings from multiple tools and models are brought together in one place as a consistent set of quality assurance data.

Rather than checking the results of models individually, you can see which areas produce issues, how severe they are, and where problems recur across models. This reduces review effort, reveals patterns, and helps you focus on what really matters.
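
A minimal sketch of this idea, assuming the per-tool results have already been exported to a shared structure (the file names and fields are hypothetical):

    import json
    from collections import Counter

    # Hypothetical exports from different tools, converted to one common structure.
    sources = [
        "static_test_findings.json",
        "dynamic_test_findings.json",
        "review_findings.json",
    ]

    all_findings = []
    for path in sources:
        with open(path, encoding="utf-8") as fh:
            all_findings.extend(json.load(fh))   # entries: artifact, check, status

    # One central question instead of many tool-specific views:
    # which artifacts accumulate the most warnings and failures?
    hotspots = Counter(
        f["artifact"] for f in all_findings if f["status"] in ("warning", "failed")
    )
    for artifact, count in hotspots.most_common(5):
        print(f"{artifact}: {count} open findings")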

The MES Quality Commander® (MQC) provides this type of centralized visibility and easily scales across many models. To see how it works, have a look at the video below.

Video: Live demonstration of what it looks like when all your finding data is collected in MQC

What You Gain When Nothing Gets Lost in the Overview

Having an overview is not the problem. Losing the connection to the underlying findings is.

When you work with model-based quality assurance data at the level of findings, you gain clarity. You understand why numbers change, what influences quality, and where risks begin to emerge. Rather than reacting to trends, you can identify their causes.

Quality does not improve just because an overview looks great. Quality improves when the underlying issues are fixed. The quality overview therefore needs to stay connected to the details of the findings on which it is based.

Would you like to see how a centralized view of findings works for yourself?

Get in Touch with Us

Dr. Hartmut Pohlheim
Managing Director
