Reorganize metrics #8972

Open
fniessink opened this issue Jun 13, 2024 · 0 comments
Labels
Epic (Composite issue), Metric(s) (New, enhanced, or removed metric)

Comments

fniessink commented Jun 13, 2024

The metric types in Quality-time suffer from a poor structure. Several metric types overlap and could be merged into one metric type. This issue identifies these overlapping metrics.

CI-jobs

There are two metrics that count CI-jobs:

These metrics can be merged into one new metric: "CI-jobs"

User stories:

Pipelines versus jobs

Context:

  • Jenkins has jobs with builds, or pipelines with builds. In the case of pipelines, the pipelines contain jobs/stages.
  • GitLab has pipelines with runs. A pipeline consists of jobs, which are grouped into stages. Each repo has one pipeline.
  • Azure DevOps has pipelines with runs. In Azure DevOps a repo can have multiple pipelines. Pipelines have stages, which contain jobs.
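
To make these differences concrete, here is a minimal sketch of how each tool exposes these concepts via its REST API. The host names and project identifiers are placeholders, authentication is omitted, and this is not Quality-time collector code:

```python
import requests

# Jenkins: jobs (or pipeline jobs), each with a list of builds.
jenkins = requests.get(
    "https://jenkins.example.org/api/json",
    params={"tree": "jobs[name,builds[result,timestamp]]"},
).json()

# GitLab: one pipeline per project; each pipeline run has jobs, grouped into stages.
gitlab_pipelines = requests.get(
    "https://gitlab.example.org/api/v4/projects/42/pipelines"
).json()
gitlab_jobs = requests.get(
    f"https://gitlab.example.org/api/v4/projects/42/pipelines/{gitlab_pipelines[0]['id']}/jobs"
).json()

# Azure DevOps: a project can have many pipelines; each run ("build") belongs to a
# pipeline and contains stages with jobs.
azure_builds = requests.get(
    "https://dev.azure.com/organization/project/_apis/build/builds",
    params={"api-version": "7.0"},
).json()["value"]
```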

Currently, Quality-time has four metrics that use Jenkins/GitLab/Azure DevOps pipelines and/or jobs as source:

Problems:

  • The failed CI-jobs and unused CI-jobs metrics both count jobs, only with a different status, and could/should be one metric with a parameter to distinguish between the statuses.
  • As the terminology differs between the three CI-tools, things are pretty confusing. Does the failed CI-jobs metric also support pipelines? Does it support jobs within pipelines? Etc.

Solution:

Create a new metric, "Builds", to ultimately replace the current metrics. Builds are runs of pipelines or jobs. The metric counts the number of builds of the pipelines and/or jobs specified by the user. Description: "The builds metric counts the number of builds of pipelines or CI-jobs. Builds can be filtered by name, status, trigger, time period, and branch."

Parameters:

  • The period in which to look for builds
  • The statuses of builds
  • The triggers of builds
  • The branches built
  • The name of the pipeline and/or job

Due to the confusing terminology, we first draft the data model before implementing the story.
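
As input for the data model task below, a purely hypothetical sketch of what a "builds" metric entry could look like; the keys and parameter names are assumptions to be settled when drafting the actual data model:

```python
# Hypothetical sketch only; all keys and parameter names are placeholders,
# not the actual Quality-time data model format.
BUILDS_METRIC = {
    "builds": {
        "name": "Builds",
        "description": (
            "The builds metric counts the number of builds of pipelines or CI-jobs. "
            "Builds can be filtered by name, status, trigger, time period, and branch."
        ),
        "unit": "builds",
        "sources": ["azure_devops", "gitlab", "jenkins"],
        "parameters": {
            "lookback_days": "The period in which to look for builds",
            "build_statuses": "The statuses of builds to count",
            "build_triggers": "The triggers of builds to count",
            "branches": "The branches built",
            "pipeline_or_job_names": "The names of the pipelines and/or jobs to count",
        },
    }
}
```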

Tasks:

  • Create the data model for the "builds" metric.
  • Review and refine

Violations

There is one metric for all types of programming rule violations, but there are also a number of more specific metrics:

In the case of SonarQube, the specific violations could be measured via the generic violations metric, if that metric had parameters to select the rules applicable to e.g. complex units or long units.
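
As an illustration of how such rule-selection parameters could work for SonarQube, a sketch against the api/issues/search endpoint; the rule keys and project key are examples, and the parameter holding the rule keys does not exist in Quality-time yet:

```python
import requests

# Sketch: measure "complex units" via the generic violations metric by passing
# the applicable rule keys to SonarQube's issues search API.
SONARQUBE_URL = "https://sonarqube.example.org"
rules_to_include = ["python:S3776", "java:S3776"]  # example cognitive complexity rules

response = requests.get(
    f"{SONARQUBE_URL}/api/issues/search",
    params={
        "componentKeys": "my-project",          # project key (placeholder)
        "rules": ",".join(rules_to_include),    # only count issues for these rules
        "resolved": "false",
    },
    auth=("token", ""),  # placeholder token authentication
)
complex_unit_violations = response.json()["total"]
```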

Source up-to-dateness

The source up-to-dateness metric is used to measure whether a source has up-to-date information. However, in practice this metric is used to answer two different questions:

  1. Is the source up-to-date, e.g. did SonarQube perform an analysis recently, or did a job run recently?
  2. Is an artifact up-to-date, e.g. was the software architecture document updated recently?

The solution would be to split the source up-to-dateness metric into two separate metrics, for example:

  • Source up-to-dateness
  • Artifact up-to-dateness
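
Both metrics would report the same unit (days since the last update) but answer different questions, as this sketch illustrates; the URLs, project key, and date handling are illustrative only:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
import requests

# 1. Source up-to-dateness: when did the source last refresh its information?
#    Example: SonarQube's most recent analysis date (URL and project key are placeholders).
analyses = requests.get(
    "https://sonarqube.example.org/api/project_analyses/search",
    params={"project": "my-project"},
).json()["analyses"]
source_date = datetime.strptime(analyses[0]["date"], "%Y-%m-%dT%H:%M:%S%z")

# 2. Artifact up-to-dateness: when was the measured artifact itself last changed?
#    Example: the Last-Modified header of an architecture document (placeholder URL).
document = requests.head("https://example.org/architecture.pdf")
artifact_date = parsedate_to_datetime(document.headers["Last-Modified"])

days_since_source_update = (datetime.now(timezone.utc) - source_date).days
days_since_artifact_update = (datetime.now(timezone.utc) - artifact_date).days
```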

Performancetest metrics

There are four performance test metrics that have rather different names:

  • Scalability. Scalability measures the point at which a breakpoint occurs in a performance (stress) test, and is an indication of the quality of the performance test, not of the scalability of the software. A better name would perhaps be "Performancetest breakpoint".
  • Slow transactions. Slow transactions does indeed count the number of slow transactions. This gives an indication of the application performance.
  • Performancetest duration. Performancetest duration is correctly named.
  • Performancetest stability. Performancetest stability is not the stability of the performance test, but of the performance of the software under test. A better name would be "Software stability" or "Software performance stability".