The metric types in Quality-time suffer from a poor structure. Several metric types could be one metric type. This issue identifies these overlapping metrics.
CI-jobs
There are two metrics that count CI-jobs: failed CI-jobs and unused CI-jobs.
These metrics can be merged into one new metric: "CI-jobs".
User stories:
Pipelines versus jobs
Context:
Currently, Quality-time has four metrics that use Jenkins/GitLab/Azure DevOps pipelines and/or jobs as source.
Problems:
The failed CI-jobs and unused CI-jobs metrics both count jobs, only with a different status, and could/should be one metric with a parameter to distinguish between the statuses.
As the terminology differs between the three CI-tools, things are pretty confusing. Does the failed CI-jobs metric also support pipelines? Does it support jobs within pipelines? Etc.
Solution:
Create a new metric, "Builds", to ultimately replace the current metrics. Builds are runs of pipelines or jobs. The metric counts the number of builds of the pipelines and/or jobs specified by the user. Description: "The builds metric counts the number of builds of pipelines or CI-jobs. Builds can be filtered by name, status, trigger, time period, and branch."
Parameters:
The period in which to look for builds
The statuses of builds
The triggers of builds
The branches built
The name of the pipeline and/or job
Due to the confusing terminology, we first draft the data model before implementing the story.
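To make the drafting concrete, here is a minimal sketch of what a data model entry for the new metric could look like, assuming a dict-based definition in the general style Quality-time uses for its metrics and sources. All keys, parameter types, and values below are assumptions for discussion, not the final model:

```python
# Draft data model entry for the proposed "builds" metric.
# NOTE: all keys, parameter types, and values are assumptions for
# discussion; they do not reflect existing Quality-time code.
BUILDS_METRIC = {
    "builds": {
        "name": "Builds",
        "description": (
            "The builds metric counts the number of builds of pipelines or "
            "CI-jobs. Builds can be filtered by name, status, trigger, "
            "time period, and branch."
        ),
        "unit": "builds",
        "sources": ["azure_devops", "gitlab", "jenkins"],
        "parameters": {
            "lookback_days": {
                "name": "The period in which to look for builds",
                "type": "days",  # hypothetical parameter type
            },
            "statuses": {
                "name": "The statuses of builds",
                "type": "multiple_choice",
                "values": ["canceled", "failed", "skipped", "success"],  # illustrative
            },
            "triggers": {
                "name": "The triggers of builds",
                "type": "multiple_choice",
                "values": ["manual", "push", "schedule"],  # illustrative
            },
            "branches": {
                "name": "The branches built",
                "type": "multiple_choice_with_addition",
            },
            "pipelines_or_jobs": {
                "name": "The name of the pipeline and/or job",
                "type": "multiple_choice_with_addition",
            },
        },
    }
}
```

A sketch like this should make it easier to check, per CI-tool, how pipelines, jobs within pipelines, and standalone jobs map onto the "pipeline and/or job" parameter before any implementation starts.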
Tasks:
Create the data model for the "builds" metric.
Review and refine
Violations
There is one metric for all types of programming rule violations, but there are also a number of more specific metrics.
In the case of SonarQube, the specific violations could be measured via the generic violations metric, if that metric had parameters to select the rules applicable to e.g. complex units or long units.
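As an illustration of such a rule parameter, the sketch below counts unresolved SonarQube issues filtered by rule keys via SonarQube's issues search API. The server URL, project key, and rule selection are placeholders:

```python
import requests

SONARQUBE_URL = "https://sonarqube.example.org"  # placeholder server
COMPONENT = "my-project"  # placeholder project key
# Hypothetical rule selection for "complex units"; the actual rule keys
# would be a parameter of the generic violations metric.
RULES = "python:S3776,java:S3776"

def count_violations(component: str, rules: str) -> int:
    """Count unresolved SonarQube issues, filtered by rule keys."""
    response = requests.get(
        f"{SONARQUBE_URL}/api/issues/search",
        params={
            "componentKeys": component,
            "rules": rules,
            "resolved": "false",
            "ps": 1,  # we only need the total, not the issues themselves
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["total"]

if __name__ == "__main__":
    print(count_violations(COMPONENT, RULES))
```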
Source up-to-dateness
The source up-to-dateness metric is used to measure whether a source has up-to-date information. However, in practice this metric is used to answer two different questions:
Is the source up-to-date, e.g. did SonarQube perform an analysis recently, or did a job run recently?
Is an artifact up-to-date, e.g. was the software architecture document updated recently?
Solution would be to split the source up-to-dateness metric into two separate metrics, for example:
Source up-to-dateness
Artifact up-to-dateness
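In the same draft style as the "builds" sketch above, the split could look roughly as follows. Both metrics would share the same underlying computation (days since an event); all names and fields are again assumptions:

```python
from datetime import datetime, timezone

# Draft entries for the two proposed metrics; all fields are illustrative.
UP_TO_DATENESS_METRICS = {
    "source_up_to_dateness": {
        "name": "Source up-to-dateness",
        "description": "Number of days since the source was last updated, "
                       "e.g. since SonarQube last analyzed the project or "
                       "since a CI-job last ran.",
        "unit": "days",
    },
    "artifact_up_to_dateness": {
        "name": "Artifact up-to-dateness",
        "description": "Number of days since an artifact, e.g. the software "
                       "architecture document, was last updated.",
        "unit": "days",
    },
}

def days_since(timestamp: datetime) -> int:
    """Both metrics would share this computation: days since an event."""
    return (datetime.now(timezone.utc) - timestamp).days
```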
Performancetest metrics
There are four performance test metrics that have rather different names:
Scalability. Scalability measures the point at which a breakpoint occurs in a performance (stress) test, and is an indication of the quality of the performance test, not of the scalability of the software. A better name would perhaps be "Performancetest breakpoint".
Slow transactions. Slow transactions does indeed count the number of slow transactions. This gives an indication of the application's performance.
Performancetest stability. Performancetest stability is not the stability of the performance test, but of the performance of the software under test. A better name would be "Software stability" or "Software performance stability".
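For clarity on what "breakpoint" means here, a minimal sketch of how a breakpoint could be derived from stress-test results, assuming the test tool reports an average response time per load level (the data format and threshold are assumptions, not a Quality-time API):

```python
# Illustrative only: find the first load level at which the average
# response time exceeds a threshold, i.e. the "performancetest breakpoint".

def find_breakpoint(results: list[tuple[int, float]], max_response_time: float) -> int | None:
    """Return the first load level (virtual users) whose average response
    time exceeds the threshold, or None if no breakpoint occurred."""
    for virtual_users, avg_response_time in sorted(results):
        if avg_response_time > max_response_time:
            return virtual_users
    return None

# Example: response times degrade as the number of virtual users grows.
stress_test = [(10, 0.2), (50, 0.4), (100, 0.9), (200, 2.5)]
print(find_breakpoint(stress_test, max_response_time=1.0))  # -> 200
```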