Question - Threshold limits - Data driven or Expert driven #214

@Ak784

Hello Everyone,

We see that most of our DQD lab threshold checks have failed. While I understand the threshold values are customizable, could you tell us whether the configured threshold values are based on expert input or on a data-driven approach (i.e., gathering statistics about certain measurements from multiple sites and choosing the most common value)?

I ask because, if we have to adapt the DQD threshold checks to our region's characteristics, we would like to follow a similar procedure.
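The data-driven procedure described above can be sketched roughly as follows. This is only an illustration of the idea, not the actual DQD methodology: the site names, failure percentages, and tolerance margin are all hypothetical.

```python
from statistics import median

# Hypothetical per-site failure percentages for one lab-measurement check,
# gathered from several participating sites (illustrative numbers only).
site_failure_pcts = {
    "site_a": 4.2,
    "site_b": 5.1,
    "site_c": 3.8,
    "site_d": 6.0,
    "site_e": 4.9,
}

def derive_threshold(pcts, margin=2.0):
    """Set the check threshold at the median observed failure rate times a
    tolerance margin, so a site only fails when it deviates notably from
    its peers. Both the statistic and the margin are assumptions here."""
    return round(median(pcts.values()) * margin, 1)

print(derive_threshold(site_failure_pcts))
```

Whether a median, a mode, or a high percentile is the right summary statistic would itself be a methodological choice for the community to document.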

So, a few questions:

a) Are the DQD rule checks expert-driven or data-driven? Is there anywhere I can read about how certain rules and thresholds were configured?

b) If expert-driven, is it possible to know how it was done? For example, did 10-20 experts sit down and arrive at a threshold value for these measurements?

c) I understand there is overlap between Achilles Heel (fewer DQ rules) and DQD (more DQ rules). Is there a reason a new tool (DQD) was created instead of extending the existing Achilles Heel? I'm trying to understand the background so I can recommend an appropriate tool to our stakeholders.

d) Is there any other advantage to using DQD over Achilles, besides more exhaustive rule coverage? Is it the dashboard? Can sites communicate the quality of their dataset to other sites by saying their data has a quality score of 97% or 95%, etc. (which is not possible in Achilles)?
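For context on (d): one simple reading of a single "quality percentage" is the share of checks that pass. The sketch below uses made-up check names and is not the actual DQD scoring logic, just an illustration of how such a headline number could be computed from pass/fail results.

```python
# Hypothetical pass/fail results for a handful of DQ checks
# (check names invented for illustration).
check_results = [
    {"check": "measurement_plausible_low", "passed": True},
    {"check": "measurement_plausible_high", "passed": False},
    {"check": "lab_unit_completeness", "passed": True},
    {"check": "person_birth_year_plausible", "passed": True},
]

def overall_quality_pct(results):
    """Overall score as the percentage of checks that passed."""
    passed = sum(1 for r in results if r["passed"])
    return round(100 * passed / len(results), 1)

print(overall_quality_pct(check_results))  # 3 of 4 checks pass
```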
