
Mutual Information Sensitivity #176

Merged (36 commits) on Aug 12, 2024
Conversation

max-de-rooij (Contributor)

Checklist

  • Appropriate tests were added
  • Any code changes were made in a way that does not break the public API
  • All documentation related to the code changes was updated
  • The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC
  • Any new documentation only uses the public API

Additional context

I attempted an initial implementation of #175 and it seems to work, but it should be checked for correctness.

max-de-rooij and others added 23 commits July 15, 2024 12:33
add raw string annotation
KSRank is stochastic and slight variations in the outcome measures are possible. New tests check whether parameters are correctly identified as sensitive.
Make tests less sensitive to random variability in the sensitivity criterion. Also indicate the random variability influence in the documentation.
Use JuliaFormatter
Add more detailed explanation of f(Y) function signature, and fix typos in math.
Add description of returned KSRankResult struct
Change math $ into ``
Renamed KS_Rank to RSA, in line with how the method is known in the literature.
fix compatibility issues with SortingAlgorithms.jl
Bump lower bound of StatsBase to 0.33.7
Modify default argument of n_dummy_parameters to 10
Add mutual information sensitivity analysis method from Lüdtke, N., Panzeri, S., Brown, M., Broomhead, D. S., Knowles, J., Montemurro, M. A., & Kell, D. B. (2007). Information-theoretic sensitivity analysis: a general method for credit assignment in complex networks. Journal of The Royal Society Interface, 5(19), 223–235. https://doi.org/10.1098/RSIF.2007.1079
As `ComplexityMeasures` is being maintained, choose this dependency instead of `InformationMeasures` which has not been updated in 4 years.
Update docstring to include additional detail about the discretization entropy.
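The method added in this PR estimates first-order sensitivities from the mutual information I(X_i; Y) between each input and the model output, normalized by the output entropy H(Y). The sketch below is not GlobalSensitivity.jl's actual implementation (which builds on ComplexityMeasures.jl in Julia); it is a minimal Python illustration of the histogram (plug-in) estimator idea, using the standard Ishigami test function with an appended dummy parameter. All function names here are illustrative.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram (plug-in) estimate of I(X; Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def mi_sensitivities(X, y, bins=10):
    """First-order index per parameter: I(X_i; Y) normalized by H(Y)."""
    p, _ = np.histogram(y, bins=bins)
    p = p / p.sum()
    h_y = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([mutual_information(X[:, i], y, bins) / h_y
                     for i in range(X.shape[1])])

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(4096, 4))
# Ishigami function: only the first three inputs matter; the fourth is a dummy.
y = (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1])**2
     + 0.1 * X[:, 2]**4 * np.sin(X[:, 0]))
s = mi_sensitivities(X, y)
```

Note that the dummy parameter's estimate is small but not exactly zero: plug-in MI estimators are positively biased, which is why the review discussion below centers on comparing estimates against bootstrap/permutation bounds rather than against zero.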
@Vaibhavdixit02 (Member) left a comment

Bump the version to 2.7 in the PR so we can do a release right after we merge this

Also apply formatter and update readme

res1.S1_Conf_Int

@test res1.S1≈[0.1416, 0.1929, 0.1204, 0.0925] atol=1e-3
Member:

These look significantly different from the usual Sobol values. In particular, isn't the fourth parameter having such a large value concerning?

Member:

Given that the fourth parameter doesn't show up in the model at all, it should be 0 regardless of other values, or at least close to it.

Contributor Author:

If you compute the confidence bounds using more than one bootstrap sample, the fourth parameter falls within the bounds, so this method indicates it's not sensitive. Should I add this to the docs?

Contributor:

The mutual information method is only meaningful when compared to its bounds. I am not sure how useful it is in practice to have the bootstrapping default be 1 instead of 1000. MI values can vary a lot depending on the marginal distributions of x and y, irrespective of their joint distribution.
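The point about bounds can be made concrete with a permutation null: shuffling x destroys any x–y association, so the upper quantile of MI over many shuffles gives a threshold that an estimate must clear before a parameter is called sensitive. This is a hedged Python sketch of that idea, not the package's bootstrap code; the function names and the `n_resamples`/`alpha` parameters are illustrative.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram (plug-in) estimate of I(X; Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def mi_null_bound(x, y, n_resamples=1000, alpha=0.05, bins=10, seed=0):
    """Upper (1 - alpha) quantile of MI under independence, obtained
    by shuffling x to destroy any association with y."""
    rng = np.random.default_rng(seed)
    null = [mutual_information(rng.permutation(x), y, bins)
            for _ in range(n_resamples)]
    return float(np.quantile(null, 1 - alpha))

rng = np.random.default_rng(1)
x_active = rng.normal(size=2000)           # drives the output
x_dummy = rng.normal(size=2000)            # unrelated to the output
y = x_active + 0.1 * rng.normal(size=2000)

bound = mi_null_bound(x_dummy, y, n_resamples=200)
mi_active = mutual_information(x_active, y)
mi_dummy = mutual_information(x_dummy, y)
# mi_active sits far above the null bound, while mi_dummy typically
# does not, so only x_active would be flagged as sensitive.
```

With a single resample the quantile is just one noisy draw, which is why a default closer to 1000 makes the bound (and hence the sensitivity call) far more stable.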

Contributor Author:

I've removed the second- and total-order indices, as they didn't make sense based on what I computed and what I knew from the Sobol indices. I've now adjusted for the upper confidence bound and reduced the method to its most basic form; the other indices can be left for a later implementation.

Second-order and total-order indices give some weird results. For now, I've implemented a very basic version of the first-order sensitivities using mutual information. The outcomes also correspond to the Sobol first-order outcomes.
@Vaibhavdixit02 (Member):

Don't know why Documenter can't find the version of ComplexityMeasures you want to use here?

@Vaibhavdixit02 Vaibhavdixit02 merged commit 04b16fe into SciML:master Aug 12, 2024
10 of 11 checks passed