Mutual Information Sensitivity #176
Conversation
add raw string annotation
KSRank is stochastic and slight variations in the outcome measures are possible. New tests check whether parameters are correctly identified as sensitive.
Make tests less sensitive to random variability in the sensitivity criterion. Also indicate the random variability influence in the documentation.
Use JuliaFormatter
Add more detailed explanation of f(Y) function signature, and fix typos in math.
Add description of returned KSRankResult struct
Add @doc signature
Change math $ into ``
Use correct link
Renamed KS_Rank to RSA in line with how the method is known in literature.
fix compatibility issues with SortingAlgorithms.jl
Bump lower bound of StatsBase to 0.33.7
Modify default argument of n_dummy_parameters to 10
Add mutual information sensitivity analysis method from Lüdtke, N., Panzeri, S., Brown, M., Broomhead, D. S., Knowles, J., Montemurro, M. A., & Kell, D. B. (2007). Information-theoretic sensitivity analysis: a general method for credit assignment in complex networks. Journal of The Royal Society Interface, 5(19), 223–235. https://doi.org/10.1098/RSIF.2007.1079
As `ComplexityMeasures` is being maintained, choose this dependency instead of `InformationMeasures` which has not been updated in 4 years.
Co-authored-by: George Datseris <[email protected]>
Update docstring to include additional detail about the discretization entropy.
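The core quantity behind the method added in this PR is the mutual information I(X_i; Y) between each input parameter and the model output, estimated after discretization. A minimal histogram-based sketch in Python/NumPy (the actual implementation is in Julia and relies on ComplexityMeasures; the toy model, bin count, and variable names below are illustrative only):

```python
import numpy as np

def mutual_information(x, y, n_bins=10):
    """Estimate I(X;Y) in bits from a 2-D histogram of samples."""
    joint, _, _ = np.histogram2d(x, y, bins=n_bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of X
    py = pxy.sum(axis=0, keepdims=True)  # marginal of Y
    nz = pxy > 0  # sum only over non-empty bins
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Toy model: Y depends on X1 strongly, X2 weakly, X3 not at all.
rng = np.random.default_rng(0)
X = rng.uniform(size=(10_000, 3))
Y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1]

sens = [mutual_information(X[:, i], Y) for i in range(3)]
```

Note that even the insensitive third parameter yields a small positive MI from finite-sample estimator bias, which is why the confidence bounds discussed below matter.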
Bump the version to 2.7 in the PR so we can do a release right after we merge this
Also apply formatter and update readme
test/mutual_information_method.jl (outdated)

    res1.S1_Conf_Int
    @test res1.S1≈[0.1416, 0.1929, 0.1204, 0.0925] atol=1e-3
These look significantly different from the usual Sobol values; the fourth parameter having such a large value is especially concerning.
Given that the fourth parameter doesn't show up in the model at all, it should be 0 regardless of other values, or at least close to it.
If you compute the confidence bounds using more than 1 bootstrap sample, the 4th parameter is within the bounds, so this method indicates it's not sensitive. Should I add this to the docs?
The mutual information method only has meaning when compared to its bounds. I am not sure how useful it is in practice to have the bootstrapping default to 1 instead of 1000. MI values can vary a lot depending on the distributions of x and y, irrespective of their joint distribution.
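The bound comparison described here can be illustrated outside the package: shuffling an input destroys any dependence on the output, so the MI surviving the shuffle is pure estimator bias/noise, and a high quantile of that null distribution serves as the significance bound. A Python/NumPy sketch (the toy model, function names, and defaults are illustrative assumptions, not the package API):

```python
import numpy as np

def mi_hist(x, y, n_bins=10):
    """Histogram estimate of I(X;Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=n_bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

def permutation_bound(x, y, n_shuffles=1000, quantile=0.95, rng=None):
    """Quantile of the MI null distribution obtained by shuffling x,
    i.e. the MI level expected from an insensitive (dummy) input."""
    rng = np.random.default_rng(rng)
    null = [mi_hist(rng.permutation(x), y) for _ in range(n_shuffles)]
    return float(np.quantile(null, quantile))

rng = np.random.default_rng(1)
x_active = rng.uniform(size=5000)
x_dummy = rng.uniform(size=5000)
y = np.sin(2 * np.pi * x_active) + 0.05 * rng.normal(size=5000)

bound = permutation_bound(x_dummy, y, n_shuffles=200, rng=2)
# The active input exceeds the null bound by a wide margin; the dummy
# input's MI will typically fall at or below it.
active_sensitive = mi_hist(x_active, y) > bound
```

With few shuffles (here 200) the bound itself is noisy, which is the concern raised above about defaulting the bootstrap count to 1 instead of 1000.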
I've removed the second and total order indices, as they didn't make sense based on what I computed and knew from the Sobol indices. I've now adjusted for the upper confidence bounds and reduced this method to its most basic form; the other indices can be left for a later implementation.
Second order and total order give some weird results. For now, I've implemented a very basic version of the first-order sensitivities using mutual information. The outcomes also correspond to the Sobol first-order outcomes.
Don't know why the documenter can't find the version of ComplexityMeasures you want to use here?
Checklist
contributor guidelines, in particular the SciML Style Guide and COLPRAC.
Additional context
I attempted an initial implementation of #175 and it seems to work, but it should be checked for correctness.