
feat!: Implement covariance-based impact calculations #515

Open · wants to merge 22 commits into base: master
Conversation

@MoAly98 (Collaborator) commented Mar 6, 2025

This addresses #513.

To-do for this:

  • Save the impacts method used in ranking results as metadata
  • Create alternative-looking plots for the method
  • Add CLI support for method specification
  • Add the pure statistical uncertainty impact to the impacts summary

Missing coverage is expected in this PR, since some features are implemented but not yet exposed in the API:

  • Impacts per modifier type are not tested
  • Impact summaries are not tested

These should be addressed in a follow-up PR where the API is exposed and tests are added. That could well be a PR of its own, in which we return a grouped impact table/plot.
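For illustration, a grouped impacts summary like the one described could be a simple mapping from modifier category to total impact on the POI. All names and numbers below are hypothetical, not the cabinetry API:

```python
# Hypothetical shape of a grouped impacts summary (illustrative only,
# not cabinetry output): total impact on the POI per modifier category.
impacts_summary = {
    "systematic": 0.21,
    "statistical": 0.12,
    "normfactors": 0.05,
    "mc_stats": 0.03,
}

# naive quadrature combination of the groups, ignoring correlations
total = sum(v**2 for v in impacts_summary.values()) ** 0.5
print(round(total, 3))  # -> 0.249
```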

* BREAKING CHANGE: the default ranking method switches to covariance-based
* Add an option in `fit.ranking` to specify which impact-calculation method should be used, supporting the NP-shifting and covariance-based methods
* Add a placeholder (no implementation yet) for computing impacts by shifting global observables (aux data)
* Modularised the computation of impacts using the different methods
* Return an impacts summary (total systematic / statistical / normfactor / MC-stats impacts) from impact calculations
* All impact calculations return the parameter impacts for each modifier type, as well as the combined list of impacts across all modifier types
* The `RankingResults` object now records which impact method was used
* A different plotting aesthetic is implemented for covariance-based impacts
* Ranking plot files now carry the impact method used as a suffix
* The impacts computation method can be specified from the CLI
* StatOnly impacts are computed and stored in the impacts summary
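The covariance-based method can be sketched as follows (a minimal illustration with made-up numbers, not the cabinetry implementation): the impact of a nuisance parameter theta on the POI mu is read off the post-fit covariance matrix as cov(mu, theta) / sigma(theta), equivalently rho(mu, theta) * sigma(mu).

```python
import numpy as np

# Made-up post-fit covariance matrix for (mu, theta_1, theta_2);
# illustrative only, not cabinetry output.
cov = np.array(
    [
        [0.25, 0.03, -0.01],
        [0.03, 0.04, 0.005],
        [-0.01, 0.005, 0.09],
    ]
)
poi_index = 0

# impact of a +1 sigma shift of each parameter on the POI:
# cov(mu, theta) / sigma(theta)
sigma = np.sqrt(np.diag(cov))
impacts = cov[poi_index, :] / sigma

# drop the POI's own entry and rank nuisance parameters by |impact|
np_impacts = np.delete(impacts, poi_index)
order = np.argsort(np.abs(np_impacts))[::-1]
print(np_impacts[order])  # largest-impact nuisance parameter first
```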

@MoAly98 MoAly98 self-assigned this Mar 6, 2025
@MoAly98 MoAly98 marked this pull request as draft March 6, 2025 13:18

codecov bot commented Mar 6, 2025

Codecov Report

Attention: Patch coverage is 95.52239% with 6 lines in your changes missing coverage. Please review.

Project coverage is 99.72%. Comparing base (22fc9b2) to head (e0351eb).

Files with missing lines Patch % Lines
src/cabinetry/visualize/plot_result.py 70.58% 2 Missing and 3 partials ⚠️
src/cabinetry/fit/__init__.py 99.11% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##            master     #515      +/-   ##
===========================================
- Coverage   100.00%   99.72%   -0.28%     
===========================================
  Files           22       22              
  Lines         2096     2194      +98     
  Branches       347      378      +31     
===========================================
+ Hits          2096     2188      +92     
- Misses           0        2       +2     
- Partials         0        4       +4     


@MoAly98 MoAly98 requested a review from alexander-held March 7, 2025 14:33
@MoAly98 MoAly98 marked this pull request as ready for review March 7, 2025 14:33
@MoAly98 MoAly98 changed the title Implement covariance-based impact calculations feat!: Implement covariance-based impact calculations Mar 7, 2025
@MoAly98 (Collaborator, Author) commented Mar 8, 2025

@alexander-held I am confused by the codecov report on the ranking plot. The uncovered lines in `plot_result.ranking` are definitely covered -- could this be due to using `xfail`? I don't see these issues in my local coverage report.

@MoAly98 (Collaborator, Author) commented Mar 9, 2025

To avoid inflating this PR, I will branch off this branch to continue development of the GO-shifting method. I will also use a branch off this one to further utilise the impacts summary and expose it to the user. Ideally, I would then prefer not to squash the history in this PR.

This PR should also be updated with the changes from #512, which use MINOS uncertainties for post-fit impact evaluations when available, instead of the symmetric result from the Hessian.
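The MINOS-over-Hessian preference could look roughly like this (function and argument names are hypothetical, not the #512 implementation):

```python
# Hypothetical helper (not the cabinetry API): pick the +/- parameter shifts
# used for post-fit impacts, preferring asymmetric MINOS uncertainties when
# available and falling back to the symmetric Hessian uncertainty.
def shift_range(best_fit, hessian_unc, minos_unc=None):
    """Return (up, down) parameter values for the post-fit impact shifts."""
    if minos_unc is not None:
        unc_up, unc_down = minos_unc  # MINOS errors, e.g. (+0.12, -0.08)
        return best_fit + unc_up, best_fit + unc_down
    return best_fit + hessian_unc, best_fit - hessian_unc

print(shift_range(1.0, 0.1))                 # symmetric fallback
print(shift_range(1.0, 0.1, (0.12, -0.08)))  # asymmetric MINOS shifts
```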
