feat: Add Minuit instance to FitResults object #504
base: master
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff            @@
##            master      #504   +/-  ##
==========================================
  Coverage    100.00%   100.00%
==========================================
  Files            22        22
  Lines          2094      2096     +2
  Branches        347       347
==========================================
+ Hits          2094      2096     +2

☔ View full report in Codecov by Sentry.
@alexander-held any thoughts on whether additional tests are needed to validate the fit results container? I am not sure what kind of test would be useful in the cases where we set the FitResults container manually.
I implemented this as an optional argument, just so that it does not break existing usage of the FitResults container.
@@ -27,6 +29,7 @@ class FitResults(NamedTuple):
     best_twice_nll: float
     goodness_of_fit: float = -1
     minos_uncertainty: Dict[str, Tuple[float, float]] = {}
+    minuit_obj: Optional[iminuit.Minuit] = None
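For context, a minimal sketch of what the container looks like with the new field. Only the fields visible in this diff and in the test snippets below (bestfit, uncertainty, labels) are grounded in the PR; the actual class in cabinetry may carry additional fields.

```python
# Sketch only: reconstructed from the diff above and the tests below,
# not the full cabinetry definition.
from typing import Dict, List, NamedTuple, Optional, Tuple

import iminuit
import numpy as np


class FitResults(NamedTuple):
    bestfit: np.ndarray      # best-fit parameter values (seen in tests below)
    uncertainty: np.ndarray  # parameter uncertainties (seen in tests below)
    labels: List[str]        # parameter names (seen in tests below)
    best_twice_nll: float
    goodness_of_fit: float = -1
    minos_uncertainty: Dict[str, Tuple[float, float]] = {}
    minuit_obj: Optional[iminuit.Minuit] = None  # new field added in this PR
```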
I think we can make this a required argument instead of an optional one: cabinetry (currently at least) only works with Minuit as optimizer (as opposed to pyhf, which also allows scipy), so this is always present.
I implemented it this way so that it does not break anything for anyone who uses the container directly, and to give us the flexibility of not having to pass it in the future. Would you disagree?
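Continuing the sketch above, this is roughly what the default buys in terms of backward compatibility: call sites that predate the change construct the tuple without the new field, and consumers have to guard against the None case. The concrete values mirror the mocked test values below; best_twice_nll=2.0 is a placeholder, not from the PR.

```python
# Existing call sites keep working because minuit_obj defaults to None.
fit_results = FitResults(
    bestfit=np.asarray([1.0]),
    uncertainty=np.asarray([0.1]),
    labels=["par"],
    best_twice_nll=2.0,  # placeholder value, not from the PR
)

# Consumers that want the Minuit instance must handle the None case.
if fit_results.minuit_obj is not None:
    print(fit_results.minuit_obj.fmin)  # e.g. inspect the function minimum
```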
@@ -434,7 +513,7 @@ def test_fit(mock_fit, mock_print, mock_gof):
     assert mock_print.call_args[0][0].uncertainty == [0.1]
     assert mock_print.call_args[0][0].labels == ["par"]
     assert mock_gof.call_count == 0
-    assert fit_results.bestfit == [1.0]
+    assert np.allclose(fit_results.bestfit, [1.0])
I'm curious about these changes: fit_results is mocked here, so this should be exactly 1.0. Did this cause problems?
I only changed it to be consistent with the rest of the tests. There are other cases where an equality check would suffice, but the tests are written with numpy.allclose.
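For readers wondering what the consistency buys: comparing a multi-element numpy array with == is elementwise, and a bare assert on the result raises, while numpy.allclose reduces to a single bool and tolerates floating-point noise. A standalone illustration, not code from the PR:

```python
import numpy as np

bestfit = np.asarray([1.0, 2.0])

# Elementwise == returns an array; asserting on it raises
# "ValueError: The truth value of an array ... is ambiguous":
#     assert bestfit == [1.0, 2.0]

# np.allclose reduces to a single bool and allows float rounding error.
assert np.allclose(bestfit, [1.0, 2.0])
```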
Revert "…om pyhf model" (this reverts commit bf99f0f).
@alexander-held I think these comments are now resolved and this is not a breaking change. Any other issues with it, or should we update and merge?
Addresses #503.