The current implementation removes any rows that contain a NaN in either the measured or predicted rates prior to downsampling. In addition, when running the command-line tool to score an entire dataset, we take the nanmedian of scores across neurons. In testing I occasionally found NaNs in the output of the optimization function for the log-likelihood computation for a single neuron; I wasn't entirely sure why, so it's probably worth tracking down.
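For reference, the two behaviors described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation; the helper names (`drop_nan_rows`, `aggregate_scores`) are hypothetical.

```python
import numpy as np

def drop_nan_rows(measured, predicted):
    # Hypothetical helper: keep only rows where both the measured and
    # predicted rates are non-NaN, before any downsampling happens.
    mask = ~(np.isnan(measured) | np.isnan(predicted))
    return measured[mask], predicted[mask]

def aggregate_scores(per_neuron_scores):
    # Hypothetical helper: nanmedian ignores NaN scores produced for
    # individual neurons (e.g. by the log-likelihood optimization).
    return np.nanmedian(per_neuron_scores)

measured = np.array([1.0, np.nan, 3.0, 4.0])
predicted = np.array([0.9, 2.1, np.nan, 4.2])
m, p = drop_nan_rows(measured, predicted)
# rows 1 and 2 are dropped: m == [1.0, 4.0], p == [0.9, 4.2]

scores = np.array([0.8, np.nan, 0.6])
print(aggregate_scores(scores))  # median over the two finite scores
```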
We should decide how NaNs in a submission should be handled (or how many to allow) and implement that policy.