
Fix device handling and logits concatenation in OliveEvaluator #1615

Open · tezheng wants to merge 1 commit into `main` from `fix/eval_support_dict_as_output`
Conversation

@tezheng (Contributor) commented on Feb 14, 2025

  • Add exception handling for unsupported devices in the `device_string_to_torch_device` method.
  • Correct logits concatenation in `OnnxEvaluator` by using `logits_dict` instead of `logits`.
  • Initialize `logits_dict` in `PyTorchEvaluator` to handle different result types.
  • Update the `_inference` method in `PyTorchEvaluator` to handle different result types and concatenate logits correctly.
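As a rough illustration of the first bullet, the device mapping with a fallback might look like the sketch below. The `Device` enum here is a stand-in for Olive's enum (its members and values are assumed for illustration), and the CPU fallback on `RuntimeError` is one plausible shape for the exception handling, not the PR's exact code.

```python
from enum import Enum

import torch


class Device(str, Enum):
    """Stand-in for Olive's Device enum; values assumed for illustration."""
    CPU = "cpu"
    GPU = "gpu"
    NPU = "npu"


def device_string_to_torch_device(device: Device) -> torch.device:
    # Map GPU to CUDA; pass other device strings through to torch,
    # falling back to CPU when torch does not recognize the device type
    # (e.g. "npu" on builds without a renamed private backend).
    try:
        return torch.device("cuda") if device == Device.GPU else torch.device(device)
    except RuntimeError:
        return torch.device("cpu")
```

Constructing a `torch.device` object does not require the hardware to be present, so the GPU branch is safe even on CPU-only machines.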

Describe your changes

Checklist before requesting a review

  • Add unit tests for this change.
  • Make sure all tests can pass.
  • Update documents if necessary.
  • Lint and apply fixes to your code by running `lintrunner -a`
  • Is this a user-facing change? If yes, give a description of this change to be included in the release notes.
  • Is this PR including examples changes? If yes, please remember to update example documentation in a follow-up PR.

(Optional) Issue link

@tezheng tezheng force-pushed the fix/eval_support_dict_as_output branch from 306da8e to 09138a8 on February 14, 2025 16:52
@tezheng tezheng marked this pull request as ready for review February 14, 2025 16:58
```diff
@@ -196,7 +196,11 @@ def compute_throughput(metric: Metric, latencies: Any) -> MetricResult:
 class _OliveEvaluator(OliveEvaluator):
     @staticmethod
     def device_string_to_torch_device(device: Device):
-        return torch.device("cuda") if device == Device.GPU else torch.device(device)
+        try:
```
Contributor: Maybe we can just do `torch.device("cuda") if device == Device.GPU else torch.device("cpu")`. Otherwise, this would raise the warning for npu targets every time. Since this method is only used when evaluating PyTorch models, I think it's safe to just return cuda or cpu only.

Contributor (Author): So we will only consider cuda and cpu, for now at least? Are we planning to support devices like Apple mps?

Contributor: Yes, there are no plans for other torch devices like mps for now.

Comment on lines +756 to +762
```python
if isinstance(result, torch.Tensor):
    logits.append(result.cpu())
elif isinstance(result, (list, tuple)):
    logits.append([r.cpu() for r in result])
elif isinstance(result, dict):
    for k in result:
        logits_dict[k].append(result[k].cpu())
```
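After this per-batch accumulation, the PR's "concatenate logits correctly" step presumably merges the lists into whole-dataset tensors. A minimal sketch of that merge, assuming `concat_logits` is a hypothetical helper (not a name from the PR diff) and that `logits_dict` behaves like a `defaultdict(list)`:

```python
import torch


def concat_logits(logits, logits_dict):
    """Hypothetical helper: merge per-batch results into one structure.

    `logits` holds per-batch tensors (or tuples of tensors);
    `logits_dict` maps output names to lists of per-batch tensors.
    """
    if logits_dict:
        # dict outputs: concatenate each named output along the batch axis
        return {k: torch.cat(v, dim=0) for k, v in logits_dict.items()}
    if logits and isinstance(logits[0], (list, tuple)):
        # tuple/list outputs: concatenate position-wise across batches
        return tuple(torch.cat(ts, dim=0) for ts in zip(*logits))
    return torch.cat(logits, dim=0)
```

Concatenating along `dim=0` assumes the batch dimension comes first, which holds for typical logits of shape `(batch, ...)`.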
Contributor: How can we handle the case when `result` has a `logits` attribute?

Contributor (Author): You mean a `result` like `{"logits": tensor}`?
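If the reviewer instead means an object carrying a `logits` attribute (rather than a plain dict), one way the branching could be extended is sketched below. `extract_batch_output` is a hypothetical name, and this is a possible resolution of the thread, not code from the PR:

```python
import torch


def extract_batch_output(result):
    """Hypothetical sketch: normalize one batch of model output.

    Handles plain tensors, sequences, dicts, and objects exposing a
    `logits` attribute; everything else is rejected explicitly.
    """
    if isinstance(result, torch.Tensor):
        return result.cpu()
    if isinstance(result, (list, tuple)):
        return [r.cpu() for r in result]
    if isinstance(result, dict):
        return {k: v.cpu() for k, v in result.items()}
    if hasattr(result, "logits"):
        return result.logits.cpu()
    raise TypeError(f"Unsupported model output type: {type(result)!r}")
```

Note that outputs which are both dict-like and carry `.logits` would hit the dict branch first, so the attribute check only catches non-mapping objects.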
