Does anyone know how to enable batched inference using a `TokensPrompt` as input instead of text?

Declaring a list of token-ID lists, the way you would with a list of text prompts, doesn't work. For example:

```python
tp = TokensPrompt({"prompt_token_ids": [[1, 2], [1, 2]]})
model.generate(tp, sampling_params)
```

fails with:

```
TypeError: '>' not supported between instances of 'list' and 'int'
```
However, passing a list of strings as input runs batched inference correctly.
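For reference, here is a minimal sketch of the two cases; the model name and sampling parameters are just placeholders:

```python
from vllm import LLM, SamplingParams
from vllm.inputs import TokensPrompt

# Placeholder model and sampling settings, for illustration only.
model = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(max_tokens=16)

# Batched inference with a list of text prompts works as expected.
outputs = model.generate(["Hello world", "Goodbye world"], sampling_params)

# The equivalent attempt with a single TokensPrompt holding a list of
# token-ID lists raises the TypeError described above.
tp = TokensPrompt({"prompt_token_ids": [[1, 2], [1, 2]]})
outputs = model.generate(tp, sampling_params)
```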
Thanks!