Florence2 workflows block #661
base: main
Conversation
LGTM, minor comments - if possible let's plan next steps for hosted platform deployment
""" | ||
|
||
TaskType = Literal[ | ||
"<OCR>", |
This is a change in favour of clients using the UI. It would probably decrease code readability, but I would still choose that tradeoff - let's change the literal entries into this style:
inference/inference/core/workflows/core_steps/models/foundation/google_gemini/v1.py
Line 76 in aa5b4b0
TaskType = Literal[
i.e. `ocr-with-region` instead of `<OCR_WITH_REGION>`, and convert it back in prompting with `f"<{value.upper().replace('-', '_')}>"`
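The suggested renaming can be sketched as below. The task names and helper function here are illustrative, not the block's actual definitions; only the conversion expression comes from the comment above:

```python
from typing import Literal

# Human-friendly task names exposed in the workflow UI (illustrative subset)
TaskType = Literal[
    "ocr",
    "ocr-with-region",
    "caption",
]


def task_type_to_florence_prompt(value: str) -> str:
    """Convert a UI-friendly task name back into a Florence-2 task token,
    e.g. "ocr-with-region" -> "<OCR_WITH_REGION>"."""
    return f"<{value.upper().replace('-', '_')}>"


print(task_type_to_florence_prompt("ocr-with-region"))  # <OCR_WITH_REGION>
```

This keeps the angle-bracketed Florence-2 tokens out of the user-facing schema while the prompting layer reconstructs them deterministically.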
Sorry, this was still a draft -- I added legible English descriptions and mapped them to tasks... is that ok?
)
type: Literal["roboflow_core/florence_2@v1"]
images: Union[WorkflowImageSelector, StepOutputImageSelector] = ImageInputField
task_type: TaskType = Field(
let's change it into this style: #659, following what @EmilyGavrilenko did
    prompt=prompt,
)
elif self._step_execution_mode is StepExecutionMode.REMOTE:
    raise NotImplementedError(
Just an exploratory question - am I correct that, for the base model, we could run it in the core models lambda, given that a special endpoint is created?
Yes, I believe so. We'd also need to enable it with an env var. It might be pretty dang slow depending on the task.
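The gating discussed here can be sketched as follows. The env-var name `FLORENCE2_REMOTE_ENABLED` and the function names are assumptions for illustration, not the repo's actual configuration:

```python
import os


def remote_florence2_enabled() -> bool:
    # Hypothetical env var gating remote execution; name is an assumption.
    return os.getenv("FLORENCE2_REMOTE_ENABLED", "false").lower() == "true"


def run_florence2_remotely(task_type: str) -> dict:
    """Sketch of the REMOTE branch: raise until the hosted core-models
    endpoint exists and the feature flag is turned on."""
    if not remote_florence2_enabled():
        raise NotImplementedError(
            "Remote execution of Florence-2 is not enabled; "
            "a dedicated core-models endpoint is required."
        )
    # ... here we would call the hosted core-models endpoint ...
    return {"task_type": task_type}
```

With the flag unset, the REMOTE path fails fast with `NotImplementedError`, matching the behaviour in the diff above.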
And if possible, could you add support in the parsing block - VLM as detector?
Sorry, misclick.
Description
Adds the Florence-2 model block and adds Florence-2 parsing to the VLM_as_detector block.
Type of change
How has this change been tested? Please provide a test case or example of how you tested the change.
Deployed locally and tested with prod workflows UI
Any specific deployment considerations
No
Docs