Releases: roboflow/inference
v0.29.1
🛠️ Fixed
python-multipart security issue fixed
Caution
We are patching the following vulnerability, recently detected in the python-multipart library.
Issue summary
When parsing form data, python-multipart skips line breaks (CR \r or LF \n) in front of the first boundary and any trailing bytes after the last boundary. This happens one byte at a time and emits a log event each time, which may cause excessive logging for certain inputs.
An attacker could abuse this by sending a malicious request with lots of data before the first or after the last boundary, causing high CPU load and stalling the processing thread for a significant amount of time. In the case of an ASGI application, this could stall the event loop and prevent other requests from being processed, resulting in a denial of service (DoS).
Impact
Applications that use python-multipart to parse form data (or use frameworks that do so) are affected.
Next steps
We advise all inference clients to migrate to version 0.29.1, especially when the inference Docker image is in use. Clients using older versions of the Python package may also upgrade the vulnerable dependency in their environment:
pip install "python-multipart==0.0.19"
Details of the change: #855
Remaining fixes
- Fix problem with docs rendering by @PawelPeczek-Roboflow in #854
- Remove piexif dependency by @iurisilvio in #851
Full Changelog: v0.29.0...v0.29.1
v0.29.0
🚀 Added
📧 Slack and Twilio notifications in Workflows
We've just added two notification blocks to the Workflows ecosystem - Slack and Twilio. Now, there is nothing that can stop you from sending notifications from your Workflows!
slack_notification.mp4
inference-cli 🤝 Workflows
We are happy to share that inference-cli now has a new command - inference workflows - that makes it possible to process data with Workflows without any additional Python scripts needed 😄
🎥 Video files processing
- Input a video path, specify an output directory, and run any workflow.
- Frame-by-frame results saved as CSV or JSONL.
- Your Workflow outputs images? Get an output video built from them if you want.
🖼️ Process images and directories of images 📂
- Outputs stored in subdirectories with JSONL/CSV aggregation available.
- Fault-tolerant processing:
- ✅ Resume after failure (tracked in logs).
- 🔄 Option to force re-processing.
Review our 📖 docs to discover all options!
👉 Try the command
To try the command, simply run:
pip install inference
inference workflows process-images-directory \
-i {your_input_directory} \
-o {your_output_directory} \
--workspace_name {your-roboflow-workspace-url} \
--workflow_id {your-workflow-id} \
--api-key {your_roboflow_api_key}
Screen.Recording.2024-11-26.at.18.19.23.mov
🔑 Secrets provider block in Workflows
Many Workflows blocks require credentials to work correctly, but so far, the ecosystem only provided one secure option for passing those credentials - workflow parameters, forcing client applications to manipulate secret values.
Since this is not a handy solution, we decided to create the Environment Secrets Store block, which is capable of fetching credentials from environment variables of the inference server. Thanks to that, admins can now set up the server and client code no longer needs to handle secrets ✨
⚠️ Security Notice:
For enhanced security, always use secret providers or Workflow parameters to handle credentials. Hardcoding secrets into your Workflows is strongly discouraged.
🔒 Limitations:
This block is designed for self-hosted inference servers only. Due to security concerns, exporting environment variables is not supported on the hosted Roboflow Platform.
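For illustration, a minimal sketch of how this can look on a self-hosted server - the variable name below is an example, not something the block requires:
# Export the credential on the machine running the self-hosted inference server so the
# Environment Secrets Store block can read it at runtime (variable name is illustrative).
export MY_SLACK_TOKEN="xoxb-example-token"
inference server start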
🌐 OPC Workflow block 📡
The OPC Writer block provides a versatile set of integration options that enable enterprises to seamlessly connect with OPC-compliant systems and incorporate real-time data transfer into their workflows. Here’s how you can leverage the block’s flexibility for various integration scenarios that industry-class solutions require.
✨ Key features
- Seamless OPC Integration: Easily send data to OPC servers, whether on local networks or cloud environments, ensuring your workflows can interface with industrial control systems, IoT devices, and SCADA systems.
- Cross-Platform Connectivity: Built with asyncua, the block enables smooth communication across multiple platforms, enabling integration with existing infrastructure and ensuring compatibility with a wide range of OPC standards.
Important
This Workflow block is released under Roboflow Enterprise License and is not available by default on Roboflow Hosted Platform.
Anyone interested in integrating Workflows with industry systems through OPC - please contact Roboflow Sales
See @grzegorz-roboflow's change in #842
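For a flavour of the underlying technology, here is a minimal asyncua sketch (not the Workflow block itself) that writes a single value to an OPC UA node - the endpoint URL and node id are placeholders:
# Minimal asyncua example: write one value to an OPC UA node.
# The endpoint URL and node id are placeholders for your own OPC server.
import asyncio
from asyncua import Client

async def write_value() -> None:
    async with Client(url="opc.tcp://192.168.0.10:4840") as client:
        node = client.get_node("ns=2;i=2")
        await node.write_value(42)

asyncio.run(write_value())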
🛠️ Fixed
Workflows Execution Engine v1.4.0
- New Kind: A secret kind for credentials is now available. No action needed for existing blocks, but future blocks should use it for secret parameters.
- Serialization Fix: Fixed a bug where non-batch outputs weren't being serialized in v1.3.0.
- Execution Engine Fix: Resolved an issue with empty inputs being passed to downstream blocks. This update ensures smoother workflow execution and may fix previous issues without any changes needed.
See full changelog for more details.
🚧 Changed
Open Workflows on Roboflow Platform
We are moving towards shareable Workflow Definitions on Roboflow Platform - to reflect that, @yeldarby made the api_key optional in Workflows Run requests in #843
⛑️ Maintenance
- Update Docker Tag Logic by @alexnorell in #840
- Make check_if_branch_is_mergeable.yml to succeed if merging to main by @grzegorz-roboflow in #848
- Add workflow to check mergeable state executed on pull request by @grzegorz-roboflow in #847
Full Changelog: v0.28.2...v0.29.0
v0.28.2
🔧 Fixed issue with inference package installation
On 26.11.2024 there was a release 0.20.4 of the tokenizers library - a transitive dependency of inference - which introduced a breaking change for those inference clients who use Python 3.8, causing the following errors while installing recent (and older) versions of inference:
👉 MacOS
Downloading tokenizers-0.20.4.tar.gz (343 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details
👉 Linux
After installation, the following error was presented
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1778: in _get_module
return importlib.import_module("." + module_name, self.__name__)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1014: in _gcd_import
???
<frozen importlib._bootstrap>:991: in _find_and_load
???
<frozen importlib._bootstrap>:961: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:219: in _call_with_frames_removed
???
<frozen importlib._bootstrap>:1014: in _gcd_import
???
<frozen importlib._bootstrap>:991: in _find_and_load
???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:671: in _load_unlocked
???
<frozen importlib._bootstrap_external>:843: in exec_module
???
<frozen importlib._bootstrap>:219: in _call_with_frames_removed
???
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/models/__init__.py:15: in <module>
from . import (
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/models/mt5/__init__.py:36: in <module>
from ..t5.tokenization_t5_fast import T5TokenizerFast
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py:23: in <module>
from ...tokenization_utils_fast import PreTrainedTokenizerFast
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py:26: in <module>
import tokenizers.pre_tokenizers as pre_tokenizers_fast
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/tokenizers/__init__.py:78: in <module>
from .tokenizers import (
E ImportError: /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/tokenizers/tokenizers.abi3.so: undefined symbol: PyInterpreterState_Get
The above exception was the direct cause of the following exception:
tests/inference/models_predictions_tests/test_owlv2.py:4: in <module>
from inference.models.owlv2.owlv2 import OwlV2
inference/models/owlv2/owlv2.py:11: in <module>
from transformers import Owlv2ForObjectDetection, Owlv2Processor
<frozen importlib._bootstrap>:1039: in _handle_fromlist
???
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1766: in __getattr__
module = self._get_module(self._class_to_module[name])
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1780: in _get_module
raise RuntimeError(
E RuntimeError: Failed to import transformers.models.owlv2 because of the following error (look up to see its traceback):
E /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/tokenizers/tokenizers.abi3.so: undefined symbol: PyInterpreterState_Get
Caution
We are fixing the problem in inference 0.28.2, but it is not possible to fix older releases - those who need a fix in their environments should modify their build so that, when installing inference, they also install tokenizers<=0.20.3:
pip install inference "tokenizers<=0.20.3"
🔧 Fixed issue with CUDA and stream management API
While running the inference server and using the stream management API to run Workflows against video inside a Docker container, it was not possible to use CUDA due to a bug present from the very start of the feature. This is now fixed.
Full Changelog: v0.28.1...v0.28.2
v0.28.1
🔧 Fixed broken Workflows loader
Caution
In 0.28.0 we had a bug causing this error:
ModuleNotFoundError: No module named 'inference.core.workflows.core_steps.sinks.roboflow.model_monitoring_inference_aggregator'
We've yanked version 0.28.0 of inference, inference-core, inference-cpu and inference-gpu, and we recommend that our clients upgrade.
What's Changed
- Add init.py to fix docs generation by @PawelPeczek-Roboflow in #830
- Add missing static landing page outputs by @PawelPeczek-Roboflow in #832
- Release ARM CPU builds by @alexnorell in #831
- Remove debug print from owlv2 by @alexnorell in #833
- Bump version to 0.28.1 by @PawelPeczek-Roboflow in #835
Full Changelog: v0.28.0...v0.28.1
v0.28.0
🚀 Added
🎥 New Video Processing Cookbook! 💪
We’re excited to introduce a new cookbook showcasing a custom video-processing use case: Creating a Video-Based Fitness Trainer! 🚀 This is not only a really nice example of how to use Roboflow tools, but also a great open-source community contribution from @Matvezy 🥹. Just take a look at the notebook.
gpt_coach_demo.mp4
🎯 Purpose
This cookbook demonstrates how inference enhances foundational models like GPT-4o by adding powerful vision capabilities for accurate, data-driven insights. Perfect for exploring fitness applications or custom video processing workflows.
🔍 What’s inside?
- 🏃 Body Keypoint Tracking: Use inference to detect and track body keypoints in real time.
- 📐 Joint Angle Calculation: Automatically compute and annotate joint angles on video frames (see the sketch after this list).
- 🤖 AI-Powered Fitness Advice: Integrates GPT to analyze movements and provide personalized fitness tips based on video data.
- 🛠️ Built with supervision for efficient annotation and processing.
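As a rough illustration of the joint-angle math (not code from the cookbook), the angle at a middle keypoint can be computed from three (x, y) points:
# Minimal sketch: angle (in degrees) at the middle keypoint, e.g. the elbow in a
# shoulder -> elbow -> wrist triplet. Coordinates below are illustrative.
import numpy as np

def joint_angle(a, b, c) -> float:
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

print(joint_angle((0.0, 0.0), (1.0, 0.0), (1.0, 1.0)))  # ~90.0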
✨ New Workflows Block for Model Monitoring! 📊
We’re thrilled to announce a new block that takes inference data reporting to the next level by integrating seamlessly with Roboflow Model Monitoring - all thanks to @robiscoding 🚀
Take a look at the 📖 documentation to learn more.
🏋️ Why use it?
- 🏭 Monitor your model processing video
- ⏱️ Track and validate model performance effortlessly over time
- 🔧 Gain insight into how to improve your models over time
🔧 Fixed
- Change the platform tests assertions to compensate for PR #798 by @PawelPeczek-Roboflow in #816
- Set hosted to True when running on dedicated deployment by @grzegorz-roboflow in #817
- Fix issue with Workflows blocks for Roboflow models v2 not using base64 by @PawelPeczek-Roboflow in #823
- Bug which turned out not to be bug by @PawelPeczek-Roboflow in #824
- Fix bug with primitive types parsing in Workflows by @PawelPeczek-Roboflow in #825
- Bump cross-spawn from 7.0.3 to 7.0.6 in /inference/landing in the npm_and_yarn group by @dependabot in #828
🏗️ Changed
- Handle internal roboflow service name env by @grzegorz-roboflow in #826
- Always include internal envs if set by @grzegorz-roboflow in #827
- Add support for preloading models by @alexnorell in #822
- Make TURN server config optional by @grzegorz-roboflow in #829
🏅 New Contributors
Full Changelog: v0.27.0...v0.28.0
v0.27.0
🚀 Added
🧠 Your own fine-tuned Florence 2 in Workflows 🔥
Have you been itching to dive into the world of Vision-Language Models (VLMs)? Maybe you've explored @SkalskiP's incredible tutorial on training your own VLM. Well, now you can take it a step further—train your own VLM directly on the Roboflow platform!
But that’s not all: thanks to @probicheaux, you can seamlessly integrate your VLM into Workflows for real-world applications.
Check out the 📖 docs and try it yourself!
Note
This Workflow block is not available on the Roboflow platform - you need to run the inference server on your own machine (preferably with a GPU).
pip install inference-cli
inference server start
🎨 Classification results visualisation in Workflows
The Workflows ecosystem offers a variety of blocks to visualize model predictions, but we’ve been missing a dedicated option for classification—until now! 🎉
Thanks to the incredible work of @reiffd7, we’re excited to introduce the Classification Label Visualization block to the ecosystem.
Dive in and bring your classification results to life! 🚀
🚧 Changes in ecosystem - Execution Engine v1.3.0 🚧
Tip
Changes introduced in Execution Engine v1.3.0 are non-breaking, but we shipped a couple of nice extensions and we encourage contributors to adopt them.
Full details of the changes and migration guides available here.
⚙️ Kinds with dynamic serializers and deserializers
- Added serializers/deserializers for each kind, enabling integration with external systems.
- Updated the Blocks Bundling page to reflect these changes.
- Enhanced roboflow_core kinds with suitable serializers/deserializers.
See our updated blocks bundling guide for more details.
🆓 Any data can now be a Workflow input
We've added a new Workflows input type - WorkflowBatchInput - which is capable of accepting any kind, unlike our previous inputs like WorkflowImage. What's even nicer - you can also specify the dimensionality level for WorkflowBatchInput, basically making it possible to break down each workflow into single steps executed in debug mode.
Take a look at 📖 docs to learn more
🏋️ Easier blocks development
We got tired of wondering whether a specific field in a block manifest should be marked with the StepOutputSelector, WorkflowImageSelector, StepOutputImageSelector or WorkflowParameterSelector type annotation. That was very confusing and effectively increased the difficulty of contributions.
Since the selector type annotations are required for the Execution Engine to know that a block defines placeholders for data of a specific kind, we could not eliminate those annotations, but we are making them easier to understand - introducing a generic annotation called Selector(...).
Selector(...) no longer tells the Execution Engine that the block accepts batch-oriented data - so we replaced the old block_manifest.accepts_batch_input() method with two new ones (sketched below):
- block_manifest.get_parameters_accepting_batches() - returns the list of params that the WorkflowBlock.run(...) method accepts wrapped in a Batch[X] container
- block_manifest.get_parameters_accepting_batches_and_scalars() - returns the list of params that the WorkflowBlock.run(...) method accepts either wrapped in a Batch[X] container or provided as stand-alone scalar values
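A purely illustrative sketch of these hooks - the class below is a stand-in, not the real WorkflowBlockManifest base class (see the blocks creation guide for the actual imports):
# Illustrative shape of the new manifest methods; ExampleManifest stands in for the
# real WorkflowBlockManifest base class shipped with the inference package.
from typing import List

class ExampleManifest:
    # Fields would be annotated with the generic Selector(...) instead of the old
    # StepOutputSelector / WorkflowImageSelector / StepOutputImageSelector variants.

    @classmethod
    def get_parameters_accepting_batches(cls) -> List[str]:
        # Params delivered to WorkflowBlock.run(...) wrapped in a Batch[X] container.
        return ["image"]

    @classmethod
    def get_parameters_accepting_batches_and_scalars(cls) -> List[str]:
        # Params delivered either as Batch[X] or as stand-alone scalar values.
        return ["reference_data"]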
Tip
To adopt changes while creating new block - visit our updated blocks creation guide.
To migrate existing blocks - take a look at migration guide.
🖌️ Increased JPEG compression quality
WorkflowImageData has a property called base64_image which is auto-generated from the numpy_image associated with the object. In the previous version of inference the default compression level was 90% - we increased it to 95%. We expect that this change will generally improve the quality of images passed between steps, yet there is no guarantee of better results from the models (that depends on how the models were trained). Details of change: #798
Caution
Small changes in model predictions are expected due to this change - as it may happen that we are passing slightly different JPEG images into the models. If you are negatively affected, please let us know via GH Issues.
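To see what the new default means in practice, here is a small OpenCV sketch (synthetic image, illustrative only) comparing the encoded payload size at quality 90 vs 95:
# Compare JPEG payload size at the old (90) and new (95) quality defaults.
# Uses a random synthetic image; real differences depend on image content.
import cv2
import numpy as np

image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
for quality in (90, 95):
    ok, buffer = cv2.imencode(".jpg", image, [cv2.IMWRITE_JPEG_QUALITY, quality])
    print(quality, buffer.nbytes if ok else "encode failed")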
🧠 Change in Roboflow models blocks
We've changed the way Roboflow models blocks work on the Roboflow hosted platform. Previously they were using the numpy_image property of WorkflowImageData as the input to InferenceHTTPClient while executing remote calls - which usually meant serialising the numpy image to JPEG and then to base64, whereas on the Roboflow hosted platform we usually had a base64 representation of the image already provided, so effectively we were:
- slowing down the processing
- artificially decreasing the quality of images
This is no longer the case - we now only transform the image representation (and apply lossy compression) when needed. Details of change: #798.
Caution
Small changes in model predictions are expected due to this change - as it may happen that we are passing slightly different JPEG images into the models. If you are negatively affected, please let us know via GH Issues.
🗒️ New kind inference_id
We've identified the need to give semantic meaning to inference identifiers that are used by external systems as correlation IDs.
That's why we introduce a new kind - inference_id.
We encourage block developers to use the new kind.
🗒️ New field available in video_metadata and image kinds
We've added a new optional field to video metadata - measured_fps - take a look at the 📖 docs
🏗️ Changed
- Disable telemetry when running YOLO world by @grzegorz-roboflow in #800
- Pass webrtc TURN config as request parameter when calling POST /inference_pipelines/initialise_webrtc by @grzegorz-roboflow in #801
- Remove reset from YOLO settings by @grzegorz-roboflow in #802
- Pin all dependencies and update to new versions of libs by @PawelPeczek-Roboflow in #803
- bumping owlv2 version and putting cache size in env by @isaacrob-roboflow in #813
🔧 Fixed
- Florence 2 - fixing model caching by @probicheaux in #808
- Use measured fps when fetching frames from live stream by @grzegorz-roboflow in #805
- Fix issue with label visualisation by @PawelPeczek-Roboflow in #811 and @PawelPeczek-Roboflow in #814
Full Changelog: v0.26.1...v0.27.0
v0.26.1
What's Changed
- Make skypilot optional for inference-cli by @sberan in #792
- Add usage_billable to BaseRequest by @grzegorz-roboflow in #793
- Handle malformed usage_fps by @grzegorz-roboflow in #795
- Feature/extend line counter block outputs by @grzegorz-roboflow in #797
- Add turn server configuration to webrtc connection by @grzegorz-roboflow in #799
Full Changelog: v0.26.0...v0.26.1
v0.26.0
🚀 Added
🧠 Support for fine-tuned Florence-2 💥
As part of onboarding Florence-2 fine-tuning on the Roboflow platform, @probicheaux made it possible to run your fine-tuned models in inference. Just complete the training on the Platform and deploy it using inference, like any other model we support 🤯
🚦 Jetpack 6 Support
We are excited to announce support for Jetpack 6, which will enable more flexibility of development for Nvidia Jetson devices.
Test the image with the following command on a Jetson device with Jetpack 6:
pip install inference-cli
inference server start
or pull the image directly:
docker pull roboflow/roboflow-inference-server-jetson-6.0.0
🏗️ Changed
InferencePipeline video files FPS subsampling
We've discovered that the behaviour of the max_fps parameter is not in line with inference clients' expectations regarding processing of video files. The current implementation for video files waits before processing the next video frame, instead of dropping frames to modulate the video FPS.
We have added a way to change this suboptimal behaviour in release v0.26.0 - the new behaviour of InferencePipeline can be enabled by setting the environment variable flag ENABLE_FRAME_DROP_ON_VIDEO_FILE_RATE_LIMITING=True.
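To opt in, set the flag in the environment of the process (or container) running the pipeline, for example:
# Enable the new frame-dropping behaviour before starting your pipeline or server.
export ENABLE_FRAME_DROP_ON_VIDEO_FILE_RATE_LIMITING=True
# (when running the inference server in Docker, pass the same variable with -e)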
❗ Breaking change planned
Please note that the new behaviour will become the default at the end of Q4 2024!
See details: #779
Stay tuned for future updates!
Other changes
- Pass countinference to usage collector by @SolomonLake in #774
- Do not run tests if branch is not up to date with main by @grzegorz-roboflow in #767
- Include resource_details.billable in workflows usage by @SolomonLake in #776
- Add hostname with optional DEDICATED_DEPLOYMENT_ID to usage payload by @grzegorz-roboflow in #778
- Return single top class confidence for multi-label predictions by @EmilyGavrilenko in #781
- Aggregate usage for streams and photos separately by @grzegorz-roboflow in #786
- Add gzip support by @alexnorell in #783
- avoiding downloading images if possible by @isaacrob-roboflow in #782
🔧 Fixed
- Vulnerability issue with cryptography by @PawelPeczek-Roboflow in #790
- Fix model type for classification by @robiscoding in #773
- fix case where there are no good matches for the prompt by @isaacrob-roboflow in #770
- Bugfix: keypoint visualization block by @EmilyGavrilenko in #769
- Do not store usage in cache when API key is not available by @grzegorz-roboflow in #772
- Fix the bug with two stage workflow and continue-if failing when nothing gets detected by primary model by @PawelPeczek-Roboflow in #777
- Remove debug step from 'Test package install - inference-gpu' by @grzegorz-roboflow in #780
- Allow easier inheritance of pipeline by @RossLote in #789
🏅 New Contributors
Full Changelog: v0.25...v0.26.0
v0.25
🚀 Added
Newer onnxruntime 👉 faster inference on MacBook 🐎
@yeldarby bumped onnxruntime from 0.15.x to 0.19.0, and that small change brought a great performance improvement for YOLOv8 models running at resolutions up to 640 🤯 Some YOLO variants now run 2x faster on M2 chips ❗
profiling.mp4
🏎️ Better and Faster OWLv2 inference
@isaacrob-roboflow and @probicheaux prepared something special in #759, #763 and #755
🧙 Box prompting in annotations
We built a new tool to automatically label your data for you!
Inspired by a recent open source release, now when you annotate a dataset you can automatically get recommendations for other likely bounding boxes! And if you don’t like a recommendation, you can mark it as ‘negative’ and it will learn not to recommend boxes like that again!
Under the hood we use your annotations and feedback to train a few shot model based on OWLv2 on the fly, and then run it against your data to propose other likely bounding boxes.
We think this will save you a lot of time!
🐎 Inference speedup
Over 10x inference speed boost on a T4 GPU. The previous version ran in 440ms; now it is only 36ms 🔥
Workflows enhancements
🅰️ Single character detection 👉 OCR
Ever wondered how to read text when you have a model detecting separate characters? Now it is easy with Workflows, thanks to @reiffd7
Check out Stitch OCR Detections block docs 📖
ocr_stitch.mp4
🤺 Keypoints detection visualisation
Thanks to @EmilyGavrilenko, Workflows can now visualise predictions from keypoints detection models - for instance pose-estimation ones 😄 I bet you'll find the visualisation familiar - the new block is powered by supervision.
Check out Keypoint Visualization block docs 📖
keypoints_visualization.mp4
🔧 Fixed
- Binary attachments (like JPEG images) can now be sent in the e-mail block - @PawelPeczek-Roboflow in #758
🏅 New Contributors
- @isaacrob-roboflow made their first contribution in #759
- @reiffd7 made their first contribution in #765
Full Changelog: v0.24.0...v0.25.0
v0.24.0
🚀 Added
🎥 Data analysis and export in Workflows
We’re excited to introduce a suite of new blocks to supercharge your video processing and streamline your workflow exports!
✨ What’s New?
- Enhanced Video Analytics: Easily track object counts over time or within specific zones. Want to know how many items appear in a unit of time? ✅ Now you can!
- Custom Notifications: Send automatic emails 📧 when a detected object enters a defined area. Stay informed without manual checks!
- REST API Integration: Effortlessly export your workflow results 🌐. Connect to any REST API and deliver your data wherever you need it.
Our new blocks make your Workflows more powerful and versatile than ever before. Start building smarter workflows today! 🚀
video_analysis_with_workflows.mp4
New blocks
- Data Aggregator: Collects and processes Workflow data to create time-based analytics. 📊 Supports custom aggregation strategies, making it easy to summarize data streams efficiently.
- CSV Formatter: Formats data into CSV files which can be saved or sent as an attachment in a notification block 📚
- Email Notification: Send email notifications 📧 in Workflows.
- Local File Sink: Saves data generated in Workflow runtime into a local file 📁
- Webhook Sink: Enables users to integrate their Workflows with a REST API to export data or analysis results (a receiving-side sketch follows below)
Important
At the moment of release, the Workflows UI lacks some capabilities to display the new blocks, but stay tuned - we will fix that shortly
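As a rough illustration of the receiving side (not part of inference), here is a minimal FastAPI endpoint that a Webhook Sink could post results to - the route and payload handling are assumptions, not the block's contract:
# Illustrative receiver for data exported by the Webhook Sink block.
# The route name and payload shape are assumptions - configure them to match your sink setup.
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/workflow-results")
async def receive_workflow_results(request: Request):
    payload = await request.json()
    print(payload)
    return {"status": "ok"}

# run with: uvicorn receiver:app --port 8000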
🏷️ Tracking stabiliser block
Ever experienced flickering detections and losing tracker IDs? We know this pain - that's why @grzegorz-roboflow prepared the Detections Stabilizer block.
💻 Improvements in stream processing
Each week we are closer and closer to enabling full-blown video processing features - this week @grzegorz-roboflow and @hansent pushed us forward:
- WebRTC streams are now faster - #751
- We have better management of InferencePipelines in the inference server - #751
🔧 Fixed
- Loosened typer requirements in #746, fixing the problem raised in #738
- Fixed Google Vision OCR icon by @brunopicinin in #749
🌱 Changed
- Adding more inference metrics to the prometheus scraper by @bigbitbus in #750
Full Changelog: v0.23.0...v0.24.0