[InferenceSlicer] - instance segmentation inference results are not post-processed properly #1373
Hi @SaiJeevanPuchakayala 👋🏻 Looks like
Thanks for the fix @SkalskiP. The line of code below is working for me.
But while stitching the 512×512 slices back into the 2048×2048 image, a few detections and annotations near the sliced regions are not recovered properly, as shown in the image below.
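For context on why objects near slice seams go missing: each slice is inferred independently, and the resulting boxes are shifted by the slice's top-left origin before being merged, so an object straddling two slices is at best detected as two truncated fragments. The coordinate bookkeeping can be sketched in pure Python (these are hypothetical helpers for illustration, not supervision's actual implementation):

```python
def slice_origins(image_wh, slice_wh, overlap_ratio=0.0):
    """Compute the top-left (x, y) origins of tiles covering an image.

    Sketch only: the last tile in each row/column is clamped so it stays
    inside the image, mimicking typical slicer behavior.
    """
    iw, ih = image_wh
    sw, sh = slice_wh
    step_x = max(1, int(sw * (1 - overlap_ratio)))
    step_y = max(1, int(sh * (1 - overlap_ratio)))
    origins = []
    for y in range(0, ih, step_y):
        for x in range(0, iw, step_x):
            origins.append((min(x, iw - sw), min(y, ih - sh)))
    return origins


def shift_detection(xyxy, origin):
    """Translate a box from slice-local coordinates to full-image coordinates."""
    ox, oy = origin
    x1, y1, x2, y2 = xyxy
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)
```

With a 2048×2048 image and 512×512 slices and no overlap, this yields a 4×4 grid of 16 tiles; any grain crossing one of those interior boundaries gets split across two independent inferences.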
@SaiJeevanPuchakayala, what
@SkalskiP I actually used
@SkalskiP I'm still facing the same issue. Is there any resolution you can suggest?
@SaiJeevanPuchakayala, have you tried playing with
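When slices overlap, the same object can be detected in two adjacent tiles, so the merged detections need deduplication; supervision's slicer applies non-max suppression for this, but a standalone sketch makes the idea concrete (this is an illustrative pure-Python version, not the library's code):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)


def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep highest-scoring boxes, drop heavy overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep
```

Note that NMS only deduplicates whole boxes; it does not heal a mask that was truncated at a slice boundary, which may be why increasing the overlap alone did not fix the stitched result.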
Yes @SkalskiP, I've tried that as well, with no luck.
@SaiJeevanPuchakayala, can you provide us with your model, image, and code?
Hi @SkalskiP, Thank you for your continuous support. I have uploaded the model, image, and code to a Google Drive folder for your reference. You can access it here: https://drive.google.com/drive/folders/1nn08DGO7-I1rRX-5Czm_tFn7hWV5J9IN?usp=sharing Here are some additional details about the model:
Please have a look and let me know if you need any additional information.
@SaiJeevanPuchakayala, realistically, I won't be able to take a deeper look into it. Sorry, I have too much work to do with the upcoming YT video. This would need to wait for @LinasKo to get back next week. Maybe @onuralpszr or @hardikdava have some time to take a look into it?
@SkalskiP sure, I am going to take a look. Let me set it up in Colab and start playing with it.
@onuralpszr, you are the GOAT! 🐐
Likewise! :) Let me do some testing and get back to the comments with my findings.
I'll update the name of the issue to better reflect what's going on.
Hi @onuralpszr 👋, have you had time to do some testing?
Yes, I am looking into it; let me test a bit more and I will come back to you. I had some busy tasks to handle as well, sorry for the slight delay.
Hi @SaiJeevanPuchakayala and @onuralpszr, have you managed to track down the problem?
Hey @SkalskiP, not yet, still facing the same issue with grains not being stitched back.
I was stuck for a bit but made progress afterwards. I had a busy work week; let me tackle it today, show some results, and we can talk based on them. Sorry for the delay.
@onuralpszr Let me know if you need an extra pair of hands on this one.
No worries, @onuralpszr ;) I'm just curious why it does not work as expected.
@onuralpszr and @hardikdava, have you guys got time to track down the problem?
Hey @onuralpszr, can you post your findings on this issue here? I can take a look later today. Thanks.
I am experiencing an issue where the model is performing inference on the full image resolution of 2048x2048 instead of the sliced resolution of 512x512 as intended. Below are the details of the function and the problem encountered.
Output
Issue
The model is performing inference on the full image resolution (2048x2048) instead of the sliced resolution (512x512) as specified in the InferenceSlicer. This results in longer inference times and processing on larger image chunks than intended.
Steps to Reproduce
Expected Behavior
The model should perform inference on 512x512 slices of the image, as specified in the InferenceSlicer.
Actual Behavior
Inference is performed on the full 2048x2048 image resolution.
Environment
Additional Context
Any insights or suggestions to ensure the model performs inference on the specified 512x512 slices would be greatly appreciated.
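One quick way to verify what resolution the model actually sees is to wrap the slicer callback so it records the shape of every image it receives; if slicing works, the callback should only ever see 512×512 crops. Below is a minimal pure-Python stand-in for the slicing loop to illustrate the diagnostic (hypothetical helpers, not supervision's implementation; images are represented as lists of rows):

```python
def make_logging_callback(inner_callback, seen_shapes):
    """Wrap a callback so it records the (width, height) of each input."""
    def callback(image):
        seen_shapes.append((len(image[0]), len(image)))
        return inner_callback(image)
    return callback


def run_sliced(image, slice_wh, callback):
    """Naive non-overlapping slicing loop: crop tiles, run callback on each."""
    sw, sh = slice_wh
    results = []
    for y in range(0, len(image), sh):
        for x in range(0, len(image[0]), sw):
            tile = [row[x:x + sw] for row in image[y:y + sh]]
            results.append(callback(tile))
    return results
```

The same wrapping trick applies to a real setup: pass the logging callback to the slicer and print the recorded shapes. If the log shows 2048×2048 instead of 512×512, the full frame is being forwarded to the model rather than the crops.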