Segmentation Message Types? #87

Open
lexi-brt opened this issue Apr 25, 2023 · 7 comments

Comments

@lexi-brt

Hi!

I'm looking for a std ros type to use for segmentation outputs.

Something like these:
https://github.com/DavidFernandezChaves/Detectron2_ros/blob/master/msg/Result.msg
https://github.com/akio/mask_rcnn_ros/blob/kinetic-devel/msg/Result.msg

Is there something in this package that's suitable for this already? If not, how would I go about contributing a proposal and getting something merged?

@SteveMacenski
Member

What are you looking for? These don't appear to me to be pixel-wise segmentation classes, unless you're only looking at the sensor_msgs/Image[] masks and not the sensor_msgs/RegionOfInterest[] boxes

https://github.com/ros-perception/vision_msgs/blob/ros2/vision_msgs/msg/Detection2D.msg does something like the boxes in those messages. I definitely don't disagree that a segmentation message would be valuable; it's also under discussion in #63. I think it might be good to start with a proposal that @Kukanani and I can review, and we can go from there!

@Kukanani
Collaborator

Yes, happy to consider any proposals on the segmentation front!

@mintar
Contributor

mintar commented Apr 27, 2023

I agree with everything @SteveMacenski said. Personally, I'd go with one of the following approaches:

  1. Either, create a new message type:
    std_msgs/Header header
    vision_msgs/Detection2DArray detections
    sensor_msgs/Image[] masks
    
  2. Or, publish those things on separate topics: one vision_msgs/Detection2DArray for the detections and one sensor_msgs/Image for each mask.

This also depends on what's inside the mask image(s):

  • Is it a single segmentation image, where each pixel value is the class label?
  • Is it a mask for each individual object, with the pixel value representing the probability that the pixel belongs to the object?

If it's a single segmentation image, I'd go with approach (2) above, for the reasons I've outlined in this comment. If it's individual object masks (which is what Mask R-CNN is doing), approach (2) becomes very cumbersome/impossible, so I'd go with approach (1).
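
A minimal sketch of how approach (2) could be wired up, assuming ROS 2 / rclpy and message_filters; the topic names here are hypothetical, and the detection and mask publishers would need to copy the camera image's header.stamp for the exact-time pairing to work:

```python
import rclpy
from rclpy.node import Node
from message_filters import Subscriber, TimeSynchronizer
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray


class SegmentationPairing(Node):
    def __init__(self):
        super().__init__('segmentation_pairing')
        # Hypothetical topic names; both messages must share the same stamp.
        det_sub = Subscriber(self, Detection2DArray, '/detections')
        mask_sub = Subscriber(self, Image, '/segmentation_mask')
        self.sync = TimeSynchronizer([det_sub, mask_sub], 10)
        self.sync.registerCallback(self.on_pair)

    def on_pair(self, detections: Detection2DArray, mask: Image):
        # At this point the detections and the mask belong to the same frame.
        self.get_logger().info(
            f'{len(detections.detections)} detections paired with a '
            f'{mask.width}x{mask.height} mask')


def main():
    rclpy.init()
    rclpy.spin(SegmentationPairing())


if __name__ == '__main__':
    main()
```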

@SteveMacenski
Member

I'm not 100% sure I understand having detections and segmentation masks together; these are often different processes, building bounding boxes vs. pixel-wise segmentation masks (though I suppose a BB could be generated from a mask rather easily).

LabelInfo is meant to communicate the mapping from class IDs to string labels, so that could be reused here just like in detections, with synchronized topics.

I agree that instance segmentation vs. class segmentation throws in a wrench. For class segmentation, 1 image is OK, but for instance segmentation we may need N images for the N instances, or find a way to embed that information in an image some other way. Perhaps a new Image-like message containing the class, instance, and probability info for each pixel, so it could work for any situation (with instance = 0 for non-instance segmentation implementations).
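
To illustrate what that per-pixel layout could look like as data (this is not an actual vision_msgs definition, just a rough numpy sketch with made-up shapes and IDs):

```python
import numpy as np

H, W = 480, 640

# Three aligned planes, one value of each per pixel.
class_ids = np.zeros((H, W), dtype=np.uint16)     # semantic class per pixel
instance_ids = np.zeros((H, W), dtype=np.uint16)  # 0 = no instance information
probability = np.ones((H, W), dtype=np.float32)   # confidence per pixel

# Example: instance 3 of class 15 with 0.9 confidence in one image patch.
class_ids[100:200, 150:300] = 15
instance_ids[100:200, 150:300] = 3
probability[100:200, 150:300] = 0.9

# A consumer can recover which pixels belong to instance 3, and its class.
mask = instance_ids == 3
print(np.unique(class_ids[mask]), probability[mask].mean())
```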

@gachiemchiep

@SteveMacenski @mintar
I think we could use the design of the Pascal VOC dataset for this problem.

For example, this is the JPEG image:

[image: 2007_000129 from Pascal VOC]

The mask for class segmentation (semantic segmentation) looks like this:

[class segmentation mask for 2007_000129]

Then for instance segmentation, they add another mask for objects, like this:

[object/instance segmentation mask for 2007_000129]

By using this rule, only two mask images are needed.
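
To make the two-mask idea concrete, here is a rough numpy sketch of how a consumer could recover each object's class from a Pascal-VOC-style class mask plus object (instance) mask; the masks below are small synthetic stand-ins for the pictures above:

```python
import numpy as np

# Synthetic stand-ins for the two Pascal-VOC-style masks shown above.
class_mask = np.zeros((10, 10), dtype=np.uint8)     # pixel value = class ID
instance_mask = np.zeros((10, 10), dtype=np.uint8)  # pixel value = object ID

class_mask[2:5, 2:5] = 15       # e.g. class 15
instance_mask[2:5, 2:5] = 1     # object 1

class_mask[6:9, 6:9] = 15       # another object of the same class
instance_mask[6:9, 6:9] = 2     # object 2

# Each object's class is read off wherever that object's pixels lie.
for obj_id in np.unique(instance_mask):
    if obj_id == 0:
        continue  # 0 = background
    classes = np.unique(class_mask[instance_mask == obj_id])
    print(f'object {obj_id}: class(es) {classes}')
```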

@SteveMacenski
Member

But how does that distinguish the class of the instance? If you just have instances 1...N for 1...N objects, you'd have multiple "1" regions, each representing the first instance of a different class.

I think that mask would need to have 2 values: 1 for the instance # and another for the class #. It doubles the message size, which I don't love, but without doing bit shifting, I think that's the best we can do. For non-instance segmentation algorithms, that can be left empty/non-allocated, so it shouldn't be a huge amount of overhead relative to the image segmentation message size.

Thoughts @mintar ?
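
For comparison, the bit-shifting route would keep a single value per pixel by packing the class ID into the high bits and the instance ID into the low bits. A rough numpy sketch; the 16/16 split of a uint32 is an arbitrary choice for illustration:

```python
import numpy as np

def pack(class_ids: np.ndarray, instance_ids: np.ndarray) -> np.ndarray:
    """Pack class (high 16 bits) and instance (low 16 bits) into one uint32."""
    return (class_ids.astype(np.uint32) << 16) | instance_ids.astype(np.uint32)

def unpack(packed: np.ndarray):
    """Recover the class and instance planes from the packed image."""
    return (packed >> 16).astype(np.uint16), (packed & 0xFFFF).astype(np.uint16)

class_ids = np.array([[15, 15], [0, 7]], dtype=np.uint16)
instance_ids = np.array([[1, 2], [0, 0]], dtype=np.uint16)  # 0 = no instance

packed = pack(class_ids, instance_ids)
recovered_classes, recovered_instances = unpack(packed)
assert (recovered_classes == class_ids).all()
assert (recovered_instances == instance_ids).all()
```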

@gachiemchiep

@SteveMacenski
Maybe my writing was a bit confusing.

For semantic segmentation, one mask image is needed:
one mask image for class, as shown in the 2nd picture.

For instance segmentation results, two mask images are needed:
one mask image for class, as shown in the 2nd picture, and
one mask image for object, as shown in the 3rd picture.

Instance segmentation can also be explained like this:

  1. First do detection to find all object boxes. The box msg is Detection2D.msg.
  2. For each box, do segmentation to find the object mask inside.

Instead of publishing the entire image mask as in the above approach, we could cut out the mask for each box, then attach the mask image to each box's msg.
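
A rough sketch of that per-box cropping, assuming the recent ROS 2 vision_msgs layout where bbox.center.position holds the box center and size_x/size_y its extent (older releases expose bbox.center.x/.y directly), and treating the full-image mask as a plain numpy array:

```python
import numpy as np
from vision_msgs.msg import Detection2D

def crop_mask_for_detection(full_mask: np.ndarray, det: Detection2D) -> np.ndarray:
    """Cut the portion of the full-image mask covered by one detection's box."""
    # Field layout assumed from the ROS 2 Humble-era BoundingBox2D message.
    cx = det.bbox.center.position.x
    cy = det.bbox.center.position.y
    half_w = det.bbox.size_x / 2.0
    half_h = det.bbox.size_y / 2.0
    x0 = max(int(cx - half_w), 0)
    y0 = max(int(cy - half_h), 0)
    x1 = min(int(cx + half_w), full_mask.shape[1])
    y1 = min(int(cy + half_h), full_mask.shape[0])
    return full_mask[y0:y1, x0:x1]
```

Each cropped array could then be converted back to a sensor_msgs/Image (e.g. via cv_bridge) and attached alongside its Detection2D in whatever message layout gets proposed.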
