Add RF-DETR #36895
base: main
Conversation
Just a small message to present the architecture and what it looks like from the 🤗 transformers point of view: RF-DETR is based on LW-DETR and DeformableDETR. LW-DETR is itself based on DETR, but swaps the CNN encoder (e.g. a ResNet) for a ViT and adds a MultiScaleProjector to link the encoder and the decoder. RF-DETR then changes the LW-DETR encoder from a plain ViT to DinoV2WithRegisters with a "window" mechanism, and replaces the classical DETR decoder with a DeformableDETR decoder. There are basically two things to write:
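For illustration, here is a rough sketch of how the pieces described above fit together (a structural outline only; the class is hypothetical and the sub-modules are placeholders, not the final 🤗 API):

```python
import torch
from torch import nn


class RFDetrSketch(nn.Module):
    """Structural sketch of RF-DETR as described above (not the actual implementation):
    a windowed DINOv2-with-registers encoder, a multi-scale projector bridging the
    single-scale ViT features to the decoder, and a DeformableDETR-style decoder."""

    def __init__(self, encoder: nn.Module, projector: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder      # DinoV2WithRegisters-like backbone with the window mechanism
        self.projector = projector  # MultiScaleProjector, as in LW-DETR
        self.decoder = decoder      # DeformableDETR decoder

    def forward(self, pixel_values: torch.Tensor):
        features = self.encoder(pixel_values)    # single-scale ViT features
        feature_maps = self.projector(features)  # multi-scale feature maps for the decoder
        return self.decoder(feature_maps)        # object queries -> boxes / logits
```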
One difficulty I can see in advance is the following: I noticed your PR about refactoring attention in ViTs; is there any plan to add FlashAttention to other models such as Detr, RTDetr, etc.? Let me know what you guys think.
Hi @sbucaille, thanks for the detailed write-up!
We can add
Not at the moment; from my experiments it was not required for DETR-based models and did not give any speedup. However, it might be more relevant for a transformer-based encoder. Let's keep it simple initially and set it to False as you suggested.
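For illustration, keeping the first version on eager attention only could be expressed with the usual transformers class flags (a sketch; the class name is hypothetical and the flag names are assumed to match the transformers version in use):

```python
from transformers import PretrainedConfig, PreTrainedModel


class RfDetrPreTrainedModel(PreTrainedModel):
    """Sketch: keep the initial implementation on eager attention only."""

    config_class = PretrainedConfig  # placeholder, would be RfDetrConfig
    _supports_flash_attn_2 = False   # no FlashAttention support for now
    _supports_sdpa = False           # plain eager attention initially
```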
We can't use the `Dinov2WithRegistersLayer` as-is; the windowed attention needs a modified forward, something along these lines:

```python
from typing import Optional, Tuple, Union

import torch

from transformers.models.dinov2_with_registers.modeling_dinov2_with_registers import Dinov2WithRegistersLayer


class RFDetrBackboneLayer(Dinov2WithRegistersLayer):
    def __init__(self, config):
        super().__init__(config)
        self.num_windows = config.num_windows

    def forward(
        self,
        hidden_states: torch.Tensor,
        head_mask: Optional[torch.Tensor] = None,
        output_attentions: bool = False,
        run_full_attention: bool = False,
    ) -> Union[Tuple[torch.Tensor, torch.Tensor], Tuple[torch.Tensor]]:
        assert head_mask is None, "head_mask is not supported for windowed attention"
        assert not output_attentions, "output_attentions is not supported for windowed attention"

        shortcut = hidden_states
        if run_full_attention:
            # reshape x to remove windows
            B, HW, C = hidden_states.shape
            num_windows_squared = self.num_windows**2
            hidden_states = hidden_states.view(B // num_windows_squared, num_windows_squared * HW, C)

        self_attention_outputs = self.attention(
            self.norm1(hidden_states),  # in Dinov2WithRegisters, layernorm is applied before self-attention
            head_mask,
            output_attentions=output_attentions,
        )
        attention_output = self_attention_outputs[0]

        if run_full_attention:
            # reshape x to add windows back
            B, HW, C = hidden_states.shape
            num_windows_squared = self.num_windows**2
            # hidden_states = hidden_states.view(B * num_windows_squared, HW // num_windows_squared, C)
            attention_output = attention_output.view(B * num_windows_squared, HW // num_windows_squared, C)

        attention_output = self.layer_scale1(attention_output)
        outputs = self_attention_outputs[1:]  # add self attentions if we output attention weights

        # first residual connection
        hidden_states = self.drop_path(attention_output) + shortcut

        # in Dinov2WithRegisters, layernorm is also applied after self-attention
        layer_output = self.norm2(hidden_states)
        layer_output = self.mlp(layer_output)
        layer_output = self.layer_scale2(layer_output)

        # second residual connection
        layer_output = self.drop_path(layer_output) + hidden_states

        outputs = (layer_output,) + outputs
        return outputs
```

That's why I think we necessarily need a custom Backbone class for that 🤔
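For illustration, a custom backbone wrapping these layers could then look roughly like this (a sketch only; the class name, the `full_attention_layer_indices` config field, and the wiring are assumptions):

```python
from typing import List

import torch
from torch import nn


class RFDetrBackbone(nn.Module):
    """Sketch of a custom backbone stacking the windowed layers (not the final design)."""

    def __init__(self, config):
        super().__init__()
        self.layers = nn.ModuleList(RFDetrBackboneLayer(config) for _ in range(config.num_hidden_layers))
        # which blocks drop the windowing and attend globally is assumed to be configurable
        self.full_attention_layer_indices: List[int] = list(config.full_attention_layer_indices)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        for i, layer in enumerate(self.layers):
            layer_outputs = layer(
                hidden_states,
                run_full_attention=i in self.full_attention_layer_indices,
            )
            hidden_states = layer_outputs[0]
        return hidden_states
```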
Hmm, am I correct that this part was added?
It looks like it is a reshape-only operation; we can return attention_output as is and reshape all the layers' outputs later, right?
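For illustration, that reshape-only step could indeed live outside the layer as a pair of view helpers (a sketch mirroring the shapes used in the snippet above; the helper names are assumptions):

```python
import torch


def merge_windows(hidden_states: torch.Tensor, num_windows: int) -> torch.Tensor:
    """Reshape (batch * num_windows**2, seq, dim) -> (batch, num_windows**2 * seq, dim)."""
    num_windows_squared = num_windows**2
    batch_windows, seq_len, dim = hidden_states.shape
    return hidden_states.view(batch_windows // num_windows_squared, num_windows_squared * seq_len, dim)


def split_windows(hidden_states: torch.Tensor, num_windows: int) -> torch.Tensor:
    """Inverse reshape: (batch, num_windows**2 * seq, dim) -> (batch * num_windows**2, seq, dim)."""
    num_windows_squared = num_windows**2
    batch, total_seq_len, dim = hidden_states.shape
    return hidden_states.view(batch * num_windows_squared, total_seq_len // num_windows_squared, dim)
```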
You are right, but it is not the only example. I'll stick to my original plan until I have something running with actual results, and I'll take care of refactoring this part later. I'll ping you when it's ready.
Hey @qubvel, in the end I made the modeling files follow the … naming. I also had issues with the modular mechanism where …
Hey, let's use the RfDetr name + modular, it's ok! RfDetr is the correct naming format, while RTDetr is an exception made before modular was introduced.
Ok sorry, I confused the problems I had. I didn't have a problem with the capital letters of the class names; my problem is that

```python
class RfDetrModel(DeformableDetrModel):
    pass
```

generates a bunch of classes such as

```python
class RfDetrConvEncoder(DeformableDetrConvEncoder):
    pass


class RfDetrModel(DeformableDetrModel):
    def __init__(self, config: RfDetrConfig):
        super().__init__(config)
        backbone = RfDetrConvEncoder(config)
        ...
```

But the problem also appears for … Should I open an issue? Maybe @ArthurZucker has some insights on this problem?
cc @Cyrilvallez re: modular, you faced something similar
What does this PR do?
Implements RF-DETR
Fixes #36879
Who can review?
@qubvel