
axis 1 is out of bounds for array of dimension 1 #5327

Open
14790897 opened this issue Jul 16, 2024 · 1 comment


14790897 commented Jul 16, 2024

Code:

https://www.kaggle.com/code/liuweiq/coincide-separation-detectron2-training

Environment:

Kaggle

Error:

[07/16 13:14:44 d2.engine.defaults]: Model:
GeneralizedRCNN(
  (backbone): FPN(
    (fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (top_block): LastLevelMaxPool()
    (bottom_up): ResNet(
      (stem): BasicStem(
        (conv1): Conv2d(
          3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
      )
      (res2): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv1): Conv2d(
            64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
      )
      (res3): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv1): Conv2d(
            256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (3): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
      )
      (res4): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
          (conv1): Conv2d(
            512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (3): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (4): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (5): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
      )
      (res5): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
          (conv1): Conv2d(
            1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
      )
    )
  )
  (proposal_generator): RPN(
    (rpn_head): StandardRPNHead(
      (conv): Conv2d(
        256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
        (activation): ReLU()
      )
      (objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))
      (anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))
    )
    (anchor_generator): DefaultAnchorGenerator(
      (cell_anchors): BufferList()
    )
  )
  (roi_heads): StandardROIHeads(
    (box_pooler): ROIPooler(
      (level_poolers): ModuleList(
        (0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True)
        (1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True)
        (2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
        (3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
      )
    )
    (box_head): FastRCNNConvFCHead(
      (flatten): Flatten(start_dim=1, end_dim=-1)
      (fc1): Linear(in_features=12544, out_features=1024, bias=True)
      (fc_relu1): ReLU()
      (fc2): Linear(in_features=1024, out_features=1024, bias=True)
      (fc_relu2): ReLU()
    )
    (box_predictor): FastRCNNOutputLayers(
      (cls_score): Linear(in_features=1024, out_features=3, bias=True)
      (bbox_pred): Linear(in_features=1024, out_features=8, bias=True)
    )
    (mask_pooler): ROIPooler(
      (level_poolers): ModuleList(
        (0): ROIAlign(output_size=(14, 14), spatial_scale=0.25, sampling_ratio=0, aligned=True)
        (1): ROIAlign(output_size=(14, 14), spatial_scale=0.125, sampling_ratio=0, aligned=True)
        (2): ROIAlign(output_size=(14, 14), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
        (3): ROIAlign(output_size=(14, 14), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
      )
    )
    (mask_head): MaskRCNNConvUpsampleHead(
      (mask_fcn1): Conv2d(
        256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
        (activation): ReLU()
      )
      (mask_fcn2): Conv2d(
        256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
        (activation): ReLU()
      )
      (mask_fcn3): Conv2d(
        256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
        (activation): ReLU()
      )
      (mask_fcn4): Conv2d(
        256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
        (activation): ReLU()
      )
      (deconv): ConvTranspose2d(256, 256, kernel_size=(2, 2), stride=(2, 2))
      (deconv_relu): ReLU()
      (predictor): Conv2d(256, 2, kernel_size=(1, 1), stride=(1, 1))
    )
  )
)
[07/16 13:14:44 d2.data.datasets.coco]: Loaded 161 images in COCO format from /kaggle/input/coco-data/instances.json
[07/16 13:14:44 d2.data.build]: Removed 0 images with no usable annotations. 161 images left.
[07/16 13:14:44 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[07/16 13:14:44 d2.data.build]: Using training sampler TrainingSampler
[07/16 13:14:44 d2.data.common]: Serializing the dataset using: <class 'detectron2.data.common._TorchSerializedList'>
[07/16 13:14:44 d2.data.common]: Serializing 161 elements to byte tensors and concatenating them all ...
[07/16 13:14:44 d2.data.common]: Serialized dataset takes 0.11 MiB
[07/16 13:14:44 d2.data.build]: Making batched data loader with batch_size=2
[07/16 13:14:44 d2.checkpoint.detection_checkpoint]: [DetectionCheckpointer] Loading from https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl ...
[07/16 13:14:44 d2.engine.train_loop]: Starting training from iteration 0
[07/16 13:14:49 d2.utils.events]:  eta: 0:08:08  iter: 19  total_loss: 2.388  loss_cls: 0.5612  loss_box_reg: 0.5152  loss_mask: 0.6743  loss_rpn_cls: 0.2725  loss_rpn_loc: 0.234    time: 0.2471  last_time: 0.2480  data_time: 0.0182  last_data_time: 0.0076   lr: 0.0005  max_mem: 2758M
[07/16 13:14:54 d2.utils.events]:  eta: 0:08:10  iter: 39  total_loss: 1.896  loss_cls: 0.3904  loss_box_reg: 0.7147  loss_mask: 0.5105  loss_rpn_cls: 0.0675  loss_rpn_loc: 0.2052    time: 0.2511  last_time: 0.2110  data_time: 0.0075  last_data_time: 0.0074   lr: 0.0005  max_mem: 2758M
[07/16 13:14:59 d2.utils.events]:  eta: 0:08:03  iter: 59  total_loss: 1.697  loss_cls: 0.3442  loss_box_reg: 0.682  loss_mask: 0.3977  loss_rpn_cls: 0.0367  loss_rpn_loc: 0.187    time: 0.2512  last_time: 0.2547  data_time: 0.0082  last_data_time: 0.0085   lr: 0.0005  max_mem: 2760M
[07/16 13:15:04 d2.utils.events]:  eta: 0:08:00  iter: 79  total_loss: 1.581  loss_cls: 0.311  loss_box_reg: 0.681  loss_mask: 0.3736  loss_rpn_cls: 0.02267  loss_rpn_loc: 0.1788    time: 0.2510  last_time: 0.2624  data_time: 0.0079  last_data_time: 0.0081   lr: 0.0005  max_mem: 2760M
[07/16 13:15:09 d2.utils.events]:  eta: 0:07:54  iter: 99  total_loss: 1.489  loss_cls: 0.3167  loss_box_reg: 0.6183  loss_mask: 0.3583  loss_rpn_cls: 0.01854  loss_rpn_loc: 0.1595    time: 0.2506  last_time: 0.2538  data_time: 0.0079  last_data_time: 0.0074   lr: 0.0005  max_mem: 2760M
[07/16 13:15:15 d2.utils.events]:  eta: 0:07:51  iter: 119  total_loss: 1.481  loss_cls: 0.3209  loss_box_reg: 0.6435  loss_mask: 0.3739  loss_rpn_cls: 0.01431  loss_rpn_loc: 0.1781    time: 0.2525  last_time: 0.2795  data_time: 0.0081  last_data_time: 0.0086   lr: 0.0005  max_mem: 2760M
[07/16 13:15:20 d2.utils.events]:  eta: 0:07:48  iter: 139  total_loss: 1.421  loss_cls: 0.2869  loss_box_reg: 0.6089  loss_mask: 0.3471  loss_rpn_cls: 0.009652  loss_rpn_loc: 0.1751    time: 0.2532  last_time: 0.2746  data_time: 0.0076  last_data_time: 0.0076   lr: 0.0005  max_mem: 2760M
[07/16 13:15:25 d2.utils.events]:  eta: 0:07:42  iter: 159  total_loss: 1.42  loss_cls: 0.297  loss_box_reg: 0.6114  loss_mask: 0.3572  loss_rpn_cls: 0.01175  loss_rpn_loc: 0.1626    time: 0.2536  last_time: 0.2537  data_time: 0.0074  last_data_time: 0.0081   lr: 0.0005  max_mem: 2760M
[07/16 13:15:30 d2.utils.events]:  eta: 0:07:36  iter: 179  total_loss: 1.343  loss_cls: 0.2586  loss_box_reg: 0.5469  loss_mask: 0.3461  loss_rpn_cls: 0.009276  loss_rpn_loc: 0.1687    time: 0.2530  last_time: 0.2379  data_time: 0.0082  last_data_time: 0.0081   lr: 0.0005  max_mem: 2760M
[07/16 13:15:35 d2.utils.events]:  eta: 0:07:31  iter: 199  total_loss: 1.45  loss_cls: 0.2805  loss_box_reg: 0.6134  loss_mask: 0.3579  loss_rpn_cls: 0.008294  loss_rpn_loc: 0.173    time: 0.2525  last_time: 0.2530  data_time: 0.0081  last_data_time: 0.0081   lr: 0.0005  max_mem: 2760M
[07/16 13:15:40 d2.utils.events]:  eta: 0:07:26  iter: 219  total_loss: 1.382  loss_cls: 0.2553  loss_box_reg: 0.5704  loss_mask: 0.3324  loss_rpn_cls: 0.009328  loss_rpn_loc: 0.1638    time: 0.2528  last_time: 0.2514  data_time: 0.0082  last_data_time: 0.0073   lr: 0.0005  max_mem: 2760M
[07/16 13:15:45 d2.utils.events]:  eta: 0:07:22  iter: 239  total_loss: 1.34  loss_cls: 0.2805  loss_box_reg: 0.5728  loss_mask: 0.3491  loss_rpn_cls: 0.01016  loss_rpn_loc: 0.1435    time: 0.2536  last_time: 0.2491  data_time: 0.0078  last_data_time: 0.0079   lr: 0.0005  max_mem: 2760M
[07/16 13:15:49 d2.data.datasets.coco]: Loaded 135 images in COCO format from /kaggle/input/coco-data-val/instances.json
[07/16 13:15:49 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[07/16 13:15:49 d2.data.common]: Serializing the dataset using: <class 'detectron2.data.common._TorchSerializedList'>
[07/16 13:15:49 d2.data.common]: Serializing 135 elements to byte tensors and concatenating them all ...
[07/16 13:15:49 d2.data.common]: Serialized dataset takes 0.02 MiB
[07/16 13:15:49 d2.data.datasets.coco]: Loaded 135 images in COCO format from /kaggle/input/coco-data-val/instances.json
[07/16 13:15:49 d2.evaluation.evaluator]: Start inference on 135 batches
ERROR [07/16 13:15:49 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/detectron2/engine/train_loop.py", line 156, in train
    self.after_step()
  File "/opt/conda/lib/python3.10/site-packages/detectron2/engine/train_loop.py", line 190, in after_step
    h.after_step()
  File "/opt/conda/lib/python3.10/site-packages/detectron2/engine/hooks.py", line 556, in after_step
    self._do_eval()
  File "/opt/conda/lib/python3.10/site-packages/detectron2/engine/hooks.py", line 529, in _do_eval
    results = self._func()
  File "/opt/conda/lib/python3.10/site-packages/detectron2/engine/defaults.py", line 457, in test_and_save_results
    self._last_eval_results = self.test(self.cfg, self.model)
  File "/opt/conda/lib/python3.10/site-packages/detectron2/engine/defaults.py", line 621, in test
    results_i = inference_on_dataset(model, data_loader, evaluator)
  File "/opt/conda/lib/python3.10/site-packages/detectron2/evaluation/evaluator.py", line 172, in inference_on_dataset
    evaluator.process(inputs, outputs)
  File "/tmp/ipykernel_34/3164767280.py", line 37, in process
    self.scores.append(score(out, targ))
  File "/tmp/ipykernel_34/3164767280.py", line 18, in score
    tp, fp, fn = precision_at(t, ious)
  File "/tmp/ipykernel_34/3164767280.py", line 6, in precision_at
    true_positives = np.sum(matches, axis=1) == 1  # Correct objects
  File "/opt/conda/lib/python3.10/site-packages/numpy/core/fromnumeric.py", line 2313, in sum
    return _wrapreduction(a, np.add, 'sum', axis, dtype, out, keepdims=keepdims,
  File "/opt/conda/lib/python3.10/site-packages/numpy/core/fromnumeric.py", line 88, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
numpy.exceptions.AxisError: axis 1 is out of bounds for array of dimension 1
[07/16 13:15:49 d2.engine.hooks]: Overall training speed: 247 iterations in 0:01:02 (0.2544 s / it)
[07/16 13:15:49 d2.engine.hooks]: Total training time: 0:01:04 (0:00:01 on hooks)
[07/16 13:15:49 d2.utils.events]:  eta: 0:07:19  iter: 249  total_loss: 1.29  loss_cls: 0.2537  loss_box_reg: 0.5709  loss_mask: 0.3331  loss_rpn_cls: 0.009182  loss_rpn_loc: 0.1365    time: 0.2534  last_time: 0.2509  data_time: 0.0077  last_data_time: 0.0082   lr: 0.0005  max_mem: 2760M
---------------------------------------------------------------------------
AxisError                                 Traceback (most recent call last)
Cell In[9], line 34
     32 trainer = Trainer(cfg)  # without data augmentation
     33 trainer.resume_or_load(resume=False)
---> 34 trainer.train()

File /opt/conda/lib/python3.10/site-packages/detectron2/engine/defaults.py:488, in DefaultTrainer.train(self)
    481 def train(self):
    482     """
    483     Run training.
    484 
    485     Returns:
    486         OrderedDict of results, if evaluation is enabled. Otherwise None.
    487     """
--> 488     super().train(self.start_iter, self.max_iter)
    489     if len(self.cfg.TEST.EXPECTED_RESULTS) and comm.is_main_process():
    490         assert hasattr(
    491             self, "_last_eval_results"
    492         ), "No evaluation results obtained during training!"

File /opt/conda/lib/python3.10/site-packages/detectron2/engine/train_loop.py:156, in TrainerBase.train(self, start_iter, max_iter)
    154     self.before_step()
    155     self.run_step()
--> 156     self.after_step()
    157 # self.iter == max_iter can be used by `after_train` to
    158 # tell whether the training successfully finished or failed
    159 # due to exceptions.
    160 self.iter += 1

File /opt/conda/lib/python3.10/site-packages/detectron2/engine/train_loop.py:190, in TrainerBase.after_step(self)
    188 def after_step(self):
    189     for h in self._hooks:
--> 190         h.after_step()

File /opt/conda/lib/python3.10/site-packages/detectron2/engine/hooks.py:556, in EvalHook.after_step(self)
    553 if self._period > 0 and next_iter % self._period == 0:
    554     # do the last eval in after_train
    555     if next_iter != self.trainer.max_iter:
--> 556         self._do_eval()

File /opt/conda/lib/python3.10/site-packages/detectron2/engine/hooks.py:529, in EvalHook._do_eval(self)
    528 def _do_eval(self):
--> 529     results = self._func()
    531     if results:
    532         assert isinstance(
    533             results, dict
    534         ), "Eval function must return a dict. Got {} instead.".format(results)

File /opt/conda/lib/python3.10/site-packages/detectron2/engine/defaults.py:457, in DefaultTrainer.build_hooks.<locals>.test_and_save_results()
    456 def test_and_save_results():
--> 457     self._last_eval_results = self.test(self.cfg, self.model)
    458     return self._last_eval_results

File /opt/conda/lib/python3.10/site-packages/detectron2/engine/defaults.py:621, in DefaultTrainer.test(cls, cfg, model, evaluators)
    619         results[dataset_name] = {}
    620         continue
--> 621 results_i = inference_on_dataset(model, data_loader, evaluator)
    622 results[dataset_name] = results_i
    623 if comm.is_main_process():

File /opt/conda/lib/python3.10/site-packages/detectron2/evaluation/evaluator.py:172, in inference_on_dataset(model, data_loader, evaluator, callbacks)
    169 total_compute_time += time.perf_counter() - start_compute_time
    171 start_eval_time = time.perf_counter()
--> 172 evaluator.process(inputs, outputs)
    173 total_eval_time += time.perf_counter() - start_eval_time
    175 iters_after_start = idx + 1 - num_warmup * int(idx >= num_warmup)

Cell In[7], line 37, in MAPIOUEvaluator.process(self, inputs, outputs)
     35 else:
     36     targ = self.annotations_cache[inp['image_id']]
---> 37     self.scores.append(score(out, targ))

Cell In[7], line 18, in score(pred, targ)
     16 prec = []
     17 for t in np.arange(0.5, 1.0, 0.05):
---> 18     tp, fp, fn = precision_at(t, ious)
     19     p = tp / (tp + fp + fn)
     20     prec.append(p)

Cell In[7], line 6, in precision_at(threshold, iou)
      4 def precision_at(threshold, iou):
      5     matches = iou > threshold
----> 6     true_positives = np.sum(matches, axis=1) == 1  # Correct objects
      7     false_positives = np.sum(matches, axis=0) == 0  # Missed objects
      8     false_negatives = np.sum(matches, axis=1) == 0  # Extra objects

File /opt/conda/lib/python3.10/site-packages/numpy/core/fromnumeric.py:2313, in sum(a, axis, dtype, out, keepdims, initial, where)
   2310         return out
   2311     return res
-> 2313 return _wrapreduction(a, np.add, 'sum', axis, dtype, out, keepdims=keepdims,
   2314                       initial=initial, where=where)

File /opt/conda/lib/python3.10/site-packages/numpy/core/fromnumeric.py:88, in _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs)
     85         else:
     86             return reduction(axis=axis, out=out, **passkwargs)
---> 88 return ufunc.reduce(obj, axis, dtype, out, **passkwargs)

AxisError: axis 1 is out of bounds for array of dimension 1
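The traceback shows the failure originates in the notebook's own `precision_at` helper, not in detectron2: `np.sum(matches, axis=1)` requires a 2-D IoU matrix, but for an image with no predictions (or no ground-truth instances) the IoU computation can return a 1-D or empty array, and `axis=1` then falls out of bounds. A minimal sketch of a guarded version is below — it assumes the conventional `(n_true, n_pred)` orientation of the IoU matrix, and note that the in-notebook comments on the false-positive/false-negative lines appear to be swapped relative to that orientation:

```python
import numpy as np

def precision_at(threshold, iou):
    """Count TP/FP/FN at a given IoU threshold.

    Guards against "axis 1 is out of bounds for array of dimension 1":
    when an image has no predictions or no ground-truth instances,
    the IoU matrix can collapse to 1-D (or be empty).
    Assumes iou has (or broadcasts to) shape (n_true, n_pred).
    """
    iou = np.atleast_2d(iou)  # force a 2-D matrix so axis=1 is valid
    matches = iou > threshold
    true_positives = np.sum(matches, axis=1) == 1   # true objects matched exactly once
    false_positives = np.sum(matches, axis=0) == 0  # predictions matching no true object
    false_negatives = np.sum(matches, axis=1) == 0  # true objects with no match
    return (np.sum(true_positives),
            np.sum(false_positives),
            np.sum(false_negatives))

# A 1-D IoU vector like this reproduces the original AxisError
# without the atleast_2d guard:
tp, fp, fn = precision_at(0.5, np.array([0.7, 0.2]))
```

This is only a sketch of the likely cause; the fix could equally be applied in `score`/`MAPIOUEvaluator.process` by skipping images whose IoU matrix is empty.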
@github-actions github-actions bot added the needs-more-info More info is needed to complete the issue label Jul 16, 2024
You've chosen to report an unexpected problem or bug. Unless you already know the root cause of it, please include details about it by filling the issue template.
The following information is missing: "Instructions To Reproduce the Issue and Full Logs"; "Your Environment";

@github-actions github-actions bot removed the needs-more-info More info is needed to complete the issue label Jul 16, 2024