
Conversation

@myownskyW7
Collaborator

No description provided.

@myownskyW7 myownskyW7 requested a review from hellock October 24, 2019 13:57
@ShihuaiXu

ShihuaiXu commented Oct 31, 2019

Error limit reached.
100 errors detected in the compilation of "/tmp/tmpxft_00006868_00000000-6_carafe_cuda_kernel_benchmark.cpp1.ii".
Compilation terminated.
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
I get this error when compiling. Please help me.

m.weight, mode='fan_out', nonlinearity='relu')
nn.init.constant_(m.bias, 0)
if self.with_carafe:
self.upsample.init_weights()
Member

for m in [self.upsample, self.conv_logits]:
    if m is None:
        continue
    elif isinstance(m, CARAFEPack):
        m.init_weights()
    else:
        nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
        nn.init.constant_(m.bias, 0)

upsample_ratio=2,
num_classes=81,
class_agnostic=False,
carafe_cfg=None,
Member

Maybe rename it to upsample_cfg for future use.
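As a rough sketch of how the renamed key could look in a mask-head config (the keys inside the dict are illustrative, not the final API):

mask_head = dict(
    type='FCNMaskHead',
    num_classes=81,
    class_agnostic=False,
    # Hypothetical: a generic upsample_cfg replaces the CARAFE-specific
    # carafe_cfg, so deconv / pixel_shuffle / carafe share one config slot.
    upsample_cfg=dict(type='carafe', scale_factor=2))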

self.upsample_ratio,
stride=self.upsample_ratio)
elif self.upsample_method == 'carafe':
self.upsample = CARAFEPack(upsample_in_channels, upsample_ratio,
Member

To be consistent, use self.upsample_ratio instead.

out_channels,
self.upsample_kernel,
stride=2,
padding=int((self.upsample_kernel - 1) / 2),
Member

(self.upsample_kernel - 1) // 2
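A quick sanity check of the suggestion; both expressions give the same value, but floor division avoids the float round-trip and the explicit cast:

k = 5  # e.g. kernel size 5
assert int((k - 1) / 2) == (k - 1) // 2 == 2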

upsample_module = nn.ConvTranspose2d(
out_channels,
out_channels,
self.upsample_kernel,
Member

self.upsample_kernel is undefined.

for i in range(len(laterals) - 1, 0, -1):
if self.upsample is not None:
if (self.upsample == 'nearest' or self.upsample == 'bilinear'):
align_corners = (None
Member

Add a comment here, e.g. noting that this is done to suppress warnings.
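For context, a minimal standalone sketch of the behaviour the comment asks to document (upsample_lateral is a hypothetical helper, not code from this PR):

import torch
import torch.nn.functional as F

def upsample_lateral(x, mode):
    # 'nearest' does not accept align_corners, while 'bilinear' emits a
    # UserWarning unless align_corners is set explicitly; handling both
    # cases here suppresses the warning.
    align_corners = None if mode == 'nearest' else False
    return F.interpolate(x, scale_factor=2, mode=mode,
                         align_corners=align_corners)

feat = torch.rand(1, 8, 16, 16)
up_nearest = upsample_lateral(feat, 'nearest')    # no align_corners warning
up_bilinear = upsample_lateral(feat, 'bilinear')  # no align_corners warning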

carafe_naive = CARAFENAIVEFunction.apply


class CARAFENAIVE(Module):
Member

Better to name it CARAFENaive.


from torch.utils.cpp_extension import BuildExtension, CUDAExtension

cxx_args = ['-std=c++11']
Member

unused variables

const int n = index / width / height;

// const int down_pw = pw / scale_factor;
// const int down_ph = ph / scale_factor;
Member

These commented-out lines are useless; please remove them.

const int output_size = batch_size * channels * height * width;

// TODO: use AT_DISPATCH_FLOATING_TYPES_AND_HALF when atomicAdd is resolved
AT_DISPATCH_FLOATING_TYPES(
Member

AT_DISPATCH_FLOATING_TYPES_AND_HALF is available here.



upsampler_cfg = {
# format: layer_type: (abbreviation, module)
Member

layer_type and abbreviation are the same. There is no need to use a tuple for dict values.
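A minimal sketch of the simplified mapping (illustrative; PixelShufflePack is defined in this file and CARAFEPack comes from this PR):

import torch.nn as nn
from mmdet.ops.carafe import CARAFEPack

# Map the layer type name directly to the module class; no abbreviation tuple.
upsample_cfg = {
    'nearest': nn.Upsample,
    'bilinear': nn.Upsample,
    'deconv': nn.ConvTranspose2d,
    'pixel_shuffle': PixelShufflePack,  # defined later in the same module
    'carafe': CARAFEPack,
}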

}


def build_upsampler_layer(cfg, postfix=''):
Member

  1. I suggest using the consistent term "upsample" everywhere instead of mixing "upsample" and "upsampler".
  2. The postfix argument is unnecessary.
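Roughly, the builder could then read as follows; this is a sketch that reuses the simplified upsample_cfg dict above, and the error message and the nn.Upsample special case are illustrative:

def build_upsample_layer(cfg):
    """Build an upsample layer from a config dict (no postfix argument)."""
    cfg_ = cfg.copy()
    layer_type = cfg_.pop('type')
    if layer_type not in upsample_cfg:
        raise KeyError('Unrecognized upsample type {}'.format(layer_type))
    upsample = upsample_cfg[layer_type]
    if upsample is nn.Upsample:
        cfg_['mode'] = layer_type  # 'nearest' / 'bilinear' share nn.Upsample
    return upsample(**cfg_)

For example, build_upsample_layer(dict(type='deconv', in_channels=256, out_channels=256, kernel_size=2, stride=2)) would return an nn.ConvTranspose2d instance.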

from mmdet.ops.carafe import CARAFEPack


class PixelShufflePack(nn.Module):
Member

Add docstring here.

total_epochs = 12
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/faster_rcnn_r50_fpn_1x'
Member

Use the same name as the config file.

self.upsample_method = upsample_method
self.upsample_ratio = upsample_ratio
self.upsample_method = self.upsample_cfg.pop('type')
self.upsample_ratio = self.upsample_cfg.pop('upsample_ratio')
Member

We may consider renaming upsample_ratio to scale_factor since all upsample operators below have the argument scale_factor.

It will be simpler to manipulate the args.

self.upsample_method = upsample_cfg_.get('type')

if self.upsample_method is None:
    self.upsample = None
elif self.upsample_method == 'deconv':
    upsample_cfg_.update(
        in_channels=upsample_in_channels,
        out_channels=self.conv_out_channels,
        kernel_size=self.upsample_ratio,
        stride=self.upsample_ratio)
elif self.upsample_method == 'carafe':
    upsample_cfg_.update(
        channels=upsample_in_channels,
        scale_factor=self.upsample_ratio)
else:
    # e.g. keep an explicit error for unsupported methods
    raise ValueError(
        'Invalid upsample method {}'.format(self.upsample_method))
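Presumably the chain would then end with a single call that builds the layer from the assembled config, e.g. (hypothetical; build_upsample_layer is the builder discussed above):

if self.upsample_method is not None:
    self.upsample = build_upsample_layer(upsample_cfg_)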



class PixelShufflePack(nn.Module):
""" pixel shuffle upsample layer
Member

"""Pixel Shuffle upsample layer

""" pixel shuffle upsample layer
Args:
in_channels (int): number of input channels
Member

The description should be a complete sentence, and the first character should be upper case.
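For instance, the docstring could read as follows (the argument list beyond in_channels is assumed from context):

import torch.nn as nn

class PixelShufflePack(nn.Module):
    """Pixel Shuffle upsample layer.

    Args:
        in_channels (int): Number of input channels.
        out_channels (int): Number of output channels.
        scale_factor (int): Upsample ratio.
        upsample_kernel (int): Kernel size of the conv layer that expands
            channels before the pixel shuffle.
    """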


self.kernel_size = int(kernel_size)
self.group_size = int(group_size)
self.scale_factor = int(scale_factor)
Member

Add a type check instead of simply casting to int.
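A minimal sketch of the suggested check, using a hypothetical stand-in class (names and error messages are illustrative):

import torch.nn as nn

class CARAFEPackSketch(nn.Module):
    """Stand-in illustrating the type check, not the real CARAFEPack."""

    def __init__(self, kernel_size, group_size, scale_factor):
        super(CARAFEPackSketch, self).__init__()
        for name, value in [('kernel_size', kernel_size),
                            ('group_size', group_size),
                            ('scale_factor', scale_factor)]:
            # Reject non-int values instead of silently truncating with int().
            if not isinstance(value, int):
                raise TypeError('{} must be an int, got {}'.format(
                    name, type(value).__name__))
        self.kernel_size = kernel_size
        self.group_size = group_size
        self.scale_factor = scale_factor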

@hellock hellock merged commit b543109 into open-mmlab:master Feb 21, 2020
mattdawkins added a commit to VIAME/mmdetection that referenced this pull request Feb 26, 2020
* jon/dev/fix_fpn2: (33 commits)
  Fix FPN upscale
  fix analyze log (open-mmlab#2150)
  Fix a documentation error in GETTING_STARTED.md (open-mmlab#2149)
  add optimizer registry (open-mmlab#2139)
  Update version to 1.1 (open-mmlab#2144)
  Fix IOU assigners when ignore_of_thr > 0 and no pred boxes (open-mmlab#2135)
  reset worker_seed (open-mmlab#2111)
  Fix issue with list of metrics in CustomDataset.evaluate (open-mmlab#2128)
  Code Release: CARAFE: Content-Aware ReAssembly of FEatures (ICCV 2019) (open-mmlab#1583)
  fixed test package (open-mmlab#2127)
  add an argument format-only to handle the json formating (open-mmlab#2114)
  fix (get_cls_results): use np.empty for empty bbox rather than np.arrary (open-mmlab#2116)
  fix (dpool): directly return empty if rois's length is 0 (open-mmlab#2099)
  fix workflow problem (open-mmlab#2103)
  Uint8 fix (open-mmlab#2105)
  Fix device bug (open-mmlab#2098)
  fix test ddp initialize (open-mmlab#2100)
  set FORCE_CUDA (open-mmlab#2097)
  Speed up sampler (open-mmlab#2094)
  Use official DDP to implement MMDDP (open-mmlab#2090)
  ...
mattdawkins added a commit to VIAME/mmdetection that referenced this pull request Mar 13, 2020
* tag 'v1.1.0': (29 commits)
  Update version to 1.1 (open-mmlab#2144)
  Fix IOU assigners when ignore_of_thr > 0 and no pred boxes (open-mmlab#2135)
  reset worker_seed (open-mmlab#2111)
  Fix issue with list of metrics in CustomDataset.evaluate (open-mmlab#2128)
  Code Release: CARAFE: Content-Aware ReAssembly of FEatures (ICCV 2019) (open-mmlab#1583)
  fixed test package (open-mmlab#2127)
  add an argument format-only to handle the json formating (open-mmlab#2114)
  fix (get_cls_results): use np.empty for empty bbox rather than np.arrary (open-mmlab#2116)
  fix (dpool): directly return empty if rois's length is 0 (open-mmlab#2099)
  fix workflow problem (open-mmlab#2103)
  Uint8 fix (open-mmlab#2105)
  Fix device bug (open-mmlab#2098)
  fix test ddp initialize (open-mmlab#2100)
  set FORCE_CUDA (open-mmlab#2097)
  Speed up sampler (open-mmlab#2094)
  Use official DDP to implement MMDDP (open-mmlab#2090)
  log meta (open-mmlab#2086)
  fix pad_val not used in class Pad when pad to a fixed size (open-mmlab#2093)
  remove cython docs (open-mmlab#2091)
  remove new_tensor (open-mmlab#2092)
  ...
ioir123ju pushed a commit to ioir123ju/mmdetection that referenced this pull request Mar 30, 2020
Code Release: CARAFE: Content-Aware ReAssembly of FEatures (ICCV 2019) (open-mmlab#1583)

* add carafe ops

* rename carafe benchmark

* grad check fix

* update grad check

* update grad check output

* add fpn carafe & mask head carafe

* add ReadMe

* update readme

* add carafe setup

* update naive carafe

* update readme and setup

* readme typo fix

* fix flake8 error

* fix flake 8 error

* fix flake 8

* fix flake 8 more

* flake 8 fix plus

* flake 8 fix

* fix flake 8

* reformat ops files

* update fpn files and cfgs

* update readme

* update fcn_mask_head

* update fpn_carafe

* update kernel

* update

* update

* add docstring in FPN_CARAFE

* reformat with yapf

* update

* update

* add build upsampler

* fix mask head build error

* reformat build upsample layer

* add doc string for CARAFE and PixelShuffle

* update

* update upsample_cfg_

* update

* update doc string

* rm abbr in build upsample layer

* update readme

* update model_zoo

* add link to other features in ReadMe
mike112223 pushed a commit to mike112223/mmdetection that referenced this pull request Aug 25, 2020
jben-hun pushed a commit to jben-hun/mmdetection that referenced this pull request Jan 10, 2025