Commit 02a9969

Refactor object detection box predictors and fix some issues with model_main. (tensorflow#4965)
* Merged commit includes the following changes (all by Zhichao Lu):

  - 206852642: Build the balanced_positive_negative_sampler in the model builder for Faster R-CNN, and add an option to use the static implementation of the sampler.
  - 206803260: Fix a misplaced argument in the resnet FPN feature extractor.
  - 206682736: Modify the SSD meta architecture to support both Slim-based and Keras-based box predictors, and begin preparing for Keras box predictor support in the other meta architectures. Concretely, this adds a new `KerasBoxPredictor` base class and makes the meta architectures call whichever box predictors they are using. The non-SSD meta architectures can switch to fully supporting Keras box predictors once the Keras Convolutional Box Predictor CL is submitted.
  - 206669634: Add an alternate method for the balanced positive/negative sampler using static shapes.
  - 206643278: Add a Keras layer hyperparameter configuration object to the hyperparams_builder. It automatically converts Slim layer hyperparameter configs to Keras layer hyperparameters: it builds Keras initializers/regularizers instead of Slim ones, maps weights_regularizer/initializer to kernel_regularizer/initializer, converts batch norm decay to momentum, and converts Slim l2 regularizer weights to the equivalent Keras l2 weights (a sketch of this mapping follows this list). This will be used in the conversion of object detection feature extractors and box predictors to newer TensorFlow APIs.
  - 206611681: Internal changes.
  - 206591619: Clip to the expected padded static shape when the input tensors are larger than it.
  - 206517644: Make MultiscaleGridAnchorGenerator more consistent with MultipleGridAnchorGenerator.
  - 206415624: Make the hardcoded feature pyramid network (FPN) levels configurable for both SSD Resnet and SSD Mobilenet.
  - 206398204: Modify the SSD meta architecture to support both Slim-based and Keras-based feature extractors, allowing the conversion of object detection to newer TensorFlow APIs to begin.
  - 206213448: Add a method to compute the expected classification loss by background/foreground weighting.
  - 206204232: Add the keypoint head to the Mask R-CNN pipeline.
  - 206200352: Create the Faster R-CNN target assigner in the model builder. This allows configuring matchers in the target assigner to use TPU-compatible ops (tf.gather in this case) without any change in the meta architecture. As a positive side effect of the refactoring, a single target assigner can now be re-used for all of the second-stage heads in Faster R-CNN.
  - 206178206: Force the SSD feature extractor builder to use keyword arguments so values won't be passed to the wrong arguments.
  - 206168297: Update the exporter to use freeze_graph.freeze_graph_with_def_protos rather than a homegrown version.
  - 206080748: Merge external contributions.
  - 206074460: Update the preprocessor to apply temperature and softmax to the multiclass scores on read.
  - 205960802: Fix a bug in the hierarchical label expansion script.
  - 205944686: Update the exporter to support exporting quantized models.
  - 205912529: Add a two-stage matcher that thresholds on one criterion and then argmaxes on the other (a rough sketch of the idea also follows this list).
  - 205909017: Add a test for the grayscale image_resizer.
  - 205892801: Add a flag to decide whether to apply batch norm to the conv layers of the weight shared box predictor.
  - 205824449: Make sure that by default the Mask R-CNN box predictor predicts 2 stages.
  - 205730139: Update a warning message to be more explicit about variable size mismatches.
  - 205696992: Remove utils/ops.py's dependency on core/box_list_ops.py, which allows re-using TPU-compatible ops from utils/ops.py in core/box_list_ops.py.
  - 205696867: Refactor the Mask R-CNN predictor so that each head lives in a separate file, making it easier to add new heads in the future.
  - 205492073: Refactor the R-FCN box predictor to be TPU compliant: change utils/ops.py:position_sensitive_crop_regions to operate on a single image and set of boxes without `box_ind`, add a batched version that operates on batches of images and batches of boxes, and refactor the R-FCN box predictor to use the batched version of position-sensitive crop regions.
  - 205453567: Fix a bug where the inference graph could not be exported when the write_inference_graph flag is True.
  - 205316039: Change the input tensor name.
  - 205256307: Fix model zoo links for quantized models.
  - 205164432: Fix an eval error when the label map contains non-ASCII characters.
  - 205129842: Add an option in Faster R-CNN to clip the anchors to the window size without filtering out the overlapping boxes.
  - 205094863: Update label map util with the option of adding a background class and filling in gaps in the label map. Useful for multiclass scores, which require a complete label map with an explicit background label.
  - 204989032: Add tf.prof support to the exporter.
  - 204825267: Modify the Mask R-CNN box predictor tests for TPU compatibility.
  - 204778749: Remove score filtering from postprocessing.py and rely on the filtering logic in tf.image.non_max_suppression.
  - 204775818: Python 3 fixes for object_detection.
  - 204745920: Object Detection dataset visualization tool (documentation).
  - 204686993: Internal changes.
  - 204559667: Refactor box_predictor.py into multiple files. The abstract base class remains in object_detection/core; the other classes each move to a separate file in object_detection/predictors.
  - 204552847: Update the blog post link.
  - 204508028: Bump the batch size down to 1024 to be a bit more tolerant of OOM and double the number of iterations. This job still converges to 20.5 mAP in 3 hours.

  PiperOrigin-RevId: 206852642

* Add original post-processing back.
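The Slim-to-Keras hyperparameter conversion in change 206643278 hinges on two numeric mappings. Here is a minimal sketch of them; the helper names are hypothetical and not the builder's actual API, but the conversion factors follow from how Slim and Keras define their regularizers and batch norm parameters:

```python
import tensorflow as tf

def keras_l2_from_slim(slim_l2_weight):
  # Hypothetical helper. Slim's l2_regularizer(scale) computes
  # scale * sum(w**2) / 2, while tf.keras.regularizers.l2(l) computes
  # l * sum(w**2), so the equivalent Keras weight is half the Slim weight.
  return tf.keras.regularizers.l2(float(slim_l2_weight) * 0.5)

def keras_momentum_from_slim(slim_batch_norm_decay):
  # Hypothetical helper. Slim batch_norm's `decay` plays the same role as
  # Keras BatchNormalization's `momentum`, so the value carries over as-is.
  return slim_batch_norm_decay
```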
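Likewise, the two-stage matching of change 205912529 can be pictured with a short, hypothetical sketch (an illustration of the idea, not the class the commit adds): keep candidates whose first criterion clears a threshold, then take the argmax of a second criterion among the survivors.

```python
import numpy as np

def two_stage_match(threshold_scores, argmax_scores, threshold):
  # Hypothetical illustration: stage 1 thresholds on one criterion,
  # stage 2 argmaxes on the other among the surviving candidates.
  valid = threshold_scores >= threshold
  if not np.any(valid):
    return -1  # unmatched
  return int(np.argmax(np.where(valid, argmax_scores, -np.inf)))

# Candidates 1 and 2 pass the threshold; candidate 2 wins the argmax.
print(two_stage_match(np.array([0.2, 0.6, 0.9]),
                      np.array([0.8, 0.1, 0.5]), threshold=0.5))  # -> 2
```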
1 parent: d135ed9

80 files changed (+5062 lines, -2175 lines)

research/object_detection/README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -79,7 +79,7 @@ Extras:
       Run the evaluation for the Open Images Challenge 2018</a><br>
 * <a href='g3doc/tpu_compatibility.md'>
       TPU compatible detection pipelines</a><br>
-* <a href='g3doc/running_on_mobile_tensorflowlite.md'>
+* <a href='g3doc/running_on_mobile_tensorflowlite.md'>
       Running object detection on mobile devices with TensorFlow Lite</a><br>

 ## Getting Help
```

research/object_detection/anchor_generators/multiple_grid_anchor_generator.py

Lines changed: 2 additions & 4 deletions

```diff
@@ -157,12 +157,10 @@ def _generate(self, feature_map_shape_list, im_height=1, im_width=1):
         correspond to an 8x8 layer followed by a 7x7 layer.
       im_height: the height of the image to generate the grid for. If both
         im_height and im_width are 1, the generated anchors default to
-        normalized coordinates, otherwise absolute coordinates are used for the
-        grid.
+        absolute coordinates, otherwise normalized coordinates are produced.
       im_width: the width of the image to generate the grid for. If both
         im_height and im_width are 1, the generated anchors default to
-        normalized coordinates, otherwise absolute coordinates are used for the
-        grid.
+        absolute coordinates, otherwise normalized coordinates are produced.

     Returns:
       boxes_list: a list of BoxLists each holding anchor boxes corresponding to
```

research/object_detection/anchor_generators/multiscale_grid_anchor_generator.py

Lines changed: 20 additions & 11 deletions

```diff
@@ -57,14 +57,12 @@ def __init__(self, min_level, max_level, anchor_scale, aspect_ratios,
     self._scales_per_octave = scales_per_octave
     self._normalize_coordinates = normalize_coordinates

+    scales = [2**(float(scale) / scales_per_octave)
+              for scale in xrange(scales_per_octave)]
+    aspects = list(aspect_ratios)
+
     for level in range(min_level, max_level + 1):
       anchor_stride = [2**level, 2**level]
-      scales = []
-      aspects = []
-      for scale in range(scales_per_octave):
-        scales.append(2**(float(scale) / scales_per_octave))
-      for aspect_ratio in aspect_ratios:
-        aspects.append(aspect_ratio)
       base_anchor_size = [2**level * anchor_scale, 2**level * anchor_scale]
       self._anchor_grid_info.append({
           'level': level,
@@ -84,7 +82,7 @@ def num_anchors_per_location(self):
     return len(self._anchor_grid_info) * [
         len(self._aspect_ratios) * self._scales_per_octave]

-  def _generate(self, feature_map_shape_list, im_height, im_width):
+  def _generate(self, feature_map_shape_list, im_height=1, im_width=1):
     """Generates a collection of bounding boxes to be used as anchors.

     Currently we require the input image shape to be statically defined. That
@@ -95,14 +93,20 @@ def _generate(self, feature_map_shape_list, im_height, im_width):
         format [(height_0, width_0), (height_1, width_1), ...]. For example,
         setting feature_map_shape_list=[(8, 8), (7, 7)] asks for anchors that
         correspond to an 8x8 layer followed by a 7x7 layer.
-      im_height: the height of the image to generate the grid for.
-      im_width: the width of the image to generate the grid for.
+      im_height: the height of the image to generate the grid for. If both
+        im_height and im_width are 1, anchors can only be generated in
+        absolute coordinates.
+      im_width: the width of the image to generate the grid for. If both
+        im_height and im_width are 1, anchors can only be generated in
+        absolute coordinates.

     Returns:
       boxes_list: a list of BoxLists each holding anchor boxes corresponding to
         the input feature map shapes.
     Raises:
       ValueError: if im_height and im_width are not integers.
+      ValueError: if im_height and im_width are 1, but normalized coordinates
+        were requested.
     """
     if not isinstance(im_height, int) or not isinstance(im_width, int):
       raise ValueError('MultiscaleGridAnchorGenerator currently requires '
@@ -118,9 +122,9 @@ def _generate(self, feature_map_shape_list, im_height, im_width):
       feat_h = feat_shape[0]
       feat_w = feat_shape[1]
       anchor_offset = [0, 0]
-      if im_height % 2.0**level == 0:
+      if im_height % 2.0**level == 0 or im_height == 1:
         anchor_offset[0] = stride / 2.0
-      if im_width % 2.0**level == 0:
+      if im_width % 2.0**level == 0 or im_width == 1:
         anchor_offset[1] = stride / 2.0
       ag = grid_anchor_generator.GridAnchorGenerator(
           scales,
@@ -131,6 +135,11 @@ def _generate(self, feature_map_shape_list, im_height, im_width):
       (anchor_grid,) = ag.generate(feature_map_shape_list=[(feat_h, feat_w)])

       if self._normalize_coordinates:
+        if im_height == 1 or im_width == 1:
+          raise ValueError(
+              'Normalized coordinates were requested upon construction of the '
+              'MultiscaleGridAnchorGenerator, but a subsequent call to '
+              'generate did not supply dimension information.')
         anchor_grid = box_list_ops.to_normalized_coordinates(
             anchor_grid, im_height, im_width, check_range=False)
       anchor_grid_list.append(anchor_grid)
```
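The hoisted list comprehension above computes the same octave-spaced scales the old per-level loops rebuilt on every iteration. A quick plain-Python illustration (using range in place of the diff's Python 2 xrange), for scales_per_octave = 3:

```python
scales_per_octave = 3
scales = [2**(float(scale) / scales_per_octave)
          for scale in range(scales_per_octave)]
print(scales)  # [1.0, 1.2599210498948732, 1.5874010519681994]
```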

research/object_detection/anchor_generators/multiscale_grid_anchor_generator_test.py

Lines changed: 35 additions & 1 deletion

```diff
@@ -47,6 +47,40 @@ def test_construct_single_anchor(self):
       anchor_corners_out = anchor_corners.eval()
       self.assertAllClose(anchor_corners_out, exp_anchor_corners)

+  def test_construct_single_anchor_unit_dimensions(self):
+    min_level = 5
+    max_level = 5
+    anchor_scale = 1.0
+    aspect_ratios = [1.0]
+    scales_per_octave = 1
+    im_height = 1
+    im_width = 1
+    feature_map_shape_list = [(2, 2)]
+    # Positive offsets are produced.
+    exp_anchor_corners = [[0, 0, 32, 32],
+                          [0, 32, 32, 64],
+                          [32, 0, 64, 32],
+                          [32, 32, 64, 64]]
+
+    anchor_generator = mg.MultiscaleGridAnchorGenerator(
+        min_level, max_level, anchor_scale, aspect_ratios, scales_per_octave,
+        normalize_coordinates=False)
+    anchors_list = anchor_generator.generate(
+        feature_map_shape_list, im_height=im_height, im_width=im_width)
+    anchor_corners = anchors_list[0].get()
+
+    with self.test_session():
+      anchor_corners_out = anchor_corners.eval()
+      self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+  def test_construct_normalized_anchors_fails_with_unit_dimensions(self):
+    anchor_generator = mg.MultiscaleGridAnchorGenerator(
+        min_level=5, max_level=5, anchor_scale=1.0, aspect_ratios=[1.0],
+        scales_per_octave=1, normalize_coordinates=True)
+    with self.assertRaisesRegexp(ValueError, 'Normalized coordinates'):
+      anchor_generator.generate(
+          feature_map_shape_list=[(2, 2)], im_height=1, im_width=1)
+
   def test_construct_single_anchor_in_normalized_coordinates(self):
     min_level = 5
     max_level = 5
@@ -94,7 +128,7 @@ def test_construct_single_anchor_fails_with_tensor_image_size(self):
     anchor_generator = mg.MultiscaleGridAnchorGenerator(
         min_level, max_level, anchor_scale, aspect_ratios, scales_per_octave,
         normalize_coordinates=False)
-    with self.assertRaises(ValueError):
+    with self.assertRaisesRegexp(ValueError, 'statically defined'):
       anchor_generator.generate(
           feature_map_shape_list, im_height=im_height, im_width=im_width)
```

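The expected corners in test_construct_single_anchor_unit_dimensions follow from the offset logic added to the generator above: at level 5 the stride and base anchor size are both 32, and unit image dimensions force the half-stride offset. A self-contained re-derivation in plain Python (an illustration, not the generator's implementation):

```python
level = 5
stride = 2 ** level              # 32
base_size = 2 ** level * 1.0     # anchor_scale = 1.0 -> 32.0
offset = stride / 2.0            # 16.0, since im_height == im_width == 1

corners = []
for row in range(2):             # 2x2 feature map
  for col in range(2):
    cy, cx = offset + row * stride, offset + col * stride
    corners.append([cy - base_size / 2, cx - base_size / 2,
                    cy + base_size / 2, cx + base_size / 2])

print(corners)
# [[0.0, 0.0, 32.0, 32.0], [0.0, 32.0, 32.0, 64.0],
#  [32.0, 0.0, 64.0, 32.0], [32.0, 32.0, 64.0, 64.0]]
```
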
research/object_detection/builders/box_predictor_builder.py

Lines changed: 91 additions & 66 deletions

```diff
@@ -15,7 +15,12 @@

 """Function to build box predictor from configuration."""

-from object_detection.core import box_predictor
+from object_detection.predictors import convolutional_box_predictor
+from object_detection.predictors import mask_rcnn_box_predictor
+from object_detection.predictors import rfcn_box_predictor
+from object_detection.predictors.mask_rcnn_heads import box_head
+from object_detection.predictors.mask_rcnn_heads import class_head
+from object_detection.predictors.mask_rcnn_heads import mask_head
 from object_detection.protos import box_predictor_pb2


@@ -48,92 +53,112 @@ def build(argscope_fn, box_predictor_config, is_training, num_classes):
   box_predictor_oneof = box_predictor_config.WhichOneof('box_predictor_oneof')

   if box_predictor_oneof == 'convolutional_box_predictor':
-    conv_box_predictor = box_predictor_config.convolutional_box_predictor
-    conv_hyperparams_fn = argscope_fn(conv_box_predictor.conv_hyperparams,
+    config_box_predictor = box_predictor_config.convolutional_box_predictor
+    conv_hyperparams_fn = argscope_fn(config_box_predictor.conv_hyperparams,
                                       is_training)
-    box_predictor_object = box_predictor.ConvolutionalBoxPredictor(
-        is_training=is_training,
-        num_classes=num_classes,
-        conv_hyperparams_fn=conv_hyperparams_fn,
-        min_depth=conv_box_predictor.min_depth,
-        max_depth=conv_box_predictor.max_depth,
-        num_layers_before_predictor=(conv_box_predictor.
-                                     num_layers_before_predictor),
-        use_dropout=conv_box_predictor.use_dropout,
-        dropout_keep_prob=conv_box_predictor.dropout_keep_probability,
-        kernel_size=conv_box_predictor.kernel_size,
-        box_code_size=conv_box_predictor.box_code_size,
-        apply_sigmoid_to_scores=conv_box_predictor.apply_sigmoid_to_scores,
-        class_prediction_bias_init=(conv_box_predictor.
-                                    class_prediction_bias_init),
-        use_depthwise=conv_box_predictor.use_depthwise
-    )
+    box_predictor_object = (
+        convolutional_box_predictor.ConvolutionalBoxPredictor(
+            is_training=is_training,
+            num_classes=num_classes,
+            conv_hyperparams_fn=conv_hyperparams_fn,
+            min_depth=config_box_predictor.min_depth,
+            max_depth=config_box_predictor.max_depth,
+            num_layers_before_predictor=(
+                config_box_predictor.num_layers_before_predictor),
+            use_dropout=config_box_predictor.use_dropout,
+            dropout_keep_prob=config_box_predictor.dropout_keep_probability,
+            kernel_size=config_box_predictor.kernel_size,
+            box_code_size=config_box_predictor.box_code_size,
+            apply_sigmoid_to_scores=config_box_predictor.
+            apply_sigmoid_to_scores,
+            class_prediction_bias_init=(
+                config_box_predictor.class_prediction_bias_init),
+            use_depthwise=config_box_predictor.use_depthwise))
     return box_predictor_object

   if box_predictor_oneof == 'weight_shared_convolutional_box_predictor':
-    conv_box_predictor = (box_predictor_config.
-                          weight_shared_convolutional_box_predictor)
-    conv_hyperparams_fn = argscope_fn(conv_box_predictor.conv_hyperparams,
+    config_box_predictor = (
+        box_predictor_config.weight_shared_convolutional_box_predictor)
+    conv_hyperparams_fn = argscope_fn(config_box_predictor.conv_hyperparams,
                                       is_training)
-    box_predictor_object = box_predictor.WeightSharedConvolutionalBoxPredictor(
-        is_training=is_training,
-        num_classes=num_classes,
-        conv_hyperparams_fn=conv_hyperparams_fn,
-        depth=conv_box_predictor.depth,
-        num_layers_before_predictor=(
-            conv_box_predictor.num_layers_before_predictor),
-        kernel_size=conv_box_predictor.kernel_size,
-        box_code_size=conv_box_predictor.box_code_size,
-        class_prediction_bias_init=conv_box_predictor.
-        class_prediction_bias_init,
-        use_dropout=conv_box_predictor.use_dropout,
-        dropout_keep_prob=conv_box_predictor.dropout_keep_probability,
-        share_prediction_tower=conv_box_predictor.share_prediction_tower)
+    apply_batch_norm = config_box_predictor.conv_hyperparams.HasField(
+        'batch_norm')
+    box_predictor_object = (
+        convolutional_box_predictor.WeightSharedConvolutionalBoxPredictor(
+            is_training=is_training,
+            num_classes=num_classes,
+            conv_hyperparams_fn=conv_hyperparams_fn,
+            depth=config_box_predictor.depth,
+            num_layers_before_predictor=(
+                config_box_predictor.num_layers_before_predictor),
+            kernel_size=config_box_predictor.kernel_size,
+            box_code_size=config_box_predictor.box_code_size,
+            class_prediction_bias_init=config_box_predictor.
+            class_prediction_bias_init,
+            use_dropout=config_box_predictor.use_dropout,
+            dropout_keep_prob=config_box_predictor.dropout_keep_probability,
+            share_prediction_tower=config_box_predictor.share_prediction_tower,
+            apply_batch_norm=apply_batch_norm))
     return box_predictor_object

   if box_predictor_oneof == 'mask_rcnn_box_predictor':
-    mask_rcnn_box_predictor = box_predictor_config.mask_rcnn_box_predictor
-    fc_hyperparams_fn = argscope_fn(mask_rcnn_box_predictor.fc_hyperparams,
+    config_box_predictor = box_predictor_config.mask_rcnn_box_predictor
+    fc_hyperparams_fn = argscope_fn(config_box_predictor.fc_hyperparams,
                                     is_training)
     conv_hyperparams_fn = None
-    if mask_rcnn_box_predictor.HasField('conv_hyperparams'):
+    if config_box_predictor.HasField('conv_hyperparams'):
       conv_hyperparams_fn = argscope_fn(
-          mask_rcnn_box_predictor.conv_hyperparams, is_training)
-    box_predictor_object = box_predictor.MaskRCNNBoxPredictor(
+          config_box_predictor.conv_hyperparams, is_training)
+    box_prediction_head = box_head.BoxHead(
         is_training=is_training,
         num_classes=num_classes,
         fc_hyperparams_fn=fc_hyperparams_fn,
-        use_dropout=mask_rcnn_box_predictor.use_dropout,
-        dropout_keep_prob=mask_rcnn_box_predictor.dropout_keep_probability,
-        box_code_size=mask_rcnn_box_predictor.box_code_size,
-        conv_hyperparams_fn=conv_hyperparams_fn,
-        predict_instance_masks=mask_rcnn_box_predictor.predict_instance_masks,
-        mask_height=mask_rcnn_box_predictor.mask_height,
-        mask_width=mask_rcnn_box_predictor.mask_width,
-        mask_prediction_num_conv_layers=(
-            mask_rcnn_box_predictor.mask_prediction_num_conv_layers),
-        mask_prediction_conv_depth=(
-            mask_rcnn_box_predictor.mask_prediction_conv_depth),
-        masks_are_class_agnostic=(
-            mask_rcnn_box_predictor.masks_are_class_agnostic),
-        predict_keypoints=mask_rcnn_box_predictor.predict_keypoints,
+        use_dropout=config_box_predictor.use_dropout,
+        dropout_keep_prob=config_box_predictor.dropout_keep_probability,
+        box_code_size=config_box_predictor.box_code_size,
         share_box_across_classes=(
-            mask_rcnn_box_predictor.share_box_across_classes))
+            config_box_predictor.share_box_across_classes))
+    class_prediction_head = class_head.ClassHead(
+        is_training=is_training,
+        num_classes=num_classes,
+        fc_hyperparams_fn=fc_hyperparams_fn,
+        use_dropout=config_box_predictor.use_dropout,
+        dropout_keep_prob=config_box_predictor.dropout_keep_probability)
+    third_stage_heads = {}
+    if config_box_predictor.predict_instance_masks:
+      third_stage_heads[
+          mask_rcnn_box_predictor.MASK_PREDICTIONS] = mask_head.MaskHead(
+              num_classes=num_classes,
+              conv_hyperparams_fn=conv_hyperparams_fn,
+              mask_height=config_box_predictor.mask_height,
+              mask_width=config_box_predictor.mask_width,
+              mask_prediction_num_conv_layers=(
+                  config_box_predictor.mask_prediction_num_conv_layers),
+              mask_prediction_conv_depth=(
+                  config_box_predictor.mask_prediction_conv_depth),
+              masks_are_class_agnostic=(
+                  config_box_predictor.masks_are_class_agnostic))
+    box_predictor_object = mask_rcnn_box_predictor.MaskRCNNBoxPredictor(
+        is_training=is_training,
+        num_classes=num_classes,
+        box_prediction_head=box_prediction_head,
+        class_prediction_head=class_prediction_head,
+        third_stage_heads=third_stage_heads)
     return box_predictor_object

   if box_predictor_oneof == 'rfcn_box_predictor':
-    rfcn_box_predictor = box_predictor_config.rfcn_box_predictor
-    conv_hyperparams_fn = argscope_fn(rfcn_box_predictor.conv_hyperparams,
+    config_box_predictor = box_predictor_config.rfcn_box_predictor
+    conv_hyperparams_fn = argscope_fn(config_box_predictor.conv_hyperparams,
                                       is_training)
-    box_predictor_object = box_predictor.RfcnBoxPredictor(
+    box_predictor_object = rfcn_box_predictor.RfcnBoxPredictor(
         is_training=is_training,
         num_classes=num_classes,
         conv_hyperparams_fn=conv_hyperparams_fn,
-        crop_size=[rfcn_box_predictor.crop_height,
-                   rfcn_box_predictor.crop_width],
-        num_spatial_bins=[rfcn_box_predictor.num_spatial_bins_height,
-                          rfcn_box_predictor.num_spatial_bins_width],
-        depth=rfcn_box_predictor.depth,
-        box_code_size=rfcn_box_predictor.box_code_size)
+        crop_size=[config_box_predictor.crop_height,
+                   config_box_predictor.crop_width],
+        num_spatial_bins=[config_box_predictor.num_spatial_bins_height,
+                          config_box_predictor.num_spatial_bins_width],
+        depth=config_box_predictor.depth,
+        box_code_size=config_box_predictor.box_code_size)
     return box_predictor_object
   raise ValueError('Unknown box predictor: {}'.format(box_predictor_oneof))
```
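For context on how the refactored builder is driven, here is a usage sketch patterned after the builder's unit tests; the config values are illustrative, and hyperparams_builder.build is the argscope_fn the object detection pipeline normally supplies:

```python
from google.protobuf import text_format
from object_detection.builders import box_predictor_builder
from object_detection.builders import hyperparams_builder
from object_detection.protos import box_predictor_pb2

# A minimal convolutional box predictor config; values are illustrative.
box_predictor_text_proto = """
  convolutional_box_predictor {
    conv_hyperparams {
      regularizer { l2_regularizer { weight: 0.0003 } }
      initializer { truncated_normal_initializer { mean: 0.0 stddev: 0.3 } }
    }
  }
"""
box_predictor_proto = box_predictor_pb2.BoxPredictor()
text_format.Merge(box_predictor_text_proto, box_predictor_proto)

# The builder dispatches on whichever oneof field the config sets.
box_predictor_object = box_predictor_builder.build(
    argscope_fn=hyperparams_builder.build,
    box_predictor_config=box_predictor_proto,
    is_training=True,
    num_classes=90)
```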
