Reshape single input batches for inputs of varying dimension #1313
Conversation
src/caffe/layers/data_layer.cpp
Outdated
Why not `const Datum& datum = iter_->value;`?
By the way, you forgot to add `DecodeDatum(&datum);` in case the datum was encoded.
The Datum isn't stored with the proper dimensions, and they're only set correctly when decoding? Shouldn't encoding/decoding be isolated to the `data` and `float_data` fields?
@shelhamer Reading the comments on this ticket, the status is unclear. It was self-assigned, but there is no evidence of further steps needed: no todo list, no bullet points.
src/caffe/layers/data_layer.cpp
Outdated
Does this work if crop_size is nonzero?
So I would like to merge this soon. Why not lose …? Then cropping should work fine (it looks broken, no?), and the patch should only be adding ~2 net LOC. Unless I'm forgetting something.
@longjon re: #1313 (comment): it does seem like this could be simplified so that data layers unconditionally reshape, but we need to decide what to do about …
It can wait until a more general data reformation, though.
7fa470e to ba39b58
@longjon DATA + IMAGE_DATA now reshape, with tests. Could you review + merge?
src/caffe/layers/base_data_layer.cpp
Outdated
Should this be `Reshape(this->prefetch_data_.num(), ...)`? This is unconditional, not just batch size one, right?
ba39b58 to 7cbd4ff
To feed inputs of varying dimension, the `DATA` and `IMAGE_DATA` layers reshape their prefetch and top blobs when the batch size is 1. `BasePrefetchingDataLayer` always reshapes on forward.
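For concreteness, here is a minimal sketch (not the merged patch) of what an unconditional forward-time reshape in `BasePrefetchingDataLayer` could look like. It assumes Caffe's `Blob` interface (`Reshape(num, channels, height, width)`, `count()`, `cpu_data()`) and the `caffe_copy` helper; the prefetch-thread method names and the reference-style `top` parameter are illustrative rather than exact.

```cpp
// Sketch only: propagate the prefetched blob's shape to the top blob on
// every forward pass, so a batch whose dimensions changed is sized correctly.
template <typename Dtype>
void BasePrefetchingDataLayer<Dtype>::Forward_cpu(
    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  // Wait for the prefetch thread to finish filling prefetch_data_.
  JoinPrefetchThread();
  // Unconditional reshape: a no-op whenever the shape has not changed.
  top[0]->Reshape(prefetch_data_.num(), prefetch_data_.channels(),
                  prefetch_data_.height(), prefetch_data_.width());
  caffe_copy(prefetch_data_.count(), prefetch_data_.cpu_data(),
             top[0]->mutable_cpu_data());
  if (this->output_labels_) {
    caffe_copy(prefetch_label_.count(), prefetch_label_.cpu_data(),
               top[1]->mutable_cpu_data());
  }
  // Kick off prefetching of the next batch in the background.
  CreatePrefetchThread();
}
```

Note that the top reshape here simply mirrors the prefetch blob on each pass, which is the unconditional behavior the review question above about `Reshape(this->prefetch_data_.num(), ...)` is getting at.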
7cbd4ff to 46ae0f9
46ae0f9 to d3b2010
Hm, I think the Python wrapper needs to be adjusted as well; otherwise the preprocessing will always resize the input image.
To feed inputs of varying dimension, the DATA layer reshapes its prefetch and top blobs when the batch size is 1. This is useful for models of variable input size, such as fully convolutional models.
By the grace of #594 this is a simple change.
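As a rough illustration (not the exact diff), the single-input reshape inside the DATA layer's prefetch loop could look like the sketch below. The `channels()`, `height()`, and `width()` accessors come from the `Datum` protobuf; `batch_size`, `crop_size`, and `top_data` are placeholder locals standing in for whatever the prefetch code actually uses.

```cpp
// Sketch only: when the batch holds a single item and no crop is requested,
// size the prefetch blob from the current Datum so each record keeps its own
// height and width instead of the dimensions of the first record seen.
if (batch_size == 1 && crop_size == 0) {
  this->prefetch_data_.Reshape(1, datum.channels(),
                               datum.height(), datum.width());
  top_data = this->prefetch_data_.mutable_cpu_data();
}
```

With a batch size greater than 1, items of different dimensions cannot share a single blob, so the reshape is restricted to single-input batches; packing or padding larger batches is left to a more general data reformation.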