Support expand op #4
Conversation
```cpp
const auto& shape_tensor = *initializers.at(input_defs[1]->Name());
std::vector<int64_t> raw_shape;
// We already checked the raw_shape (new_shape) in IsOpSupportedImpl.
GetExpandShape(shape_tensor, raw_shape, logger);
```
Can we enlarge raw_shape to input.rank, right aligned with leading ones before passing to WebNN? My implementation of expand requires that the input rank and new shape length match (like slice).
> Can we enlarge raw_shape to input.rank, right aligned with leading ones before passing to WebNN?
In the above case, the size of the new shape should be less than or equal to the size of the input shape. So a new shape whose size is greater than the input shape's size isn't currently allowed, right?
> So a new shape whose size is greater than the input shape's size isn't currently allowed, right?
Yes, in my WebNN implementation the old input shape and the new output shape must have the same dimension count (rank), i.e. raw_shape.size() == input.shape().size() == output.shape().size(). Did you encounter a case in the model where the input shape and the new output shape had differing rank? (The cases I've seen match: if the input is 3D, then the new shape has 3 elements.)
Is there a case where the size of the new shape is greater than that of the input's shape?
For example, 2D to 3D:

```
input_shape = [a, b]
new_shape = [1, a, b]
```
> Is there a case where the size of the new shape is greater than that of the input's shape?
Although I haven't seen it in real models, ONNX allows it (https://github.com/onnx/onnx/blob/main/docs/Operators.md#expand in the dim_changed example), and so probably there exists at least one model out there which does. I would prefer to see the WebNN implementation be simple in its rules (like slice which requires starts and sizes to match rank with input) and for frameworks to resolve these higher layer policies before reaching the lower level WebNN. Is it easy to call reshape on the input with leading 1's if the newShape has more dimensions?
> input_shape = [a, b];
> new_shape = [1, a, b];
🤔 We'll likely need to add a Reshape call in this case, because otherwise when running the ONNX backend tests in the checkin tests, we'll get a failure with Expand (but we can complete this CR and track that when merging to main too).
```cpp
  LOGS(logger, VERBOSE) << "The shape of expand must be 1D.";
  return false;
}
const int64_t* shape_data = reinterpret_cast<const int64_t*>(unpacked_tensor.data());
```
We should sanity check that the TensorProto data type is actually ONNXTensorElementDataType::ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64 before attempting to dereference it.
Could you guide me on how to do the sanity check? Is there some sample code? Thanks.
I haven't tried actually compiling this, but I think it would just be:

```cpp
if (tensor.data_type() != ONNXTensorElementDataType::ONNX_TENSOR_ELEMENT_DATA_TYPE_INT64) {
  LOGS(logger, VERBOSE) << "The shape element data type must be INT64.";
  return false;
}
const int64_t* shape_data = reinterpret_cast<const int64_t*>(unpacked_tensor.data());
```

Verified, successfully compiled.
```cpp
const auto& input_defs = node.InputDefs();
const auto& shape_name = input_defs[1]->Name();
if (!Contains(initializers, shape_name)) {
  LOGS(logger, VERBOSE) << "The shape must be a constant initializer";
```
```suggestion
LOGS(logger, VERBOSE) << "The shape must be a constant initializer.";
```

(nit) period
Thanks @fdwr, I've updated the commit to address your comments. Please take another look.

Btw, I fear force pushing is making it hard to find the previous comments in the code, because when I try to view them later in the Files tab, I just see this... ...and then I can't compare the small delta to what I've previously reviewed :(. Rather than force pushing, can you push a merge commit and just squash at the end before completing? TY.

Thanks. I will follow that later.
### Description
Release OrtEnv before the main function returns. Before this change, OrtEnv was deleted when the C/C++ runtime destroyed all global variables in ONNX Runtime's core framework.
The call stack looked like this:
```
* frame #0: 0x00007fffee39f5a6 libonnxruntime.so.1.16.0`onnxruntime::Environment::~Environment(this=0x00007fffee39fbf2) at environment.h:20:7
frame #1: 0x00007fffee39f614 libonnxruntime.so.1.16.0`std::default_delete<onnxruntime::Environment>::operator()(this=0x00007ffff4c30e50, __ptr=0x0000000005404b00) const at unique_ptr.h:85:2
frame #2: 0x00007fffee39edca libonnxruntime.so.1.16.0`std::unique_ptr<onnxruntime::Environment, std::default_delete<onnxruntime::Environment>>::~unique_ptr(this=0x5404b00) at unique_ptr.h:361:17
frame #3: 0x00007fffee39e2ab libonnxruntime.so.1.16.0`OrtEnv::~OrtEnv(this=0x00007ffff4c30e50) at ort_env.cc:43:1
frame #4: 0x00007fffee39fa96 libonnxruntime.so.1.16.0`std::default_delete<OrtEnv>::operator()(this=0x00007fffefff8f78, __ptr=0x00007ffff4c30e50) const at unique_ptr.h:85:2
frame #5: 0x00007fffee39f394 libonnxruntime.so.1.16.0`std::unique_ptr<OrtEnv, std::default_delete<OrtEnv>>::~unique_ptr(this=0x7ffff4c30e50) at unique_ptr.h:361:17
frame #6: 0x00007ffff78574b5 libc.so.6`__run_exit_handlers + 261
frame #7: 0x00007ffff7857630 libc.so.6`exit + 32
frame #8: 0x00007ffff783feb7 libc.so.6`__libc_start_call_main + 135
frame #9: 0x00007ffff783ff60 libc.so.6`__libc_start_main@@GLIBC_2.34 + 128
frame #10: 0x0000000000abbdee node`_start + 46
```
After this change, OrtEnv is deleted before the main function returns, while Node.js is still alive.

@Honry PTAL, thanks.