[GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN #20406

Merged
alalek merged 48 commits into opencv:4.x from MarkGHX:gsoc_2021_webnn
Nov 23, 2021
Conversation


@MarkGHX MarkGHX commented Jul 14, 2021

Overview

Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN
Mentor: Ningxin Hu @huningxin
Student: Hanxi Guo @MarkGHX

This pull request changes

  • Modify the CMake files to support WebNN.
  • Create the WebNN backend prototype for OpenCV.js dnn module in C++.
  • Create the WebNN ReLULayer and pass the test case.
  • Create the WebNN implementation of selected compute-intensive operations.
    • BatchNormLayer
    • ConvolutionLayer (group = 1)
    • FullyConnectedLayer
    • ReLULayer
    • ReLU6Layer
    • PoolingLayer
    • ReshapeLayer
    • SoftmaxLayer
    • PermuteLayer
    • ConstantLayer
    • ConcatLayer
  • Compile the OpenCV.js dnn WebNN backend implementation into WebAssembly.
  • Create the image classification sample for the OpenCV.js dnn module using the WebNN backend, then collect and analyze performance numbers across implementations: WebAssembly, WebNN (both WebNN-polyfill and Electron), and native.

Test

My test environments

OS: Ubuntu Linux 18.04.5 LTS
Emscripten: 2.0.15
Browser: Chrome, Version 90.0.4430.72 (Official Build) (64-bit)
Hardware: Intel® Core™ i9-7920X CPU @ 2.90GHz and GeForce GTX 1060 3GB

Preparations

  1. Download this PR to /GSoC2021 (for example).
  2. Download emsdk to /GSoC2021 and install version 2.0.15.
  3. Replace /GSoC2021/emsdk/upstream/emscripten with emscripten-webnn (branch webnn_2.0.15).

To run OpenCV native DNN module with WebNN backend

  1. Refer to WebNN's build instructions to complete the build of WebNN-native with OpenVINO.
  2. Download opencv_extra.
  3. Set the environment variables:
$ source /opt/intel/<openvino_dir>/bin/setupvars.sh
$ export WEBNN_NATIVE_DIR=<webnn_native_out_dir>
$ export LD_LIBRARY_PATH=<opencv_dir>/lib:<webnn_native_out_dir>:${LD_LIBRARY_PATH}
$ export OPENCV_TEST_DATA_PATH=<opencv_extra_dir>/testdata/
  4. Build OpenCV native with the WebNN backend:
$ mkdir build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Release -DWITH_WEBNN=ON -DBUILD_EXAMPLES=ON -DBUILD_TEST=ON -DCMAKE_INSTALL_PREFIX=/usr/local ..
$ make
  5. Copy model files and figures to ./opencv/build/bin
  6. Run the OpenCV native DNN module with the WebNN backend:
$ ./bin/example_dnn_classification --model=./bin/googlenet-v1.caffemodel --config=./bin/googlenet-v1.prototxt --width=224 --height=224 --classes=./bin/classification_classes_ILSVRC2012.txt --input=./bin/space_shuttle.jpg --mean="104 117 123" --rgb=false --backend=6

  7. Expected result:
image
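For reference, the --backend=6 flag above maps onto cv::dnn's Backend enum, which this PR extends with a WebNN entry. A standalone sketch of that mapping follows; the enum values are assumptions based on this PR, so verify them against modules/dnn/include/opencv2/dnn/dnn.hpp:

```cpp
#include <cassert>

// Mirror of cv::dnn::Backend as extended by this PR (values assumed;
// the header in the PR is authoritative).
enum Backend {
    DNN_BACKEND_DEFAULT = 0,
    DNN_BACKEND_HALIDE = 1,
    DNN_BACKEND_INFERENCE_ENGINE = 2,
    DNN_BACKEND_OPENCV = 3,
    DNN_BACKEND_VKCOM = 4,
    DNN_BACKEND_CUDA = 5,
    DNN_BACKEND_WEBNN = 6   // what --backend=6 selects in the sample
};

// The classification sample forwards the numeric flag roughly like
//   net.setPreferableBackend(backendFromFlag(parser.get<int>("backend")));
Backend backendFromFlag(int flag) { return static_cast<Backend>(flag); }
```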

To run OpenCV.js DNN module with WebNN backend using WebNN-polyfill

  1. Build WebNN-polyfill and OpenCV.js:
$ cd GSoC2021
$ git clone https://github.com/MarkGHX/webnn-polyfill
$ cd webnn-polyfill/
$ npm install
$ npm run build
$ cd ..
$ cd opencv
$ mkdir build_js
$ emcmake python ./platforms/js/build_js.py build_js --build_wasm --build_loader --build_doc --build_test --webnn
$ cd ..
$ cd webnn-polyfill/
$ cp dist/webnn-polyfill.js ~/GSoC2021/opencv/build_js/doc/doxygen/html/
$ cd ..
  2. Set up http-server:
$ cd opencv/build_js/doc/doxygen/html/
$ http-server
  3. Open Chrome and go to http://127.0.0.1:8080/js_image_classification_webnn_polyfill.html. Then you can test OpenCV.js GoogleNet with the WebNN backend on the image classification task.
  4. Expected results:
    polyfill

To run OpenCV.js DNN module with WebNN backend using Electron

  1. Refer to WebNN's build instructions to complete the build of WebNN-native with OpenVINO.
  2. Download opencv_extra.
  3. Set the environment variables:
$ source /opt/intel/<openvino_dir>/bin/setupvars.sh
$ export WEBNN_NATIVE_DIR=<webnn_native_out_dir>
$ export LD_LIBRARY_PATH=./lib:<webnn_native_out_dir>:${LD_LIBRARY_PATH}
$ export OPENCV_TEST_DATA_PATH=<opencv_extra_dir>/testdata/
  4. Build Electron in the OpenCV.js DNN module folder:
$ cd opencv/build_js/doc/doxygen/html/
$ npm install
$ npm run start
  5. Expected result:
    electron

Performance results

Inference time in one round

| Model | OpenCV.js wasm | OpenCV.js wasm+simd+threads | OpenCV native default | OpenCV OpenVINO | OpenCV WebNN | OpenCV.js WebNN-polyfill | OpenCV.js WebNN-Electron |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GoogleNet | 825.07 ms | 51.55 ms | 29.32 ms | 10.35 ms | 24.8 ms | 69.15 ms | 24.90 ms |
| SqueezeNet | 462.12 ms | 31.69 ms | 17.4 ms | 4.29 ms | 4.56 ms | 21.27 ms | 4.07 ms |
| AlexNet | 503.84 ms | ms | 9.73 ms | 5.66 ms | 8.87 ms | ms | ms |

Average inference time of 200 rounds

| Model | OpenCV.js wasm | OpenCV.js wasm+simd+threads | OpenCV native default | OpenCV OpenVINO | OpenCV WebNN | OpenCV.js WebNN-polyfill | OpenCV.js WebNN-Electron |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GoogleNet | 862.14 ms | 51.33 ms | 10.48 ms | 3.68 ms | 7.44 ms | 64.28 ms | 24.85 ms |
| SqueezeNet | 461.71 ms | 15.24 ms | 3.99 ms | 1.83 ms | 1.96 ms | 24.74 ms | 1.97 ms |
| AlexNet | ms | ms | 7.56 ms | 5.09 ms | 7.23 ms | ms | ms |

Performance analysis

OpenCV native DNN module

From the performance results above, we can see that in the OpenCV native DNN module (GoogleNet, for example), using the WebNN backend is 5 ms (18.2%) faster than the default implementation. However, there is still a gap between the WebNN backend and the OpenVINO backend. I think this is because the LRN and Dropout layers in GoogleNet are not yet implemented in WebNN, which divides the graph into four sub-graphs linked by the default LRN and Dropout implementations. Optimizing these sub-graphs separately instead of the whole graph reduces the performance of the OpenCV native DNN module with the WebNN backend. By contrast, all of SqueezeNet's ops except Softmax use the WebNN backend, so SqueezeNet is not split into parts and its performance is very close to that of the OpenVINO backend.
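The splitting effect described above can be modeled with a small sketch: each maximal run of WebNN-supported layers becomes one sub-graph, and unsupported layers (LRN, Dropout) break the runs and fall back to the default implementation. This is only an illustration of the behaviour, not OpenCV code:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Count how many WebNN sub-graphs a linear layer sequence splits into:
// every maximal run of supported layers forms one sub-graph.
int countWebnnSubgraphs(const std::vector<std::string>& layers,
                        const std::set<std::string>& unsupported)
{
    int subgraphs = 0;
    bool inRun = false;
    for (const std::string& type : layers) {
        bool supported = unsupported.count(type) == 0;
        if (supported && !inRun)
            ++subgraphs;           // a new run of supported layers starts here
        inRun = supported;
    }
    return subgraphs;
}
```

With LRN and Dropout unsupported, a GoogleNet-like chain splits into several pieces, while a chain with no unsupported layers stays one graph.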

OpenCV.js DNN module

From the performance results above, we can see that the OpenCV.js DNN module (GoogleNet as an example) with either WebNN-polyfill or WebNN-Electron is faster than the wasm-only build, outperforming it by 10.93x and 32.13x respectively. Compared with the wasm+simd+threads build, WebNN-polyfill performs similarly, while WebNN-Electron is at least 2x faster.

force_builders=docs,Custom
buildworker:Docs=linux-4,linux-6
build_image:Docs=docs-js:18.04
build_image:Custom=javascript
buildworker:Custom=linux-4,linux-6

@MarkGHX MarkGHX changed the title Gsoc 2021 webnn [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN Jul 14, 2021
@huningxin huningxin mentioned this pull request Jul 14, 2021
@asmorkalov

@huningxin Please provide feedback.

@huningxin

@asmorkalov, thanks for the reminder. @MarkGHX and I have a weekly meeting to discuss this GSoC project, where I give him my feedback directly. This PR is WIP. However, I think it is a good idea to start logging my feedback in this PR. I'll do that.

@MarkGHX , please help fix the build issue reported by the buildbots. Thanks.


MarkGHX commented Jul 29, 2021

Thanks a lot! I will work on the build issue.

for (it = layers.begin(); it != layers.end(); ++it)
{
LayerData &ld = it->second;
// std::cout<<"Layer Name: "<<ld.name<<"Layer Type: "<<ld.type<<std::endl;

Please remove the commented code.

{
Ptr<WebnnBackendWrapper> wrapper = ld.outputBlobsWrappers[i].dynamicCast<WebnnBackendWrapper>();
std::string outputName = ld.outputBlobsWrappers.size() > 1 ? (ld.name + "." + std::to_string(i)) : ld.name;
// std::cout<<"outputName at 2437: "<<outputName<<std::endl;

ditto.

for (int i = 0; i < ld.outputBlobsWrappers.size(); ++i)
{
Ptr<WebnnBackendWrapper> wrapper = ld.outputBlobsWrappers[i].dynamicCast<WebnnBackendWrapper>();
// std::cout << "wrapper->name: " << wrapper->name << std::endl;

ditto.

}
for (const auto& pin : blobsToKeep_)
{
// std::cout << "pin.lid: " << pin.lid << " ld.id:" << ld.id << std::endl;

ditto.

CV_LOG_WARNING(NULL, "Mask is not supported by WebNN backend.");
return false;
}
return type == MAX || type == AVE;

Probably you can log that the other pooling types are not supported by the WebNN backend.
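The suggestion could be sketched as follows; the warning is returned as a string instead of going through CV_LOG_WARNING so it runs standalone, and the enum is a hypothetical subset of OpenCV's pooling types:

```cpp
#include <cassert>
#include <string>

enum PoolingType { MAX, AVE, STOCHASTIC, SUM, ROI };  // subset, names assumed

// Returns an empty string when the pooling configuration is supported by
// the WebNN backend, otherwise the warning that would be logged.
std::string webnnPoolingRejection(PoolingType type, bool hasMask)
{
    if (hasMask)
        return "Mask is not supported by WebNN backend.";   // existing warning
    if (type != MAX && type != AVE)
        return "Only MAX and AVE pooling are supported by WebNN backend.";  // suggested warning
    return "";  // supported: equivalent to `return type == MAX || type == AVE;`
}
```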

}

void WebnnNet::setUnconnectedNodes(Ptr<WebnnBackendNode>& node) {
// std::cout<<"outputNames in setUnconnectedNodes:"<<node->name<<std::endl;

ditto.

{
std::string name = wrapper->name;
name = name.empty() ? kDefaultInpLayerName : name;
std::cout << "addBlobs: " << name << std::endl;

turn this into a logger or remove it.

input.size = wrapper->size;
input.resource.buffer = wrapper->host->data;
input.resource.byteLength = wrapper->size;
// std::cout<<"in size:"<<input.size<<std::endl;

remove the commented code.

output.size = outs[i]->size;
// std::cout<<"host_shape: ";
// for (int d = 0; d < outs[i]->host->dims; d++)
// std::cout<<outs[i]->host->size[d]<<" ";

ditto

// for (int d = 0; d < outs[i]->host->dims; d++)
// std::cout<<outs[i]->host->size[d]<<" ";
output.byteLength = outs[i]->size;
// std::cout<<"out size:"<<output.byteLength<<std::endl;

ditto


MarkGHX commented Jul 30, 2021

@huningxin Hi Ningxin, the updated code includes the implementation of the pooling layer using the WebNN API and some more detailed logs. You can use the following command to check it:

$ ./bin/opencv_test_dnn --gtest_filter=*Test_ONNX_layers.MaxPooling/0*
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from Test_ONNX_layers
[ RUN      ] Test_ONNX_layers.MaxPooling/0, where GetParam() = WEBNN/CPU
[ WARN:0] global /home/webml/GSoC2021/opencv/modules/dnn/src/layers/pooling_layer.cpp (266) supportBackend ceilMode is not supported by WebNN backend.
[ WARN:0] global /home/webml/GSoC2021/opencv/modules/dnn/src/dnn.cpp (2460) initWebnnBackend Layer Pooling name 1 is unsupported by WebNN backend.
[       OK ] Test_ONNX_layers.MaxPooling/0 (2 ms)
[----------] 1 test from Test_ONNX_layers (2 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (2 ms total)
[  PASSED  ] 1 test.

Thanks a lot!


MarkGHX commented Jul 30, 2021

@huningxin Hi Ningxin, while dealing with the build issue, I ran into a very strange error. When building opencv in Docs, the whitespace check (which detects trailing spaces) failed. The log is:

modules/dnn/src/dnn.cpp:2479: trailing whitespace.
+ 

However, when I checked Line 2479 in dnn.cpp, I didn't find any trailing space. Do you have any ideas about this error? Thanks a lot!


alalek commented Jul 30, 2021

Please squash commits into one commit. Perhaps git tool or GitHub review tool can't properly process all of them.

@huningxin

Please squash commits into one commit. Perhaps git tool or GitHub review tool can't properly process all of them.

@MarkGHX, please follow the instruction and see how the build goes.


MarkGHX commented Aug 2, 2021

Thanks a lot! I will try this.

@MarkGHX MarkGHX force-pushed the gsoc_2021_webnn branch 2 times, most recently from f5083c8 to 89eed5d Compare August 2, 2021 06:21

MarkGHX commented Aug 4, 2021

@huningxin Hi Ningxin, I have implemented the BatchNorm layer using the WebNN API. You can use the following command to check it:

$ ./bin/opencv_test_dnn --gtest_filter=*Test_ONNX_layers.BatchNormalization*
[==========] Running 16 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 16 tests from Test_ONNX_layers
[ RUN      ] Test_ONNX_layers.BatchNormalization/0, where GetParam() = WEBNN/CPU
[       OK ] Test_ONNX_layers.BatchNormalization/0 (22 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalization/1, where GetParam() = OCV/OCL
[       OK ] Test_ONNX_layers.BatchNormalization/1 (4 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalization/2, where GetParam() = OCV/OCL_FP16
[       OK ] Test_ONNX_layers.BatchNormalization/2 (2 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalization/3, where GetParam() = OCV/CPU
[       OK ] Test_ONNX_layers.BatchNormalization/3 (0 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalization3D/0, where GetParam() = WEBNN/CPU
[       OK ] Test_ONNX_layers.BatchNormalization3D/0 (13 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalization3D/1, where GetParam() = OCV/OCL
[       OK ] Test_ONNX_layers.BatchNormalization3D/1 (1 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalization3D/2, where GetParam() = OCV/OCL_FP16
[       OK ] Test_ONNX_layers.BatchNormalization3D/2 (0 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalization3D/3, where GetParam() = OCV/CPU
[       OK ] Test_ONNX_layers.BatchNormalization3D/3 (1 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalizationUnfused/0, where GetParam() = WEBNN/CPU
[       OK ] Test_ONNX_layers.BatchNormalizationUnfused/0 (8 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalizationUnfused/1, where GetParam() = OCV/OCL
[       OK ] Test_ONNX_layers.BatchNormalizationUnfused/1 (0 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalizationUnfused/2, where GetParam() = OCV/OCL_FP16
[       OK ] Test_ONNX_layers.BatchNormalizationUnfused/2 (1 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalizationUnfused/3, where GetParam() = OCV/CPU
[       OK ] Test_ONNX_layers.BatchNormalizationUnfused/3 (0 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalizationSubgraph/0, where GetParam() = WEBNN/CPU
[       OK ] Test_ONNX_layers.BatchNormalizationSubgraph/0 (6 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalizationSubgraph/1, where GetParam() = OCV/OCL
[       OK ] Test_ONNX_layers.BatchNormalizationSubgraph/1 (1 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalizationSubgraph/2, where GetParam() = OCV/OCL_FP16
[       OK ] Test_ONNX_layers.BatchNormalizationSubgraph/2 (1 ms)
[ RUN      ] Test_ONNX_layers.BatchNormalizationSubgraph/3, where GetParam() = OCV/CPU
[       OK ] Test_ONNX_layers.BatchNormalizationSubgraph/3 (1 ms)
[----------] 16 tests from Test_ONNX_layers (61 ms total)

[----------] Global test environment tear-down
[==========] 16 tests from 1 test case ran. (61 ms total)
[  PASSED  ] 16 tests.

Thanks a lot! Besides, my next target is the Conv2d layer.

#endif // HAVE_DNN_NGRAPH

#ifdef HAVE_WEBNN
ml::Operand BuildConstant(const ml::GraphBuilder& builder,

I would suggest moving BuildConstant to op_webnn.h and op_webnn.cpp, so other layers can share the implementation.


Thanks! I have pushed a new commit to improve this.



template<typename T>
inline std::vector<T> getShape(const Mat& mat)

Should you add a namespace for webnn-related methods, say webnn::getShape? BTW, WebNN defines dimensions as vector<int32_t>, so it would be better to change this method's signature to

std::vector<int32_t> getDimensions(const Mat& mat);


Thanks! I have pushed a new commit to improve this.
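The suggested helper might look like the sketch below; FakeMat is a hypothetical stand-in so the snippet compiles without OpenCV — the real helper would read cv::Mat's dims and size array:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for cv::Mat's shape information (hypothetical).
struct FakeMat {
    std::vector<int> size;
};

namespace webnn {
// Reviewer-suggested signature: WebNN dimensions are int32_t, so convert
// OpenCV's int shape explicitly instead of templating on T.
std::vector<int32_t> getDimensions(const FakeMat& mat)
{
    return std::vector<int32_t>(mat.size.begin(), mat.size.end());
}
}
```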


MarkGHX commented Aug 12, 2021

@huningxin Hi Ningxin, the new code implements the conv2d, constant, concat and fully connected layers using the WebNN backend. Here are the test results:

  • fully connected layer:
$ ./bin/opencv_test_dnn --gtest_filter=*Test_Torch_layers.run_linear*
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from Test_Torch_layers
[ RUN      ] Test_Torch_layers.run_linear/0, where GetParam() = WEBNN/CPU
[       OK ] Test_Torch_layers.run_linear/0 (34 ms)
[ RUN      ] Test_Torch_layers.run_linear/1, where GetParam() = OCV/OCL
[       OK ] Test_Torch_layers.run_linear/1 (4 ms)
[ RUN      ] Test_Torch_layers.run_linear/2, where GetParam() = OCV/OCL_FP16
[     SKIP ] Test with tag 'dnn_skip_ocl_fp16' is skipped ('dnn_skip_ocl_fp16' is in skip list)
[       OK ] Test_Torch_layers.run_linear/2 (1 ms)
[ RUN      ] Test_Torch_layers.run_linear/3, where GetParam() = OCV/CPU
[       OK ] Test_Torch_layers.run_linear/3 (1 ms)
[----------] 4 tests from Test_Torch_layers (40 ms total)

[----------] Global test environment tear-down
[ SKIPSTAT ] 1 tests skipped
[ SKIPSTAT ] TAG='dnn_skip_ocl_fp16' skip 1 tests
[==========] 4 tests from 1 test case ran. (40 ms total)
[  PASSED  ] 4 tests.
  • concat layer:
$ ./bin/opencv_test_dnn --gtest_filter=*Test_ONNX_layers.Concat*
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from Test_ONNX_layers
[ RUN      ] Test_ONNX_layers.Concatenation/0, where GetParam() = WEBNN/CPU
[       OK ] Test_ONNX_layers.Concatenation/0 (28 ms)
[ RUN      ] Test_ONNX_layers.Concatenation/1, where GetParam() = OCV/OCL
OpenCV(ocl4dnn): consider to specify kernel configuration cache directory 
                 via OPENCV_OCL4DNN_CONFIG_PATH parameter.
OpenCL program build log: dnn/dummy
Status -11: CL_BUILD_PROGRAM_FAILURE
-cl-no-subgroup-ifp
Error in processing command line: Don't understand command line argument "-cl-no-subgroup-ifp"!
[       OK ] Test_ONNX_layers.Concatenation/1 (7 ms)
[ RUN      ] Test_ONNX_layers.Concatenation/2, where GetParam() = OCV/OCL_FP16
[       OK ] Test_ONNX_layers.Concatenation/2 (2 ms)
[ RUN      ] Test_ONNX_layers.Concatenation/3, where GetParam() = OCV/CPU
[       OK ] Test_ONNX_layers.Concatenation/3 (2 ms)
[----------] 4 tests from Test_ONNX_layers (39 ms total)

[----------] Global test environment tear-down
[==========] 4 tests from 1 test case ran. (39 ms total)
[  PASSED  ] 4 tests.
  • conv2d layer:
$ ./bin/opencv_test_dnn --gtest_filter=*Test_ONNX_layers.Convolution/0*
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from Test_ONNX_layers
[ RUN      ] Test_ONNX_layers.Convolution/0, where GetParam() = WEBNN/CPU
[       OK ] Test_ONNX_layers.Convolution/0 (26 ms)
[----------] 1 test from Test_ONNX_layers (26 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (26 ms total)
[  PASSED  ] 1 test.

Existing problems:
Currently we only support the conv2d layer with group = 1. This is because we cannot get the input node's shape, and the OpenCV dnn module cannot provide the group parameter either, so group can neither be obtained directly nor calculated. I will try to find another way to get this information.
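One possible workaround is sketched below: for a grouped convolution the weight tensor has shape [outCh, inCh/group, kH, kW], so if the input channel count were available, group could be recovered by division. This is a hypothetical helper, not part of the PR:

```cpp
#include <cassert>

// Infer the conv group from the input channel count and the weight
// tensor's per-group input channels: group = inputChannels / weightInputChannels.
// Returns -1 when the shapes are inconsistent and group cannot be inferred.
int inferConvGroup(int inputChannels, int weightInputChannels)
{
    if (weightInputChannels <= 0 || inputChannels % weightInputChannels != 0)
        return -1;
    return inputChannels / weightInputChannels;
}
```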


What's the reason for adding this tryQuantize to the batch norm layer and others?


Hi Ningxin, this tryQuantize function seems to be a newly added function on the main branch.


When I uploaded my new code, GitHub reminded me that there were some conflicts, since the main branch had added this tryQuantize function but my PR hadn't. So, to resolve the possible conflicts, I added this function to my PR.


this comes from upstream merged PR #20228 (some experimental whole network int8 quantization...)


Hi Ningxin, I have rebased my code on the current upstream master repo. This should be fixed now.


ditto


ditto


#endif

virtual bool tryQuantize(const std::vector<std::vector<float> > &scales,

ditto


MarkGHX commented Aug 23, 2021

@huningxin Hi Ningxin, since the bot checks failed, I reviewed the build log provided by the bot, but the errors are quite strange. To find the cause of these errors, I downloaded the latest opencv master branch and ran the tests. It seems that the build errors are caused by the latest master branch, not my code.


alalek commented Aug 23, 2021

building errors are caused by the latest master branch not my codes.

Check the nightly builds. There are several problems, but they are different.
You need to revise the "Merge commit" contents.


MarkGHX commented Aug 23, 2021

@alalek Thanks a lot, but I didn't get your point after reviewing the nightly build logs. Sorry. :sweat_smile: Do you mean that some places in my code caused the failed build?


MarkGHX commented Aug 23, 2021

@alalek I have checked the nightly builds carefully and finally found the error. Thanks! I will try to fix this.

@alalek alalek left a comment

Thank you for the contribution!

Comment on lines 4979 to 4981
// blob_.copyTo(impl->netInputLayer->inputsData[pin.oid]);
impl->netInputLayer->inputsData.emplace(impl->netInputLayer->inputsData.begin()+pin.oid, blob_);
impl->netInputLayer->inputsData.erase(impl->netInputLayer->inputsData.begin()+pin.oid+1);

Why do we need this change?

@MarkGHX MarkGHX Aug 26, 2021

It seems this was a test version. I have fixed it. Thanks!


This has been fixed now. Some building errors are also fixed. Thanks!

Comment on lines 354 to 358
if (ksize != 2)
{
CV_LOG_WARNING(NULL, "WebNN only supports Conv2d.");
}
return ksize == 2;

Avoid code duplication in conditions.
Use return false; after warning and return true; below.
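Applied to the snippet under review, the suggested shape would look like this sketch, where the warning is modeled as a flag so it runs standalone:

```cpp
#include <cassert>

static bool warned = false;  // stands in for CV_LOG_WARNING in this sketch

// The condition appears exactly once: the guard warns and returns false,
// and the tail returns true, avoiding the duplicated `ksize == 2` check.
bool webnnSupportsConv(int ksize)
{
    if (ksize != 2)
    {
        warned = true;  // CV_LOG_WARNING(NULL, "WebNN only supports Conv2d.");
        return false;
    }
    return true;
}
```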


This has been fixed with the new commit.

Comment on lines 62 to 64
/* Webnn support */
#cmakedefine HAVE_WEBNN


Do we really need this for the whole OpenCV library?
Limit it to the DNN module only.


Thanks for your comment. I have removed this.


?


Sorry for that, it is removed now. 😁


Hi @alalek, since I removed #cmakedefine HAVE_WEBNN from cvconfig.h.in, the HAVE_WEBNN macro no longer works. Besides, I didn't find a proper .in file for the dnn module only where I could place this macro. Could you please give some suggestions? Thanks!


Thanks a lot! This has been fixed now.

endif()

set(EMSCRIPTEN_LINK_FLAGS "${EMSCRIPTEN_LINK_FLAGS} --memory-init-file 0 -s TOTAL_MEMORY=128MB -s WASM_MEM_MAX=1GB -s ALLOW_MEMORY_GROWTH=1")
set(EMSCRIPTEN_LINK_FLAGS "${EMSCRIPTEN_LINK_FLAGS} -s USE_WEBNN=1 --memory-init-file 0 -s TOTAL_MEMORY=128MB -s WASM_MEM_MAX=1GB -s ALLOW_MEMORY_GROWTH=1")

-s USE_WEBNN=1

This should be a configurable build option.


Thanks! I have added this option to build_js.py.

Comment on lines 63 to 66
struct Pool2dOptions {
public:
std::vector<int32_t> windowDimensions;
std::vector<int32_t> padding;

broken indentation


Thanks. This is fixed in the new commit.

if(WITH_WEBNN)
set(WEBNN_HEADER_DIRS "$ENV{WEBNN_NATIVE_DIR}/gen/src/include")
set(WEBNN_INCLUDE_DIRS "$ENV{WEBNN_NATIVE_DIR}/../../src/include")
set(WEBNN_LIBRARIES "$ENV{WEBNN_NATIVE_DIR}/libwebnn_native.so;$ENV{WEBNN_NATIVE_DIR}/libwebnn_proc.so")

$ENV{WEBNN_NATIVE_DIR}

It is better to use CMake variable instead (you can still initialize this from environment, see usage of ocv_check_environment_variables).


Thanks for your comment. I have fixed it.

message(WARNING "Can't use WebNN-native")
return()
endif()
message(AUTHOR_WARNING "Use WebNN-native")

native

What about the Emscripten case?


Thanks for your suggestion. More logs have been added.

@@ -0,0 +1,55 @@
// Modules to control application life and create native browser window

main.js

Is this file for Electron usage only?


Yes, I have moved such files into a sub-directory called webnn-electron.

{
"name": "image_classification",
"version": "0.0.1",
"description": "An Electon.js example of image_classification using webnn-native",

It makes sense to move Electron-specific stuff into sub-directory to avoid confusion.


Thanks! It is moved to webnn-electron folder.

Comment on lines 138 to 141
if(HAVE_WEBNN)
list(APPEND include_dirs ${WEBNN_HEADER_DIRS})
list(APPEND include_dirs ${WEBNN_INCLUDE_DIRS})
list(APPEND libs -Wl,--whole-archive ${WEBNN_LIBRARIES} -Wl,--no-whole-archive)

Don't use tabs.
Indentation in CMake scripts is 2 spaces.


Fixed now.

Hanxi Guo added 4 commits August 26, 2021 16:38
Update dnn.cpp

Update dnn.cpp

Update dnn.cpp

Update dnn.cpp

Add WebNN head files into OpenCV 3rd partiy files

Create webnn.hpp

update cmake

Complete README and add OpenCVDetectWebNN.cmake file

add webnn.cpp

Modify webnn.cpp

Can successfully compile the codes for creating a MLContext

Update webnn.cpp

Update README.md

Update README.md

Update README.md

Update README.md

Update cmake files and

update README.md

Update OpenCVDetectWebNN.cmake and README.md

Update OpenCVDetectWebNN.cmake

Fix OpenCVDetectWebNN.cmake and update README.md

Add source webnn_cpp.cpp and libary libwebnn_proc.so

Update dnn.cpp

Update dnn.cpp

Update dnn.cpp

Update dnn.cpp

update dnn.cpp

update op_webnn

update op_webnn

Update op_webnn.hpp

update op_webnn.cpp & hpp

Update op_webnn.hpp

Update op_webnn

update the skeleton

Update op_webnn.cpp

Update op_webnn

Update op_webnn.cpp

Update op_webnn.cpp

Update op_webnn.hpp

update op_webnn

update op_webnn

Solved the problems of released variables.

Fixed the bugs in op_webnn.cpp

Implement op_webnn

Implement Relu by WebNN API

Update dnn.cpp for better test

Update elementwise_layers.cpp

Implement ReLU6

Update elementwise_layers.cpp

Implement SoftMax using WebNN API

Implement Reshape by WebNN API

Implement PermuteLayer by WebNN API

Implement PoolingLayer using WebNN API

Update pooling_layer.cpp

Update pooling_layer.cpp

Update pooling_layer.cpp

Update pooling_layer.cpp

Update pooling_layer.cpp

Update pooling_layer.cpp

Implement poolingLayer by WebNN API and add more detailed logs

Update dnn.cpp

Update dnn.cpp

Remove redundant codes and add more logs for poolingLayer

Add more logs in the pooling layer implementation

Fix the indent issue and resolve the compiling issue

Fix the build problems

Fix the build issue

Fix the build issue

Update dnn.cpp

Update dnn.cpp
This is a temporary file for Conv2d layer implementation
@alalek
Member

alalek commented Nov 12, 2021

Conflicting files
modules/dnn/src/layers/elementwise_layers.cpp

Please merge changes for upstream repository (to resolve conflicts):

# https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/configuring-a-remote-for-a-fork
git fetch upstream
git merge upstream/4.x

@MarkGHX
Contributor Author

MarkGHX commented Nov 12, 2021

Hi @alalek, Thanks for your comments. I will work on that.

@MarkGHX
Contributor Author

MarkGHX commented Nov 15, 2021

Hi @alalek, the conflicts have been resolved. Please have a look. Thanks!

@MarkGHX
Contributor Author

MarkGHX commented Nov 17, 2021

Hi @alalek, did you test the OpenCV dnn module with the OpenVINO backend using the latest 4.x version? In my testing, both the current official 4.x version and the latest version in this PR fail to run the dnn module with the OpenVINO backend. The error is: symbol lookup error: ./example_dnn_classification: undefined symbol: _ZN2cv3dnn14dnn4_v202110043Net20setPreferableBackendEi. However, the version before the 4.x merge in this PR runs successfully with the OpenVINO backend. I don't know whether this failure is caused by changes in the latest OpenCV 4.x code. Do you have any ideas about this? Thanks!

Comment on lines 167 to 171
// Put efficiency information.
std::vector<double> layersTimes;
double freq = getTickFrequency() / 1000;
t = net.getPerfProfile(layersTimes) / freq;
t_sum += t;
Member

It is better to use external TickMeter instead of .getPerfProfile()

Contributor Author

Fixed.

//! [Get a class with a highest score]
Point classIdPoint;
double t_sum = 0.0;
double t;
Member

double t;

Prefer to declare variables near their usage. It significantly increases code readability.

Contributor Author

Fixed.

#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include "sys/time.h"
Member

Probably not needed

Contributor Author

Fixed.


### Test native DNN_BACKEND_WEBNN backend
Add -DWITH_WEBNN=ON to the cmake command to build with the WebNN backend, for example:
`cmake -DWITH_WEBNN=ON ../opencv` (according to the [Installation in Linux](https://docs.opencv.org/master/d7/d9f/tutorial_linux_install.html))
Member

https://docs.opencv.org/master

Direct URL links on docs.opencv.org are not allowed.
Use doxygen references instead.

Contributor Author

@MarkGHX MarkGHX Nov 19, 2021

Hi @alalek, I changed this link but I'm not sure the new link is as your expectation, so please have a look. Thanks!

Member

Please use @ref tutorial_linux_install

Contributor Author

Thanks! I have modified this in the new commit.

@alalek
Member

alalek commented Nov 17, 2021

undefined symbol: _ZN2cv3dnn14dnn4_v202110043Net20setPreferableBackendEi

dnn4_v20211004

This version is increased on Oct 4. Perhaps you have outdated binaries somewhere.


Please take a look at the whitespace and compilation issues on public CI: https://pullrequest.opencv.org/#/summary/

@MarkGHX
Contributor Author

MarkGHX commented Nov 18, 2021

Hi @alalek, thanks for your comments. I will fix these problems.

@MarkGHX
Contributor Author

MarkGHX commented Nov 18, 2021

undefined symbol: _ZN2cv3dnn14dnn4_v202110043Net20setPreferableBackendEi

dnn4_v20211004

This version is increased on Oct 4. Perhaps you have outdated binaries somewhere.

Hi @alalek, your idea is right. However, after trying several of the latest released OpenVINO versions, such as 2021.4.1 and 2021.4.2, the OpenCV dnn module with OpenVINO still cannot work. It seems that the version v202110043 used now has not been released by OpenVINO.

@alalek
Copy link
Member

alalek commented Nov 18, 2021

Right, there are no binary releases of the latest OpenCV versions with OpenVINO (these packages use the 4.5.3 variant).
You need to build OpenCV from source code adding/enabling OpenVINO (InferenceEngine): https://github.com/opencv/opencv/wiki/Intel%27s-Deep-Learning-Inference-Engine-backend

@alalek
Copy link
Member

alalek commented Nov 18, 2021

There are still red builds which block the PR merge.
Please take a look at the whitespace and compilation issues on public CI: https://pullrequest.opencv.org/#/summary/

@MarkGHX
Contributor Author

MarkGHX commented Nov 19, 2021

Hi @alalek, The CI problems have been fixed now. Thanks for your comments!


maxProb = *std::max_element(prob.begin<float>(), prob.end<float>());
cv::exp(prob-maxProb, softmaxProb);
sum = cv::sum(softmaxProb)[0];
Member

@alalek alalek Nov 19, 2021

sum = (float)cv::sum(softmaxProb)[0];

to resolve MSVC warning

Contributor Author

Fixed.

Member

@alalek alalek left a comment

Great job! Thank you for contribution 👍

@alalek alalek merged commit 1fcf7ba into opencv:4.x Nov 23, 2021
@alalek alalek mentioned this pull request Dec 30, 2021
@alalek alalek mentioned this pull request Feb 22, 2022
a-sajjad72 pushed a commit to a-sajjad72/opencv that referenced this pull request Mar 30, 2023
[GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN

* Add WebNN backend for OpenCV DNN Module

Update dnn.cpp

Update dnn.cpp

Update dnn.cpp

Update dnn.cpp

Add WebNN header files into OpenCV 3rd party files

Create webnn.hpp

update cmake

Complete README and add OpenCVDetectWebNN.cmake file

add webnn.cpp

Modify webnn.cpp

Can successfully compile the codes for creating a MLContext

Update webnn.cpp

Update README.md

Update README.md

Update README.md

Update README.md

Update cmake files and

update README.md

Update OpenCVDetectWebNN.cmake and README.md

Update OpenCVDetectWebNN.cmake

Fix OpenCVDetectWebNN.cmake and update README.md

Add source webnn_cpp.cpp and library libwebnn_proc.so

Update dnn.cpp

Update dnn.cpp

Update dnn.cpp

Update dnn.cpp

update dnn.cpp

update op_webnn

update op_webnn

Update op_webnn.hpp

update op_webnn.cpp & hpp

Update op_webnn.hpp

Update op_webnn

update the skeleton

Update op_webnn.cpp

Update op_webnn

Update op_webnn.cpp

Update op_webnn.cpp

Update op_webnn.hpp

update op_webnn

update op_webnn

Solved the problems of released variables.

Fixed the bugs in op_webnn.cpp

Implement op_webnn

Implement Relu by WebNN API

Update dnn.cpp for better test

Update elementwise_layers.cpp

Implement ReLU6

Update elementwise_layers.cpp

Implement SoftMax using WebNN API

Implement Reshape by WebNN API

Implement PermuteLayer by WebNN API

Implement PoolingLayer using WebNN API

Update pooling_layer.cpp

Update pooling_layer.cpp

Update pooling_layer.cpp

Update pooling_layer.cpp

Update pooling_layer.cpp

Update pooling_layer.cpp

Implement poolingLayer by WebNN API and add more detailed logs

Update dnn.cpp

Update dnn.cpp

Remove redundant codes and add more logs for poolingLayer

Add more logs in the pooling layer implementation

Fix the indent issue and resolve the compiling issue

Fix the build problems

Fix the build issue

Fix the build issue

Update dnn.cpp

Update dnn.cpp

* Fix the build issue

* Implement BatchNorm Layer by WebNN API

* Update convolution_layer.cpp

This is a temporary file for Conv2d layer implementation

* Integrate some general functions into op_webnn.cpp&hpp

* Update const_layer.cpp

* Update convolution_layer.cpp

Still have some bugs that should be fixed.

* Update conv2d layer and fc layer

still have some problems to be fixed.

* update constLayer, conv layer, fc layer

There are still some bugs to be fixed.

* Fix the build issue

* Update concat_layer.cpp

Still have some bugs to be fixed.

* Update conv2d layer, fully connected layer and const layer

* Update convolution_layer.cpp

* Add OpenCV.js DNN module WebNN Backend (both using webnn-polyfill and electron)

* Delete bib19450.aux

* Update dnn.cpp

* Fix Error in dnn.cpp

* Resolve duplication in conditions in convolution_layer.cpp

* Fixed the issues in the comments

* Fix building issue

* Update tutorial

* Fixed comments

* Address the comments

* Update CMakeLists.txt

* Offer more accurate perf test on native

* Add better perf tests for both native and web

* Modify perf tests for better results

* Use more latest version of Electron

* Support latest WebNN Clamp op

* Add definition of HAVE_WEBNN macro

* Support group convolution

* Implement Scale_layer using WebNN

* Add Softmax option for native classification example

* Fix comments

* Fix comments
