[GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN #20406
alalek merged 48 commits into opencv:4.x from
Conversation
|
@huningxin Please provide feedback.
|
@asmorkalov, thanks for the reminder. @MarkGHX and I have a weekly meeting to discuss this GSoC project, where I provide my feedback to him directly. This PR is WIP. However, I think it is a good idea to start logging my feedback in this PR. I'll do that. @MarkGHX, please help fix the build issue reported by the buildbots. Thanks.
|
Thanks a lot! I will work on the build issue.
modules/dnn/src/dnn.cpp
Outdated
for (it = layers.begin(); it != layers.end(); ++it)
{
    LayerData &ld = it->second;
    // std::cout<<"Layer Name: "<<ld.name<<"Layer Type: "<<ld.type<<std::endl;
Please remove the commented code.
modules/dnn/src/dnn.cpp
Outdated
{
    Ptr<WebnnBackendWrapper> wrapper = ld.outputBlobsWrappers[i].dynamicCast<WebnnBackendWrapper>();
    std::string outputName = ld.outputBlobsWrappers.size() > 1 ? (ld.name + "." + std::to_string(i)) : ld.name;
    // std::cout<<"outputName at 2437: "<<outputName<<std::endl;
modules/dnn/src/dnn.cpp
Outdated
for (int i = 0; i < ld.outputBlobsWrappers.size(); ++i)
{
    Ptr<WebnnBackendWrapper> wrapper = ld.outputBlobsWrappers[i].dynamicCast<WebnnBackendWrapper>();
    // std::cout << "wrapper->name: " << wrapper->name << std::endl;
modules/dnn/src/dnn.cpp
Outdated
}
for (const auto& pin : blobsToKeep_)
{
    // std::cout << "pin.lid: " << pin.lid << " ld.id:" << ld.id << std::endl;
    CV_LOG_WARNING(NULL, "Mask is not supported by WebNN backend.");
    return false;
}
return type == MAX || type == AVE;
Probably you can log that the other types of pooling are not supported by the WebNN backend.
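The pattern the reviewer suggests can be sketched as a self-contained check where every unsupported case logs its reason before returning false. This is a hedged illustration, not the actual OpenCV code: the pooling-type enum and the `std::cerr` print are stand-ins for the real layer constants and `CV_LOG_WARNING`.

```cpp
#include <iostream>

// Stub pooling-type constants standing in for the OpenCV enum values.
enum PoolType { MAX, AVE, STOCHASTIC };

// Sketch of a supportBackend-style check: every unsupported case logs
// its reason and returns false, so fallback decisions are easy to diagnose.
bool supportWebnnPooling(PoolType type, bool hasMask, bool ceilMode)
{
    if (hasMask)
    {
        std::cerr << "Mask is not supported by WebNN backend.\n";
        return false;
    }
    if (ceilMode)
    {
        std::cerr << "ceilMode is not supported by WebNN backend.\n";
        return false;
    }
    if (type != MAX && type != AVE)
    {
        std::cerr << "Only MAX and AVE pooling are supported by WebNN backend.\n";
        return false;
    }
    return true;
}
```

Logging each rejection separately makes the gtest warnings (like the `ceilMode` one in the log below) point at the exact unsupported attribute.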
modules/dnn/src/op_webnn.cpp
Outdated
}

void WebnnNet::setUnconnectedNodes(Ptr<WebnnBackendNode>& node) {
    // std::cout<<"outputNames in setUnconnectedNodes:"<<node->name<<std::endl;
modules/dnn/src/op_webnn.cpp
Outdated
{
    std::string name = wrapper->name;
    name = name.empty() ? kDefaultInpLayerName : name;
    std::cout << "addBlobs: " << name << std::endl;
Turn this into a logger or remove it.
modules/dnn/src/op_webnn.cpp
Outdated
input.size = wrapper->size;
input.resource.buffer = wrapper->host->data;
input.resource.byteLength = wrapper->size;
// std::cout<<"in size:"<<input.size<<std::endl;
Remove the commented code.
modules/dnn/src/op_webnn.cpp
Outdated
output.size = outs[i]->size;
// std::cout<<"host_shape: ";
// for (int d = 0; d < outs[i]->host->dims; d++)
//     std::cout<<outs[i]->host->size[d]<<" ";
modules/dnn/src/op_webnn.cpp
Outdated
// for (int d = 0; d < outs[i]->host->dims; d++)
//     std::cout<<outs[i]->host->size[d]<<" ";
output.byteLength = outs[i]->size;
// std::cout<<"out size:"<<output.byteLength<<std::endl;
|
@huningxin Hi Ningxin, the newly updated code includes the implementation of the pooling layer using the WebNN API and some detailed logs. You can use the following command to check it:
$ ./bin/opencv_test_dnn --gtest_filter=*Test_ONNX_layers.MaxPooling/0*
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from Test_ONNX_layers
[ RUN ] Test_ONNX_layers.MaxPooling/0, where GetParam() = WEBNN/CPU
[ WARN:0] global /home/webml/GSoC2021/opencv/modules/dnn/src/layers/pooling_layer.cpp (266) supportBackend ceilMode is not supported by WebNN backend.
[ WARN:0] global /home/webml/GSoC2021/opencv/modules/dnn/src/dnn.cpp (2460) initWebnnBackend Layer Pooling name 1 is unsupported by WebNN backend.
[ OK ] Test_ONNX_layers.MaxPooling/0 (2 ms)
[----------] 1 test from Test_ONNX_layers (2 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (2 ms total)
[ PASSED ] 1 test.

Thanks a lot!
|
@huningxin Hi Ningxin, when dealing with the building issue, I met a very strange error when building OpenCV docs. However, when I checked line 2479 in dnn.cpp, I didn't find any trailing space. Do you have any idea about this error? Thanks a lot!
|
Please squash the commits into one commit. Perhaps the git tool or the GitHub review tool can't properly process all of them.
@MarkGHX, please follow the instruction and see how the build goes.
|
Thanks a lot! I will try this.
f5083c8 to 89eed5d
|
@huningxin Hi Ningxin, I have implemented the BatchNorm layer using the WebNN API. You can use the following command to check it:
$ ./bin/opencv_test_dnn --gtest_filter=*Test_ONNX_layers.BatchNormalization*
[==========] Running 16 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 16 tests from Test_ONNX_layers
[ RUN ] Test_ONNX_layers.BatchNormalization/0, where GetParam() = WEBNN/CPU
[ OK ] Test_ONNX_layers.BatchNormalization/0 (22 ms)
[ RUN ] Test_ONNX_layers.BatchNormalization/1, where GetParam() = OCV/OCL
[ OK ] Test_ONNX_layers.BatchNormalization/1 (4 ms)
[ RUN ] Test_ONNX_layers.BatchNormalization/2, where GetParam() = OCV/OCL_FP16
[ OK ] Test_ONNX_layers.BatchNormalization/2 (2 ms)
[ RUN ] Test_ONNX_layers.BatchNormalization/3, where GetParam() = OCV/CPU
[ OK ] Test_ONNX_layers.BatchNormalization/3 (0 ms)
[ RUN ] Test_ONNX_layers.BatchNormalization3D/0, where GetParam() = WEBNN/CPU
[ OK ] Test_ONNX_layers.BatchNormalization3D/0 (13 ms)
[ RUN ] Test_ONNX_layers.BatchNormalization3D/1, where GetParam() = OCV/OCL
[ OK ] Test_ONNX_layers.BatchNormalization3D/1 (1 ms)
[ RUN ] Test_ONNX_layers.BatchNormalization3D/2, where GetParam() = OCV/OCL_FP16
[ OK ] Test_ONNX_layers.BatchNormalization3D/2 (0 ms)
[ RUN ] Test_ONNX_layers.BatchNormalization3D/3, where GetParam() = OCV/CPU
[ OK ] Test_ONNX_layers.BatchNormalization3D/3 (1 ms)
[ RUN ] Test_ONNX_layers.BatchNormalizationUnfused/0, where GetParam() = WEBNN/CPU
[ OK ] Test_ONNX_layers.BatchNormalizationUnfused/0 (8 ms)
[ RUN ] Test_ONNX_layers.BatchNormalizationUnfused/1, where GetParam() = OCV/OCL
[ OK ] Test_ONNX_layers.BatchNormalizationUnfused/1 (0 ms)
[ RUN ] Test_ONNX_layers.BatchNormalizationUnfused/2, where GetParam() = OCV/OCL_FP16
[ OK ] Test_ONNX_layers.BatchNormalizationUnfused/2 (1 ms)
[ RUN ] Test_ONNX_layers.BatchNormalizationUnfused/3, where GetParam() = OCV/CPU
[ OK ] Test_ONNX_layers.BatchNormalizationUnfused/3 (0 ms)
[ RUN ] Test_ONNX_layers.BatchNormalizationSubgraph/0, where GetParam() = WEBNN/CPU
[ OK ] Test_ONNX_layers.BatchNormalizationSubgraph/0 (6 ms)
[ RUN ] Test_ONNX_layers.BatchNormalizationSubgraph/1, where GetParam() = OCV/OCL
[ OK ] Test_ONNX_layers.BatchNormalizationSubgraph/1 (1 ms)
[ RUN ] Test_ONNX_layers.BatchNormalizationSubgraph/2, where GetParam() = OCV/OCL_FP16
[ OK ] Test_ONNX_layers.BatchNormalizationSubgraph/2 (1 ms)
[ RUN ] Test_ONNX_layers.BatchNormalizationSubgraph/3, where GetParam() = OCV/CPU
[ OK ] Test_ONNX_layers.BatchNormalizationSubgraph/3 (1 ms)
[----------] 16 tests from Test_ONNX_layers (61 ms total)
[----------] Global test environment tear-down
[==========] 16 tests from 1 test case ran. (61 ms total)
[ PASSED ] 16 tests.

Thanks a lot! Besides, my next target will be the Conv2d layer.
#endif // HAVE_DNN_NGRAPH

#ifdef HAVE_WEBNN
ml::Operand BuildConstant(const ml::GraphBuilder& builder,
I would suggest moving BuildConstant to op_webnn.hpp and op_webnn.cpp, so other layers could share the implementation.
Thanks! I have pushed a new commit to improve this.
modules/dnn/src/op_webnn.hpp
Outdated
template<typename T>
inline std::vector<T> getShape(const Mat& mat)
Should you add a namespace for WebNN-related methods, say webnn::getShape? BTW, WebNN defines dimensions as vector<int32_t>, so it would be better to change this method signature to:
std::vector<int32_t> getDimensions(const Mat& mat);
Thanks! I have pushed a new commit to improve this.
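The suggested signature change amounts to converting OpenCV's int-based shape into the int32_t dimension vector WebNN expects. A minimal self-contained sketch, assuming the shape is available as a `std::vector<int>` (standing in for reading `mat.size[i]` over `mat.dims`, since `cv::Mat` is not available here):

```cpp
#include <cstdint>
#include <vector>

// Sketch: convert an int-based shape (as OpenCV stores Mat dimensions)
// into the std::vector<int32_t> dimension format WebNN expects.
// The std::vector<int> input stands in for iterating mat.size over mat.dims.
std::vector<int32_t> getDimensions(const std::vector<int>& shape)
{
    std::vector<int32_t> dims;
    dims.reserve(shape.size());
    for (int d : shape)
        dims.push_back(static_cast<int32_t>(d));
    return dims;
}
```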
|
@huningxin Hi Ningxin, the new code implements the Conv2d layer, Constant layer, Concat layer, and Fully Connected layer using the WebNN backend. Here are the test results:
$ ./bin/opencv_test_dnn --gtest_filter=*Test_Torch_layers.run_linear*
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from Test_Torch_layers
[ RUN ] Test_Torch_layers.run_linear/0, where GetParam() = WEBNN/CPU
[ OK ] Test_Torch_layers.run_linear/0 (34 ms)
[ RUN ] Test_Torch_layers.run_linear/1, where GetParam() = OCV/OCL
[ OK ] Test_Torch_layers.run_linear/1 (4 ms)
[ RUN ] Test_Torch_layers.run_linear/2, where GetParam() = OCV/OCL_FP16
[ SKIP ] Test with tag 'dnn_skip_ocl_fp16' is skipped ('dnn_skip_ocl_fp16' is in skip list)
[ OK ] Test_Torch_layers.run_linear/2 (1 ms)
[ RUN ] Test_Torch_layers.run_linear/3, where GetParam() = OCV/CPU
[ OK ] Test_Torch_layers.run_linear/3 (1 ms)
[----------] 4 tests from Test_Torch_layers (40 ms total)
[----------] Global test environment tear-down
[ SKIPSTAT ] 1 tests skipped
[ SKIPSTAT ] TAG='dnn_skip_ocl_fp16' skip 1 tests
[==========] 4 tests from 1 test case ran. (40 ms total)
[ PASSED ] 4 tests.
$ ./bin/opencv_test_dnn --gtest_filter=*Test_ONNX_layers.Concat*
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from Test_ONNX_layers
[ RUN ] Test_ONNX_layers.Concatenation/0, where GetParam() = WEBNN/CPU
[ OK ] Test_ONNX_layers.Concatenation/0 (28 ms)
[ RUN ] Test_ONNX_layers.Concatenation/1, where GetParam() = OCV/OCL
OpenCV(ocl4dnn): consider to specify kernel configuration cache directory
via OPENCV_OCL4DNN_CONFIG_PATH parameter.
OpenCL program build log: dnn/dummy
Status -11: CL_BUILD_PROGRAM_FAILURE
-cl-no-subgroup-ifp
Error in processing command line: Don't understand command line argument "-cl-no-subgroup-ifp"!
[ OK ] Test_ONNX_layers.Concatenation/1 (7 ms)
[ RUN ] Test_ONNX_layers.Concatenation/2, where GetParam() = OCV/OCL_FP16
[ OK ] Test_ONNX_layers.Concatenation/2 (2 ms)
[ RUN ] Test_ONNX_layers.Concatenation/3, where GetParam() = OCV/CPU
[ OK ] Test_ONNX_layers.Concatenation/3 (2 ms)
[----------] 4 tests from Test_ONNX_layers (39 ms total)
[----------] Global test environment tear-down
[==========] 4 tests from 1 test case ran. (39 ms total)
[ PASSED ] 4 tests.
$ ./bin/opencv_test_dnn --gtest_filter=*Test_ONNX_layers.Convolution/0*
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from Test_ONNX_layers
[ RUN ] Test_ONNX_layers.Convolution/0, where GetParam() = WEBNN/CPU
[ OK ] Test_ONNX_layers.Convolution/0 (26 ms)
[----------] 1 test from Test_ONNX_layers (26 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (26 ms total)
[ PASSED ] 1 test.

Existing problems:
What's the reason to add this tryQuantize for the batch norm layer and others?

Hi Ningxin, this tryQuantize function seems to be a newly added function in the main branch. When I uploaded my new code, GitHub reminded me that there are some conflicts, since the main branch added this tryQuantize function but my PR didn't. Thus, in order to resolve the possible conflicts, I added this function to my PR.

This comes from the upstream merged PR #20228 (some experimental whole-network int8 quantization...)

Hi Ningxin, I have rebased my code on the current upstream master repo. This should be fixed now.
#endif

virtual bool tryQuantize(const std::vector<std::vector<float> > &scales,
|
@huningxin Hi Ningxin, since the bot checks failed, I reviewed the build log provided by the bot, but the errors are quite strange. To find the reason for these errors, I downloaded the latest
Check nightly builds. There are several problems, but they are different.
|
@alalek Thanks a lot, but I didn't get your point after reviewing the nightly build logs. Sorry. 😅 Do you mean that some places in my code caused the failed build?
|
@alalek I have checked the nightly builds carefully and finally found the error. Thanks! I will try to fix it.
modules/dnn/src/dnn.cpp
Outdated
// blob_.copyTo(impl->netInputLayer->inputsData[pin.oid]);
impl->netInputLayer->inputsData.emplace(impl->netInputLayer->inputsData.begin()+pin.oid, blob_);
impl->netInputLayer->inputsData.erase(impl->netInputLayer->inputsData.begin()+pin.oid+1);
It seems that this is a test version. I have fixed this. Thanks!

This has been fixed now. Some building errors are also fixed. Thanks!
if (ksize != 2)
{
    CV_LOG_WARNING(NULL, "WebNN only supports Conv2d.");
}
return ksize == 2;
Avoid code duplication in conditions. Use return false; after the warning and return true; below.

This has been fixed with the new commit.
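The reviewer's restructuring can be sketched as follows; this is a hedged, self-contained illustration with the warning macro replaced by a plain `std::cerr` print, and `ksize` taken as an ordinary int parameter as in the quoted check:

```cpp
#include <iostream>

// Sketch of the suggested pattern: the ksize condition is evaluated once,
// the unsupported case logs and returns false, and the success path simply
// returns true, avoiding the duplicated `ksize != 2` / `ksize == 2` checks.
bool supportWebnnConv(int ksize)
{
    if (ksize != 2)
    {
        std::cerr << "WebNN only supports Conv2d.\n";
        return false;
    }
    return true;
}
```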
cmake/templates/cvconfig.h.in
Outdated
/* Webnn support */
#cmakedefine HAVE_WEBNN
Do we really need that for the whole OpenCV library? Limit this to the DNN module only.

Thanks for your comment. I have removed this.

Sorry for that, it is removed now. 😁

Hi @alalek, since I removed the #cmakedefine HAVE_WEBNN from the cvconfig.h.in file, the HAVE_WEBNN macro does not work well. Besides, I didn't find a proper .in file for the dnn module only to place this macro. Could you please give some suggestions? Thanks!

There are no module-specific .in files for now. Please add the definition through CMake: https://github.com/opencv/opencv/blob/4.5.4/modules/dnn/CMakeLists.txt#L21-L23

Thanks a lot! This has been fixed now.
modules/js/CMakeLists.txt
Outdated
endif()

set(EMSCRIPTEN_LINK_FLAGS "${EMSCRIPTEN_LINK_FLAGS} --memory-init-file 0 -s TOTAL_MEMORY=128MB -s WASM_MEM_MAX=1GB -s ALLOW_MEMORY_GROWTH=1")
set(EMSCRIPTEN_LINK_FLAGS "${EMSCRIPTEN_LINK_FLAGS} -s USE_WEBNN=1 --memory-init-file 0 -s TOTAL_MEMORY=128MB -s WASM_MEM_MAX=1GB -s ALLOW_MEMORY_GROWTH=1")
-s USE_WEBNN=1
This should be a configurable build option.

Thanks! I have added this option to build_js.py.
struct Pool2dOptions {
public:
    std::vector<int32_t> windowDimensions;
    std::vector<int32_t> padding;
Thanks. This is fixed in the new commit.
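As an illustration of how such an options struct carries pooling attributes as one unit, here is a minimal self-contained sketch. It reproduces only the two fields shown in the quoted hunk; the helper function and its values are hypothetical examples, not the actual OpenCV code:

```cpp
#include <cstdint>
#include <vector>

// Minimal stand-in for the quoted Pool2dOptions struct: it aggregates
// pooling attributes so they can be passed to the graph builder together.
struct Pool2dOptions {
    std::vector<int32_t> windowDimensions; // pooling window, e.g. {2, 2}
    std::vector<int32_t> padding;          // begin/end padding per spatial axis
};

// Hypothetical example of filling the options from layer parameters
// before graph building (a 2x2 max-pool with no padding).
Pool2dOptions makeMaxPool2x2()
{
    Pool2dOptions opts;
    opts.windowDimensions = {2, 2};
    opts.padding = {0, 0, 0, 0};
    return opts;
}
```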
cmake/OpenCVDetectWebNN.cmake
Outdated
if(WITH_WEBNN)
  set(WEBNN_HEADER_DIRS "$ENV{WEBNN_NATIVE_DIR}/gen/src/include")
  set(WEBNN_INCLUDE_DIRS "$ENV{WEBNN_NATIVE_DIR}/../../src/include")
  set(WEBNN_LIBRARIES "$ENV{WEBNN_NATIVE_DIR}/libwebnn_native.so;$ENV{WEBNN_NATIVE_DIR}/libwebnn_proc.so")
$ENV{WEBNN_NATIVE_DIR}
It is better to use a CMake variable instead (you can still initialize it from the environment; see the usage of ocv_check_environment_variables).

Thanks for your comment. I have fixed it.
cmake/OpenCVDetectWebNN.cmake
Outdated
  message(WARNING "Can't use WebNN-native")
  return()
endif()
message(AUTHOR_WARNING "Use WebNN-native")
native
What about the Emscripten case?

Thanks for your suggestion. More logs have been added.
@@ -0,0 +1,55 @@
// Modules to control application life and create native browser window
main.js
Is this file for Electron usage only?

Yes, I have moved such files into a sub-directory called webnn-electron.
{
    "name": "image_classification",
    "version": "0.0.1",
    "description": "An Electron.js example of image_classification using webnn-native",
It makes sense to move Electron-specific stuff into a sub-directory to avoid confusion.

Thanks! It is moved to the webnn-electron folder.
modules/dnn/CMakeLists.txt
Outdated
if(HAVE_WEBNN)
  list(APPEND include_dirs ${WEBNN_HEADER_DIRS})
  list(APPEND include_dirs ${WEBNN_INCLUDE_DIRS})
  list(APPEND libs -Wl,--whole-archive ${WEBNN_LIBRARIES} -Wl,--no-whole-archive)
Don't use tabs. Indentation in CMake scripts is 2 spaces.
Update dnn.cpp Update dnn.cpp Update dnn.cpp Update dnn.cpp Add WebNN head files into OpenCV 3rd partiy files Create webnn.hpp update cmake Complete README and add OpenCVDetectWebNN.cmake file add webnn.cpp Modify webnn.cpp Can successfully compile the codes for creating a MLContext Update webnn.cpp Update README.md Update README.md Update README.md Update README.md Update cmake files and update README.md Update OpenCVDetectWebNN.cmake and README.md Update OpenCVDetectWebNN.cmake Fix OpenCVDetectWebNN.cmake and update README.md Add source webnn_cpp.cpp and libary libwebnn_proc.so Update dnn.cpp Update dnn.cpp Update dnn.cpp Update dnn.cpp update dnn.cpp update op_webnn update op_webnn Update op_webnn.hpp update op_webnn.cpp & hpp Update op_webnn.hpp Update op_webnn update the skeleton Update op_webnn.cpp Update op_webnn Update op_webnn.cpp Update op_webnn.cpp Update op_webnn.hpp update op_webnn update op_webnn Solved the problems of released variables. Fixed the bugs in op_webnn.cpp Implement op_webnn Implement Relu by WebNN API Update dnn.cpp for better test Update elementwise_layers.cpp Implement ReLU6 Update elementwise_layers.cpp Implement SoftMax using WebNN API Implement Reshape by WebNN API Implement PermuteLayer by WebNN API Implement PoolingLayer using WebNN API Update pooling_layer.cpp Update pooling_layer.cpp Update pooling_layer.cpp Update pooling_layer.cpp Update pooling_layer.cpp Update pooling_layer.cpp Implement poolingLayer by WebNN API and add more detailed logs Update dnn.cpp Update dnn.cpp Remove redundant codes and add more logs for poolingLayer Add more logs in the pooling layer implementation Fix the indent issue and resolve the compiling issue Fix the build problems Fix the build issue FIx the build issue Update dnn.cpp Update dnn.cpp
This is a temporary file for Conv2d layer implementation
Please merge changes from the upstream repository (to resolve conflicts):
|
Hi @alalek, thanks for your comments. I will work on that.
|
Hi @alalek, the conflicts have been resolved. Please have a look. Thanks!
|
Hi @alalek, did you test the OpenCV dnn module with the OpenVINO backend using the latest 4.x version? According to my testing report, both the current formal 4.x version and the latest version in this PR cannot run the dnn module using the OpenVINO backend. The error is like
samples/dnn/classification.cpp
Outdated
// Put efficiency information.
std::vector<double> layersTimes;
double freq = getTickFrequency() / 1000;
t = net.getPerfProfile(layersTimes) / freq;
t_sum += t;
It is better to use an external TickMeter instead of .getPerfProfile().
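The idea is to time the forward pass with an external wall-clock timer rather than the layer-profiling accumulator. A minimal sketch using std::chrono in place of cv::TickMeter (which follows the same start/stop/elapsed pattern), so the example stays self-contained without OpenCV headers:

```cpp
#include <chrono>

// Minimal external timer following the start/stop/getTimeMilli pattern of
// cv::TickMeter, built on std::chrono so the sketch is self-contained.
class SimpleTickMeter {
public:
    void start() { t0 = Clock::now(); }
    void stop()
    {
        elapsed += std::chrono::duration<double, std::milli>(Clock::now() - t0).count();
    }
    double getTimeMilli() const { return elapsed; }
private:
    using Clock = std::chrono::steady_clock;
    Clock::time_point t0;
    double elapsed = 0.0; // accumulated milliseconds across start/stop pairs
};

// Tiny self-check: timing an empty region yields a non-negative value.
double timeNoop()
{
    SimpleTickMeter tm;
    tm.start();
    tm.stop();
    return tm.getTimeMilli();
}
```

In the sample, such a timer would wrap the net.forward() call, accumulating per-frame inference time independently of the backend's internal profiling.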
samples/dnn/classification.cpp
Outdated
//! [Get a class with a highest score]
Point classIdPoint;
double t_sum = 0.0;
double t;
double t;
Prefer to declare variables near their usage. It significantly increases code readability.
samples/dnn/classification.cpp
Outdated
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include "sys/time.h"
modules/dnn/src/webnn/README.md
Outdated
### Test native DNN_BACKEND_WEBNN backend
Add -DWITH_WEBNN=ON to the cmake command to build the WebNN module such as:
`cmake -DWITH_WEBNN=ON ../opencv` (according to the [Installation in Linux](https://docs.opencv.org/master/d7/d9f/tutorial_linux_install.html))
https://docs.opencv.org/master
Direct URL links on docs.opencv.org are not allowed. Use doxygen references instead.

Hi @alalek, I changed this link, but I'm not sure the new link is as you expected, so please have a look. Thanks!

Please use @ref tutorial_linux_install

Thanks! I have modified this in the new commit.
This version was increased on Oct 4. Perhaps you have outdated binaries somewhere. Please take a look at the whitespace and compilation issues on the public CI: https://pullrequest.opencv.org/#/summary/
|
Hi @alalek, thanks for your comments. I will fix these problems. |
Hi @alalek, your idea is right. However, after trying several of the latest released versions of OpenVINO, like 2021.4.1 and 2021.4.2, the OpenCV dnn module with OpenVINO still cannot work. It seems that the version
|
Right, there are no binary releases of the latest OpenCV versions with OpenVINO (these packages use the 4.5.3 variant).
|
There are still red builds, which block the PR merge.
|
Hi @alalek, the CI problems have been fixed now. Thanks for your comments!
samples/dnn/classification.cpp
Outdated
maxProb = *std::max_element(prob.begin<float>(), prob.end<float>());
cv::exp(prob-maxProb, softmaxProb);
sum = cv::sum(softmaxProb)[0];
sum = (float)cv::sum(softmaxProb)[0];
to resolve an MSVC warning
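The code under review is the numerically stable softmax normalization (subtract the max, exponentiate, divide by the sum), and the fix narrows the double result of the summation to float explicitly. A self-contained sketch over a plain `std::vector<float>` standing in for the `cv::Mat` of scores:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Numerically stable softmax: subtracting the max before exp() keeps the
// exponentials in range. The accumulator is double, so the explicit (float)
// cast mirrors the MSVC-warning fix suggested for cv::sum(...)[0].
std::vector<float> softmax(const std::vector<float>& prob)
{
    float maxProb = *std::max_element(prob.begin(), prob.end());
    std::vector<float> out(prob.size());
    double sumD = 0.0;
    for (size_t i = 0; i < prob.size(); ++i)
    {
        out[i] = std::exp(prob[i] - maxProb);
        sumD += out[i];
    }
    float sum = (float)sumD; // explicit narrowing, as in the review suggestion
    for (float& v : out)
        v /= sum;
    return out;
}
```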
alalek left a comment:

Great job! Thank you for the contribution 👍
[GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN * Add WebNN backend for OpenCV DNN Module Update dnn.cpp Update dnn.cpp Update dnn.cpp Update dnn.cpp Add WebNN head files into OpenCV 3rd partiy files Create webnn.hpp update cmake Complete README and add OpenCVDetectWebNN.cmake file add webnn.cpp Modify webnn.cpp Can successfully compile the codes for creating a MLContext Update webnn.cpp Update README.md Update README.md Update README.md Update README.md Update cmake files and update README.md Update OpenCVDetectWebNN.cmake and README.md Update OpenCVDetectWebNN.cmake Fix OpenCVDetectWebNN.cmake and update README.md Add source webnn_cpp.cpp and libary libwebnn_proc.so Update dnn.cpp Update dnn.cpp Update dnn.cpp Update dnn.cpp update dnn.cpp update op_webnn update op_webnn Update op_webnn.hpp update op_webnn.cpp & hpp Update op_webnn.hpp Update op_webnn update the skeleton Update op_webnn.cpp Update op_webnn Update op_webnn.cpp Update op_webnn.cpp Update op_webnn.hpp update op_webnn update op_webnn Solved the problems of released variables. 
Fixed the bugs in op_webnn.cpp Implement op_webnn Implement Relu by WebNN API Update dnn.cpp for better test Update elementwise_layers.cpp Implement ReLU6 Update elementwise_layers.cpp Implement SoftMax using WebNN API Implement Reshape by WebNN API Implement PermuteLayer by WebNN API Implement PoolingLayer using WebNN API Update pooling_layer.cpp Update pooling_layer.cpp Update pooling_layer.cpp Update pooling_layer.cpp Update pooling_layer.cpp Update pooling_layer.cpp Implement poolingLayer by WebNN API and add more detailed logs Update dnn.cpp Update dnn.cpp Remove redundant codes and add more logs for poolingLayer Add more logs in the pooling layer implementation Fix the indent issue and resolve the compiling issue Fix the build problems Fix the build issue FIx the build issue Update dnn.cpp Update dnn.cpp * Fix the build issue * Implement BatchNorm Layer by WebNN API * Update convolution_layer.cpp This is a temporary file for Conv2d layer implementation * Integrate some general functions into op_webnn.cpp&hpp * Update const_layer.cpp * Update convolution_layer.cpp Still have some bugs that should be fixed. * Update conv2d layer and fc layer still have some problems to be fixed. * update constLayer, conv layer, fc layer There are still some bugs to be fixed. * Fix the build issue * Update concat_layer.cpp Still have some bugs to be fixed. 
* Update conv2d layer, fully connected layer and const layer * Update convolution_layer.cpp * Add OpenCV.js DNN module WebNN Backend (both using webnn-polyfill and electron) * Delete bib19450.aux * Add WebNN backend for OpenCV DNN Module Update dnn.cpp Update dnn.cpp Update dnn.cpp Update dnn.cpp Add WebNN head files into OpenCV 3rd partiy files Create webnn.hpp update cmake Complete README and add OpenCVDetectWebNN.cmake file add webnn.cpp Modify webnn.cpp Can successfully compile the codes for creating a MLContext Update webnn.cpp Update README.md Update README.md Update README.md Update README.md Update cmake files and update README.md Update OpenCVDetectWebNN.cmake and README.md Update OpenCVDetectWebNN.cmake Fix OpenCVDetectWebNN.cmake and update README.md Add source webnn_cpp.cpp and libary libwebnn_proc.so Update dnn.cpp Update dnn.cpp Update dnn.cpp Update dnn.cpp update dnn.cpp update op_webnn update op_webnn Update op_webnn.hpp update op_webnn.cpp & hpp Update op_webnn.hpp Update op_webnn update the skeleton Update op_webnn.cpp Update op_webnn Update op_webnn.cpp Update op_webnn.cpp Update op_webnn.hpp update op_webnn update op_webnn Solved the problems of released variables. 
Fixed the bugs in op_webnn.cpp Implement op_webnn Implement Relu by WebNN API Update dnn.cpp for better test Update elementwise_layers.cpp Implement ReLU6 Update elementwise_layers.cpp Implement SoftMax using WebNN API Implement Reshape by WebNN API Implement PermuteLayer by WebNN API Implement PoolingLayer using WebNN API Update pooling_layer.cpp Update pooling_layer.cpp Update pooling_layer.cpp Update pooling_layer.cpp Update pooling_layer.cpp Update pooling_layer.cpp Implement poolingLayer by WebNN API and add more detailed logs Update dnn.cpp Update dnn.cpp Remove redundant codes and add more logs for poolingLayer Add more logs in the pooling layer implementation Fix the indent issue and resolve the compiling issue Fix the build problems Fix the build issue FIx the build issue Update dnn.cpp Update dnn.cpp * Fix the build issue * Implement BatchNorm Layer by WebNN API * Update convolution_layer.cpp This is a temporary file for Conv2d layer implementation * Integrate some general functions into op_webnn.cpp&hpp * Update const_layer.cpp * Update convolution_layer.cpp Still have some bugs that should be fixed. * Update conv2d layer and fc layer still have some problems to be fixed. * update constLayer, conv layer, fc layer There are still some bugs to be fixed. 
* Update conv2d layer, fully connected layer and const layer * Update convolution_layer.cpp * Add OpenCV.js DNN module WebNN Backend (both using webnn-polyfill and electron) * Update dnn.cpp * Fix Error in dnn.cpp * Resolve duplication in conditions in convolution_layer.cpp * Fixed the issues in the comments * Fix building issue * Update tutorial * Fixed comments * Address the comments * Update CMakeLists.txt * Offer more accurate perf test on native * Add better perf tests for both native and web * Modify per tests for better results * Use more latest version of Electron * Support latest WebNN Clamp op * Add definition of HAVE_WEBNN macro * Support group convolution * Implement Scale_layer using WebNN * Add Softmax option for native classification example * Fix comments * Fix comments
Overview
Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN
Mentor: Ningxin Hu @huningxin
Student: Hanxi Guo @MarkGHX
This pull request changes
Test
My test environments
Preparations
- /GSoC2021 (for example).
- /GSoC2021 and install version 2.0.15.
- /GSoC2021/emsdk/upstream/emscripten with emscripten-webnn (branch webnn_2.0.15).

To run OpenCV native DNN module with WebNN backend
$ mkdir build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Release -DWITH_WEBNN=ON -DBUILD_EXAMPLES=ON -DBUILD_TEST=ON -DCMAKE_INSTALL_PREFIX=/usr/local ..
$ make

From ./opencv/build/bin:
$ ./bin/example_dnn_classification --model=./bin/googlenet-v1.caffemodel --config=./bin/googlenet-v1.prototxt --width=224 --height=224 --classes=./bin/classification_classes_ILSVRC2012.txt --input=./bin/space_shuttle.jpg --mean="104 117 123" --rgb=false --backend=6

Expected result:

To run OpenCV.js DNN module with WebNN backend using WebNN-polyfill
$ cd opencv/build_js/doc/doxygen/html/
$ http-server

Open http://127.0.0.1:8080/js_image_classification_webnn_polyfill.html. Then you can test OpenCV.js GoogleNet with the WebNN backend in an image classification task.

To run OpenCV.js DNN module with WebNN backend using Electron
$ cd opencv/build_js/doc/doxygen/html/
$ npm install
$ npm run start

Performance results
Inference time in one round
Average inference time of 200 rounds
Performance analysis
OpenCV native DNN module
From the performance results above, we can see that in the OpenCV native DNN module (GoogleNet, for example), using the WebNN backend is 5 ms (18.2%) faster than using the default implementation. However, there is still a gap between the WebNN backend and the OpenVINO backend. I think this is because the LRN and Dropout layers in GoogleNet are not implemented by WebNN yet, which in turn divides the graph into four sub-graphs. The four sub-graphs are then linked with the default LRN and Dropout implementations. Using such sub-graphs instead of a whole graph for optimization reduces the performance of the OpenCV native DNN module with the WebNN backend. In contrast, SqueezeNet's ops all use the WebNN backend except Softmax. Thus, SqueezeNet using the WebNN backend is not divided into different parts, and its performance is very close to SqueezeNet using the OpenVINO backend.
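The partitioning effect described above can be illustrated with a small self-contained sketch: given a per-layer supported/unsupported flag (hypothetical data standing in for the real GoogleNet layer list), the number of backend sub-graphs equals the number of contiguous supported runs.

```cpp
#include <vector>

// Count contiguous runs of backend-supported layers: each run becomes one
// compiled WebNN sub-graph, and every unsupported layer (e.g. LRN, Dropout)
// forces a cut back to the default implementation between runs.
int countSubgraphs(const std::vector<bool>& supported)
{
    int subgraphs = 0;
    bool inRun = false;
    for (bool s : supported)
    {
        if (s && !inRun)
            ++subgraphs;   // a new supported run starts a new sub-graph
        inRun = s;
    }
    return subgraphs;
}
```

With a hypothetical pattern like supported, LRN, supported, Dropout, supported, the count is 3: each unsupported layer adds a graph cut, and more cuts mean less whole-graph optimization, matching the gap observed for GoogleNet but not for SqueezeNet.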
OpenCV.js DNN module
From the performance results above, we can see that the OpenCV.js DNN module (GoogleNet as an example) using WebNN-polyfill and using WebNN-Electron both outperform the OpenCV.js DNN module using only wasm, by 10.93x and 32.13x respectively. Compared with the OpenCV.js DNN module with wasm+simd+threads, the WebNN-polyfill build performs similarly, while the WebNN-Electron build is at least 2x faster.