Standalone TVM Executor Provider#10019
include/onnxruntime/core/providers/stvm/stvm_provider_factory.h
if (onnxruntime_USE_STVM)
  # find_library(STVM_LIBS NAMES libtvm.so PATHS ${onnxruntime_STVM_HOME}/lib)
  # link_directories(onnxruntime_test_all ${STVM_LIBS})
  find_library(PYTHON_LIBS NAMES libpython PATHS /usr/local/lib)
why is python needed for unit tests?
This is specific to TVM: it uses native and Python code in combination.
)
if(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
  target_link_libraries(onnxruntime_providers_stvm PRIVATE ${onnxruntime_STVM_HOME}/build/libtvm.dylib)
else()
this assumes Unix/Linux systems only?
Does this work on Windows?
Right now the target platform is Unix/Linux only, and all testing has been done there. TVM itself can be supported on iOS or Windows, but for now this is simply a draft for future development; we have not verified that it works on Windows.
We've made a note in the documentation that this "preview" EP has only been tested on Linux. It is part of our plans to ensure it can run on Windows, but that capability won't be included in this PR.
/azp run Linux CPU Minimal Build E2E CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux Nuphar CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-python-checks-ci-pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux CPU x64 NoContribops CI Pipeline, Linux CPU x64 NoContribops CI Pipeline

Azure Pipelines successfully started running 6 pipeline(s).

/azp run Linux GPU CI Pipeline

Azure Pipelines successfully started running 1 pipeline(s).

/azp run MacOS NoContribops CI Pipeline, Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows WebAssembly CI Pipeline, orttraining-amd-gpu-ci-pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed, Linux CPU CI Pipeline

Azure Pipelines successfully started running 8 pipeline(s).

/azp run Linux CPU CI Pipeline

Azure Pipelines successfully started running 1 pipeline(s).

/azp run Linux CPU Minimal Build E2E CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux Nuphar CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-python-checks-ci-pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux CPU x64 NoContribops CI Pipeline, Linux CPU x64 NoContribops CI Pipeline

Azure Pipelines successfully started running 6 pipeline(s).
@jywu-msft please restart CI. I've found only two issues from the first CI failure. It seems my second fix commit canceled the second CI run.
/azp run Linux CPU Minimal Build E2E CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux Nuphar CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-python-checks-ci-pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux CPU x64 NoContribops CI Pipeline, Linux CPU x64 NoContribops CI Pipeline

Azure Pipelines successfully started running 6 pipeline(s).

/azp run Linux CPU CI Pipeline

Azure Pipelines successfully started running 1 pipeline(s).

/azp run Linux GPU CI Pipeline, Linux OpenVINO CI Pipeline, Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed

Azure Pipelines successfully started running 8 pipeline(s).
* squashed commit for standalone tvm execution provider
* critical fix for correct python build with stvm ep
* get tuning log file from ep options; it has priority over AUTOTVM_TUNING_LOG
* updates and fixes
* update parsing of stvm provider options
* add support of external data for onnx model
* add conditional dump of subgraphs
* remove unused code
* get input tensor shapes through provider options; get output shapes for fixed input ones by TVM API
* support AUTO_TVM tuning log file inside ORT; selector for Ansor and Auto_TVM is provider option (tuning_type)
* add fp16
* add functionality for conversion of model layout to NHWC if needed; necessary parameter was added to STVM provider options
* fix license text in header; fix log format
* small fixes
* fix issues from flake8
* remove model proto construction from GetCapability
* reserve memory for vector of DLTensors
* add simple tutorial for STVM EP
* STVM docs
* jroesch/tvm -> apache/tvm
* remove dead code, unnecessary logs and comments
* fix in readme
* improve tutorial notebook
* tvm update
* update STVM_EP.md
* fix default value
* update STVM_EP.md
* some TODOs for future development
* shorten long lines
* add hyperlink to STVM_EP.md
* fix Linux CI error
* fix error in csharp test

Co-authored-by: Jared Roesch <[email protected]>
Co-authored-by: Valery Chernov <[email protected]>
Co-authored-by: KJlaccHoeUM9l <[email protected]>

(cherry picked from commit b327e89)
This is a preview of a new EP: STVM, or standalone TVM. It is an implementation of TVM as an execution provider, separate from the Nuphar EP.
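As a hedged illustration of the provider options this PR describes (tuning_type selector, a tuning log file that takes priority over the AUTOTVM_TUNING_LOG environment variable, fixed input shapes, and optional NHWC layout conversion), the sketch below builds an options dictionary and shows how a session might be created. The option key names and the provider name "StvmExecutionProvider" are assumptions inferred from this PR's commit messages, not a confirmed API; consult STVM_EP.md for the actual names.

```python
# Hypothetical STVM EP provider options; all key names below are
# assumptions inferred from the PR description, not a confirmed API.
provider_options = [{
    "tuning_type": "Ansor",            # selector between "Ansor" and "AutoTVM"
    "tuning_file_path": "tuning.log",  # takes priority over AUTOTVM_TUNING_LOG
    "input_shapes": "[1 3 224 224]",   # fixed input shapes passed via options
    "to_nhwc": "True",                 # convert model layout to NHWC if needed
}]

# Creating the session requires an ORT build with the STVM EP enabled,
# so it is left commented out in this sketch:
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "model.onnx",
#     providers=["StvmExecutionProvider"],  # assumed provider name
#     provider_options=provider_options,
# )
```

The `providers`/`provider_options` pattern follows the standard onnxruntime Python API for passing per-provider configuration.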