Conversation
…g against kernel32.lib etc (#2346) add onecoreuap_apiset.lib in order to avoid linking against kernel32.lib etc and violating our OS layering requirements. We linked against onecoreuap_apiset.lib in VB, so we will continue doing this, but I am still unsure why we do not link against onecore instead, since that is where we ship. However, since Sheil owns this code, we will wait to discuss it with him before changing anything.
* update build instructions to include --build_shared_lib * fix line breaks
* Task 23998197: add winml_lib_core into onnxruntime.dll * PR feedback; fix build break on perf_test
#2382) This is a big PR. We are going to move it up to layer_dev, which is still an L3, so we are still safe to do agile work there. We are moving this into the L3 so that Ryan can start doing integration testing. We will pause for a full code review and integration test results prior to going into the L2. >>>> raw comments from previous commits >>> * LearningModelSession is cleaned up to use the adapter, and parts of binding are. * Moved everything into the winml adapter and made it all nano-COM, using WRL to construct objects on the ORT side. Added base interfaces for everything WinML needs to call, and cleaned up a bunch of WinML to use the base interfaces. * More pieces. * GetData across the ABI. * Renamed some namespaces; cleaned up OrtValue, Tensor, and custom ops. Everything *but* LearningModel should be clean. * Make sure it's building. winml.dll is still a monolith.
* add option to enable winml telemetry * clean logs while developing * clean the log of GUID * compile onnxruntime_common with winml telemetry * use option for use_telemetry * rename option winml_use_telemetry to onnxruntime_use_telemetry * little change
* model moved over; everything builds clean * weak ref comment * added a wrapper for RoGetActivationFactory to hook back into WinML for creating WinML objects; fixes model load * fixed some lifetime management; fixed the debug build; SqueezeNet passes using WinMLRunner for CPU and GPU * PR feedback * a couple of fixes and implemented getmutabledata() * fixed 2 more heap corruptions
* Add opset and IR check. * Add test case for future opsets. #2371
Found a leak in the NVIDIA driver, but skipped it. All WinMLAPITests pass now.
snnn
reviewed
Jan 31, 2020
Contributor
Overall it LGTM. There are some tiny things that must get fixed.
| **Build Shared Library** | `--build_shared_lib` | |
| **Build Python wheel** | `--build_wheel` | |
| **Build C# and C packages** | `--build_csharp` | |
| **Build WindowsML** | `--use_winml`<br>`--use_dml`<br>`--build_shared_lib` | WindowsML depends on DirectML and the OnnxRuntime shared library. |
Contributor
Can we avoid making users type --build_shared_lib and just infer it when --use_winml is specified? It's one less thing for users to remember.
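A minimal sketch of what that inference could look like, written here as a hypothetical POSIX shell wrapper (the real build script would implement this internally; `infer_build_flags` is an invented name, not part of the repo):

```shell
# Hypothetical sketch: append --build_shared_lib whenever --use_winml is
# present, so users do not have to remember the dependency themselves.
infer_build_flags() {
  args="$*"
  case " $args " in
    *" --use_winml "*)
      case " $args " in
        *" --build_shared_lib "*) : ;;          # already present, nothing to do
        *) args="$args --build_shared_lib" ;;   # infer the implied flag
      esac
      ;;
  esac
  printf '%s\n' "$args"
}

infer_build_flags --use_winml --use_dml
# → --use_winml --use_dml --build_shared_lib
```

The same idea maps directly onto the build script's own argument handling: once `--use_winml` implies the shared library, the table above would no longer need to list `--build_shared_lib` for WindowsML.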
Contributor
I haven't checked all the files under the winml/ folder. Please make sure license headers and copyright notices are added in all these new files.
* CR feedback * fix weird formatting on privacy readme * add 'All rights reserved.' everywhere * re-add 'All rights reserved.' to winml_provider_factory.h * remove extra space in comment * remove extra whitespace
snnn
reviewed
Feb 4, 2020
snnn
previously approved these changes
Feb 4, 2020
martinb35
previously approved these changes
Feb 4, 2020
martinb35
approved these changes
Feb 4, 2020
weixingzhang pushed a commit that referenced this pull request on Mar 23, 2020
Merge up to commit 4f4f4bc. There were several very large pull requests in public master: #2956 #2958 #2961

**BERT-Large, FP16, seq=128:** Batch = 66, Throughput = 189.049 ex/sec
**BERT-Large, FP16, seq=512:** Batch = 10, Throughput = 36.6335 ex/sec
**BERT-Large, FP32, seq=128:** Batch = 33, Throughput = 42.2642 ex/sec
**BERT-Large, FP32, seq=512:** Batch = 5, Throughput = 9.32792 ex/sec

**BERT-Large LAMB convergence:** `$ python watch_experiment.py --subscription='4aaa645c-5ae2-4ae9-a17a-84b9023bc56a' --resource_group='onnxtraining' --workspace='onnxtraining' --remote_dir='logs/tensorboard/' --local_dir='D:/tensorboard/bert-large/fp16/lamb/seq128/lr3e-3/wr0.2843/master/' --run='BERT-ONNX_1581120364_71872cef'`

**E2E**: PASSED https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=117300&view=results
This PR is for comment on bringing the newly layered Windows ML components into the master branch of ONNX Runtime for Windows.
This is in anticipation of a beta release of these bits over the next couple of months. We will include more documentation on how to use them, how the layers work, and the relationships between WinML, ORT, and DML (which we introduced to master in ORT v1.0).
Some of the things we have done in this PR:
- Added a top level directory "/winml"
- Contributed all of the Windows inbox code from the Windows.AI.MachineLearning namespace into that directory, making it available under the MIT license.
- Started a layering effort to have the new Windows.AI.MachineLearning.dll consume the onnxruntime.dll C ABI that we introduced in v1.0.
- Added an "adapter" module that gets linked into the core onnxruntime.dll. This adapter is private to the WinML component and provides the ABI functionality required for the layering effort. It is not a new public ABI and is not supported for developers to call.
- Made the WinML ABI fully available for public developers to call.
- You can now include both of these DLLs (WinML + ORT) as redistributable components in projects that want to use the WinML ABI.
- Added CMake support for all of this work. There is now a "use_winml" build flag in addition to the "use_dml" build flag.
- Added Google Test unit tests for the newly added WinML ABI; these live under the top-level "winml/test" folder.
- And several other things :)
Enjoy !
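As a rough illustration of the flags described above, here is one way the build might be invoked on Windows. The `build.bat` entry point and the `--config RelWithDebInfo` value are assumptions based on the repo's usual build script and are not verified here; the snippet only composes and prints the command rather than running it.

```shell
# Illustrative only: compose a Windows build invocation that enables the
# WinML layer alongside DirectML and the shared onnxruntime library.
BUILD_CMD="build.bat --config RelWithDebInfo --use_winml --use_dml --build_shared_lib"
printf '%s\n' "$BUILD_CMD"
# → build.bat --config RelWithDebInfo --use_winml --use_dml --build_shared_lib
```

Per the review discussion above, `--build_shared_lib` is currently required explicitly because WindowsML depends on the OnnxRuntime shared library.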