TESTS: Added test for parallel LoadNetwork with accuracy check#858
Conversation
Force-pushed from 756189a to 8694187
@vladimir-paramuzov could you please have a look at why the tests fail on GPU?
@ilya-lavrenov As I understand it, the issue is connected with the lifetime of the output blobs from the test run. The CLDNN plugin just wraps the memory allocated by cldnn::network for each output blob to avoid extra copies, so once the ExecNetwork is freed, those memory buffers are invalidated as well. The solution is to allocate a unique output buffer for each infer request and force cldnn to use it instead of the one allocated internally. This is a known limitation in cldnn, and we have a task to fix it (27643). So as a WA I can suggest either temporarily suppressing these tests for GPU or using the SetBlob API:

```cpp
req.SetInput(blobs);
InferenceEngine::Blob::Ptr blob = make_blob_with_precision(network.getOutputsInfo().begin()->second->getTensorDesc());
blob->allocate();
req.SetBlob(network.getOutputsInfo().begin()->first, blob);
req.Infer();
return blob;
```

With these changes the tests passed on my machine.
Force-pushed from 8694187 to 401258d
@vladimir-paramuzov thanks for the detailed explanation! Updated the test; I hope GPU will pass now.
Force-pushed from c1a6399 to bafcea4
CVS-31714