When we have memory transfers from the host to a device, or any long-running (I/O) operation that can be split into a begin part and a wait part, we can try to hide the latency. (For now, this is focused on memory transfers in OpenMP target offloading, but the scheme should apply to CUDA and other languages as well.)
Given a blocking cross-device memory transfer such as blocking_memcpy_host2device(Dst, Src, N), we first want to split it into two parts, the "issue" and the "wait", something like:
handle = async_issue_memcpy_host2device(Dst, Src, N); wait(handle, Dst, Src, N). Then we want to move the two calls apart, causing the issue to be executed earlier and the wait later. With some luck, the code we can legally place in between now executes while the memcpy is performed, effectively reducing the exposed latency. Note that this also works if we start with an async version.
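As a rough sketch of the transformation, the before/after could look as follows. The helper names (blocking_memcpy_host2device, async_issue_memcpy_host2device, wait, handle_t, and the two work functions) mirror the hypothetical names above and are not a real API; they are placeholders for illustration only.

/* Hypothetical API, declared only so the sketch is self-contained. */
#include <stddef.h>

typedef struct handle handle_t;
void      blocking_memcpy_host2device(void *Dst, const void *Src, size_t N);
handle_t *async_issue_memcpy_host2device(void *Dst, const void *Src, size_t N);
void      wait(handle_t *H, void *Dst, const void *Src, size_t N);
void      unrelated_host_work(void);          /* does not touch Dst or Src */
void      use_device_data(void *Dst, size_t N);

/* Before: the transfer blocks, so its latency is fully exposed. */
void before(void *Dst, const void *Src, size_t N) {
  blocking_memcpy_host2device(Dst, Src, N);
  unrelated_host_work();
  use_device_data(Dst, N);
}

/* After: split into "issue" + "wait", then move them apart so the
 * independent host work overlaps with the in-flight copy. */
void after(void *Dst, const void *Src, size_t N) {
  handle_t *H = async_issue_memcpy_host2device(Dst, Src, N); /* issue early */
  unrelated_host_work();            /* runs while the copy is in flight */
  wait(H, Dst, Src, N);             /* wait only right before the first use */
  use_device_data(Dst, N);
}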
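In OpenMP target offloading itself, the issue/wait split can already be expressed with existing directives: a nowait data-transfer construct acts as the issue, and a depend clause on the consuming construct acts as the wait. The following is only a minimal sketch of where the two points end up; the array name and sizes are made up for the example.

#include <omp.h>
#define N 1024

int main(void) {
  double A[N];
  double host_sum = 0.0;
  for (int i = 0; i < N; ++i)
    A[i] = i;

  /* "Issue": start the host-to-device transfer as a deferred task. */
  #pragma omp target enter data map(to: A[0:N]) nowait depend(out: A)

  /* Independent host work that can overlap with the transfer. */
  for (int i = 0; i < N; ++i)
    host_sum += i;

  /* "Wait": the depend clause orders this region after the transfer. */
  #pragma omp target depend(in: A) map(tofrom: host_sum)
  {
    for (int i = 0; i < N; ++i)
      host_sum += A[i];
  }

  #pragma omp target exit data map(delete: A[0:N])
  return 0;
}

The goal of the transformation described above is to get blocking transfers (or transfers whose issue and wait sit right next to each other) into this shape automatically, by hoisting the issue and sinking the wait as far as legality allows.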