# Fix observe broadcast fallback on REST QPU targets #4395

1tnguyen merged 2 commits into NVIDIA:main
Conversation
Force-pushed from c6cbfda to 1e1c833
@zeel2104 There is a test failure in the new test case: https://github.com/NVIDIA/cuda-quantum/actions/runs/25017183407/job/73268586629?pr=4395#step:15:8084 It looks like this may be related to shot-based expectation evaluation in these hardware QPU mock tests. You might consider increasing the number of shots and adjusting the tolerance so the test stays stable.
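For reference, a back-of-the-envelope way to size the shot count and tolerance (illustrative numbers, not from this PR): the standard error of a shot-based `<Z>` estimate is `sqrt((1 - <Z>^2) / shots)`, so the assertion tolerance should sit several standard errors above the worst case:

```python
import math

def z_std_error(expectation: float, shots: int) -> float:
    # Standard error of a shot-based <Z> estimate over +/-1 outcomes.
    return math.sqrt((1.0 - expectation**2) / shots)

# Worst case is <Z> = 0:
print(z_std_error(0.0, 1000))   # ~0.032 -> atol=0.1 is only ~3 sigma
print(z_std_error(0.0, 10000))  # ~0.010 -> atol=0.1 is ~10 sigma
```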
Force-pushed from 6721a6a to a4f8b8a
@1tnguyen Local WSL validation:
Thanks for tackling this! Verified the fix works against the real IonQ cloud REST endpoint (`qpu='simulator'`).

One small gap worth filling: the issue explicitly lists IonQ as affected, but the new tests only cover OQC and Quantinuum. Since IonQ goes through the same `BaseRemoteRESTQPU` path, an analogous test would close the gap:

```python
from typing import List

import numpy as np

import cudaq
from cudaq import spin


def test_ionq_observe_broadcast():
    qubit_count = 5
    sample_count = 4
    shots_count = 10000
    parameters = np.random.default_rng(13).uniform(low=0,
                                                   high=1,
                                                   size=(sample_count,
                                                         qubit_count))

    @cudaq.kernel
    def kernel(qubit_count: int, parameters: List[float]):
        qvector = cudaq.qvector(qubit_count)
        for i in range(qubit_count - 1):
            rx(parameters[i], qvector[i])

    # Broadcast observe: one <Z_0> evaluation per row of `parameters`.
    results = cudaq.observe(kernel,
                            spin.z(0), [qubit_count] * sample_count,
                            parameters,
                            shots_count=shots_count)
    # rx(theta) on qubit 0 gives <Z_0> = cos(theta).
    expected = np.cos(parameters[:, 0])
    assert len(results) == sample_count
    assert np.allclose([r.expectation() for r in results], expected, atol=0.1)
```

Happy to open a follow-up PR if that's easier than tacking it on.
@splch Thank you very much for verifying this fix.
## Summary

Follow-up to NVIDIA#4395 (which fixed `cudaq.observe` broadcast on REST QPU targets). That PR added regression tests for OQC and Quantinuum but not IonQ, even though NVIDIA#4363 explicitly listed IonQ as affected. IonQ uses the same `BaseRemoteRESTQPU` path, so an analogous test in `test_IonQ.py` closes the coverage gap.

The new test mirrors `test_OQC_observe_broadcast` / `test_quantinuum_observe_broadcast` exactly: a 4-sample parameter sweep through `cudaq.observe(...)` on `spin.z(0)`, with results compared to the analytical `cos(theta)` answer.

## Verification

I reproduced the original `TypeError` against `target='ionq'` (both `emulate=True` and the real cloud `qpu='simulator'`), applied the fix from NVIDIA#4395, and confirmed the broadcast call now returns correct expectation values within shot noise. The new test passes on the post-NVIDIA#4395 main.

## Test plan

- [x] `yapf --style google` clean
- [x] Manually verified against the real IonQ cloud simulator (broadcast `observe`, 3 parameter sets, 200 shots, results within 0.02 of analytical `cos(theta)`)
- [ ] CI: `python/tests/backends/test_IonQ.py::test_ionq_observe_broadcast`

Signed-off-by: Spencer Churchill <[email protected]>
## Summary

This change fixes `cudaq.observe()` broadcasting on REST-based QPU targets when `ExecutionContext.getExpectationValue()` returns `None`.

REST backends such as OQC and Quantinuum do not always populate `executionContext->expectationValue`. The non-broadcast `observe()` path already handled that by reconstructing the expectation value from sampled term results, but `__broadcastObserve()` passed the `None` value directly into `ObserveResult`, which caused a crash.

This patch makes the broadcast path use the same fallback behavior as the non-broadcast path.
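For a diagonal term like `spin.z(0)`, that reconstruction boils down to a parity-weighted average over the returned counts. A minimal illustrative sketch (not the actual cudaq source):

```python
def expectation_from_counts(counts: dict) -> float:
    # <Z...Z> from bitstring counts: each measured '1' flips the parity.
    total = sum(counts.values())
    return sum((-1)**bits.count('1') * n for bits, n in counts.items()) / total

# 10000 shots of rx(0.5)|0> should give <Z> ~ cos(0.5) ~ 0.878:
print(expectation_from_counts({'0': 9388, '1': 612}))  # 0.8776
```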
## What changed

- Added a helper in `python/cudaq/runtime/observe.py` to resolve the expectation value: it uses `ctx.getExpectationValue()` when available and otherwise falls back to the sampled term results
- Updated `__broadcastObserve()` to use that helper
- Added broadcast regression tests in `python/tests/backends/test_OQC.py` and `python/tests/backends/test_Quantinuum_kernel.py`
## Why this fixes the issue

Previously, the broadcast path assumed the expectation value was always present in the execution context. On REST targets that assumption is false, so `ObserveResult(...)` received `None` and raised a `TypeError`.

With this change, broadcasted `observe()` calls now fall back to computing the expectation value from the returned sample counts, matching the behavior already used in the non-broadcast path.
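In other words, the broadcast path now does something like the following (hypothetical helper name, reusing `expectation_from_counts` from the sketch above; the real logic lives in `python/cudaq/runtime/observe.py`):

```python
def _resolve_expectation(ctx, counts):
    # Prefer the value the backend populated on the ExecutionContext.
    exp_val = ctx.getExpectationValue()
    if exp_val is not None:
        return exp_val
    # REST backends may leave it as None; rebuild it from the sample
    # counts, exactly as the non-broadcast observe() path already does.
    return expectation_from_counts(counts)
```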
## Testing

I added regression tests covering broadcasted `observe()` calls for OQC and Quantinuum.

What I was able to verify locally:

- `python/cudaq/runtime/observe.py`

What I could not fully verify locally:
- running the backend tests

Reason: the tests abort during kernel compilation (`cudaq/kernel/ast_bridge.py` / `compile_to_mlir`), before `observe()` execution begins.

This appears to be unrelated to the `observe` broadcast fix itself, since the abort happens before the `observe()` runtime path is exercised.

## Local environment notes
During local setup I had to:
Even after that, the backend tests still abort earlier during kernel compilation in this environment.