
Fix observe broadcast fallback on REST QPU targets #4395

Merged
1tnguyen merged 2 commits into NVIDIA:main from zeel2104:fix-observe-broadcast-rest-qpu
Apr 29, 2026

Conversation

@zeel2104
Contributor

Summary

This change fixes cudaq.observe() broadcasting on REST-based QPU targets when ExecutionContext.getExpectationValue() returns None.

REST backends such as OQC and Quantinuum do not always populate executionContext->expectationValue. The non-broadcast observe() path already handled that by reconstructing the expectation value from sampled term results, but __broadcastObserve() passed the None value directly into ObserveResult, which caused a crash.

This patch makes the broadcast path use the same fallback behavior as the non-broadcast path.

What changed

  • Added a shared helper in python/cudaq/runtime/observe.py to resolve the expectation value:
    • return ctx.getExpectationValue() when available
    • otherwise reconstruct it from the sampled term expectations
  • Updated __broadcastObserve() to use that helper
  • Updated the existing non-broadcast path to reuse the same helper instead of duplicating the fallback logic
  • Added backend regression tests for:
    • python/tests/backends/test_OQC.py
    • python/tests/backends/test_Quantinuum_kernel.py
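
The shared helper can be sketched roughly as follows. This is an illustrative sketch only: the function name, argument shapes, and the `(coefficient, term_expectation)` pair layout are assumptions, not the actual code in python/cudaq/runtime/observe.py.

```python
# Illustrative sketch of the shared fallback helper described above.
# The name and the term_results layout are assumptions, not the real API.
def resolve_expectation(ctx_expectation, term_results):
    """Prefer the backend-provided expectation value; otherwise rebuild
    <H> = sum_i c_i * <term_i> from per-term sampled expectations.

    ctx_expectation: float or None (what ctx.getExpectationValue() returned)
    term_results: list of (coefficient, term_expectation) pairs
    """
    if ctx_expectation is not None:
        return ctx_expectation
    # REST backends (OQC, Quantinuum, ...) may leave this unset, so fall
    # back to the same reconstruction the non-broadcast path uses.
    return sum(coeff * exp for coeff, exp in term_results)
```

With a helper of this shape, both `observe()` and `__broadcastObserve()` can call the same function instead of duplicating the None check and the fallback sum.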

Why this fixes the issue

Previously, the broadcast path assumed the expectation value was always present in the execution context. On REST targets that assumption is false, so ObserveResult(...) received None and raised a TypeError.

With this change, broadcasted observe() calls now fall back to computing the expectation value from the returned sample counts, matching the behavior already used in the non-broadcast path.
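
For a single Pauli-Z term, reconstructing the expectation from sample counts amounts to a parity-weighted average over the measured bitstrings. A minimal sketch under that assumption (the actual runtime handles general Pauli terms and coefficients):

```python
def z_expectation_from_counts(counts):
    """Estimate <Z...Z> from measurement counts: each bitstring
    contributes +1 if it contains an even number of 1s, -1 if odd."""
    shots = sum(counts.values())
    signed = sum(((-1) ** bits.count('1')) * n for bits, n in counts.items())
    return signed / shots
```

For example, counts of `{'0': 900, '1': 100}` over 1000 shots give an estimated expectation of 0.8.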

Testing

I added regression tests covering broadcasted observe() calls for OQC and Quantinuum.

What I was able to verify locally:

  • the runtime fix is present in python/cudaq/runtime/observe.py
  • the new backend regression tests are present and selected by pytest

What I could not fully verify locally:

  • end-to-end execution of the new tests in my WSL environment

Reason:

  • local runs abort during kernel compilation / MLIR lowering before observe() execution begins
  • the crash occurs in cudaq/kernel/ast_bridge.py / compile_to_mlir
  • because of that, the local environment does not reach the broadcast observe path, so it does not validate the new fallback behavior end-to-end

This appears to be unrelated to the observe broadcast fix itself, since the abort happens before the observe() runtime path is exercised.

Local environment notes

During local setup I had to:

  • build CUDA-Q from source in WSL
  • build a custom LLVM/MLIR toolchain
  • disable the Braket backend locally to avoid unrelated AWS SDK dependency issues

Even after that, the backend tests still abort earlier during kernel compilation in this environment.

@copy-pr-bot

copy-pr-bot Bot commented Apr 26, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@1tnguyen 1tnguyen requested review from 1tnguyen and sacpis April 26, 2026 22:43
@1tnguyen
Collaborator

1tnguyen commented Apr 26, 2026

/ok to test c6cbfda

Command Bot: Processing...

@1tnguyen
Collaborator

Thanks @zeel2104 for your contribution.
Could you please take a look at the CI checks? There are DCO and code formatting issues. For DCO, just follow the instructions there to fix the pushed commit that is missing a sign-off.

github-actions Bot pushed a commit that referenced this pull request Apr 26, 2026
@github-actions

CUDA Quantum Docs Bot: A preview of the documentation can be found here.

@zeel2104 zeel2104 force-pushed the fix-observe-broadcast-rest-qpu branch from c6cbfda to 1e1c833 on April 27, 2026 at 13:34
@zeel2104
Contributor Author

zeel2104 commented Apr 27, 2026

@1tnguyen
Done

@1tnguyen
Collaborator

1tnguyen commented Apr 27, 2026

/ok to test 6721a6a

Command Bot: Processing...

@1tnguyen
Collaborator

@zeel2104 There is a test failure in the new test case: https://github.com/NVIDIA/cuda-quantum/actions/runs/25017183407/job/73268586629?pr=4395#step:15:8084

It looks like this may be related to shot-based expectation evaluation in these hardware QPU mock tests. You might consider increasing the number of shots and adjusting the tolerance to a value that is reasonable for stability.
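
A back-of-envelope estimate of the shot noise involved (illustrative, not from the PR): the standard error of a ±1-valued observable's mean scales as sqrt((1 - E²)/shots).

```python
import math

def shot_noise_sigma(expectation, shots):
    # Standard error of the mean of a +/-1-valued observable whose true
    # expectation is `expectation`, estimated from `shots` samples.
    return math.sqrt(max(0.0, 1.0 - expectation ** 2) / shots)
```

At 10000 shots the worst-case sigma is 0.01, so a tolerance around 0.1 leaves roughly a 10-sigma margin for the test.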

@zeel2104 zeel2104 force-pushed the fix-observe-broadcast-rest-qpu branch from 6721a6a to a4f8b8a on April 28, 2026 at 18:41
@zeel2104
Contributor Author

@1tnguyen
Updated the OQC and Quantinuum broadcast observe regression tests to compare shot-based results against the analytic expectation with explicit shots and tolerance.

Local WSL validation:
python -m pytest python/tests/backends/test_OQC.py::test_OQC_observe_broadcast python/tests/backends/test_Quantinuum_kernel.py::test_quantinuum_observe_broadcast -q -s
2 passed, 1 warning in 40.54s

@1tnguyen
Collaborator

1tnguyen commented Apr 28, 2026

/ok to test a4f8b8a

Command Bot: Processing...

github-actions Bot pushed a commit that referenced this pull request Apr 28, 2026
@github-actions

CUDA Quantum Docs Bot: A preview of the documentation can be found here.

github-actions Bot pushed a commit that referenced this pull request Apr 28, 2026
@github-actions

CUDA Quantum Docs Bot: A preview of the documentation can be found here.

@splch
Contributor

splch commented Apr 29, 2026

Thanks for tackling this! Verified the fix works against the real IonQ cloud REST endpoint (target='ionq', qpu='simulator'), not just the local emulator - the broadcasted observe now returns correct expectation values, matching analytical cos(θ) to within shot noise.

One small gap worth filling: the issue explicitly lists IonQ as affected, but the new tests only cover OQC and Quantinuum. Since IonQ goes through the same BaseRemoteRESTQPU path, an analogous test in python/tests/backends/test_IonQ.py would close the loop. Drop-in addition mirroring the style of the new OQC test:

from typing import List

import numpy as np

import cudaq
from cudaq import spin

def test_ionq_observe_broadcast():
    qubit_count = 5
    sample_count = 4
    shots_count = 10000
    parameters = np.random.default_rng(13).uniform(low=0,
                                                   high=1,
                                                   size=(sample_count,
                                                         qubit_count))

    @cudaq.kernel
    def kernel(qubit_count: int, parameters: List[float]):
        qvector = cudaq.qvector(qubit_count)
        for i in range(qubit_count - 1):
            rx(parameters[i], qvector[i])

    results = cudaq.observe(kernel,
                            spin.z(0), [qubit_count] * sample_count,
                            parameters,
                            shots_count=shots_count)
    expected = np.cos(parameters[:, 0])

    assert len(results) == sample_count
    assert np.allclose([r.expectation() for r in results], expected, atol=0.1)

Happy to open a follow-up PR if that's easier than tacking it on.

@1tnguyen
Collaborator

Happy to open a follow-up PR if that's easier than tacking it on.

@splch Thank you very much for verifying this fix.
Yeah, I think it'd be great if you could open a follow-up PR after this PR is merged in.

@1tnguyen
Collaborator

1tnguyen commented Apr 29, 2026

/ok to test a9484d1

Command Bot: Processing...

Collaborator

@1tnguyen 1tnguyen left a comment


LGTM 👍

@1tnguyen 1tnguyen added this pull request to the merge queue Apr 29, 2026
Merged via the queue into NVIDIA:main with commit 733413f Apr 29, 2026
229 of 230 checks passed
github-actions Bot pushed a commit that referenced this pull request Apr 29, 2026
taalexander pushed a commit to taalexander/cuda-quantum that referenced this pull request Apr 29, 2026
## Summary

Follow-up to NVIDIA#4395 (which fixed `cudaq.observe` broadcast on REST QPU
targets). That PR added regression tests for OQC and Quantinuum but not
IonQ, even though NVIDIA#4363 explicitly listed IonQ as affected. IonQ uses
the same `BaseRemoteRESTQPU` path, so an analogous test in
`test_IonQ.py` closes the coverage gap.

The new test mirrors `test_OQC_observe_broadcast` /
`test_quantinuum_observe_broadcast` exactly: a 4-sample parameter sweep
through `cudaq.observe(...)` on `spin.z(0)`, with results compared to
the analytical `cos(theta)` answer.

## Verification

I reproduced the original `TypeError` against `target='ionq'` (both
`emulate=True` and the real cloud `qpu='simulator'`), applied the fix
from NVIDIA#4395, and confirmed the broadcast call now returns correct
expectation values within shot noise. The new test passes on the
post-NVIDIA#4395 main.

## Test plan

- [x] `yapf --style google` clean
- [x] Manually verified against the real IonQ cloud simulator (broadcast
`observe`, 3 parameter sets, 200 shots, results within 0.02 of
analytical `cos(theta)`)
- [ ] CI:
`python/tests/backends/test_IonQ.py::test_ionq_observe_broadcast`

Signed-off-by: Spencer Churchill <[email protected]>

Development

Successfully merging this pull request may close these issues.

cudaq.observe broadcast crashes with TypeError on all REST QPU targets (OQC, Quantinuum, …)
