Conversation
At the moment:
Everything else should be implemented. It seems like there are still scheduling issues somewhere, but only a few should be left, as I have already corrected quite a few.
It is performing circuit generation and simulation. In theory, the cache should allow that CI job to run in ~7 minutes, but it was written under the assumption that we could update a cache. It turns out that is not the case (see the "Update a cache" section). I will try the code provided as an example in the above section, but that might lead to a lot of caches that end up being unused. Let's try it out.
Yes :)
So now in theory:
I will see if there is a solution to automatically delete cache entries when merging the PR.
@purva-thakre could you double check the last changes made to CI?
I re-ran the doc job to see if the cache works.
@nelimee I see a new cache was added 3 hours ago. This PR is using the new cache. Maybe when Yiming noticed the docs build took longer, it was because the cache was being added. Now, the time taken by the docs build is approximately the same on main and this PR.
|
```diff
  path: docs/_examples_database/
- key: docs_database_and_stim_results_cache
+ # See https://github.com/actions/cache/blob/main/tips-and-workarounds.md#update-a-cache
+ key: docs_database_and_stim_results_cache-${{ github.run_id }}
```
@nelimee we don't want to create a cache for every run, right?
> a new cache is created at each workflow run,

This is going to increase the docs build time for the first time for each run.
Ideally, we would like to avoid having one cache per run... In practice, that seems hard. I plan to add a job to delete caches from a PR when the PR is closed to "solve" that.
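The plan above (deleting a PR's caches when it closes) can be sketched following the pattern from the actions/cache tips-and-workarounds document. The workflow below is a sketch, not the actual job added in this PR; the workflow name and details are assumptions, and it relies on the default `GITHUB_TOKEN` having `actions: write` permission:

```yaml
# Sketch of a cache-cleanup workflow (adapted from the actions/cache
# tips-and-workarounds document); names are illustrative.
name: Cleanup PR caches
on:
  pull_request:
    types: [closed]

jobs:
  cleanup:
    runs-on: ubuntu-latest
    permissions:
      # Required so that `gh cache delete` is authorized.
      actions: write
    steps:
      - name: Delete caches created by this PR
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GH_REPO: ${{ github.repository }}
          BRANCH: refs/pull/${{ github.event.pull_request.number }}/merge
        run: |
          # List every cache attached to the PR's merge ref, then delete each one.
          gh cache list --ref "$BRANCH" --limit 100 --json id --jq '.[].id' \
            | while read -r id; do gh cache delete "$id"; done
```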
> a new cache is created at each workflow run,
> This is going to increase the docs build time for the first time for each run.

No, explanations in my comment below.
My idea behind this was that if we need to update a cache, we could manually delete it and let the first run add a new one. Subsequent runs shouldn't take as long.
```yaml
key: docs_database_and_stim_results_cache-${{ github.run_id }}
enableCrossOsArchive: true
restore-keys: |
  docs_database_and_stim_results_cache
```
I do not think this line is working. It is supposed to restore the run-specific cache to this key. The workflow goes through the steps as expected, but the cache with this key is not restored at all. The original cache was used/restored 4 hours ago.
I think it is working. What this does is: when the cache misses (which should be every time, because we create a new cache at each workflow run), it creates a new cache and pre-populates it with the content of the most recent cache whose key starts with one of the entries in `restore-keys`. So here, the workflow will always:

- cache miss, so create a new cache,
- find a previous cache whose key starts with `docs_database_and_stim_results_cache` and pre-populate the new cache with the content of that old cache,
- perform documentation generation, potentially updating the new cache at the end.

The original cache on main should be used only once, at the first workflow run of each PR. Then, the cache created by the first successful workflow run will be used as a base for the next one, which will in turn be used as a base for the next one, and so on.
That's the only solution I found.
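Putting the two pieces together, the mechanism described above could look like the following `actions/cache` step. This is a sketch for illustration; the step name and surrounding job are assumptions, since the full workflow file is not shown in this thread:

```yaml
# Sketch of the cache step discussed above; the step name is illustrative.
- name: Cache documentation database and stim results
  uses: actions/cache@v4
  with:
    path: docs/_examples_database/
    # The run id makes the key unique, so the exact key never matches an
    # existing cache, and a fresh cache is saved at the end of the run.
    key: docs_database_and_stim_results_cache-${{ github.run_id }}
    enableCrossOsArchive: true
    # On the (expected) cache miss, restore-keys falls back to the most
    # recent cache whose key starts with this prefix, pre-populating the
    # workspace with the previous run's content.
    restore-keys: |
      docs_database_and_stim_results_cache
```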
Ok. I think I need some time to look over the new CI changes. I won't be able to over the next few days, it's a long holiday weekend in the US.
Feel free to merge the PR, I can make another PR if I think your changes require some additional tweaks.
Yes, except if the cache is on the
@inmzhang I think this can be merged. I'll try, if I have the time, to do a final re-review and full test to check one last time, but I think the implementation should be pretty robust.
I re-ran the complete set of tests and searched for unaddressed TODOs in the code (found one in
Merging as soon as CI completes with expected timings and no errors.
Cache seems to not be working as expected. Will try to fix that later. |
That was due (I think) to the original cache
🪓 PR closed, deleted preview at https://tqec.github.io/tqec/pull/562/ |
That seems to have worked flawlessly: the 3 caches created in this branch were deleted in less than 10 seconds of CI time.
I forgot to bump the detector database version number. I am not sure it was needed, though; just adding this comment so we think about it.






This PR implements the `CubeBuilder` and `PipeBuilder` interfaces for the fixed parity convention.

TODO list:

- `Plaquette.project_on_data_qubit_indices` does the opposite of what the documentation says it does. It is only used in `ExtendedPlaquette`, so it should not be too much of a hassle to fix.