fix: fix log producer stopping block indefinitely #1783
lefinal wants to merge 1 commit into testcontainers:main
Conversation
The mentioned issue regarding panics is not the same. However, the changes from #1777 will accomplish the same thing. The author just uses
So, if you don't mind 🙏, once we merge that one (already reviewed, tested and commented on by the community), we can close this one.
Absolutely! I only wanted to provide a quick fix in case #1777 takes longer to review and merge :)
We're currently hitting this issue testcontainers/testcontainers-go#1783 and the forked version fixes it. When that fix is upstreamed we may move back to an official build.
Any updates on this?
I think #1971 could be related to this.
Hey @lefinal, did you check #1783 (comment)? Please let me know if I can close this one 🙏 Cheers!
I'm going to close this one, as it has been superseded by the current state of the main branch and by #2576. Thanks!
Fixes an issue where stopping the log producer would block indefinitely if the log producer has already exited.
What does this PR do?
Fixes stopping the log producer blocking indefinitely by no longer requiring a read on the stopProducer channel, and simply closing it instead. There is no synchronization for the log producer to finish running anyway, so this should be fine.
Why is it important?
If the log producer exits due to an exited container, StopLogProducer will hang indefinitely.
Related issues
Closes #1407
Closes #1669
How to test this PR
A test has been added.
Follow-ups
stopProducer is currently not safe for concurrent use. Maybe add some synchronization in the future.