gh-66587: Fix deadlock from pool worker death without communication #16103
applio wants to merge 6 commits into python:main
Conversation
…ueue; adds test for issue22393/issue38084.
This looks good to me, simply a few remarks:
Also pinging @tomMoral
For my part, I think this fix seems more elegant than #10441, but the tests in that PR seem to have more coverage. I personally prefer to just have the task fail, and the pool continue. The current behaviour is that the broken worker is immediately replaced and other work continues, but if you wait on the failed task then it will never complete. Now it does complete (with a failure), which means robust code can re-queue it if appropriate. I don't see any reason to tear down the entire pool. Few comments on the PR incoming.
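The "re-queue on failure" idea in the comment above can be sketched as follows. This is an illustration only, not code from the PR: the names (`flaky`, `run_with_retry`) are hypothetical, and threads are used purely so the example runs portably; the PR itself is about process pools.

```python
from concurrent.futures import ThreadPoolExecutor

# Track attempts so the first call can simulate a task that fails.
attempts = {"n": 0}

def flaky(x):
    attempts["n"] += 1
    if attempts["n"] == 1:
        # Stand-in for the task failing with RuntimeError("Worker died")
        # instead of hanging forever, which is the behaviour this PR adds.
        raise RuntimeError("Worker died")
    return x * 2

def run_with_retry(pool, fn, arg, retries=1):
    # Because the doomed task now completes with a failure, calling code
    # can observe the exception and re-queue the task if appropriate.
    for _ in range(retries + 1):
        future = pool.submit(fn, arg)
        try:
            return future.result(timeout=5)
        except RuntimeError:
            continue  # task failed: re-queue it
    raise RuntimeError("task failed after retries")

with ThreadPoolExecutor(max_workers=1) as pool:
    result = run_with_retry(pool, flaky, 21)
```

Without the fix, `future.result()` on the failed task would simply never return, so no retry logic in user code could help.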
Lib/multiprocessing/pool.py
Outdated
pid = worker.ident
worker.join()
cleaned = True
if pid in job_assignments:
if pid in job_assignments:
    job = job_assignments.pop(pid, None)
    if job:
        outqueue.put((job, i, (False, RuntimeError("Worker died"))))
And some additional simplification below, of course.
tomMoral
left a comment
Here is a batch of comments.
I have to say that I like this solution, as it is the most robust way of handling this (a kind of scheduler). But it also comes with more complexity and increased communication needs -> more chances for deadlocks.
One of the main arguments for the fail-on-error design is that there is no way to know in the main process whether the worker that died held a lock on one of the communication queues. In that situation, the only way to recover the system and avoid a deadlock is to kill the Pool and re-spawn one.
    job_assignments[value] = job
else:
    try:
        cache[job]._set(i, (task_info, value))
Why don't you remove the job from job_assignments here? It would avoid an unnecessary operation when a worker exits gracefully.
Co-Authored-By: Steve Dower <[email protected]>
taleinat
left a comment
Additional tests would certainly be a good idea.
# Issue22393: test fix of indefinite hang caused by worker processes
# exiting abruptly (such as via os._exit()) without communicating
# back to the pool at all.
prog = (
This can be written much more clearly using a multi-line string. See for example a very similar case in test_shared_memory_cleaned_after_process_termination in this file.
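A sketch of the multi-line-string style the reviewer suggests, using `textwrap.dedent`. The body of `prog` shown here is an assumption about what the original tuple-of-strings contained (a worker that dies via `os._exit()`), not the actual test code:

```python
import textwrap

# Hypothetical reconstruction of `prog` as one dedented multi-line
# string, per the review suggestion.
prog = textwrap.dedent("""\
    import os
    from multiprocessing import Pool

    def f(x):
        os._exit(1)   # worker dies without communicating back

    if __name__ == '__main__':
        with Pool(1) as pool:
            pool.apply(f, (1,))
        print('pool handled the dead worker')
    """)
```

Compared with a parenthesized tuple of string fragments, this reads as a single block of code and is harder to break when editing.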
# Only if there is a regression will this ever trigger a
# subprocess.TimeoutExpired.
completed_process = subprocess.run(
    [sys.executable, '-E', '-S', '-O', '-c', prog],
The '-O' flag probably shouldn't be used here, but '-S' and '-E' seem fine.
Also, consider calling test.support.script_helper.interpreter_requires_environment(), and only use the '-E' flag if that returns False, as done by the other Python-script-running utilities in test.support.script_helper.
Or just use test.support.script_helper.run_python_until_end() instead of subprocess.run().
@applio, I'm not sure where this one is at, but I believe there are some comments that still need to be addressed. I don't know if it's waiting on anything else, but it would probably be nice to get this merged.
Closing and re-opening to re-trigger CI.
This missed the boat for inclusion in Python 3.9, which accepts security fixes only as of today.
The following commit authors need to sign the Contributor License Agreement: |
Adds tracking of which worker process in the pool takes which job from the queue.
When a worker process dies without communication, its task/job is also lost. By tracking which job each worker took off the job queue, the parent process can, upon detecting the death, add an item to the result queue indicating the failure of that task/job.
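The bookkeeping described above can be sketched in miniature. This is an illustration of the idea, not the actual `pool.py` internals: the helper names are hypothetical, and a plain `queue.Queue` stands in for the pool's result queue.

```python
import queue

job_assignments = {}      # worker pid -> (job, i) currently in flight
outqueue = queue.Queue()  # stand-in for the pool's result queue

def record_task(pid, job, i):
    # Called when a worker takes a task off the job queue.
    job_assignments[pid] = (job, i)

def task_done(pid):
    # Called when a worker reports a result normally.
    job_assignments.pop(pid, None)

def handle_worker_death(pid):
    # On detecting a dead worker, fail its in-flight job so that
    # anyone waiting on it completes instead of hanging forever.
    assignment = job_assignments.pop(pid, None)
    if assignment is not None:
        job, i = assignment
        outqueue.put((job, i, (False, RuntimeError("Worker died"))))

record_task(1234, job=7, i=0)   # worker 1234 picks up job 7
handle_worker_death(1234)       # worker 1234 dies without communicating
```

After the death is handled, the result queue holds a failure entry for job 7, and the assignment table is empty again.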
In case of a future regression, the supplied test uses subprocess to constrain itself with a timeout, ensuring an indefinite hang does not stall the running of the test suite.
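The containment strategy can be sketched as below: run the hang-prone program in a child interpreter under a hard timeout, so that a regression surfaces as `subprocess.TimeoutExpired` instead of hanging the whole suite. Here `prog` is a trivial stand-in for the real test program, not the PR's actual test body:

```python
import subprocess
import sys

# Stand-in program; the real test spawns a Pool whose worker dies.
prog = "print('pool did not hang')"

# -E/-S isolate the child from environment variables and site-packages;
# timeout= converts an indefinite hang into subprocess.TimeoutExpired.
completed_process = subprocess.run(
    [sys.executable, '-E', '-S', '-c', prog],
    capture_output=True, text=True, timeout=60,
)
```

If the fix regresses, the `run()` call raises `subprocess.TimeoutExpired` after 60 seconds rather than blocking forever.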
https://bugs.python.org/issue22393