django.db.utils.OperationalError: (1114, "The table 'huey_monitor_taskprogressmodel' is full") #46
Comments
That's a good point. The initial idea was to avoid database locking mechanisms when tasks run in parallel. But I think that should be rethought.
For the last 3 days I've been using this minimal rewrite (to keep maximum compatibility with your existing code):
Can we move on with this PR (#44)? I don't really understand what you expect from your comment.
I cannot do anything further unless you give me more precise instructions, in non-technical language that I can understand (see my answer from 1 month ago). Kind regards
@formacube Maybe you want to share your opinion here: #57 That will fix this issue.
Remove the `TaskProgressModel` model that was used to store progress information. The old approach has some disadvantages:
* Every `process_info.update()` call creates one `TaskProgressModel` instance, and these rows were never cleaned up. So if many items were processed, the table could fill up completely. This PR fixes #46.
* If many small `process_info.update()` calls happen, we get a high database load.

Refactor that completely by using the Django cache to store progress information. When a task has finished, transfer the information from cache to database.
Will be fixed by: #58
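The cache-then-persist idea described above can be sketched roughly like this. A plain dict stands in for Django's cache backend and another for the database table; the class and method names are assumptions for illustration, not huey-monitor's actual API:

```python
class CachedProgressTracker:
    """Sketch of the refactoring idea: buffer progress counts in a
    cache, and persist to the database only once, when the task
    finishes. (Names are hypothetical, not huey-monitor's API.)"""

    def __init__(self):
        self._cache = {}    # stands in for django.core.cache
        self.db_rows = {}   # stands in for the TaskProgressModel table

    def update(self, task_id: str, n: int = 1) -> None:
        # Cheap in-memory increment on every progress tick: no DB write.
        self._cache[task_id] = self._cache.get(task_id, 0) + n

    def finish(self, task_id: str) -> None:
        # A single DB write per task, transferring the cached total.
        self.db_rows[task_id] = self._cache.pop(task_id, 0)


tracker = CachedProgressTracker()
for _ in range(500):
    tracker.update("task-1")
# Nothing has hit the "database" yet, no matter how many updates ran.
tracker.finish("task-1")
print(tracker.db_rows)  # → {'task-1': 500}
```

The key trade-off: progress shown while a task runs comes from the cache (which may be volatile), but the database sees exactly one write per task instead of one per progress tick.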
Don't create a new `TaskProgressModel` instance for every `ProcessInfo.update()` call; instead, increment an existing `TaskProgressModel` instance. So we will not flood the database ;) Based on #67
Fix #46 by incrementing existing `TaskProgressModel` instances
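The "increment an existing row" approach can be illustrated with plain SQLite. The table and column names mirror the real ones from the traceback, but the upsert SQL here is an illustration of the pattern, not huey-monitor's actual implementation (which goes through the Django ORM):

```python
import sqlite3

# One row per task: insert on the first progress report, then
# increment progress_count in place on every later report, so the
# table never grows with the number of update() calls.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE huey_monitor_taskprogressmodel ("
    "task_id TEXT PRIMARY KEY, progress_count INTEGER NOT NULL)"
)


def update_progress(task_id: str, n: int = 1) -> None:
    # Upsert: first call inserts the row, later calls increment it.
    con.execute(
        "INSERT INTO huey_monitor_taskprogressmodel"
        " (task_id, progress_count) VALUES (?, ?)"
        " ON CONFLICT(task_id) DO UPDATE SET"
        " progress_count = progress_count + excluded.progress_count",
        (task_id, n),
    )


for _ in range(1000):
    update_progress("abc123")

row = con.execute(
    "SELECT COUNT(*), MAX(progress_count)"
    " FROM huey_monitor_taskprogressmodel"
).fetchone()
print(row)  # → (1, 1000): one row total, not 1000 rows
```

In Django terms the increment would be an atomic `F('progress_count') + n` update, which also avoids read-modify-write races between parallel workers.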
Hi @jedie,
I had a bad surprise this morning while checking on the progress of a process whose first sub-process had been running for more than 24h.
There was an error:
django.db.utils.OperationalError: (1114, "The table 'huey_monitor_taskprogressmodel' is full")
Then I realized that for each tiny progress report, a record is created for `TaskProgressModel`.

Could you please tell me what the benefit is of creating a new `TaskProgressModel` (even two, when you want to report progress on both the sub-task and the main task) each time you report progress, vs. directly incrementing the `progress_count` field of an existing `TaskProgressModel`?

That would mean that, in general, we would have only one `TaskProgressModel` for each `TaskModel`.

Thanks in advance for your feedback