MemPostings.Delete(): make pauses to unlock and let the readers read #15242
jesusvazquez merged 3 commits into prometheus:main
Conversation
This introduces back some unlocking that was removed in prometheus#13286, but in a more balanced way, as suggested by @pracucci.

For TSDBs with a lot of churn, Delete() can take a couple of seconds, and while it's holding the mutex, reads and writes are blocked waiting for it, increasing the number of connections handled and memory usage.

This implementation pauses every 4K labels processed (note that, compared to prometheus#13286, we're no longer processing all the label-values, only the affected ones, because of prometheus#14307), makes sure that it's possible to get the read lock, and waits for a few milliseconds more.

Signed-off-by: Oleg Zaytsev <[email protected]>
Co-authored-by: Marco Pracucci <[email protected]>
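The pausing strategy described above can be sketched roughly as follows. This is an illustrative, simplified sketch, not the actual Prometheus code: the type, field names, and the `pauseEvery`/`pauseDuration` constants are assumptions mirroring the description (pause every 4K labels, confirm a reader could get the lock, then wait a few milliseconds more).

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

const (
	pauseEvery    = 4096                  // labels processed between pauses (per the description)
	pauseDuration = 10 * time.Millisecond // extra time handed to waiting readers (assumed value)
)

// memPostings is a hypothetical, simplified stand-in for the real structure:
// label name -> label value -> sorted list of series IDs.
type memPostings struct {
	mtx sync.RWMutex
	m   map[string]map[string][]uint64
}

// delete removes the given series IDs from every affected label value,
// periodically releasing the write lock so readers can make progress.
func (p *memPostings) delete(deleted map[uint64]struct{}, affected [][2]string) {
	p.mtx.Lock()
	for i, lv := range affected {
		if i > 0 && i%pauseEvery == 0 {
			// Pause: let blocked readers (and writers) acquire the mutex.
			p.mtx.Unlock()
			p.mtx.RLock() // make sure a reader could actually get in
			p.mtx.RUnlock()
			time.Sleep(pauseDuration)
			p.mtx.Lock()
		}
		name, value := lv[0], lv[1]
		p.m[name][value] = filterOut(p.m[name][value], deleted)
		if len(p.m[name][value]) == 0 {
			delete(p.m[name], value)
		}
	}
	p.mtx.Unlock()
}

// filterOut returns the IDs in ids that are not in deleted, in a new
// backing array (readers may still be holding the old slice).
func filterOut(ids []uint64, deleted map[uint64]struct{}) []uint64 {
	out := make([]uint64, 0, len(ids))
	for _, id := range ids {
		if _, ok := deleted[id]; !ok {
			out = append(out, id)
		}
	}
	return out
}

func main() {
	p := &memPostings{m: map[string]map[string][]uint64{
		"env": {"prod": {1, 2, 3, 4}},
	}}
	p.delete(map[uint64]struct{}{2: {}, 4: {}}, [][2]string{{"env", "prod"}})
	fmt.Println(p.m["env"]["prod"]) // [1 3]
}
```

The key detail is the `RLock`/`RUnlock` pair inside the pause: it guarantees the write lock was genuinely released long enough for at least one reader to acquire it, rather than the writer immediately re-grabbing the mutex.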
0fee8a8 to b11d1df
codesome
left a comment
LGTM. But can we test it somewhere before merging, to roughly verify that it's doing the intended thing?
I've always wanted to give … As I see it, this patch reduces lock contention, but in this situation, for this setup, I'm also seeing the benefit of the concurrency that was introduced in #15239 and reverted. The 3 instances were fed with: … and queried with: … The prometheus instances were run with …
Signed-off-by: Oleg Zaytsev <[email protected]>
Thank you for testing this, @machine424. We also tested this in our Mimir environment (which runs the same TSDB), and after experimenting a little, I pushed another change that pauses more often: every 512 deleted series. Note that the previous change looked good under normal load, but in scenarios with a lot of series churn (we have instances churning more than 750K series), the parallel approach is still not enough, as we'd still block all reads for seconds. I'll try to gather some graphs of our test on Monday, but so far this is looking good.
The change introduced in prometheus#14307 was great for the performance of MemPostings.Delete(): we know which labels we have to touch, so we just grab the lock once and keep processing them until we're done. While this is the best for MemPostings.Delete() itself, it's much worse for the other users of that mutex: readers and new series writes. These now have to wait until we've deleted all the affected postings, which are potentially millions. Our operation isn't so urgent, but we're not letting other users grab the mutex.

While prometheus#15242 proposes a solution for this by performing small pauses from time to time and letting other callers take the mutex, it still doesn't address the elephant in the room: we're just doing too much work while holding the write mutex, and that is exclusive time for us. We can spread it over a longer period, decreasing the impact, but the overall exclusive time is the same, and we should try to address that.

What's the long operation we're doing while holding the write mutex? We're filtering the postings list for each label by building a new one, looking up in a hashmap whether each of its elements is deleted. These lists are potentially tens or hundreds of thousands of elements each.

Why don't we build that list while holding just an RLock()? Well, there's a small issue with that: maybe a new series is added to the list after we've built the filtered replacement postings list, and before we take the write mutex. The good news is that postings are always appended in order to the list, and we are the only ones who delete things from it, so if we just check whether the list grew, we can know for sure whether we're missing something. Moreover, we don't even need to rebuild the entire list if something was added: we just need to append the extra elements that the list has now compared to our snapshot.
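The snapshot-and-reconcile idea above can be sketched as follows. This is a hedged, simplified illustration (the names `postingsMap` and `deleteFrom` are made up for the example, not Prometheus identifiers): filter under the read lock only, then, under the write lock, append any elements that were added to the list after the snapshot was taken. This works only because of the two invariants stated above: postings are append-only in order, and only this code path deletes from the list.

```go
package main

import (
	"fmt"
	"sync"
)

// postingsMap is a hypothetical stand-in: label value -> sorted series IDs.
type postingsMap struct {
	mtx sync.RWMutex
	m   map[string][]uint64
}

// deleteFrom removes the given series IDs from the postings of key,
// doing the expensive filtering under the read lock only.
func (p *postingsMap) deleteFrom(key string, deleted map[uint64]struct{}) {
	// Phase 1: snapshot and filter while holding only the read lock.
	p.mtx.RLock()
	snapshot := p.m[key]
	p.mtx.RUnlock()

	filtered := make([]uint64, 0, len(snapshot))
	for _, id := range snapshot {
		if _, ok := deleted[id]; !ok {
			filtered = append(filtered, id)
		}
	}

	// Phase 2: swap under the write lock, reconciling any growth.
	p.mtx.Lock()
	current := p.m[key]
	if len(current) > len(snapshot) {
		// New series were appended while we were filtering. They were
		// created after the deletion set was decided, so they can't be
		// deleted: just append the new tail to our filtered list.
		filtered = append(filtered, current[len(snapshot):]...)
	}
	p.m[key] = filtered
	p.mtx.Unlock()
}

func main() {
	p := &postingsMap{m: map[string][]uint64{"prod": {1, 2, 3, 4, 5}}}
	p.deleteFrom("prod", map[uint64]struct{}{2: {}, 5: {}})
	fmt.Println(p.m["prod"]) // [1 3 4]
}
```

Note that the length comparison is a sufficient growth check precisely because nothing else removes elements: if `len(current) == len(snapshot)`, the list is unchanged.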
Finally, one last thing to address: with this approach we'd again be taking the write mutex once per affected label value, which is a lot; it causes too-long compactions under heavy read pressure, and we mitigated that in the past. We also can't just build all the replacement lists and then swap them in one go: we would be referencing too much memory. It is true that while we're swapping, some reader may still be referencing the old slice, but not at an arbitrary scale: for a 2M-series TSDB, a label value which references all series (like the typical env=prod) has 16MB of postings (2M * 8B). So we process labels in batches, with at most 128 labels per batch (so ideally we take the write mutex once per 128 labels) and at most 10*len(allPostings) postings in a batch (in our example that would be an extra 160MB allocated temporarily, which should be negligible in a 2M-series instance).

Signed-off-by: Oleg Zaytsev <[email protected]>
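The batching heuristic in the last paragraph can be sketched like this. Again a hedged illustration, not the real code: `planBatches`, `labelPostings`, and the constants are hypothetical names mirroring the description (at most 128 labels per batch, and at most 10x the size of the all-postings list referenced per batch).

```go
package main

import "fmt"

const (
	maxLabelsPerBatch   = 128 // write-lock taken roughly once per 128 labels
	postingsPerBatchMul = 10  // batch may reference at most 10x len(allPostings)
)

// labelPostings pairs one affected label value with its postings list.
type labelPostings struct {
	name, value string
	postings    []uint64
}

// planBatches groups the affected label values so that flushing each
// batch (one write-lock acquisition) stays within both limits.
func planBatches(affected []labelPostings, allPostingsLen int) [][]labelPostings {
	maxPostings := postingsPerBatchMul * allPostingsLen
	var batches [][]labelPostings
	var cur []labelPostings
	curPostings := 0
	for _, lp := range affected {
		// Start a new batch if adding this label would exceed either cap.
		if len(cur) > 0 && (len(cur) >= maxLabelsPerBatch || curPostings+len(lp.postings) > maxPostings) {
			batches = append(batches, cur)
			cur, curPostings = nil, 0
		}
		cur = append(cur, lp)
		curPostings += len(lp.postings)
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches
}

func main() {
	// Three 50-element lists against a tiny all-postings length (5) give a
	// 50-posting cap per batch, forcing one label per batch.
	affected := []labelPostings{
		{"env", "prod", make([]uint64, 50)},
		{"job", "api", make([]uint64, 50)},
		{"pod", "a", make([]uint64, 50)},
	}
	fmt.Println(len(planBatches(affected, 5))) // 3
}
```

With the 2M-series example from the text, the postings cap bounds the temporary memory held between write-lock acquisitions at roughly 10 * 2M * 8B = 160MB.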
While this improves the situation, we've seen that it's not a perfect solution, and we still see some impact on reads while doing the compaction. I would like us to consider a different approach: #15310. Edit: that one still needs some more love, so I think we can proceed with merging this one while I polish the other.
Signed-off-by: Oleg Zaytsev <[email protected]>
pracucci
left a comment
Thanks Oleg for this work! It really helped a lot (as the screenshots show).
Port prometheus/prometheus#15242 to our fork temporarily until it is upstreamed. Signed-off-by: Giedrius Statkevičius <[email protected]>
jesusvazquez
left a comment
LGTM. It seems a straightforward change that has brought some good results for downstream projects using the TSDB.
I see that both Thanos and Mimir have already taken this commit in, so I see no reason to hold it further.
Let's export that metric in Prometheus too (I was looking at it in Mimir, where it has been exported for a year).
Signed-off-by: Vladimir Varankin <[email protected]>
Is there a chance we could backport this into 2.55? That way we could also patch it into Thanos quicker, without having to upgrade to v3 of Prometheus.


