Avoid going through unsaved data limit throttler for reads #6072

@danielmewes

Description

At the moment, both read and write transactions go through the throttler in the cache that limits the amount of unsaved data. If the unsaved data limit is exhausted, this means that even reads queue up on the throttler and have to get in line with writes.

This was originally necessary to ensure the ordering of writes and reads, but we have since changed the mechanism by which the ordering of transactions is ensured (it now happens in the cluster layer rather than the cache). In fact, we have special code elsewhere that allows reads to skip ahead of writes when acquiring the superblock.

I need to double-check, but I believe that we can simply bypass the throttler for read transactions. That would avoid potentially extremely high read latencies during heavy write load.
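To illustrate the idea, here is a minimal, hypothetical sketch (not RethinkDB's actual code; the class and method names are made up) of an unsaved-data throttler where write transactions block once the limit is exhausted, while read transactions bypass the throttler entirely:

```python
import threading

class UnsavedDataThrottler:
    """Hypothetical sketch of a cache throttler that limits unsaved (dirty) data.

    Writes that would exceed the limit wait until a flush frees capacity;
    reads skip the throttler entirely, as proposed in this issue.
    """

    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.unsaved = 0
        self.cond = threading.Condition()

    def begin_txn(self, is_read, write_size=0):
        if is_read:
            # Reads never touch the throttler, so they cannot
            # queue up behind writes when the limit is exhausted.
            return
        with self.cond:
            # Writes wait until their dirty data fits under the limit.
            while self.unsaved + write_size > self.limit:
                self.cond.wait()
            self.unsaved += write_size

    def on_flush(self, flushed_bytes):
        # Called when dirty data has been written back to disk.
        with self.cond:
            self.unsaved -= flushed_bytes
            self.cond.notify_all()
```

Under this sketch, a read transaction returns immediately even while the unsaved-data limit is fully exhausted, whereas a write of any size would block until `on_flush` frees capacity.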

If this turns out to be safe, I'd actually like to try getting this into 2.3.5 (@larkost: that means we will have to run the tests again on at least one platform).
