
Conversation

@hebasto (Member) commented Jun 17, 2020

This PR replaces `RecursiveMutex CTxMemPool::cs` with `Mutex CTxMemPool::cs`.

All of the related code branches are covered by appropriate lock assertions to ensure that the mutex locking policy has not been changed by accident.

Related to #19303.
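In rough terms, the change means the mempool's internal helpers can no longer re-acquire the mutex themselves; every internal path must be reached with the lock already held exactly once. A minimal sketch of that discipline (illustrative names only; `std::mutex` stands in for Bitcoin Core's `Mutex`/`LOCK` wrappers from `sync.h`):

```cpp
// Sketch only: not the actual CTxMemPool code. With a non-recursive mutex,
// an internal helper must never lock cs itself (that would deadlock), so it
// documents and relies on the caller's lock instead.
#include <cassert>
#include <map>
#include <mutex>
#include <string>

class TxMemPoolSketch {
    std::mutex cs;                       // was a recursive mutex before the change
    std::map<std::string, int> entries;  // hypothetical mempool contents

    // Internal helper: the caller must already hold cs.
    int sizeInternal() const { return static_cast<int>(entries.size()); }

public:
    void add(const std::string& txid) {
        std::lock_guard<std::mutex> lock(cs);
        entries[txid] = 1;
        // Safe: we hold cs exactly once, and the helper does not re-lock it.
        (void)sizeInternal();
    }

    int size() {
        std::lock_guard<std::mutex> lock(cs);
        return sizeInternal();
    }
};
```

With a recursive mutex, `add()` could simply have called a public, self-locking `size()`; the refactor forces the locking layer (public, takes the lock) and the working layer (internal, assumes the lock) to be separated explicitly.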

@DrahtBot (Contributor) commented Jun 17, 2020

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Conflicts

Reviewers, this pull request conflicts with the following ones:

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

@hebasto (Member, Author) commented Jun 18, 2020

From IRC:

<sipa> eh, i agree - that change isn't a step in the right direction

@sipa Which direction is right?

@hebasto (Member, Author) commented Jun 18, 2020

From IRC:

<jeremyrubin> I think it's basically getting rid of a recursive mutex for code that's still designed to take a recursive mutex
<gwillen> it's better than a true recursive mutex because it's not possible to recurse by accident, you have to declare at call time which behavior you want (although better if you had to declare statically at compile time)
<sipa> it looks like that
<jeremyrubin> The correct refactor would be to make the code not do anything fancy with locks, or to just leave it
<jeremyrubin> gwillen: I think the chances of a bug or error in custom logic is higher than a recursive mutex
<jeremyrubin> accidental recursion seems unlikely...
<jeremyrubin> and accidental recursion would be a bug lock or no
<gwillen> (sorry I mean, accidental mutex recursion, that is, calling a function while holding a mutex, not expecting the callee to also lock it, resulting in the callee violating the caller's invariants)
<gwillen> (this is the fundamental problem of recursive mutexes)
<gwillen> (and I assume the motivation behind hebasto's refactor)

Correct, @gwillen :)
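The hazard gwillen describes can be sketched concretely (names are illustrative, not from the codebase): a caller holds the mutex with an invariant temporarily broken, and a callee re-locks the same recursive mutex, silently succeeding and observing the broken state. With a plain mutex the nested lock would deadlock instead, forcing the layering to be made explicit.

```cpp
// Sketch of accidental mutex recursion with std::recursive_mutex.
// Invariant: balance_a + balance_b == 100, except mid-transfer.
#include <cassert>
#include <mutex>

std::recursive_mutex m;
int balance_a = 50, balance_b = 50;

int TotalUnderLock() {
    // Nested acquisition by the same thread: succeeds silently.
    std::lock_guard<std::recursive_mutex> lock(m);
    return balance_a + balance_b;
}

int TransferAndPeek() {
    std::lock_guard<std::recursive_mutex> lock(m);
    balance_a -= 10;              // invariant temporarily broken here...
    int seen = TotalUnderLock();  // ...yet the callee re-locks and sees 90
    balance_b += 10;              // invariant restored
    return seen;
}
```

The callee observes a total of 90 even though no other thread ever sees an inconsistent state; the recursive mutex hid a layering bug rather than preventing one.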

@hebasto (Member, Author) commented Sep 6, 2020

Rebased 1faf43a -> a887d73 (pr19306.10 -> pr19306.12) due to the merge conflicts.

@hebasto hebasto marked this pull request as ready for review September 6, 2020 12:46
@hebasto hebasto marked this pull request as draft September 6, 2020 18:54
sidhujag pushed a commit to syscoin/syscoin that referenced this pull request Sep 9, 2020: …le cases

020f051 refactor: CTxMemPool::IsUnbroadcastTx() requires CTxMemPool::cs lock (Hennadii Stepanov)
7c4bd03 refactor: CTxMemPool::GetTotalTxSize() requires CTxMemPool::cs lock (Hennadii Stepanov)
fa5fcb0 refactor: CTxMemPool::ClearPrioritisation() requires CTxMemPool::cs lock (Hennadii Stepanov)
7140b31 refactor: CTxMemPool::ApplyDelta() requires CTxMemPool::cs lock (Hennadii Stepanov)
66e47e5 refactor: CTxMemPool::UpdateChild() requires CTxMemPool::cs lock (Hennadii Stepanov)
9398077 refactor: CTxMemPool::UpdateParent() requires CTxMemPool::cs lock (Hennadii Stepanov)

Pull request description:

  This is another step in transitioning `CTxMemPool::cs` from `RecursiveMutex` to `Mutex`.

  Split out from bitcoin#19306.
  Only trivial thread safety annotations and lock assertions added. No new locks. No behavior change.

  Refactoring `const uint256` to `const uint256&` was [requested](bitcoin#19647 (comment)) by **promag**.

  Please note that now, since bitcoin#19668 has been merged, it is safe to apply `AssertLockHeld()` macros, as they no longer swallow compile-time Thread Safety Analysis warnings.

ACKs for top commit:
  promag:
    Core review ACK 020f051.
  jnewbery:
    Code review ACK 020f051
  vasild:
    ACK 020f051

Tree-SHA512: a31e389142d5a19b25fef0aaf1072a337278564528b5cc9209df88ae548a31440e1b8dd9bae0169fd7aa59ea06e22fe5e0413955386512b83ef1f3e7d941e890
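The pattern those commits apply, annotating a method as requiring `cs` and asserting at run time that the caller actually holds it, can be sketched as follows. This is a rough stand-in only: the real code uses `EXCLUSIVE_LOCKS_REQUIRED(cs)` annotations checked by Clang's Thread Safety Analysis plus `AssertLockHeld()` from `sync.h`; here a hypothetical `TrackedMutex` that remembers its owning thread mimics just the runtime half.

```cpp
// Sketch: a lock wrapper that records its owner so a callee can assert
// the caller already holds it (what sync.h does properly in Bitcoin Core).
#include <cassert>
#include <mutex>
#include <thread>

class TrackedMutex {
    std::mutex m;
    std::thread::id owner{};
public:
    void lock()   { m.lock(); owner = std::this_thread::get_id(); }
    void unlock() { owner = std::thread::id{}; m.unlock(); }
    bool held_by_me() const { return owner == std::this_thread::get_id(); }
};

#define ASSERT_LOCK_HELD(mtx) assert((mtx).held_by_me())

TrackedMutex cs;
int g_total_tx_size = 0;  // hypothetical guarded state

int GetTotalTxSize() {    // caller must hold cs (never locks it itself)
    ASSERT_LOCK_HELD(cs);
    return g_total_tx_size;
}

int GetTotalTxSizeSafe() {  // public entry point: takes the lock itself
    std::lock_guard<TrackedMutex> lock(cs);
    g_total_tx_size = 42;   // illustrative update under the lock
    return GetTotalTxSize();
}
```

Calling `GetTotalTxSize()` without holding `cs` trips the assertion immediately, which is what makes it safe to drop the recursive mutex one helper at a time.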
@hebasto hebasto closed this Aug 24, 2021
@bitcoin bitcoin locked as resolved and limited conversation to collaborators Aug 24, 2022