net processing: Move block inventory state to net_processing #19829
Changes from all commits:

- 717a374
- 77a2c2f
- 78040f9
- 53b7ac1
- c853ef0
- 184557e
- 3002b4a
```diff
@@ -791,6 +791,11 @@ void PeerManager::FinalizeNode(const CNode& node, bool& fUpdateConnectionTime) {
     LOCK(cs_main);
     int misbehavior{0};
     {
+        // We remove the PeerRef from g_peer_map here, but we don't always
+        // destruct the Peer. Sometimes another thread is still holding a
+        // PeerRef, so the refcount is >= 1. Be careful not to do any
+        // processing here that assumes Peer won't be changed before it's
+        // destructed.
         PeerRef peer = RemovePeer(nodeid);
         assert(peer != nullptr);
         misbehavior = WITH_LOCK(peer->m_misbehavior_mutex, return peer->m_misbehavior_score);
```
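The comment in this hunk relies on `PeerRef` being a shared-ownership handle (`PeerRef` is a `std::shared_ptr<Peer>`), so erasing the map entry only drops one reference. A minimal standalone sketch of that lifetime behaviour, using simplified stand-in types:

```cpp
#include <cassert>
#include <map>
#include <memory>

// Simplified stand-ins for the real types in net_processing.
struct Peer {
    int m_misbehavior_score{0};
};
using PeerRef = std::shared_ptr<Peer>;

int main()
{
    std::map<int, PeerRef> peer_map;
    peer_map.emplace(0, std::make_shared<Peer>());

    // Simulate another thread that still holds a reference.
    PeerRef held = peer_map.at(0);

    // Erasing the map entry does NOT destruct the Peer: the refcount
    // drops from 2 to 1, so `held` keeps the object alive and it can
    // still be mutated after removal from the map.
    peer_map.erase(0);
    assert(held.use_count() == 1);
    held->m_misbehavior_score = 100;  // still valid

    return 0;
}
```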
```diff
@@ -870,6 +875,7 @@ bool PeerManager::GetNodeStateStats(NodeId nodeid, CNodeStateStats &stats) {
     PeerRef peer = GetPeerRef(nodeid);
     if (peer == nullptr) return false;
     stats.m_misbehavior_score = WITH_LOCK(peer->m_misbehavior_mutex, return peer->m_misbehavior_score);
+    stats.m_starting_height = peer->m_starting_height;

     return true;
 }
```
```diff
@@ -1309,13 +1315,17 @@ void PeerManager::UpdatedBlockTip(const CBlockIndex *pindexNew, const CBlockInde
         }
     }

     // Relay to all peers
-    m_connman.ForEachNode([&vHashes](CNode* pnode) {
-        LOCK(pnode->cs_inventory);
-        for (const uint256& hash : reverse_iterate(vHashes)) {
-            pnode->vBlockHashesToAnnounce.push_back(hash);
+    {
+        LOCK(m_peer_mutex);
+        for (auto& it : m_peer_map) {
```
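The hunk is truncated in this capture. Based on the surrounding discussion, the new block presumably continues by taking each peer's `m_block_inv_mutex` (while `m_peer_mutex` is held) and appending the hashes. A sketch, where the member name `m_blocks_for_headers_relay` is an assumption rather than a quote from the diff:

```cpp
// Sketch only: how the replacement loop presumably continues.
{
    LOCK(m_peer_mutex);
    for (auto& it : m_peer_map) {
        Peer& peer = *it.second;
        // Per-peer lock taken while m_peer_mutex is held (see the
        // lock-order discussion below).
        LOCK(peer.m_block_inv_mutex);
        for (const uint256& hash : reverse_iterate(vHashes)) {
            peer.m_blocks_for_headers_relay.push_back(hash);
        }
    }
}
```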
Quoting the early-return guard in `SendMessages()`:

```cpp
if (!pto->fSuccessfullyConnected || pto->fDisconnect)
    return true;
```

I think this behaviour is ok. The version-verack handshake usually lasts no more than a second or two, and in the worst case times out after a minute, so we're never going to push many block hashes onto this vector.
Commit a50dee50fd2aee93b41bfb3b04b24939d0c2f6c1:

Any reason not to have a `ForEachPeer` member function?
I originally had one, but removed it because it didn't seem worth it. We can revisit that decision later if it seems useful.

`ForEachNode` was useful because it allowed passing a lambda into `CConnman` that would be executed while the `cs_vNodes` lock was held. Here, `m_peer_mutex` lives in `PeerManager` itself, so it can be held directly.
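For illustration, here is the difference between the two patterns. The `ForEachPeer` shown is hypothetical (it is not in the PR); the direct-iteration form is what the new code does:

```cpp
// Pattern 1: callback style, as CConnman::ForEachNode does with cs_vNodes.
// A hypothetical PeerManager::ForEachPeer would hold the lock and invoke
// the caller's lambda for each peer:
template <typename Callable>
void PeerManager::ForEachPeer(Callable&& func)
{
    LOCK(m_peer_mutex);
    for (auto& it : m_peer_map) {
        func(*it.second);
    }
}

// Pattern 2: since m_peer_mutex lives in PeerManager itself, code inside
// the class can simply take the lock and iterate, with no indirection:
{
    LOCK(m_peer_mutex);
    for (auto& it : m_peer_map) {
        Peer& peer = *it.second;
        // ... operate on peer ...
    }
}
```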
I think this is the first place where we acquire a lock while already holding `m_peer_mutex`. Any thoughts on the best way to document this design, so that future contributors know that acquiring `m_peer_mutex` while holding a peer's `m_block_inv_mutex` is not allowed?

Or would it be better to first copy all the `Peer` objects to a temporary place, release `m_peer_mutex`, and then grab the per-peer `m_block_inv_mutex` locks, to avoid holding both at the same time (see the sketch below)? (I don't have an opinion either way; just trying to understand the locking design here.)
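A sketch of that copy-first alternative (hypothetical; not what the PR does). `PeerRef` is a `shared_ptr`, so the snapshot keeps the `Peer` objects alive even if they are removed from `m_peer_map` in the meantime; the per-peer member names are assumed from the discussion above:

```cpp
// Hypothetical alternative: never hold m_peer_mutex and a per-peer mutex
// at the same time by snapshotting the PeerRefs first.
std::vector<PeerRef> peers;
{
    LOCK(m_peer_mutex);
    peers.reserve(m_peer_map.size());
    for (const auto& it : m_peer_map) {
        peers.push_back(it.second);
    }
}  // m_peer_mutex released here

for (const PeerRef& peer : peers) {
    LOCK(peer->m_block_inv_mutex);
    for (const uint256& hash : reverse_iterate(vHashes)) {
        peer->m_blocks_for_headers_relay.push_back(hash);
    }
}
```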
Would a comment next to `m_peer_mutex` suffice here? This design (take `m_peer_mutex` first, then lock the mutexes protecting the individual data members) is analogous to what `ForEachNode()` was doing here before (take `cs_vNodes` first, then, in the lambda, lock the mutexes protecting the individual data members; here that was `cs_inventory`).
Added comments to the first commit. Let me know if you think more is required.
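For reference, the kind of comment being discussed might read roughly as follows (a sketch; the exact wording added in the commit may differ):

```cpp
/** Protects m_peer_map. Lock order: take m_peer_mutex first, then any
 *  mutex inside an individual Peer object (e.g. m_block_inv_mutex).
 *  Never acquire m_peer_mutex while holding one of a Peer's own mutexes. */
mutable Mutex m_peer_mutex;
```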