Conversation

@pstratem (Contributor) commented Apr 9, 2015

The addr/msg_tx logic currently assumes that ProcessMessage/SendMessage are called on a regular timer.

This was (sort of) true in the past, but is no longer true after #5971.

Critical analysis of this PR would be helpful, as it goes beyond just fixing the timing issue.
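For context, a minimal sketch of the per-node scheduling this PR moves toward. The field names follow the `pnode->nNext*` pattern discussed below; the intervals and exact flush logic are illustrative, not the actual patch:

```cpp
// Illustrative sketch only: each peer carries explicit "next send"
// timestamps that SendMessages() checks against the clock, rather than
// assuming it is invoked on a fixed timer.
class CNode {
public:
    int64_t nNextAddrSend = 0; // microsecond timestamp for next addr flush
    int64_t nNextInvSend = 0;  // microsecond timestamp for next inv flush
    // ... existing members ...
};

void SendMessages(CNode* pto)
{
    int64_t nNow = GetTimeMicros();

    if (nNow >= pto->nNextAddrSend) {
        // ... flush pto->vAddrToSend ...
        // Randomize the next send so timing leaks less information.
        pto->nNextAddrSend = nNow + GetRand(30 * 1000000);
    }

    if (nNow >= pto->nNextInvSend) {
        // ... flush queued tx invs ...
        pto->nNextInvSend = nNow + GetRand(5 * 1000000);
    }
}
```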

@jgarzik (Contributor) commented Apr 9, 2015

Any possibility of using the timer stuff being added in #5964?

@pstratem (Contributor Author)

@jgarzik Maybe later; porting won't be difficult.

src/main.cpp (Outdated)

Could that magic number be defined somewhere as a constant? IMHO it's used quite often in this pull.

@pstratem (Contributor Author)

At a later date, in a different PR.
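For illustration, the kind of definition being asked for; the constant's name and value here are hypothetical, not taken from the patch:

```cpp
// Hypothetical named constant replacing the repeated interval literal.
static const int64_t AVG_ADDRESS_BROADCAST_INTERVAL = 30 * 1000000; // microseconds
```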

@sipa (Member) commented Apr 11, 2015

Concept ACK. I love getting rid of the vSendTrickle logic...

@gavinandresen (Contributor)

I'm with @jgarzik: this could be much cleaner built on top of #5964. All of the pnode->nNext* variables could go away; just schedule a task that takes a NodeId and Does The Right Thing at some (random) time in the future.

(the tasks would lock cs_vNodes and do the NodeId to CNode* lookup... probably don't need to optimize that lookup, looping through even a 1,000-peer vector isn't a significant amount of CPU time).
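A rough sketch of that suggestion, assuming the CScheduler interface from #5964 (where, as I read it, scheduleFromNow takes a delay in seconds) and a hypothetical SendAddrTrickle task:

```cpp
// Sketch of the scheduler-based alternative: the task is keyed by
// NodeId, looks the peer up under cs_vNodes, does its work, and
// reschedules itself at a random point in the future.
void SendAddrTrickle(NodeId id, CScheduler& scheduler)
{
    {
        LOCK(cs_vNodes);
        BOOST_FOREACH(CNode* pnode, vNodes) {
            if (pnode->GetId() == id) {
                // ... flush pnode->vAddrToSend ...
                break; // linear scan is cheap even with ~1,000 peers
            }
        }
    }
    // Re-arm with a random delay; the 60-second bound is illustrative.
    scheduler.scheduleFromNow(boost::bind(SendAddrTrickle, id, boost::ref(scheduler)),
                              GetRand(60));
}
```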

@sipa (Member) commented Apr 24, 2015 via email

@gavinandresen (Contributor)

@sipa: the benefit would be clearer logic. I find the "set a variable here, test it over there, and then do something" pattern hard to review/follow.

@pstratem (Contributor Author)

@gavinandresen As I said to @jgarzik, it's easy enough to backport this logic onto CScheduler.

I would rather not tie the two PRs together.

src/main.cpp (Outdated)
(Member)

Integer overflow on this line. You probably want to cast to int64_t first.
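For readers following along, the general shape of the bug (the exact expression from the patch is not reproduced here; the names and values are illustrative):

```cpp
// The multiplication is evaluated in (32-bit) int and wraps before the
// result is widened to int64_t; casting one operand first avoids that.
int avgIntervalSeconds = 3600;
int64_t bad  = GetTime() + avgIntervalSeconds * 1000000;          // 3.6e9 overflows int
int64_t good = GetTime() + (int64_t)avgIntervalSeconds * 1000000; // widened before multiply
```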

@sipa (Member) commented Apr 25, 2015

@gavinandresen Yes, that's the small benefit. The small cost is extra locking/work. Not opposed to it, but it's not as clear an advantage as replacing the several pieces of code we had that were already using their own thread. This one doesn't need it.

@sipa (Member) commented Apr 27, 2015

Can you fix the integer overflow, and perhaps rebase on top of #6066?

@sipa (Member) commented May 5, 2015

Rebase please?

@sipa (Member) commented Jun 14, 2015

Needs rebase, and fixing the integer overflow.

@sipa (Member) commented Jul 9, 2015

@pstratem ping

@pstratem (Contributor Author) commented Jul 9, 2015

@sipa pong

@pstratem (Contributor Author)

setAddrKnown was replaced with a CRollingBloomFilter in d81cff3.

Given this, it seems reasonable to slowly unset bits of the bloom filter rather than clearing the filter entirely every 24 hours.

Thoughts?

@pstratem (Contributor Author)

Scratch that. There's no need to clear the filter explicitly with the rolling bloom filter.
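For reference, the usage pattern after d81cff3 looks roughly like this; the surrounding calls are paraphrased, and the filter parameters are as I recall them from that commit:

```cpp
// With a rolling filter, old entries age out automatically as new ones
// are inserted, so no periodic 24-hour reset of addrKnown is required.
CRollingBloomFilter addrKnown(5000, 0.001);

// When an address has been relayed to this peer:
addrKnown.insert(addr.GetKey());

// Before queueing an address for this peer:
if (!addrKnown.contains(addr.GetKey()))
    vAddrToSend.push_back(addr);
```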

@dcousens (Contributor)

@pstratem rebase/close?

@sipa (Member) commented Oct 30, 2015 via email

@sipa (Member) commented Nov 28, 2015

See #7125.

@pstratem (Contributor Author) commented Dec 8, 2015

We have better ideas on how to achieve the properties of the trickle logic (such as mempool sync at a random interval).
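A sketch of the random-interval idea that #7125 pursued: draw exponentially distributed delays so that per-peer sends form a Poisson process. The function name matches what eventually landed; the arithmetic here is a simplified equivalent, not the exact implementation:

```cpp
#include <cmath>

// Next send time = now + an exponentially distributed delay, so send
// events form a Poisson process and leak little timing information.
int64_t PoissonNextSend(int64_t nNow, int average_interval_seconds)
{
    // Uniform draw in (0, 1], mapped to an exponential variate via -ln(u).
    double u = (GetRand(1ULL << 48) + 1) / (double)(1ULL << 48);
    return nNow + (int64_t)(-std::log(u) * average_interval_seconds * 1000000.0 + 0.5);
}
```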

@pstratem closed this Dec 8, 2015
@bitcoin locked as resolved and limited conversation to collaborators Sep 8, 2021