Conversation
Codecov Report
@@           Coverage Diff            @@
##           master     #358    +/-  ##
========================================
  Coverage     100%      100%
========================================
  Files           9         9
  Lines         806       825     +19
  Branches      171       173      +2
========================================
+ Hits          806       825     +19

Continue to review full report at Codecov.
|
This is going to significantly slow down repr, isn't it? Usually people don't repr on a hot path, but I log attrs classes all the time. |
|
If it gets prohibitively expensive, we can always make it an option. |
|
Guys, it is your library; you establish the rule.
|
I'd like to see some perf numbers before doing anything drastic :) but on the perf front we could also disable this for frozen classes. You can participate in a cycle as a frozen object, but you can't be the cause of the cycle, so it's the (mutable) cyclic object that should be responsible for the check.
|
Re: logging attrs classes: yes, this is precisely why I really want this to happen :). Sometimes having |
|
Anyone I can motivate to give me a review? :) |
|
@asvetlov Thanks for your input, your review is appreciated! But per project policy, I do need another project member to sign off :). |
I've seen a couple of approaches to this in the past, and this matches the better of the two. The other is OrderedDict's, in case you'd like to contrast it. LGTM.
|
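For readers following along, the reentrancy-guard idea under review can be sketched roughly like this. This is a hypothetical, simplified illustration (the `Node` class and `_local` name are mine, not attrs code; attrs generates its `__repr__` but uses `threading.local` for the same purpose, per the PR description below): a per-thread set of object ids tracks what is currently being repr'd, so a cycle renders as a placeholder instead of recursing forever.

```python
import threading

# Per-thread state: the set of ids of objects whose __repr__ is
# currently executing on this thread (illustrative sketch only).
_local = threading.local()


class Node:
    def __init__(self, name, child=None):
        self.name = name
        self.child = child

    def __repr__(self):
        working_set = getattr(_local, "working_set", None)
        if working_set is None:
            working_set = _local.working_set = set()
        if id(self) in working_set:
            # We are already inside this object's repr further up the
            # stack -- break the cycle with a placeholder.
            return "..."
        working_set.add(id(self))
        try:
            return "Node(name=%r, child=%r)" % (self.name, self.child)
        finally:
            working_set.discard(id(self))


a = Node("a")
b = Node("b", child=a)
a.child = b  # create a reference cycle
print(repr(a))  # Node(name='a', child=Node(name='b', child=...))
```

Without the guard, the cyclic `repr` call above would raise RecursionError.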
So show us some performance numbers? |
|
BTW I log attrs classes all the time on the happy path. Not in hot loops of course, but once per request for sure. A slow repr isn't prohibitive, but maybe it should be opt-in. Personally I have never had a cycle, why pay the price? |
|
Preliminary testing suggests a 20% slowdown on both CPython and PyPy. Test:

import timeit
from textwrap import dedent

import attr


@attr.s
class Foo(object):
    bar = attr.ib()
    baz = attr.ib()
    boz = attr.ib()


print(timeit.timeit(
    'repr(foo)',
    dedent('''
        from __main__ import Foo
        foo = Foo(1, "hello", ["some", "values"])
    '''),
    number=1000000,
))
|
@Tinche On the one hand, I like the philosophy of "don't pay for what you don't use". On the other hand, what would you call this if you had to opt in to it? |
|
How do you feel about |
|
Actually I guess since |
|
@Tinche With the micro-optimization I just committed, this change is about 25% faster on PyPy and about 10% faster on CPython. It's true that if we removed the code that makes it correct now it could probably be even faster, but presumably if the previous performance was acceptable, faster performance is also acceptable :). |
|
(25%/10% faster than |
|
Restraining the optimization golfer in me, I think eliminating the |
|
Let’s get some numbers:

[benchmark charts omitted: “Hynek’s Molasses”, “Glyph’s Vroom-Vroom”, “Hawkie’s Turbo”]

However, looking at the code reminded me of @hawkowl’s blog post on string concats (well, she wrote about byte strings, but I figured the lessons are transferable) and I did some more micro optimizations that made the code even shorter and nicer. And lo and behold:

[benchmark chart omitted]

So I’m not sure what the etiquette here is, but I’ve just pushed my changes against Glyph’s branch. If you (Glyph) are OK with it, we can merge it as soon as CI passes.
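The string-building lesson referenced here, as I understand the blog post, is to collect pieces in a list and join once rather than concatenating with `+=` in a loop, since repeated concatenation copies the growing string each time. A minimal illustration (not the attrs code itself; the attribute list is made up):

```python
# Hypothetical attribute name/value pairs for a repr-style string.
attrs = [("bar", 1), ("baz", "hello"), ("boz", [1, 2])]

# Repeated concatenation: each += may copy the whole string so far.
result = "Foo("
for i, (name, value) in enumerate(attrs):
    if i:
        result += ", "
    result += "%s=%r" % (name, value)
result += ")"

# Collect-then-join: builds each piece once, then one final join.
joined = "Foo(" + ", ".join(
    "%s=%r" % (name, value) for name, value in attrs
) + ")"

assert result == joined
print(joined)  # Foo(bar=1, baz='hello', boz=[1, 2])
```

Both produce the same string; the join version avoids the quadratic worst case (and on CPython is also friendlier to PyPy-style JITs, which is presumably why it showed up in the benchmarks).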
import hashlib
import linecache
import sys
import threading
|
Although I have one last question: does this work with greenlets too? Or do we have to do something like I did in structlog? |
|
It should work well enough with greenlets: in principle it's possible to suspend and reentrantly repr something in a different greenlet on the same thread, but:
|
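The per-thread isolation the implementation relies on (per the PR description, it uses `threading.local`) can be demonstrated on its own; this is a standalone sketch with made-up names, not attrs code. Each thread sees its own independent attribute on the shared `threading.local` instance, which is why one thread's in-progress repr bookkeeping can never leak into another's:

```python
import threading

local = threading.local()
results = {}


def worker(name):
    # Each thread writes to the same `local` object, but gets its own
    # private storage -- it never observes another thread's value.
    local.value = name
    results[name] = local.value


threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.items()))  # [('t1', 't1'), ('t2', 't2')]
```

Greenlets switching within one thread would share this storage, which is the caveat being discussed above.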
If we're happy with the perf numbers, I'm going to go ahead and merge. |
|
I helped, without even having to help. Woo! |
|
“Hawkie’s Turbo” is a thing now |
|
In 3.7 you could use (I haven't managed to convince Yury to make a backport yet though.) |
|
Yay, everyone's happy! |
Fixes #95.
Done.
There are no tests for the multithreaded case because it's literally impossible to test; you just have to think really hard. I could try harder if the reviewer thinks this implementation strategy is risky, but it seems pretty boring: it just uses threading.local to avoid sharing state between threads.
I don't think I need to do this here because I don't see anything relevant, but, I am not a Hypothesis expert so I might be missing something.
It's a bugfix, so I don't think it calls for updated docs.
changelog.d: Done.