I like that we have coverage-guided testing, but it causes a number of problems (e.g. #1392, #1493, and general performance issues), and for the sort of workloads we actually have a good UX for right now, I don't think it can realistically ever pull its weight.
On the UX front, I think what we want is some form of fuzzer support for Hypothesis, so that we can e.g. run a Hypothesis-based test under python-afl. Most of the benefits of coverage guidance would be better served by adding that, and the current generation of coverage-guided testing adds very little, either right now or towards that goal.
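To make the fuzzer-support idea concrete, here is a minimal sketch of the core mechanism: instead of drawing from a PRNG, the test draws from a byte buffer that an external fuzzer (such as python-afl) would supply. The names here (`ByteSource`, `run_property`, `prop_commutative`) are purely illustrative, not Hypothesis or python-afl API; a real integration would feed the fuzzer's input buffer into Hypothesis's internal byte-stream drawing.

```python
# Hedged sketch: a property-based test driven by fuzzer-supplied bytes.
# All names are hypothetical and for illustration only.

class ByteSource:
    """Supplies bytes to a test; a fuzzer like python-afl would provide buf."""

    def __init__(self, buf: bytes):
        self.buf = buf
        self.pos = 0

    def draw_int(self, lo: int, hi: int) -> int:
        # Consume one byte from the fuzzer's buffer, falling back to a
        # fixed value when the buffer is exhausted.
        if self.pos < len(self.buf):
            b = self.buf[self.pos]
            self.pos += 1
        else:
            b = 0
        return lo + b % (hi - lo + 1)


def run_property(buf: bytes, prop) -> bool:
    """Run one property on fuzzer-provided bytes; True if it passed."""
    try:
        prop(ByteSource(buf))
        return True
    except AssertionError:
        return False


# Example property: addition is commutative for the drawn integers.
def prop_commutative(src: ByteSource) -> None:
    a = src.draw_int(0, 255)
    b = src.draw_int(0, 255)
    assert a + b == b + a
```

The point is that once generation is driven by a flat byte stream, a coverage-guided fuzzer can mutate that stream directly, which is where the real coverage guidance should live.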
So I'd like to propose that we do the following:
- Rip out all of the coverage guidance code, so that all behaviour is equivalent to the current behaviour with `use_coverage=False`.
- Deprecate the `use_coverage` setting.
- Open up a ticket describing a concrete plan towards fuzzer support (I actually think this can be done quite easily)
Thoughts?