Conversation
To be honest, I'm really not convinced this is worth the effort. VLAs work, they have worked for decades, and they're not suddenly going to stop working; and if they do, then we can get rid of them.
I don't really have anything against VLAs, but being able to fall back on heap allocations when the stack is filling up seems good. A dynamically sized small vec (do some sort of stack check to decide between a VLA and a regular heap vector) would work, but an unconditional VLA doesn't satisfy the above.
Indeed, I'm not doing this for the VLAs but for the stack overflows. The PR only had that as a title for continuity with @thufschmitt's work.
This would seem optimal, but it leads to somewhat more complicated code because the VLAs can't be put into a type. Also, this suggests a small performance penalty with VLAs:
In fact I'm quite ok with VLAs, as long as they're not chocolate flavored.
Fix stack overflow in `filter`

(cherry picked from commit cb7f258)

Change-Id: Ib90f97a9805bbb4d0e2741551d490f054fc0a675
Motivation
Remove VLAs, attempt 2, with a traceable allocator.
Fixes stack overflow in `builtins.filter` on large lists.

Context

Requires a trivial patch in bdwgc (patch included in the flake).