Use a binary search when trying to find source mappings #9291
davidwengier merged 1 commit into dotnet:main
Conversation
That loop checks `mapping.OriginalSpan`, and the array is not ordered by it. It might be beneficial to sort and then binary search, but I don't know. Maybe based on a heuristic of file size? We could also change how we store the mappings and store a reverse map, but we'd need to work out whether that's worth it, since that loop didn't show up in PRISM (or shows up lower?). Also, if we're changing how the compiler produces the data, we could change it to let us use the built-in binary search method and avoid the lambda, but again that might not be worth it. TL;DR: I'm OOF for two weeks from Monday and ran out of time 😁
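The "sort then binary search" idea above can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: `Mapping`, its fields, and `FindMapping` are hypothetical stand-ins for Razor's real source-mapping types, and it assumes the array has already been sorted by the original span's start position.

```csharp
using System;

// Hypothetical stand-in for Razor's source mapping type: an original span
// (start + length) and where it begins in the generated document.
record Mapping(int OriginalStart, int Length, int GeneratedStart);

static class MappingSearch
{
    // Binary search for the mapping whose original span contains `index`.
    // Assumes `mappings` is sorted by OriginalStart with non-overlapping spans.
    // Returns null when no mapping contains the index.
    public static Mapping? FindMapping(Mapping[] mappings, int index)
    {
        var lo = 0;
        var hi = mappings.Length - 1;
        while (lo <= hi)
        {
            var mid = lo + ((hi - lo) / 2);
            var m = mappings[mid];
            if (index < m.OriginalStart)
            {
                hi = mid - 1;          // target is before this span
            }
            else if (index >= m.OriginalStart + m.Length)
            {
                lo = mid + 1;          // target is after this span
            }
            else
            {
                return m;              // index falls inside this span
            }
        }
        return null;
    }
}
```

Because the search key is "does this span contain the index" rather than a single value, `Array.BinarySearch` with a custom `IComparer` (or a compiler-side change to the data layout, as the comment suggests) would be needed to avoid hand-rolling the loop.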
```csharp
if (generatedDocument is null)
{
    throw new ArgumentNullException(nameof(generatedDocument));
}

if (generatedDocument.CodeDocument is not { } codeDocument)
{
    throw new InvalidOperationException("Cannot use document mapping service on a generated document that has a null CodeDocument.");
}
```
Do we need to benchmark the argument checking?
Do we need to benchmark "Is a binary search better than a loop?"? 😛
Last one before I go :)
Fixes https://devdiv.visualstudio.com/DevDiv/_workitems/edit/1886121, which is the first non-compiler stack when searching PRISM for "Razor" 😁
The benchmark results look pretty good, but are probably worse than the real world, because the benchmark code scales the number of locations being mapped with the size of the file. In the real world, I expect large files will get a huge speed boost in normal scenarios, which map just one or two positions. I could write another benchmark that maps just one location, but that feels unnecessary.