Conversation
```ts
test('testdata default (scrolling with VT240 cursor pos)', async () => {
  const dim = await getDimensions();
  await ctx.proxy.write(SIXEL_SEQ_0);
  await timeout(50);
```
The await on write should be fine, no? If not, is there something else we could await that isn't an arbitrary time?
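For reference, a fixed `timeout(50)` can be replaced by polling a condition until it holds, which is the spirit of the `pollFor` helper mentioned later in this thread. A minimal sketch — the name, signature, and defaults here are assumptions for illustration, not the actual helper:

```typescript
// Sketch of a pollFor-style helper: repeatedly evaluate a condition until it
// returns true or an overall deadline passes. A generous deadline keeps busy
// CI machines from flaking without slowing down the happy path.
async function pollFor(
  condition: () => boolean | Promise<boolean>,
  timeoutMs = 10000,
  intervalMs = 10
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) {
      return;
    }
    // Condition not met yet - wait a short interval and retry.
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error(`pollFor: condition not met within ${timeoutMs}ms`);
}
```

The test would then `await pollFor(() => /* rendered state is visible */ true)` instead of sleeping for an arbitrary 50 ms, so the wait ends as soon as the condition holds.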
Sadly nope - we expose no signal/event-like mechanism that would reliably mark the end of terminal processing all the way up to the renderer.
My other guess regarding those flaky tests is about term.reset, which gets called over and over in test modules (fixed for the web-links tests in the other PR #5180). I am not sure if this call is fully synchronous now (we even had an issue about this in the past); if it is not, heavy load on the CI machine might cause those time offsets and failing tests. Also just a wild guess at the moment, but with the fix the tests passed.
Oh right, these need rendering to have occurred. This is internal, but you could attach a debounced listener to IRenderService.onRender, which is what you're actually after. Alternatively, I think we have a pollFor helper where we can set a much larger timeout without impacting test duration, to prevent flakiness.
I'm not aware of any reset-not-being-synchronous problems.
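The debounce idea can be sketched as follows: resolve once no render event has fired for a quiet period, meaning rendering has settled. The event and disposable shapes below are simplified assumptions for illustration, not the actual IRenderService API:

```typescript
type Listener = () => void;

// Simplified assumption of an onRender-style event source.
interface IRenderEventSource {
  onRender(listener: Listener): { dispose(): void };
}

// Resolve once no onRender event has fired for `quietMs` milliseconds,
// i.e. rendering appears to have settled.
function waitForStableRender(source: IRenderEventSource, quietMs = 100): Promise<void> {
  return new Promise<void>(resolve => {
    let timer = setTimeout(finish, quietMs);
    const subscription = source.onRender(() => {
      // A render just happened: restart the quiet-period timer.
      clearTimeout(timer);
      timer = setTimeout(finish, quietMs);
    });
    function finish(): void {
      subscription.dispose();
      resolve();
    }
  });
}
```

A test would `await waitForStableRender(renderService)` after writing data, trading an arbitrary sleep for "no render activity for N ms", which adapts to machine load.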
Yes, some of them need the actual rendering result, which is a bit cumbersome to follow due to our async writing/rendering. For those tests it is clear that a busy CI machine will take very long, eventually running into a timeout. That's also the case for the web links tests, as they evaluate the DOM renderer results.
I will see if I can find a more reliable/deterministic way to wait for the render result with onRender.
The issue with reset was brought up in #4603, and your fix kind of addressed a bad interim state of the viewport in sync code. To me this had similarities to the infamous nextTick issue often seen in Node.js tasks that need event loop + 1 invocations to fully finish.
And sometimes tests on a busy CI machine fail so randomly with a weird error that I started to wonder if we have such an eventloop+1 issue hidden somewhere. Also, I see those types of failures only in integration tests, which made me wonder if terminal life cycling could cause this. Unit tests don't show those types of failures; they pretty much only fail with a very high timeout on really busy machines (meaning we did not get enough CPU in time).
Of course integration/Playwright tests are much heavier with their browser engines in the test stack, so it is not clear whether our (test) code has a hidden issue not surfacing on idle machines, or the browser engine simply stopped working somewhere in between.
Edit: Going to revert 8faf097; it should not have been mixed with the bug being addressed in this PR.
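The eventloop+1 effect mentioned above can be demonstrated with a tiny sketch (the function names here are made up for illustration): awaiting an operation only guarantees its microtasks have drained, so any work it deferred via setTimeout (a macrotask) is still pending when the await returns and needs one more event loop turn:

```typescript
// Sketch: work scheduled via setTimeout inside an awaited operation is NOT
// finished when the await returns - the macrotask needs one more event loop
// turn. This is the "eventloop+1" pattern that can bite tests assuming a
// single await means "fully settled".
async function operationWithDeferredWork(state: { done: boolean }): Promise<void> {
  // The promise-based part completes within microtasks...
  await Promise.resolve();
  // ...but part of the work is deferred to a macrotask.
  setTimeout(() => { state.done = true; }, 0);
}

async function demo(): Promise<boolean[]> {
  const state = { done: false };
  await operationWithDeferredWork(state);
  const afterAwait = state.done;            // still false: macrotask pending
  await new Promise(r => setTimeout(r, 0)); // yield one extra event loop turn
  const afterExtraTurn = state.done;        // now true
  return [afterAwait, afterExtraTurn];
}
```

On an idle machine the extra turn happens almost immediately, so the bug stays hidden; on a loaded CI machine the gap widens, which matches the random integration-test failures described above.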
Eww, the errors are back in the web links tests. This really s*x, as I cannot repro it locally.
This is a great bug fix. Why was it merged, but a new version hasn't been released yet?
@viko16 Yep, the last stable publish was over a year ago. You can use the beta build in the meantime to get on the latest.
See #5181.