Create new test type aamtest for accessibility API testing #57696

spectranaut wants to merge 27 commits into master

Conversation
force-pushed from 5aacc60 to 2016a87
force-pushed from 3101e53 to 7bda3f7
@jcsteh -- I'd love your early feedback on this completely new direction for adding AAM tests. The tests are like wpt's webdriver spec tests: all in Python! Look at the blockquote test. The APIs are passed to the test as arguments ("fixtures" in pytest speak -- defined in wai-aria-aam/support/fixtures_a11y_api.py). You can see these tests already in wpt.fyi for this PR: https://wpt.fyi/results/?label=pr_head&max-count=1&pr=57696
@spectranaut Thanks for the early ping and for your work on this. This looks really neat! I haven't looked at this in-depth yet, but here are some early thoughts:
@jcsteh thanks as always for the thoughts! :)

On 1, imperative vs declarative -- tbh I never had a strong preference either way, maybe slight :) I think the declarative approach aligns with the way the mappings in Core-AAM are presented: they are somewhat simplified and kind of have their own language for describing the APIs. Plus we can reuse all the manual tests Joanie maintained. But I think you've convinced me that tests closer to the API -- imperative tests -- will get us better results, and make a better and more flexible test suite in the long run.

On 2, the python vs html+js flip -- yeah, I see the tradeoffs! The tests in this PR all have inline HTML, but for more complicated tests we can create a separate HTML file to open. If we are going to write imperative tests (and I've been convinced we should), I think we should write them in Python and accept those tradeoffs.

On 3, session objects/executing JavaScript -- the session object is an implementation of WebDriver maintained in wpt here: https://github.com/web-platform-tests/wpt/tree/master/tools/webdriver. These tests have all of WebDriver available to them, including the ability to send in JavaScript to execute, or to send clicks, keys, etc.

On 4, DOM events -- in WebDriver classic, you can't wait on DOM events; you can only poll for changes, which is probably good enough? There is a way to wait for things with WebDriver BiDi, but Safari doesn't have support for BiDi yet.

On 5, accessibility events -- awesome, yes, that will be helpful, and I think accessibility event testing will be easier here too.

On 6, the
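Since WebDriver classic can only poll rather than wait on events, the waiting logic in these tests boils down to a generic poll helper. This is a sketch, not the actual code in the PR; the helper name and the commented usage are illustrative:

```python
import time

def poll_until(condition, timeout=5.0, interval=0.1):
    """Repeatedly evaluate `condition` (a zero-argument callable) until it
    returns a truthy value, which is then returned. Raise TimeoutError after
    `timeout` seconds. This is the kind of polling WebDriver classic forces
    on us in place of real event waiting."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() > deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(interval)

# Hypothetical usage: wait until a node appears in the accessibility tree.
# node = poll_until(lambda: find_node_by_dom_id("test"))
```

The tradeoff is familiar: too short an interval burns CPU, too long a timeout slows down genuinely failing tests.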
force-pushed from a9fac34 to de8bf82

force-pushed from a09c749 to 8765a3a
Hi @jcsteh -- I'm noticing that these tests are flaky on Firefox on Linux... and I wonder if you know why or can think of an easy fix. The flakes were caught by the Community-TC Integration / wpt-firefox-nightly-stability job and are easy to reproduce locally. Basically, the nodes all appear in the tree, but not all the correct attributes are set by the time we query for them. In the code, before we run the test, we (1) load the webpage, then (2) find the correct tab (role: document web), then (3) wait until "busy" is not set. But when you run the test immediately after that, finding the node by DOM ID sometimes fails -- the blockquote node does not always have a DOM ID attribute. I added a poll to try to solve this, but it doesn't seem like a great solution, and now I'm getting flaky failures while looking for another attribute in another test, as you can see in this CI report. Am I waiting for busy on the wrong thing? Or is this a bug in Firefox?
Ah, this is due to caching granularity. By default, we only enable a small set of cached attributes to improve memory usage and performance, since a lot of clients don't need everything. When a client first requests something that isn't in the cache, we asynchronously enable it from that point forward. You can work around this by setting the pref
Bad news, @jcsteh 😢
Very odd. I'll need to get this running locally so I can shove some logging into Gecko and see what's going on. What's really strange is that we have a whole bunch of Gecko tests which cover exactly this behaviour.
@spectranaut, are you far enough along with Windows or Mac testing to know whether this flake shows up for Firefox on either of those platforms? That is, is this just a Linux flake at this stage or is that not conclusive yet? |
force-pushed from 1fd1df6 to 3f7b977
Yes, that would align well with the WPT pattern that tests should be in a directory that uses the same name as the spec. That's why the ARIA directory is |
force-pushed from 3f7b977 to 61be1c5
Please document the path requirement if you haven't yet
Oh you mean documentation like what shows up in https://web-platform-tests.org/? I think I'll do that in a separate PR, unless you think I should do it in this one.
@@ -0,0 +1,210 @@
# type: ignore
Could we put all the pure library code (i.e. everything that's not one of the pytest fixtures) under tools? That matches the organization of the rest of the repo more closely and has the advantage that lints and unittests will automatically cover this code.
I forgot to mention -- I moved this to "third_party_modified": https://github.com/web-platform-tests/wpt/pull/57696/changes#diff-0d2c8b6952aab7e1d99eaf6f96101ea52337da802caa86528d83b27cf3822a5c
But then I saw that the only thing in there was recently moved. Is it ok there, or do you have a better place in mind? The .tlb file is copied in; the ia2.py file contains helpers I wrote.
}

@pytest.mark.parametrize("test_html", TEST_HTML.values(), ids=TEST_HTML.keys())
def test_atspi(atspi, session, inline, test_html):
I'm not sure I love having a totally different test function per AT API (if I'm understanding what's happening here correctly). I guess the problem is that the tuple (platform, browser, AT API) is more unique than just (platform, browser) i.e. we can have specific platform/browser combinations with >1 AT, so the fact that wpt.fyi (for example) is currently parameterized by (platform, browser, channel) means we need different AT APIs to show up as different tests rather than different configurations?
I'm not sure I fully understand this comment/question -- but in general, there is one accessibility API per platform, except on Windows, where there are two. One of those APIs (IA2) is supposed to be defunct but is more fully featured and absolutely necessary for screen readers to interact with web content; the other (UIA) is newer, and the one Windows is actively trying to get support for and extending the capabilities of. So I guess the answer is yes, we do need the tuple (platform, browser, Accessibility API) to understand the results on Windows.
The goal of the different test functions per API is to easily see whether there is accessibility support for a web feature for a given (platform, browser) combination without having to think about APIs. You just look at the test results for role=blockquote (and ignore the "PRECONDITION_FAILED" results). So it's not exactly to solve the tuple problem; it's for understanding the test results and comparing support across different (platform, browser) combinations.
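For reference, the per-API shape being discussed looks roughly like this: one test function per accessibility API, each parametrized over a shared dict of HTML snippets. This is a self-contained sketch; the TEST_HTML contents and the assertion body are made up for illustration, not the PR's actual tests:

```python
import pytest

# Shared snippets keyed by a readable test id (contents are illustrative).
TEST_HTML = {
    "blockquote": "<blockquote id='test'>quote</blockquote>",
    "code": "<code id='test'>x = 1</code>",
}

# One function per accessibility API; each API shows up as its own test in
# the results, so support per API is visible rather than hidden in the run
# configuration.
@pytest.mark.parametrize("test_html", TEST_HTML.values(), ids=TEST_HTML.keys())
def test_atspi(test_html):
    # A real test would load the snippet and query the AT-SPI tree;
    # here we only demonstrate the parametrization shape.
    assert "id='test'" in test_html
```

The `ids=TEST_HTML.keys()` part is what makes each snippet show up under a readable name like `test_atspi[blockquote]` in reports.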
Oh I guess you could have just one single function per concept, no subtests per API...
def test_blockquote(atspi, axapi, ia2, uia, session, inline):
    session.url = inline(TEST_HTML)
    if atspi:
        node = atspi.find_node("test", session.url)
        assert atspi.Accessible.get_role(node) == atspi.Role.BLOCK_QUOTE
    if axapi:
        node = axapi.find_node("test", session.url)
        role = axapi.AXUIElementCopyAttributeValue(node, "AXRole", None)[1]
        assert role == "AXGroup"
        subrole = axapi.AXUIElementCopyAttributeValue(node, "AXSubrole", None)[1]
        assert subrole == "AXUnknown"
    if ia2:
        node = ia2.find_node("test", session.url)
        assert ia2.get_role(node) == "IA2_ROLE_BLOCK_QUOTE"
        assert ia2.get_msaa_role(node) == "ROLE_SYSTEM_GROUPING"
    if uia:
        ...
But then you are right, you can't see easily if IA2 or UIA is failing. So it is ultimately about solving the (platform, browser, AT API) problem, after all.
Yeah, exactly. If we wrote all tests as test_foo_{browser}_{os}_{channel} and produced PRECONDITION_FAILED for all the cases not corresponding to the current browser/platform/channel, it would be really hard to interpret the results.
Of course doing it for one property that is expected to have 4 different values isn't so bad, but it still makes it harder to compare the results.
We could make the AT-API a property of the run (although it would mean you'd have to have separate runs for each AT API on platforms with > 1). That would also require a bit of work on the wpt.fyi side to allow filtering the runs appropriately, but it might be better overall?
Also for what it's worth, if I was writing this with one test function I'd define a single at fixture returning a class representing the AT API and use it something like:
def test_blockquote(at, session, inline):
    session.url = inline(TEST_HTML)
    if isinstance(at, AtSpi):
        node = at.find_node("test", session.url)
        assert at.Accessible.get_role(node) == at.Role.BLOCK_QUOTE
    elif isinstance(at, AxApi):
        # Fill in other cases here
        ...
    else:
        raise PreconditionError(f"Test not supported for {at}")
One could also consider moving that to a match block once we have Python 3.10 support.
The suggestion to make the accessibility API a property of the run is interesting (btw, in general it's called an "accessibility API", not an "AT API").
But I do have a concern about going this direction -- right now, to support all assistive technologies on Windows, you need to support both accessibility APIs, and it will likely be that way for a long long time. So I think it would be more ideal if any time anyone ran these tests on windows (or ran the full wpt test suite), they got results for both APIs, since both are relevant to making an accessible browser on Windows. In this ideal scenario, all consumers of wpt (and everyone sending results to wpt.fyi) wouldn't have to change anything -- you can get the full picture of windows support just by running the test suite as you were previously doing.
If we added an accessibility API property to the run, would that mean that in order for wpt.fyi to get the results of both APIs, whoever maintains and sends the Edge results will have to add infrastructure to run the tests in these directories twice...? Do you think there is a way around this, so that consumers don't have to change anything but still get a full report (the results for both APIs)? Somehow wptrunner would have to handle running tests of type "aamtest" twice on Windows... and report the results of the test run correctly to wpt.fyi. It sounds complicated, but if you think it's really worth it, I could look into it.
On the other hand, if we made it easy to filter out "PRECONDITION_FAILED" subtests in WPT (web-platform-tests/wpt.fyi#4672), which I planned on picking up next -- I think it would resolve the difficulty in reading/understanding the results.
I could also potentially do different test files per API, and then use the test file status PRECONDITION_FAILED when it doesn't apply to platform.
But I don't like this, because (1) you can't easily see the support for a specific feature cross-platform, and (2) for each feature, you would have to make a test file per platform. Already I have 300 tests to add for core-aam alone; it would become 1200 if we did this for every platform. Alternatively, we could combine tests for many different roles into one test file -- but all roles together would be a huge test file, and otherwise I'm not sure what a clean division would be.
@whimboo I just noticed your changes to
Thank you for pointing me to this PR. I do have some concerns about reusing our fixtures outside the WebDriver tests directory, since that makes it harder to change them safely when we may not be aware of effects on tests elsewhere in the repository. If duplication is not desirable and we genuinely need higher-level shared fixtures (for example, for creating WebDriver sessions) we should probably identify a more appropriate location for them. I would also expect similar needs to arise for other Python-based tests in different parts of the tree. That said, this likely needs broader discussion with people who have a more complete view of the repository structure and testing strategy, such as @jgraham.
@whimboo thanks for so quickly taking a look!!
I would really like to avoid duplication -- would it be ok if I opened another PR to move the fixtures we'd need into the tools/ directory, which both sets of tests would reference, and we could talk about it there? So far these are the only Python-based tests: the webdriver tests and the accessibility API tests.
@jgraham is already reviewing this patch :) and we have been in discussion about it for a while -- there is an RFC for this change that has just been merged.
This PR adds a new test type to test accessibility APIs exposed by browsers, as defined by the ARIA, Core-AAM and HTML-AAM specifications. The RFC can be found here.
This is a replacement for: #53733
Instead of extending testharness, I added a new test type (aamtest) that is similar to wdspec tests and uses a lot of the same infrastructure. This idea came from @foolip in this comment on the RFC, and I think it looks good!

Here is a PR that adds documentation for this test type: #58632
To run tests on Linux:

On Debian distros: apt install libatspi2.0-dev libcairo2-dev libgirepository1.0-dev

On Mac:

Run Chrome tests with --no-headless. Safari does not yet support this test type.