
Conversation


@kainino0x kainino0x commented Apr 28, 2021

Draft, dependent on #1666

There are some big open questions with this proposal, as discovered when I was trying to write spec for it. It's very possible that speccing a portable behavior for this isn't going to be tractable. We could do a non-portable behavior (where the definition of luminance isn't 100% consistent but is as close as we can get). However the value of this addition may not be worth the spec effort right now, at least until we have some concrete experiments showing performance benefits.




kainino0x commented Apr 28, 2021

Feedback from a video expert:

[It is possible to] convert everything to XYZ and return Y. This is well-founded but always requires accessing UV data (and knowing the source color space).

There isn't anything truly consistent we can do with just Y/Y'. That said, a supermajority of (current) content is in formats that have a plane that is at least called Y', so it may be acceptable to just ignore (or reject) other things. In this sense you can straightforwardly get Y' from a supermajority of decoded data.
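The expert's point can be made concrete: true (CIE) luminance requires undoing the per-channel transfer function, which in turn requires reconstructing R'G'B' from all three planes. A minimal Python sketch, assuming full-range BT.709 Y'CbCr and a pure 2.4-power gamma as a stand-in for the real transfer function (both are illustrative simplifications, not what any particular implementation does):

```python
def ycbcr_to_rgb_bt709(y, cb, cr):
    """Full-range BT.709 Y'CbCr -> gamma-encoded R'G'B'."""
    r = y + 1.5748 * cr
    g = y - 0.1873 * cb - 0.4681 * cr
    b = y + 1.8556 * cb
    return r, g, b

def linearize(c, gamma=2.4):
    """Approximate EOTF: gamma-encoded value -> linear light."""
    return max(c, 0.0) ** gamma

def luminance_from_ycbcr(y, cb, cr):
    """CIE relative luminance: needs all three planes, not just Y'."""
    r, g, b = ycbcr_to_rgb_bt709(y, cb, cr)
    # BT.709 luminance coefficients apply to *linear* RGB
    return (0.2126 * linearize(r)
            + 0.7152 * linearize(g)
            + 0.0722 * linearize(b))

# Two pixels with identical Y' but different chroma have different
# luminance, because gamma decoding is nonlinear per channel:
l_gray = luminance_from_ycbcr(0.5, 0.0, 0.0)
l_red  = luminance_from_ycbcr(0.5, -0.1, 0.3)
```

Since `l_gray != l_red` despite identical Y', an implementation that wants real luminance can never skip the UV fetch, which is exactly the fast path this proposal is after.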


litherum commented May 3, 2021

If we accepted this pull request, and we realized in the future we wanted a more rigorous notion of luminance, how would we accomplish this in the future?


kainino0x commented May 3, 2021

Resolution:

  • Still think this is useful
  • Don't define the color space strictly; better to define it so that a UV plane never has to be sampled

IMO: Likely luma (nonlinear) rather than luminance (linear), to avoid gamma conversion. TBD
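The luma-vs-luminance distinction above is worth pinning down. A small illustrative sketch (BT.709 coefficients, with a 2.2-power gamma standing in for the real transfer function; all values hypothetical):

```python
def luma(r_p, g_p, b_p):
    """Luma Y': weighted sum of *gamma-encoded* R'G'B'.
    This is what the Y' plane of decoded video holds -- no conversion."""
    return 0.2126 * r_p + 0.7152 * g_p + 0.0722 * b_p

def luminance(r_p, g_p, b_p, gamma=2.2):
    """Luminance Y: weighted sum of *linear* RGB.
    Requires gamma-decoding each channel first."""
    return (0.2126 * r_p ** gamma
            + 0.7152 * g_p ** gamma
            + 0.0722 * b_p ** gamma)

# For any midtone pixel the two disagree, because gamma is nonlinear:
y_prime = luma(0.8, 0.4, 0.2)
y_lin = luminance(0.8, 0.4, 0.2)
```

Returning luma means the builtin can hand back the stored Y' plane value directly; returning luminance would force the gamma decode (and, per the earlier comment, the chroma fetch) that this proposal is trying to avoid.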

@kainino0x kainino0x self-assigned this May 3, 2021

Kangz commented May 3, 2021

If we accepted this pull request, and we realized in the future we wanted a more rigorous notion of luminance, how would we accomplish this in the future?

A new function like textureLoadLuminancePrecise?

@kainino0x

A textureLoadLuminancePrecise would need to load all planes anyway, in many cases. So it only makes sense if the user can specify their destination Y axis (based on their video source metadata) in such a way that the impl will still end up on the fast path. I'm optimistic this might be doable but not sufficiently informed.

An alternative would be to provide a luminance value and metadata telling them what it means. (Could be either provided at the API level, or returned by the WGSL function.) IMO this seems better suited for a WebCodecs path.

@kainino0x kainino0x added this to the post-MVP milestone May 10, 2021
@kainino0x

Discussed again. The conclusion this time was that this can be punted to post-MVP. In the meantime we can experiment to determine how much performance can actually be gained.
