Conversation

@adamraine (Contributor)
Mostly interested in any thoughts on computeTopOpportunities. Once we agree on the implementation, I will start working on the visuals for the top opportunities.

I'm imagining the final structure looks something like:

Performance --------------------------------------- ▼

Images do not have explicit width and height
	Navigation report 1 -> #index=0&anchor=performance
	Snapshot report 1 -> #index=1&anchor=performance
	Navigation report 2 -> #index=2&anchor=performance

Serve images in next-gen formats
	Navigation report 1 -> #index=0&anchor=performance
	Navigation report 2 -> #index=2&anchor=performance

Accessibility ------------------------------------- ◀
Best Practices ------------------------------------ ◀
SEO ----------------------------------------------- ◀

There are some other minor visual updates to the summary page (pretty much everything outside opportunities.tsx); I'm happy to split those into a separate PR if it will make this review easier.

@patrickhulce (Collaborator)

Awesome start here 🌮

> I'm happy to split those into a separate PR if it will make this review easier.

There's enough going on just in opportunities that I think it might be worth it to land the summary changes quicker 👍

import {Util} from '../../../report/renderer/util';
import {useFlowResult} from '../util';

const MAX_TOP_OPPORTUNITIES = 5;
@patrickhulce (Collaborator)

WDYT about calling this something other than "opportunities" since that term already has a specific meaning in the performance category? (at least for now).

  • Highest Impact Audits (I think this is what we tested with users)
  • Top Audits
  • Top Improvements
  • Top Issues

I'm also thinking we shift the intent here and tone down our promise on them being "Top" if we don't have a lot of confidence yet that we're actually surfacing the highest impact things at first. Examples...

  • Common Issues
  • Issues At-a-Glance
  • Quick Fixes

@adamraine (Contributor, author)

> Highest Impact Audits (I think this is what we tested with users)
> Common Issues

These two are my top picks: "Highest Impact Audits" because we tested it with users, "Common Issues" because it best describes the current algo. If I had to pick one, it would be "Common Issues" because it's shorter 🙃

@patrickhulce (Collaborator)

I'd prefer "Common Issues" as well, as I think it's more accurate for our early impl here.


const MAX_TOP_OPPORTUNITIES = 5;

function computeTopOpportunities(reportResults: LH.ReportResult[], categoryId: string) {
@patrickhulce (Collaborator)

I think we have a few possible goals here:

  • Surface the absolute most important things to fix across the entire flow
  • Surface issues that are actionable and/or easy to fix
  • Surface issues that are very common/high ROI

My initial take is that the first is rather difficult to accomplish for us, and possibly not actionable for the developer either (i.e. if you got a report that just listed the "Top Audits" as the 6 metrics, that wouldn't be very interesting). I'm inclined to favor the "very common"/"high ROI" approach for non-perf categories and the "actionable"+"impact" approach for the perf category. What this means in practice for a proposed algorithm (a rough sketch of the non-perf version follows the lists)...

Non-perf

  1. Group all failures across reports by audit ID
  2. Map each group to compute the sum of number of details items
  3. Return the top N audits when sorted by # of total details items

Perf

  1. Group all opportunities across reports by audit ID
  2. Map each group to compute the sum of overallSavingsMs.
  3. Return the top N audits when sorted by # of total ms savings.
  4. If no items are returned, fallback to non-perf algorithm.

It looks like what you have here is pretty close to the non-perf algo, just using the # of reports it appears in rather than the number of details items, and using weight as a tiebreaker. Is that right? WDYT about the above?
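
For concreteness, here's a rough sketch of the proposed non-perf grouping, reusing the `Util` import and `MAX_TOP_OPPORTUNITIES` constant from the diff above; the helper name and the details-items access are assumptions, not the PR code:

```ts
function topNonPerfAudits(reportResults: LH.ReportResult[], categoryId: string) {
  // 1. Group all failures across reports by audit ID.
  const groups = new Map<string, {ref: LH.ReportResult.AuditRef, totalItems: number}>();
  for (const reportResult of reportResults) {
    for (const ref of reportResult.categories[categoryId].auditRefs) {
      if (Util.showAsPassed(ref.result)) continue;
      const group = groups.get(ref.id) || {ref, totalItems: 0};
      // 2. Sum the number of details items for each group.
      const details = ref.result.details;
      group.totalItems += details && 'items' in details ? details.items.length : 0;
      groups.set(ref.id, group);
    }
  }
  // 3. Return the top N audits sorted by total details items.
  return [...groups.values()]
    .sort((a, b) => b.totalItems - a.totalItems)
    .slice(0, MAX_TOP_OPPORTUNITIES);
}
```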

@adamraine (Contributor, author)

> It looks like what you have here is pretty close to the non-perf algo, just using the # of reports it appears in rather than the number of details items, and using weight as a tiebreaker. Is that right? WDYT about the above?

Yup that's the algo

> if you got a report that just listed the "Top Audits" as the 6 metrics, that wouldn't be very interesting

WDYT about filtering out any audits under group "metrics"?

> Non-perf
>
> 1. Group all failures across reports by audit ID
> 2. Map each group to compute the sum of number of details items
> 3. Return the top N audits when sorted by # of total details items

I'm worried about using the details items because audits like viewport don't include any details items. I understand that using # of relevant reports isn't perfect either, but it will be easier to rank all the audits including ones without items. Maybe we could sort by # reports, then # details items, then weight?
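
A hypothetical comparator for that three-key ordering (the type and names here are placeholders, not the PR code):

```ts
type RankedAudit = {reports: number[], totalItems: number, weight: number};

// Sort by # of failing reports, then total details items, then audit weight.
function compareRankedAudits(a: RankedAudit, b: RankedAudit): number {
  return (b.reports.length - a.reports.length) ||
    (b.totalItems - a.totalItems) ||
    (b.weight - a.weight);
}
```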

> Perf
>
> 1. Group all opportunities across reports by audit ID
> 2. Map each group to compute the sum of overallSavingsMs.
> 3. Return the top N audits when sorted by # of total ms savings.
> 4. If no items are returned, fallback to non-perf algorithm.

Ranking first by savings SGTM, but I would still want to include some diagnostics with 0 savings if there is available space.
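
As a minimal sketch of that combination, assuming each group has already summed its savings (the names are assumptions):

```ts
type PerfGroup = {id: string, totalOverallSavingsMs: number};

// Sorting by summed savings and slicing means diagnostics with 0 savings only
// surface when fewer than MAX_TOP_OPPORTUNITIES groups have real savings.
function topPerfAudits(groups: PerfGroup[]): PerfGroup[] {
  return [...groups]
    .sort((a, b) => b.totalOverallSavingsMs - a.totalOverallSavingsMs)
    .slice(0, MAX_TOP_OPPORTUNITIES);
}
```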

@patrickhulce (Collaborator)

> WDYT about filtering out any audits under group "metrics"?

SGTM

> I'm worried about using the details items because audits like viewport don't include any details items. I understand that using # of relevant reports isn't perfect either, but it will be easier to rank all the audits including ones without items. Maybe we could sort by # reports, then # details items, then weight?

WFM

> Ranking first by savings SGTM, but I would still want to include some diagnostics with 0 savings if there is available space.

Yes agreed.

To solve multiple of these concerns, what about # of reports weighted by score? i.e. we wouldn't want to surface a yellow opportunity above a red diagnostic.
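
One possible reading of that weighting, sketched under the assumption that each failing occurrence contributes its score shortfall (so a red diagnostic with score near 0 outranks a yellow opportunity with score near 0.5 appearing in the same number of reports):

```ts
// Score-weighted report count: sums (1 - score) across failing occurrences.
function scoreWeightedReportCount(scores: Array<number|null>): number {
  return scores.reduce((sum, score) => sum + (1 - Number(score)), 0);
}
```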

@adamraine (Contributor, author)

> To solve multiple of these concerns, what about # of reports weighted by score?

SGTM

@adamraine (Contributor, author)

> There's enough going on just in opportunities that I think it might be worth it to land the summary changes quicker 👍

Aight gonna split these

@adamraine adamraine changed the base branch from master to fr-summary-sections September 20, 2021 16:08
Base automatically changed from fr-summary-sections to master September 20, 2021 20:16
};
if (!Util.showAsPassed(auditRef.result)) {
  reportListForAudit.reports.push(i);
  reportListForAudit.remainingScore += (1 - Number(auditRef.result.score)) * auditRef.weight;
@adamraine (Contributor, author)

This is slightly different from #13082 (comment): I figured auditRef.weight makes more sense as a weight than as a tiebreaker, and # of reports is now the tiebreaker. WDYT @patrickhulce?

@patrickhulce (Collaborator)

I like the concept. I think we'll also want a version of this for perf though, where everything has weight 0 but we still want to consider the score.

I'm thinking...

interface AuditEntry {
  occurrences: Array<{ref: AuditResultRef, reportIndex: number}>
  totalCategoryScoreHeadroom: number // this one is weighted by `.weight`
  totalAuditScoreHeadroom: number // this one is not
  totalOverallSavingsMs?: number
}
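
A sketch of how those fields could accumulate per occurrence; `addOccurrence` is a hypothetical helper, and it assumes `AuditResultRef` exposes `weight`, `result.score`, and an optional `result.details.overallSavingsMs` like the refs above:

```ts
function addOccurrence(entry: AuditEntry, ref: AuditResultRef, reportIndex: number) {
  const shortfall = 1 - Number(ref.result.score);
  entry.occurrences.push({ref, reportIndex});
  entry.totalCategoryScoreHeadroom += shortfall * ref.weight; // weighted by `.weight`
  entry.totalAuditScoreHeadroom += shortfall; // unweighted, still meaningful when perf weights are 0
  const savingsMs = ref.result.details?.overallSavingsMs;
  if (savingsMs !== undefined) {
    entry.totalOverallSavingsMs = (entry.totalOverallSavingsMs ?? 0) + savingsMs;
  }
}
```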

.filter(audit =>
  audit.reports.length &&
  audit.ref.group !== 'metrics' &&
  audit.ref.result.scoreDisplayMode !== 'informative' &&
@patrickhulce (Collaborator)

seems like we should skip these per ref above rather than based on a single ref in the report?
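
One hypothetical way to act on that: hoist the checks into a predicate applied while grouping, so the decision isn't based on a single ref pulled from one report:

```ts
// Applied per occurrence during grouping rather than on the assembled list.
function shouldSkipAudit(ref: LH.ReportResult.AuditRef): boolean {
  return ref.group === 'metrics' || ref.result.scoreDisplayMode === 'informative';
}
```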


@adamraine (Contributor, author)

Punting until we can understand real use cases better
