
PageSpeed Insights now reporting all AMP scripts as "Unused Javascript" #28638

Closed
stephengardner opened this issue May 30, 2020 · 57 comments

@stephengardner

Reproduction steps:

As an aside, AMP pages across the board seem to be scoring lower on the PSI report, and LCPs seem to fail the 2.5s threshold across the board. I'm not reporting this LCP note as a bug, but it is relatively alarming to those who spend a lot of time optimizing their pages on AMP. Google Search Console also seems to report drastically lower LCP results in its "Web Vitals" report (2.x seconds as opposed to 4.x when tested directly via PSI).

@stephengardner
Author

In an effort not to overload the issues tab, if someone could comment on the fact that the AMP homepage doesn't pass Google's Core Web Vitals assessment, that would be helpful. As the AMP framework is generally touted as the fast, mobile-friendly and user-first ideal, it would be great to see it as an example of what it preaches. Of course, PSI is a pretty key benchmark for users across the board (over a billion tests last year), so this is important for both publishers, and customers of AMP-related solutions. Not saying all AMP pages have to pass, but it's a bit surprising here.
[screenshot]
The AMP optimization tips documented are phenomenal. Although even with these in place, including SSR, AMP Toolbox optimizer, etc, the LCP issue (at least when testing directly on the PageSpeed Insights tool itself), still remains.
Curious if any other publishers have seen something similar regarding LCP times.

@kristoferbaxter
Contributor

Apologies for the slow response, I’ve been out on paternity leave.

I’m back tomorrow and will make sure this item gets an answer that clarifies things.

@stylecnc

stylecnc commented Jun 2, 2020

I am troubled by the same problems, including unused JavaScript, Total Blocking Time, and Largest Contentful Paint. The scores always fluctuate between 70 and 80, and I cannot get a higher score anymore!
Here is an example:
https://amp.stylecnc.com/cnc-router/

@stephengardner
Author

Thank you @kristoferbaxter, is this a confirmed issue now? Any insight into next steps?

@kristoferbaxter
Contributor

This discussion topic has come up several times over the last few years (with notable higher frequency once Chromium based browsers exposed coverage data from V8 via the Coverage Tab in Developer Tools).

[Screenshot: Coverage report for amp.dev]

Here you can see a Coverage report run on amp.dev reporting the same data you're seeing in Page Speed Insights.

It's important to know how Coverage is calculated, and how it pertains to your documents (and the billions of others using AMP). Effectively, it is a record of which code was executed versus unexecuted during the recording lifecycle.

[Screenshot: Coverage of code executed in blue and unexecuted in red]

In this example the current browser visiting the document supported Object.create directly, so the poorlyfill did not execute.

This pattern is a frequent one in the AMP codebase and many Web Libraries, where support across many user-agent types requires additional complexity and code to ensure all visitors from supported devices get roughly equivalent experiences.
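For illustration, here is a minimal sketch of that guarded-polyfill pattern (the helper name and shim are illustrative, not AMP's actual source). On a modern browser the body of the `if` never runs, so a coverage report marks it as "unused" for that session:

```js
// Guarded polyfill sketch: only executes on user agents lacking Object.create.
function installObjectCreatePolyfill(win) {
  if (typeof win.Object.create !== 'function') {
    // Unexecuted (and therefore "unused") on any modern browser.
    win.Object.create = function (proto) {
      function Temp() {}
      Temp.prototype = proto;
      return new Temp();
    };
  }
}

installObjectCreatePolyfill(window);
```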

However, this report is also flagging specific sections of the AMP codebase as unexecuted for other reasons as well.

[Screenshot: Unexecuted experiment logic]

Here, segments of the code are unexecuted because the document or user is not currently enrolled in any valid experiments. These experiments are used by AMP developers to build new features and roll them out progressively; without their inclusion in the static output, there wouldn't currently be a safe way to roll out these changes without impacting all documents.
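A hypothetical sketch of that experiment-gating pattern (the registry and experiment name here are invented for illustration, not AMP's real API):

```js
// Enrollment is decided per document/user; most sessions are not enrolled.
const EXPERIMENTS = {'new-loader': false};

function isExperimentOn(name) {
  return EXPERIMENTS[name] === true;
}

function pickLoaderStyle() {
  if (isExperimentOn('new-loader')) {
    // Shipped in the static bundle so it can be rolled out progressively,
    // but flagged as "unused" in any session that is not enrolled.
    return 'experimental-loader';
  }
  return 'default-loader';
}

console.log(pickLoaderStyle()); // -> 'default-loader' when not enrolled
```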

In general this type of monitoring is intended to be a tool for guiding changes since it cannot truly detect if a codepath is ever executed across all users (its scope is limited to the current execution for a user on a device).

[Screenshot: Conditions create branch points in execution]

Here, the fallback path is also important, but wasn't executed by the current invocation. However, a coverage report would indicate that this code is "unused". The AMP codebase is used in many scenarios and contains fallbacks to ensure the outcome is reached across many paths (some less than ideal).
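A minimal sketch of such a fallback branch (a generic visibility helper, not AMP's actual code); only one branch runs in any given session, so the other is reported as unused even though some visitors depend on it:

```js
function observeVisibility(element, callback) {
  if (typeof IntersectionObserver === 'function') {
    // Preferred path on modern browsers.
    const io = new IntersectionObserver((entries) => {
      entries.forEach((entry) => entry.isIntersecting && callback());
    });
    io.observe(element);
  } else {
    // Fallback path for browsers without IntersectionObserver.
    const check = () => {
      const rect = element.getBoundingClientRect();
      if (rect.top < window.innerHeight && rect.bottom > 0) {
        callback();
      }
    };
    window.addEventListener('scroll', check, {passive: true});
    check();
  }
}
```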

What are AMP contributors doing about it?

AMP contributors have been working to eliminate many of the codepaths. Here's an example from the currently experimental esm build of AMP that we plan to ship to all transformed documents over the next month.

[Screenshot: No Object.create poorlyfill]

In this example, the codepath for poorlyfilling Object.create was omitted, since there is certainty it exists for the loaded esm codepath.
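In other words, a sketch of the same call site under the assumption that the esm bundle only targets module-capable browsers (not AMP's actual source):

```js
// In an esm-only bundle, every supporting browser already implements
// Object.create, so the guard and the shim can be dropped at build time.
export function createStateObject(proto) {
  return Object.create(proto); // used directly; no poorlyfill shipped
}
```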

Many more examples fall into this category.

Assistance Welcomed!

If you have ideas about how to improve the performance of AMP (its runtime, network usage, etc.), please join us in the Performance Working Group!

@stephengardner
Author

stephengardner commented Jun 3, 2020

Thanks @kristoferbaxter,

You can imagine a general AMP user, upon adopting AMP: "I have unused JavaScript! I want it removed! How can AMP be optimized with all these warnings about unused JavaScript?! Google claims AMP is great for speed, but I'm seeing over 3-second load times on PSI and tons of unused JavaScript!"

I have four questions around this:

  1. As far as consumers of AMP go, the effective shorthand response is: "the 'Unused Javascript' warning is valid"?
  2. It is not clear if these warnings would be reconciled with the aforementioned updates in place. It seems like there would just be less unused JavaScript, and the same warning would still exist. Can you clarify whether these warnings would still exist, or whether the most likely case is yes?
  3. Are these warnings a new addition to the PSI/lighthouse tool? We did not notice these a month ago.
  4. Do you think it's reasonable that a website using any medium to large javascript file of a complexity similar to an AMP component would ever pass this "Unused Javascript" test?

Thanks again for your helpful response and attention here

@kristoferbaxter
Contributor

As far as consumers of AMP go, the effective shorthand response is: "the 'Unused Javascript' warning is valid"?

Some of the warnings are valid, others are not. AMP contributors are working on improvements to reduce the amount of "unused JavaScript" from production documents.

It is not clear if these warnings would be reconciled with the aforementioned updates in place. It seems like there would just be less unused JavaScript, and the same warning would still exist. Can you clarify whether these warnings would still exist, or whether the most likely case is yes?

It's not currently possible to remove all unused JavaScript from a singular session. Bundles and extensions for AMP are used by billions of documents in varying ways, and as a result have differential unused JavaScript given the context of the document's usage and a visitor's session.

Are these warnings a new addition to the PSI/lighthouse tool? We did not notice these a month ago.

Lighthouse and Pagespeed Insights change frequently, but I believe these warnings became more clear with a recent version.

Do you think it's reasonable that a website using any medium to large javascript file of a complexity similar to an AMP component would ever pass this "Unused Javascript" test?

It's possible to reduce the amount of unused JavaScript, but getting to 0 unused JavaScript has other tradeoffs. I'd hope that in the future AMP's output is far smaller than today and can get as close as reasonable to 0 additional unused code.

@stephengardner
Author

Thanks @kristoferbaxter

Some of the warnings are valid, others are not. AMP contributors are working on improvements to reduce the amount of "unused JavaScript" from production documents.

What are valid warnings and what are not? Is this confirmation that some of the warnings in PSI are false positives? Which ones, for AMP specifically, are false positives? For example, "AMP components X, Y, Z, contain false positives."

It's not currently possible to remove all unused JavaScript from a singular session. Bundles and extensions for AMP are used by billions of documents in varying ways, and as a result have differential unused JavaScript given the context of the document's usage and a visitor's session.

So, to be clear, these warnings in PSI will still exist even after the potential upcoming performance updates?

Lighthouse and Pagespeed Insights change frequently, but I believe these warnings became more clear with a recent version.

Is there a way to get a definitive answer on this?

These questions and the desire for specificity arise due to an abundance of AMP users on our end asking for information about this. If there are errors in PSI with regard to AMP, we are the point of contact for them. If new measurement tools arise in PSI that flag AMP warnings, that's a red flag for them. We're trying to construct the most effective and direct yes-or-no response to address their confusion.

@kristoferbaxter
Contributor

Apologies if these responses are not answering your questions. Trying again.

Some of the warnings are valid, others are not. AMP contributors are working on improvements to reduce the amount of "unused JavaScript" from production documents.

What are valid warnings and what are not? Is this confirmation that some of the warnings in PSI are false positives? Which ones, for AMP specifically, are false positives? For example, "AMP components X, Y, Z, contain false positives."

AMP components are designed to be resilient to a few conditions:

  1. Many user-agents
  2. Many input value types
  3. Different experiences per experiment group
  4. Different languages and layout orientations

Each of these conditions can increase the size of a component given the static nature of the output of AMP JavaScript across billions of documents. To support the many conditions, many codepath permutations are included by default. This is a tradeoff, increasing the number of conditions each component can support with a singular JavaScript payload and decreasing the possibility of needing to communicate with a server for more script to handle one of the permutations not included by default.

When a requested document is "lab tested" in PSI/Lighthouse, the conditions a component executes against are a single set of values chosen from the many options across each condition. This is a selective view of the total possible codepaths.

Short version: Each component is equivalently measured, given the singular input conditions of the scenario the lab test is run under. This means there are no false positives, but the test conditions do not cover the gamut of scenarios the code is expected to handle.

It's not currently possible to remove all unused JavaScript from a singular session. Bundles and extensions for AMP are used by billions of documents in varying ways, and as a result have differential unused JavaScript given the context of the document's usage and a visitor's session.

So, to be clear, these warnings in PSI will still exist even after the potential upcoming performance updates?

Likely yes, but end users will receive less total script to execute on their devices without lowering the number of conditions the library supports.

Lighthouse and Pagespeed Insights change frequently, but I believe these warnings became more clear with a recent version.

Is there a way to get a definitive answer on this?

These questions and the desire for specificity arise due to an abundance of AMP users on our end asking for information about this. If there are errors in PSI with regard to AMP, we are the point of contact for them. If new measurement tools arise in PSI that flag AMP warnings, that's a red flag for them. We're trying to construct the most effective and direct yes-or-no response to address their confusion.

These tests were previously available in Lighthouse when run via CLI, but were moved to higher prominence in Lighthouse 6.0, which was recently released.

@connorjclark

connorjclark commented Jun 3, 2020

Lighthouse and Pagespeed Insights change frequently, but I believe these warnings became more clear with a recent version.
Is there a way to get a definitive answer on this?
These tests were previously available in Lighthouse when run via CLI, but were moved to higher prominence in Lighthouse 6.0, which was recently released.

Hi, Lighthouse dev here. Allow me to clear this up. The unused JavaScript audit is new in Lighthouse 6.0, which was released to PSI recently. See this for more: https://web.dev/lighthouse-whats-new-6.0/ .

This is a new feature. Another developer recently addressed similar feedback from the next.js people: vercel/next.js#13682 (comment)

To be clear, these opportunities don't directly impact the score; they are Lighthouse's best guesses as to how to improve the metrics. The score is only based on the metrics. We try to scope our opportunities to what is most likely to have an impact. Sometimes, for some pages, it can be wrong, or at the very least may not be as good an avenue for optimization as the relative "estimated savings" might imply.
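For anyone who wants to verify that split, here is a rough sketch using the Lighthouse Node API (assuming the `lighthouse` and `chrome-launcher` packages; exact report fields may differ between Lighthouse versions) showing where the category score and the opportunity estimate live in the report:

```js
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});
  const result = await lighthouse('https://amp.dev/', {
    port: chrome.port,
    onlyCategories: ['performance'],
  });
  const lhr = result.lhr;

  // The category score is a weighted blend of the metric audits (LCP, TBT, ...).
  console.log('Performance score:', lhr.categories.performance.score);

  // "unused-javascript" is an opportunity: it carries an estimate, not score weight.
  const unused = lhr.audits['unused-javascript'];
  console.log('Estimated savings (ms):', unused.details && unused.details.overallSavingsMs);

  await chrome.kill();
})();
```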

poorlyfill

In this example, the codepath for poorlyfilling Object.create was omitted, since there is certainty it exists for the loaded esm codepath.

I love this :)

@connorjclark

connorjclark commented Jun 3, 2020

I should also point out that Lab tools (such as Lighthouse) are only testing cold loads. Any JS that is behind user interaction will be considered "unused".

Sometimes, if the code is large enough and structurally isolated from the rest of the app, you can lazy-load that "unused" code. For example, the "Share" modal in Google Docs is ~1.4MB compressed JS that only loads if you click on it. In the case of a framework, as may be the case for AMP (I am no expert), the "unused" code may be too integrated into the code base, and there is no clean place to lazy-load anything.
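A small sketch of that kind of deferral using a dynamic `import()` (the `./share-modal.js` module and its export are hypothetical):

```js
const shareButton = document.querySelector('#share');

shareButton.addEventListener('click', async () => {
  // Fetched and parsed only after the click, so a cold-load lab test
  // never downloads this code and cannot count it as "unused".
  const {openShareModal} = await import('./share-modal.js');
  openShareModal();
});
```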

@stephengardner
Author

stephengardner commented Jun 3, 2020

Thank you @kristoferbaxter and @connorjclark
I do entirely understand that a subset of code will of course not be covered in the coverage data, and that the JavaScript which handles user interaction has an additional likelihood of being flagged as "unused". Not that this is a bug in PSI, but it's raising some eyebrows from customers who are also simultaneously noticing, correlated or not, that their scores have dropped in the 6.0.0 release of Lighthouse.

It looks like #13682 also has some similar score-dropping remarks.

While I understand it's the nature of PSI to change over time and of course not report identical results with each version release, I suppose from a broader perspective I am also raising the question of whether it is the best idea to have a red error appear for this, since it seems extremely rare, at least at the current point in time, that websites are able to actually resolve it. It does pop up in the very first "Opportunities" section, beneath their overall page speed timing results, which is unfortunate, at least for anyone in our position as a publisher of an AMP solution.

I did run some tests on our Angular (9) codebase, and lazy-loaded modules are not being flagged as Unused Javascript, as one would expect. Of course, the main bundle is.
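As a rough illustration of why route-level code splitting escapes the audit, a hypothetical Angular route configuration (module names invented) looks like this; the lazily loaded chunk is only fetched when its route is visited:

```js
// Fragment of a hypothetical Angular routing module (names invented).
const routes = [
  {
    path: 'admin',
    // Downloaded only when '/admin' is navigated to, so a cold-load
    // audit never sees this chunk and cannot count it as "unused".
    loadChildren: () => import('./admin/admin.module').then((m) => m.AdminModule),
  },
];
```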

Images from https://amp.dev 's lighthouse result:
[screenshot]

You can see, it looks like just about every single AMP component is being flagged.
[screenshot]

Personally, I'm not really bothered by the scores. But it's my job to make sure adopters of AMP feel they're getting the performance bang for their buck. Trying to convince them that a brand new red error and an "Estimated savings" time value don't affect their performance score, or their potential Google ranking, is in some cases a dealbreaker when AMP is touted as the extremely fast alternative. (I do recognize it's explicitly stated that these "Opportunities" metrics don't affect the performance score. I believe the overall sentiment is: well, if there's an "estimated savings", then of course the page should load that much faster if this were implemented "properly", and how could that not affect the overall score?)

I'd like to also point out that I'm extremely thankful for the responses given.

Ultimately, our answer to customers is the following:
"AMP has unused javascript being flagged on what appears to be every individual AMP component, and unfortunately we can't fix this. The AMP team has plans to mitigate this somewhat, but it's an almost certain guarantee that the 'Remove unused JavaScript' error will never actually disappear from your performance report, despite AMP being implemented perfectly on our end. It does not affect your performance score, and if you have received a drop in your performance score without any other edits to your pages, it is a result of the ever-changing lighthouse algorithm, not this new error."

However disheartening, does this sound accurate?

@connorjclark

It does pop up in the very first "Opportunities" section, beneath their overall page speed timing results, which is unfortunate,

Agreed, we're considering increasing the threshold to something like 20KB unused per script. For this specific case, it'd cut the estimated savings to a third, which should reduce the implied actionability.

"will never actually disappear from your performance report"

might add that Lighthouse may reduce this estimation by raising the threshold. In hindsight, complaining about a few KB of unused JS was a bad idea.

"it is a result of the ever-changing lighthouse algorithm"

I don't want to minimize the impact changes have on you; I understand it can be frustrating to see scores change overnight (we change the scoring every year or so, but this is perhaps the most significant change so far). We're using the new web.dev/vitals metrics as we collectively learn how to better measure user experience. The hope is that the score is a better reflection of user experience.

I'd like to also point out that I'm extremely thankful for the responses given.

Of course, happy to help :)

@stephengardner
Author

It does pop up in the very first "Opportunities" section, beneath their overall page speed timing results, which is unfortunate,

Agreed, we're considering increasing the threshold to something like 20KB unused per script. For this specific case, it'd cut the estimated savings to a third, which should reduce the implied actionability.

"will never actually disappear from your performance report"

might add that Lighthouse may reduce this estimation by raising the threshold. In hindsight, complaining about a few KB of unused JS was a bad idea.

I think this entire thread can be summed up with these insights. Thank you. Is there anywhere we can cast a vote for this to be taken into consideration? I believe the AMP project as a whole would benefit from this change.

@connorjclark

I think we're in agreement, already have a PR up: GoogleChrome/lighthouse#10906

@ahmedkaludi

We at AMPforWP have nearly 180,000 users and possibly billions of AMP pages affected by this issue. Our support has been flooded because of the panic created among people by the drop in performance scores.

One such example: https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.saveatrain.com%2Fblog%2Fapps-download-traveling%2Famp

I hope this PR GoogleChrome/lighthouse#10906 will help us fix this problem :)

@stephengardner
Author

@ahmedkaludi glad to know we weren't the only ones! Thanks for sharing

@ahmedkaludi

any movement on this?

@abukhan

abukhan commented Jun 25, 2020

Eagerly waiting for updates

@kristoferbaxter
Contributor

Looks like this was just merged for the next release of Lighthouse.

GoogleChrome/lighthouse#11025

@connorjclark

connorjclark commented Jul 9, 2020

6.1 is in PSI now thanks to @jazyan 🎉

@suneel-code

suneel-code commented Dec 11, 2020

It is November 2020, and so far there are no updates from anywhere on how these Core Web Vitals account for unused JS loading times. Our site is showing a higher LCP time, and we are unable to solve it.
https://www.examtray.com/python/python-type-casting-or-type-conversion-tutorial?amp

Also, AMP pages do not allow serving font files locally. Fontawesome is slow at times.

@sebastianbenz
Contributor

@suneel-code please take a look at our page experience checker: https://amp.dev/page-experience/?url=https://www.examtray.com/python/python-type-casting-or-type-conversion-tutorial?amp

It'll give you plenty of advice on how you can improve your performance. AMP pages also allow you to self-host font files.

@MrCsabaToth

@suneel-code please take a look at our page experience checker: https://amp.dev/page-experience/?url=https://www.examtray.com/python/python-type-casting-or-type-conversion-tutorial?amp

It'll give you plenty of advice on how you can improve your performance. AMP pages also allow you to self-host font files.

  1. @suneel-code fonts are one exception and can be hosted by you. I'm using a hybrid approach: self-hosting, but serving woff2 from a CDN.
  2. @sebastianbenz this AMP-specific page is great. But I'm confused about why it does not show the same values for my site (mostly regarding LCP and CLS) as https://web.dev/measure/ or https://www.webpagetest.org/

My site is https://csaba.page/ (https://gitlab.com/MrCsabaToth/mrcsabatoth.gitlab.io/)

@sebastianbenz
Contributor

Re 2.: measuring these things in a consistent manner is hard, as connection, server response times, CPU load, etc. might vary. They can only give you an indication; the exact numbers might change. Side note: amp.dev/page-experience uses https://developers.google.com/speed/pagespeed/insights/ under the hood for measuring performance.

@EndiHariadi43

EndiHariadi43 commented Dec 28, 2020

I have configured my website server (where almost everything is built with AMP) with nginx, Apache2, and Varnish, and managed to achieve a decent GTmetrix score.
[screenshot]
However, when tested with https://www.webpagetest.org/result/210404_AiDcXC_59a0257fea4f4235d0388d0b41203696/ speed issues then arise.
[screenshot]

I think my efforts have been maximized, but so far I can't find a proper reference to solve this problem.

@nicolasfaurereboot

  1. @sebastianbenz this AMP-specific page is great. But I'm confused about why it does not show the same values for my site (mostly regarding LCP and CLS) as https://web.dev/measure/ or https://www.webpagetest.org/

Even if you only take the boilerplate from AMP.dev, run AMP Optimizer, and simultaneously launch 6 Chrome windows to use the following web vitals measuring tools:

it's probably more efficient to focus on the speed improvement advice given by these tools (like removing unused JavaScript to gain hundreds of milliseconds) rather than on the measurements themselves.
In the end we mainly want the webpage to load and render faster, and to get the 'Fast Page' label a 'green' experience is enough.

@linhlanweb

linhlanweb commented Feb 3, 2021

I was really impressed with AMP from the start, but now (2021) I feel it has become too bad. It's not like before. I had to remove it and use web standards instead; the speed is now green.
I'm really sorry to have to quit AMP. Thank you, developers!
My site: https://thuongiado.vn

@kristoferbaxter
Contributor

@nicolasfaurereboot I agree with your statement: synthetic testing tools like those mentioned are best used as a way to gain insight into improvements, not to compare the results against one another.

@kristoferbaxter
Contributor

Since this issue has moved away from its original concerns, I'm going to mark the issue closed.

Efforts are ongoing to reduce the size of AMP's JavaScript payloads, including Bento, AMP Compiler, and Module/NoModule mode.

@MrCsabaToth

Since this issue has moved away from its original concerns, I'm going to mark the issue closed.

Efforts are ongoing to reduce the size of AMP's JavaScript payloads, including Bento, AMP Compiler, and Module/NoModule mode.

I found references to Bento:

What are the best references for the AMP Compiler (I found the AMP Closure Compiler, but I'm not sure that's what you mean) and the Module/NoModule mode? My web page is Jekyll-based; I'll explore how I can take advantage of those.

@kristoferbaxter
Contributor

Apologies @MrCsabaToth I missed your response.

The AMP Compiler is an ongoing project from the wg-performance team, intended to allow for further reductions in JavaScript payloads based on the usage of Bento components.

Over time, Bento components will replace the current AMP Components, and as a result documents will no longer need the runtime provided by v0.js.

@honeybeecnc

I used the WordPress AMP plugin and tested AMP, but it feels slow. I don't know what the problem is. I'm currently switching back to using a mobile-adaptive theme. Here is an example: https://www.honeybeecutting.com/

@MrCsabaToth

I used the WordPress AMP plugin and tested AMP, but it feels slow. I don't know what the problem is. I'm currently switching back to using a mobile-adaptive theme. Here is an example: https://www.honeybeecutting.com/

There can be many reasons for a site to feel slow. If you want to be seriously fast, then you either want to get closer to the bare metal (i.e., use a framework on your own instead of WordPress), or seriously restrict and cherry-pick WordPress plugins to cut your footprint, optimize images, etc.
Looks like there's much more than just AMP: https://www.webpagetest.org/result/210527_BiDc1H_9d74899605acf3702cd5517744677128/

@colorman4

It fetches the URL twice, once with a mobile user-agent and once with a desktop user-agent. The PageSpeed Insights score ranges from 0 to 100 points. A higher score is better, and a score of 85 or above indicates that the page is performing well.


https://coloring-pages.io

@Haseeb717

@westonruter any updates on that?

@westonruter
Member

@Haseeb717 no. I'm not working on this.

@EzCad

EzCad commented May 5, 2022

I am troubled by the same problems, including unused JavaScript, Total Blocking Time, and Largest Contentful Paint. The scores always fluctuate between 70 and 80, and I cannot get a higher score anymore!
Here is an example:

I never noticed this issue until I read this article. I found my pages have similar problems; the score can't reach 90.

Example pages
https://www.laserchina.com/
https://www.laserchina.com/engraver/
https://www.laserchina.com/cleaner/

@ennkh

ennkh commented Feb 8, 2023

Same issue. I tried many times to improve my score and always got these errors:

Remove duplicate modules in JavaScript bundles 1.47 s
Reduce unused JavaScript 0.87 s
Reduce initial server response time 0.5 s

the main page https://www.computer-pdf.com/
other pages
https://www.computer-pdf.com/programming/
https://www.computer-pdf.com/programming/802-tutorial-python-tutorial.html

@EzCad

EzCad commented Feb 8, 2023 via email

@bubobih

bubobih commented Apr 8, 2024

Hello, any update on this bug? We still have a bad score because of this. Regards.

@EzCad

EzCad commented Apr 8, 2024 via email

@EzCad

EzCad commented Jan 15, 2025 via email
