PageSpeed Insights now reporting all AMP scripts as "Unused Javascript" #28638
Apologies for the slow response, I’ve been out on paternity leave. I’m back tomorrow and will make sure this item gets an answer that clarifies things. |
I am troubled by the same problems, including unused JavaScript, Total Blocking Time, and Largest Contentful Paint. The scores always fluctuate between 70 and 80; I cannot get a higher score anymore! |
Thank you @kristoferbaxter, is this a confirmed issue now? Any insight into next steps? |
This discussion topic has come up several times over the last few years (with notably higher frequency once Chromium-based browsers exposed coverage data from V8 via the Coverage tab in Developer Tools). Here you can see a Coverage report run on an AMP document. It's important to know how Coverage is calculated and how it pertains to your documents (and the billions of others using AMP). Effectively, it is a record of the code executed versus unexecuted during the recording lifecycle. In this example, the current browser visiting the document supported the feature in question, so the alternative codepath went unexecuted. This pattern is a frequent one in the AMP codebase and many web libraries: supporting many user-agent types requires additional complexity and code to ensure all visitors on supported devices get roughly equivalent experiences.

However, this report also flags specific sections of the AMP codebase as unexecuted for other reasons. Here, segments of the code are unexecuted because the document or user is not currently enrolled in any valid experiments. These experiments are used by AMP developers to build new features and roll them out progressively; without their inclusion in the static output there wouldn't currently be a safe way to roll out these changes without impacting all documents.

In general, this type of monitoring is intended to be a tool for guiding changes, since it cannot truly detect whether a codepath is ever executed across all users (its scope is limited to the current execution for one user on one device). Here, the fallback path is also important but wasn't executed by the current invocation, so a coverage report would indicate that this code is "unused". The AMP codebase is used in many scenarios and contains fallbacks to ensure the outcome is reached across many paths (some less than ideal).

What are AMP contributors doing about it? AMP contributors have been working to eliminate many of these codepaths.
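The feature-detection-plus-fallback pattern described above can be sketched like this (a hypothetical illustration, not actual AMP source; `observeSize` and the polling fallback are made up for the example):

```javascript
// In any single browser session only one branch runs, so a coverage
// report marks the other branch as "unused" even though some visitors
// depend on it.
function observeSize(element, callback) {
  if (typeof ResizeObserver === 'function') {
    // Modern path: executed in browsers that support ResizeObserver.
    const ro = new ResizeObserver((entries) => {
      for (const entry of entries) {
        callback(entry.contentRect.width, entry.contentRect.height);
      }
    });
    ro.observe(element);
    return () => ro.disconnect();
  }
  // Fallback path: reported as "unused" by a modern browser's coverage
  // tool, but required for visitors on older engines.
  const poll = setInterval(() => {
    callback(element.offsetWidth, element.offsetHeight);
  }, 250);
  return () => clearInterval(poll);
}
```

A coverage recording only sees the branch the current engine took; it cannot tell that the other branch is load-bearing for a different slice of traffic.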
Here's an example from a currently experimental component, where the codepath for polyfilling can be eliminated. Many more examples fall into this category.

Assistance welcomed! If you have ideas about how to improve the performance of AMP (its runtime, network usage, etc.) please join us in the Performance Working Group! |
Thanks @kristoferbaxter. You can imagine a general AMP user, upon adopting AMP: "I have unused JavaScript! I want it removed! How can AMP be optimized with all these warnings about unused JavaScript?! Google claims AMP is great for speed, but I'm seeing over 3-second load times on PSI and tons of unused JavaScript!" I have four questions around this:
Thanks again for your helpful response and attention here |
Some of the warnings are valid, others are not. AMP contributors are working on improvements to reduce the amount of "unused JavaScript" from production documents.
It's not currently possible to remove all unused JavaScript from a singular session. Bundles and extensions for AMP are used by billions of documents in varying ways, and as a result have differential unused JavaScript given the context of the document's usage and a visitors session.
Lighthouse and Pagespeed Insights change frequently, but I believe these warnings became more clear with a recent version.
It's possible to reduce the amount of unused JavaScript, but getting to 0 unused JavaScript has other tradeoffs. I'd hope that in the future AMP's output is far smaller than today and can get as close as reasonable to 0 additional unused code. |
Thanks @kristoferbaxter
What are valid warnings and what are not? Is this confirmation that some of the warnings in PSI are false positives? Which ones, for AMP specifically, are false positives? For example, "AMP components X, Y, Z, contain false positives."
So, to be clear, these warnings in PSI will still exist even after the potential upcoming performance updates?
Is there a way to get a definitive answer on this? These questions and the desire for specificity arise because an abundance of AMP users on our end are asking for information about this. If there are errors in PSI with regard to AMP, we are the point of contact they bring the issue to. If new measurement tools arise in PSI that flag AMP warnings, that's a red flag for them. We're trying to construct the most effective and direct yes-or-no response to their confusion. |
Apologies if these responses are not answering your questions. Trying again.
AMP components are designed to be resilient to a few conditions:
Each of these conditions can increase the size of a component, given the static nature of the output of AMP JavaScript across billions of documents. To support the many conditions, many codepath permutations are included by default. This is a tradeoff: it increases the number of conditions each component can support with a singular JavaScript payload, and decreases the likelihood of needing to communicate with a server for more script to handle a permutation not included by default.

When a requested document is "lab tested" in PSI/Lighthouse, the conditions a component executes against are a single value for each of the many options. This is a selective view of the total possible codepaths.

Short version: each component is measured equivalently, given the singular input conditions of the scenario the lab test is run under. This means there are no false positives, but the test conditions do not cover the gamut of scenarios the code is expected to handle.
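The tradeoff described above can be made concrete with a toy sketch (the condition names are hypothetical): a static bundle carries code for every permutation of its supported conditions, while a single lab run pins each condition to one value.

```javascript
// Conditions a hypothetical component must handle with one static payload.
const CONDITIONS = {
  viewport: ['mobile', 'desktop'],
  connection: ['4g', 'slow-2g'],
  embedded: [true, false],
};

// Total permutations the shipped bundle must contain code for: 2 * 2 * 2.
const totalPermutations = Object.values(CONDITIONS)
  .reduce((n, options) => n * options.length, 1);

// A single PSI/Lighthouse run fixes each condition to exactly one value...
const labRun = { viewport: 'mobile', connection: '4g', embedded: false };
// ...so at most one of the permutations' codepaths executes, and the
// branches for the other permutations appear "unused" in that run.
```

Under this toy model, even perfectly necessary code for the seven unexercised permutations would be flagged by a coverage-based audit of the one lab run.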
Likely yes, but end users will receive less total script to execute on their devices without lowering the number of conditions the library supports.
These tests were previously available in Lighthouse when run via the CLI, but were moved to higher prominence in Lighthouse 6.0, which was recently released. |
Hi, Lighthouse dev here. Allow me to clear this up. The unused JavaScript audit is new in Lighthouse 6.0, which was released to PSI recently. See this for more: https://web.dev/lighthouse-whats-new-6.0/ . Another developer recently addressed similar feedback from the Next.js people: vercel/next.js#13682 (comment). To be clear, these opportunities don't directly impact the score; they are Lighthouse's best guesses as to how to improve the metrics. The score is based only on the metrics. We try to scope our opportunities to what is most likely to have an impact. Sometimes, for some pages, it can be wrong, or at the very least may not be as good an avenue for optimization as the relative "estimated savings" might imply.
I love this :) |
I should also point out that lab tools (such as Lighthouse) only test cold loads. Any JS that is behind user interaction will be considered "unused". Sometimes, if the code is large enough and structurally isolated from the rest of the app, you can lazy load that "unused" code. For example, the "Share" modal in Google Docs is ~1.4MB of compressed JS that only loads if you click on it. In the case of a framework, as may be the case for AMP (I am no expert), the "unused" code may be too integrated into the codebase, and there is no clean place to lazy load anything. |
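The lazy-loading approach described here can be sketched with a dynamic `import()` (the module path and names are hypothetical):

```javascript
// Code behind a user interaction is split into its own chunk and fetched
// only on click, so a cold-load coverage report never counts it among the
// bytes shipped up front.
async function openShareModal(button) {
  // import() defers downloading the modal chunk until the user acts.
  const { ShareModal } = await import('./share-modal.js'); // hypothetical
  new ShareModal().open(button);
}

function wireShareButton(button) {
  // Only this small wiring function ships in the initial bundle.
  button.addEventListener('click', () => openShareModal(button));
}
```

This only works when the interactive code has a clean seam to split on, which is exactly the caveat above about framework code that is too integrated to isolate.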
Thank you @kristoferbaxter and @connorjclark. It looks like #13682 also has some similar score-dropping remarks.

While I understand it's the nature of PSI to change over time and, of course, not report identical results with each version release, from a broader perspective I am also questioning whether it is the best idea to have a red error appear for this, since it seems extremely rare, at least at present, that websites are able to actually resolve it. It pops up in the very first "Opportunities" section, beneath the overall page speed timing results, which is unfortunate, at least for anyone in our position as a publisher of an AMP solution.

I did run some tests on our Angular (9) codebase, and lazy-loaded modules are not flagged as unused JavaScript, as one would expect. Of course, the main bundle is. Images from https://amp.dev 's Lighthouse result: you can see that just about every single AMP component is being flagged.

Personally, I'm not really bothered by the scores. But it's my job to make sure adopters of AMP feel they're getting the performance bang for their buck. Trying to convince them that a brand-new red error and an "Estimated savings" time value don't affect their performance score, or their potential Google ranking, is in some cases a dealbreaker when AMP is touted as the extremely fast alternative. (I do recognize it's explicitly stated that these "Opportunities" metrics don't affect the performance score. The overall sentiment, though, is: if there's an "estimated savings", then surely the page should load that much faster if this were implemented "properly", and how could that not affect the overall score?)

I'd also like to say that I'm extremely thankful for the responses given. Ultimately, our answer to customers is the following: However disheartening, does this sound accurate? |
Agreed, we're considering increasing the threshold to something like 20KB unused per script. For this specific case, it'd cut the estimated savings to a third, which should reduce the implied actionability.
I might add that Lighthouse may reduce this estimation by raising the threshold. In hindsight, complaining about a few KB of unused JS was a bad idea.
I don't want to minimize the impact changes have on you; I understand it can be frustrating to see scores change overnight (we change the scoring every year or so, but this is perhaps the most significant change so far). We're using the new web.dev/vitals metrics as we all collectively learn how to better measure user experience. The hope is that the score is a better reflection of user experience.
Of course, happy to help :) |
I think this entire thread can be summed up with these insights. Thank you. Is there anywhere we can cast a vote for this to be taken into consideration? I believe the AMP project as a whole would benefit from this change. |
I think we're in agreement, already have a PR up: GoogleChrome/lighthouse#10906 |
We at AMPforWP have practically 180,000 users and possibly billions of AMP pages bothered by this issue; our support has been flooded because of the panic created among people by the drop in performance. I hope this PR, GoogleChrome/lighthouse#10906, will help us fix this problem :) |
@ahmedkaludi glad to know we weren't the only ones! Thanks for sharing |
any movement on this? |
Eagerly waiting for updates |
Looks like this was just merged for the next release of lighthouse. |
6.1 is in PSI now thanks to @jazyan 🎉 |
It is November 2020, and to date there are no updates from anywhere on how these Core Web Vitals account for unused JS loading times. Our site is showing a higher time for LCP and we are unable to solve it. Also, AMP pages do not allow serving font files locally; Font Awesome is slow at times. |
@suneel-code please take a look at our page experience checker: https://amp.dev/page-experience/?url=https://www.examtray.com/python/python-type-casting-or-type-conversion-tutorial?amp It'll give you plenty of advice on how to improve your performance. AMP pages also allow you to self-host font files. |
Re 2: measuring these things in a consistent manner is hard, as connection, server response times, CPU load, etc. might vary. They can only give you an indication, and the exact numbers might change. Side note: amp.dev/page-experience uses https://developers.google.com/speed/pagespeed/insights/ under the hood for measuring performance. |
I have configured my website's server (where almost everything is built with AMP) with nginx, Apache2, and Varnish, and managed to achieve a decent GTmetrix score. I think my efforts have been maximized, but until now I cannot find a proper reference to solve this problem. |
Even if you only take the boilerplate from AMP.dev, run AMP Optimizer, and simultaneously launch 6 Chrome windows to use the following web vitals measuring tools:
it's probably more efficient to focus on the speed-improvement advice given by these tools (like removing unused JavaScript to gain hundreds of milliseconds) rather than on the measurements themselves. |
I was really impressed with AMP from the start, but now (2021) I feel it has gotten worse; it's not like before. I had to remove it and use web standards instead. The speed is green now. |
@EndiHariadi43 AMP's page experience tools have a few suggestions for your site: You can see them here: https://amp.dev/page-experience/?url=https://nickgenom.com/en |
@nicolasfaurereboot I agree with your statement: synthetic testing tools like those mentioned are best used as a way to gain insight into improvements, not to compare results against one another. |
Since this issue has moved away from its original concerns, I'm going to mark the issue closed. Efforts are ongoing to reduce the size of AMP's JavaScript payloads, including Bento, AMP Compiler, and Module/NoModule mode. |
I found references to Bento:
What are the best references for the AMP Compiler (I found the AMP Closure Compiler, but I'm not sure that's what you mean) and the Module/NoModule mode? My web page is Jekyll-based; I'll explore how I can take advantage of those. |
Apologies @MrCsabaToth, I missed your response. The AMP Compiler is an ongoing project. Over time, Bento components will replace the current AMP components, and as a result documents will no longer need the runtime they currently load. |
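The Module/NoModule mode mentioned above is the standard `type="module"` / `nomodule` dual-serving pattern. A sketch of what it looks like in a document (the `.mjs` URL is an assumption about how an ES-module build of the runtime would be served, shown for illustration rather than as official AMP guidance):

```html
<!-- Modern browsers understand type="module" and skip nomodule scripts,
     so each browser downloads only one build of the runtime. -->
<script type="module" async src="https://cdn.ampproject.org/v0.mjs"></script>
<!-- Older browsers ignore type="module" and fall back to this script. -->
<script nomodule async src="https://cdn.ampproject.org/v0.js"></script>
```

The ES-module build can drop legacy transpilation output and polyfills, which is one way the total shipped (and potentially "unused") JavaScript shrinks for modern browsers.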
I used the WordPress AMP plugin and tested AMP, but it feels slow. I don't know what the problem is. I'm currently switching back to using a mobile-adaptive theme. Here is an example: https://www.honeybeecutting.com/ |
There can be many reasons for a site to feel slow. If you want to be seriously fast, then you either want to get closer to the bare metal (i.e., use a framework on your own instead of WordPress) or seriously restrict and cherry-pick WordPress plugins to cut your footprint, optimize images, etc.
It fetches the URL twice, once with a mobile user agent and once with a desktop user agent. The PageSpeed Insights score ranges from 0 to 100 points. A higher score is better, and a score of 85 or above indicates that the page is performing well. https://coloring-pages.io |
@westonruter any updates on that? |
@Haseeb717 no. I'm not working on this. |
I am troubled by the same problems, including unused JavaScript, Total Blocking Time, and Largest Contentful Paint. The scores always fluctuate between 70 and 80; I cannot get a higher score anymore! I never noticed this issue until I read this article, and I found my pages have similar problems. The score cannot reach 90. Example pages |
Same issue. I tried many times to improve my score and always got errors: "Remove duplicate modules in JavaScript bundles" (1.47 s) on the main page https://www.computer-pdf.com/ |
Hello, any update on this bug? We still have a bad score because of this. Regards |
Reproduction steps:
As an aside, AMP pages across the board seem to be scoring lower on the PSI report, and LCPs seem to fail the 2.5 s threshold across the board. I'm not reporting this LCP note as a bug, but it is relatively alarming to those who spend a lot of time optimizing their pages on AMP. Google Search Console also seems to report drastically lower LCP results in the "Web Vitals" report (2.x seconds, as opposed to 4.x when tested directly via PSI).