Want to increase my page speed by optimizing three.js code

Hi everyone :waving_hand:, I’m working on a project using three.js and I’m running into performance issues. My page speed score is quite low when I include my 3D model/scene. The model is already optimized (compressed to around 67 KB with Draco/Meshopt), but loading and rendering still seem to slow down the page. I’m using Next.js with TypeScript, React + three.js (via @react-three/fiber and drei), and hosting on Vercel. I’m not using any textures — it’s just a basic Principled BSDF material. For the environment, I’ve tried using a custom HDR (around 900 KB) and also tested with a cube map using separate images (each under 34 KB). Even with these optimizations, the page speed is still not great, and Lighthouse reports a Total Blocking Time of 13,070 ms.

Does anyone have suggestions or best practices for improving performance and page speed when using three.js in a production site (e.g., caching models, preloading assets, reducing bundle size, optimizing shaders, handling environment maps, etc.)? Any advice or examples from your experience would be really helpful!

1 Like

It takes time to decompress, and if you are using Draco through drei, then by default I think it loads the worker scripts from a CDN. You could load them locally.
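For example, with the plain GLTFLoader + DRACOLoader you can point the decoder at files you copy into your own public folder (a sketch; the /draco/ and model paths are just examples):

```ts
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

const scene = new THREE.Scene();

const dracoLoader = new DRACOLoader();
// Serve the decoder from your own domain (copy the files from
// three.js's examples/jsm/libs/draco/ into public/draco/).
dracoLoader.setDecoderPath('/draco/');

const gltfLoader = new GLTFLoader();
gltfLoader.setDRACOLoader(dracoLoader);

gltfLoader.load('/models/hero.glb', (gltf) => {
  scene.add(gltf.scene);
});
```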

Maybe a little off-topic, but in my perception the impact of page-loading speed on Google ranking is widely overestimated. Since different “cultures” use different thousand separators (“,”, “.”, “‘”, …): are you talking about 13 ms or about 13 seconds? 13 ms would be absolutely negligible, according to my personal experience with my blog.

The really impactful contributors to a high Google ranking are, again according to my limited experience:

  • originality of content (avoid “me too” content)
  • significant and meaningful tags
  • pictures/images with a significant description and/or alt-attribute description
  • cross-links that reference your contribution (easy if you can cross-link to your own blog-posts)

I’ve limited the following examples (all two-word searches) to ones that load multiple megabytes of data and take multiple seconds each to load. That is by no means meant as a recommendation, yet it shows that a page can make it to the top in spite of violating popular recommendations:

  1. Google “Desmo inside”. My page is on Google rank 1 and shows the following in Firefox’s developer tools, network analysis tab:

[Screenshot: Firefox network analysis, 2025-09-14 13:25]

  2. Google “Gespann Simulator”. My page is on Google rank 1 and shows the following in Firefox’s developer tools, network analysis tab:

[Screenshot: Firefox network analysis, 2025-09-14 13:32]

  3. Google “Federbein Axiallager”. My page is on Google rank 1 and shows the following in Firefox’s developer tools, network analysis tab:

[Screenshot: Firefox network analysis, 2025-09-14 13:47]

2 Likes

Sorry if this is a dumb question, but can you first load the page and then the various 3D-related things?

1 Like

Thanks for clarifying! Yes, Lighthouse reports 13,070 ms = ~13 seconds, not 13 ms. So it’s definitely a significant blocking time, and that’s what I’m trying to bring down.

I completely agree with you that originality, tags/alt text, and cross-linking have a much bigger impact on ranking than raw performance scores. My main concern here is more about user experience and ensuring the landing page feels responsive, especially since long blocking times delay interactivity.

1 Like

That’s a good suggestion, and I did try lazy-loading the 3D scene with next/dynamic / React Suspense. It definitely helps reduce TBT when the 3D isn’t critical, but in my case the model is the hero section, so I want it to appear instantly when the page loads.
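For context, this is roughly what I tried (a sketch; the Scene path and the Poster fallback are placeholders from my project):

```tsx
import dynamic from 'next/dynamic';

// Lightweight placeholder shown while the 3D bundle loads
// (e.g. a static render of the model).
function Poster() {
  return <img src="/hero-poster.jpg" alt="3D model preview" />;
}

// Load the 3D scene on the client only, after the initial HTML has rendered.
const Scene = dynamic(() => import('../components/Scene'), {
  ssr: false,
  loading: () => <Poster />,
});

export default function Home() {
  return (
    <main>
      <Scene />
    </main>
  );
}
```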

Thanks, that makes sense! Yes, the model is Draco-compressed, so the decompression step is probably what’s hitting my TBT. I wasn’t aware that drei’s GLTF loader pulls the Draco worker from a CDN by default — I’ll try hosting the decoder locally in /public/draco/ so it loads straight from my domain and doesn’t add extra network requests.
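Something like this is what I’m planning to try; if I’m reading drei’s API correctly, useGLTF accepts a local decoder path as its second argument (the model path is just an example):

```tsx
import { useGLTF } from '@react-three/drei';

export function HeroModel() {
  // Second argument: path to the Draco decoder files served from /public/draco/
  // instead of the default CDN (assuming I understand drei's signature correctly).
  const { scene } = useGLTF('/models/hero.glb', '/draco/');
  return <primitive object={scene} />;
}

// Optionally start fetching/decoding before the component mounts.
useGLTF.preload('/models/hero.glb', '/draco/');
```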

I might also test using an uncompressed GLB for comparison, since in my case the model is the hero section and instant load matters more than the smallest possible file size.

And I should mention I’m still pretty new to three.js, so I don’t fully understand how all the pieces work yet — what’s the best way to check where the actual bottleneck is (e.g. decompression vs. rendering vs. environment map)?

I concur that 13 seconds is indeed a significant blocking time, which comes with the risk of losing prospective visitors during the wait.

As @seanwasere correctly observed, loading from a CDN entails an unknown loading time of its own, which you could easily avoid* by hosting the required components locally.

*Obviously, you can’t “avoid” the loading time altogether, but you can move it into a realm you control.

TL;DR: use R3F Suspense, PerformanceMonitor, and/or a vanilla video poster. Reorder blocking async dependencies.
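A rough sketch of the first two items, with illustrative names (HeroModel is a stand-in for your own component):

```tsx
import { Suspense, useState } from 'react';
import { Canvas } from '@react-three/fiber';
import { PerformanceMonitor } from '@react-three/drei';

// Stand-in for your actual model component.
function HeroModel() {
  return (
    <mesh>
      <boxGeometry />
      <meshStandardMaterial />
    </mesh>
  );
}

export function Hero() {
  const [dpr, setDpr] = useState(1.5);
  return (
    <Canvas dpr={dpr}>
      <PerformanceMonitor
        // frame rate is healthy: raise the render resolution
        onIncline={() => setDpr(2)}
        // frame rate is dropping: lower the render resolution
        onDecline={() => setDpr(1)}
      >
        <Suspense fallback={null}>
          <HeroModel />
        </Suspense>
      </PerformanceMonitor>
    </Canvas>
  );
}
```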

Lighthouse is always available… you can personally audit WhiteHouse.gov (and design them a new logo)! But (with due regard to new AI results) the search formula has long favored domain reputation (age) and content quality (freshness). Lighthouse helps you verify why 13 seconds is too slow, and may suggest noscript fallbacks. Time to First Paint is ubiquitous, but by its nature the tool reserves a 100 score for a bare outline and penalizes any heavy resource… it can’t judge the role or structure of your content. :thinking: For 3D there are better inspector tools and community standards. Even PWAs face ongoing debate over what the ‘value’ is.

To reiterate: a score of 100 for barren content doesn’t trump the user-retention benefit of rich media (at the expense of loading). Similarly, offensive content may not impact the Lighthouse score, but in reality graders keep blacklists based on reported violations. Not to mention crawl errors from server downtime… or competing results that overwhelm your hero page and the reach of your backlinks. Be aware, but beware premature optimization. :thinking: If Amazon spent 3 years shaving off 3 milliseconds and saved 10 million visitors $10 billion… that doesn’t mean you should spend 3 months shaving off 3 seconds to save nobody anything.

Paul “Node Graph” Ears

My website loads fast; it doesn’t actually take 13 seconds to load. But when I run Google PageSpeed on it, it reports a blocking time of 13 seconds.

You can use the network tab in your developer tools to see if the problem is network-related.
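Beyond the network tab, you can bracket the loader with the standard Performance API to see how long the whole load takes versus the pure download time shown in the network tab (paths and labels below are just examples; decoder setup omitted):

```ts
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const loader = new GLTFLoader(); // Draco/Meshopt decoder setup omitted here

performance.mark('gltf-start');
loader.load('/models/hero.glb', (gltf) => {
  performance.mark('gltf-end');
  // The measure covers download + parse/decode; whatever exceeds the
  // network tab's download time is roughly the main-thread decode cost.
  performance.measure('gltf load', 'gltf-start', 'gltf-end');
  console.table(performance.getEntriesByType('measure'));
  scene.add(gltf.scene);
});
```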

Yes, I tried decoding my model using local Draco files and got some improvement, but not too much; the PageSpeed score is now around 51.

Here is the website’s link: https://3d-portfolio-page-speed.vercel.app

13 seconds is really long for loading ~2 MB of data.
Maybe your host is slow?

edit: I just hit your link and it loads for me in 480 ms, so under half a second. Perhaps you are located far from your host, or you just have a slow-ish connection?

1 Like

I also tested on other Wi-Fi networks, and since my average speed is around 500 Mbps, I don’t think the connection is the issue. I’ve also checked my host uptime, and that’s not the problem either. I even checked the network tab, React Profiler, and performance tools, and everything in my hero section seems to work great—so I’m not sure what the bottleneck is. I need to explore other ways to optimize my hero section to achieve better page speed.

I have successfully improved page speed by reducing model complexity. I am now using Meshopt compression, which is less CPU intensive and has a very fast, lightweight decoding process. This reduces main-thread load and improves GPU performance.
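For anyone reading later, this is roughly the vanilla wiring for Meshopt-compressed GLBs (drei’s useGLTF handles this automatically, as far as I know; the model path is illustrative):

```ts
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { MeshoptDecoder } from 'three/examples/jsm/libs/meshopt_decoder.module.js';

const scene = new THREE.Scene();

const loader = new GLTFLoader();
// The Meshopt decoder ships with three.js; no separate worker download is needed.
loader.setMeshoptDecoder(MeshoptDecoder);

loader.load('/models/hero-meshopt.glb', (gltf) => {
  scene.add(gltf.scene);
});
```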

2 Likes

If you are not already doing so, use WebGPU for rendering. I don’t think that will help with loading times though.

Correct. WebGPU won’t help with load times.

Meshopt is pretty fantastic. To clarify: it is less “mesh compression” and more mesh quantization. Instead of storing vertex data in floating point, it looks at the distribution and chooses a numeric format that approximates the original data with minimal loss. This helps not only with file size (a 2×–4× or greater reduction) but also with performance, since these vertex formats are not decoded at load time. They are sent directly to the GPU, which expands them to float in hardware. So you get that 2×–4× reduction in vertex bandwidth as well, meaning you can fit a lot more mesh data on the GPU.
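If it helps, here is a small offline sketch of applying that kind of quantization with glTF-Transform, assuming @gltf-transform/core and @gltf-transform/functions are installed (file names are placeholders, and this runs as a one-off Node script, not in the app bundle):

```ts
import { NodeIO } from '@gltf-transform/core';
import { quantize } from '@gltf-transform/functions';

const io = new NodeIO();
const document = await io.read('hero.glb');

// Re-encode positions/normals/UVs into compact integer formats
// (KHR_mesh_quantization), which the GPU expands back to float in hardware.
await document.transform(quantize());

await io.write('hero-quantized.glb', document);
```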

Switching to WebGPURenderer isn’t a bad idea, at least in terms of future-proofing, but I wouldn’t commit to an entire custom WebGPU pipeline if you are already using your own shaders. You would have to convert them, and you may end up not being able to back out when you (inevitably) encounter bugs, since WebGPURenderer is still very much a work in progress. Best case, you end up slightly future-proofed; worst case, you’re stuck maintaining two different versions of your custom shaders.

In WebGPU, the WGSL shaders won’t change, will they? And what about simple compute pipelines? I would think those are so basic that they won’t change.

WGSL isn’t interchangeable with GLSL, so if you convert your shaders to WGSL, you won’t be able to use them with WebGLRenderer, and you may end up stalled on a bug in WebGPURenderer until it gets addressed.
Either that, or you have to maintain two versions: a WGSL and a GLSL version of your custom shaders. Also, WebGPU support on mobile is still limited (see the “WebGPU” support table on Can I use).

Compute pipelines are also done differently in WebGL vs WebGPU afaik.

If you’re not using any shaders/compute pipelines though, it’s pretty risk free since switching renderers is literally just swapping WebGLRenderer with WebGPURenderer.
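A minimal sketch of that swap, assuming a recent three.js release that exposes WebGPURenderer from the three/webgpu entry point:

```ts
import { WebGPURenderer } from 'three/webgpu';

const renderer = new WebGPURenderer({ antialias: true });

// Unlike WebGLRenderer, WebGPU initialization is asynchronous;
// wait for it before starting the render loop.
await renderer.init();

document.body.appendChild(renderer.domElement);
```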

I could be wrong about some of this, though, since I don’t have a lot of WebGPU experience yet.

2 Likes

Hi everyone, here’s my current Three.js setup. I’d love any suggestions or best practices to improve performance and increase page speed.

Scene.tsx (697 Bytes)

ResponsiveCamera.tsx (897 Bytes)

Model.tsx (1.5 KB)

index.tsx (1.2 KB)