Replies: 4 comments 7 replies
-
From a glance, in the first version of your function the […]. Could you share the EXPLAIN execution plans of both versions?
```sql
create or replace function auth.uid()
returns uuid
language sql stable
as $$
  select
    coalesce(
      nullif(current_setting('request.jwt.claim.sub', true), ''),
      (nullif(current_setting('request.jwt.claims', true), '')::jsonb ->> 'sub')
    )::uuid
$$;
```
-
Hello, just circling back to this one. I've found a solution for the RLS performance issue with […]. Basically we cache the […].
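(For context, one commonly recommended caching trick for this class of RLS slowdown is to wrap `auth.uid()` in a scalar subquery inside the policy, so Postgres evaluates it once per statement as an InitPlan instead of once per row. This is a sketch of that general technique, not necessarily the poster's exact fix; the `todos` table and policy name are hypothetical:)

```sql
-- Hypothetical table and policy; the (select ...) wrapper is the key part.
create policy "Users can read own todos"
on todos
for select
to authenticated
using ( (select auth.uid()) = user_id );  -- subquery is evaluated once, not per row
```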
-
Hi @steve-chavez , we also have problems with high CPU usage in our project. We use only plpgsql functions with temporary tables in our project. Do you think this is a big problem? If so, how can we improve the performance? All our logic is in the database. Thanks a lot in advance.
-
Was this ever solved, or did anyone figure out a more general solution? I am running into another performance issue with […].
-
Hi all,
I had a few questions about the auth.uid() helper provided by Supabase: what exactly does it do, and what is the best way to test queries that use it? The reason I'm asking is that we had to write some RPCs to implement changes to our application, and this change also increased Postgres API requests from our application (which was intended). Everything was working well in our stage environment, so we rolled out to production. On our production database, CPU utilisation rose to 99% at peak time, even with an XL database add-on. We only have about 10,000 users total and maybe only 300 online at peak time.
First we investigated our RPC queries using pgMustard, which was really useful for optimising with indexing, but RPC query times were still slow and CPU utilisation still high.
We spent most of the day adding indexes where we could, but our network response times when using RPCs were still tremendously high (all other PostgREST requests from the Supabase client were performing well), even once CPU had freed up to around 30%.
For example when using the following RPC:
The network delay waiting for the server response was 10.24 seconds:
It was a tremendous delay for a query that was not very complicated, and if our RLS policies had been contributing to this time then we'd expect to see this delay on all our queries, not just those using RPCs.
However, when we were testing our queries with explain analyze and putting them into pgMustard, we'd mostly been replacing the auth.uid() helper, since there wasn't any guidance we could find about performance testing with this helper. We figured we could rewrite the query and see its impact on performance, first in stage and then in production.
We added a variable to the RPC and then set it as the auth.uid() helper. This is what we changed the RPC to:
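(The change described here, reading auth.uid() once into a variable instead of calling it inline, looks roughly like this sketch. The `get_my_orders` function and `orders` table are hypothetical, not the poster's actual RPC:)

```sql
-- Sketch of the "cache auth.uid() in a variable" pattern.
create or replace function get_my_orders()
returns setof orders
language plpgsql stable
as $$
declare
  -- Evaluated once at function entry, instead of per row / per policy check
  current_uid uuid := auth.uid();
begin
  return query
    select *
    from orders
    where user_id = current_uid;
end;
$$;
```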
First we tested in stage to be sure it worked and there was a very small improvement on network time, but it worked so we rolled it to production.
The network delay when waiting for server response reduced significantly from 10.24 seconds to 381.39 milliseconds:
With this performance improvement, we decided to assign the auth.uid() helper to a variable in all our RPCs and saw significant improvements across all of them.
One RPC saw a reduction from 59 seconds to 251.28 milliseconds, and another decreased from 10.04 seconds to 555.98 milliseconds. After these improvements our CPU utilisation also decreased: at peak times it is now around 30% and at its lowest 7%, whereas before our lowest average utilisation was 30%, maxing out at 99%.
Why is the auth.uid() helper slow when used in RPCs, and is this documented anywhere?
I was hoping as well that there could be a guide around using explain analyze with Supabase helper methods, this would have really helped with finding this issue and would be really useful for everyday use, even with regular queries.
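(One way to run explain analyze with the helper still in place, rather than substituting a literal, is to set the `request.jwt.claims` GUC that auth.uid() reads and switch to the `authenticated` role for a single transaction. A sketch, assuming a test user's uuid and a hypothetical `orders` table:)

```sql
begin;
-- Impersonate an authenticated user for this transaction only
-- (set_config with is_local = true resets at commit/rollback)
select set_config(
  'request.jwt.claims',
  '{"sub": "00000000-0000-0000-0000-000000000000", "role": "authenticated"}',
  true
);
set local role authenticated;

explain analyze
select * from orders where user_id = auth.uid();

rollback;
```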
It would also be really useful to be able to trace requests through the system, so we could see where they get bogged down in live traffic.
Are there any resources that could be recommended to better enhance performance and observability on Supabase?