fix: Migration error with different auth schema #1983
willnode wants to merge 1 commit into supabase:master
Conversation
I have confirmed that executing this against the DB directly makes it healthy:

```sql
alter type factor_type add value 'phone';
alter table auth.mfa_factors add column if not exists phone text unique default null;
alter table auth.mfa_challenges add column if not exists otp_code text null;
create unique index if not exists unique_verified_phone_factor on auth.mfa_factors (user_id, phone);
insert into public.schema_migrations (version) values ('20240729123726');
```
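A minimal sketch of applying that workaround programmatically, in case it has to be repeated across environments. The connection object and the idea of running it from Python are illustrative assumptions (any PostgreSQL driver such as psycopg would do); the statements themselves are the ones from the comment above. Note that `ALTER TYPE ... ADD VALUE` has transaction restrictions in older PostgreSQL versions, so running the statements individually is the safer choice:

```python
# Statements copied from the manual workaround above.
WORKAROUND_STATEMENTS = [
    "alter type factor_type add value 'phone'",
    "alter table auth.mfa_factors"
    " add column if not exists phone text unique default null",
    "alter table auth.mfa_challenges"
    " add column if not exists otp_code text null",
    "create unique index if not exists unique_verified_phone_factor"
    " on auth.mfa_factors (user_id, phone)",
    "insert into public.schema_migrations (version)"
    " values ('20240729123726')",
]

def apply_workaround(conn) -> int:
    """Execute each workaround statement on an open DB-API connection.

    `conn` is assumed to be a live connection (e.g. from psycopg);
    returns the number of statements executed.
    """
    with conn.cursor() as cur:
        for stmt in WORKAROUND_STATEMENTS:
            cur.execute(stmt)
    conn.commit()
    return len(WORKAROUND_STATEMENTS)
```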
Hmm, this seems off. Are you specifying your schema in the migrations correctly?
Here's my compose file: https://github.com/willnode/supabase-docker/blob/main/docker-compose.yml. I do not set "DB_NAMESPACE", and I do not set the default Postgres search path to "auth", since I want to use my external Postgres database. The only thing preventing me from using my external database is this PR. I could reverse this change so the enum would be namespaced correctly, but that would mean adding the namespace for enum types starting from migrations/20221003041349_add_mfa_schema.up.sql and in every migration file after it.
OK, I understand why the "default" way to install in Docker works: the search path is set in the script (line 11 in 38eb4bb). We do not set up a separate Postgres account for each Docker service, so we don't have the opportunity to set this. We just use one Postgres account for all services in the compose file. Setting the search path is not a reasonable option in this case.
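When a per-role `ALTER USER ... SET search_path` isn't an option, libpq-style connection URIs can carry the search path as a per-connection server option instead. A small sketch of building such a URL (the host, credentials, and schema name here are placeholders, not values from this thread):

```python
from urllib.parse import quote

def with_search_path(base_url: str, schema: str) -> str:
    """Append a libpq `options` parameter that sets search_path
    for this connection only (no role- or DB-level change needed)."""
    sep = "&" if "?" in base_url else "?"
    # "-c search_path=<schema>" is a standard libpq startup option;
    # it must be percent-encoded inside the URI query string.
    return f"{base_url}{sep}options={quote(f'-csearch_path={schema}')}"

url = with_search_path("postgres://auth_admin:secret@db:5432/postgres", "auth")
```

This keeps the schema choice in the service's `DATABASE_URL`, which fits the "one Postgres account for all services" setup described above.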
## What kind of change does this PR introduce?

Bug fix for supabase#1729, supabase#1848, supabase#1983, and supabase#2040, with an additional type fix.

## What is the current behavior?

The auth service cannot be deployed in a net-new environment on PostgreSQL 17.

## What is the new behavior?

The service runs properly with PostgreSQL 17 in a cleanroom environment.

## Additional context

Here is a redacted version of the Terraform I used to deploy it. I used my own container build with these fixes, `ghcr.io/siennathesane/auth:v2.175.0`, which you can use to verify the fix is valid, if you want.

```hcl
locals {
  f2-auth-db-namespace = "auth"
}

resource "kubernetes_service_account" "f2-auth" {
  metadata {
    name      = "f2-auth"
    namespace = var.namespace
  }
}

resource "kubernetes_manifest" "f2-auth-db" {
  manifest = {
    "apiVersion" = "postgresql.cnpg.io/v1"
    "kind"       = "Database"
    "metadata" = {
      "name"      = "f2-auth-db"
      "namespace" = var.namespace
    }
    "spec" = {
      "cluster" = {
        "name" = kubernetes_manifest.f2-cluster.object.metadata.name
      }
      "allowConnections" = true
      "name"             = local.f2-auth-db-namespace
      "owner"            = kubernetes_secret_v1.f2-auth-db.data.username
      "schemas" = [{
        "name"  = local.f2-auth-db-namespace
        "owner" = kubernetes_secret_v1.f2-auth-db.data.username
      }]
    }
  }
}

resource "kubernetes_config_map_v1" "f2-auth-initdb" {
  metadata {
    name      = "sql-commands"
    namespace = var.namespace
  }
  data = {
    "script.sql" = <<-EOT
      ALTER USER ${kubernetes_secret_v1.f2-auth-db.data.username} WITH LOGIN CREATEROLE CREATEDB REPLICATION BYPASSRLS;
      GRANT ${kubernetes_secret_v1.f2-auth-db.data.username} TO postgres;
      CREATE SCHEMA IF NOT EXISTS ${local.f2-auth-db-namespace} AUTHORIZATION ${kubernetes_secret_v1.f2-auth-db.data.username};
      GRANT CREATE ON DATABASE postgres TO ${kubernetes_secret_v1.f2-auth-db.data.username};
      ALTER USER ${kubernetes_secret_v1.f2-auth-db.data.username} SET search_path = '${local.f2-auth-db-namespace}';
    EOT
  }
}

resource "kubernetes_secret_v1" "f2-auth-db" {
  metadata {
    name      = "auth-db"
    namespace = var.namespace
    labels = {
      "cnpg.io/reload" = "true"
    }
  }
  data = {
    username = "[REDACTED]"
    password = random_password.f2-auth-db-password.result
    database = "auth"
  }
  type = "kubernetes.io/basic-auth"
}

resource "kubernetes_secret_v1" "f2-auth-jwt" {
  metadata {
    name      = "auth-jwt"
    namespace = var.namespace
  }
  data = {
    anonKey    = "[REDACTED]"
    secret     = "[REDACTED]"
    serviceKey = "[REDACTED]"
  }
  type = "Opaque"
}

resource "random_password" "f2-auth-db-password" {
  length  = 16
  special = false
}

resource "kubernetes_deployment_v1" "f2-auth" {
  depends_on = [kubernetes_manifest.f2-auth-db]
  timeouts {
    create = "2m"
  }
  metadata {
    name = "f2auth"
    labels = {
      "f2.pub/app" = "auth-${var.environment}"
    }
    namespace = var.namespace
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        "f2.pub/app" = "auth-${var.environment}"
      }
    }
    template {
      metadata {
        labels = {
          "f2.pub/app" = "auth-${var.environment}"
        }
      }
      spec {
        image_pull_secrets {
          name = var.ghcr-pull-secret-name
        }
        init_container {
          name    = "init-db"
          image   = "postgres:17-alpine"
          command = ["psql", "-f", "/sql/script.sql"]
          env {
            name  = "PGHOST"
            value = "${kubernetes_manifest.f2-cluster.object.metadata.name}-rw"
          }
          env {
            name  = "PGPORT"
            value = "5432"
          }
          env {
            name  = "PGDATABASE"
            value = kubernetes_secret_v1.f2-auth-db.data.database
          }
          env {
            name  = "PGUSER"
            value = kubernetes_secret_v1.f2-auth-db.data.username
          }
          env {
            name  = "PGPASSWORD"
            value = kubernetes_secret_v1.f2-auth-db.data.password
          }
          volume_mount {
            name       = "sql-volume"
            mount_path = "/sql"
          }
        }
        volume {
          name = "sql-volume"
          config_map {
            name = kubernetes_config_map_v1.f2-auth-initdb.metadata[0].name
          }
        }
        container {
          image             = "ghcr.io/siennathesane/auth:${var.goauth-version}"
          image_pull_policy = "Always"
          name              = "auth"
          resources {
            limits = {
              cpu    = "0.5"
              memory = "512Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "50Mi"
            }
          }
          port {
            name           = "http"
            container_port = 9999
            protocol       = "TCP"
          }
          env {
            name  = "GOTRUE_DB_DRIVER"
            value = "postgres"
          }
          env {
            name  = "DB_NAMESPACE"
            value = "auth"
          }
          env {
            name  = "DATABASE_URL"
            value = "postgres://${kubernetes_secret_v1.f2-auth-db.data.username}:[REDACTED]@${kubernetes_manifest.f2-cluster.object.metadata.name}-rw:5432/${kubernetes_secret_v1.f2-auth-db.data.database}"
          }
          env {
            name = "GOTRUE_JWT_SECRET"
            value_from {
              secret_key_ref {
                name = "auth-jwt"
                key  = "secret"
              }
            }
          }
          env {
            name  = "API_EXTERNAL_URL"
            value = "http://[REDACTED]"
          }
          env {
            name  = "GOTRUE_SITE_URL"
            value = "http://[REDACTED]"
          }
          env {
            name  = "GOTRUE_API_HOST"
            value = "0.0.0.0"
          }
          env {
            name  = "PORT"
            value = "9999"
          }
        }
      }
    }
  }
}
```

Closes supabase#1729
Closes supabase#1848
Closes supabase#1983
Closes supabase#2040

Signed-off-by: Sienna Satterwhite <[email protected]>
Co-authored-by: Chris Stockton <[email protected]>
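The deployment above wires a handful of environment variables into the auth container. As a pre-flight sanity check before rolling out a similar manifest, the expected keys can be verified against the environment. The variable list below is inferred from this particular manifest, not an authoritative statement of the service's required configuration:

```python
# Keys taken from the deployment manifest above; treat the list as
# illustrative for this setup, not the service's full requirements.
EXPECTED_KEYS = [
    "GOTRUE_DB_DRIVER",
    "DB_NAMESPACE",
    "DATABASE_URL",
    "GOTRUE_JWT_SECRET",
    "API_EXTERNAL_URL",
    "GOTRUE_SITE_URL",
    "GOTRUE_API_HOST",
    "PORT",
]

def missing_keys(env: dict) -> list:
    """Return the expected variables that are absent or empty in `env`."""
    return [k for k in EXPECTED_KEYS if not env.get(k)]
```

In a container entrypoint this could be called with `os.environ` and made to fail fast, surfacing configuration gaps before the service hits the database.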

## What kind of change does this PR introduce?

Fixes #1729

## What is the current behavior?

The migrations do not play nicely when the default schema is anything other than "auth".

## What is the new behavior?

The migrations run cleanly regardless of the default schema.