Just a Theory

By David E. Wheeler


Mini Summit 5 Transcript: Improving the PostgreSQL Extensions Experience in Kubernetes with CloudNativePG

Orange card with large black text reading “Extension Management in CNPG”. Smaller text below reads “Gabriele Bartolini (EDB)” and the date, “05.07.2025”.

The final PostgreSQL Extension Mini-Summit took place on May 7. Gabriele Bartolini gave an overview of PostgreSQL extension management in CloudNativePG (CNPG). This talk brings together the topics of several previous Mini-Summits — notably Peter Eisentraut on implementing an extension search path — to look at the limitations of extension support in CloudNativePG and the possibilities enabled by the extension search path feature and the Kubernetes 1.33 ImageVolume feature. Check it out:

Or read on for the full transcript with thanks to Floor Drees for putting it together.

Introduction

Floor Drees.

On May 7 we hosted the last of five (5) virtual Mini-Summits leading up to the big one at the Postgres Development Conference (PGConf.Dev), taking place next week in Montréal, Canada. Gabriele Bartolini, CloudNativePG maintainer, PostgreSQL Contributor, and VP Cloud Native at EDB, joined to talk about improving the Postgres extensions experience in Kubernetes with CloudNativePG.

The organizers:

The stream and the closed captions available for the recording are supported by PGConf.dev and their gold level sponsors, Google, AWS, Huawei, Microsoft, and EDB.

Improving the Postgres extensions experience in Kubernetes with CloudNativePG

Gabriele Bartolini.

Hi everyone. Thanks for this opportunity, and thank you Floor and David for inviting me today.

I normally start every presentation with a question, and this is actually the question that has been hitting me and the other maintainers of CloudNativePG — and some are in this call — from the first day. We know that extensions are important in Postgres and in Kubernetes, and we’ve always been asking how we can deploy extensions without breaking the immutability of the container.

So today I will basically be telling our story, and hopefully providing good insights into how, with CloudNativePG, we are trying to improve the experience of Postgres extensions when running databases, including the issues we face.

I’ve been using Postgres for 25 years. I’m one of the co-founders of 2ndQuadrant, which was bought by EDB in 2020. And because of my contributions, I’ve been recognized as a Postgres contributor and I’m really grateful for that. I’m also a “Data on Kubernetes ambassador”; my role is to promote the usage of stateful workloads in Kubernetes. I’m also a DevOps evangelist. I always say this: DevOps is the reason why I encountered Kubernetes, and it will also be the reason why I move away from Kubernetes one day. It’s about culture, and I’ll explain this later.

In the past I worked on Barman; I’m one of the creators of Barman. And since 2022, I’ve been one of the maintainers of CloudNativePG. I want to thank my company, EDB, for being the major contributor in Postgres history in terms of source code. We are also the creators of CloudNativePG. And as we’ll see, the company donated the IP to the CNCF, which is quite rare, and I’m really grateful for that.

What I plan to cover tonight is first, set the context and talk about immutable application containers, which have been kind of a dogma for us from day one. Then, how we are handling right now extensions in Kubernetes with CNPG. This is quite similar to the way other operators deal with it. Then the future and key takeaways.

First, we’re talking about Kubernetes. If you’re not familiar, it’s an orchestration system for containers. It’s not just an executor of containers, but a complex system that also manages infrastructure. When it manages infrastructure, it also manages cloud native applications, also called workloads. When we’re thinking about Postgres in Kubernetes, the database is a workload like the others. That, I think, is the most important mind shift among Postgres users; I have faced it myself, in that I’ve always treated Postgres differently from the rest. Here in Kubernetes, it’s just another workload.

Then of course, it’s not like any other workload, and that’s where operators come into play. I think the work that we are doing, even tonight, is in the direction of improving how databases are run in Kubernetes in general, and for everyone.

Kubernetes was open-sourced in 2014 and it’s owned by the CNCF. It’s actually the first project that graduated; graduated is the most advanced stage in the graduation process of the CNCF, which starts with sandbox, then incubation, and then graduation.

CloudNativePG is an operator for Postgres. It’s production-ready — what we say is level five. Level five is kind of a utopic, unbounded level, the highest one as defined by the operator development framework. It’s used by all these players, including Tembo, IBM Cloud Paks, Google Cloud, Azure, Akamai, and so on. CNPG has been a CNCF project since January. It’s distributed under the Apache License 2.0, and the IP — the intellectual property — is owned by the community and protected by the CNCF. It is therefore a vendor-neutral and openly governed project. This is kind of a guarantee that it will always be free. This is also, in my opinion, a differentiator between CloudNativePG and the rest.

The project was originally created by EDB, but specifically at that time, by 2ndQuadrant. And, as I always like to recall, it was Simon Riggs that put me in charge of the initiative. I’ll always be grateful to Simon, not only for that, but for everything he has done for me and the team.

CNPG can be installed in several ways. As you can see, it’s very popular in terms of stars. There are more than 4,000 commits. And what’s impressive is the number of downloads in three years — 78 million — which means that it’s used the way we wanted it to be used: with CI/CD pipelines.

This is the CNCF landscape; these are the CNCF projects. As you can see, there are only five projects in the CNCF in the database area, and CloudNativePG is the only one for Postgres. Our aim for 2025 and 2026 is to become incubating. If you’re using CNPG and you want to help with the process, get in touch with me and Floor.

To understand why we’ve gone through this whole process — the one that led to the patch you’ve seen in Postgres 18 — I think it’s important to understand what cloud native has meant to us since we started in 2019. We’ve got our own definition, but I think it still applies. For us, cloud native is three things. First, it’s people that work following the DevOps culture. For example, there are some capabilities that come from DevOps that apply to the cloud native world. I selected some of them, like user infrastructure, infrastructure abstraction, and version control. These three form the infrastructure-as-code principle, together with declarative configuration.

Also a shift left on security: you’ll see that with CloudNativePG we rarely mention security, because it’s pretty much everywhere. It’s part of the process. Then continuous delivery.

The second item is immutable application containers, which kind of led to the immutable way of thinking about extensions. And the third is that these application containers must be orchestrated via infrastructure-as-code by an orchestrator, and the standard right now is Kubernetes.

For us it’s these three things, and without any of them, you cannot achieve cloud native.

So what are these immutable application containers? To explain immutability, I’d first like to talk about mutable infrastructure, which is probably what the majority of people who have historically worked with Postgres are used to. I’m primarily referring to traditional environments like VMs and bare metal, where the main way we deploy Postgres is through packages, maybe even managed by configuration managers, but still, packages are the main artifacts. The infrastructure is seen as a long-term kind of project. Changes happen over time and are incremental updates on an existing infrastructure. So if you want to know the history of the infrastructure, you need to check all the changes that have been applied. In case of failure, systems are healed. That’s the pets concept that comes from DevOps.

On the other hand, immutable infrastructure relies on OCI container images. OCI is a standard, the Open Container Initiative, and it’s part of the Linux Foundation as well. Immutable infrastructure is founded on continuous delivery, which is the foundation of GitOps practices. In an immutable infrastructure, releasing a new version of an application is not updating the system’s application; it is building a new image, publishing it on a registry, and then deploying it. Changes in the system happen in an atomic way: the new version of a container is pulled from the registry and the existing image is almost instantaneously replaced by the new one. This is true for stateless applications; as we’ll see, in the case of stateful applications like Postgres it’s not that instantaneous, because we need to perform a switchover or restart — in any case, generate a downtime.

When it comes to Kubernetes, the choice was kind of obvious to go towards that immutable infrastructure. So no incremental updates, and in the case of stateful workloads where you cannot change the content of the container, you can use data volumes or persistent volumes. These containers are not changed. If you want to change even a single file or a binary in a container image, you need to create a new one. This is very important for security and change management policies in general.

But what I really like about this way of managing our infrastructure is that, at any time, Kubernetes knows exactly what software is running in your infrastructure. All of this is versioned in an SCM, like Git or whatever. This is something that in the mutable world is less easy to obtain. Again, for security, this is the foundational thing, because this is how you can control CVEs, the vulnerabilities in your system. This is a very basic representation of the lifecycle of a container image: you create a Dockerfile, you put it in Git, for example, then there’s an action or a pipeline that creates the container image, maybe even runs some tests, and then pushes it to the container registry.
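[A minimal GitHub Actions sketch of that lifecycle — the actions are standard, but the image name is illustrative:]

name: build-image
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # Build from the repository’s Dockerfile, then push to the registry.
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ghcr.io/example-org/example-image:${{ github.sha }}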

I walked you through the concepts of mutable and immutable infrastructure, so what are these immutable application containers? If you go back and read what we were writing before CloudNativePG was famous, or was even used, we were always including immutable application containers as one of the principles we could not lose.

For an immutable application container, it means that there’s only a single application running; that’s why it’s called “application”. If you have been using Docker, you may be more familiar with system containers: you run a Debian system, you just connect, and then you start treating it like a VM. Application containers are not like that. And then they are immutable — read-only — so you cannot make any change or perform updates of packages. But in CloudNativePG, because we are managing databases, we need to put the database files in separate persistent volumes. Persistent volumes are standard resources provided by Kubernetes. This is where we put PGDATA and, if you want, a separate volume for WAL files with different storage specifications, and even an optional number of tablespaces.
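[A minimal sketch of that layout in a CloudNativePG Cluster resource — names and sizes are illustrative:]

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  # PGDATA lives on its own persistent volume...
  storage:
    size: 10Gi
  # ...optionally with a separate volume for WAL files...
  walStorage:
    size: 5Gi
  # ...and optional dedicated volumes for tablespaces.
  tablespaces:
    - name: analytics
      storage:
        size: 20Gi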

CloudNativePG orchestrates what we call “operand images”. These are very important to understand. They contain the Postgres binaries and they’re orchestrated via what we call the “instance manager”. The instance manager is just the process that runs and controls Postgres; it’s the PID 1 — the entry point — of the container.

There’s no other application at work, like SSHD or anything else; there’s just the instance manager, which then controls everything else. And this is the project for the operand images. It’s an open source project, and every week we rebuild the Postgres containers. We recently made some changes to the flavors of these images, and I’ll talk about that shortly.

We mentioned the database, we mentioned the binaries, but what about extensions? This is the problem. Postgres extensions in Kubernetes with CloudNativePG is the next section, and it’s kind of a drama. I’m not hiding this. The way we are managing extensions in Kubernetes right now, in my opinion, is not enough. It works, but it’s got several limitations — mostly limitations in terms of usage.

For example, we cannot place them in the data files or in persistent volumes, because these volumes are not read-only in any way; in any case, they cannot be strictly immutable. So we discarded the option of a persistent volume where you could deploy extensions, maybe even downloading them on the fly or using the package manager — those kinds of operations. We discarded this from the start and embraced the operand image solution. Essentially, what we did was place these extensions in the same operand image that contains the Postgres binaries. This is the typical approach of the other operators too. Thinking of Zalando, we call it “the Spilo way”: Spilo contained all the software that would run with the Zalando operator.

Our approach was a bit different, in that we wanted lighter images, so we created a few flavors of images and selected some extensions to place in them. But in general, we recommended building custom images. We provided instructions, and we’ve also provided the requirements to build container images. But as you can see, the complexity of the operational layer is quite high; it’s not reasonable to ask any user or any customer to build their own images.

This is how they look now, although this is changing as I was saying:

A stack of boxes with “Debian base image” at the top, then “PostgreSQL”, then “Barman Cloud”, and finally three “Extension” boxes at the bottom.

You’ve got a base image, for example, the Debian base image. You deploy the Postgres binaries. Then — even right now, though it’s changing — CloudNativePG requires Barman Cloud to be installed. And then we install the extensions that we think are needed. For example, I think we distribute pgAudit, if I recall correctly, pgvector, and pg_failover_slots. With every layer you add, of course, the image gets heavier, and we still rely on packages for most extensions.

The problem is, you’ve got a cluster that is already running and you want, for example, to test an extension that’s just come out, or you want to deploy it in production. If that extension is not part of the images that we build, you have to build your own image. Because of the number of possible combinations of extensions, it’s impossible to build all of them. You could build, for example, a system that allows you to select the extensions you want and then builds the image, but in our way of thinking, this was not the right approach. And then you’ve got system dependencies, and if an extension brings a vulnerability, it affects the whole image and requires more updates — not just of the cluster, but also rebuilds of the image.

We wanted to do something else, but we immediately faced limitations in two technologies: one in Postgres, the other in Kubernetes. In Postgres, extensions need to be placed in a single folder. It’s not possible to define multiple locations, but thanks to the work that Peter and his team have done, now we’ve got extension_control_path in version 18.

Kubernetes, until ten days ago, did not allow mounting OCI artifacts as read-only volumes. There’s a new feature, now part of Kubernetes 1.33, that allows us to do it.
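[A bare-bones example of the feature: with the ImageVolume feature gate enabled, a pod can mount an OCI image as a read-only volume. The image references are illustrative:]

apiVersion: v1
kind: Pod
metadata:
  name: image-volume-example
spec:
  containers:
    - name: postgres
      image: ghcr.io/cloudnative-pg/postgresql-trunk:18-devel
      volumeMounts:
        - name: pgvector
          mountPath: /extensions/pgvector
          readOnly: true
  volumes:
    - name: pgvector
      # The new volume type: the OCI image is pulled and mounted read-only.
      image:
        reference: ghcr.io/cloudnative-pg/pgvector-18-testing:latest
        pullPolicy: IfNotPresent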

This is the patch that I was talking about, by Peter Eisentraut. I’m really happy that CloudNativePG is mentioned as one of the use cases. And there’s also a mention of the work that me, David, and Marco — and primarily Marco and Niccolò from CloudNativePG — have done.

This is the patch that introduced the ImageVolumeSource in Kubernetes 1.33.

The idea is that with Postgres 18 we can now set, in the configuration, where to look for extensions in the file system. And then, if there are libraries, we can also use the existing dynamic_library_path GUC.
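[For example, with an extension’s files mounted under /extensions/pgvector, the two GUCs might be set along these lines — the directory layout is illustrative, and in the pilot described below CloudNativePG sets these automatically:]

extension_control_path = '$system:/extensions/pgvector/share'
dynamic_library_path = '$libdir:/extensions/pgvector/lib'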

So, you remember, this is where we come from [image above]; the good thing is we have the opportunity to build Postgres images that are minimal, that only contain Postgres.

Three stacks of boxes. On the left, “Debian base image” on top of “PostgreSQL”. On the right, “Debian base image” on top of “Barman Cloud”. On the lower right, a single box for an extension.

Instead of recreating them every week — because it’s very likely that some dependency has a CVE, so we recreate them for everyone, forcing everyone to update their Postgres systems — we can now release them maybe once a month, pretty much following the Postgres patch release cadence, and release more frequently if there are CVEs.

The other good thing is that now we are working to remove the dependency on Barman Cloud from CloudNativePG. CloudNativePG has a new plugin interface, and with 1.26 — which is expected in the next few weeks — we are suggesting people start moving new workloads to the Barman Cloud plugin solution. What happens is that Barman Cloud will be in a sidecar image. So it will be distributed separately, and its lifecycle will be independent from the rest. But the biggest advantage is that any Postgres extension can be distributed this way — right now we’ve got packages; the idea is that they are also distributed as images.

If we start thinking about this approach: if I write an extension for Postgres, until now I’ve been building only packages for Debian or for RPM systems. If I start also building container images, they can be immediately used by this new way for CloudNativePG to manage extensions. That’s my ultimate goal, let’s put it that way.

This is how things will change at run time without breaking immutability.

A box labeled “PostgreSQL Pod” with four separate boxes inside, labeled “Container Postgres”, “Sidecar Barman Cloud”, “Volume Extension 1”, and “Volume Extension 2”.

There will be no more need to think about all the possible combinations of extensions. There will be the Postgres pod that runs, for example, a primary or standby, that will have the container for Postgres. If you’re using Barman Cloud, the sidecar container managed by the plugin with Barman Cloud. And then, for every extension you have, you will have a different image volume that is read-only, very light, only containing the files distributed in the container image of the extension, and that’s all.

Once you’ve got these, we can then coordinate the settings for extension_control_path and dynamic_library_path. What we did was start a fail-fast pilot project within EDB to test the work that Peter was doing on extension_control_path. For that we used the Postgres Trunk Containers project, which is a very interesting project that we have at CloudNativePG. Every day it rebuilds the latest snapshot of the master branch of Postgres so that we are able to catch, at an early stage, problems with the new version of Postgres in CloudNativePG. But there’s also an action that builds container images for a specific Commitfest patch, for example. So we used that.

Niccolò wrote a pilot patch, an exploratory patch, for the operator to define the extensions stanza inside the cluster resource. He also built some bare container images for a few extensions. We made sure to include a very simple one and the most complex one, which is PostGIS. This is the patch — it’s still a draft — and the idea is to have it in the next version of CloudNativePG, 1.27. This is how it works:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgresql-with-extensions
spec:
  instances: 1
  imageName: ghcr.io/cloudnative-pg/postgresql-trunk:18-devel
  postgresql:
    extensions:
      - name: pgvector
        image:
          reference: ghcr.io/cloudnative-pg/pgvector-18-testing:latest
  storage:
    storageClass: standard
    size: 1Gi

We have the extensions section in the cluster definition. We name the extension. Theoretically we could also define the version and we point to the image. What’s missing in this pilot patch is support for image catalogs, but that’s something else that we can worry about later.

What happens under the hood is that when you update, or when you add, a new extension in the cluster definition, a rolling update is initiated. So there’s this short downtime, but the container image is loaded in the replicas first, and then in the primary. An image volume is mounted for each extension in, let’s say, an /extensions/$name_of_extension folder, and CNPG updates those two parameters. It’s quite clean, quite neat. It works, but most of the work needs to happen here. So that’s been my call: to treat container images as first-class artifacts. If this changes, we have a new way to distribute extensions.

To approach the conclusion: if you want to know more about the whole story, I wrote a blog article that recaps everything. Then there’s the key takeaway for me — and afterward we can go deeper into the patch if you want, and also address the questions. What is important for me? Having been in the Postgres community for a long time, I think this is a good moment for us to challenge the status quo of the extension distribution ecosystem.

I think we have an opportunity now to define a standard. I just want to be clear: I’m focusing primarily on CNPG, but this applies in general, even to other operators. I’m sure this will benefit everyone, and overall it will reduce the waste that we collectively create when distributing these extensions in Kubernetes. If this becomes a standard way to distribute extensions, the benefits will be much better operational work for everyone, and primarily easier testing and validation of extensions. I mean, if you have an extension on GitHub, it’s very easy to build the container images; GitHub already provides the whole infrastructure for you to easily build them.

So if we find a standard way to define a GitHub action to build Postgres extensions, then, if you’re the developer of an extension, you can just use it, and your project gets a registry that continuously or periodically publishes the extension. Any user can just reference that image URL, and then, without having to build images, they’re just one rolling update away from testing a patch — testing the upgrade paths too.
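[Entirely hypothetical, but a shared workflow like that could reduce an extension author’s CI to something like this — the postgres-extensions/build-image workflow and its inputs do not exist today:]

name: publish-extension
on:
  push:
    tags: ["v*"]
jobs:
  image:
    # Reusable workflow invocation; the name and inputs are hypothetical.
    uses: postgres-extensions/build-image/.github/workflows/build.yml@v1
    with:
      pg-major-versions: "17,18"
    secrets: inherit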

I think there are some unknown unknowns that kind of scare me in general about upgrades — upgrades of extensions. This is, in my opinion, one of the biggest issues. It’s not that they’re unsolved, but they require more attention and more testing if you’re working in an immutable world. All of this will, in my opinion, be much, much better with the approach we’ve proposed. Images will be lighter, and a lighter image is also safer and more secure: less prone to have CVEs, less prone to require frequent updates, and it reduces the usage of bandwidth for an organization in general. And as I was saying before, any extension project can be fully independent, with its own way to build images and publish them.

One last point. I keep hearing from many sides that all of the stuff we are proposing right now seems like a kind of limitation of Kubernetes. The way I see it, it’s not actually a limitation; it’s that these problems have never been addressed before. The biggest mistake we can make is to focus on the specific problem of managing extensions without analyzing the benefits that the entire stack brings to an organization. Kubernetes brings a lot of benefits in terms of security, velocity, change management, and operations that any organization must consider right now. To any Postgres DBA, any Postgres user, my advice is: if you haven’t done it yet, start taking Kubernetes seriously.

Discussion

Floor: I do think that David, you wanted to talk maybe a little bit about the mutable volume pattern?

David: Well, if people are interested: in your early slide where you were looking at alternatives, one you were thinking of was putting extensions on a mutable volume, and you decided not to do that. But at Tembo we did do that, and I did a bunch of work trying to improve it and to minimize image size and all that over the last couple of months. Tembo Cloud is shutting down now, so I had to stop before I finished, but I made quite a bit of progress. I’m happy to talk through the ideas there. But I think this approach is fundamentally a better long-term solution.

Gabriele: I would like Marco and Niccolò, if you want, to talk about the actual work you’ve done. Meanwhile, Peter asks, “why does an installation of an extension require a small downtime?” The reason is that at the moment, with the image volume patch, if you add a new image volume, it requires the pod to restart. Nico or Marco, Jonathan, correct me on that if I’m wrong.

Nico or Marco or Jonathan: It provides a rolling update of the cluster right now.

Gabriele: So that’s the reason. That’s the only drawback, but the benefits in my opinion, are…

David: My understanding is that, to add a new extension, it’s mounted in a different place. And because every single extension is its own mount, you have to add it to both of those GUCs, and at least one of them requires a restart.

Gabriele: But then, for example — we’ve had this conversation at EDB — we’re planning to have flavors of predefined extensions. You could choose a flavor and we’d distribute those extensions. For example, I dunno, for AI we’d place some AI-related extensions in the same image, so it would be different.

But otherwise I’m considering the most extreme case of one extension, one container image, which in my opinion, for the open source world, is the way that hopefully it will happen. Because this way — think about it; I haven’t mentioned this — if I write an extension, I can then build the image and run automated tests using Kubernetes to assess my extension on GitHub. If those tests fail, my commit will never be merged on main. This is trunk development, continuous delivery. This is, in my opinion, a far better way of delivering and developing software. This is, again, the reason why we ended up in Kubernetes. It’s not because it’s a technology we like, or a toy or something; it’s because it solves bigger problems than database problems.

Even when we talk about databases, there’s still work that needs to be done and improved. I’m really happy that we have more people who know Postgres joining CloudNativePG nowadays, elevating the discussions more and more to the database level. Before, it was primarily at the Kubernetes level, but now we see people who know Postgres better than me getting into CloudNativePG and proposing new ideas, which is great. That’s the way it needs to be, in my opinion.

But I remember, Tembo approached us because we actually talked a lot with them. Jonathan, Marco, I’m sure that you recall, when they were evaluating different operators and they chose CloudNativePG. I remember we had these discussions where they asked us to break immutability and we said, “no way”. That’s why I think Tembo had to do the solution you described, because we didn’t want to do it upstream.

I think, to be honest, and to be fair, if image volumes had not been added, we would probably have gone down that path, because the current way of managing extensions, as I was saying, is not scalable. Because we always want to improve, I think we need to be critical about what we do. So, I don’t know, Niccolò, Marco, I would like you to explain briefly, if you want.

[A bit of chatter as Niccolò opened this Dockerfile.]

# Build pgvector from source against a PostgreSQL 18 development snapshot.
FROM ghcr.io/cloudnative-pg/postgresql-trunk:18-devel AS builder

USER 0

COPY . /tmp/pgvector

RUN set -eux; \
	mkdir -p /opt/extension && \
	apt-get update && \
	apt-get install -y --no-install-recommends build-essential clang-16 llvm-16-dev && \
	cd /tmp/pgvector && \
	make clean && \
	make OPTFLAGS="" && \
	make install datadir=/opt/extension/share/ pkglibdir=/opt/extension/lib/

# Final stage: only the extension files, to be mounted as an image volume.
FROM scratch

COPY --from=builder /opt/extension/lib/* /lib/
COPY --from=builder /opt/extension/share/extension/* /share/
Niccolò: I forked, for example, pgvector. That’s what we can do for basically every simple extension that we can just build. This one is a bit more complicated because we have to build from a trunk version of Postgres 18, so we have to compile pgvector from source, and then in a scratch layer we just archive the libraries and every other artifact that was previously built. But ideally, whenever PG 18 comes out as a stable version of Postgres, we’ll just need to apt install pgvector and grab the files from the path. Where it gets a bit more tricky is in the case of PostGIS, or TimescaleDB, or any extension whose library requires third-party libraries. For example, PostGIS has a strong requirement on the geometry libraries, so you need to ship them inside the mount volume as well. I can link you an example of the PostGIS one.
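[For illustration, the final stage of such a PostGIS image might bundle the third-party shared libraries alongside the extension files — the library names and the /syslib/ destination here are illustrative, not the actual build:]

FROM scratch

COPY --from=builder /opt/extension/lib/* /lib/
COPY --from=builder /opt/extension/share/extension/* /share/
# Third-party shared libraries the extension links against (e.g., GEOS and
# PROJ for PostGIS) travel in the same image volume, so the operator can
# expose them to the dynamic loader via the library path.
COPY --from=builder /usr/lib/x86_64-linux-gnu/libgeos_c.so* /syslib/
COPY --from=builder /usr/lib/x86_64-linux-gnu/libproj.so* /syslib/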

Gabriele: I think it’s important — we’ve got Peter here, David as well — for example, if we could get standard ways in Postgres to generate Dockerfiles for extensions, that would be great. And as I said, these extension images could be used by any operator, not only CNPG.

David: That’s what my POC does. It’s a patch against PGXS that would build a trunk image.

Gabriele: This is the work that Niccolò had to do to make PostGIS work in the pilot project: he had to copy everything.

Niccolò: I think we can make it a little bit smoother and dynamically figure out everything from the extension’s libraries, so we don’t have to hardcode everything like this, but this is just a proof of concept that it can work.

David: So you installed all those shared libraries that were from packages.

Niccolò: Yeah, they’re being copied in the same MountVolume where the actual extensions are copied as well. And then the pilot patch is able to set up the library path inside the pod so that it makes the libraries available to the system because of course, these libraries are only part of the MountVolume. They’re not injected inside the system libraries of the pod, so we have to set up the library path to make them available to Postgres. That’s how we’re able to use them.

David: So they end up in PKGLIBDIR but they still work.

Niccolò: Yeah.

Gabriele: I mean, there are better ideas, better ways. As Niccolò also said, it was a proof of concept.

David: Probably a lot of these shared libraries could be shared with other extensions. So you might actually want other OCI images that just have some of the libraries that are shared between them.

Gabriele: Yeah, absolutely. So we could work on a special kind of extension, or even metadata, so that we can place, you know…

So, yeah, that’s it.

Jonathan: I think it’s important to invite everyone to try and test this, especially the Postgres trunk containers, when they want to try something new like this, just because we always need people testing. When more people review and test, it’s amazing. Because every time we release something, we’ll probably miss something — some extension like PostGIS missing one of the libraries that wasn’t included in the path. Even if we try to find a way to include it, it might not be there. So testing, please! Test all the time!

Gabriele: Well, we’ve got these actions now, and they’re failing. I mean, it’s a bit embarrassing. [Cross talk.] We already have a patch to fix it.

But I mean, this is a great project, as I mentioned before, because it allows us to test the current version of Postgres; but also, if you want to build from a Commitfest patch, or if you’ve got your own Postgres repository with sources, you can compile and get the images using this project.

Floor: Gabriele, did you want to talk about SBOMs?

Gabriele: I forgot to mention Software Bills of Materials. They’re very important; they’re kind of basic now for any container image. There’s the possibility to add them to these container images too. This is very important, again, for change management, for security, and all of that — for the supply chain in general. And signatures too, though we’ve got signatures for packages as well. There’s also attestation of provenance.
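[For example, the docker/build-push-action step sketched earlier can emit an SBOM and a provenance attestation at build time; the image name is illustrative:]

- uses: docker/build-push-action@v6
  with:
    context: .
    push: true
    tags: ghcr.io/example-org/example-image:latest
    # Attach a Software Bill of Materials and a provenance attestation.
    sbom: true
    provenance: true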

Floor: Very good, thanks everyone!

Mini Summit 5: Extension Management in CNPG

Orange card with large black text reading “Extension Management in CNPG”. Smaller text below reads “Gabriele Bartolini (EDB)” and the date, “05.07.2025”.

The last Extension Ecosystem Mini-Summit is upon us. How did that happen?

Join us for a virtual conference session featuring Gabriele Bartolini, who will be discussing Extension Management in CNPG. I’m psyched for this one, as the PostgreSQL community has contributed quite a lot to improving extension management in CloudNativePG in the past year, some of which we covered previously. If you miss it, the video, slides, and transcript will appear here soon.

Though it may take a week or two to get the transcript done, considering that PGConf.dev is next week, featuring the Extension Ecosystem Summit on Tuesday, 13 May in Montréal, Canada. Hope to see you there; be sure to say “hi!”

Mini Summit 4 Transcript: The User POV

Orange card with large black text reading “The User POV”. Smaller text above reads “04.23.2025” and below reads “Celeste Horgan (Aiven), Sonia Valeja (Percona), & Alexey Palazhchenko (FerretDB)”

On April 23, we hosted the fourth of five (5) virtual Mini-Summits leading up to the big one at the Postgres Development Conference (PGConf.dev), taking place May 13-16 in Montréal, Canada. Celeste Horgan, Developer Educator at Aiven, Sonia Valeja, PostgreSQL DBA at Percona, and Alexey Palazhchenko, CTO of FerretDB, joined for a panel discussion moderated by Floor Drees.

And now, the transcript of “The User POV” panel, by Floor Drees.

Introduction

My name is Floor, I’m one of the organizers of these Extension Ecosystem Mini-Summits. Other organizers are also here:

The stream and the closed captions available for the recording are supported by PGConf.Dev and their gold level sponsors, Google, AWS, Huawei, Microsoft, and EDB.

Next, and last in this series, on May 7 we’re gonna have Gabriele Bartolini talk to us about Extension Management in CloudNativePG. Definitely make sure you head over to the Meetup page, if you haven’t already, and RSVP for that one!

The User POV

Floor: For the penultimate edition of this series, we’re inviting a couple of Postgres extension and tooling users to talk about how they pick and choose the projects they want to use, how they do their due diligence, and their experience with running extensions.

But I just wanted to set the context for the meeting today. We thought that, being in the depths of it all as an extension developer, you kind of lose the perspective of what it’s like to use extensions and other auxiliary tooling. You lose that user’s point of view. But users, maybe coming from other ecosystems, are used to a different, probably smoother experience. I’m coming from the Rails and Ruby community, so RubyGems is my one-stop shop for extending functionality.

That’s definitely a completely different experience from when I started using Postgres extensions. That’s not to say that those ecosystems and NPM and PIP and WordPress don’t have their own issues, but we can certainly learn from some of the differences between the ecosystems. Ultimately, what we want to cover today is the experience of using extensions in 2025, and what our users’ wishes are for the future.

Celeste: Hello my name is Celeste, I am on the developer relations team at Aiven. I only really started using Postgres as a part of my job here at Aiven, but have been a much longer contributor to similar-sized ecosystems. I was really heavily involved in the Kubernetes ecosystem for quite a while. Kubernetes is an extensible-by-design piece of software, but it’s many, many generations of software development later than some of the concepts that Postgres pioneered. Thank you for having me, Floor!

Sonia: Hello everybody! I started working with PostgreSQL in the year 2012, and since then it has been quite a journey. Postgres has been my primary database, and along with learning PostgreSQL, I learned other databases alongside it. I learned Oracle, I learned SQLServer, but only from the perspective — which is important — of migrating from X database to PostgreSQL, as in Oracle to PostgreSQL migration, SQLServer to PostgreSQL migration. I learned about the other databases, and I’m fortunate to have worked as a PostgreSQL developer, PL/pgSQL developer, PostgreSQL DBA, onsite coordinator, offsite coordinator, sometimes a trainer. So, in and out, it has been like I’m breathing PostgreSQL since then.

Alexey: Thanks for having me! I first worked with Postgres in 2005. Fast forward to today, and I am doing FerretDB, which is the open source MongoDB replacement built on top of PostgreSQL, and also on top of the DocumentDB extension recently open-sourced by Microsoft. We provide this extension to our users, but we also consume it as users of that extension. Somewhere in between 2005 and now, I also worked at Percona. At Percona I worked on monitoring software, and worked with pg_stat_statements and pg_stat_monitor, which is made by Percona, so I have quite a lot of experience with Postgres extensions.

Floor: And you’re cheating a little on this panel, seeing as you are not only a user but also a provider. I definitely have some questions for you!

And y’all talked a little about your sort of experience with extensibility of other software or technology, and comparing that to the Postgres experience. Can you all talk about what the main differences are that you have observed with other ecosystems?

Celeste: I think as somebody who’s a bit of a newer Postgres user and I guess comes from a different community, the biggest thing that weirded me out, when I started working with Postgres, is that there’s no way to install an extension except to install it against your live database.

If you compare that to something like Kubernetes, which again has a rather robust extensibility ecosystem, both on the networking side of things but also in other aspects, the inherent software architecture makes it so that you have to plan out what you’re going to do, and then you apply that plan. In theory, you can’t apply a plan or add extensions to Kubernetes that won’t work or will somehow break the system. Again, in theory; in practice, things are more interesting.

But with Postgres and with databases in general, you’re always working with the live dataset, or at some point you have to work with the live dataset. So there’s no real way to test.

Sonia: Most of the other databases I have worked with, apart from PostgreSQL, are licensed — so Oracle and SQLServer. When it comes to PostgreSQL, it’s open source, so you do your own thing: you do the installation, you check out everything — since it’s open source, you can see the code, and things like that. But when it comes to other databases, since they’re licensed, they’re managed by the specific vendor, so you do not have the rights to do anything else. What’s common is that you do a POC in both databases before you actually implement it in the production environment.

Alexey: Floor, you mentioned RubyGems, and I was thinking that there actually is something similar between PostgreSQL extensions and RubyGems, in the sense that RubyGems quite often extend built-in Ruby classes, and Postgres extensions can do the same. There is no separation between public and private inside PostgreSQL; it’s all just C symbols, with no special mark saying “don’t touch this API, we are going to change it, it’s an internal detail.” Nothing like that. They try not to break compatibility needlessly, but on the other hand, you have to check all versions of your extension with all the versions of PostgreSQL. In that sense it’s quite similar, unlike some other languages where there’s better separation between internal and public, if not at the compiler level, at least at the documentation level or something like that.

Celeste: That’s not necessarily a criticism of Postgres. I think it’s just that’s those were the tools available to Postgres as a community when Postgres was being developed. There are some advantages to that too, because, for lack of a better word, the lack of checks and balances let some Postgres extensions do very, very interesting things that would maybe not be possible under a more restricted framework.

Floor: The main difference I see between those two is that I know to go to RubyGems as my place to get my plugins — or my gems, in that case. Whereas with Postgres, they can live pretty much anywhere, right? There are different directories and different places where you can get your stuff, and maybe there’s something in a private repo somewhere because that’s what another team at your company is working on. It’s a bit of a mess, you know? It’s really difficult to navigate, where maybe other ecosystems are a lot less difficult to navigate because there’s just the single place.

I wanna talk a little bit about when you’re looking for an extension to do a certain thing for you. What do you consider when you’re looking for an extension, or when you’re comparing similar tooling? I wrote down a couple of things that you might be looking at, or what I might be looking at: maybe it’s docs and tutorials; maybe it’s “has it seen a recent release?” Has it seen frequent releases? Is there only one company offering this extension, or are multiple companies supporting it? Is it a community-built tool? Is it already in use by other teams in your company — something that has been tested with your system, with your stack, that you feel you can easily adopt?

So what are some of the things for you that you definitely look at when you’re looking to adopt new tooling?

Celeste: I think the main thing you wanna look for when you’re looking at really any open source project, whether it’s an extension or not, is both proof points within the project and social proof. Proof points within the project are things that you mentioned: is there documentation? Does this seem to be actively maintained? Is the commit log in GitHub moving? How many open issues are there? Are those open issues being closed over time? Those are project health indicators. For example, if you look at the CHAOSS Project, Dawn Foster has done a ton of work around monitoring project health there.

But I think the other half of this — and this was actually something we worked on a lot at the Cloud Native Computing Foundation when I was there, and that work continues — is social proof, which makes a bit more sense in some cases than others. So, are there other companies using it? Can you point to case studies? Can you point to case studies of something being in production? Can you point to people giving conference talks where they mention something being in use?

This becomes really important when you start thinking about things being enterprise-grade, and when you start thinking about the idea of enterprise-grade open source. Everybody on this panel works for a company that does enterprise-grade open source database software, and you have to ask yourself what that means. A lot of what it means is that other enterprises are using it, because that means it has come to a certain level of reliability.

Sonia: I would like to add some things. What I look at is how difficult or how easy it is to install, configure, and upgrade the extension, and whether it needs a restart of the database service or not. Why do I look at the restart aspect? Because when I install, configure, or upgrade it, or whatever activity I perform with it, if it requires a restart, that means it cannot be configured online, so I need to involve other folks, since applications are connecting to the database. When I restart, it means a maintenance window for however long the database service goes offline. So whether it requires a restart or not is very important for me to understand.

Apart from that, the documentation should of course be easy to understand. That is one of the aspects when you install and configure: it should not be so difficult that I need to consult everything every time to do it, and then maybe need to create another script to use it. That should not be the case. I look at those aspects as well.

Apart from that, I also look at how I can monitor the activities of this extension — whether what the extension is doing is visible in the logs. It should not break my existing things, basically. So: how stable and how durable it is, and whether I can monitor the activities of whatever that extension is doing.

From the durability perspective, even if I’m not able to monitor it via logs, it should be durable enough that it does not break anything else that is up and running.

One more thing: I will definitely perform a POC before putting it into production, in some lower environment or in my test environment somewhere else.

Floor: How do you figure out though, how easy something is to sort of set up and configure? Are you looking for that information from a README or some documentation? Because I’ve definitely seen some very poorly documented stuff out there…

Sonia: Yeah, documentation is one aspect. Apart from that, when you do the POC, you’ll actually be using it. So with the POC itself, you’ll be able to understand how easy it is to install, configure, and use.

Alexey: For me as a user, I would say the most important thing is whether the extension is packaged and easy to install — and packaged in the same way as PostgreSQL is packaged. For example, if I get PostgreSQL from my Ubuntu distribution and the extension is not in the same Ubuntu repository, it might as well not exist for me, because there is no way I’m going to compile it myself. It’s like hundreds of flags, and it being C — okay, I can make it 1% faster, but then it’ll be insecure and will bring PostgreSQL down, or worse. There are a lot of problems like that.

If it’s not packaged, then I would probably just do something myself which is not as good, not as stable, but which I will be able to support, rather than using some third-party extension that is not packaged properly. And “properly”, for me, is a high bar. If it’s in some third-party network of extensions, that might be okay; I will take a look. But of course, if it’s in the Ubuntu repository or Debian repository, that would be much better.

Floor: I think that’s the build versus buy — or not necessarily buy if it’s open source. Not to say that open source is free. But that’s the discussion, right? When do you decide to spend the time to build something over adopting something? And so for you, that’s mainly down to packaging?

Alexey: For me that’s the most important one, because for the features we generally needed in my current job and previous jobs, there are enough hooks in PostgreSQL itself to build what we want ourselves. Sometimes we need to parse logs, sometimes we need to parse some low-level counters, but that’s doable, and we can do it in a different language, in a way we can maintain ourselves. If we’re talking about PostgreSQL extensions, that typically means C, and if there’s some problem, we’d have a bigger problem finding someone to maintain it, to fix it fast.

Floor: Alright. When you build it yourself, would you then also open-source it yourself and take on the burden of maintenance?

Alexey: I mean, that really depends on the job. At Percona we open-sourced pg_stat_monitor, but there the implicit goal of making the extension open source was to make it a superset of pg_stat_statements. In FerretDB’s case, DocumentDB is of course open source — we contribute to it, but I couldn’t say that’s easy. If it were written in our preferred language, Go, it would be much, much easier. Unfortunately, it’s not, so we have to deal with packaging and whatnot.

Floor: I guess it’s also build versus buy versus fork, because there are definitely different forks available for similar tooling, each optimized for a slightly different use case. But again, that’s then another project out there that needs to be maintained.

Alexey: But at the same time, if you fork something and don’t want to contribute back, you just don’t have the problem of maintaining it for someone else; you just maintain it for yourself. Of course, if someone upstream wants to pull your changes, they will be able to. And then they look at you like you’re a bad part of the community because you don’t contribute back — but that depends on the size of the company, whether you have the resources, and all that.

Celeste: But now you’re touching on something that I feel very strongly about when it comes to open source. Why open source anything to begin with? If we can all just maintain close forks of everything that we need, why is Postgres open source to begin with and why does it continue to be open source and why are we having this discussion 30 or 40 years into the lifespan of Postgres at this point?

The fact of the matter is that Postgres being open source is the reason we’re still here today. Postgres is a 30-plus-year-old database at this point. Yes, it’s extremely well architected, and it continues to be applicable to modern use cases when it comes to data. But really, the fundamental fact is that it is free, and being free means two things can happen. One, it’s a very smart move for businesses to build a business on top of a particular piece of software. But two — and I would argue this is actually the more important point when it comes to open source and its long-term viability — because it is free, it is A) proliferative: it has proliferated across the software industry; and B) it is extremely valuable for professionals to learn Postgres or to learn Kubernetes or to learn Linux, because they know they’re going to encounter it sometime in their career.

So when it comes to extensions, why open source an extension? You could simply close source an extension. It’s the same reason: if you use open source extensions, you can then hire for people who have potentially encountered those extensions before.

I work for a managed service provider that deploys quite a few Postgreses for quite a few clients. I obviously have a bit of a stake in the build versus buy versus fork debate that is entirely financial and entirely linked to my wellbeing. Regardless, it still makes sense for a company like Aiven to invest in open source technologies, and it makes a lot more sense for us to hire Postgres experts who can then manage those extensions, manage their installation, and manage whether your database works against certain extensions, than for literally every company out there on the planet to hire a Postgres professional. There’s still a use case for open-sourcing these things. That is a much larger discussion though, and I don’t wanna derail this panel. [Laughs.]

Floor: I mean, if Alexey is game, you got yourself a conversation.

Alexey: First of all, I completely agree with you, and I of course built my whole career on open source. But there’s also the other side. Let’s say you build an open source extension which is very specific, very niche, and solves your particular problem. And there are 20 other people who have the same problem, and all 20 come to your GitHub and ask questions about it. And they do it for free — you just waste your time supporting them, essentially. And you are a small company, just three people, and you open-sourced this extension just for fun; now two of the three work full time to support it.

Celeste: Oh yeah, no, I didn’t say the economics of this worked out for the people doing the open-sourcing, just to be perfectly clear. I think there’s a much larger question around the sustainability of open source communities in general. Postgres, the overall project, and, say, the main Kubernetes project, are outliers in terms of the amount of support, manpower, people, and energy they get. Whereas for most things that get open-sourced — I think Tidelift had a survey: the average maintainer count for any given open source project is one. That is a much larger debate though. Realistically it makes a lot of sense, particularly for larger companies, to use open source software, Postgres included, because it accelerates their time to innovation. They don’t need to worry about developing a database, for example. And if they’re using Postgres and decide they want time series data, they don’t need to worry about migrating to a time series database when they can just use Timescale.

However, “are they contributing back to those projects?” becomes a really big question. I think I know the next question that Floor would like to lead us to, and I’m just going to take the reins here, Floor —

Floor: Are you taking my job??

Celeste: Hardly, hardly, I could never! My understanding of why we’re having this series of conversations around the sustainability of the Postgres extensions ecosystem is that there’s a governance question there as well. For the end user, the ideal state for any Postgres extension is that it’s blessed and vetted by the central project. But as soon as you start doing that, you start realizing how limited the resources are, even in a massive project like Postgres. And then you start asking: where should those people come from? And then you start thinking: there are companies like Microsoft out there hiring a lot of open source contributors, and that’s great, but… What about the governments? What about the universities? What about the smaller companies? The real issue is the manpower, and there’s only so far you can go as a result of that. There are always sustainability issues around all open source, including Postgres extensions, that come down to the sustainability of open source as a whole and whether or not this is a reasonable way of developing software. Sorry to get deep. [Laughs.]

Floor: Yeah, I think these are discussions that we’re definitely having a lot in the open source community, and in the hallway at a lot of conferences.

We’re gonna open it up to audience questions too in a minute. So if people want to continue talking about the drama that is open source and sustainable open source, we can definitely continue this discussion.

Maybe going back a little bit, Alexey, can we talk about — because you’re also a provider — what your definition of “done” is, or what you want to offer your users at minimum when you do decide to open-source some of your stuff or make it available?

Alexey: As an open source company, what we do is just publish our code on GitHub, and that’s it. It’s open source; that’s done. Knock yourself out, and if you want some support, you just pay us and then we will support you. That’s how we make money. Well, of course not — it’s more complicated than that, though sometimes I wish it were like that to some degree. There are still a lot of users who just come and ask questions for free, and you want to support them because you want to increase adoption and all that.

The same with extensions. As I just described — and not to provoke a discussion — let’s say you build a PostgreSQL extension: you need some hooks in the core that would ideally be stable and not change between versions, as we discussed. That’s a bit of a problem; PostgreSQL has no separation between private and public API. Then, how do you install it? You need to package it the same way your current PostgreSQL version is packaged. There is no easy way, for example, to extend a version of PostgreSQL that is part of a Docker image; you just build your own container.

Celeste: I’ll segue into the point that I think I was supposed to make when we were talking about extensions ecosystems, as opposed to a rant about the sustainability of open source, which I am unfortunately always down to give. Here’s the thing with extensions ecosystems: for the end user, it is significantly more beneficial if those extensions are somehow centrally controlled. If you think about something like RubyGems or the Python package installer or even Docker to a certain extent, those are all ways of centralizing. Though with some of the exploits that have gone on with NPM recently, there are obviously still problems there.

I mentioned, there’s always staffing problems when it comes to open source. Assigning somebody to approve every single extension under the sun isn’t really sustainable from a human perspective. The way that we handle this in the Kubernetes community — particularly the container network interfaces, of which there are many, many, many — is we effectively manage it with governance. We have a page on the documentation in the website that says: here are all the container network interfaces that have chosen to list themselves with us. The listings are alphabetical, so there is no order of precedence.

The community does not take responsibility for this code, because we simply cannot. Being a container network interface means implementing certain functionality, like an interface in the programming sense. We just left it at that. That was the solution that the Kubernetes community came to. I don’t know if that’s the solution that the Postgres community will eventually come to, but community governance is a huge part of the solution to that problem, in my opinion.

Alexey: I think one big difference between NPM and the NodeJS ecosystem in general and, for example, Postgres extensions, is that NPM was so popular and there are so many packages mostly because NodeJS by itself is quite small. The core of NodeJS is really, really small. There is no standard library, and a lot of functionality is external. So I would say as long as your core, like PostgreSQL or Ruby or Kubernetes, is large enough, the amount of extensions will be limited just by that, because many people will not use any extensions; they will just use the core. That could solve the problem of vetting and name-squatting just by itself. I would say PostgreSQL more or less solves this problem to some degree.

Floor: Before we open up for some questions from participants: Sonia, in a previous call, shared a little bit of a horror story with us, about wanting to use a certain extension and not being able to. I think this is something that other people can resonate with, having been through a similar thing. Let’s hear that story. And then, of course, Celeste, Alexey, if you have similar stories, do share before we open up for questions from the rest of the peeps joining here.

Sonia: So there was this requirement to transfer data from one database to another database, specifically with respect to PostgreSQL. I wanted to transfer the data from the production environment to some other environment, or internally within the non-production environments. I created this extension called dblink — I’m talking about way back, 2012, 2013 somewhere, when I started working with PostgreSQL — and I used that extension. When you configure that extension, you need to give the credentials in a human-readable format. And then, at times, it also gets stored in the logs or somewhere.
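For illustration, this is the shape of the usage Sonia describes — a hypothetical connection string carrying the password in clear text, which is exactly what the auditors flagged:

```sql
CREATE EXTENSION dblink;

-- The credentials travel in a human-readable connection string.
SELECT *
  FROM dblink('host=prod-db dbname=tax user=app password=s3cret',
              'SELECT account_id, balance FROM accounts')
    AS t(account_id int, balance numeric);
```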

I mean, even if it is not stored in the logs, what the security team or the audit team mentioned was that since it is using the credentials in a human-readable format, this is not good. And if somebody has access to the X database, they also get access to the Y database or the Y cluster. And what if it goes to the production environment and then somebody can just steal the data, without us even knowing it? It’ll not get logged that somebody has accessed my production database via a non-production database. So that’s not good, and it was not acceptable to the auditors.

I love that extension today also, because without doing any scripting or anything, you just access one database from another database and then get whatever you want. As a developer, it might be very easy for me to use that thing. But then for another person who is trying to snoop into your production database or any other data, it’s easy for them too. So we were asked not to use that extension specifically, at least not to connect to the production environment.

I was working for a taxation project. It was financially critical data, and they did not want to have any risk of anybody reaching that data, because it was the numbers, the financial figures, and it was critical. So that’s the reason we refrained from using it for that particular project. But for other projects, which were not that critical, I somehow managed to convince them to use it. [Laughs.]

Floor: So sometimes you choose it for convenience and it’s an acceptable risk, and then there might be restrictions from other teams as well. Thanks for sharing that. If anyone wants to un-mute and ask questions or share their own horror stories, you’re now very welcome to.

Yurii: There was a really good point about extensions being available as part of your operating system environment, for example Ubuntu packages or Red Hat packages. This is where we still have a lot of difficulty in general in this ecosystem. Obviously PGDG is doing an amazing job capturing a fraction of those extensions. But because it is a complicated job, oftentimes unpaid, people are trying to make the best of it. On the one hand, it does serve as a filter, as in only the best of the best extensions that people really use get through that filter and become part of the PGDG distribution. But it also creates an impediment: for example, PGDG is not always able to update them as the releases come out. Oftentimes people do need the latest and best releases available, and not when the packagers have time.

The other problem is: how do extensions become popular if they’re not there in the first place? It creates that kind of problem where you’re stuck with what you have. And there’s a problem with discovery: how do I find them? And how do I trust this build? Or can I even get those builds for my operating system?

Obviously there are some efforts that try to mitigate that by building a Docker container that you run, with just copies of those files. But obviously there’s a demand for a native deployment method. That is, if I deploy my Postgres this way — say using RPM on my Red Hat-based distro, or Debian-based — I want everything else to fall into that. I don’t want a new system.

I think we still have a lot of work to do on that end. I’ve been putting some effort in on our end to try to find out how we can save packagers’ time, basically decreasing the amount of work that needs to be done. Can we go essentially from “here’s the URL for the extension, figure it out”? Like 80% of them we can just figure out and package automatically, and repackage when new versions come out, and only assign people to the remaining 20% that are not building according to a certain convention, so they need some attention.

This way we can get more extensions out and extract more value out of these extensions. By using them, we’re helping the authors gain a wider audience and effectively create value for everybody in the community. Otherwise, they would feel like, “I can’t really promote this as well as I would’ve loved to, like in other ecosystems” — RubyGems was mentioned today, and NPM, etc., where it’s easy to get your stuff out there. Whereas in the Postgres community, it is not easy to get your stuff out there, because there are so many risks associated with that; we are oftentimes working with production data, right?

We need to make sure there is less friction on every other side. We need to get these extensions considered. That’s at least one of the points I wanted to mention. I think there’s a lot to be done, and I really hope that the conference next month in Montréal will actually be a great place to get the best minds together again and hash out some of the ideas we’ve been discussing in the past number of months.

Floor: David, do you wanna ask your question of where people go to learn more about extensions and find their extensions?

David: This is something that I tried to solve a while ago with a modicum of success — a bit. My question is: where do you all go to learn more about extensions? To find out what extensions are available, or whether there is an extension that does X, Y, Z? How do you find out if there is, and then evaluate it? Where do you go?

Alexey: I generally just search, I guess. I don’t go to any one place. Quite often I learn from some blog post, or sometimes from GitHub itself.

Celeste: If you think about that project-level activity proof, and then the social proof, I think that Postgres actually has a really unique advantage compared to a lot of other open source projects because it’s been going for so long and because there is a very entrenched community. It’s very easy to find social proof for basically anything Postgres-related that you might want.

If you do a search for, like, “I want a Postgres extension that does X”, you’re going to get comparatively better Google search results because there’s years and years and years of search results in some cases. However, that does come with the equal and opposite problem of when you have maintenance issues, because things have been going for years and years, and you don’t know whether things have been maintained or not.

I’m thinking about this from an open source management perspective, and as somebody who is not necessarily involved in the open source development of Postgres. I think there is a case you could make for some amount of community vetting of some extensions, and publicizing that community vetting, having a small subset with some sort of seal of approval: it’s not gonna, like, nuke your database. To a certain extent, I think Postgres already does that, because it does ship with a set of extensions by default. In shipping with those extensions, it’s effectively saying the upstream Postgres community blesses these, such that we will ship Postgres with them, because we are pretty confident that these are not going to nuke your database.

When I was at the CNCF, I supported a whole bunch of different open source projects; I was everybody’s documentation girl. So I’m just throwing ideas at you, and hopefully you can talk about them in Montréal and maybe something useful will come of it. Another thing you can use is an alpha/beta/experimental sort of scheme, where you define some set of criteria for something being alpha or experimental, some set of criteria that, if met, lets it call itself beta, and some set of criteria for something being “production ready” in an extensions ecosystem. Then you can have people submit applications, and it’s less of a mad rush.

I guess if I had any advice — not that Postgres needs my charlatan advice — it would be to think about how you want to manage this from a community governance perspective, or else you will find yourself in utter mayhem. There’s a reason the Kubernetes container network interface page specifies that things have to be listed in alphabetical order: there was mayhem until we decided to list things in alphabetical order. It seems completely silly, but it is real. [Laughs.]

Alexey: So my next project is going to start with “aa”.

Sonia: Yeah, what Celeste said. I will research it online, normally, and I will find something. And if I get lots of options for doing X thing, a lot of extensions, I will go and search the documentation on postgresql.org and then try to figure out which one is the one to start my POC with.

Celeste: Let me flip the question for you, Sonia. In an ideal world. If you were to try and find an extension to use for a particular task, how would you find that extension?

Sonia: Normally I will research it, Google it most of the time, and then try to find out —

Celeste: But pretend you don’t have to Google it. Pretend that maybe there’s a website or a resource. What would your ideal way of doing that be? If you had some way that would give you more of a guarantee that it was trustworthy, or would make it easier to find, or something. Would it be a tool like RubyGems? Would it be a page on the Postgres website’s documentation?

Sonia: Page! The PostgreSQL website documentation. The Postgres documentation is like a Bible for me, so I keep researching on that. In fact, previously when you Googled anything, you used to get postgresql.org as the first link. Nowadays you don’t get it as the first link, but then I will scroll down the page, try to figure out where the postgresql.org link is, and then go there. That’s the first thing. Now, since I’ve been in the field for a very long time, I know, okay, this website is authentic, and I can go and check out the blogs, like who else has used it, or what their experience is, or things like that.

Jay Miller: I have to ask this only because I am new to thinking about Postgres outside of how I interact with it from a web developer’s perspective. Usually I use some ORM, I use some module. I’m a Python developer, so I use Python, and then from there, I don’t think about my database ever again.

Now I want to think about it more. I want to have a very strong relationship with it. And we live in a world where you have to say that one of the answers is going to be AI. One of the answers is: I search for something, I get some AI response, and here’s like the…

David in comments: SLOP.

Jay: Exactly, this is the problem. If I don’t know what I should do and I get a response, when the response could have just been, “use this extension, it does everything you need to do and it makes your life so much easier.” Instead, I wind up spending days, if not weeks, going in and fighting against the system itself. Sonia, you mentioned having that experience. The idea or the ability to discern when to go from some very kludgey PostgreSQL function that makes your life miserable to, “oh, there’s an extension for this already! I’m just going to use that.” How do you expose that to people who are not dumb, they’re not vibe coding, they just finally have a reason to actively think about what their database is doing behind the scenes?

Sonia: If I understood your question correctly, you wanted to explore what kind of activities a specific extension is doing.

Jay: I would just love the like, “hey, you’re trying to do a thing, this has already been solved in this extension over here, so you don’t have to think about it.” Or “you’re trying to do something brand new, no one’s thought about this before, or people have thought about it before and talked about how much of a pain it is. Maybe you should create an extension that does this. And here’s the steps to do that.” Where is the proper documentation around coming to that decision, or the community support for it?

Sonia: That’s a great question to discuss inside the community, to be honest. Like, how do we go about that?

David: Come to Montréal and help us figure it out.

Jay: I was afraid of that answer. I’ll see you in New York, or hopefully Chicago on Friday.

Floor: Fair enough, but definitely a wonderful question that we should note down for the discussion.

Sonia: One thing which I want to add: this just reminded me of it. There was one podcast I was listening to with Robert Haas. The podcast is organized by one of the Microsoft folks. It was revolving around how to commit inside PostgreSQL, or how to read what is written inside PostgreSQL, and the ecosystem around that. The questions were related to that. That could also help. And of course, definitely when you go to a conference, which we are discussing at the moment, there you’ll find a good answer. But listening to that podcast will help give you the answers to an extent.

Floor: I think that’s Talking Postgres with Claire Giordano, or if it was the previous version, it was the “Path to Citus Con”, because that was what it was called before.

David: The summit that’s in Montréal on May 13th is an unconference session. We have a limited amount of time, so we want to collect topic ideas and ad hoc votes for ideas of things to discuss. Last year I used a website with Post-Its. This year I’m just trying a spreadsheet. I posted a link to the Google Sheet, which anybody in the world can access and pollute — I mean, put in great ideas — and star the ideas they’re really interested in talking about. And I’d really appreciate, people contributing to that. Good topics came up today! Thank you.

Floor: Thanks everyone for joining us. Thank you for our panelists specifically, for sharing their experiences.

Mini Summit 4: The User POV

Orange card with large black text reading “The User POV”. Smaller text above reads “04.23.2025” and below reads “Celeste Horgan (Aiven), Sonia Valeja (Percona), & Alexey Palazhchenko (FerretDB)”

And we’re back.

Join us this Wednesday, April 23 at noon America/New_York (16:00 UTC) for Extension Mini Summit #4, where our panel consisting of Celeste Horgan (Aiven), Sonia Valeja (Percona), and Alexey Palazhchenko (FerretDB) will discuss “The User POV”. This session will be a terrific opportunity for those of us who develop extensions to get an earful from the people who use them, in both anger and joy. Bang on over to the Meetup to register for this live video session.

Mini Summit 3 Transcript: Apt Extension Packaging

Orange card with large black text reading “APT Extension Packaging”. Smaller text below reads “Christoph Berg, Debian/Cybertec” and “04.09.2025”. A photo of Christoph looking coolly at the camera appears on the right.

Last week Christoph Berg, who maintains PostgreSQL’s APT packaging system, gave a very nice talk on that system at the third PostgreSQL Extension Mini-Summit. We’re hosting five of these virtual sessions in the lead-up to the main Extension Summit at PGConf.dev on May 13 in Montréal, Canada. Check out Christoph’s session on April 9:

There are two more Mini-Summits coming up:

Join the Meetup to attend!

And now, without further ado, thanks to the efforts of Floor Drees, the thing you’ve all been waiting for: the transcript!

Introduction

David Wheeler introduced the organizers:

Christoph Berg, PostgreSQL APT developer and maintainer par excellence, talked through the technical underpinnings of developing and maintaining PostgreSQL and extension packages.

The stream and the closed captions available for the recording are supported by PGConf.dev and its gold level sponsors: Google, AWS, Huawei, Microsoft, and EDB.

APT Extension Packaging

Speaker: Christoph Berg

Hello everyone. So what is this about? It’s about packaging things for PostgreSQL for Debian distributions. We have PostgreSQL server packages, extension packages, application packages, and other things. The general workflow is that we upload packages to Debian unstable first. This is sort of the master copy, and from there things eventually get to Debian testing. Once they’re released, they end up in Debian stable.

Perhaps more important for today’s topic is that the same package is then also rebuilt for apt.postgresql.org for greater coverage of Postgres major versions. And eventually the package will also end up in an Ubuntu release, because Ubuntu copies Debian unstable, or Debian testing, every six months and then does their release from there. But I don’t have any stakes in that.

For an overview of what we are doing in this Postgres team, I can just briefly show you this overview page. That’s basically the view of packages we are maintaining. Currently it’s 138, mostly Postgres extensions, a few other applications, and whatever comes up in the Postgres ecosystem.

To get a bit more technical let’s look at how the Debian packages look from the inside.

We have two sorts of packages. We have source packages, which are the source things are built from. The way it works is that we have a directory inside the source tree called debian, which has the configuration bits about what the created packages should look like. And from this the actual binary packages, the .deb files, are built.

Over the past years, I’ve got a few questions about, “how do I get my application, my extension, and so on packaged?” And I wrote that down as a document. Hopefully to answer most of the questions. And I kind of think that since I wrote this down last year, the questions somehow stopped. If you use that document and like it, please tell me because no one has ever given me any feedback about that. The talk today is kind of loosely based on this document.

I’m not going to assume that you know a whole lot of Debian packaging, but I can’t cover all the details here, so I’ll keep the generic bits a bit superficial and dive a bit more into the Postgres-specific parts.

Generally, the most important file in a Debian package is the debian/control file, which describes the source and the binary packages. This is where the dependencies are declared, this is where the package description goes, and so on. In the Postgres context, we have the first problem: we don’t want to encode any specific PG major versions inside that control file, so we don’t have to change it each year once a new Postgres version comes out.

This is why, instead of a Debian control file, we actually have a debian/control.in file, and then there’s a tool called pg_buildext, originally written by Dimitri Fontaine, one or two decades ago, and then maintained by me and the other Postgres maintainers since then. That tool is, among other things, responsible for rewriting that control.in file to the actual control file.

I just picked one random extension that I happen to have on the system here, this postgresql-semver extension — the upstream author is actually David here. In this control file we say the name of the package; the name of the Debian maintainer — in this case the group; there’s a few uploaders; there’s build dependencies and other things that are omitted here because the slide was already full. And then, next to this source section, we have a package section, and here we have this placeholder: postgresql-PGVERSION-semver.
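A trimmed sketch of what such a debian/control.in looks like (fields abridged; the real file carries more build dependencies):

```
Source: postgresql-semver
Maintainer: Debian PostgreSQL Maintainers <team+postgresql@tracker.debian.org>
Build-Depends: debhelper-compat (= 13), postgresql-server-dev-all

Package: postgresql-PGVERSION-semver
Architecture: any
Depends: ${misc:Depends}, ${shlibs:Depends}, postgresql-PGVERSION
Description: semantic version data type for PostgreSQL
```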

Once we feed this control.in file through the pg_buildext tool, it generates the control file, which expands this PGVERSION placeholder into an actual list of packages. This is just a mechanical translation: we have postgresql-15-semver, 16, 17, and whatever other version is supported at that point.

Once a new PostgreSQL version is released, PostgreSQL 18 comes out, we don’t have to touch anything in this control.in file. We just rerun this pg_buildext updatecontrol command, and it automatically adds the new package.
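That regeneration step is a one-liner; the resulting package list depends on which majors are supported at the time:

```
$ pg_buildext updatecontrol
$ grep ^Package: debian/control
Package: postgresql-15-semver
Package: postgresql-16-semver
Package: postgresql-17-semver
```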

There are about half a dozen layers talking to each other when building a package. On the lowest level, Debian packages are actually ar archives, the ar of library fame, with yet another archive inside called control.tar.xz or something. But no one actually touches it at that level anymore.

We have dpkg on top of that, which provides some building blocks for creating actual Debian packages. So you would call dpkg-deb and other dpkg helpers to actually create a package from that. But because this is complicated, there’s yet another level on top of that, called debhelper. This is the actual standard for building Debian packages nowadays. So instead of invoking all the dpkg tools directly, everyone uses the debhelper tools, which provide wrappers for the most common build steps that are executed. I will show an example in a second.

Next to these wrappers for calling “create me a package”, “copy all files”, and so on, there’s also this program called dh; it’s called a sequencer because it invokes all the other tools in the correct order. So let me show you an example before it gets too confusing. The top-level command to actually build a Debian package — to create the binary packages from the source package — is called dpkg-buildpackage. It will invoke this debian/rules file. The debian/rules file is where all the commands go that are used to build a package. For historical reasons it’s a Makefile. In the shortest incantation it just says, “for anything that is called, invoke this dh sequencer with some arguments.”
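In its shortest form, the entire debian/rules file is just that catch-all rule:

```make
#!/usr/bin/make -f

%:
	dh $@
```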

Let me skip ahead one more slide. If we’re actually running it like that, it kind of looks like this: I’m invoking dpkg-buildpackage; dpkg-buildpackage invokes debian/rules with a target name; debian/rules invokes dh; and dh then calls all the helper steps that are required for getting the package built. The first one would be dh_update_autotools_config, so if any ancient autoconf things are used, they’ll be updated. The package will be configured, then it will be built, and so on.

This was the generic Debian part. Postgres actually adds more automation on top of that: this is the “dh with pgxs” step. Let me go back two slides. We have this pgxs plugin for debhelper, which adds more build steps that call out to this tool called pg_buildext, which in turn interfaces with the pgxs build system in your extension package. Basically, debhelper calls the pgxs plugin, the pgxs plugin calls pg_buildext, and that finally invokes the make command, including any PG_CONFIG or whatever settings are required for compiling this extension.

If we go back to the output here, we can see that one of the steps here is actually invoking this pg_buildext tool and pg_buildext will then continue to actually compile this extension.

This means that in the normal case, for extensions that don’t do anything special, you get away with a very short debian/rules file. Most of the time it’s just a few lines. In this case I added more configuration for two of the helpers. In this step, I told dh_installchangelogs that, in this package, the changelog has a file name it doesn’t automatically recognize. Usually, if you have a file called changelog, it will be picked up automatically, but in this case I told it to use this file. Then I’m telling it that some documentation file should be included in all packages. Everything else is standard and will be picked up by the default Debian toolchain.
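Putting those pieces together, a sketch of such a short debian/rules; the changelog and doc file names here are hypothetical stand-ins for whatever upstream ships:

```make
#!/usr/bin/make -f

%:
	dh $@ --with pgxs

override_dh_installchangelogs:
	# upstream changelog under a name debhelper doesn't auto-detect
	dh_installchangelogs Changes

override_dh_installdocs:
	# -A installs the listed docs into all binary packages
	dh_installdocs -A README.md
```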

Another thing specific to the Postgres bits is that we like to run the package tests at build time. One of the build steps that gets executed is this dh_pgxs test wrapper, which in turn invokes pg_buildext installcheck. That will create a new Postgres cluster and proceed to invoke pg_regress on that package. This is actually the place where the patch that Peter was talking about two weeks ago comes into play.

The actual chain of events is that dh_pgxs starts pg_buildext installcheck, and pg_buildext starts pg_virtualenv, which is a small wrapper shipped with Debian — but not very specific to Debian — that just creates a new Postgres environment and then executes any command in that environment. This is actually very handy for creating test instances. I’m using it all day. So if anyone asks me, “can you try this on Postgres 15?” or something, I’m using pg_virtualenv -v 15 to fire up a temporary Postgres instance. I can then play with it, break it or something, and as soon as I exit the shell that pg_virtualenv opens, the cluster is deleted again.
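That workflow, roughly:

```sh
# Fire up a throwaway Postgres 15 instance; PGHOST/PGPORT point at it.
$ pg_virtualenv -v 15 bash
$ psql -c 'SELECT version()'
$ exit   # leaving the shell deletes the temporary cluster again
```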

In the context of pg_buildext, what pg_virtualenv is doing here is calling pg_createcluster to actually fire up that instance, passing an option to set this extension_control_path to the temporary directory that the extension was installed to during the build process. While we are compiling the package, the actual install command is invoked, but it does not write to /usr/share/postgresql or something; it writes to a subdirectory of the package build directory. So it’s writing to debian/$PACKAGE/$THE_ORIGINAL_PATH.

And that’s why, before we had this in Postgres 18, the Debian packages had a patch that does the same thing as this extension_control_path setting. It was called extension_destdir. It was basically doing the same thing, except that it always assumed you had this structure of some prefix and then the original path. The new patch is more flexible than that: it can be an arbitrary directory. The old extension_destdir patch assumed that it’s always /$something/usr/share/postgresql/$something. I’m glad that that patch finally went in and we can still run the tests at build time.
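Conceptually, the build-time test cluster ends up with a postgresql.conf setting along these lines (the staging path is illustrative; $system appends the compiled-in default location):

```
extension_control_path = '/build/semver/debian/postgresql-18-semver/usr/share/postgresql/18/extension:$system'
```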

So far we’ve only seen how to build things for one Postgres version. The reason this pg_buildext layer is there is that it’s the tool that does the building for each version in turn. pg_buildext will execute any command passed to it for all the versions that are currently supported by that package. What’s happening here is that we have one source package for the extension, and that one source package then builds a separate binary package for each of the major versions covered. But it does this from a single build run.

In contrast to what Devrim is doing with the RPM packages: he’s actually invoking the builds several times, separately for each version. We could also have done it that way; it’s just a design choice that we’ve done it one way round and he’s doing it the other way round.

To tell pg_buildext which versions are supported by the package, there’s a file called debian/pgversions, which usually just contains a single line where you can either say “all versions are supported”, or say that anything starting with 9.1, or PostgreSQL 15 and later, is supported. In this example, the 9.1+ is actually copied from the semver package, because the requirement there was that it needs to support extensions, and that’s what 9.1 introduced. We don’t care about these old versions anymore, but the file was never changed since it was written.
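The whole file is typically a single line:

```
$ cat debian/pgversions
9.1+
```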

We know how to build several Postgres major versions from a source package. The next axis is supporting multiple architectures. The build is invoked separately for each architecture; the single source package is compiled several times, once per architecture. On apt.postgresql.org, we’re currently supporting amd64, arm64, and ppc64el. We used to have s390x support, but I killed that recently because IBM no longer supports any build machine that actually works. Inside Debian, a lot more architectures are supported.

There’s also something called Debian ports, which are not official architectures, but either new architectures that are being introduced, like this loong64 thing, or sometimes old architectures that are not official anymore but are still being kept around, like the Sparc one. There are also some experimental things like hurd-amd64 and hurd-i386, which isn’t even Linux. This is a Hurd kernel, but still running everything Debian on top of it, and some time ago it even started to support Postgres. The packages are even passing the tests there, which is kind of surprising for something that hasn’t ever seen any production.

For Postgres 17, it looks like this. The architectures in the upper half of that table are the official ones, and the gray area on the bottom are the unofficial ones that are, let’s say, less supported. If anything breaks in the upper half, maintainers are supposed to fix it. If anything breaks in the lower half, people might care or might not care.

I like to keep it working, because if Postgres breaks, all the other software that needs it — like libpq, so it’s not even extensions, but any software that depends on libpq — wouldn’t work anymore if it’s not being built. So I try to keep everything updated, but some architectures are very weird and just don’t work. At the moment it looks quite good, though. We even got Postgres 18 running recently. There were some problems with that until last week, but I actually got that fixed on the pgsql-hackers list.

So, we have several Postgres major versions. We have several architectures. But we also have multiple distribution releases. For Debian this is currently sid (unstable), trixie (currently testing), bookworm, bullseye; for Ubuntu: plucky, oracular, noble, jammy, focal — I get to learn one funny adjective each year, once Ubuntu releases something new. We’re compiling things for each of those, and because compiling things yields a different result on each of these distributions, we want things to have different version numbers so people can actually tell where a package is coming from.

Also, if you are upgrading — let’s say from Debian bullseye to Debian bookworm — you want new Postgres packages compiled for bookworm. So things in bookworm need to have higher version numbers than things in bullseye so you actually get an upgrade if you are upgrading the operating system. This means that packages have slightly different version numbers, and what I said before — that it’s just one source package — it’s kind of not true because, once we have new version numbers, we also get new source packages.

But these just differ in a new changelog entry. It’s basically the same thing; they just get a new, automatically created changelog entry, which includes this “plus” version number part. What we’re doing is that the original version number gets uploaded to Debian, but packages that show up on apt.postgresql.org have a marker inside the version number that says PGDG, plus the distribution release number. So for an Ubuntu version it says pgdg24.04 or something, and on Debian it’s pgdg plus 120-something.
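Illustrative version strings (the upstream version is made up):

```
0.40.0-1               # uploaded to Debian unstable
0.40.0-1.pgdg120+1     # rebuilt for Debian 12 (bookworm) on apt.postgresql.org
0.40.0-1.pgdg24.04+1   # rebuilt for Ubuntu 24.04 (noble)
```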

The original source package is tweaked a bit using this shell script. I’m not going to show it now because it’s quite long, but you can look it up there. This is mostly about creating these extra version numbers for these special distributions. It applies a few other tweaks to get packages working in older releases. Usually we can just take the original source package and recompile it on the older Debians and older Ubuntus, but sometimes build dependencies are not there, or have different names, or some feature doesn’t work. In that case, this generate-pgdg-source script has some tweaks, which basically invoke sed commands on the source package to change some minor bits. We try to keep that to a minimum, but sometimes things don’t work out.

For example, when the new compression support was added to Postgres, compiling the newer Postgres versions for the older releases required some tweaks to disable it there, because the older releases didn’t have the required libraries yet.

If you put it all together, you get this combinatorial explosion. From one project, postgresql-semver, we get this many builds, and each of those builds — I can actually show you the actual page — is itself several packages. If you look at the list of artifacts there, it’s creating one package for PostgreSQL 10, 11, 12, and so on. At the moment it’s still building for PostgreSQL 10, because I never disabled it. I’m not going to complain if the support for the older versions is broken at some point; it’s just being done at the moment because it doesn’t cost much.

And that means that, from one source package quite a lot of artifacts are being produced. The current statistics are this:

  • 63355 .deb files
  • 2452 distinct package names
  • 2928 source packages
  • 210 distinct source package names
  • 47 GB repository size

We have 63,000 .deb files. That’s 2,400 distinct package names — so package-$PGVERSION, mostly — built from some 2,900 source packages. The actual number of distinct source package names is 210. Let’s say half of that is extensions; then there are of course separate source packages for Postgres 10, 11, 12, and so on, and a few application packages. In total the repository is 47 gigabytes at the moment.

This is current stuff. All the old distributions are moved to apt-archive.postgresql.org. We are only keeping the latest build inside the repository, so if you’re looking for the second-latest version of something, you can go to apt-archive.postgresql.org. I don’t have statistics for that, but it is much larger. If I had to guess, I would say probably something like 400 gigabytes, though I could be off with that guess.

That was how to get from the source to the actual packages. What we’re doing on top of that is more testing. Next to the tests we run at build time, we also run tests at installation time, once the package is installed. For many packages that’s actually the same tests, just rerun on the actual binaries as installed, as opposed to debian/something. Sometimes it’s different tests. For some packages it’s just simple smoke tests: did everything get installed to the correct location, and does the service actually start? Sometimes it’s more complex things.

Many test suites are meant to be run at compilation time, but we want to run them at install time. This is kind of the make check vs. make installcheck distinction, but some projects are not really prepared to do that. They really want you to basically compile everything before you can run the test suite. I try to avoid that, because something that works at compilation time doesn’t mean it works at install time, because we may have forgotten to install some parts of the build.

I try to get the test suite running with as few compilation steps as possible, but sometimes it just doesn’t work. Sometimes the Makefile assumes that configure was run and that certain variables got substituted somewhere. Sometimes you can get it running by calling make with more parameters, but it tends to break easily if something changes upstream. If you’re an extension author, please think of someone not compiling your software but still wanting to run the tests.

What we’re doing there is run these tests every month: each day, a random set of tests is scheduled — that’s three or four per day or something. It’s not running everything each day, because if something breaks, I can’t fix 50 things in parallel. You can see the test suite tab there. At the moment, actually, everything worked. For example, we could check something…

With this background worker replication status thing, an extension that Magnus wrote some time ago: everything is running fine, but something was broken in January. Ah, there, the s390x machine was acting up. That was probably a pretty boring failure, probably something with the network broken. Not too interesting. This is actually why I shut down that architecture: the build machine was always having weird problems. This is how we keep the system healthy and running.

One thing that also catches problems is called debcheck. This is a static installability analysis tool from Debian. You feed it a set of packages and it will tell you whether everything is installable. In this case, something was not installable on Debian testing. And — if we scroll down there — it would say that postgresql-10-icu-ext was not installable because this libicu72 package was missing. What happened there is that projects or libraries change their soname from time to time; in this case ICU in Debian was moving from 72 to 76, and I just had to recompile this module to make it work.

Usually if something breaks, it’s on the development suites — sid/unstable and trixie/testing — the others usually don’t break. If the others break, then I messed something up.

That was a short tour of how the packaging there works. As for open issues or pain points that there might be: there are packages that don’t have any tests. If we are looking at, what was the number, 63,000 packages, I’m not going to test them by hand, so we really rely on everything being tested automatically. Extensions are usually very well covered, so there’s usually not a problem.

Sometimes there are extensions that don’t have tests, but they are kind of hard to test. For example, modules that don’t produce any SQL output, like auto_explain, are kind of hard to test, because the output goes somewhere else. I mean, in the concrete case, auto_explain probably has tests, but sometimes it’s things that are not as easily testable as new data types.

Things that usually don’t have tests by nature are GUI applications; any program that opens a window is hard to test. But anything that produces text output is usually something I like to cover. The problems with software that we ship and that actually breaks in production are usually in the areas where tests didn’t exist before.

One problem is that some upstream extensions only start supporting Postgres 18 after the release. People should really start doing that beforehand, so we can create the packages before the 18.0 release. I’m not sure when the actual best point to start would be; maybe today, because yesterday was feature freeze. But sometime during the summer would be awesome. Otherwise Devrim and I will go chasing people and telling them, “please fix that.”

We have of course packages for Postgres 18, but we don’t have extension packages for Postgres 18 yet. I will start building those perhaps now, after feature freeze. Let’s see how much works and how much doesn’t. Usually more than half of the packages just work. Some have trivial problems and some have hard problems, and I don’t know yet whether Postgres 18 will be a release with more hard problems or more trivial problems.

Another problem that we run into sometimes is that upstream only cares about 64-bit Intel and nothing else. We recently stopped caring about 32-bit for extensions completely. So apt.postgresql.org is not building any extension packages for any 32-bit architectures anymore. We killed i386, but we also killed arm and so on, on the Debian side.

The reason is that there were too many weird bugs that I had to fix, or at least find, and then chase upstreams about fixing their 32-bit problems. They usually tell me, “I don’t have any 32-bit environment to test on,” and they don’t really care. In the end, there are no users of most extensions on 32-bit anyway. So we decided that it just doesn’t make sense to fix that, and in order to prevent the problems from appearing in the first place, we just disabled everything 32-bit for the extensions.

The server is still being built. It behaves nicely. I did find a 32-bit problem in Postgres 18 last week, but that was easy to fix and not that much of a problem. But my life got a lot better once I started not caring about 32-bit anymore. Now the only problem left is big-endian s390x in Debian, but that doesn’t cause that many problems.

One place where we only cover a bit of the stuff is when projects have multiple active branches. Some projects do separate releases per Postgres major version. For example, pgaudit has separate branches for each of the Postgres versions, so we track those separately, just to make pgaudit available. pg_hint_plan is the same, and this Postgres graph extension thing (Apache AGE) is also the same. This is just to support all the Postgres major versions. We have separate source packages for each of the major versions, which is kind of a pain, but it doesn’t work otherwise.

Where we don’t support several branches is when upstream maintains several branches in parallel. For example, PostGIS maintains 3.5, 3.4, 3.3 and so on, and we always package only the latest one. Same for Pgpool, and there are probably other projects that do that. We just don’t do it because it would be even more packages we have to take care of. So we package only the latest one, and so far there have not been that many complaints about it.

Possibly next on the roadmap is looking at what to do with Rust extensions. We don’t have anything Rust yet, but that will probably be coming. It’s probably not very hard; the question is just how much of the build dependencies of the average extension are already covered by Debian packages, and how much would we have to build, or do we just go and vendor all the dependencies? What’s the best way forward?

There’s actually a very small number of packages that are shipped on apt.postgresql.org but are not in Debian, for this reason. For example, the PL/Java extension is not in Debian because too many of its build dependencies are not packaged in Debian. I don’t have enough free time to actually care about those Java things, and I can’t talk Java anyway, so it wouldn’t make much sense.

I hope that was not too much, in the too short time.

Questions and comments

  • Pavlo Golub: When you show the pg_virtualenv usage, do you use pre-built binaries, or do you rebuild every time, for every new version you are using?

  • Christoph: No, no, that’s using the prebuilt binaries. The way it works is: I have many Postgres versions installed on that machine, and then I can just go and say pg_virtualenv, and I want, let’s say, an 8.2 server. It’s calling initdb for that version; it’s actually telling it to skip the fsync — that’s why 8.3 was taking a bit longer, because it doesn’t have that option yet. And it’s setting the PGPORT, PGHOST, and so on variables, so I can just connect and then play with this old server. The problem is that psql dropped compatibility with those old servers at some point, but it’s still working for sending normal commands with a modern psql.

  • Pavlo: For modern psql, yeah. That’s cool! Can you add not only vanilla Postgres, but other flavors, like those by EDB or Cybertec, or …?

  • Christoph: I’ve thought about supporting that; the problem there is that there are conflicting requirements. What we’ve done on the Cybertec side is that if another Postgres distribution wants to be compatible with this one, it really has to place things in the same directories. So if it’s installing to exactly this location and actually behaving like the original, it’ll just work. If it’s installing to /opt/edb/something, it’s not supported at the moment, but that’s something we could easily add. What pg_virtualenv is really doing is just invoking the existing tools with enough parameters to put the data directory into some temporary location.

  • Pavlo: And one more question. You had Go extensions mentioned on your last slide, but you didn’t say anything about those.

  • Christoph: Yeah, the story is the same as with Rust. We have not done anything with it yet and we need to explore it.

  • David Wheeler: Yurii was saying a bit about that in the chat. It seems like the problem is that both of them expect to download most of their dependencies, and vendoring them swells the size of the download; and since they’re compile-time dependencies rather than runtime dependencies, it seems kind of silly to make packages.

  • Christoph: Yeah. For Debian, the answer is that Debian wants to be self-contained, so downloading things from the internet at build time is prohibited. The ideal solution is to package everything; if it’s things that are really used only by one package, then vendoring the modules might be an option. But people will look funny at you if you try to do that.

  • Yurii: I think part of the problem here is that in the Rust ecosystem in particular, it’s very common to have a lot of dependencies, as in hundreds, where you start having one dependency and that dependency brings in another dependency. The other part of the problem is that you might depend on a particular range of versions of particular dependencies, and others depend on others. Packaging all of those as individual dependencies becomes something really difficult to accomplish. So vendoring and putting them into the source is something we could do to avoid the problem.

  • Christoph: Yeah, of course, it’s the easy solution. Some of the programming language ecosystems fit better into Debian than others. So I don’t know how well Rust fits or not.

    What I know from the Java world is that they also like to version everything and put version restrictions on their dependencies. But what Debian Java packaging helpers are doing is just to nuke all those restrictions away and just use the latest version and usually that just works. So you’re reducing the problem by one axis by having everything at the latest version. No idea how reasonable the Rust version ranges there are. So if you can just ignore them and things still work, or…

  • Yurii: Realistically, this is impossible. They do require particular versions, and oftentimes they will not compile otherwise. The whole toolchain expects particular versions. It’s not only the dependency systems themselves; it’s also Rust: a package or extension can demand a particular minimum supported Rust version, and if that version is not available in a particular distro, you just can’t compile.

  • Christoph: Then the answer is we don’t compile it, and you don’t get it. I mean, Rust is possibly still very new, and people depend on the latest features and are then possibly just out of luck if they want something on Debian bullseye. But at some point that problem should resolve itself, as Rust gets more stable and the problem isn’t as common anymore.

  • Yurii: It’s an interesting take, actually, because you’d think the languages that have been around for much longer would have solved this problem. But if you look at, I don’t know, C and C++, so GCC and Clang, right? They keep evolving and changing all the time too. There’s a lot of code, say in C++, that would not compile with a compiler older than, say, three years. So yeah, we see that in old languages too.

  • Christoph: Yeah, but Postgres knows about that problem and just doesn’t use any features that are not available in all compilers. Postgres has solved that problem.

  • Yurii: Others not so much. Others can do whatever they want.

  • Christoph: If upstream doesn’t care about their users, that’s upstream’s problem.

  • David: I think if there’s a centralized place where the discussion of how packaging systems should manage stuff like Go and Rust is happening, I think it’s reaching the point where there’s so much stuff that we’ve gotta figure out how to work up a solution.

  • Christoph: We can do backports of certain things in the repository and make certain toolchain bits available on the older distributions. But you have to stop at some point. I’m certainly not going to introduce GCC backports, because I just can’t manage that. So far we haven’t done much of that. I think Devrim is actually backporting parts of the GIS toolchain, like GDAL and libproj or something. I’ve always been using what is available in the base distribution for that. There is some room for making it work, but it’s always a question of how much extra work we want to put in, how much we want to deviate from the base distribution, and ultimately also of supporting the security bits of that.

[David makes a pitch for the next two sessions and thanks everyone for coming].

Mini Summit 3: APT Extension Packaging

Orange card with large black text reading “APT Extension Packaging”. Smaller text below reads “Christoph Berg, Debian/Cybertec” and “04.09.2025”. A photo of Christoph looking coolly at the camera appears on the right.

Join us this Wednesday, April 9 at noon America/New_York (16:00 UTC) for Extension Mini Summit #3, where Christoph Berg will take us on a tour of the PostgreSQL Global Development Group’s APT repository with a focus on packaging extensions. For those of us foolish enough to consider building our own binary packaging systems for extensions, this will be an essential session. For everyone else, come be amazed by the sheer volume of extensions readily available from the repository. Browse on over to the Meetup to register for this live video conference.

2025 Postgres Extensions Mini Summit Two

Orange card with large black text reading “Implementing an Extension Search Path”. Smaller text below reads “Peter Eisentraut, EDB” and “03.26.2025”. A photo of Peter speaking into a mic at a conference appears on the right.

Last Wednesday, March 26, we hosted the second of five virtual Extension Mini-Summits in the lead up to the big one at the Postgres Development Conference (PGConf.dev) on May 13 in Montréal, Canada. Peter Eisentraut gave a very nice presentation on the history, design decisions, and problems solved by “Implementing an Extension Search Path”. That talk, plus another 10-15m of discussion, is now available for your viewing pleasure:

If you’d like to attend any of the next three Mini-Summits, join the Meetup!

Once again, with many thanks to Floor Drees for the effort, here’s the transcript from the session.

Introduction

Floor Drees introduced the organizers:

Peter Eisentraut, contributor to PostgreSQL development since 1999, talked about implementing an extension search path.

The stream and the closed captions available for the recording are supported by PGConf.dev and their gold level sponsors, Google, AWS, Huawei, Microsoft, and EDB.

Implementing an extension search path

Peter: Thank you for having me!

I’m gonna talk about a current project by me and a couple of people I have worked with, and that will hopefully ship with Postgres 18 in a few months.

So, what do I know about extensions? I’m a Postgres core developer, but I’ve developed a few extensions in my time; here’s a list of extensions that I’ve built over the years.

Some of those are experiments, or sort of one-offs. Some of those are actually used in production.

I’ve also contributed to well-known extensions: orafce; and back in the day, pglogical, BDR, and pg_failover_slots, at EDB, and previously 2ndQuadrant. Those are obviously used widely and in important production environments.

I also wrote an extension installation manager called pex at one point. The point of pex was to do it in one shell script, so you don’t have any dependencies. It’s just a shell script, and you can say pex install orafce and it installs it. This was a proof of concept, in a sense, but was actually quite useful sometimes for development, when you just need an extension and you don’t know where to get it.

And then I wrote, even more experimental, a follow-on project called autopex, which is a plugin module that you load into Postgres that automatically installs an extension if you need it. If you call CREATE EXTENSION orafce, for example, and you don’t have it installed, autopex downloads and installs it. Obviously highly insecure and dubious in terms of modern software distribution practice, but it does work: you can just run CREATE EXTENSION, and it just installs it if you don’t have it. That kind of works.

So anyways, so I’ve worked on these various aspects of these over time. If you’re interested in any of these projects, they’re all under my GitHub account.

In the context of this presentation…this was essentially not my idea. People came to me and asked me to work on this, and as it worked out, multiple people came to me with their problems or questions, and then it turned out it was all the same question. These are the problems I was approached about.

The first one is extension management in the Kubernetes environment; we’ll hear about this in a future talk in this series. Gabriele Bartolini from the CloudNativePG project approached me and said that the issue in a Kubernetes environment is that if you launch a Postgres service, you don’t install packages; you have a pre-baked disk image that contains the software you need. There’s a Postgres server and maybe some backup software in that image, and if you want to install an extension that is not in that image, you need to rebuild the image with the extension. That’s very inconvenient.

The ideal scenario would be that you have additional disk images for the extensions and you just somehow attach them. I’m hand-waving through the Kubernetes terminology, and again, there will be a presentation about that in more detail. But I think the idea is clear: you want to have these immutable disk images that contain your pieces of software, and if you want to install more of them, you just want to combine these disk images together, and that doesn’t work at the moment.

Problem number two: I was approached by a maintainer of the Postgres.app project, a Mac binary distribution for Postgres. It’s a nice, user-friendly binary distribution. This is sort of a similar problem: on macOS you have these .app files to distribute software. They’re this sort of weird hybrid between a zip file with files in it and a directory you can look into. But it’s basically an archive with software in it. In this case it has Postgres in it, and it integrates nicely into your system. But again, if you want to install an extension, that doesn’t work as easily, because you would need to open up that archive and stick the extension in there somehow, or overwrite files.

And there’s also a tie-in with the way these packages are signed by Apple: if you mess with the files in the package, then the signature becomes invalid. That’s the way it’s been explained to me; I hope this is approximately accurate, but you already get the idea, right? It’s the same problem: you have this base bundle of software that is immutable, or that you want to keep immutable, and you want to add things to it, which doesn’t work.

And then the third problem I was asked to solve came from the Debian package maintainer, who will also speak later in this presentation series. What he wanted to do was run the tests of an extension while the package is being built. That makes sense: you want to run the tests of the software you’re building a package for. But in order to do that, you have to install the extension into the normal file system location, right? That seems bad. You don’t want to install the software into the main system while you’re building it. He actually wrote a custom patch to be able to do that, which my work was then inspired by.

Those are the problems I was approached about.

I had some problems I wanted to solve myself based on my experience working with extensions. While I was working on these various extensions over the years, one thing that never worked is that you could never run make check. It wasn’t supported by the PGXS build system. Again, it’s the same issue.

It’s essentially a subset of the Debian problem: you want to run a test of the software before you install it, but Postgres can only load an extension from a fixed location, and so this doesn’t work. It’s very annoying because it makes the software development cycle much more complicated. You always have to run make all, make install, make sure you have a server running, make installcheck. And then you want to test it against various different server versions, so usually you have to run this in some weird loop. I’ve written custom scripts and stuff all around this, but it was never satisfactory. It should just work.
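For illustration, that cycle looks roughly like this for a PGXS-based extension; this is a sketch, and the server versions and pg_config paths are illustrative:

make all
make install
make installcheck    # requires a running server

# and then again for each server version you care about:
for v in 16 17; do
    make clean all install installcheck PG_CONFIG=/usr/lib/postgresql/$v/bin/pg_config
done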

That’s the problem I definitely wanted to solve. The next problem — and these are all subsets of each other — is that if you have Postgres installed from a package, like an RPM package for example, and then you build the extension locally, you have to install the extension into the directory locations that are controlled by your operating system. If you have Postgres under /usr, then the extensions also have to be installed under /usr, whereas you probably want to install them under /usr/local or somewhere else. You want to keep those locally built things separate, but that’s not possible.

And finally — this is a bit more complicated to explain — I’m mainly using macOS at the moment, and the Homebrew package manager is widely used there. But it doesn’t support extensions very well at all. It’s really weird because the way it works is that each package is essentially installed into a separate subdirectory, and then it’s all symlinked together. And that works just fine. You have a bunch of bin directories, and it’s just a bunch of symlinks to different subdirectories and that works, because then you can just swap these things out and upgrade packages quite easily. That’s just a design choice and it’s fine.

But again, if you wanna install an extension, the extension would be its own package — PostGIS, for example — and it would go into its own directory. But that’s not the directory where Postgres would look for it. You would have to install it into the directory structure that belongs to the other package, and that just does not fit with that system at all. There are weird hacks at the moment, but it’s not satisfactory; it doesn’t work at all.

It turned out all of these things had sort of come up over the years, and people approached me about some of them, and I realized these are essentially all the same problem: the extension file location is hard-coded to be inside the Postgres installation tree. For example, it’s usually under something like /usr/share/postgresql/extension/, and you can’t install extensions anywhere else. If you want to keep this location managed by the operating system or by your package manager, or in some kind of immutable disk image, you can’t. These are essentially all versions of the same problem, so that’s why I got engaged and tried to find a solution that addresses all of them.

I had worked on this already before, a long time ago, and then someone broke it along the way. And now I’m fixing it again. If you go way, way back, before extensions as such existed in Postgres in 9.1, when you wanted to install a piece of software that consists of a shared library object and some SQL, you had to install the shared library object into a predetermined location just like you do now. In addition, you had to run that SQL file by hand, basically, like you run psql -f install_orafce.sql or something like that. Extensions made that a little nicer, but it’s the same idea underneath.

In 2001, I realized this problem already and implemented a configuration setting called dynamic_library_path, which allows you to set a different location for your shared library. Then you can say

dynamic_library_path = '/usr/local/my-stuff/something'

And then Postgres would look there. For the SQL file, you just know where it is, because you run it manually. You would then run

psql -f /usr/local/my-stuff/something/something.sql

That fixed that problem at the time. But when extensions were implemented, I was essentially not paying attention, or, you know, nobody was paying attention. Extension support was a really super nice feature, of course, but it broke this previously-available feature: you couldn’t install your extensions anywhere you wanted; you were tied to this specific file system location. dynamic_library_path still existed: you could still set it somewhere, but you couldn’t really make much use of it. I mean, you could make use of it for things that are not extensions. If you have some kind of plugin module, or modules that install hooks, you could still do that. But not for an extension that consists of a set of SQL scripts and a control file.

As I was being approached about these things, I realized that was just the problem and we should just now fix that. The recent history went as follows.

In April 2024, just about a year ago now, David Wheeler started a hackers thread suggesting Christoph Berg’s Debian patch as a starting point for discussions. Like, “here’s this thing, shouldn’t we do something about this?”

There was a fair amount of discussion. I was not really involved at the time; this was just after feature freeze, and so I wasn’t paying much attention to it. But the discussion was quite lively and a lot of people pitched in and had their ideas and thoughts about it. A lot of important filtering work was done at that time.

Later, in September, Gabriele, my colleague from EDB who works on CloudNativePG, approached me about this issue and said: “Hey, this is important, we need this to make extensions useful in the Kubernetes environment. Can you work on this?”

I said, “yeah, sure, in a couple months I might have time.” [Laughs]. But it turned out that at PGConf.EU we had a big brain-trust meeting of various people who basically all came and said, “hey, I heard you’re working on extension_control_path; I also need that!”

Gabriele was there, and Tobias Bussmann from Postgres.app was there, and Christoph, and everybody was like: yeah, I really need this extension_control_path to make this work. So I made sure to talk to everybody there and make sure that, if we did this, it would work for them. And then we had a good idea of how it should work.

In November the first patch was posted, and last week it was committed. I think there’s still a little bit of discussion of some details, and we certainly still have some time before the release to fine-tune it, but the main work is hopefully done.

This is the commit I made last week. The fact that this presentation was scheduled gave me additional motivation to get it done. I want to give some credit to the people who reviewed it. Obviously David did a lot of reviews and gave feedback in general. My colleague Matheus, who I think I saw earlier here on the call, helped me quite a bit with finishing the patch. And then Gabriele, Marco, and Nicolò, who work on CloudNativePG, did a large amount of testing.

They set up a whole sort of sandbox environment, making test images for extensions and simulating the entire process of attaching these to the main image. Again, I’m butchering the terminology, but I’m just trying to explain it in general terms. They did the whole end-to-end testing of what that would then look like with CloudNativePG. And that will, I assume, be discussed when Gabriele presents in a few weeks.

These are the stats from the patch:

commit 4f7f7b03758

doc/src/sgml/config.sgml                                     |  68 +++++
doc/src/sgml/extend.sgml                                     |  19 +-
doc/src/sgml/ref/create_extension.sgml                       |   6 +-
src/Makefile.global.in                                       |  19 +-
src/backend/commands/extension.c                             | 403 +++++++++++++++++----------
src/backend/utils/fmgr/dfmgr.c                               |  77 +++--
src/backend/utils/misc/guc_tables.c                          |  13 +
src/backend/utils/misc/postgresql.conf.sample                |   1 +
src/include/commands/extension.h                             |   2 +
src/include/fmgr.h                                           |   3 +
src/test/modules/test_extensions/Makefile                    |   1 +
src/test/modules/test_extensions/meson.build                 |   5 +
.../modules/test_extensions/t/001_extension_control_path.pl  |  80 ++++++

The reason I show this is that it’s not big! What I did was use the same infrastructure and mechanisms that already existed for dynamic_library_path. That’s the code in dfmgr there in the middle; that’s where this little path search is implemented. And then of course in extension.c there’s some code that’s basically just a bunch of utility functions, like to list all the extensions and list all the versions of all the extensions. Those utility functions exist and they needed to be updated to do the path search. Everything else is pretty straightforward. There are just a few configuration settings added to the documentation and the sample files and so on. It’s not that much, really.

One thing we also did was add tests for this, down there in test_extensions. We wrote some tests to make sure this works. Well, it’s one thing to make sure it works, but the other thing is, if we want to make changes or we find problems with it, or we want to develop this further in the future, we have a record of how it works, which is why you write tests. I just wanted to point that out because we didn’t really have that before, and it was quite helpful to build confidence that we know how this works.

So how does it work? Let’s say you have your Postgres installation in a standard Linux file system, package-controlled location. None of the actual packages look exactly like this, I believe, but it’s a good example. You have your binaries under /usr/bin/, the shared libraries under /usr/lib/something, and the extension control files and SQL files under /usr/share/something. That’s your base installation. And then you want to install your extension into some other place to keep these things separate. So you have /usr/local/mystuff/, for example.

Another thing this patch implemented is that you can now do this: when you build an extension, you can write make install prefix=something. Before, you couldn’t do that, but there was also no point, because if you installed it somewhere else, you couldn’t do anything with it there. Now you can load it from somewhere else, but you can also install it there, which obviously are the two important sides of that.
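For instance, a sketch of that build-and-install step, with an illustrative prefix:

make
make install prefix=/usr/local/mystuff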

And then you set these two settings: dynamic_library_path is an existing configuration setting; you set that to where your lib directory is. And extension_control_path is a new setting, the titular setting of this talk, where you tell it where your extension control files are.

There are these placeholders, $libdir and $system, which mean the system locations; the other entries are your own locations, separated by colons (and semicolons on Windows). We had some arguments about what exactly the extension_control_path placeholder should be called, and people continue to have different opinions. What it does is look in the listed directories for the control file, and wherever it finds the control file, it loads all the other files from there.
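Putting the two together, a minimal postgresql.conf sketch might look like this; the /usr/local/mystuff paths are illustrative, and the exact subdirectory layout is up to you:

dynamic_library_path = '/usr/local/mystuff/lib:$libdir'
extension_control_path = '/usr/local/mystuff/share/extension:$system'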

And there’s a fairly complicated mechanism. There are obviously the actual SQL files, but there are also these auxiliary control files, which I didn’t even know existed: you can have version-specific control files. It’s a fairly complicated system, so we wanted to be clear about what happens: the main control file is searched for in these directories, and wherever it’s found, that’s where it looks for the other things. You can’t have the control file in one part of the path and the SQL files in another part of the path; that’s not how it works.

That solves problem number five. Let’s see what problem number five was. I forgot [Chuckles]. This is the basic problem, that you no longer have to install the extensions in the directories that are ostensibly controlled by the operating system or your package manager.

So then how would Debian packaging use this? I got this information from Christoph; he figured out how to do it. He just said, “Oh, I did this, and that’s how it works.” During packaging, the packaging scripts that build the packages just pass these:

PKGARGS="--pgoption extension_control_path=$PWD/debian/$PACKAGE/usr/share/postgresql/$v/extension:\$system
--pgoption dynamic_library_path=$PWD/debian/$PACKAGE/usr/lib/postgresql/$v/lib:/usr/lib/postgresql/$v/lib"

These options set the extension_control_path and the dynamic_library_path with the appropriate versions, and then it works. Christoph confirmed that this addresses his problem; he no longer has to carry his custom patch. This solves problem number three.

The question people ask is: why do we have two? Or maybe you’ve asked yourself that. Why do we need two settings? We have dynamic_library_path, we have extension_control_path. Isn’t that kind of the same thing? Kind of, yes! But in general, it is not guaranteed that these two things are in a fixed relative location.

Let’s go back to our fake example. We have the libraries in /usr/lib/postgresql and the SQL and control files in /usr/share/postgresql, for example. Now you could ask: why don’t we just set the path to /usr/local/mystuff and have it figure out the subdirectories? That would be nice, but it doesn’t quite work in general, because it’s not guaranteed what those subdirectories are. There could be lib64, for example, or some other architecture-specific subdirectory name. Or people can just name them whatever they want. This may be marginal, but it is possible. You need to keep in mind that the subdirectory structure is not necessarily fixed.

So we need two settings. The way I thought about this: if you compile C code, you also have two settings, and if you think about it, it’s exactly the same thing. When you compile C code, you always have -I and -L: -I for the include files, -L for the lib files. This is basically the same thing: the include file is the text file that describes the interfaces, and the libraries are the libraries. Again, you need two options, because you can’t just tell the compiler, oh, look for it in /usr/local, because the subdirectories could be different. There could be architecture-specific lib directories; that’s a common case. You need those two settings, and usually they go in parallel. If somebody has a plan for how to do it more simply, follow-up patches are welcome.
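To make the analogy concrete, here is a sketch of the compiler equivalent; the paths and library name are illustrative:

# -I says where the interface descriptions live; -L says where the libraries live
cc -I/usr/local/mystuff/include -L/usr/local/mystuff/lib -lmystuff -o myprog myprog.c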

But the main point of why this approach was taken was also to get it done in a few months. I was contacted about this in September and started thinking about it seriously in the October/November timeframe. That’s quite late in the development cycle to start a feature like this, which I thought would be more controversial! People haven’t really complained that this breaks the security of extensions or anything like that. I was a little bit afraid of that.

So I wanted to really base it on an existing facility that we already had, and that’s why I wanted to make sure it works exactly in parallel to the other path that we already have, and that has existed for a long time, and was designed for this exact purpose. That was also the reason why we chose this path of least resistance, perhaps.

This is the solution progress for the six problems I described initially. The CloudNativePG folks have obviously accompanied this project actively and have already prototyped the integration solution, and presumably we will hear about some of that at the meeting on May 7th, where Gabriele will talk about this.

Postgres.app I haven’t been in touch with, but one of the maintainers is here; maybe you can give feedback later. Debian is done, as I described, and they will also be at the next meeting; maybe there will be some comment on that.

One thing that’s not fully implemented is the make check issue. I did send a follow-up patch for that, which was a really quick prototype hack, and people really liked it. I’m slightly tempted to give it a push and try to get it into Postgres 18. It’s a work in progress, but there’s sort of a way forward. The local-install problem, as I said, is done.

Homebrew I haven’t looked into. It’s more complicated, and I’m also not very closely involved in the development of that. I’ll just be an outsider, maybe sending patches or suggestions at some point, maybe when the release is closer and we’ve settled everything.

I have some random other thoughts here. I’m not actively working on these right now, but I have worked on it in the past and I plan to work on it again. Basically the conversion of all the building to Meson is on my mind, and other people’s mind.

Right now we have two build systems: the make build system and the Meson build system, and all the production packages, as far as I know, are built with make. Eventually we want to move all of that over to Meson, but we want to test all the extensions and see if they still work. As far as I know, they do work; there’s nothing that really needs to be implemented, but we need to go through all the extensions and test them.

Secondly — this is optional; I’m not saying this is a requirement — you may wish to also build your own extensions with Meson. But that’s, in my mind, not a requirement. You can also use CMake or do whatever you want. There have been some prototypes of that; solutions exist if you’re interested.

And to facilitate the second point, there’s been a proposal, which I think was well received but just needs to be fully implemented, to provide a pkg-config file to build against the server; CMake and Meson would work very well with that. Then you can just point at a pkg-config file to build against the server. It’s much easier than setting all the directories yourself or extracting them from pg_config. Maybe that’s something coming in the next release cycle.
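As a purely hypothetical sketch of what that could enable (the module name postgresql-server is an assumption, since the proposal isn’t implemented yet):

# compile an extension source file against the server headers
cc $(pkg-config --cflags postgresql-server) -fPIC -c myext.c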

That’s what I had. So, extension_control_path is coming in Postgres 18. What you can do is test and validate it against your use cases and help integration into downstream users. Again, if you’re a packager or anything like that, you can make use of it. That is all for me.

Thank you!

Questions, comments

  • Reading the comments, where several audience members suggested Peter follows Conference-Driven Development, he confirmed that that’s definitely a thing.

  • Someone asked for the “requirements gathering document”. Peter said that that’s just a big word for “just some notes I have”. “It’s not like an actual document. I called it the requirements gathering. That sounds very formal, but it’s just chatting to various people and someone at the next table overheard us talking and it’s like, ‘Hey! I need that too!’”

  • Christoph: I tried to get this fixed or implemented at least once over the last ten-ish years, and was basically shot down on grounds of security issues if people mess up their system. And what happens if you set the extension path to something, install an extension, and then set the path to something else, and then you can’t upgrade. And all sorts of weird things that people can do with their systems in order to break them. Thanks for ignoring all that bullshit and just getting it done! It’s an administrator-level setting and people can do whatever they want with it.

    So what I then did was just implement that patch and, admittedly, I never got around to even trying to put it upstream. So thanks, David, for pushing that ahead. It was clear that the Debian version of the patch wasn’t acceptable because it was too limited: it made some assumptions about the directory structure of Debian packages, so it always included the prefix in the path. The feature that Peter implemented solves my problem, and it solves a lot more problems besides, so thanks for that.

  • Peter: Testing all extensions: what we’ve talked about is doing this through the Debian packaging system, because the idea was to maybe make a separate branch or a separate sub-repository of some sort, switch it to build with Meson, rebuild all the extension packages, and see what happens. I guess that’s how far we’ve come. It doesn’t actually mean they all work, but I guess most of them have tests, so we just want to run them and see if it works.

    There are some really subtle problems. Well, the ones I know of have been fixed, but there are cases where certain compilation options are not substituted into the Makefiles correctly, so then all your extensions are built without any optimizations, for example without any -O options. I’m not really sure how to detect those automatically, but at least rebuilding everything once might be an option. Or just do it manually. There are not thousands of extensions; there are not even hundreds that are relevant. There are several dozen, and I think that’s good coverage.

  • Christoph: I realize that doing it on the packaging side makes sense, because we all have these tests running. So I was looking into it. The first time I tried, I stopped once I realized that Meson doesn’t support LLVM yet; the second time, I just diffed the generated Makefiles to see if there’s any difference that looks suspicious. At this point I should just continue, do a compilation run, and see what the tests are doing.

    So my hope would be that I could run diff on the results. The plan is to compile Postgres with Autoconf once and then with Meson the second time, then see if it has an impact on the extensions compiled. My idea was that if I just run diff on the two compilations and there’s no difference, there’s no point in testing, because they’re identical anyway.
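    A sketch of that comparison, assuming each build is installed into its own tree (the paths are illustrative):

    # install the Autoconf build and the Meson build into separate trees, then:
    diff -ru /opt/pg-autoconf /opt/pg-meson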

  • Peter: Oooh, you want the actual compilation, the Makefile output, to be the same.

  • Christoph: Yeah, then I don’t have to run that test. But the diff was a bit too big to be readable. There was lots of whitespace noise in there, but there were also some actual changes. Some were not really bad: at some points variables were using a fully qualified path for the make directory, and at some points not. But maybe we can just work on making that difference smaller, and then arguing about correctness becomes easier.

  • Peter: Yeah, that sounds like a good approach.

  • Jakob: Maybe I can give some feedback from Postgres.app. So, thank you very much. I think this solves a lot of problems that we have had with extensions over the years, especially because it allows us to separate the extensions and the main Postgres distribution. For Postgres.app we basically have to decide which extensions to include and we can’t offer additional extensions when people ask for them without shipping them for everyone. So that’s a big win.

    One question I am wondering about is the use case of people building their own extensions. As far as I understand, you have to provide the prefix. One thing I’m wondering is whether there is some way to give a default value for the prefix, like in pg_config or in something like that, so people who just type make install automatically get some path.

  • Peter: That might be an interesting follow on. I’m making a note of it. I’m not sure how you’d…

  • Jakob: I’m just thinking, because a big problem is that a lot of people who try things don’t follow the instructions for the specific Postgres. So for example, if we write documentation for how to build extensions, people on a completely different system — like people who Google stuff and get instructions — will just try random paths. Right now, if you just type make install, it works on most systems because it just builds into the standard directories.

  • Peter: Yeah, David puts it like, “should there be a different default extension location?” I think that’s probably not an unreasonable direction, and something we should maybe think about once this is stabilized. For your Postgres.app use case, I think you could probably even implement that yourself with a one- or two-line patch, so that if you install Postgres.app and somebody tries to build an extension, they get a reasonable location.

  • David: If I could jump in there, Jakob, my assumption was that Postgres.app would do something like designate the Application Support directory and Preferences in ~/Library as where extensions should be installed. And yeah, there could be some patch to PGXS to put stuff there by default.

  • Jakob: Yeah, that would be nice!

  • Peter: Robert asked a big question here: what do we think are the security consequences of this patch? Well, one of the premises is that we already have dynamic_library_path, which works exactly the same way, and there haven’t been any concerns about that. Well, maybe there have been concerns, but nothing that was acted on. If you set the path to somewhere anybody can write stuff, then yeah, that’s not so good. But that’s the same as anything. Certainly there were concerns as I read through the discussion.

    I assumed somebody would have security questions, so I really wanted to base it on this existing mechanism and not invent something completely new. So far nobody has objected to it [Chuckles]. But yeah, of course you can make a mess of it if you set extension_control_path = /tmp! That’s probably not good. But don’t do that.

  • David: I think the xz exploit in part made people more receptive to this patch, because we want to reduce the number of patches that packaging maintainers have to maintain.

  • Peter: Obviously this is something people do. Better to have one solution that people can use and that at least we understand, as opposed to everybody going out and figuring out their own complicated solutions.

  • David: Peter, I think there are still some issues with the behavior of MODULEDIR from PGXS and the directory option in the control file, which don’t quite work with this patch. Do you have some thoughts on how to address those issues?

  • Peter: For those who are not following: there’s an existing, I guess rarely used, feature where, in the control file, you can specify a directory option, which then specifies where other files are located. And this doesn’t work the way you might think it should; maybe it’s not clear what that should do if you find it in a path somewhere. I guess it’s so rarely used that we might just get rid of it; that was one of the options.

    In my mental model of how the C compiler works, it’s like setting an rpath on something. If you set an absolute rpath somewhere, it’s not gonna work if you move the thing to a different place in the path. I’m not sure if that’s a good analogy, but it has similar consequences: if you hard-code an absolute path, then path search is not gonna work. But yeah, that’s on the list of things I need to look into.

  • David: For what it’s worth, I discovered last week that the part of this patch where you’re stripping out $libdir in the extension makefile (that was in modules, I think?) also needs to be done when you use rpath to install an extension and point to extensions today with Postgres 17. Happy to see that one go.

  • Christoph: Thanks for fixing that part. I was always wondering why it was broken, because the way it was broken looked very weird. It turned out it was just broken, not me misunderstanding it.

  • David: I think it might have been a documentation oversight back when extensions were added in 9.1, to say this is how you list the modules.

    Anyway, this is great! I’m super excited for this patch, where it’s going, and the promise for stuff in the future. Just from your list of the six issues it addresses, it’s obviously something that covers a variety of pain points. I appreciate you doing that.

  • Peter: Thank you!

Many thanks and congratulations wrap up this call.

The next Mini-Summit is on April 9, when Christoph Berg (Debian, and also Cybertec) will join us to talk about Apt Extension Packaging.

Mini Summit 2: Extension Search Path Patch

Orange card with large black text reading “Implementing an Extension Search Path Patch”. Smaller text below reads “Peter Eisentraut, EDB” and “03.26.2025”. A photo of Peter speaking into a mic at a conference appears on the right.

This Wednesday, March 26 at noon America/New_York (16:00 UTC), Peter Eisentraut has graciously agreed to give a talk at the Extension Mini Summit #2 on the extension search path patch he recently committed to PostgreSQL. I’m personally stoked for this topic, as freeing extensions from the legacy of a single directory opens up a number of new patterns for packaging, installation, and testing extensions. Hit the Meetup to register for this live video conference, and to brainstorm novel uses for this new feature, expected to debut in PostgreSQL 18.

2025 Postgres Extensions Mini Summit One

Back on March 12, we hosted the first in a series of PostgreSQL Extensions Mini Summits leading up to the Extension Ecosystem Summit at PGConf.dev on May 13. I once again inaugurated the series with a short talk on the State of the Extension Ecosystem. The talk was followed by 15 minutes or so of discussion. Here are the relevant links:

And now, with many thanks to Floor Drees for the effort, the transcript from the session.

Introduction

Floor Drees introduced the organizers:

David presented a State of the Extension Ecosystem at this first event, and shared some updates from PGXN land.

The stream and the closed captions available for the recording are supported by PGConf.dev and their gold level sponsors, Google, AWS, Huawei, Microsoft, and EDB.

State of the Extensions Ecosystem

So I wanted to give a brief update on the state of the Postgres extension ecosystem: the past, present, and future. Let’s give a brief history; it’s quite long, actually.

There were originally two approaches back in the day. You could use shared_preload_libraries to preload dynamic shared libraries into the main process. And then you could do pure SQL stuff, including using procedural languages like PL/Perl, PL/Tcl, and such.

And there were a few intrepid early adopters, including PostGIS, BioPostgres, PL/R, PL/Proxy, and pgTAP, who all made it work. Beginning with Postgres 9.1, Dimitri Fontaine added explicit support for extensions in the Postgres core itself. The key features included the ability to compile and install extensions. This is, again, pure SQL and shared libraries.

There are CREATE, ALTER, and DROP EXTENSION commands in SQL that you can use to add extensions to a database, upgrade them to new versions, and remove them. And then pg_dump and pg_restore support, so that extensions can be treated as a single bundle to be backed up and restored, with all of their individual objects included as part of the backup.
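For example, the full lifecycle in SQL; the extension name and target version here are just illustrative:

CREATE EXTENSION pgtap;                    -- install into the current database
ALTER EXTENSION pgtap UPDATE TO '1.3.3';   -- upgrade to a newer installed version
DROP EXTENSION pgtap;                      -- remove it and its objects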

Back then, a number of us, myself included, saw this as an opportunity to make the extensibility of Postgres itself a fundamental part of the community and distribution. I was a long-time user of Perl and CPAN, and I thought we should have something like CPAN for Postgres. So I proposed PGXN, the PostgreSQL Extension Network, back in 2010. The idea was to do distribution of source code; you would register namespaces for your extensions.

There was discovery via a website for search, documentation publishing, tags to help you find different kinds of objects, and support for installation through a command-line interface, using the compile-and-install machinery that Postgres itself provides via PGXS and configure.

This is what PGXN looks like today. It was launched in 2011. There’s a command-line client, this website, an API, and a registry you can upload your extensions to. The most recent release was pg_task, a day or so ago.

In the interim, since that came out in 2011/2012, the cloud providers have come into their own with Postgres, but their support for extensions tends to be rather limited. For non-core extension counts, as of yesterday, Azure provides 38 extensions, GCP provides 44, and AWS 51. These are the third-party extensions that don’t come with Postgres and its contrib modules. Meanwhile, PGXN has 420 extensions available to download, compile, build, and install.

A GitHub project that tracks random extensions on the internet (joelonsql/PostgreSQL-EXTENSIONs.md), which is pretty comprehensive, has almost 1,200 extensions listed. So the question is: why is the support not more broad? Why aren’t there a thousand extensions available in every one of these systems?

This has been a fairly common question that’s come up in the last couple of years. A number of new projects have tried to fill in the gaps. One is Trusted Language Extensions (TLE). They wanted to make it easier to distribute extensions without needing dynamic shared libraries, by adding additional features to the database itself.

The idea was to empower app developers to install extensions via SQL functions rather than having to access the file system of the database server itself. It can be portable, so there’s no compilation required; it hooks into the CREATE EXTENSION command transparently; it supports custom data types; and there have been plans for foreign data wrappers and background workers, though I’m not sure how that’s progressed in the past year. The pg_tle extension itself was created by AWS and Supabase.

Another recent entrant in tooling for extensions is pgrx, which is native Rust extensions in Postgres. You build dynamic shared libraries, but write them in pure Rust. The API for pgrx provides full access to Postgres features, and still provides the developer-friendly tooling that Rust developers are used to. There’s been a lot of community excitement the last couple of years around pgrx, and it remains under active development — version 0.13.0 just came out a week or so ago. It’s sponsored and run out of the PgCentral Foundation.

There have also been several new registries that have come up to try to fill the gap and make extensions available, and they have emphasized different things than PGXN. One is ease of use. For example, pgxman says it should be really easy to install a client in a single command, and then it downloads and installs a binary version of an extension.

And then there was platform neutrality: they wanted to do binary distribution and support multiple platforms, knowing which binary to install for a given platform. They also provide stats. PGXN doesn’t provide any stats, but some of these registries list stats like how many downloads an extension has had, including in the last 180 days.

And curation. Trunk is another binary extension registry, from my employer, Tembo. They categorize all the extensions on Trunk, which is at 237 now. Quite a few people have come forward to tell us that they don’t necessarily use Trunk to install extensions, but to find them, because the categories are really helpful for figuring out what sorts of things are even available and might be an option to use.

So here’s the State of the Ecosystem as I see it today.

  • There have been some lost opportunities from the initial excitement around 2010. Extensions remain difficult to find and discover. Some are on PGXN, some are on GitHub, some are on Trunk, some are on GitLab, etc. There’s no one place to go to find them all.

  • They remain under-documented and difficult to understand. It takes effort for developers to write documentation for their extensions, and a lot of them aren’t able to. Some of them do write the documentation, but they might be in a format that something like PGXN doesn’t understand.

  • The maturity of extensions can be difficult to gauge. If you look at that list of 1,200 extensions on GitHub, which ones are the good ones? Which ones do people care about? That page shows the number of stars for each extension, but that’s the only metric.

  • They’re difficult to configure and install. This is something TLE really tried to solve, but the uptake on TLE has not been great so far, and it doesn’t support all the use cases. There are a lot of use cases that need to be able to access the internal APIs of Postgres itself, which means compiling stuff into shared libraries, and writing them in C or Rust or a couple of other compiled languages.

    That makes them difficult to configure. You have to ask questions like: Which build system do I use? Do I install the tooling? How do I install it and configure it? What dependencies does it have? Et cetera.

  • There’s no comprehensive binary packaging. The Postgres community’s own packaging systems for Linux — Apt and YUM — do a remarkably good job of packaging extensions. They probably have more extensions packaged for those platforms than any of the others. If they have the extension you need and you’re using the PGDG repositories, then this stuff is there. But even those are still a fraction of all the potentially available extensions that are out there.

  • Dependency management can be pretty painful. It’s difficult to know what you need to install. I was messing around yesterday with the PgSQL HTTP extension, which is a great extension that depends on libcurl. I thought maybe I could build a package that includes libcurl as part of it. But then I realized that libcurl depends on other packages, other dynamic libraries. So I’d have to figure out what all those are to get them all together.

    A lot of that goes away if you use a system like apt or yum. But if you don’t, or you just want to install stuff on your Mac or Windows, it’s much more difficult.
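    As an aside, a quick way to see that dependency chain on Linux (the module file name here is illustrative) is to list what the compiled extension actually links against:

    ldd http.so    # shows libcurl plus libcurl's own shared-library dependencies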

  • Centralized source distribution, we’ve found, is insufficient. Even if all the extensions were available on PGXN, not everybody has the wherewithal or the expertise to find what they need, download it, compile it, and build it. Moreover, you don’t want a compiler on your production system, so you don’t want to be building stuff from source there. So then you have to get into the business of building your own packages, which is a whole thing.

But in this state of the extension ecosystem we see new opportunities too. One I’ve been working on for the past year, which we call “PGXN v2”, is made possible by my employer, Tembo. The idea was to consider the emerging patterns — new registries and new ways of building, releasing, and developing extensions — to figure out the deficiencies, to engage deeply with the community to work up potential solutions, and to design and implement a new architecture. The idea is to serve the community for the next decade and really make PGXN and its infrastructure the source of record for extensions for Postgres.

In the past year, I did a bunch of design work on it. Here’s a high level architectural view. We’d have a root registry, which is still the source code distribution stuff. There’s a web UX over it that would evolve from the current website. And there’s a command line client that knows how to build extensions from the registry.

But in addition to those three parts, which we have today, we would evolve a couple of additional parts.

  1. One is “interactions”, so that when somebody releases a new extension on PGXN, some notifications could go out through webhooks or some sort of queue so that downstream systems like the packaging systems could know something new has come out and maybe automate building and updating their packages.

  2. There could be “stats and reports”, so we can provide data like how many downloads there are, what binary registries make them available, what kinds of reviews and quality metrics rate them. We can develop these stats and display those on the website.

  3. And, ideally, a “packaging registry” for PGXN to provide binary packages for all the major platforms of all the extensions we can, to simplify the installation of extensions for anybody who needs to use them: for extensions that aren’t available through PGDG, or if you’re not using that system and you want to install extensions. Late last year, I was focused on figuring out how to build the packaging system.

Another change that went down in the past year was the Extension Ecosystem Summit itself. This took place at PGConf.Dev last May. The idea was for a community of people to come together to collaborate, examine ongoing work in extension distribution, examine challenges, identify questions, propose solutions, and agree on directions for execution. Let’s take a look at the topics we covered last year at the summit.

  • One was extension metadata, where the topics covered included packaging and discoverability, extension development, and compatibility and taxonomies as being important for representing metadata about extensions — as well as versioning standards. One of the outcomes was an RFC for version two of the PGXN metadata that incorporates a lot of those needs into a new format to describe extensions more broadly.

  • Another topic was the binary distribution format and what it should look like if we were to have a major distribution format. We talked about being able to support multiple versions of an extension at one time. There was some talk about the Python Wheel format as a potential precedent for binary distribution of code.

    There’s also an idea to distribute extensions through Docker containers, that is, the Open Container Initiative (OCI) format. Versioning came up here as well. One of the outcomes from this session was another PGXN RFC, for binary distribution, which was inspired by Python Wheel among other things.

    I wanted to give a brief demo built on that format. I hacked some changes into the PGXS Makefile to add a new target, trunk, that builds a binary package called a “trunk” and uploads it to an OCI registry for distribution. Here’s what it looks like (see the sketch after this list).

    • On my Mac I was compiling my semver extension. Then I go into a Linux container and compile it again for Linux using the make trunk command. The result is two .trunk files, one for Postgres 16 on Darwin and one for Postgres 16 on Linux.

    • There are also some JSON files that are annotations specifically for OCI. We have a command where we can push these images to an OCI registry.

    • Then we can use an install command that knows to download and install the version of the build appropriate for this platform (macOS). And then I go into Linux and do the same thing. It also knows, because of the OCI standard, what the platform is, and so it installs the appropriate binary.
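    A rough shell sketch of that flow; the make trunk target comes from the demo, but the comments stand in for the push and install commands, which weren’t named in the talk:

    # on macOS: build a .trunk binary package for Postgres 16 on Darwin
    make trunk

    # in a Linux container: build the Linux counterpart
    make trunk

    # push the .trunk files and their OCI annotation JSON to an OCI registry;
    # an install command then downloads the build matching the local platform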

  • Another topic was ABI and API compatibility. There was some talk at the Summit about what is the definition of an ABI and an API and how do we define internal APIs and their use? Maybe there’s some way to categorize APIs in Postgres core for red, green, or in-between, something like that. There was desire to have more hooks available into different parts of the system.

    One of the outcomes of this session was that I worked with Peter Eisentraut on some stability guidance for the API and ABI that is now committed in the docs. You can read it now in the developer docs; it’ll be part of the Postgres 18 release. The idea is that minor version releases should be safe to use with other minor versions: if you compiled your extension against one minor version, it should be perfectly compatible with other minor versions of the same major release.

    Interestingly, there was a release earlier this year, like two weeks after Peter committed this, where there was an ABI break, the first time in something like ten years. Robert Treat and I spent quite a bit of time trying to find a previous time that happened; I think there was one about ten years ago. But this one happened, and notably it broke the Timescale database. The Core Team decided to release a fix just a week later to restore ABI compatibility.

    So it’s clear that even though there’s guidance, and you should in general be able to rely on it (it was a motivating factor for a new release to fix an ABI break), there are no guarantees.

    Another thing that might happen is that I proposed a Google Summer of Code project to build an ABI checker service. Peter [embarrassing forgetfulness and misattributed national identity omitted] Geoghegan POC’d an ABI checker in 2023. The project is to take Peter’s POC and build something that could potentially run on every commit or push to the back branches of the project. Maybe it could be integrated into the build farm so that, if there’s a back-patch to an earlier branch and it turns red, they quickly learn the ABI was broken. This could potentially provide a higher level of assurance about the stability of the ABIs and APIs — even if they don’t end up using the word “guarantee”. I’m hoping this happens; a number of people have asked about it, and at least one person has written an application.

  • Another topic at the summit last year was including or excluding extensions in core. They talked about when to add something to core, when to remove something from core, whether items in contrib should actually be moved into core itself, and whether to move metadata about extensions into the catalog. And once again, support for multiple versions came up; this is a perennial challenge! But I’m not aware of much work on these questions. I’m wondering if it’s time for a revisit.

  • As a bonus item — this wasn’t a formal topic at the summit last year, but it came up many times in the mini-summits — there’s the challenge of packaging and lookup. There’s only one path to extensions in SHAREDIR, which creates a number of difficulties. Christoph Berg has a patch for PGDG and Debian that adds a second directory. This allowed the PGDG stuff to run tests against extensions without changing the core installation of the Postgres service itself. Another difficulty is CloudNativePG immutability: if that directory is part of the image for your CloudNativePG cluster, you can’t install extensions into it.

    It’s a similar issue for Postgres.app immutability. Postgres.app is a Mac app, and it’s signed by a certificate provided by Apple. That means that if you install an extension in its SHAREDIR, it changes the signature of the application and it won’t start. They work around this issue through a number of symlink shenanigans, but these issues could be solved by allowing extensions to be installed in multiple locations.

    Starting with Christoph’s search path patch and a number of discussions we had at PGConf last year, Peter Eisentraut has been working on a search path patch in core that works similarly to shared preload libraries, but for finding extension control files. This would allow you to have them in multiple directories, and it will find them in the path.

    Another interesting development along this line: the CloudNativePG project has been using that extension search path patch to prototype a new feature coming to Kubernetes that allows one to mount a volume that’s actually another Docker image. If you have your extension distributed as an OCI image, you can specify that it be mounted and installed via your CNPG cluster configuration. That means when CNPG spins up, it puts the extension in the right place and updates the search path variables, and stuff just works.

    A lot of the thought about this stuff went into a less formal RFC I wrote up on my blog, rather than on PGXN. The idea is to take these improvements and try to more formally specify the organization of extensions separately from how Postgres organizes shared libraries and shared files.

As I said, we’re bringing the Extension Summit back! There will be another Extension Summit, hosted by our team of organizers: myself, Floor, Keith Fiske from Crunchy Data, and Yurii from Omnigres. It will be on the morning of May 13th at PGConf.dev; we appreciate their support.

The idea of these Mini Summits is to bring up a number of topics of interest, have somebody come and do a 20- or 40-minute talk about one, and then have a discussion about the implications.

Floor mentioned the schedule, but briefly:

So, what are your interests in extensions and how they can be improved? There are a lot of potential topics to talk about at the Summit or at these Mini Summits: development tools, canonical registries, ease of publishing, continuous delivery, security scanning, yada, yada, yada: all sorts of stuff that could go into conceiving, designing, developing, and distributing extensions for Postgres.

I hope you all will participate. I appreciate you taking the time to listen to me for half an hour. Now I’d like to turn it over to discussion, if people would like to join in and talk about the implications of this stuff. We can also get to any questions here.

Questions, comments, shout-outs

Floor: David, at one point you talked about metadata taxonomy. If you can elaborate on that a little bit; that’s Peter’s question.

David: So one that people told me they found useful was the taxonomy provided by Trunk. It has a limited number of categories, so if you’re interested in machine learning, you can go to the machine learning category and it shows you what extensions are potentially available. They have 237 extensions on Trunk now.

PGXN itself allows arbitrary tagging of stuff. It builds this little tag cloud. But if I look at this one here, you can see it has a bunch of tags. These are arbitrary tags applied by the author. The current metadata looks like this: it’s just plain JSON, and it has a list of tags. The PGXN Meta v2 RFC has a bunch of examples. It’s an evolution of that META.json; the idea is to have a classifications object that includes tags as before, but also adds categories, a limited list that would be controlled by the core [he means “root”] registry:

{
  "classifications": {
    "tags": [
      "testing",
      "pair",
      "parameter"
    ],
    "categories": [
      "Machine Learning"
    ]
  }
}

Announcements

Yurii made a number of announcements, summarizing:

  • There is a new library that they’ve been developing at Omnigres that allows you to develop Postgres extensions in C++. For people who are interested in developing extensions in C++ and gaining the benefits of that and not having to do all the tedious things that we have to do with C extensions: look for Cppgres. Yurii thinks that within a couple of months it will reach parity with pgrx.

David: So it sounds like it would work more closely to the way PGXS and C work, whereas pgrx has all these additional Rust crates you have to load, slow compile times, and all these dependencies.

Yurii: This is just a layer over the C stuff, an evolution of that. It’s essentially a header-only library, which is a very common thing in the C++ world, so you don’t have to build anything; you just include a file. In fact, the way I use it, I amalgamate all the header files that we have into one, so whenever I include it in a project, I just copy the amalgamation and it’s just one file, with no other build chain associated. It is C++20, which some people consider new, but by the time it’s mature it’s already five years old and most compilers support it. They have decent support for C++20 with a few exclusions, but those are relatively minor. For that reason it’s not C++23, for example, because that’s not very well supported across compilers yet, but C++20 is.

  • Yurii is giving a talk about PostgresPM at the Postgres Conference in Orlando. He’ll share the slides and recording with this group. The idea behind PostgresPM is that it uses a lot of heuristics: it takes the URLs of packages and extensions and creates packages for different outputs, like Red Hat, Debian, and perhaps other formats in the future. It focuses on the idea that a lot of things can be figured out automatically.

    For example: do we have a new version? Well, we can look at list of tags in the Git repo. Very commonly that works for say 80 percent of extensions. Do we need a C compiler? We can see whether we have C files. We can figure out a lot of stuff without packagers having to specify that manually every time they have a new extension. And they don’t have to repackage every time there is a new release, because we can detect new releases and try to build.

  • Yurii is also running an event that, while not affiliated with PGConf.dev, is strategically scheduled to happen one day before it: Postgres Extensions Day. The Call for Speakers is open until April 1st. There’s also an option for people who cannot or would not come to Montréal this year to submit a prerecorded talk. The point of the event is not just to bring people together, but also to surface content that can be interesting to other people. The event itself is free.

Make sure to join our Meetup group and join us live, March 26, when Peter Eisentraut joins us to talk about implementing an extension search path.

Extension Ecosystem Summit 2025

Logo for PGConf.dev

I’m happy to announce that some PostgreSQL colleagues and I have once again organized the Extension Ecosystem Summit at PGConf.dev in Montréal on May 13. Floor Drees, Yurii Rashkovskii, Keith Fiske, and I will be on hand to kick off this unconference session:

Participants will collaborate to learn about and explore the ongoing work on PostgreSQL development and distribution, examine challenges, identify questions, propose solutions, and agree on directions for execution.

Going to PGConf.dev? Select it as an “Additional Option” when you register, or update your registration if you’ve already registered. Hope to see you there!


Photo of the summit of Mount Hood

Extension Ecosystem Mini-Summit 2.0

We are also once again hosting a series of virtual gatherings in the lead-up to the Summit, the Postgres Extension Ecosystem Mini-Summit.

Join us for an hour or so every other Wednesday starting March 12 to hear contributors to a variety of community and commercial extension initiatives outline the problems they want to solve, their attempts to do so, challenges discovered along the way, and dreams for an ideal extension ecosystem in the future. Tentative speaker lineup (will post updates as the schedule fills in):

Join the meetup for details. These sessions will be recorded and posted to the PGConf.dev YouTube channel, and we’ll again have detailed transcripts. Many thanks to my co-organizers Floor Drees and Yurii Rashkovskii, as well as the PGConf.dev organizers, for making this all happen!

Update 2025-04-14: Added the April 23 session topic and panelists.

PGConf & Extension Ecosystem Summit EU 2024

The PGConf 2024 logo

Last week I MCed the first Extension Ecosystem Summit EU and attended my first PGConf EU, in Athens, Greece. Despite my former career as an archaeologist — with a focus on Mediterranean cultures, no less! — this was my first visit to Greece. My favorite moment was the evening after the Summit, when I cut out of a networking shindig to walk to Pláka and then circumnavigate the Acropolis. I mean, just look at this place!

Nighttime photo of the Acropolis of Athens

The Acropolis of Athens on the evening of October 22, 2024. © 2024 David E. Wheeler

Highlight of the trip for sure. But the Summit and conference were terrific, as well.

Extension Ecosystem Summit

Floor Drees kindly organized The Extension Ecosystem Summit EU, the follow-up to the PGConf.dev original. While the Vancouver Summit focused on developers, we tailored this iteration to users. I started the gathering with a condensed version of my POSETTE talk, “State of the Postgres Extension Ecosystem”, but updated with a Trunk OCI Distribution demo. Links:

We then moved into a lightning round of 10-minute introductions to a variety of extensions:

Quite the whirlwind! There followed open discussion, in which each maintainer went to a corner to talk to attendees about contributing to their extensions. Details to come in a more thorough writeup on the Tembo blog, but I personally enjoyed some fascinating discussions about extension distribution challenges.

PGConf.eu

Following the Summit, I attended several thought-provoking and provocative presentations at PGConf.eu, which took place at the same hotel, conveniently enough.

Floor Drees speaking at a podium, next to a slide reading “Why Postgres?”

Floor Drees speaking at PGConf.eu 2024. © 2024 David E. Wheeler

There were many more talks, but clearly I tend to be drawn to the most technical, core-oriented topics. And also archaeology.

Museums

Speaking of which, I made time to visit two museums while in Athens. First up was the National Archaeological Museum of Athens, where I was delighted to explore the biggest collection of Mycenaean artifacts I’ve ever seen, including massive collections from the excavations of Heinrich Schliemann. So much great Bronze Age stuff here. I mean, just look at this absolute unit:

Photo of a Mycenaean Krater featuring a horse-drawn chariot

From the museum description: “Fragment of a krater depicting a chariot with two occupants. A male figure holding a staff walks in front of the chariot. Much of the Mycenaean Pictorial Style pottery (14th-12th centuries BC) with representations of humans, chariots, horses and bulls on large kraters, was produced at Berbati in the Argolid and exported to Cyprus, where it was widely imitated. Birds, fish, wild goats or imaginary creatures (i.e. sphinxes) occur on other types of vessels, such as jugs and stirrup jars. Usually only fragments of these vases survive in mainland Greece from settlement contexts. In Cyprus, however, complete vases are preserved, placed as grave gifts in tombs.” © Photo 2024 David E. Wheeler

The animal decorations on Mycenaean and Akrotiri pottery are simply delightful. I also enjoyed the Hellenistic stuff, and seeing the famed Antikythera Mechanism filled my nerd heart with joy. A good 3 hours poking around; I’ll have to go back and spend a few days there sometime. Thanks to my pal Evan Stanton for gamely wandering around this fantastic museum with me.

Immediately after the PGConf.eu closing session, I dashed off to the Acropolis Museum, which stays open till 10 on Fridays. Built in 2009, this modern concrete-and-glass building exhibits several millennia of artifacts and sculpture exclusively excavated from the Acropolis or preserved from its building façades. No photography allowed, alas, but I snapped this photo looking out on the Acropolis from the top floor.

Photo of the Acropolis as viewed from inside the Acropolis Museum.

The Acropolis as viewed from inside the Acropolis Museum. Friezes preserved from the Parthenon inside the museum reflect in the glass, as does, yes, your humble photographer. © 2024 David E. Wheeler

I was struck by the beauty and effectiveness of the displays. It easily puts the lie to the assertion that the Elgin Marbles must remain in the British Museum to protect them. I saw quite a few references to the stolen sculptures, particularly empty spots and artfully sloppy casts from the originals, but the building itself makes the strongest case that the marbles should be returned.

But even without them there remains a ton of beautiful sculpture to see. Highly recommended!

Back to Work

Now that my sojourn in Athens has ended, I’m afraid I must return to work. I mean, the event was work, too; I talked to a slew of people about a number of projects in flight. More on those soon.

⛰️ Postgres Ecosystem Summit EU

Given the success of the Extension Ecosystem Summit at PGConf.dev back in May, my colleague Floor Drees has organized a sequel, the Extension Ecosystem Summit EU on Tuesday, October 22, at the Divani Caravel Hotel in Athens. That’s “Day 0” at the same hotel as PGConf.eu. Tembo, Percona, Xata, and Timescale co-sponsor.

While the May event took the form of an open-space technology (OST)-style unconference aimed at extension developers, the EU event aims to inform an audience of Postgres users about the history and some exemplary use cases for extensions. From the invite:

Join us for a gathering to explore the current state and future of Postgres extension development, packaging, and distribution. Bring your skills and your devices and start contributing to tooling underpinning many large Postgres installations.

  • Jimmy Angelakos - pg_statviz: pg_statviz is a minimalist extension and utility pair for time series analysis and visualization of PostgreSQL internal statistics.
  • Adam Hendel (Tembo) - pgmq: pgmq is a lightweight message queue. Like AWS SQS and RSMQ, but on Postgres. Adam has been pgmq’s maintainer since 2023, and will present a journey from pure Rust → pgrx → pl/pgsql.
  • Alastair Turner (Percona) - pg_tde: pg_tde offers transparent encryption of table contents at rest, through a Table Access Method extension. Percona has developed pg_tde to deliver the benefits of encryption at rest without requiring intrusive changes to the Postgres core.
  • Gülçin Yıldırım Jelínek (Xata) - pgzx: pgzx is a library for developing PostgreSQL extensions written in Zig.
  • Mats Kindahl (Timescale) - TimescaleDB (C), pgvectorscale (Rust) and pgai (Python): maintaining extensions written in different languages.

I will also deliver the opening remarks, including a brief history of Postgres extensibility. Please join us if you’re in the area or planning to attend PGConf.eu. See you there!

🏔 Extension Ecosystem Summit 2024

Logo for PGConf.dev

The PostgreSQL Extension Ecosystem Summit took place at PGConf.dev in Vancouver on May 28, 2024 and it was great! Around 35 extension developers, users, and fans gathered for an open-space technology (OST)-style unconference. I opened with a brief presentation (slides) to introduce the Summit Theme:

  • Extension issues, designs and features
  • Development, packaging, installation, discovery, docs, etc.
  • Simplify finding, understanding, and installing
  • Towards ideal ecosystem of the future
  • For authors, packagers, DBAs, and users
  • Lots of problems, challenges, decisions
  • Which do you care about?
  • Collaborate, discover, discuss, document
  • Find answers, make decisions, set directions
  • Inform the PGXN v2 project

Before the Summit my co-organizers and I had put up large sticky notes with potential topics, and after reviewing the four principles and one law of OST, we collectively looked them over and various people offered to lead discussions. Others volunteered to take notes and later published them on the community wiki. Here’s our report.

Extension Metadata

Samay Sharma of Tembo took point on this discussion, while David Wagoner of EDB took notes. The wide-ranging discussion among the five participants covered taxonomies, versioning, system dependencies, packaging & discoverability, development & compatibility, and more.

The discoverability topic particularly engaged the participants, as they brainstormed features such as user comments & ratings, usage insights, and test reporting. They settled on the idea of two types of metadata: developer-provided metadata such as external dependencies (software packages, other extensions the extension depends on etc.) and user metadata such as ratings. I’m gratified how closely this hews to the metadata sketch’s proposed packaging (author) and registry (third party) metadata.

Binary Distribution Format

I led this session, while Andreas “ads” Scherbaum took notes. I proposed to my four colleagues an idea I’d been mulling for a couple months for an extension binary distribution format inspired by Python wheel. It simply includes pre-compiled files in subdirectories named for each pg_config directory config. The other half of the idea, inspired by an Álvaro Hernández blog post, is to distribute these packages via OCI — in other words, just like Docker images. The participants agreed it was an interesting idea to investigate.
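To make the idea concrete, here’s a hedged sketch of how an installer might consume such a package, assuming a layout in which each top-level subdirectory is named for a pg_config directory setting (sharedir, pkglibdir, docdir, and so on). The layout and function names are my own illustration, not a settled format:

```python
# Hypothetical installer for a wheel-style binary extension package whose
# subdirectories are named for pg_config directory settings.
import shutil
import subprocess
from pathlib import Path

def pg_config_dir(name: str) -> Path:
    # pg_config prints one directory per flag, e.g. `pg_config --sharedir`.
    out = subprocess.run(
        ["pg_config", f"--{name}"], check=True, capture_output=True, text=True
    ).stdout.strip()
    return Path(out)

def install(package_root: Path) -> None:
    for subdir in (d for d in package_root.iterdir() if d.is_dir()):
        # e.g. a "sharedir" subdirectory maps to the local sharedir.
        target = pg_config_dir(subdir.name)
        for f in (p for p in subdir.rglob("*") if p.is_file()):
            dest = target / f.relative_to(subdir)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)
```

Because the package names only pg_config keys, the same artifact could in principle be dropped onto any host where pg_config resolves to the right cluster.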

We spent much of the rest of the time reviewing and trying to understand the inherent difficulty of upgrading binary extensions: there’s a period between when an extension package is upgraded (from Yum, Apt, etc.) and when ALTER EXTENSION UPDATE updates it in the database. If the new binary doesn’t work with old versions, it will break (and potentially crash Postgres!) until the update runs. This can be difficult in, say, a data analytics environment where the extension is used in multiple databases and functions, and users may not have the bandwidth to ALTER EXTENSION UPDATE any code that depends on the extension.

This issue is best solved by defensive coding of the C library to keep it working for new and old versions of an extension, but this complicates maintenance.

Other topics included the lack of support for multiple versions of extensions at one time (which could solve the upgrade problem), and determining the upgrade/downgrade order of versions, because the Postgres core enforces no version standard.

ABI/API discussion

Yurii Rashkovskii took point on this session while David Christensen took notes. Around 25 attendees participated. The discussion focused on issues of API and ABI compatibility in the Postgres core. Today virtually the entire code base is open for use by extension developers — anything in header files. Some recent research revealed a few potentially-incompatible changes in minor releases of Postgres, leading some to conclude that extensions must be compiled and distributed separately for every minor release. The group brainstormed improvements for this situation. Ideas included:

  • Spelunking the source to document and categorize APIs for extensions
  • Documenting color-coded safety classifications for APIs: green, yellow, or red
  • Designing and providing a better way to register and call hooks (observability, administration, isolation, etc.), rather than the simple functions Postgres offers today
  • Developing a test farm to regularly build and test extensions, especially ahead of a core release
  • And of course creating more hooks, such as custom relation type handling, per-database background workers, a generic node visitor pattern, and better dependency handling

Including/Excluding Extensions in Core

Keith Fiske led the discussion and took notes for this session, along with 10-15 or so attendees. It joined two topics: when should an extension be brought into core, and when should a contrib extension be removed from core? The central point was the adoption of new features in core that replace the functionality of, and therefore reduce the need for, some extensions.

Replacing an extension with core functionality simplifies things for users. However, the existence of an extension might prevent core from ever adding its features. Extensions can undergo faster, independent development cycles without burdening the committers with more code to maintain. This independence encourages more people to develop extensions, and potentially compels core to better support extensions overall (e.g., through better APIs/ABIs).

Contrib extensions currently serve, in part, to ensure that the extension infrastructure itself is regularly tested. Replacing them with core features would reduce the test coverage, although one participant proposed a patch to add such tests to core itself, rather than as part of contrib extensions.

The participants collaborated on a list of contrib extensions to consider merging into core:

  • amcheck
  • pageinspect
  • pg_buffercache
  • pg_freespacemap
  • pg_visibility
  • pg_walinspect
  • pgstattuple

They also suggested moving extension metadata (SQL scripts and control files) from disk to catalogs and adding support for installing and using multiple versions of an extension at one time (complicated by shared libraries), perhaps by the adoption of more explicit extension namespacing.

Potential core changes for extensions, namespaces, etc.

Yurii Rashkovskii and David Christensen teamed up on this session, as well (notes). 15-20 attendees brainstormed core changes to improve extension development and management. These included:

  • File organization/layout, such as putting all the files for an extension in a single directory and moving some files to the system catalog.
  • Provide a registry of “safe” extensions that can be installed without a superuser.
  • Adding a GUC to configure a second directory for extensions, to enable immutable Postgres images (e.g., Docker, Postgres.app). The attendees consider this a short-term fix, but still useful. (Related: I started a pgsql-hackers thread in April for a patch to do just this.)
  • The ability to support multiple versions of an extension at once, via namespacing, came up in this session, as well.
  • Participants also expressed a desire to support duplicate names through deeper namespacing. Fundamentally, the problem of namespace collision comes down to issues with un-relocatable extensions.

Until Next Time

I found it interesting how many topics cropped up multiple times in separate sessions. By my reading, the most-cited topics were:

  • The need to install and use multiple versions of an extension
  • A desire for deeper namespacing, in part to allow for multiple versions of an extension
  • A pretty strong desire for an ABI compatibility policy and clearer understanding of extension-friendly APIs

I expect to put some time into these topics; indeed, I’ve already started a Hackers thread proposing an ABI policy.

I greatly enjoyed the discussions and attention given to a variety of extension-related topics at the Summit. So much enthusiasm and intelligence in one place just makes my day!

I’m thinking maybe we should plan to do it again next year. What do you think? Join the #extensions channel on the Postgres Slack with your ideas!

Mini Summit Six

Last week, a few members of the community got together for the sixth and final Postgres Extension Ecosystem Mini-Summit. Follow these links for the video and slides:

Or suffer through my interpolation of YouTube’s auto-generated transcript, interspersed with chat activity, if you are so inclined.

Introduction

  • I opened the meeting, welcomed everyone, and introduced myself as host. I explained that today I’d give a brief presentation on the list of issues I’ve dreamed up and jotted down over the last couple of mini-summits as potential topics to take on at the Summit in Vancouver on May 28th.

Presentation

  • These are things that I’ve written down as I’ve been thinking through the whole architecture myself, but also that have come up in these Summits. I’m thinking that we could get some sense of the topics that we want to actually cover at the summit. There is room for about 45 people, and I assume we’ll break up “unconference style” into four or five working groups. People can move to corners, hallways, or outdoors to discuss specific topics.

  • Recall that in the first mini-summit I showed a list of potential topics that might come up as we think through the issues in the ecosystem. I left off with the prompt “What’s important to you?” We hope to surface the most important issues to address at the summit and create a hierarchy. To that end, I’ve created this Canva board1 following Open Space Technology2 to set things up, with the rules and an explanation for how it works.

  • I expect one of us (organizers) to give a brief introduction at the start of the summit to outline the principles of Open Space Technology, which are similar to unconferences.

  • Open Space Technology principles are:

    • Whoever comes are the right people
    • Whatever happens is the only thing that could happen
    • Whenever it starts is the right time (but we start at 2 p.m. and we have only three hours, so we’ll try to make the best of it)
    • When it’s over it’s over
    • And wherever it happens is the right place
  • There is also a “Law of Mobility”. If you start out attending a session or discussion about one topic and decide you want to do something else, you can wander over to another session. Open Space Technology calls these people “bumblebees”, who cross-pollinate between topics. “Butterflies” are the people who hover around a particular topic to make it happen.

  • And “Come to be Surprised” about what will come up.

  • I’ve split the potential topics into Post-its. We might have four or five spaces. Pick a space, pick a session; we have two hour-long sessions. I assume we’ll have 15-30 minutes to open the Summit, do intros, and split up the sessions; then have people do an hour on one topic and an hour on a second topic. At the end, we’ll do the readout in which we talk about the decisions we came to.

  • If you’re interested in facilitating any of these topics, simply drag it in and stick your name on it.

  • First I thought I’d briefly go over the list of topics as I’ve imagined them. I posted the list on Slack a couple weeks ago and added to it as things have come up in the discussions. But I want to give a high level view of what these brief descriptions mean.

  • This is ad-hoc; I don’t have anything super planned. Please feel free to jump in at any time! I think I’ve turned on “talking permitted” for everybody, or stick your hand up and we’ll be glad to figure out other stuff, especially if you’re thinking of other topics or related things, or if you think things should be merged.

  • Any questions or thoughts or comments?

  • I put the topics in broad categories. There’s some crossover, but the first one I think of is metadata. I’ve thought about metadata a fair bit, and drafted an RFC for the kinds of things to put in an updated metadata standard, like:

    • How do you specify third-party dependencies? For example, PostGIS depends on additional libraries; how can those be specified in an ideally platform neutral way within the metadata?

    • How to specify the different types of extensions there are? Stephen wrote a blog post last year about this: you have CREATE EXTENSION extensions, LOAD command extensions, background workers, applications, and more. You have things that need shared_preload_libraries and things that don’t. How do we describe those things about an extension within a distribution package?

    • Taxonomies have come up a few times. PGXN currently allows extension authors to put an arbitrary number of tags into their META.json file. Maybe in part because of the precedent of the stuff that I released early on, people mostly put stuff in there to describe it, like “fdw”, “function”, or “JSON”. Some of the newer binary distribution packaging systems, in particular Trunk, have a curated list of categories that they assign, so there might be different ways we want to classify stuff.

      Another approach is crates.io, which has a canonical list of categories (or “slugs”) that authors can assign. These are handy: they group things together in a more useful way, like “these are related to data analytics” or “these are related to vector search” — as opposed to the descriptive tags PGXN has now. So, what ought that to look like? What kind of controls should we have? And who might want to use it?

    • How would we specify system requirements? For example, “this package requires only a subset of OSes”, or the version of an OS, or the version of Postgres, or CPU features. Steven’s mentioned vector-based ones a few times, but there are also things like encryption instructions provided by most chips. Or the CPU architecture, like “this supports aarch64 but not amd64.” How should we specify that?

    • I covered categorization under taxonomies

    • Versioning. I blogged about this a couple months ago. I’m reasonably sure we should just stick to SemVer, but it’s worth bringing up.

  • Thoughts on metadata, or stuff I’ve left out? This is in addition to the stuff that’s in the META.json spec. It leaves room for overlap with core stuff. How do we create one sort of metadata for everything, that might subsume the control file as well as the metadata spec or trunk.toml?

    • Jeremy S in chat: So far this is seeming like a good recap of ground that’s been covered, questions & topics that have been raised. Great to see how broad it’s been
  • The next category is the source registry. This is thinking through how we should evolve the PGXN root registry for distributing extension source code. There are questions like identity, namespacing, and uniqueness.

    • These are broad categories, but identity is how you identify yourself to the system and claim ownership over something.

    • What sort of namespacing should we use? Most systems, including PGXN, just use an arbitrary string, and you own that string from its first release. But other registries, like Go, allow you to use domain-based namespacing for packages. This is really nice because it allows a lot more flexibility, such as the ability to switch between different versions or forks.

    • Then there’s the level of uniqueness of the namespacing. This is kind of an open question. Another approach I thought of is that, rather than the string that names your extension distribution being unique, it could be your username plus the string. That makes it easier when somebody abandons something and somebody else forks it under a new username; then maybe people can switch more easily. The idea is to account for and handle that sort of evolution in a way that single-string uniqueness makes trickier.

    • Distributed versus centralized publishing. I’ve written about this a couple times. I am quite attracted to the Go model, where packages are not centrally distributed but live in three or four supported version control systems, and as long as they use SemVer and appropriate tags, anybody can use them. The centralized index just indexes a package release the first time it’s pulled. This is where host names come into play as part of the namespacing. It allows the system to be much more distributed. Now, Go caches all of them in a number of different regions, so when you download stuff it goes through the Go proxy. When you say “give me the XYZ package,” it’ll generally give you the cached version, but will fall back on the repositories as well. So there’s still the centralized stuff. (A sketch of this cache-with-fallback resolution appears after this list.)

      I think there’s a lot to that, and it goes along with the namespacing issue. But there are other ideas at play as well. For example, almost all the other source code distribution systems just use a centralized system: crates.io, CPAN, npm, and all the rest.

      And maybe there are other questions to consider, like is there some sort of protocol we should adopt as an abstraction, such as Docker, where Docker is not a centralized repository other than hub.docker.com. Anyone can create a new Docker repository, give it a host name, and then it becomes something that anybody can pull from. It’s much more distributed. So there are a number of ideas to think through.

    • Binary packaging and distribution patterns. I have a separate slide that goes into more detail, but there are implications for source code distribution, particularly with the metadata but perhaps other things. We also might want to think through how it might vary from source distribution.

    • Federated distribution gets at the Docker idea, or the OCI idea that Alvaro proposed a few weeks ago. Stuff like that.

    • What services and tools to improve or build. This goes to the fundamental question of why we’ve had all these new packaging systems pop up in the last year or so. People were saying “there are problems that aren’t solved by PGXN.” How do we as a community collectively decide which are the important bits, and what should we build and provide? Features include developer tools, command-line clients, search & browse, and discovery.

    • Stats, reports, and badging. This is another fundamental problem that some of the emerging registries have tried to address: How do you find something? How do you know if it’s any good? How do you know who’s responsible for it? How do you know whether there’s some consensus across the community to use it? The topic, then, is what sort of additional metadata we could provide at the registry level to include some hint about these issues. For example, a system to regularly fetch stars and statistical analysis of a GitHub or a Bitbucket project. Or people wanted review sites, or the ability to comment on extensions.

      There’s also badging, in particular for build and test matrices for extensions, which would encourage people to better support broad arrays of Postgres versions and platforms. There could be badges for that, so you can see how well an extension supports various platforms. And any other sort of badging, like quality badging. The idea is to brainstorm what sorts of things might be useful there, and what might be best to build first: the low-hanging fruit.

  • Any questions, comments, thoughts, or additional suggestions on the root registry?
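To make the cache-with-fallback resolution mentioned above concrete, here’s a minimal sketch loosely modeled on the Go module proxy: ask the central cache first, and fall back to the upstream repository when it misses. The proxy host and URL shapes are hypothetical stand-ins:

```python
# Hedged sketch of proxy-first package resolution (all URLs hypothetical).
import urllib.request

def fetch(url: str) -> bytes | None:
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read()
    except OSError:  # covers URLError, HTTPError, connection failures
        return None

def resolve(pkg: str, version: str) -> bytes:
    # Try a community-run cache first...
    cached = fetch(f"https://proxy.example.org/{pkg}/@v/{version}.zip")
    if cached is not None:
        return cached
    # ...then fall back to the upstream host named in the package path.
    upstream = fetch(f"https://{pkg}/archive/refs/tags/{version}.zip")
    if upstream is None:
        raise LookupError(f"{pkg} {version} not found")
    return upstream
```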

Interlude

  • Steven Miller: So the idea is there are topics on the left and then they get lined up into the schedule? So there are five different rooms, so horizontally aligned items are at the same time?

  • David Wheeler (he/him): Correct. These are session one and these are session two.

  • Jeremy S: I was kind of waiting to jump to that. It seemed like you were just doing a review of all the topics we’ve covered, but I was waiting till you got through everything to bring that up.

  • Steven Miller: Oh yeah, good call, good call.

  • Jeremy S: I have the same kind of question/concern. This is a great list of topics, now what do we want to do with the time in Vancouver? David, do you think we need to go through everything on the list? How do you want to spend the time today?

  • David Wheeler (he/him): I was trying to do a quick review just so people knew what these words mean. If you all feel like you have a good idea, or you want to add topics of your own, please do!

  • Jeremy S: Like I commented in the chat, it’s amazing to see how much ground we’ve covered, and it’s good to have a quick recap. It’s 9:22 right now Pacific time — 22 after the hour wherever you are — I just want to make sure we don’t run out of time going through everything.

  • David Wheeler (he/him): I agree, I’ll make it work. I can speed up a little. I know I can be verbose about some of this stuff.

  • David G. Johnston: Unless the ones from India, in which case they have a half-hour time zone.

  • David Wheeler (he/him): I was gonna say! [Laughs]

Presentation Continues

  • Binary packaging. This is the problem that PGXMan and trunk have tried to solve with varying degrees of success. I think it’d be worthwhile for us to think through as a community what, ideally, should a community-provided binary packaging system look like?

    • And what’s the format? Do we want to do tarballs, or OCI like Alvaro proposed? Do we want something like RPM or Apt or Python wheels? That’s actually something I’m super interested to get into. There was a question that came up two weeks ago in Yurii’s presentation; I think Daniele suggested that the Python wheel package format allows you to put dynamic libs into the wheel. That’s pretty interesting and worth looking into as well.

    • How do we go about building a community-based binary packaging registry? How do we do the build farming, what platforms and architectures and OSes would it support, and what sort of security, trust, and verification? And the centralization: who runs it, who’s responsible for it, how should it work at a high level?

    • Philippe Noël in chat: Phil from ParadeDB here (pg_search, pg_analytics, pg_lakehouse) — First minisummit I can attend, glad to be here

  • Thanks for coming, Philippe! Again, interrupt me anytime.

  • The next topic is developer tooling. Developer tooling today is kind of all over the place. There’s a PGXN client, there’s the PGXN utils client (which doesn’t compile anymore, as far as I can tell), there’s pgrx stuff, and maybe a few other things. What sorts of tools would be useful for developers who actually develop and build extensions?

    • CLIs and APIs can do metadata management, or scaffolding your source code and adding new features through some sort of templating system.

    • The packaging and publishing system, based on how we ultimately elect to distribute source code and how we ultimately elect to distribute binary code. How does that get packaged up, with the namespacing and all the decisions we made, to be as easy as possible for the developer?

    • What build pipelines do we support? Today PGXS and pgrx are perhaps the most common, but I’ve seen GNU autoconf configure stuff and stuff that uses Rust or Go or Python-based builds. Do we want to support those? And how do we integrate them with our binary packaging format and where Postgres expects to put stuff?

      I think this is an important topic. One of the things I’ve been dealing with as I’ve talked to the people behind Apache Age and a couple other projects is how they put stuff in /usr/local by default. I suggest that it’d be better if it went where pg_config wants to put it. How do we want to go about integrating those things?

    • Tooling for CI/CD workflows to make it as easy as possible to test across a variety of platforms, Postgres versions, and those pipelines.

  • Kind of a broad community topic here. This gets to where things are hosted. There’s a Postgres identity service that does OAuth 2; is that something we want to plug into? Is there a desire for the community to provide the infrastructure for the systems, or at least the core systems of PGXN v2? Who would support it? The people who work on the development would ideally also handle the devops, but should work closely with whoever provides the infrastructure to make sure it’s all copacetic. And there should be a plan for when something happens. People exit the community for whatever reason; how will systems continue to be maintained? I don’t think there’s a plan today for PGXN.

  • Another topic is documentation. How do we help engineers figure out how to build extensions; tutorials and references for all the things and all the various details. Do we end up writing a book, or do we just have very specifically-focused tutorials like, “So you want to build a foreign data wrapper; here’s a guide for that.” Or you just need to write a background worker, here’s an example repository to clone. What should those things look like?

    • CREATE EXTENSION
    • Hooks
    • Background workers
    • CLI apps/services
    • Web apps
    • Native apps

    This also kind of covers the variety of different kinds of extensions we might want to package and distribute.

  • Then there’s all the stuff that I filed under “core,” because I think it impacts the core Postgres project and how it may need to evolve, or how we might want it to evolve over time. One is the second extension directory; there’s a patch pending now, but it’ll probably be deferred until Postgres 17 ships; it’s on hold while we’re in the freeze. This is a patch that Christoph Berg wrote for the Debian distribution; it allows you to have a second destination directory for your extensions where Postgres knows to find stuff, including shared object libraries. This would make it easier for projects like Postgres.app and for immutable Docker containers to mount a new directory and have all the stuff be there.

  • I would love to see some sort of more coherent idea of what an extension package looks like, where, if I install pgTAP, all of its files are in a single subdirectory that Postgres can access. Right now it’s in the pg_config locations: the sharedir and the libdir and the docdir — it’s kind of spread all over.

  • Should there be a documentation standard, in the way you have JavaDoc and rustdoc and Godoc, where docs are integrated into the source code, so it’s easy to use, and there’s tooling to build effective documentation. Today, people mostly just write short READMEs and leave it at that, which is not really sufficient for a lot of projects.

  • There’s the longstanding idea of inline extensions that Dimitri proposed as far back as 2013, something they called “units”. Oracle calls them “packages” or “modules”. Trusted Language Extensions make a stab at the problem, trying to achieve something like inline extensions with the tooling we have today. How should that evolve? What sorts of ideas do we want to adapt to make it so that you don’t have to have physical access to the file system to manage your extensions? Where you could do it all over SQL or libpq.

  • How can we minimize restarts? A lot of extensions require loading DSOs in the shared_preload_libraries config, which requires a cluster restart. How can we minimize that need? There are ways to minimize restarts; it’s just a broad category I threw in.

  • What namespacing is there? I touched on this topic when I wrote about Go namespacing a while ago. My current assumption is, if we decided to support user/extension_string or hostname/user/extension_string namespacing for package and source distribution, Postgres itself still has to stick to a single string. How would we like to see that evolve in the future?

  • What kind of sandboxing, code signing, security, and trust could be built into the system? Part of the reason they’ve resisted having a second extension directory up to now is to have one place where everything is, where the DBA knows where things are, and it’s a lot easier to manage. But it’s also because otherwise people will put junk in there. Are there ideas we can borrow from other projects or technologies where anything in some directory is sandboxed? And how is it sandboxed? Is it just for a single database or a single user? Do we have some sort of code signing we can build into the system so that Postgres verifies an extension when you install it? What other kinds of security and trust could we implement?

    This is a high level, future-looking topic that occurred to me, but it comes up especially when I talk to the big cloud vendors.

  • An idea I had is dynamic module loading. It came up during Jonathan’s talk, where there was a question about how one could use Rust crates in PL/Rust, a trusted language. Well, a DBA has to approve a pre-installed list of crates on the file system where PL/Rust can load them. But what if there was a hook where, in PL/Perl for example, you use Thing and a hook in the Perl use command knows to look in a table that the DBA manages and can load it from there? Just a funky idea I had: a way to get away from the file system and more easily let people, through permissions, load modules in a safe way. (A minimal sketch of the idea appears after this list.)

  • A topic that came up during Yurii’s talk was binary compatibility of minor releases, or some sort of ABI stability. I’d be curious to bring up with Hackers the idea of formalizing something there. Although it has seemed mostly pretty stable over time to me, that doesn’t mean it’s been fully stable. I’d be curious to hear about the exceptions.

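For a sense of the dynamic module loading idea above, here’s a minimal Python analogy (the names and the allow-list are hypothetical): an import hook that consults an approved list, standing in for a catalog table the DBA manages, before any module may load:

```python
# Hypothetical sketch: block imports that aren't on a DBA-style allow-list.
import importlib.abc
import sys

APPROVED = {"math", "json"}  # stand-in for a table the DBA maintains

class AllowListFinder(importlib.abc.MetaPathFinder):
    def find_spec(self, fullname, path, target=None):
        if fullname.split(".")[0] not in APPROVED:
            raise ImportError(f"{fullname!r} is not on the approved list")
        return None  # approved: defer to the regular finders

sys.meta_path.insert(0, AllowListFinder())

import math      # fine: approved
# import socket  # would raise ImportError
```

The hook in the Perl use command that I imagined would work the same way, except the allow-list would live in a table governed by normal database permissions.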
So! That’s my quick review. I did the remainder of them in 11 minutes!

Discussion

  • Jeremy S: Well done.

  • David Wheeler (he/him): What I’d like to do is send an email to all the people who are registered to come to The Summit in two weeks, as well as all of you, to be able to access this board and put stars or icons or something — stickers which you can access —

  • Jeremy S: I do feel like there’s something missing from the board. I don’t know that it’s something we would have wanted to put on sooner, but I kind of feel like one of the next steps is just getting down into the trenches and looking at actual extensions, and seeing how a lot of these topics are going to apply once we start looking at the list. I was looking around a bit.

    It’s funny; I see a mailing list thread from a year or two ago where, right after Joel made his big list of 1,000 extensions, he jumped on the hackers list and said, “hey, could we stick this somewhere like on the wiki?” And it looks like nobody quite got around to doing anything like that. But that’s where I was thinking about poking around, maybe starting to work on something like that.

    But I think once we start to look at some of the actual extensions, it’ll help us with a lot of these topics, to figure out what we’re talking about. Like when you’re trying to figure out dependencies: once you identify some of the actual extensions where this is a problem and other ones where it’s not, it might help us be a lot more specific about the problem that we’re trying to solve. Or whether it’s versioning, or which platform something is going to build on, all that kind of stuff. That’s where I was thinking there’s a missing topic, or maybe a next step. Or, as you were saying, how many extensions even build today? If you go through the extensions on PGXN right now, how many of them even work at all? So, starting to work down that list.

  • David Wheeler (he/him): So, two thoughts on that. One is: create a sticky with the topic you want and stick it in a place that’s appropriate, or create another category if you think that’s relevant.

  • Jeremy S: It’s kind of weird, because what I envision is what I want to do on the wiki — I’ll see if I can start this off today; I have rights to make a Postgres Wiki page — is a list of extensions, like a table, where down the left are the extensions and across the top is where each extension is distributed today. So, just extensions that are already distributed in multiple places. I’m not talking about the stuff that’s in core, because it’s a given that that’s everywhere. But something like pg_cron or PGAudit: anybody who packages extensions probably has them. That gives some sense of the extensions that everybody already packages. Those are obviously really important extensions, because everybody is including them.

    And then the next thing I wanted to do was the same thing, with the list of those extensions on the left but a column for each of the categories you have here. For, say, PGAudit, for the stuff across the top — metadata, registry, packaging, developer stuff — are there packaging concerns? For PGAudit, go down the list of registry topics like identity: where is the source for PGAudit? Is the definitive upstream GitLab, is it GitHub, is it git.postgresql.org? I could go right down the list of each of these topics for PGAudit, and then go down the list of all of your topics for pg_hint_plan. That’s another big one; pg_hint_plan is all over the place. Each of your topics I could take and apply to each of the top 10 extensions, and there might be different things that rise to the surface for pg_hint_plan than there are for, like, pgvector.

  • David Wheeler (he/him): That sounds like a worthwhile project to me, and it could be a useful reference for any of these topics. Also a lot of work!

  • Jeremy S: Well, another way to think about Vancouver might be, instead of splitting people up by these topics — I’m spitballing here, this might be a terrible idea — you could take a list of 20 or 30 important extensions, split people up into groups, and say, “here’s five extensions for you, now cover all these topics for your five extensions.” You might have one group that’s looking at pg_hint_plan and pgvector and PGAudit, and then a different group that has pg_cron and whatever else we come up with. That’s just another way you could slice it up.

  • David Wheeler (he/him): Yeah, I think you’re thinking about it the inverse of the way I’ve been thinking of it. I guess mine is perhaps a little more centralized and top-down, and that comes from having worked on PGXN in the past and thinking about what we’d like to build in the future. But there’s no reason it couldn’t be bottom-up from those things. I will say, when I was working on the metadata RFC, I did work through an example with an actually really fussy extension — I don’t remember which one it was — or no, I think it was the ML extension.3 I think that could be a really useful exercise.

    But the idea of the Open Technology Space is that you can create a sticky, make a pitch for it, and have people vote by putting a star or something on it. I’m hoping that, a, we can try to figure out which ones we feel are the most important, but ultimately anybody can grab one of these, say “I want to own this, I’m putting it in session one,” and put their name on it. They can be anything, for the most part.

  • Jeremy S: Sure. I think I don’t totally grok the Canva board and how that all maps out, but at the end of the day whatever you say we’re doing in Vancouver I’m behind it 100%.

  • David Wheeler (he/him): I’m trying to make it as open as possible. If there’s something you want to talk about, make a sticky.

  • Jeremy S: I’ll add a little box. I’m not sure how this maps to what you want to do with the time in Vancouver.

  • David Wheeler (he/him): Hopefully this will answer the question. First we’ll do an intro and welcome and talk about the topics, and give people time to look at them; I want to send it in advance so people can have a sense of it beforehand. I know the way they do the Postgres unconference that’s been the last day of PGCon for years: they have people come and put a sticky or star or some sort of sticker on the topics they like, and then they pick the ones that have the most, and those are the ones they line up in here [the agenda]. But the idea of the Open Technology stuff is that a person can decide on whatever topic they want, they can create their sticky, they can put it in the slot they want and whatever space they want, and —

  • Jeremy S: Ooooh, I think I get it now. Okay, I didn’t realize that’s what you were doing with the Canva board. Now I get it.

  • David Wheeler (he/him): Yeah, I was trying to more or less do an unconference thing, but because we only have three hours, to try to have a solid idea of the topics we want to address before we get there.

  • Jeremy S: I don’t know though. Are you hoping a whole bunch of people are going to come in here and, like, put it — okay, it took me five or ten minutes to even realize what you were doing, and I don’t have high hopes that we’ll get 20 people to come in and vote on the Post-it notes in the next seven days.

  • David Wheeler (he/him): Yeah, maybe we need to… These instructions here are meant to help people understand that and if that needs to be tweaked, let’s do it.

    • David G. Johnston in chat: How many people are going to in this summit in Vancouver?
    • David G. Johnston in chat: Is the output of a session just discussions or are action items desired?
  • Steven Miller: I have another question. Are people invited to present at the Summit if they’re not physically present at the Summit? And then same question for viewership

  • David Wheeler (he/him): I don’t think they are providing remote stuff at the Summit

  • Steven Miller: okay

  • David Wheeler (he/him): David, last I heard there were 42 people registered. I think we have space for 45. We can maybe get up to 50 with some standing room, and there’s a surprisingly large number of people (laughs).

    • David G. Johnston in chat: So average of 10 in each space?
  • Jeremy S: Have you gone down the list of names and started to figure out who all these people are? Cuz that’s another thing. There might be people who have very little background and just thought “this sounds like an interesting topic.” How those people would contribute and participate would be very different from someone who’s been working with extensions for a long time.

  • David Wheeler (he/him): David, yeah, and we can add more spaces or whatever if it makes sense, or people can just arbitrarily go to a corner. Because it’s an unconference they can elect to do whatever interests them. I’m just hoping to have like the top six things we think are most important to get to ahead of time.

    Jeremy, Melanie sent me the list of participants, and I recognized perhaps a quarter of the names as people who’re pretty involved in the community; the rest I don’t know at all. So I think it’s going to be all over the map.

  • Steven Miller: So would it work if somebody wanted to do a presentation, they can. They grab stickies from the left and then you could also duplicate stickies because maybe there’d be some overlap, and then you put them in a session. But there’s basically supposed to be only one name per field, and that’s who’s presenting.

  • David Wheeler (he/him): You can put however many names on it as you want. Open technology usually says there’s one person who’s facilitating and another person should take notes.

  • Steven Miller: Okay.

  • David Wheeler (he/him): But whatever works! The way I’m imagining it is, people say, “Okay, I want to talk to other people and make some decisions about, I don’t know, documentation standards.” So they go off to a corner and they talk about it for an hour, and there are some notes. And in the final half hour we’ll have readouts from those sessions, from whatever was talked about there.

  • Steven Miller: These are small working sessions really, it’s not like a conference presentation. Okay, got it.

  • David Wheeler (he/him): Yeah. I mean, somebody might come prepared with a brief presentation if they want to set the context. [Laughs] Which is what I was trying to do for the overall thing here. But the idea is these are working sessions, like “here’s the thing we want to look at,” and we want to have some recommendations, or figure out the parameters, or have a plan — maybe — at the end of it. My ideal, personally, is that at the end of this we have a good idea of the most important topics to address early on in the process of building out the ecosystem of the future, so we can start planning how to execute on those proposals and decisions. That’s how I’m thinking about it.

  • Steven Miller: Okay, yeah I see.

  • Jeremy S: This sounds a lot like the CoffeeOps meetups that I’ve been to. They have a similar process where you use physical Post-it notes and vote on topics and then everybody drops off into groups based on what they’re interested in.

  • David Wheeler (he/him): Yeah it’s probably the same thing, the Open Technology stuff.

  • Steven Miller: Maybe we should do one field so we kind of get an idea?

  • David Wheeler (he/him): Sure. Let’s say somebody comes along and there are a bunch of stickers on this one [drops stickers on the sticky labeled “Identity, namespacing, and uniqueness”]. So we know that it’s something people really want to talk about. If somebody will take ownership of it, they can control-click, select “add your name”, find a slot that makes sense (and we may not use all of these), and drag it there. So “I’m going to take the first session to talk about this.” Then people can put the stickies on it over here [pastes stickers onto the topic sticky in the agenda], so you have some sense of how many people are interested in attending and talking about that topic. But there are no hard and fast rules.

    Whether or not they do that, say, “David wants to talk about identity, namespacing, and uniqueness in the core registry,” we’re going to do that in the first session. We’ll be in the northeast corner of the room — I’m going to try to get access to the room earlier in the day so I can have some idea of how it breaks up, and I’ll tweak the Canva to add stuff as appropriate.

    • David G. Johnston in chat: Same thing multiple times so people don’t miss out on joining their #2 option?
    • David G. Johnston in chat: How about #1, #2, #3 as labels instead of just one per person?
  • Jeremy S: Are you wanting us to put Post-it notes on the agenda now, before we know what’s been voted for?

  • David Wheeler (he/him): Yep! Especially if there’s some idea you had, Jeremy. If there’s stuff you feel is missing or would be a different approach, stick it in here. It may well be that not many people are interested in what I’ve come up with, but that they want to talk about those five extensions.

  • David Wheeler (he/him): (Reading comment from David G. Johnston): “One, two, and three as labels instead of just one per person?” David, I’m sorry, I don’t follow.

  • David G. Johnston: So basically like ranked choice. If you’re gonna do core one time and binary packaging one time, and they’re running at the same time, well, I want to do both. I want to do core — that’s my first choice — and I want to do binary packaging — that’s my second choice. If I had to choose, I’d go to number one. But if you have enough people saying “I want to see this, that’s my number two option,” you run binary packaging twice, not conflicting with core, so you can get more people.

  • David Wheeler (he/him): I see, have people stick numbers on the topics that most interest them. Let’s see here… [pokes around the Canva UX, finds stickers with numbers.] There we go. I’ll stick those somewhere reasonable so people can rank their top choices if they want.

    This is all going to be super arbitrary and unscientific. The way I’ve seen it happen before is people just drop stars on stuff and say, okay this one has four and this one has eight so we definitely want to talk about that one, who’s going to own it, that sort of thing. I think what makes sense is to send this email to all the participants in advance; hopefully people will take a look, have some sense of it, and maybe put a few things on. Then, those of us who are organizing it and will be facilitating on the day, we should meet like a day or two before, go over it, and make some decisions about what we definitely think should be covered, what things are open, and get a little more sense of how we want to run things. Does that make sense?

  • Jeremy S: Yeah, I think chatting ahead of time would be a good idea. It’ll be interesting to see how the Canva thing goes and what happens with it.

  • David Wheeler (he/him): It might be a mess! Whatever! But the answer is that whatever happens this is the right place. Whenever it starts is the right time. Whatever happens could only happen here. It’s super arbitrary and free, and we can adapt as much as we want as it goes.

  • David Wheeler (he/him): I think that’s it. Do you all feel like you have some sense of what we want to do?

  • Jeremy S: Well not really, but that’s okay! [Laughs]

  • Steven Miller: Okay, so here’s what we are supposed to do. Are we supposed to go find people who might be interested to present — they will already be in the list of people who are going to Vancouver. Then we talk to them about these Post-its and we say, “would you like to have a small discussion about one of these things?” If so, they put a sticky note on it. And then we put the sticky notes in the fields, and we have a list of names associated with the sticky notes. Like, maybe Yurii is interested in binary distribution, and then maybe David is also interested in that. So there’s like three or four people in each section, and we’re trying to make sure that if you’re interested in multiple sections you get to go to everything?

  • David Wheeler (he/him): Yeah you can float and try to organize things. I put sessions in here assuming people would want to spend an hour, but maybe a topic only takes 15 minutes.

  • David G. Johnston: Staying on my earlier thought on what people want to see: for people who are willing to present, and can present on multiple things, we could have a gold star for who’s willing to actually present on a topic. So here’s a topic where I’ve got eight people who want to see it but only one possible presenter. Or I’ve got five possible presenters and three possible viewers. But you have that dynamic of ranked choice for both “I’ll present stuff” and “I’m only a viewer.”

  • David Wheeler (he/him): I think that typically these things are self-organizing. Somebody says, “I want to do this, I will facilitate, and I need a note taker.” But they negotiate amongst themselves about how they want to go about doing it. I don’t think it necessarily has to be a formal presentation, and usually these things are not. Usually it’s somebody saying, “here’s what this means, this is the topic we’re going to try to cover, these are the decisions we want to make. Go!”

  • Jeremy S: You’re describing the unconference component of PGCon that has been done in the past.

  • David Wheeler (he/him): More or less, yes

  • Jeremy S: So should we just come out and say this is an unconference? Then everybody knows what you’re talking about really fast, right?

  • David Wheeler (he/him): Sure, sure, yeah. I mean —

  • Jeremy S: We’re just doing the same thing as – yeah.

  • David Wheeler (he/him): Yeah, I try to capture that here but we can use the word “unconference” for sure. [Edits the Canva to add “an unconference session” to the title.] There we go.

  • Steven Miller: I imagine there are people who might be interested to present but they just aren’t in this meeting right now. So maybe we need to go out and advertise this to people.

  • David Wheeler (he/him): Yeah, I want to draft an email to send to all the attendees. Melanie told me we can send an email to everybody who’s registered.

  • Jeremy S: And to be clear it’s full, right? Nobody new can register at this point?

  • David Wheeler (he/him): As far as I know, but I’m not sure how hard and fast the rules are. I don’t think any more people can register, but it doesn’t mean other people won’t wander in. People might have registered and then not come because they’re in the patch session or something.

    So I volunteer to draft that email today or by tomorrow and share it with the Slack channel for feedback. Especially if you’re giving me notes to clarify what things mean, because it seems like there are more questions and confusions about how it works than I anticipated — in part because it’s kind of unorganized by design [chuckles].

  • David Wheeler (he/him): Oh, that’s a good thing to include, Jeremy; that’s a good call. But to also try to maximize participation of the people who’re planning to be there. It may be that they say, “Oh, this sounds interesting,” or whatever. And I’ll add some different stickers to this for some different meanings, like “I’m interested” or “I want to take ownership of this” or “this is my first, second, third, or fourth choice”. Sound good?

  • Steven Miller: Yes, it sounds good to me!

  • David Wheeler (he/him): Thanks Steven.

  • Jeremy S: Sounds good, yeah.

  • David Wheeler (he/him): All right, great! Thanks everybody for coming!


  1. Hit the #extensions channel on the Postgres Slack for the link! ↩︎

  2. In the meeting I kept saying “open technology” but meant Open Space Technology 🤦🏻‍♂️. ↩︎

  3. But now I can look it up. It was pgml, for which I mocked up a META.json. ↩︎

Extension Summit Topic Review

Boy howdy that went fast.

This Wednesday, May 15, the final Postgres extension ecosystem mini-summit will review topics covered in previous Mini-Summits, various Planet PostgreSQL posts, the #extensions channel on the Postgres Slack and the Postgres Discord. Following a brief description of each, we’ll determine how to reduce the list to the most important topics to take on at the Extension Ecosystem Summit at PGConf.dev in Vancouver on May 28. I’ll post a summary later this week along with details for how to participate in the selection process.

In the meantime, here’s the list as of today:

  • Metadata:
    • Third-party dependencies
    • Types of extensions
    • Taxonomies
    • System requirements (OS, version, CPU, etc.)
    • Categorization
    • Versioning
  • Registry:
    • Identity, namespacing, and uniqueness
    • Distributed vs. centralized publishing
    • Binary packaging and distribution patterns
    • Federated distribution
    • Services and tools to improve or build
    • Stats, reports, badging (stars, reviews, comments, build & test matrices, etc.)
  • Packaging:
    • Formats (e.g., tarball, OCI, RPM, wheel, etc.)
    • Include dynamic libs in binary packaging format? (precedent: Python wheel)
    • Build farming
    • Platforms, architectures, and OSes
    • Security, trust, and verification
  • Developer:
    • Extension developer tools
    • Improving the release process
    • Build pipelines: Supporting PGXS, pgrx, Rust, Go, Python, Ruby, Perl, and more
  • Community:
    • Community integration: identity, infrastructure, and support
    • How-Tos, tutorials, documentation for creating, maintaining, and distributing extensions
    • Docs/references for different types of extensions: CREATE EXTENSION, hooks, background workers, CLI apps/services, web apps, native apps, etc.
  • Core:
    • Second extension directory (a.k.a. variable installation location, search path)
    • Keeping all files in a single directory
    • Documentation standard
    • Inline extensions: UNITs, PACKAGEs, TLEs, etc.
    • Minimizing restarts
    • Namespacing
    • Sandboxing, code signing, security, trust
    • Dynamic module loading (e.g., use Thing in PL/Perl could try to load Thing.pm from a table of acceptable libraries maintained by the DBA)
    • Binary compatibility of minor releases and/or ABI stability

Is your favorite topic missing? Join us at the mini-summit or drop suggestions into the #extensions channel on the Postgres Slack.

Mini Summit Five

The video for Yurii Rashkovskii’s presentation at the fifth Postgres Extension Ecosystem Mini-Summit last week is up. Links:

Here’s my interpolation of YouTube’s auto-generated transcript, interspersed with chat activity.

Introduction

Presentation

  • Yurii: Today I’m going to be talking about universally buildable extensions. This is going to be a shorter presentation, but the point of it is to create some ideas, perhaps some takeaways, and actually provoke a conversation during the call. It would be really amazing to explore what others think, so without further ado…

  • I’m with Omnigres, where we’re building a lot of extensions. Often they push the envelope of what extensions are supposed to do. For example, one of our first extensions is an HTTP server that embeds a web server inside of Postgres. We had to do a lot of unconventional things. We have other extensions uniquely positioned to work both on developer machines and production machines — because we serve the developers and DevOps market.

  • The point of Omnigres is turning Postgres into an application runtime — or an application server — so we really care how extensions get adopted. When we think about application developers, they need to be able to use extensions while they’re developing, not just in production or on some remote server. They need extensions to work on their machine.

  • The thing is, not everybody is using Linux. Other people use macOS and Windows, and we have to account for that. There are many interesting problems associated with things like dependencies.

  • So there’s a very common approach used by those who try to orchestrate such setups and by some package managers: operating out of a container. The idea is that with a container you can create a stable environment where you bring all the dependencies that your extension would need, and you don’t have to deal with the physical reality of the host machine. Whether it’s a developer machine, CI machine, or production machine, you always have the same environment. That’s definitely a very nice property.

  • However, there are some interesting concerns that we have to be aware of when we operate out of a container. One is mapping resources: when you have a container, you have to decide how many cores and how much memory go to it, how to map volumes (especially on Docker Desktop), how to connect networking, and how to pass environment variables.

  • That means whenever you’re running your application — especially locally, especially in development — you’re always interacting with that environment and you have to set it up. This is particularly problematic with Docker Desktop on macOS and Windows because these are not the same machines. You’re operating out of a virtual machine instead of your host machine, and obviously containers are Linux-specific, so it’s always Linux.

  • What we found is that oftentimes it really makes a lot of sense to test extensions, especially those written in C, on multiple platforms. Because in certain cases bugs, especially critical memory-related bugs, don’t show up on one platform but show up on another. That’s a good way to catch pretty severe bugs.

  • There are also other, rarer concerns. For example, you cannot access the host GPU through Docker Desktop on macOS or through Colima. If you’re building something that could use the host GPU, it’s just not accessible, even though it would work on that machine. If you’re working on something ML-related, that can be an impediment.

  • This also makes me wonder: what are the other reasons why we’re using containers? One reason that stood out very prominently was that Postgres always has paths embedded at compile time. That makes it very difficult to ship extensions universally across different installations and different distributions. I wonder if that is one of the bigger reasons why we want to ship Postgres as a Docker container: so that we always have the same paths regardless of where it’s running.

  • Any questions so far about Docker containers? Also if there’s anybody who is operating a Docker container setup — especially in their development environment — if you have any thoughts, anything to share: what are the primary reasons for you to use a Docker container in your development environment?

    • Jeremy S in chat: When you say it’s important to test on multiple platforms, do you mean in containers on multiple platforms, or directly on them?

    • Jeremy S in chat: That is - I’m curious if you’ve found issues, for example, with a container on Mac/windows that you wouldn’t have found with just container on linux

  • Daniele: Probably similarity with the production deployment environments. That’s one. Being free from whatever is installed on your laptop, because maybe I don’t feel like upgrading the system Python version and potentially breaking the entire Ubuntu, whereas in a Docker container you can have whatever version of Python, whatever version of NodeJS, or whatever other invasive type of service. I guess these are good reasons. These were the motivations that brought me to start developing directly in Docker instead of using the desktop.

  • Yurii: Especially when you go all the way to production, do you find container isolation useful to you?

  • Daniele: Yeah, I would say so; I think the problem is more to break isolation when you’re developing. So just use your editor on your desktop, reload the code, and have direct feedback in the container. So I guess you have to break one barrier or two to get there. At least from the privileged position of having Linux on the desktop there is a smoother path, because it’s not so radically different being in the container. Maybe for Windows and macOS developers it would be a different experience.

  • Yurii: Yeah, I actually wanted to drill down a little bit on this. In my experience, I build a lot on macOS, where you have to break through the isolation layers with the container itself and obviously the VM. I’ve found there are often subtle problems that make the experience way less straightforward.

  • One example I found is that, in certain cases, you’re trying to map a certain port into the container and you already have something running [on that port] on your host machine. Depending on how you map the port — whether you specify or don’t specify the address to bind on — you might not get Docker to complain that the port is already taken.

  • So it can be very frustrating: I find the port, I’m trying to connect to it, but it’s not connecting to the right port. There are very small, intricate details like this, and sometimes I’ve experienced problems like files not perfectly synchronizing into the VM — although that has gotten a little better in the past 2–3 years — but there were definitely some issues. That’s particularly important for the workflows that we’re doing at Omnigres, where you’re running this entire system — not just the database but your back end. To be able to connect to what’s running inside of the container is paramount to the experience.

  • Daniele: Can I ask a question about the setup you describe? When you go towards production, are those containers designed to be orchestrated by Kubernetes? Or is there a different environment where you have your Docker containers in a local network, I assume, with different Docker microservices talking to each other? Are you agnostic about what you run them on, or do you run them on Kubernetes or Docker Compose or some other form of glue that you set up yourself, or your company has set up?

    • Steven Miller in chat: … container on Mac/windows [versus linux]
    • Steven Miller in chat: Have seen with chip specific optimizations like avx512
  • Yurii: Some of our users are using Docker Compose to run everything together. However, I personally don’t use Docker containers. This is part of the reason why the topic of this presentation is about universally buildable extensions. I try to make sure that all the extensions are easily compilable and easily distributable on any given supported platform. But users do use Docker Compose, it’s quite common.

  • Does anyone else here have a preference for how to move Docker containers into production or a CI environment?

  • Nobody? I’ll move on then.

    • Steven Miller in chat: Since in docker will run under emulation, but on linux will run with real hardware, so the environment has different instruction set support even though the docker --platform config is the same

    • Jeremy S in chat: That makes sense

  • Yurii: I wanted to show just a little bit of a proof of concept tool that we’ve been working on, on and off for the last year—

  • David Wheeler (he/him): Yurii, there are a couple comments and questions in chat, I don’t know if you saw that.

  • Yurii: I didn’t see that, sorry.

  • Jeremy is saying, “when you say it’s important to test on multiple platforms do you mean in containers on multiple platforms or directly on them?” In that particular instance I meant on multiple platforms, directly.

  • The other message from Jeremy was, “I’m curious if you found issues for example with a container on Mac or Windows that you wouldn’t have found with just container on Linux?” Yeah, I did see some issues depending on the type of memory-related bug. Depending on the system allocator, I was either hitting a problem or not. I was not hitting it on Linux, I believe, but it was hitting on macOS. I don’t remember the details right now, unfortunately, but that difference was indicative of a bug.

  • Steven wrote, trying to connect this: “Have seen with chip-specific optimizations like avx512,” and, “Docker will run under emulation but on Linux will run with real hardware.” Yeah, that’s an interesting one about avx512. I suppose this relates to the commentary about GPU support, but this is obviously the other part of supporting specific hardware: chip-specific optimizations. That’s an interesting thing to learn; I was not aware of that! Thank you, Steven.

  • Let’s move on. postgres.pm is a proof of concept that I was working on for some time. The idea behind it was both ambitious but also kind of simple: can we try describing Postgres extensions in such a way that they will be almost magically built on any supported platform?

  • The idea was to build an expert system of how to build things from a higher level definition. Here’s an example for pgvector:

    :- package(vector(Version), imports([git_tagged_revision_package(Version)])).
    git_repo("https://github.com/pgvector/pgvector").
    :- end_package.
    

    It’s really tiny! There are only two important things there: the Git tagged revision package and Git repo. There’s nothing else to describe the package.

  • The way this works is by inferring as much information as possible from what’s available. Because it’s specified as a Git-tagged revision package, it knows that it can download the list of version-shaped revisions — the versions — and it can check out the code and do further inferences. It infers metadata from META.json if it’s available, so it will know the name of the package, the description, authors, license, and everything else included there.

    • David G. Johnston in chat: PG itself has install-check to verify that an installed instance is functioning. What are the conventions/methods that extension authors are using so that a deployed container can be tested at a low level of operation for the installed extensions prior to releasing the image to production?

  • It automatically infers the build system. For example, for C extensions, if it sees that there’s a Makefile and C files, it infers that you need make and a C compiler, and it tries to find those on the system: it will try to find cc, gcc, Clang — basically all kinds of things.

    • David Wheeler (he/him) in chat: Feel free to raise hands with questions

  • Here’s a slightly more involved example for pg_curl. Ah, there was a question from David Johnston. David says, “PG has install-check to verify that an installed instance is functioning. What are the conventions/methods that extension authors are using so the deployed container can be tested at a low level of operation for the installed extension prior to releasing the image to production?”

  • I guess the question is about general conventions for how extension authors ensure that the extensions work, but I suppose maybe part of this question is whether that’s also testable in a production environment. David, are you talking about the development environment alone or both?

  • David G. Johnston: Basically, the pre-release to production. You go in there in development, you set up an extension in source, and then you build your image where you compile it — you compile PG, you compile the extension, or you deploy packages. But now you have an image that you’ve never actually tested. I can run installcheck on an installed instance of Postgres and know that it’s functioning, but it won’t test my extension. So if I install PostGIS, how do I test that it has been properly installed into my database prior to releasing that image into production?

    • Tobias Bussmann in chat: shouldn’t the extension have a make installcheck as well?

  • Yurii: To my knowledge there’s no absolutely universal method. Of course the PGXS methods are the most standard ones — like installcheck — to run the tests. In our [Omnigres’s] case, we replaced pg_regress with pg_yregress, another tool that we’ve developed. It allows for more structural tests, and tests certain things that pg_regress cannot test because of the way it operates.

  • I can share more about this later if that’s of interest to anybody. We basically always run pg_yregress on our extensions; it creates a new instance of Postgres — unless told to use a pre-existing instance — and it runs all the tests there as a client. It basically deploys the extension and runs the set of tests on it.

  • David G. Johnston: Okay.

  • Yurii: I guess it depends on how you ship it. For example, if you look at the pgrx camp, they have their own tooling for that as well. I’ve also seen open-source extensions that could be written in, say, Rust, but still use pg_regress tests to test their behavior. That would often depend on how their build system is integrated with those tests. I guess the really short answer is there’s probably no absolutely universal method.

  • David, thank you for pasting the link to pg_yregress. If there are any questions about it, feel free to ask me. Any other thoughts or questions before I finish this slide? Alright, I’ll carry on then.

    :- package(pg_curl(Version), imports(git_explicit_revision_package(Version))).
    :- inherit(requires/1).
    git_repo("https://github.com/RekGRpth/pg_curl").
    git_revisions([
            '502217c': '2.1.1',
            % ... older versions omitted for now ...
        ]).
    requires(when(D := external_dependency(libcurl), version::match(D, '^7'))).
    :- end_package.
    
  • The difference between this example and the previous one is that here it specifies an explicit revision map, because that project does not happen to have version tags, so they have to be done manually. You can see that in the Git revision specification. But what’s more interesting is that it specifies what kind of dependency it needs. In this particular instance it’s libcurl, and the version has to match version 7 — any version 7.

  • These kinds of requirements, as well as compiler dependencies, make dependencies, and others, are always solved by pluggable satisfiers. They look at what’s available depending on the platform — Linux, a particular flavor of Linux, macOS, etc. — and pick the right tools to see what’s available. In the future there’s a plan to add features like building these dependencies automatically, but right now it depends on the host system, albeit in a multi-platform way.

    • David Wheeler (he/him) in chat: How does it detect that libcurl is required?

  • The general idea behind this proof of concept is that we want to specify high-level requirements and not how exactly to satisfy them. If you compare this to a Dockerfile, the Dockerfile generally tells you exactly what to do step by step: let’s install this package and that package, let’s copy files, etc., so it becomes a very specific set of instructions.

    • Jeremy S in chat: And how does it handle something with different names in different places?

  • There was a question: “how does it detect that libcurl is required?” There is a line at the bottom that says “requires external dependency libcurl”, so that was the definition.

  • The other question was “how does it handle something with different names in different places?” I’m not sure I understand this question.

  • Jeremy S: I can be more specific. A dependency like libc is called libc on Debian platforms and it’s called glibc on Enterprise Linux. You talked about available satisfiers like Homebrew, Apt, and pkg-config, but what if it has a different name in Homebrew than in Apt or something like that? Does it handle that, or is that just something you haven’t tackled yet?

  • Yurii: It doesn’t tackle this right now, but it’s part of the vision of where it should go. For certain known libraries there’s an easy way to add a mapping that will kick in for one distribution, and otherwise there will be a satisfier for another one. They’re completely pluggable, small satisfiers looking at all the predicates that describe the system underneath.

    • David G. Johnston in chat: How is the upcoming move to meson in core influencing or impacting this?

  • Just for a point of reference, this is built on top of Prolog, so it’s like a knowledge base, plus rules for how to apply that knowledge to particular requirements.

    • Tobias Bussmann in chat: Prolog 👍

    • Shaun Thomas in chat: What if there are no satisfiers for the install? If something isn’t in your distro’s repo, how do you know where to find the dependency? And how is precedence handled? If two satisfiers will fulfill a requirement, will the highest version win?

  • Jeremy S: I remember Devrim talking about how, if you read through the [RPM] spec files, what you find is all this spaghetti code with #ifdefs and logic branches, and in his case it’s just dealing with differences between Red Hat and SUSE. If this is something that we manually put in, we kind of end up in a similar position: it’s on us to create those mappings, it’s on us to maintain those mappings over time — we kind of own it — versus being able to automate some kind of automatic resolution. I don’t know if there is a good automatic way to do it. David had found something that he posted, which I looked at a little bit, but Devrim talked about how much maintenance overhead it becomes in the long run to constantly have to maintain this, which seemed less than ideal.

  • Yurii: It is less than ideal. For now, I do think that would have to be manual, which is less than ideal. But it could be addressed at least on a case-by-case basis, because we don’t really have thousands of extensions yet — in the ecosystem maybe a thousand total. I think David Wheeler would know best from his observations, and I think he mentioned some numbers in his presentation a couple of weeks ago. But basically, handling this on a case-by-case basis: we need this dependency and apparently it’s a different one on a different platform, so let’s address that. But if there can be a method that gets us at least to a certain level of unambiguous resolution automatically or semi-automatically, that would be really great.

    • Samay Sharma in chat: +1 on the meson question.

  • Jeremy S: I think there are a few more questions in the chat.

  • Yurii: I’m just looking at them now. “How is the upcoming move to meson in core influencing or impacting this?” I don’t think it’s influencing this particular part in any way that I can think of right now. David, do you have thoughts on how it can? I would love to learn.

  • David G. Johnston: No, I literally just started up a new machine yesterday and decided to build it with meson instead of make, and the syntax of the meson file seems similar to this. I was just curious if there are any influences there or if it’s just happenstance.

  • Yurii: Well, from what I can think of right now, there’s just general reliance on either an implicitly found pg_config or an explicitly specified pg_config. That’s just how you discover Postgres itself. There’s no relation to how Postgres itself was built. The packaging system does not handle, say, building Postgres itself or providing it, so it’s external to this proof of concept.

  • David G. Johnston: That’s a good separation of concerns, but there’s also the idea that, if core is doing something and we’re going to build extensions against PostgreSQL, then if we’re doing things similar to how core does them, there’s less of a learning curve and less of everyone doing their own thing, where you have 500 different ways of doing testing.

  • Yurii: That’s a good point. That’s something definitely to reflect on.

  • I’ll move on to the next question, from Shaun: “What if there are no satisfiers for the install? If something isn’t in your distro’s repo, how do you know where to find the dependency?” And, “If two satisfiers will fulfill a requirement, will the highest version win?” If there are no satisfiers, right now it will just say it’s not solvable, so we fail to do anything; you would have to go and figure that out. It is a proof of concept; it’s not meant to be absolutely feature-complete, but rather an exploration of how we can describe the packages and their requirements.

  • David Wheeler (he/him): I assume the idea is that, as you come upon these you would add more satisfiers.

  • Yurii: Right, you basically just learn. We learn about this particular need in a particular extension and develop a satisfier for it. The same applies to precedence: it’s a question of further evolution. Right now it just finds whatever is available within the specified range.

  • If there are no more pressing questions I’ll move to the next slide. I was just mentioning the problem of highly specific recipes versus high-level requirements. Now I want to shift attention to another topic that has been coming up in different conversations: whether to build and ship your extension against minor versions of Postgres.

  • Different people have different stances on this, and even package managers take different stands on it. Some say just build against the latest major version of Postgres, and others say build extensions against every single minor version. I wanted to research and see what the real answer should be: should we build against minor versions or not?

  • I’ve done a little bit of experimentation, and my answer is “perhaps”, and maybe even “test against different minor versions.” In my exploration of version 16 (and also 15, but I didn’t include it) there are multiple changes between minor versions that can potentially be dangerous. One great example is when you have a new field inserted in the middle of a structure that is available through a header file. That definitely changes the layout of the structure.

     typedef struct BTScanOpaqueData
     {
    -    /* these fields are set by _bt_preprocess_keys(): */
    +    /* all fields (except arrayStarted) are set by _bt_preprocess_keys(): */
         bool    qual_ok;          /* false if qual can never be satisfied */
    +    bool    arrayStarted;     /* Started array keys, but have yet to
    +                               * "reach past the end" of all arrays? */
         int     numberOfKeys;     /* number of preprocessed scan keys */
     }
    
  • In this particular case, for example, you will not get numberOfKeys if you intend to read it, because its offset within the structure has changed. I think that change was from 16.0 to 16.1. If you build against 16.0 and then try to run on 16.1, it might not be great.
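
    For illustration, here is a minimal C sketch (invented for this transcript, with made-up field names, not Postgres source) of how a field inserted mid-struct can shift the offset of everything after it; whether a given field actually moves depends on alignment padding.

     /* Hypothetical stand-ins for an old and a new minor's struct layout. */
     #include <stdbool.h>
     #include <stddef.h>
     #include <stdio.h>

     typedef struct OldScanOpaque {   /* what the extension was built against */
         bool  qual_ok;
         int   numberOfKeys;
         void *keyData;               /* offset 8 on a typical 64-bit ABI */
     } OldScanOpaque;

     typedef struct NewScanOpaque {   /* what the server now allocates */
         bool  qual_ok;
         int   numberOfKeys;
         int   numArrayKeys;          /* field inserted mid-struct */
         void *keyData;               /* pushed back to offset 16 */
     } NewScanOpaque;

     int main(void) {
         /* An extension reading keyData through the old layout would
          * dereference the wrong bytes on the new server. */
         printf("old keyData offset: %zu, new: %zu\n",
                offsetof(OldScanOpaque, keyData),
                offsetof(NewScanOpaque, keyData));
         return 0;
     }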

    The other concern that I found is that there are new APIs appearing in header files between different minor versions. Some of them are implemented in header files, either as macros or as static inline functions. When you’re building against that particular version, you’ll get that particular implementation embedded.

  • Others are exports of symbols, like in this case try_index_open and contain_mutable_functions_after_planning, if you’re using any of these. This means that these symbols are not available on some minor versions and they’re available later on, or vice versa: they may theoretically disappear.

  • There are also changes in inline behavior. There was a change between 16.0 and 16.1 or 16.2 where an algorithm was changed: instead of just > 0 there’s now >= 0, and that means the behavior will be completely different between these implementations. This is important because it’s coming from a header file, not a source file, so you’re embedding this into your extension.

    • David Wheeler (he/him) in chat: That looks like a bug fix

  • Yurii: Yeah, it is a bug fix. But what I’m saying is, if you build your extension against, say, 16.0, which did not have this bug fix, and then you deploy it on 16.1, you still have the bug, because it’s coming from the header file.
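
    Here is a similarly hypothetical sketch (the function names are invented) of why: a static inline function defined in a header is compiled into the extension itself, so the extension keeps whichever body it was built against.

     #include <stdbool.h>
     #include <stdio.h>

     /* As a 16.0-era header might have defined it: */
     static inline bool keep_item_v160(int cmp) { return cmp > 0; }

     /* As the fixed 16.1-era header defines it: */
     static inline bool keep_item_v161(int cmp) { return cmp >= 0; }

     int main(void) {
         /* An extension built against 16.0 embeds the old body at its own
          * compile time; deploying it on a 16.1 server changes nothing. */
         printf("cmp == 0: 16.0 build keeps item? %d; 16.1 build? %d\n",
                keep_item_v160(0), keep_item_v161(0));
         return 0;
     }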

  • David Wheeler (he/him): Presumably they suggest that you build from the latest minor release, and that’s backward-compatible to the earlier releases.

  • Yurii: Right, and that’s a good middle ground for this particular case. But of course, when you do a minor upgrade you have to remember that you have to rebuild your extensions against that minor version; you can’t just easily transfer them.
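
    One common way to track such header changes at build time is a compile-time guard on PG_VERSION_NUM, which encodes the version as major times 10000 plus minor (160001 for 16.1). This is a sketch, assuming a standard PGXS build with the server headers on the include path; the KEEP_ITEM macro is invented. Note that a guard only helps when you rebuild; it cannot repair a binary compiled against a different minor's headers.

     #include "postgres.h"   /* brings in PG_VERSION_NUM via pg_config.h */
     #include "fmgr.h"

     PG_MODULE_MAGIC;

     #if PG_VERSION_NUM >= 160001
     #define KEEP_ITEM(cmp) ((cmp) >= 0)   /* 16.1+ semantics */
     #else
     #define KEEP_ITEM(cmp) ((cmp) > 0)    /* 16.0 semantics */
     #endif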

    • Jeremy S in chat: The struct change in a minor is very interesting

  • David Wheeler (he/him): Jeremy points out that the struct change is pretty interesting.

  • Yurii: Yeah, it’s interesting because it’s super dangerous! Like if somebody is expecting a different versioned structure, then it can be pretty nasty.

    • Shaun Thomas in chat: Yeah. It’s a huge no-no to insert components into the middle of a struct.

  • Jeremy S: Is that common? I’m really surprised to see that in a minor version. On the other hand, I don’t know that Postgres makes promises about this — some of it seems to come down to the fact that when you’re coding in C, you’re coding directly against structures in Postgres. That’s really interesting. I’m surprised to see that still.

    • Steven Miller in chat: In the case of trunk, we would have built against minor versions in the past then upgrade the minor version of postgres without reinstalling the binary source of the extension, so this is an issue

    • David G. Johnston in chat: Yeah, either that isn’t a public structure and someone is violating visibility (in which case yes, you should be tracking minor builds)

    • Shaun Thomas in chat: I’m extremely shocked that showed up in 16.2.

  • Yurii: Yeah, I didn’t expect that either, because that’s just a great way to have absolutely undefined behavior. Like if somebody forgot to rebuild their extension against a new minor, then this can be pretty terrible.

  • But my general answer to all of this: unless you’re going really deep into the guts of Postgres, unless you’re doing something very deep in terms of query planning or query execution, you’re probably okay? But who knows.

    • Jason Petersen in chat: yeah it feels like there’s no stated ABI guarantee across minor updates

    • Jason Petersen in chat: other than “maybe we assume people know not to do this”

    • David Christensen in chat: yeah ABI break in minor versions seems nasty

  • Jeremy S: But it’s not just remembering to rebuild your extension. Let’s suppose somebody is just downloading their extensions from the PGDG repo, because there’s a bunch of them there. They’re not compiling anything! They’re downloading an RPM, and the extension might be in a different RPM from Postgres. I don’t know that there have been any cases with any of the extensions in PGDG, so far, where a particular extension RPM had to have compatibility information at the level of minors.

    • Shaun Thomas in chat: There was actually a huge uproar about this a couple years ago because they broke the replication ABI by doing this.

    • David G. Johnston in chat: I see many discussions about ABI stability on -hackers so it is a goal.

    • Steven Miller in chat: PGDG is the same binaries for each minor version because the postgres package is only major version, right?

  • Yurii: Yeah, that’s definitely a concern, especially when it comes to the scenario where you don’t rebuild your extensions but just get pre-built packages. It’s starting to leak out of the scope of this presentation, but I thought it was a very interesting topic to bring to everybody’s attention.

    • Jason Petersen in chat: “it’s discussed on hackers” isn’t quite the same as “there’s a COMPATIBILITY file in the repo that states a guarantee”

    • Jason Petersen in chat: (sorry)

  • My last item. Going back to how we ship extensions and why we need complex build systems and packaging: oftentimes you want your extensions to depend on some library, say OpenSSL or SQLite or whatever, and the default is to bring in the shared dependency that would come from different packages on different systems.

  • What we have found at Omnigres is that it is increasingly simpler to statically link your dependencies — and pay the price of larger libraries — but then you have no questions about where a dependency comes from: what package, which version. You know exactly which version it is and how it’s getting built. But of course you also have a problem where, if you want to change the version of the dependency, it’s harder because it’s statically linked. The question is whether you should be doing that or not, depending on the authors of the extension and their promises for compatibility with particular versions of their dependencies. This one is kind of naive and simple, as in “just use static.” Sometimes it’s not possible or very difficult to do so; some libraries don’t have build systems amenable to static library production.

  • What we found works pretty nicely is using rpath in your dynamic libraries. You can use special variables — $ORIGIN or @loader_path on Linux or macOS, respectively — to specify that your dependency is literally in the same folder or directory where your extension is. So you can ship your extension with the dependencies alongside, and it will not try to load them from your system but from the same directory. We find this pretty useful.
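
    As a rough sketch of the technique (the extension and library names here are invented), the link step embeds a relative rpath, and the loader then resolves the bundled dependency next to the extension itself:

     /* myext.c -- calls into a libfoo shipped in the same directory.
      *
      * Linux (quote $ORIGIN so the shell doesn't expand it):
      *   gcc -shared -fPIC -o myext.so myext.c -L. -lfoo -Wl,-rpath,'$ORIGIN'
      *
      * macOS (the bundled library's install name should be @rpath-relative,
      * e.g. @rpath/libfoo.dylib):
      *   cc -dynamiclib -o myext.dylib myext.c -L. -lfoo -Wl,-rpath,@loader_path
      *
      * At load time the dynamic linker looks for libfoo alongside myext
      * before falling back to the system library directories. */
     extern int foo_add(int a, int b);   /* provided by the bundled libfoo */

     int myext_entry(void) {
         return foo_add(40, 2);
     }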

  • That’s pretty much it. Just to recap, I talked about the multi-platform experience, the pros and cons of containers, inferring how you can build extensions with dependencies, static and rpath dependencies, and the problems with PG minor version differences. If anybody has thoughts, questions, or comments, I think that would be great. Thank you.

Discussion

  • David Wheeler (he/him): Thank you, Yurii, already some good discussion. What else do you all have?

  • David G. Johnston: PG doesn’t use semantic versioning. We have a major version and a minor version. The minor versions are new releases; they do change behaviors. There are goals from the hackers to not break things to the extent possible, but they don’t guarantee that this will not change between dot-three and dot-four. When you’re releasing once a year, waiting isn’t practical if things are broken; you can’t wait nine months to fix something. Some things you need to fix in the next update and back-patch.

    • Steven Miller in chat: Thank you, this is very useful info

    • Jeremy S in chat: Dependency management is hard 🙂 it’s been a topic here for awhile

  • David G. Johnston: So we don’t have a compatibility file, but we do have goals, and if they get broken there’s either a reason for it or someone just missed it. From an extension standpoint, if you want to be absolutely safe but absolutely cost-intensive, you want to update every minor release: compile, test, etc. Depending on what your extension is, you can trade off some of that risk for cost savings. That’s just going to be a personal call. The systems that we build should make it easy enough to do releases every “dot” and back-patch. Then the real cost is whether you spend the time testing and coding against it to make sure that the stuff works. So our tools should assume releasing extensions on every minor release, not every major release, because that’s the ideal.

    • Shaun Thomas in chat: It’s good we’re doing all of this though. It would suck to do so much work and just become another pip spaghetti.

  • Yurii: That’s exactly what I wanted to bring to everybody’s attention, because there are still a lot of conversations about this and there was not enough clarity. So that helps a lot.

  • Jeremy S: Did you say “release”, or did you say “build”, with every minor? I think I would use the word “build”.

  • David G. Johnston: Every minor release, the ones that go out to the public. I mean, every commit you could update your extension if you wanted, but really the ones that matter are the ones that go public. So when 16.3 or 16.4 comes out, automation would ideally build your extension against it, run your tests, and see if anything broke. And then deploy the new [???] of your extension against version 16.3. That would be your release.

  • Jeremy S: I think there are two things there. There’s rebuilding it — because you can rebuild the same version of the extension, and that would pick up if they added a field in the middle of a struct, which is what happened between 16.0 and 16.1: rebuild the same version. Versus: the extension author … what would they be doing? They could tag a new version, but they’re not actually changing any code. I don’t think it is a new release of the extension, because you’re not even changing anything in the extension; you’re just running a new build. It’s just a rebuild.

    • David Wheeler (he/him) in chat: It’d be a new binary release of the same version. In RPM it goes from v1.0.1-1 to v1.0.1-2

    It reminds me of what Alvaro did in his OCI blog post, where he said you really have to … Many of us don’t understand how tightly coupled the extensions need to be to the database. These C extensions that we’re building have risks when we separate them and don’t just build everything together.

  • David G. Johnston: The change there would be metadata. Version four of my extension, I know it works on 16.0 to 16.1. 16.2 broke it, so that’s where it ends and my version 4.1 is known to work on 16.2.

  • Jeremy S: But there is no difference between version 4 and version 4.1. There’s a difference in the build artifact that your build farm spit out, but there’s no difference in the extension, right?

    • Keith Fiske in chat: Still confusing if you don’t bump the release version even with only a library change

    • Keith Fiske in chat: How are people supposed to know what library version is running?

  • David G. Johnston: Right. If the extension still works, then your metadata would just say, “not only do I work through version 16.2, I now work through 16.3.”

  • Jeremy S: But it goes back to the question: is the version referring to a build artifact, or is the version referring to a version of the code? I typically think of versions as a user of something: a version is the thing. It would be the code of the extension. Now we’re getting all meta; I guess there are arguments to be made both ways on that.

    • Jason Petersen in chat: (it’s system-specific)

    • Jason Petersen in chat: no one talks in full version numbers, look at an actual debian apt-cache output

  • David Wheeler (he/him): Other questions? Anybody familiar with the rpath stuff? That seems pretty interesting to me as a potential solution for bundling all the parts of an extension in a single directory — as opposed to what we have now, where it’s scattered around four different directories.

  • Jason Petersen: I’ve played around with this. I think I was trying to do fault injection, but it was some dynamically loaded library at a different point on the rpath. I’m kind of familiar with the mechanics of it.

    I just wanted to ask: in the bigger picture, this talks about building extensions that sort of work everywhere. But the problems being solved are just the duplication across the spec files, the Debian files, etc. You still have to build a different artifact for even the same extension on the same version of Postgres on two different versions of Ubuntu, right? Am I missing something? It is not an extension that runs everywhere.

  • Yurii: No, you still have to build against the set of attributes that constitute your target, whether that’s architecture, operating system, flavor. It’s not yet something you can build and just have one binary. I would love to have that, actually! I’ve been pondering a lot about this. There’s an interesting project, not really related to plugins, but if you’ve seen A.P.E. and Cosmopolitan libc, they do portable executables. It’s a very interesting hack that allows you to run binaries on any operating system.

  • Jason Petersen: I expected that to be kind of “pie in the sky.”

  • Yurii: It’s more of a work of art.

  • Jason Petersen: Do you know of other prior art for the rpath? Someone on Mastodon the other day was talking about Ruby — I can’t remember the library, maybe it was ssh — and they were asking, “Do I still have to install this dynamic library?” And they said, “No, we vendor that now; whenever you install this it gets installed within the Ruby structure.” I’m not sure what they’re doing; maybe it’s just static linking. But I was curious if you were aware of any prior art or other packaging systems where the system manages its own dynamic libraries and uses rpath to override the loading of them, so we don’t use the system ones and don’t have to conflict with them. Because I think that’s a really good idea! I just was wondering if there’s any sort of prior art.

  • Daniele: There is an example: Python wheel binaries use rpath. A wheel is a ZIP file with the C extension and all the dependent libraries, with the paths modified so that they can refer to each other in the environment where they’re bundled. There is a toolchain to obtain this packaging — this vendoring — of the system libraries. There are three, actually: one for Unix, one for macOS, one for Windows. But they all more or less achieve the same goal of having libraries find each other in the same directory or in a known directory. So you could take a look at the wheel specification for Python and the implementation. That could be a guideline.

  • Jason Petersen: Cool.

  • Yurii: That’s an excellent reference, thank you.

  • David Wheeler (he/him): More questions?

  • Jeremy S: Yeah, I have one more. Yurii, the build inferencing was really interesting. A couple things stood out to me. One that you mentioned was that you look for the META.json file. That’s kind of neat, just that it’s acknowledged as a useful thing, and a lot of extensions have it and we want to make use of it. I think everybody knows part of the background of this whole series of meetings is that one of the things we’re asking is, how can we improve what’s the next generation of META.json to make all of this better? Maybe I missed this, but what was your high-level takeaway from that whole experience of trying to infer the stuff that wasn’t there, or infer enough information to build something if there isn’t a META.json at all? Do you feel like it worked, that it was successful? That it was an interesting experiment but not really viable long term? How many different extensions did you try it on, and did it work for them? Once you put it together, were you ever able to point it at a brand new extension you’d never seen before and actually have it work? Or was it still where you’d try a new extension and have to add a little bit of extra logic to handle that new extension? What’s your takeaway from that experience?

  • Yurii: The building part is largely unrelated to META.json; that was primarily for the metadata itself. I haven’t used it on a lot of extensions, because I was looking for different cases — extensions that exhibit slightly different patterns — not a whole ton of them yet. I would say that, so far, this is more of a case-by-case scenario, to see what we need to do for a particular type or shape of extension. But generally, what I’ve found so far is that it works pretty nicely for C extensions: it just picks up where all the stuff is, downloads all the necessary versions, and allows discovering new versions — for example, you don’t need to update the specification for a package when there’s a new release; it will just automatically pick that up from the list of tags. Those were the findings so far. I think overall the direction is promising; we just need to continue adjusting the results and see how much further it can be taken and how much more benefit it can bring.

  • Jeremy S: Thank you.

  • Yurii: Any other comments or thoughts?

  • David Wheeler (he/him): Any more questions for Yurii?

  • David Wheeler (he/him): I think this is an interesting space for some research, between Devrim’s presentation talking about how much effort it is to manually maintain all the extensions in the Yum repository, and my own experiments trying to build everything from PGXN, where the success rate is much lower than I’d like. I think there are some interesting challenges to automatically figuring out how things work versus convincing authors to specify in advance.

  • Jeremy S: Yep. Or taking on that maintenance. Kind of like what a spec file maintainer or a Debian package maintainer is doing.

  • Yurii: Yeah, precisely.

Wrap Up

  • David Wheeler (he/him): Thanks, Yurii, for that. I wanted to remind everyone that we have our final Mini-Summit before PGConf on May 15th. That’s two weeks from today at noon Eastern or 4 pm UTC. We’re going to talk about organizing the topics for the Summit itself. I posted a long list of stuff that I’ve extracted from my own brain and lots more topics that I’ve learned in these presentations in the Slack. Please join the community Slack to participate.

    The idea is to winnow the list down to a reasonable size. We’re already full, with about 45 attendees, and we can maybe have a few more with standing room and some hallway-track stuff. We’ll figure that out, but it’s a pretty good size, so I think we’ll be able to take on a good six or maybe eight topics. I’m going to go over them all, and we’ll talk about them and try to make some decisions in advance, so when we get there we don’t have to spend the first hour figuring out what we want to do; we can just dive in.

    And that’s it. Thank you everybody for coming, I really appreciate it. We’ll see you next time!

    • Tobias Bussmann in chat: Thanks for the insights and discussion!

    • Jeremy S: Thank you!