Changelog & Friends – Episode #82

Kaizen! Pipely goes BAM

with Gerhard Lazu

All Episodes

It’s Kaizen 18! Can you believe it? We discuss the recent Fly.io outage, some little features we’ve added since our last Kaizen, our new video-first production, and of course, catch up on all things Pipely! Oh, and Gerhard surprises us (once again). BAM!

Featuring

Sponsors

Retool – The low-code platform for developers to build internal tools. Some of the best teams out there trust Retool… Brex, Coinbase, Plaid, DoorDash, LegalGenius, Amazon, Allbirds, Peloton, and so many more – the developers at these teams trust Retool as the platform to build their internal tools. Try it free at retool.com/changelog

Sentry – Use the code CHANGELOG when you sign up to get $100 off the team plan.

Temporal – Build invincible applications. Manage failures, network outages, flaky endpoints, long-running processes and more, ensuring your workflows never fail. Register for Replay in London, March 3-5 to break free from the status quo.

Notes & Links

📝 Edit Notes

Chapters

1 00:00 Let's Kaizen! 00:38
2 00:38 Sponsor: Retool 02:45
3 03:23 Pager duty & Friends 02:28
4 05:51 Gerhard's watch 03:06
5 08:57 SELECT count(*) FROM appearances 03:02
6 11:59 Changelog was down 06:53
7 18:51 Linkify chapters in Zulip 02:37
8 21:28 Adam eats (thicc) crow 👀 01:38
9 23:07 Pronouncing things is hard 02:34
10 25:41 Sponsor: Sentry 02:04
11 27:45 Video-first feedback 01:30
12 29:15 Adam's new machine 06:18
13 35:33 Gerhard's GPU adventure 01:51
14 37:24 Windows v Linux talk 02:39
15 40:02 WSL 2 01:10
16 41:13 Builder's remorse 01:52
17 43:04 Adam's why 02:02
18 45:06 Video pods launch retro 06:29
19 51:35 YouTube integrations 01:00
20 52:35 Commit-driven co 00:20
21 52:56 Sponsor: Temporal 02:04
22 55:00 CPU.fm talk 02:33
23 57:33 rails new cpu 01:38
24 59:11 Pipely! 01:18
25 1:00:29 Matt Johnson! 02:07
26 1:02:36 Nabeel Sulieman! 05:07
27 1:07:43 Pipely demo 08:56
28 1:16:39 Scale up? 05:20
29 1:22:00 One more thing! 👀 03:33
30 1:25:33 Adam wants his toy 01:28
31 1:27:01 Jerod wants Gerhard's toy 00:44
32 1:27:45 Fly scale fail 01:53
33 1:29:38 Scale costs 03:47
34 1:33:25 Throttling 03:04
35 1:36:29 System memory design 03:40
36 1:40:10 The roadmap 03:33
37 1:43:43 Birthday presents 00:52
38 1:44:34 Zooming out 03:11
39 1:47:45 Two more thing 03:24
40 1:51:10 A guided tour? 01:20
41 1:52:29 makeitwork.tv 03:08
42 1:55:37 Bye, friends 00:25
43 1:56:02 Coming up next 02:35

Transcript

📝 Edit Transcript

Changelog

Play the audio to listen along while you enjoy the transcript. 🎧

We are here to kaizen, which means Gerhard Lazu is also here. What’s up, man?

In the house. Gerhard Lazu in the house. Yes.

Welcome.

Everything’s up. Everything’s up.

That’s right. That’s the DevOps response, isn’t it? Or the sysadmin? I don’t know what you call yourself these days…

Well, it’s just titles… Right? They’re always hard…

Infra engineer? I mean, what is your title, Gerhard?

Officially, head of infrastructure for Dagger.

Okay, cool.

It’s a big role.

Yeah, it is. I’m enjoying it. I’ve grown into it.

Are you on pager duty?

Always. I’m responsible for everyone that’s on pager duty, and I’m responsible that pager duty is set up correctly.

That we are alerted when the right things go down. So yeah.

So you literally use PagerDuty.

No. It’s the placeholder for on-call.

It’s the Kleenex.

Oh, dang…! Is that a burn? Or was it just a fact?

It’s a fact, yeah.

Okay. Well, I know it’s a fact, but was it a burn as well?

A burn? I don’t know about that.

A PagerDuty burn?

I don’t know… Maybe.

Okay…

I never really loved PagerDuty, I have to say… And it’s not what’s behind it, it’s like the whole setup is just too complex, I think.

I will say this about it, because this is all I know about it… Great name. It’s got a great name. That’s all I can say about it.

Right. Well, I prefer incident. Incident.io? I think that’s even a better name. When there’s an incident –

Really? Why, because we don’t have pagers anymore?

Pretty much. Yeah. Who has pagers…?

That’s true. I guess it’s a terrible name, but…

Well, now it’s just a “page that person”, which means call that person, or email that person, or Slack that person, or Zulip –

Just get a hold of them, however means possible.

Yeah, exactly. And if anything, if you only use a pager, it means you don’t have a backup. And if something goes down, you definitely want your whatever is monitoring to have multiple layers of redundancy, right?

Maybe you can just wear two pagers; like, slide under your belt, so you can just clip a second one next to it.

But it’s using a single network, so you need redundancy, you need cell phones, you need emails… The whole thing.

Well, two of everything, I guess.

Can’t silence it… That’s my biggest issue. I forget I silenced my phone, and then I’m like “Why did I not get that text? Oh, because my phone is on silent.”

Do you normally not have it on silent? My phone’s been on silent for 12 years.

Same here.

I don’t know, man… I don’t know.

That’s why I got the watch, right? The watch will alert.

Yeah. I feel like the phone is such a hard thing, man… I’m just like, when to make it alertable, let’s just say, or like something where it can bother me… Because I miss critical texts, or emails, or… Not so much emails, but more like texts or phone calls.

I wore the watch for a couple of years, and thought that I needed it in my life, and then the watch broke, and that made me ask the question, “Do I really need the watch?” And I just decided 300 bucks or whatever, I’m going to go without it for a couple of weeks and just see… I never felt more freedom than when my watch broke.

[unintelligible 00:06:28.02]

Oh, I haven’t bought one since. I haven’t had a watch for over a year now, and I don’t think I’m going to go back.

What kind of watch have you got, Gerhard?

It’s the Apple watch.

But which one? The Ultra 2, or the Ultra 1?

I bet it’s the biggest, most expensive one.

Well, it is the Ultra… I was waiting for that.

Got it.

I love the extra GPSes, and everything… So it has like a couple of things in it. Ultra 2, that would be the new one, and this would be the backup. [laughter] That’s what we’re working towards. But I do like – especially when I drive, I love Apple Maps. That integration is really, really good.

Not sure if you’ve tried it, but when you have to take an exit, or you have to take a turn, it just vibrates. It’s very, very helpful.

Yeah… I’m with you there, but I’m not with you there. I feel like I like the Apple Maps, and I go there, but I use CarPlay instead, rather than the watch. Let the car be the alert…

And she’ll just talk to you. She’ll just be like “Take your next right.”

Or just pay attention to the map.

Yeah, but you’ve got to pay attention to the road, Adam. Also, you’ve got the game on your – your handheld. You’ve got to watch the game while you’re driving…

That’s right. I’m playing PlayStation 2 while driving, and – that’s a Fast and Furious throwback, Jerod.

Oh, I thought maybe it was a Silicon Valley reference…

No, man. I’ve got more in me, you know? Deeper pop culture.

You’re not a single trick pony? This guy has more than –

Do you know Fast and Furious, the very first episode, or the very first, I guess, movie?

That’s the one that I remember. Yeah.

Before the race, the kid was playing PlayStation. It was actually PlayStation 1.

In his car, in the console, prior to the race. And it was like a flex. It was like “Oh my gosh, I’ve got to trick out my car.

I have to have a PlayStation console in my dashboard.”

That’s not realistic, because that sucker did not have – what was it called? When the CDs would just jostle…

Antivibration?

Yeah. You know, the old Walkman that took an actual CD, and you walked around with it… It would skip constantly. Skip protection.

Totally. Yeah.

I’m pretty sure PlayStation 1 had the same problem. If you were driving a car and playing it, you were probably skipping all over the place. Gerhard, get us on track here. We’re here to Kaizen. We’ll talk about movie references the entire show… Kaizen 18.

So I realized that this was, or will be - when it will come out - my 1-1-1 episode on the Changelog.

Oh, wow. You like that number. It’s not round, but it’s symmetrical… I don’t know what it is. It’s all ones.

It’s three ones. I mean, that happens rarely. Like, the next time - twos… I think it’s going to be such a long time, right? If we only do the Kaizens, I think that will last me to the end of life, honestly.

That might.

Yeah. I mean, two and a half months… 1-1-1 divided by two and a half months… That’s a lot of years, I think.

How many Kaizens do you think we’re going to make it to before one of us, you know, kicks the can?

Gosh, Jerod…

[laughs]

Well, hopefully we’ll get to a hundred… That’s what I would like to see. We’ll get to a hundred, at least.

Yeah, a hundred would be awesome.

Yeah. I mean, we won’t stop like Ship It, at 90… This one has to go to a hundred.

That’s right.

That’s what I’m thinking.

One hundred. Wow. So we’ve got 75 more episodes to go.

And that’s, I think… What is it? 40 years?

[unintelligible 00:09:54.28]

Let’s just acknowledge it and move on. Yeah.

Yeah. It’s a lot of years.

That’s a lot.

What about yours? Do you know what episode appearance this will be for you?

Mostly all of them.

Well, we could look it up easily, because it’s on the person page.

Oh, it is, yes.

I love that page. I don’t know if anyone is aware of it, but if you’ve been a guest on the Changelog, or even if – I think if you replied; I’m not sure about that part… But it will show all your interactions, or all your references exactly on the Changelog. So I use that quite a lot. So Changelog.com/, and what is it for the person?

Person/slug.

Person… Alright. Gerhard. Cool. So there you go. 110 episodes.

So I’ve been on 909 episodes.

909? Wow…

Wow. That’s a lot. Yeah, 909. Crazy.

Yup. So this will be 910 for me. Or maybe 911 by the time it comes out… I don’t know, because Wednesday’s show is Adam by himself. So this will be 910 for me.

Yeah. Do you think this is the year that you’ll crack a thousand? Is this it?

Good question. Three a week…

No. Three a week? Yeah, no.

Three a week times 50… Yeah, it will. I’ll get there.

It’s February already.

That’s true. [laughs] I keep thinking it’s the start of the year.

It’s March, actually.

Time is compressed.

Yeah, so it’s possible. Maybe our final episode of the year will be the thousandth.

Wow. Okay. And 802. What happened there? How come you have more episodes than Adam? What’s this all about?

News… He’s got a hack.

I was on JS Party for a while… JS Party and News. Yeah.

Okay. Okay.

Alright, so I’m winning.

Alright so far…

No one’s catching me. Or losing, depending on how you think about it.

Yeah, I guess I couldn’t catch you. Could I? It would probably be pretty hard to do that.

You could take over News if you want… [laughs] He’s like, “It’s not worth it.”

I mean…

Funny news, funny news maybe…

You know, scaling is a people thing.

So let’s talk about something that happened… Let’s start with the low. Well, Changelog was down for four hours.

[00:12:08.16] Oh, let’s not talk about it…

Did anyone notice? Well, I’m really wondering, did you notice that Changelog was down?

I did.

You did. Okay. How did it happen for you?

Well, I went to the website and it wasn’t there.

Right. Okay. Okay, cool.

The classic way.

Alright.

I assume it was signed in people only, because I didn’t actually check, but I’m always signed in… And so we will cache with Fastly if you’re not signed in. But if you have a sign in cookie, we pass it through to the app every time. And the app was down, and so… I noticed, because I went to go share something and wanted to look at something, and I don’t know, it was down. Although I think I already knew that, because maybe you posted it… I don’t know. But I definitely just went to the website and it 503-ed, or whatever.
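The pass-through rule Jerod describes can be sketched roughly like this. This is a hypothetical illustration, not Changelog’s actual Fastly config; the cookie name and action strings are made up:

```python
def cache_action(request_headers: dict) -> str:
    """Decide whether the CDN serves from cache or passes to the origin.

    Rough sketch of the rule described here; the cookie name
    "_app_session" and the action strings are invented for illustration.
    """
    cookie = request_headers.get("Cookie", "")
    if "_app_session" in cookie:
        # Signed-in visitor: every request hits the app, so an
        # origin outage means errors for them.
        return "pass"
    # Anonymous visitor: cached pages keep serving through an outage.
    return "lookup"


assert cache_action({}) == "lookup"
assert cache_action({"Cookie": "_app_session=abc123"}) == "pass"
```

That split is why the outage was mostly invisible to anonymous readers while signed-in visitors saw 503s.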

So for anyone, if you’re wondering if Changelog is down, go to status.changelog.com and you will see what is down, when it is down… So in this particular case, we had a previous incident, there’s a bit of a red right there… And this is the origin. So the origin was down. And if you click on that, it takes you to the status, and you can see the whole history.

So this is something that I do update whenever there’s an issue like this, especially when it’s a big one. We had a few small ones, just like a few minutes… But those don’t show up. But this one was significant. And February 16th, 10 AM. Actually, it was before 10 AM. So that was a Saturday… A Saturday or a Sunday? No, I think it was a Sunday. February 16th…

Yeah, I know it was on Sunday.

Yeah, it was a Sunday. So that was half the Sunday, looking at this. So what happened? Well, if you go to the discussion, 538, that’s where all the links are… But basically, as it happened, it was a Fly issue, and the fly.io [unintelligible 00:13:53.05] Fly.io - and I’m going to scroll down to that particular message - has providers. So in this case, one of the upstream networks… So let’s see. Let’s see, where is it? I’m looking for… There. It was a far upstream issue, and I’m now looking at a post from Kurt, the CEO of Fly… And he was saying that “The failure was far upstream from us, and a single point of network failure.” So one of their vendors let them down, basically, and there’s not much that they could do about it.

So this is what happens when – because we all depend on other systems, and other systems are always upstream systems… You have internet, the internet provider, I’m sure, has transit links and peering links and all of that… Some of those can be down if you don’t run two of everything. In this case, they didn’t have two of everything. The switch went down, and it took four hours for someone to fix it. And it was, I think, Sunday, very early morning on the East Coast, which just [unintelligible 00:15:03.13]

Somebody had a bad Sunday…

Well, lots of us did, but one person in particular probably…

Their virtual pager went off.

Yeah. So that was not great… But I think one of the key takeaways for us is that in terms of how many requests didn’t go through… So final impact - I posted, again, on the Fly community… So for the whole outage, actually, our SLI for successful HTTP requests - and that was like the last 24 hours - dropped to 97.40%. So well below three nines, even four nines… But it’s still 97% of the requests were served. Most of them, they go to our object storage. All the MP3s, all the static assets, all of that…

[00:15:59.09] The website itself - I mean, some of the pages, the most visited ones, they are being cached, and they were served from the CDN. Fastly in this case. So if you were not signed in, most likely you will not have noticed this. And I think for many people that consume the content through their podcast players, or from YouTube, wherever you get the Changelog content from, I don’t think you will have noticed this. This was very specific to the app. And if you have, let us know.
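The SLI arithmetic is simple enough to check. Only the 97.40% figure comes from the episode; the request counts below are invented to illustrate what that percentage implies:

```python
def availability_sli(total_requests: int, failed_requests: int) -> float:
    """Percentage of HTTP requests served successfully over a window."""
    return 100.0 * (total_requests - failed_requests) / total_requests


# A 97.40% SLI over 24 hours means 2.6% of requests failed --
# e.g. 26,000 failures out of an invented 1,000,000 total.
assert abs(availability_sli(1_000_000, 26_000) - 97.40) < 1e-6
```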

MP3s continue to serve, right?

Yeah. Exactly.

So yeah, unlikely that people really noticed, except for the people who noticed…

Yeah. I mean, there were some, for sure, because we can see that a bunch of requests failed… But in the big scheme of things, it wasn’t that much. Now, did we run two Changelog application instances?

We did not.

We did not. Actually – well, we did, but they were all in the same region.

In the same region.

So this was a regional failure… All of Ashburn, Virginia, in this case Fly’s Ashburn, Virginia, IAD, that’s the one that went down…

That’s a major one.

Yeah, that’s actually the primary one. What made this worse is that Fly itself, the control plane for the machines was running in that single region, which meant that no one could scale their apps. So if you happened to have a single or multiple app instances running only in that region, everything would have been down. If you could have scaled it while this was ongoing, you could just basically spin up another application in a different region. But that was not possible. So again, there were a couple of things that failed in surprising ways… And for me, what was surprising is that - well, we did have two application instances, but they were both in the same region, and the region went down. So now we have another one running in EWR. I think it’s New Jersey, somewhere in New Jersey… So yeah, we’re good.

So we’re good to go.

We’re good to go.

Why don’t you put that one somewhere closer to yourself, Gerhard?

Well, if I did, it would still need to go to Neon. That’s where the database is.

Good point.

So that would introduce a lot of latency. Now, if you could distribute the database, and we could have a couple of free replicas, which - it’s something that I’m thinking about, this would make more sense. Do we want to do that?

Oh, I don’t know… Do you have other stuff to work on?

I do, but… Yeah.

I don’t think we need to do that.

Chasing the nines is fun.

Cool. What’s next?

Alright. So there’s the thread… Linkify chapters in Zulip and new episode messages. I remember we talked about that in the last episode…

That’s right.

…and I think it was like a day before, two days before, it just landed. How amazing was that?

So amazing. Probably the coolest thing that happened in this whole Kaizen. No, just kidding…

So far, so far. Hang on, hang on.

So far we’ve had an outage and a feature.

Yeah. So what was it like to implement it?

It was not very hard, from what I remember. 51 additions and 18 subtractions, so that’s a small feature… You know, just a little bit of code to go ahead and linkify those suckers. So for those who don’t know what we’re talking about, when a new episode is published, our system automatically notifies various social things, one of which is our awesome Zulip community. If you’re not in there, what’s wrong with you? Changelog.com/community. Get yourself a Zulip. It’s totally free, and you’ll be able to chat about the shows after they come out.

[00:19:57.06] And so every time a show comes out, it posts in there “Hey, new episode.” It has the summary, the title and the link to listen to it, and we’ve also now embedded the chapters as markdown tables… And that was already there. That’s not this feature. What I didn’t do prior was I didn’t linkify the actual chapter timestamps. So you can click on a timestamp and immediately start listening to it. And so that’s what I added, was I made those timestamp links so you can click and listen from that spot… Which was requested by the both of you on the last Kaizen.
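A minimal sketch of what that linkify step might look like. The episode URL and the `#t=` fragment are assumptions for illustration, not the actual implementation:

```python
import re

# Hypothetical episode URL and "#t=" deep-link fragment, for illustration.
EPISODE_URL = "https://changelog.com/friends/82"

def to_seconds(ts: str) -> int:
    """Convert "MM:SS" or "H:MM:SS" into a second offset."""
    secs = 0
    for part in ts.split(":"):
        secs = secs * 60 + int(part)
    return secs

def linkify_chapters(markdown_table: str) -> str:
    """Wrap each chapter timestamp in a markdown link that deep-links
    into the episode player at that offset."""
    timestamp = re.compile(r"\b\d{1,2}(?::\d{2}){1,2}\b")
    def repl(m: re.Match) -> str:
        ts = m.group(0)
        return f"[{ts}]({EPISODE_URL}#t={to_seconds(ts)})"
    return timestamp.sub(repl, markdown_table)


row = "| 24 | 59:11 | Pipely! |"
assert linkify_chapters(row) == (
    "| 24 | [59:11](https://changelog.com/friends/82#t=3551) | Pipely! |"
)
```

The chapter number alone never matches because the pattern requires at least one `:MM` group, so only the timestamps get wrapped.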

And so since we have a three-day turnaround between recording and shipping each episode, I actually shipped the feature out, I think, prior to that episode dropping.

That’s amazing.

Because like I said, it was half an hour of coding. But it’s useful. Those are the best features, right? A little bit of work, lots of value.

Exactly. I’m wondering if anyone else is using it, or if they noticed it, and what do you think? Useful?

Good question. Let us know in the Zulip comments. Useful? Useless?

Or do we revert it? Do we revert the commit? [laughs] Not going to happen, though.

Yeah, I just don’t see any reason why you’d take the links away. If one person likes it - we know Gerhard likes it - then why not?

That’s really cool. Did you click one of those links, Adam, since the feature landed?

I would say no.

You said you would. You said you were going to click on them.

Did I say that?

Something like that.

Go back and quote me. I want to hear. Jason, pull up a quote. If I said it, I want to know I said it. I’ll eat some – what do you call that? Eat crow? Do you say eat crow?

Yeah, eat crow.

I want to eat some crow, man. I want to eat some chicken. Eat more chicken. I do like the feature being there. I think that I’m just a go there and do it kind of person, not the stay here and click around kind of person… Although I do like – what I like about Zulip and what I like about what this offers, I believe, is that we tend to have thick conversations, much thicker than we had in Slack. And so one of my biggest excitements, I would say, if that’s even a word, happiness levels, pick your 8:34 in the morning word… This is an earlier recording time, as you can probably tell… I’m going to do a [unintelligible 00:22:16.20] a drink of coffee.

Are you apologizing for your lack of sharpness, or what?

Yes! Yes, yes, I am. I am not very sharp in this moment.

The conversation must be boring, Jerod. That’s what it is. We’re not good hosts. [laughs]

Adam’s boring himself over there.

That’s right. Geez… No, I was drinking the coffee.

Alright. Well, did you bring some crow?

No, I’ve got more to say. I’ve got more to say. What I enjoy – if you couldn’t tell, I’m getting there… It’s how thick these comments are in Zulip. So back to what Jerod said, you are missing out. Changelog.com/community…

The quality. When you say thick, you’re talking about the quality.

Thick. Good. Yes. Sorry. I mean, is that not clear enough? Thick comments?

[laughs] No, I’m just making it clear.

Thick comments actually mean they’re not very good. [unintelligible 00:22:57.04]

Well, it depends if you like thick or not, you know?

I think thick is always better than thin. I mean, go choose yourself a Reese cup, or whatever, right?

A what?

What do you call it, a Reese? You pick your nomenclature for your Reese cups, man. I like the big cups, okay?

Did you say a Reese cup? [laughs] What is a Reese cup?

Can you show us, Adam? Can you show the viewers what that means?

It’s this, okay? That’s what it is.

It’s big.

It’s big.

They have a big cup. Okay, we’ll call them Reese. My wife makes fun of me, too. I used to say Nike… I had no idea it was Nike my whole life. Okay?

You said Nike?

Yes. Like a fool. Like a fool.

Okay. [laughs] I’ve never heard anybody say Nike. This is amazing. How many years? How many years did you say that? [laughter]

That’s amazing. You never realized it until you were 35?

[00:24:03.12] I just didn’t know there were two ways to say it. [laughter]

That’s a blast…

Back on track, guys. You got me totally blushing way too early in the morning. Okay, anyways.

Alright. Sorry.

The comments are vast, lots… They are plentiful… They are thoughtful… And there’s lots of commentary in our Zulip. So I think what these – back to the links. Gosh… What they provide is if you are there and you’re in conversation and you’re using that table as a reference point - well, then you’re obviously going to be able to go and click directly from there… Which I think is super-cool, because you have the useful tool where the conversation’s happening.

Okay. Well, I’ve found a quote from our previous episode.

Oh, boy…

Did you bring your crow? Because you might have to eat a little bit of it.

What did I say?

I said “I could make those links clickable, and maybe I’ll do that.” And then Gerhard Lazu said “I would love that.” And then Adam Stacoviak said “I would concur and plus one that, because that would make me click a chapter start time easily… Because it would be clickable, for one. And I want to now.”

It’s just obvious…

So you said it would make you click it, because it’d be clickable, and you want to now.

Yeah. And I have, but I’m not like a – I’m not a daily clicker.

Oh, I thought you just said you hadn’t.

I clicked at least one…

Okay. Alright. Well, controversy solved.

At least one. My gosh, this bus is heavy I’m under here.

Alright. So I just wanted to close that loop, and then we can move on.

So I love this feature. Great job, Jerod.

He uses it all the time. [laughs]

I use it daily. I’m a daily active user of this feature.

Awesome.

Break: [00:25:41.18]

So yeah, so that was a good one. I enjoyed that it landed. We will talk about the YouTube videos, for sure, and that’s going to come up… Or we can talk about it now, by the way, because really, that’s for me – that just like took the highlight in terms of features.

So once the video podcast landed, that was just so amazing. So I am still watching Adam’s podcast, Adam’s video with Techno Tim. I’m almost at the end. That was such a great conversation.

Thank you, man.

Had it not been for the video part, I would have missed, for example, Tim’s background, the little mini rack that he was building, the little body language… It was just so good. I’m enjoying that a lot more than if it was just audio only, because there’s so much more detail in that content.

Well, that makes me happy.

Yeah. So that’s the one that – and it doesn’t often happen that I listen to a Changelog episode from start to finish. I usually have parts, which is where the links were coming in very handy…

Yeah, chapters.

Exactly, the chapters. But this one episode, I’m like near the end and I just cannot wait to see how it ends.

Let me ask you a question. If Tim and I did that more frequently, do you think that’d be a good thing?

Yes. But I think that you need to up your game and start delivering on some of the ideas. Like, start implementing some of your ideas to see how they work in practice.

Such as?

Such as… So you were saying about building a new PC. So I’m curious, what did you do about that? Did you buy –

I’ve built a thing.

Did you build a thing?

Oh, wow. Okay.

I’ve got a beefy AI homelab right now.

Very nice. What are you running?

Oh, you want the words here? Okay, fine. I will tell you. I will tell you the words. Let me see if I can –

Like, what’s the case? Did you go for Fractal? I know you’re a big fan…

Yeah, I did go Fractal.

It feels like we need some pictures. I mean, if this will be in the B-roll for Jason, I would love to see that. What GPU did you go for? I’m very curious about that. Like, that was something –

Well, so I repurposed. As you do anything, you start with what you have. So rather than go out and spend the five grand that I would really love to spend on something, all I did was just go pick up a 3090, and add it to the existing machine I already had. So I had a – I had just built this beefy machine for my Plex machine, which was like just overkill.

I just wanted to build something. So my motherboard is an Asus workstation-level motherboard. It’s a 680 ACE, and it’s got four DIMMs of DDR5 RAM available, up to 128 gigabytes of RAM… So I’ve got that, I’ve got the 13900K, so it’s an older generation CPU, but it’s still very, very capable. Couple that with the RTX TUF Gaming 3090, and the maxed out RAM, and an NVMe SSD… Well, you’ve got yourself a really fast machine. And that’s my stack right there, basically.

That’s very nice. Network?

It’s 2.5, by default.

2.5. Okay. Okay, okay. Are you thinking of going higher on the network?

So the motherboard doesn’t offer it by default, but I can add a card. I don’t know if I’ve maxed out on my PCIe lanes though, with my 16-lane requirement for the GPU.

Yeah. Well, if you have NVMes, it means that you have only one or two. You can’t have more than two.

There’s three slots on the board. I’m only running one. I only have a need for one.

Right. So the reason why I ask that is because as soon as you – I think as soon as you fill the second slot, you’ll halve the lanes for your GPU. It will go from 16 to 8.

Yeah, I don’t want to do that.

Because those lanes are shared with the NVMe drives.

Actually, in practice it’s not as bad as you would think. I did the same, so I maxed out the NVMes on another machine, and because I maxed them out, I have like four or five… And because of that, my GPU - which is a 4080 - dropped to eight lanes. But that’s enough. The drop in performance is so little, because I don’t game on it heavily.

[00:32:00.10] Yeah, it’s so fast already. What you really want is the storage. You want the VRAM, not so much the speed necessarily. Unless you really are pushing the speed, and you’re doing AI stuff and you’ve got a serious-parameter LLM sitting there or whatever, then maybe you want those tokens to be as fast as possible, because that’s the whole point.

The actual difference is more like a few percent. So if you go from 16 to 8, it’s just a few percent.
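The lane arithmetic here checks out; a rough sketch, assuming PCIe 4.0 figures (both the 3090 and 4080 are PCIe 4.0 parts):

```python
# PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding,
# so each lane carries just under 2 GB/s in each direction.
PCIE4_PER_LANE_GBPS = 16 * (128 / 130) / 8  # ~1.97 GB/s

def link_bandwidth_gbps(lanes: int) -> float:
    """Peak one-direction bandwidth of a PCIe 4.0 link."""
    return lanes * PCIE4_PER_LANE_GBPS


x16 = link_bandwidth_gbps(16)  # ~31.5 GB/s
x8 = link_bandwidth_gbps(8)    # ~15.8 GB/s
# Halving the lanes exactly halves peak bandwidth...
assert abs(x16 - 2 * x8) < 1e-9
# ...but most gaming and inference workloads rarely saturate the link,
# which is why the measured difference is only a few percent.
```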

This is where I would love to geek out at. This is what I love about these conversations with Tim. It’s just they’re so infrequent. It’s once per year, so we’re more catching up versus digging deep.

Yeah, I would love you to have these more often, and especially like – you know, you had that conversation, you said about some of your plans, and a lot of the things that you mentioned now, I remember you mentioning when you were talking to Tim. And so how did you follow through on that? Did you stick to what you said, or did you change your mind as you were building it? It sounds to me that you haven’t. I remember Tim mentioning the 3090, so I’m very curious, did you buy it off eBay? Because you were mentioning about your good experience with eBay.

Yeah, I got it on eBay. Really good experience on eBay. I think I got it for like 800 bucks.

It’s not the worst price ever. US dollars. It’s basically brand new, it’s super-clean. I tested it the moment I got it. I did like all these parameter tests… Initially, I spun up an Ubuntu installation, ran into issues with Docker and GPU… So I went to the dark side. I installed Windows 11 Pro. And so my AI home lab right now is being powered by Windows 11 Pro. I know Jerod is [unintelligible 00:33:39.16] over there on me, which I’m cool with… But man, you’ve got to explore. I love the idea – I’ve never played with Windows in like… And I told my son this. I’m like “It’s been 20 years since I’ve played with Windows. That long.” And I feel like there’s a lot of cool stuff there, but man, they’ve got some really terrible warts over there. Like, just so bad. It’s developer-hostile now, not just user-hostile. There’s ways to clean it up… Chris Titus has a script that you can run via terminal, the administrator-level terminal, and remove a bunch of stuff, and sort of like make some things nicer… Which I think is super-cool. It makes it a little easier as a non-Windows user to like easily get to a certain state…

But yeah, I played with it at first, I did some benchmark testing against it… I really pushed it as hard as I could to just confirm it was a good buy. And it was a good buy.

Very nice.

But I started with where I was at, versus “Okay, let me get a brand new motherboard, brand new stick of RAM…” And I would love that. That’s the fun side of building PCs, is like I really wish there was a better operating system that wasn’t… Gosh, will I get punched in the face for saying this? That wasn’t Linux. I will say though, this is the first time I’ve played with Ubuntu Desktop, in a long time. And that has actually come a very, very long way. Ubuntu Desktop, I think, is probably the closest contender to a non-macOS operating system that’s fun to play with GUI. Now, albeit I have not explored PopOS, and others. I just haven’t had a – you only test things that you’re curious about. I just haven’t been curious about desktop-level Linux stuff yet. Mainly because it’s been the year of the desktop Linux forever, and it’s never come, truly. I’m hopeful though that one day – I think it’s probably the closest it’s been in a very long time.

So when I started my adventure in GPUs - so I needed it to do the video editing properly… I went Linux first. I was saying, “You know what? I’m not going to go to Windows. What is the best Linux distribution that has good support for GPUs out of the box?” It just has the drivers pre-installed, and everything just works. And the tiling manager works as well, because that can be sometimes a pain.

PopOS was the one that kept coming up very high, and I said “Let’s just try it.” So I did, and I’ve been running it for coming up on two years now. And before that I had NixOS.

Of course you would.

So this was a machine that went from NixOS to PopOS, and I’m enjoying PopOS more. It feels like a more natural way of using it.

What’s it based off?

Ubuntu.

So it’s Ubuntu-based. But a lot of the little things that maybe don’t quite work in Ubuntu seem a bit more polished in PopOS. Specifically the NVIDIA integration and the tiling manager. Things are just a bit more, I don’t know, cohesive. It feels a bit more cohesive.

So this machine I’m using for a bunch of things, and while I started editing the videos on it with DaVinci Resolve, PopOS is not an operating system that DaVinci Resolve supports. So then I was forced to go to Windows. So when Tim said in that interview that you have to try them all, I realized “Yes, I actually went through the same journey.”

That’s what compelled me, too. He’s like “Adam, you’ve gotta try them all, man.”

So I use Linux for something, I use Mac for something else, and I use Windows for editing… Because apparently, the editing software works and has like the best support, codecs and things like that, they work really well on Windows. The operating system itself… Oh, wow. I don’t know what words to use that would be politically correct, but also accurate…

They make it so hard to do everything. Manage your own user. Like, even manage your own user. There’s a control panel, and there’s user accounts there. Then you’ve obviously got system settings, or just settings, which you have those things there… There’s like three places to do pretty much anything, and you’ve gotta do three things to change one thing, and they’re all in different places. And some of them are in legacy-looking applications… Good luck even finding it in the sea of things you can find.

I just think that somebody there is not empowered to fix it, or somebody doesn’t care. I’m not sure which one it is, but they really could have a… Because of the reasons you’re saying, this out of the box support. I had such trouble getting my GPU to play well with the operating system, and then being able to pass it through to Docker… I had to like go and add some things, and I had to go to documentation that was like seemingly foreign… I just felt a little lost on Ubuntu Linux to try to get the initial state I wanted it to be at… And that’s my default. So I didn’t try to go to PopOS, or explore. I could have, but because I had that conversation with Tim, he’s like “You should try them all”, so I was like “Well…” I tried Windows 11, and… Not the worst, but man… Even – I felt successful just SSH-ing into it. I had to post this video to our general channel, and then Jerod had to go in there and backslash me. [unintelligible 00:38:58.22] Love that song. Just because it’s such success to SSH into the machine. You had to go and install the OpenSSH server… The client was there by default, but the server was not. And then I think my original username had a space in it, so when I was SSH-ing into it, it wasn’t Adam, it was just something else… I don’t know. Trying to find my slug for my username even… I don’t even know if I found it. I think I luckily found something to like swap it out, and restart the SSH server, and I was in.

Well, if you have multiple NVMe drives, you could always do a boot, and you could try –

…another Linux distribution. I can recommend PopOS. They have a new graphical desktop environment, Cosmic; that’s new. It’s not as stable yet, but I hear very good things about it. They rewrote everything in Rust. Apparently it’s amazing. I haven’t tried it yet. I’m still on the old one.

[00:40:02.17] What are your thoughts on WSL2, and how it integrates? I haven’t explored that deeply, but I have a lot of hope that there’s cool integration. One thing I know I can do is I can SSH from one machine to another, rsync files from that via Ubuntu in WSL2 on Windows, and I can run Linuxy things, or if I have an operating system or whatever installed to WSL, a distro, I should say… What’s your experience there with that?

I tried it. It’s okay. I mean, it gives you a close enough Linux experience. It’s much better than it used to be before. PowerShell - I just can’t get along with it. I just wouldn’t use it. Command Prompt - seriously. That was like 20 years ago.

The thing is still around? So yeah, legacy, but I think good legacy.

So WSL2, I think it’s a good feature, but Windows itself as an operating system, as a package, the outer package just feels wrong to me.

It does feel wrong…

And I use it only for specific reasons. So DaVinci Resolve - I have a decent experience with that. If I had to do this all over again, I would get a Mac - an M2 Ultra, or an M4 Ultra when they come out - a really powerful CPU and a GPU… But the RTX, like a 4080 or 3090 - that is a level of hacking that you just don’t have in the Mac world. So I just wanted to try it out. It was okay. I mean, the Windows workstation, for example - that has a 4090. It’s a very loud system. I don’t think people realize how quiet the Macs are.

So quiet.

Whether it’s a laptop, whether it’s a Studio, whether it’s a Mini… They’re like whisper-quiet. And this first Linux workstation which I built - it’s a fanless one. The PSU has no fans… There’s no fan in the system, and I love it for that. And with NVMe, it has no spinning disks… Which, you know, is just like a great –

Fast and silent, yeah.

Exactly. It’s a great feature. In comparison, the Windows machine is just the opposite. It’s just loud, it’s just hot, very hot… And it’s a 13900KS. It’s like the top of the range 13th-gen series.

Oh, okay. The KS is overclockable, I believe.

Exactly, yeah. It boosts all the way up to six, six-and-something gigahertz.

So we have the same CPU then.

Yeah, yeah.

Except for you’ve got the overclockable version of it.

Yeah. It has also like 192 gigs of RAM, so it’s like fully maxed out… NVMe, the whole – it’s like a fully maxed out, or it used to be fully maxed out PC maybe about a year ago. So yeah. It’s okay, but trying that world, it is my editing machine. And I love when Tim said that. You need to have roles for your machines, and that’s what it is. And if it was to break down, that’s okay; there’s another machine to use to replace it.

Your comparison though, the fans and the noise level… So the exploration for me is not “Okay, let me –” And I think for now I’m like “Yes, let’s make this a creator PC of some sort. Let’s explore this world.” I don’t know if I’ll stay there forever, but I’m enjoying the exploration. I’m not enjoying it because it’s Windows necessarily. It’s enjoyable because it’s new territory. It’s newfound “How does this work? Does this fit for me? If it does, where does it fit?” I will 100% concur and agree that while this machine spins – its fans just spin up opening applications. It doesn’t need to, it’s got this beefy CPU. So for whatever reason, the front three fans spin up for 10 seconds, just enough to hear it. And it goes back down, and it kind of cools off. Or if you ask a big question in Ollama, or whatever, it’s obviously going to spin up for the duration of that question. So it’s by design doing that.

[00:44:03.11] Will the Mac world supersede this in a smaller, easier package, that’s silent, and less power hungry? That’s cool. What they’re doing there is super-cool. But you can’t build it yourself, and it’s so sad.

I know, I know.

Anyways, we can probably move on, but that’s what I love about building PCs. It’s just the exploration of the hardware, how does it work, what works together… That kind of stuff.

Yeah, me too. Me too. And I think we’re at a stage where it does make sense to have a few lying around. Have a Windows machine. But if you have to do testing or anything like that, use the Linux machine. I think it’s a very eye-opening experience as to what is possible. And then if Mac is your default, or if not, if you have the opportunity to get maybe a Mac Mini, do that as well. And then you will find the one that you love and the one that’s your daily driver, and you have a couple as backups when something goes wrong… Because it does; it does happen.

Yeah, man.

Well, talking about podcasts - video podcasts, because that’s how we started… I don’t think we finished that conversation. There’s so many new features around YouTube and around content on YouTube. I think the reactions have been mostly positive. There was a whole Zulip discussion about it - I don’t contribute to many, but this one I did contribute to… And February 1st I even got some love hearts from a few of you, Nabeel and Marsh… So thank you very much for that. But what did you think about launching video podcasts? How was that transition? How was that new chapter?

I think it’s going pretty well. I guess I don’t consider it to be over with. Maybe it is, because I guess a lot of what we think about is production workflow, and we’re constantly trying to improve that, and make it better… I would say that we successfully went video-first now, and we have systems in place that we can do that reliably… I had to build a few things, and we had to figure out a lot with regards to chapters and timestamps, and how we handle the videos on YouTube, versus the podcast episodes and audio… And all the nuts and bolts I think were fine. We just kind of figured it all out and did it. Nothing really was too difficult there.

The response has been positive. I think a lot of our audio listeners have a little trepidation, because they think “Is it going to become a YouTube show?” and they never want to listen to it on YouTube, which I don’t either, honestly. We’re doing this for people who like that kind of thing, like Gerhard, I guess, and others… And I acknowledge that you all are out there, and we appreciate that you are. And we want you to watch it on YouTube, which is why we came there. But our existing audience - very few of them find much value in the videos, I think. Or the ones who at least are vocal, don’t. And I get that. And of course, the trepidation is like “Well, will the audio suffer? And will we start to pull a thing up on the screen and have reactions to it without explaining what we’re looking at?” I don’t ever want to get there. Hopefully we can be self-aware and always remember that we have a listener, not just a viewer, and explain what we’re looking at if we are looking at something…

So for them, I understand, because if you love something and it’s changing, you just hope that it doesn’t change for the worse. And so hopefully we haven’t done that. I think most people who had trepidation, at least so far, have been fine with the change. They haven’t noticed much of a difference.

And for those who love video podcasts, or watching conversations on YouTube… Because is it a podcast, actually? I guess YouTube thinks it is. We’re there now, and people are watching. We get 500 to 1,000 watches on a video. We hope to grow that. And no real complaints there, I don’t think, besides your random YouTube troll, which we’ve had trolls our entire career, so we don’t feed them or care about them very much. That’s my initial thoughts. Adam, anything to add or subtract?

[00:48:16.11] I ran into – because I was actually talking to my son last night… I was like “Dude”, because my son’s nine, and I’m about to give him an Ubuntu desktop machine to play with. I’m going to start teaching him Linux. And I was like excited, because I had just SSH-ed – literally maybe earlier that day, SSH-ed into this Windows machine. I was like “Success!” you know? And I was like referencing embedded systems, and why it’s so cool, how Linux is so cool… And I’m like “Do you want to see something cool?” And I went to YouTube and I searched “embedded Changelog”, and I just searched those two things and it came up with the embedded podcast we did, Jerod, that you’re aware of.

And I go there and there’s this comment that’s like 500 words deep. And I’m like – I had no idea, one, that this comment was here. And two, I was re-revelationed, I suppose, in terms of how cool this move is, that we’ve got this new commentary level… And the person’s like “I like this podcast, I’d love to hear more”, and they kind of go into all this stuff. Now, the person doesn’t have a username, they don’t have an avatar, so that’s kind of sad… But I’m still hopeful that there’s more like that, that are thicker. Geez, y’all don’t like that word.

I don’t dislike it.

I think thick is a good thing. Anyways, I won’t go back there. It’s an exhaustive, thoughtful comment, that I haven’t even read the whole thing yet, but I was like “Wow, there’s this super-huge comment that somebody’s like actually talking about relevant things, and not how we suck.” So that was cool. I loved that.

I was pushing for this, because I was like “This is what we need to do. There’s a whole audience there that we can tap into, that we’re not.” And clips are great, but they’re not the full-length podcast. I’m now sad that when I share with people that we’re on YouTube, that they’re like “Hey, did you just start producing this podcast?” I’m like “Nah, man. It’s been like forever, basically.” And so we have this huge backlog that’s not there. And that kind of makes me sad, because there’s a lot of visuals and a lot of just like seeing the reactions, like Gerhard mentioned, with Tim… Just being able to see his pause, or his thinking, or my thinking whenever I’m talking, or him pointing to his mini stacks behind him… I think that’s – it’s not for everybody, but I think there’s a large majority of people who are gravitating more and more towards that, who do listen on YouTube, pay attention when they want to, but when they want to, they can go and look at the screen. And that’s been my use case for it personally, and so I wanted that for us for so long, and I just felt… Not so much bored, but there was a missing, necessary, humanistic component that was visual, that wasn’t there. And so when you’re audio-only, I feel like you’re stuck in this box, and I feel like we’re now – we’re like the genie out; we’re the cats out of the box, so to speak. We’re able to explore the bigger world of YouTube, and capture not so much more of an audience, but I think there’s a lot of people that are waiting, wanting what we produce. And now we’re there, in full form.

Yeah. So YouTube - here to stay, a new way to interact, for sure… And more and more integrations on the website. I quite like that. For example the Watch button - that was one of the new things to drop on an episode… It’s getting a bit crowded. Maybe this one… But it’s there. You can click on it.

[00:51:56.25] There it is. You can click on that and it’ll pop in there, and just –

Look at that. How amazing is that? Cool.

Yeah. It’s cool, right?

That’s the good stuff right there.

And on the Play bar, if you go to an episode’s page, the play bar got a little wider, and it has a Watch button, which will do the same thing. It’ll pop, it’ll embed it underneath it once you click on it. We don’t auto-embed, because you know, only when you want it. On demand.

Should it say Listen, and this is Watch?

Yeah, maybe.

Listen and Watch, maybe. Yeah.

Play and Watch, maybe Listen and Watch… Yeah, that’d be a good improvement.

Yeah. But these are nice – like, you can watch it right here, and they just get automatically expanded. I like that we are a commit-driven company, by the way. A lot of the features that get dropped, I just find them through commits. [laughter] This is so basic.

No [unintelligible 00:52:43.13] circumstance, you know? No blog posts, nothing.

No. We are a commit-driven company. So if you want to know what is happening at Changelog, follow the repository and just like look at the commits.

That’s right. We’re very committed.

Break: [00:52:57.19]

Now that we’ve got video-first going on, it’s time to get CPU officially launched… Turn that frown upside down into a smile, and an index. Something that’s cool.

Yeah. Very nice. Okay. Any infrastructure that we need to think about, talk about, for CPU.fm?

You could share with him, Jerod, what your thoughts are on the application.

The plan is just to have a bog-standard web app, with RSS feeds.

Nightly style? Like, that is a bog-standard app? Or?

Oh, in terms of the actual software?

Stack. Do you have a database? Do you need a CDN? What would that look like?

There’ll be a database, a CDN would probably be smart… But maybe we just drop it on R2. Probably similar to what we’re running now for us, only it’s going to be simpler, and it’s going to be a separate software stack. So probably going to go back and give Ruby on Rails another kick down the road and see…

Oh, wow. Interesting.

…just because it’s been a long time and I’ve been in Elixir land for almost 10 years now… And every time I write a little bit of Ruby code, I’m like “You know what? This is my first love.” And so it’s probably going to be a Rails app, deployed on Fly. It’ll be pretty simple, have a backend, write out HTML pages and RSS feeds… That’s the plan so far. I haven’t written a lick of code yet, so these things may change. But that’s the plan.

Okay. Nice.

Keep it simple.

Okay, yeah. Neon for the database, I’m imagining?

Yeah, I would probably just reuse all the stuff that we’ve been using over here.

Public repo, private repo?

Good question.

Good question. Probably public. I don’t see why not. I’m not going to promise that, but I can’t think of a reason why it wouldn’t be public.

It’s mostly – the admin’s going to be for like just managing the podcasts that are part of it. And then the code, the actual logic of it is going to just be in building basically a super-feed for people. And maybe custom feeds too, so you can get your CPU pods that you like, and maybe if you don’t like one, uncheck it, or something. I built that already for us, so rebuilding it over there would be straightforward.
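The super-feed logic Jerod describes - pool the episodes of several podcasts, let a listener uncheck a show, and serve everything newest-first - can be sketched in a few lines. This is a toy illustration with inline RSS 2.0 stand-ins, not CPU.fm's actual code:

```python
# Minimal sketch of a podcast "super-feed": parse several RSS 2.0
# feeds, pool their <item> elements, and list them newest-first.
# Real feeds would be fetched over HTTP; inline XML keeps this runnable.
from email.utils import parsedate_to_datetime
import xml.etree.ElementTree as ET

FEED_A = """<rss version="2.0"><channel><title>Pod A</title>
<item><title>A1</title><pubDate>Mon, 03 Feb 2025 10:00:00 GMT</pubDate></item>
</channel></rss>"""

FEED_B = """<rss version="2.0"><channel><title>Pod B</title>
<item><title>B1</title><pubDate>Tue, 04 Feb 2025 10:00:00 GMT</pubDate></item>
</channel></rss>"""

def merge_feeds(feeds, exclude=()):
    """Pool items from every feed, skipping any unchecked shows."""
    items = []
    for xml in feeds:
        channel = ET.fromstring(xml).find("channel")
        show = channel.findtext("title")
        if show in exclude:          # the "uncheck a podcast" case
            continue
        for item in channel.findall("item"):
            when = parsedate_to_datetime(item.findtext("pubDate"))
            items.append((when, show, item.findtext("title")))
    items.sort(reverse=True)         # newest episode first
    return [(show, title) for _, show, title in items]

print(merge_feeds([FEED_A, FEED_B]))
# → [('Pod B', 'B1'), ('Pod A', 'A1')]
print(merge_feeds([FEED_A, FEED_B], exclude={"Pod B"}))
# → [('Pod A', 'A1')]
```

A custom feed is then just the same merge with a per-user exclude set.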

Which will require user accounts, of course, but… Or would it? Maybe not. I don’t know. I’ll figure that out. But that’s the plan. Pretty straightforward. Not much code. I don’t see why we wouldn’t open-source it. Unless I’m really bad at Ruby now. You know, it’s been a long time and –

It’s embarrassing.

…I’m embarrassed. Yeah.

I think that’s more of a reason to open-source it. You can ask for help. Contributions welcome.

I haven’t typed rails new since probably 2015. So I’m kind of excited to just type rails new and see what happens.

Well, make sure you record that. I think many people will be interested in your reaction. I will.

See, Gerhard’s thinking; he’s thinking about the content. That’s what he’s trying to ask you about, Jerod. He’s like “How can we promote all these cool things?”

[00:57:53.17] Yeah, that’s what I’m thinking.

Yeah, I mean, maybe record it. I don’t know. I guess if you want that kind of content from me as I build out this new web app - which, honestly, is not a super-exciting web app… But still. Maybe I’ll use Cursor the whole way, and then I’ll just curse my way along, and then just rewrite it myself… If you want that, let us know in Zulip. Let us know in Zulip.

Yeah, it’s a new world, and I think seeing how you would approach that, with your Rails knowledge - things that have genuinely improved from how you remember it, what is better, what is worse… Because you have a unique perspective, which is the Elixir one… Running the Elixir application for so many years - how does that compare to Ruby and Rails? I don’t think many people did a switchback. I keep hearing about people going from Ruby and Rails to Elixir, but going back? I’m not aware of anyone doing that. It didn’t make Hacker News, it didn’t appear on the Changelog… This would be news.

Well, I wouldn’t be ditching Elixir, because we’d still keep Changelog.com over there. So I would be going back for a new app, but living in both worlds from then on… Which I’m happy to do.

Of course.

Yeah, that could be interesting. You know, old man fumbles around in the dark with Rails, you know…

Yeah. He yells at Rails, and then yells more… Okay, yeah. That sounds interesting. Cool.

So… Pipely. Let’s see how long this is going to take. And by the way, this is where the screen sharing will come into its own. So there was a question that we had from Tim Uckun. I’m not sure if I’m pronouncing that right. “Why do you need a CDN if you have fly.io?” And I replied in Zulip. That’s the sort of conversation that happened there. And I went through all the various things… So, the reasons why we need a CDN even though we have Fly. You can read it either in this GitHub discussion, or in Zulip; it’s all there, so you can go and check it out.

But the thing which I would like to talk about is that we are starting to have contributions to Pipely. And you may be wondering, “Pipely? What Pipely?”

“What is Pipely?”

Well, we renamed from Pipe Dream to Pipely. Why? Because Pipe Dream is taken. We can’t get Pipedream.com. We’ve already established that.

That’s a big company, very successful, I think VC-funded… So yeah. So Pipely.tech, I think is here to stay, and Pipely is the name of the repo. Whichever you go to, it will just redirect you. So the Changelog Pipely, or the Changelog Pipe Dream, there’s a redirect. And now we’re having – we had two contributions. If you go to the roadmap, the first one was “Make it easy to develop locally.” Pull request seven, from Matt Johnson, that took a while to write some Docker files, explain how all the pieces fit together… There’s a readme, so if I go to Pipely, we have docs, which we didn’t have before… Local dev…

So all of this explains what we’re testing, how we’re testing… Quite a few things there. So if you wanted to try Pipely, running it locally, there’s a doc that explains all of it. So thank you, Matt, for this contribution. This one’s great, and I’m sure that we will build on top of it.

So Matt - did he do this himself and just document as he went, or did he…? Do you know how he went about this?

So I think there were moments when we got together… So we had – okay, let’s go Pipely.tech. Pipely.tech has the whole story. There’s no more three mages, or three wise men… It’s just the whirl, so the image has changed… But we had – let’s build a CDN part two, with Matt and James.

Oh, nice.

So they’re there. And we’ll link to the video, so you can go and watch it… And make it easy to develop locally. So this was kind of like a follow-up to that.

Gotcha.

So Matt did a bunch of things… If you go to Pipely.tech you can read the whole story. Right now it’s the second article, “Let’s build a CDN, part II.” And this one, “Make it easy to develop locally”, is the first one. So in preparation for that, Matt had to do a bunch of work to understand how the pieces work, what they are, try running it locally… And he cleaned all of those notes up, and he contributed them to the repo. So if anyone else wants to try this, now they can. Now that’s there. So let us know what you think. So that was one.

[01:02:23.19] The second contribution, which was completely unexpected, is resolving the Varnish TLS issue.

Mm, this was a big issue.

This was a big issue. And we went deep. So Nabeel Sulieman - he’s someone that you may remember from a Ship It episode. We talked about [unintelligible 01:02:44.03] It was a simpler alternative to cert-manager that Nabeel wrote, because cert-manager was too complex… And I forget which episode exactly it was, but you can go and look it up.

So he heard us talk about the issues that we had when it comes to Varnish connecting to TLS backends, or TLS origins, and he wrote something that solves the problem. It’s called TLS Exterminator, and now Pipely is using TLS Exterminator to connect to origins that require TLS termination.

How does it work in a nutshell? We now spin up two processes: Varnish and TLS Exterminator. Varnish connects to TLS Exterminator, which then proxies requests to the HTTPS backends. And that does the TLS termination, and all of that. So with that, we can now - if I go back to… Actually, I was here. With that, we can now add feed backends.
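In VCL terms, the two-process setup might look roughly like this. This is a sketch, not Pipely's actual config; the port number is a made-up example, assuming TLS Exterminator listens locally and holds the TLS connection to the real origin:

```vcl
# Varnish only speaks plain HTTP, so the backend points at the
# local TLS Exterminator process rather than the HTTPS origin;
# the sidecar proxies on to the origin over TLS.
backend feeds {
    .host = "127.0.0.1";   # the sidecar, not the real origin
    .port = "8443";        # hypothetical local port
}
```

From Varnish's point of view, TLS never exists; it just caches an ordinary HTTP backend.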

Now, these URLs, they have HTTPS. We could disable it. We can go via HTTP as well. This is something to discuss. Do we want to disable it? I think we should keep HTTPS on. And if we want to keep HTTPS on, we need a component that terminates TLS between Varnish and the origin. So keep TLS on?

Alright. So we’re keeping TLS on. Great. Because again, HTTP currently is available, but I think we should disable that, so it’s HTTPS only. We learned quite a bit with Nabeel about why Varnish doesn’t have support for TLS. So if we go to Varnish Cache, why no SSL? There is a page on Varnish that talks about why SSL was not implemented. And you may be thinking, “Who wrote this?” Poul-Henning. If anyone doesn’t know who Poul-Henning is, let’s look him up. This was 2011, by the way. TL;DR before I move on: OpenSSL is too complex. And when it comes to the implementation, if this had been implemented in Varnish, it would have complicated the code significantly, and the SSL proxying would have required a separate process anyway. Which is basically what TLS Exterminator is - a separate process.

Right.

The difference is that not only would Varnish have been more complicated, it would have been slower. All the work it would do for SSL would have slowed Varnish down. And that makes a lot of sense. So who is Poul-Henning? Poul-Henning Kamp - he’s a Danish computer developer, and he’s known for work on projects such as FreeBSD and Varnish. So he’s a guy you can thank for FreeBSD. He had a significant contribution.

[01:05:41.17] And he is the top contributor on Varnish. Some would say Varnish is his idea. But what’s really surprising… So let’s go to Poul-Henning again… And he has - not a FreeBSD… there’s that… Okay, so we’ll go through GitHub… phk.freebsd.dk. So freebsd.dk apparently is his domain, and he just has a subdomain. I think that’s really cool. So apparently, Varnish has a [unintelligible 01:06:10.05] license - I had no idea about that - and he’s very transparent about the accounting. Like, he runs a lot of the software behind the Varnish docs, and a couple of other things. And he’s very transparent about how he spends his time, and how much he charges for it… I was fascinated, for an open source project, by how transparent it is. And who contributes the most, and things like that.

So popping the stack and just going through who he is… So FreeBSD, apparently, MD5 Crypt, jails, nanokernels, timecounters, and the Bikeshed.

He invented the Bikeshed concept?

Nice…

He’s the guy behind this. And look at this. When you refresh it, the color changes.

It changes colors. Okay…

So Bikeshed.org… Bikeshed.org is what we’re looking at.

Poul-Henning Kamp has to come on the Changelog at some point…

Oh my gosh, yes…

I think he does. I think he does. And the Pipely connection is just too strong to ignore.

It’s so strong.

Alright, so that explains why Varnish doesn’t have SSL, and Varnish Enterprise does. So there’s the whole commercial aspect. But Varnish Open Source does not have SSL, and there’s a couple of ways to solve it. And we may have talked about this with Nabeel on a recording that’s not public yet… So we will wait for that to land.

But what does this mean in practice? And this is where I go to the terminal. So we’re looking at Pipely. Everything has been merged, and anyone can follow along, and we’ll do the same here. So let’s do this… Alias j is for Just. So Just is something that I love, and I think I mentioned it before. Just do it, right? It was one of the Kaizens.

That’s right.

Kaizen 16, I think. Not the last one… The one before last. So there’s a bunch of recipes that people can run. And Just Debug is the one that we’ll look at now. This is in the context of Pipely. So Pipely, as you download it right now, today, this is what it has.

So what it does behind the scenes - it’s using Dagger. And the reason why it’s using Dagger is because it needs to create a specific environment, with different tools, and it has to wire everything together. So we can use – in this case, we’re using Dagger to publish, package the container, and publish the container, even deploy the container… So deploys are a thing now. We have deploys wired up. So any commit to Pipely will go out and it will deploy a Pipely application. And we’ll see that in a minute. But now, I just want to look at Debug.

So what Debug does is add some extra tools on top of the application container. So what are the tools? Let’s just open it up and have a quick look at what Debug does. So Debug – actually, I forget, it’s not here. It’s in Dagger, main.go. So Debug… So for example, we get curl on top of the application container, which has just Varnish. Tmux, htop, Neovim, httpstat, sasqwatch - which is an interesting utility… it’s like watch with some extra features… Gotop, and [unintelligible 01:09:29.05] you will remember. And then Just, obviously.

So it’s just a way to interactively debug the container and try a few things out, without polluting your system. I think that’s the key takeaway there. So let’s just run that. Let’s run Debug. And the terminal function, it’s what puts us in that container.

So I ran the command, and right now I’m in a container and I have a bunch of toolings available to me. So what are the toolings? If I do just - again, Just is there. I have a couple of commands to run. I could run these things locally, but really, I just want all that to be wrapped, because typos and a couple of other things. So what would you like me to run first?

Just backends.

[01:10:20.02] Just backends, the first command. So let’s see what just backends does. So it just wraps varnishadm backend.list. Because Varnish isn’t running, there are no backends to list. What would be a backend? A backend would be, for example, the Changelog origin - the Changelog application. A backend would be the feeds origin, or the assets origin. So this is where backends get plugged into Varnish, and Varnish provides caching for those backends.
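The wrapping described here - short `just` recipes around longer commands - might look something like this in a Justfile. These recipe bodies are hypothetical; Pipely's actual recipes live in its repo and may differ:

```just
# Hypothetical recipes in the spirit of the ones demoed.

# List the origins Varnish knows about, with their health state
backends:
    varnishadm backend.list

# Request the site through the local Varnish instance; print the
# response status and headers, discarding the body
check:
    curl -sS -D - -o /dev/null http://127.0.0.1/
```

The point of the wrapper is exactly what Gerhard says next: short, typo-proof names for commands you run constantly.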

So how do we start Varnish? Let’s see if Jerod –

Just up.

Look at that. See? That’s why we have something like this. So just up - boom, there it is. It’s Tmux, it’s a terminal in a terminal, so there’s quite a few things there. So just backends - there’s nothing there. And if I do just check, that’s the one. It does the first request, it fails, and the second one, we can see we got a 200. Run it again, it’s really fast… And again, this is messed up. Alright, it will have to be horizontally. I’m sorry, it will have to be horizontally. So just check - it won’t fit a lot, but there you go, there it is; we can see HTTP 200 okay, and we can see that the request came from… We’ve got a hit, it’s a second hit, from local. So this came from Varnish.

And if I do Just Backends, we see the two backends, which are healthy. Cool. What other commands should we run? Let’s do Bench CDN. I think that’s where – actually, bench origin. So bench origin, and you will recognize this… This is going to [unintelligible 01:11:53.08] and we are benchmarking…

Well, that’s beautiful.

Yeah, it’s not as good as it runs locally, there’s a bit more detail, but it’s pretty decent, I have to say. So we have just-benchmarked the Changelog application.

This is in the cloud, what we’re doing here?

So this runs locally, the benchmark runs locally, but we are benchmarking the Changelog origin application, which is production right now.

This is production, okay.

Yeah, so we’re benchmarking production.

Okay, cool.

How many requests per second?

So about 90 requests per second. Now, I’m in London… This is actually split between New Jersey and Ashburn, Virginia. So there’s two data centers; it can go to either one. It goes through the edge, and then eventually connects there, which then has to connect to, I think, the database. It hits the database. So 90 requests per second - not great, but only the CDN goes to the application directly. So let’s bench the CDN. And we are sending 100,000 requests per second. Sorry, 100,000 requests. Not per second. 100,000 requests. And let’s see how long it takes. So that took just under 10 seconds, and we completed 10,000 requests per second. So the CDN - we can see it’s doing its job. I’m connecting to it locally… The latency is low, and this is our Changelog.com CDN.
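The throughput figures here are just totals over elapsed time; as a sanity check on the numbers quoted in the demo:

```python
# Requests per second is total requests divided by elapsed seconds.
def requests_per_second(total_requests: int, seconds: float) -> float:
    return total_requests / seconds

# ~100,000 requests completing in just under 10 seconds:
print(requests_per_second(100_000, 10))   # → 10000.0

# Compared to ~90 req/s against the origin directly, the CDN is
# absorbing on the order of 100x the load.
print(round(requests_per_second(100_000, 10) / 90))   # → 111
```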

Alright, so now let’s benchmark… Let’s go to CDN2. CDN2 is Pipely deployed on Fly, that now proxies to the origin. So this is the new Pipely that we’re setting up. And I think we already had this, but how does it behave with the TLS proxying, with all of that? Now we have all those things in place. And we’re almost complete… Remember, we had about, I think, 10,000, 11,000, something like that… This one has 4,000 only. So it’s slightly slower, it’s going to cdn2.changelog.com.

[01:13:59.23] Now, the application itself has a shared CPU; only 256 megs of RAM… So it’s like the smallest, lowest, cheapest fly.io instance that we can run. So we could make it quicker, we could make it bigger, but that’s not what I would like to show. What I would like to show is if we benchmark Varnish directly.

73,000.

73,000 requests per second. And actually, it’s quicker. It’s 132,000. The problem is the benchmark – we’re only sending 1,000 requests. So let’s just make it a little bit more… Let’s just send a bit more. Let’s send – Varnish, let’s send to it. Let’s go via HTTP 1.1. I just need to add a couple of things, and let’s go a million. So let’s just go a bit more. So let’s benchmark Varnish… I messed something up. Let’s see what I messed up. Bench, that is 1.1. There we go. I just made a typo. Alright, let’s benchmark Varnish.

So we are sending it a million requests per second. Where is this running? Everything is running locally. It’s running inside of Dagger.

A million requests total.

How many requests total?

A million requests total, you said. Is that a million requests total?

We’re sending a million – oh, I made a typo. Actually, 10 million. We’re sending 10 million requests to it. [laughter]

So let’s see how does it behave exactly - oops - when we send it 10 million requests. And we are more than halfway there. So if I go to this instance, remember, this same PopOS instance, and if I run a Btop, I can see what’s happening here. You can see the CPUs, there’s a lot of red… So this is now CPU-bound, actually, and everything is local. So there’s no network, because it happens in the same container, in the same namespace, same everything… Which means that this is really as fast as you get it. And there’s our result. That’s how many requests per second Varnish can serve.

211,000.

211,000, local. So Varnish isn’t slow. It’s caching well. We can look at the distribution, because right next to Varnish there is TLS Exterminator, which it needs to talk to, and which terminates TLS… So that’s an external process, and that connects to the origin. And it can connect to multiple origins. So right now, we have only Changelog configured, but we’ll have feeds, we’ll have a couple more… This will run next to Varnish, and I think the pieces are starting to come together. Any thoughts?
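
The topology being described — Varnish speaking plain HTTP to a local TLS Exterminator sidecar, which terminates TLS toward each origin — might look roughly like this on the Varnish side. This is a hypothetical sketch, not Pipely’s actual config: the ports, probe settings, and backend names are all assumptions.

```vcl
vcl 4.1;

# Each origin gets its own local tls-exterminator listener; Varnish only
# ever talks plain HTTP to 127.0.0.1. Ports and probe values are made up.
backend changelog {
    .host = "127.0.0.1";
    .port = "8000";          # sidecar proxying to the changelog.com origin over TLS
    .probe = {
        .url = "/health";
        .interval = 5s;
        .timeout = 2s;
        .window = 5;
        .threshold = 3;
    }
}

backend feeds {
    .host = "127.0.0.1";
    .port = "8001";          # a second listener for a feeds origin
}
```

The appeal of this split is that Varnish never has to link TLS client code: the sidecar owns certificate handling, and adding an origin is one more backend plus one more listener.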

This is cool. Man.

Would we like - and here’s a question… Would you like to scale those instances up, to see how much faster they will go if we provide bigger instances?

Well, what are we getting right now, against our current setup?

So our current setup, which uses Fastly, we’re getting between 10,000 and 11,000 requests per second.

So we’re about halfway there.

We’re about halfway there, yes. With the cheapest, smallest instance, we’re at about 4,000.

You’re saying that’s all Fastly can do, is 10,000 to how many thousand? 10,000?

It was about 10,000 to 11,000. So there’s a couple of things at play. This is like the POP which is closest to me. I have seen it go faster. So I’ve done a couple of other benchmarks, and sometimes it goes to 16,000, 17,000. So it can go faster. I think it just depends on network conditions, load on their system… But we are sharing network with everybody. But if I can push 11,000 requests per second, that’s a lot of requests per second, by the way. I think.

Yeah. It doesn’t matter. Like, it’s 4,000… Just good enough.

[01:17:45.28] Yeah. So how fast is it? If we were to do here, you can see that right now, my download, I’m downloading 1.27 gigabits per second. And my network connection goes more. My network connection goes all the way to two gigabits. So right now I’m about 10,000, 11,000. So basically, Fastly is limiting my one connection - I mean, I say one connection - one IP, to about one point something gigabits per second. And maybe we could benchmark it elsewhere, but the point is you don’t want one user to use all your available bandwidth. So you need to apply some throttling.

So let me show you something interesting… If I, for example – let’s just do bench. So I do just bench, and if I do bench… Let’s do – remember Bunny? We have bunny.changelog.com. Remember that? The other CDN? This is how the other CDN behaves, 1,700. So let’s go two. And I would like to go – let’s go 100,000 requests per second. Let’s see how that behaves.

100,000 requests total.

100,000 requests total.

You keep saying per second.

That’s all you do.

That’s what Adam was trying to fix earlier, too.

Yeah. Sorry, 100,000 requests… We’re sending 100,000 requests, and we want to see how many it can serve per second. And what we see here is that it stopped at about 2,000 – just about 3,000 requests.

They block you?

Exactly. They throttle.

Smart.

And this was a surprise. This was a surprise. So they have some sort of protection… Because I could be DDoSing them.

Imagine if it would be like 100 of us doing the same thing…

I mean, we’d be sending hundreds of gigabytes, and they would just – sorry, hundreds of gigabits.

They have to consume that bandwidth too, as an infrastructure.

Exactly. Yeah. So as you can see – I mean, I’m not liking this behavior… I mean, I can’t benchmark it. So from a benchmarking – now it’s resumed. See, so there must be –

And then it stopped again.

And then it stopped, exactly. So it was just about 100 requests it let through… So it’s just blocking me, and then letting more requests [unintelligible 01:19:55.17]

Are you doing this via an API key? How do you authenticate this – is this just a…

I’m not. I’m just hitting it as public, like anyone.

Okay. I was gonna say, if you can do this via an authenticated way, then you can always just like pass a benchmark flag or something like that, to get past this.

Maybe, but –

I’m just hypothesizing how I would build it if I was building it, you know?

I would allow somebody to benchmark my system, because there’s gonna be times you want to benchmark your system.

Yeah. I would definitely look into that, but I didn’t have to do any such thing for Fastly or for Fly. I could just send all the traffic –

Is that a good thing, though? I wonder if that’s a good thing. Because it’s just like letting anybody just like benchmark them. You just sent 10 million requests to them.

I did, yeah.

If they can handle it…

If they can handle it, yeah… [laughter]

“Yeah, I did.” [laughter]

[unintelligible 01:20:39.17]

You almost sent 10 million per second, until you corrected it.

Yeah. No, no, no, no, no… That’s too much. I need more computers. I need my Windows machine for that.

That’s right. Two of everything’s not enough in that case.

So my question, brass tacks, “Is Pipely fast enough?” That’s the question.

Correct. So let’s scale it up.

And I feel like 4,000 requests a second versus the 10,000 to 11,000 you get on Fastly… Is that going to noticeably impact anybody? And I would assume the answer is no.

Well, he’s scaling that machine right now to test it out, Jerod…

I know he is.

I’m doing it…

He’s always scaling stuff.

So let’s pause for one second. Remember earlier in this show when we were talking about Fly, and being down, and stuff like that? That wasn’t hate. That was just facts. Okay? What he’s doing right now, in the moment of having a conversation, is essentially upgrading that machine to be more performant. Taking it from a cheap box to a slightly more expensive box.

And this is all via the Fly command line. So cool. It is the coolest tech, man… They really are doing some cool stuff there. I love it.

Yeah. I’m just wondering, if we were to replace Fastly with Pipely, how well would it behave? Would we get the same thing? And how does it compare? But for that, I promised one more thing, so I’m going to deliver on one more thing…

Alright…

Do you notice anything different about my setup?

Like, in here, in the terminal?

No, no, no. Like looking at my camera, did you notice anything different?

It’s super-black behind you. Are you in a whole different space?

Well, it was black before…

Yeah, it was always black.

A better camera?

Yeah, well, there was – the black, it had some more detail. So before, the black wasn’t quite as dark.

Oh… Do you just green-screen yourself?

Not quite… Alright. So –

What’s going on right now? Something just went behind your head, and it looks like…

Grafana.

Some sort of dashboard.

Grafana’s behind your head, Gerhard.

Yeah, so… That’s one of the early birthday presents, which I couldn’t wait to open…

Yeah, your birthday is coming right up, isn’t it?

It is, yeah. It is. I think by the time this will be out, it’ll be out.

Alright. If you’re listening to this, find Gerhard, tell him happy birthday.

Thank you. I appreciate that.

You’re welcome.

So I always wanted to have a big ass monitor. Like really, really big. So big…

A BAM, as they call it.

A BAM, there we go. I always wanted to have a BAM.

BAM! He’s got one.

And that’s what happened. Behind me, the whole screen, the whole background is actually now one giant screen.

This is a real screen back there?

Is it a TV screen, or…?

It is a TV screen.

It’s a BAM.

It’s a BAM.

Tell us more about this big ass monitor.

Yeah, I’ve heard of this…

So what is it?

It’s a Samsung S95D.

Okay…

And it’s a 65-inch TV. So it’s big. And what it means is that I can talk to anyone, and I can see exactly what’s happening across every infrastructure. Right now I have the Changelog infrastructure running there, and… Do you see those spikes right there? Do you know what that spike is?

You, just now.

Yeah, exactly. That’s me, just now. I did that. I created a spike. [laughter]

That spike is me.

He’s so proud of himself. “I did that spike.”

Yeah, you did.

So that spike right there is the benchmark that went directly to Fly. Now, on the left-hand side, as you look at it, that’s all the metrics coming from the Fly application, our Fly Changelog application. On the right, it’s all Honeycomb. And because it’s a bit blurry, you can’t see the details, which is exactly what we would want. We don’t want to advertise all the details… But really, what’s interesting is the shape of it.

So Honeycomb - I can’t figure out how to automatically refresh. Grafana in Fly has that capability. So I just need to manually refresh it. I have to click the refresh. So let me do that. There you go.

He’s clicking refresh… He’s leaning over.

Leaning over, I’m hitting refresh, and then you should see that other half refresh. Like, half of the background refreshed.

And actually, it’s the same timestamp, so I need to go to the last 24 hours. There you go. Now we’re looking at the last 24 hours. So do you see those spikes there?

Yes. Those spikes are the benchmark, which I did against Fastly, against our CDN. So you can see that we never hit those levels under normal operating conditions.

That’s like 100x what we normally operate. So maybe being able to serve 10,000 requests per second doesn’t make that much difference, since really, we never hit those levels.

I feel like we’ve gone to Target, the three of us, y’all took me to the toy department, and you said “Pick a toy.” I chose my toy, we went to the checkout… Target is a popular store here, by the way, Gerhard, if you didn’t know Target… We checked out, we successfully paid, we’ve left, we’ve gone home, and you’ve not given me my toy.

Where is my toy?

Well, I can’t get the monitor for you. You need to get it for yourself. [laughter]

No, I mean Pipely. Pipely. Pipely is the toy.

So Pipely, if you go to cdn2.changelog.com, it runs, it now uses a component that terminates TLS to origins, and now we need to add more origins. While it’s half as fast as the current CDN, we know that it can sustain all the load that we need to replace our CDN. So the toy will work.

In terms of what comes next, we need to configure more origins. We need to get, for example, the feeds one, the assets one, and we need to scale the instances in such a way so they can handle the traffic. Right now, we only save the responses in actual memory. So we need to configure disk. There’s a couple more things, a couple more knobs to configure, but this is getting closer and closer and closer.

I feel like the real toy is the Samsung S95D 65-inch OLED HDR Pro, glare-free, with motion accelerator… Gerhard, that sucker is expensive, man. That’s a nice –

Well, eBay - half price. That’s what I say. Brand new. So you just need to shop around. Do what Adam does, you know? Do what Adam does, basically.

Well, I was just going to let you know my birthday is July 12th, just in case you’re wondering…

Cool. [laughter]

Mine’s sooner… March 17th.

March 17th.

Because while I wasn’t really jealous of all your computers you were talking about earlier…

But now you are.

…that screen is amazing. Holy cow.

Yeah, that is a nice screen.

Alright, so you scaled up our Pipely to the Performance X1? Performance 1X.

It didn’t work.

Ah, it failed.

Yeah, it failed. So maybe we’re trying to scale too much.

So Fly let us down.

Oh, dang, man… It was cool until…

I didn’t know. I didn’t know what would happen. Maybe we… Yeah, let’s see. If you do flyctl… Let me just do that. Let me go flyctl machines list, and let’s see what’s going on. Live debugging. Why not? So we see Performance 1, 2… Only two, really. But the rest could not be scaled. And I don’t know exactly why. So let’s just do that again. Let’s do VM scale, updating machine… See, this other one just couldn’t update it, and I’m not sure why exactly. That’s the one in Heathrow. Waiting for machine to be happy… Sorry, to be healthy. To become happy.

I mean, a healthy machine is a happy machine.

Yeah. A happy machine is a healthy machine indeed.

That’s right.

So that’s still waiting for machine… Okay, so now it’s moving to the next one.

How many pipeline instances are we running right now?

1, 2, 3, 4, 5, 6, 7, 8, 9, 10. 10.

So there’s 10 of them in different regions around the world.

And we’ve got two of the 10 upgraded to Performance 1x. The other ones are on Shared CPU 1x.

Exactly.

Which has also 10x the RAM, it looks like… So the Shared CPU is at 256 megabytes, whereas the Performance 1x is at 2048 megabytes.

Exactly. Yeah.

So that’s quite a scale…

Yeah, it’s about 10x. And I’m wondering, if we do that 10x, how will it behave?

And how would that affect the bottom line of running Pipely? Because you’ve now 10x-ed our costs, probably… Because you’ve upgraded every instance around the world.

I’m not sure how much it changes the cost. I mean, we can check it exactly to see how much that would cost, and maybe we don’t need 10, maybe we need just one per continent. Maybe that will be enough.

[unintelligible 01:29:46.12]

[01:29:48.17] Or one per like East Coast, West Coast… This is, like you remember, the old one… So there’s a couple of optimizations which we can change there. So what’s the question? How much it would cost?

Well, I was just wondering how much extra it is, but we don’t need to get the exact answer. These are just concerns that I have as we move forward. And then “Is 10 even the right number?” is a question. I mean, maybe it’s smarter to leave it at the Shared CPU 1x, but have 30 of them, versus 10 at the Performance 1x, for instance.

Yeah, I think – so we went with the cheapest one, smallest one. So Shared CPU 1x, you get – I think you get the bandwidth, which depends on how big the instance is. You get like a fair share of the bandwidth. So these instances were costing about $2 per month, just in compute costs. We went to Performance 1x, which is $31. So that is more than a 10x jump. Maybe if we went to a 4x – sorry, Shared CPU 4x, which is about $8… That would have been…

Yeah, a 4x, and also a more realistic upgrade. But I wanted to make sure that we get the higher tier ones. Performance 1x is like the lowest high tier one, which means that you get a full core, it’s not getting throttled… And my assumption is you’ll also get more bandwidth. And that’s what we’re testing here. If we go to like the next tier of instance, which is like compute optimized in a way, it’s like a huge jump. But does that translate to bandwidth performance? So we’re still going through that… I mean, we can try benchmarking it again to see how it behaves. And the reason why you could benchmark it again is because the POP, the one in Heathrow has already scaled. So let’s see, how this one compares. We are pushing 480, 470, 480… Okay, so I think we’ll get a similar result, I think.

And you did 10 million again.

480 – I did, I think 100,000 requests in total.

Sorry, 100,000.

100,000 requests in total, yeah. Just to throw some load its way, and then see how that behaves. And we’re at 4,000. So apparently, scaling up the instance did not increase the bandwidth.

Interesting.

Mm-hm. So the question would be “Is this as much as we can get? And should we/could we go higher?” I don’t know.

Is the limitation then network? Is that what we just resolved to then? Because CPU and other things didn’t really influence it? RAM didn’t influence it…

Yeah, so I would ask for example Fly how do they allocate network bandwidth based on instance size. How do those limits work?

Yeah, that’s not clear.

So that was the one question… And what I’m wondering, is 4,000 enough? Because we’re looking at the graph, we’re seeing the spikes, and apparently, we never even hit 4,000 requests per second on our existing CDN. It means that the ceiling is lower, but since we’re never hitting that ceiling, maybe that’s okay.

Not to mention that we’ve seen – Bunny, for example… This is a perspective which I haven’t seen in Bunny before, where we can see the throttling kicking in. We can’t even benchmark it properly, because it throttles you much earlier. And I looked through the config, I went through the settings… CDNs, apparently, they’re not all configured the same… Which is why I was looking at Varnish to see what can Varnish do. Where exactly is this bottleneck coming from? And are we okay with the ceiling?

Is it necessary to have this throttling in place?

For who?

For, I guess, just the system; the uptime of the system.

The vendors… Yeah, it would make sense for them, too.

I mean, for us, we would be, at least temporarily – Pipely would not have a lot of users, I would say. We would deploy Pipely on-prem, basically. It would not be a service we’re consuming. Pipely would be software we deploy for us to use. And so do we really need throttling if we’re our own user, and we control our systems?

[01:34:00.02] Oh, I see what you mean.

You see what I’m saying? Because Bunny has it probably as a safeguard because they’re public. Whereas Pipely would be deployed for us in our use case.

Just us.

Right. So we don’t need throttling.

It’s deployed for us, but it’d be hit by randos around the world.

So we can get DDoSed.

Yeah, we could. That is a real possibility.

I mean, if you send us 5,000 requests a second, we’re DDoSed. To one POP, at least.

To one, yeah.

Yeah, exactly.

So I think some form of rudimentary throttling makes a lot of sense. I don’t think it would add very much in terms of software on our side. You can make it very rudimentary: this IP can only make so many requests a second. Done. Then you’re at least avoiding that low-hanging fruit…
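
The “this IP can only have so many requests a second” idea is essentially a sliding-window rate limiter. In Pipely this would more likely live in Varnish itself (e.g. via a throttling vmod); the sketch below is just the logic in Python, with the limit and window as illustrative parameters:

```python
import time
from collections import defaultdict, deque


class PerIPThrottle:
    """Allow at most `limit` requests per `window` seconds per client IP."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the per-IP budget: reject (or delay) the request
        q.append(now)
        return True
```

One IP hammering a POP gets cut off once its window fills, while every other client is untouched — which is roughly the behavior observed when benchmarking Bunny.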

I haven’t looked into that, but having this discussion is valuable. I mean, this is why I don’t think we can have the toy, because we’re still debating what the toy should be, and how it should behave.

Well, we’re building our toy.

Exactly, we’re building our toy… And I think this just makes it more real, because these are the steps that we would go through before we take this toy into production. And I think this is the perspective which is valuable to have. The level of care and attention and detail that we go through to make sure that what we put out there will behave correctly. And the comparison that we have right now is Fastly, which from some perspectives it behaves really well. I mean, we’re seeing performance is amazing… Caching - not so good. But again, it makes sense why they don’t keep content in memory as, for example, we would, because we would optimize for that. Which means that because we optimize for that, we want to store as much of it in memory as possible. Memory that we pay for, or disks that we pay for, wherever they may be.

And then I think this will also – we’ll have questions about “How should we size those instances?” We just heard that maybe the Performance 1x, maybe it’s a bit too expensive, because we need to run a bunch of them. And how many – if you remember the first time we were running 16… Maybe that’s a bit too many. Maybe 10 is a better number. But even that might be too much.

Now, if we’re looking at the cost, we’re paying $30 per instance. And if we have 10 of those, we would be paying $300 per month for the compute. I think that’s okay… I think that’s not crazy in terms of cost.
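
The cost arithmetic from the conversation, spelled out. The per-month prices are the rough figures quoted above ($31 rounds to the $30-per-instance estimate), not current Fly.io list prices:

```python
INSTANCES = 10

# Rough monthly compute prices quoted in the conversation, in dollars.
SHARED_CPU_1X = 2    # the cheapest tier Pipely started on
SHARED_CPU_4X = 8    # the suggested middle-ground upgrade
PERFORMANCE_1X = 31  # the lowest dedicated-CPU tier

for name, price in [("shared-cpu-1x", SHARED_CPU_1X),
                    ("shared-cpu-4x", SHARED_CPU_4X),
                    ("performance-1x", PERFORMANCE_1X)]:
    print(f"{name}: ${price * INSTANCES}/month across {INSTANCES} instances")
```

So the fleet goes from about $20/month to about $310/month with the Performance 1x jump, which is why a shared-cpu-4x tier, or fewer-but-bigger regions, keeps coming up as the more realistic option.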

Let me ask you a question. Maybe this is a stupid question, but let’s ask it anyways… We want to store – are we designing the system to be memory-heavy, where we have terabytes of memory available to the system, so we can store all of our data in memory?

I don’t think so.

Or just on disk, and have lots of memory available if we need it.

So I think that we need both. I think that the data which is hot should reside in memory… And think about how ZFS works, the file system, and when you have an ARC. So this would be exactly that.

We would want to store the most often accessed data in memory, and the one which is least accessed on disk. So I think we need both, because the memory - we can scale it. If we go back to two gigabytes of memory, for $10… Let’s say we keep the 1x - we can get eight gigabytes of memory. That doesn’t seem like a lot of memory. For example, I wouldn’t know – and this is where the cache statistics would come in handy. How much data do we frequently serve? And I know that we have the peaks… When we release something, there’s like a bulk of content that we serve often. How much is the bulk of – how much is the hot content? I don’t have an answer to that. But all these things are getting us closer to those concerns, shall I say, that the system will need to take care of.
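
The ARC-style idea — hot content in memory, colder content demoted to disk, promoted back when requested again — can be sketched as a toy two-tier LRU. This is purely illustrative; Varnish’s real storage backends don’t work via a Python dict, and the tier sizes are made up:

```python
from collections import OrderedDict


class TwoTierCache:
    """Toy cache: a small in-memory LRU whose evictions spill to a 'disk' tier."""

    def __init__(self, memory_slots: int):
        self.memory = OrderedDict()  # hot tier, capped at memory_slots entries
        self.disk = {}               # cold tier, stand-in for on-disk storage
        self.memory_slots = memory_slots

    def get(self, key):
        if key in self.memory:
            self.memory.move_to_end(key)  # refresh recency on a hot hit
            return self.memory[key]
        if key in self.disk:
            value = self.disk.pop(key)    # cold hit: promote back to memory
            self.put(key, value)
            return value
        return None                        # full miss: would go to origin

    def put(self, key, value):
        self.memory[key] = value
        self.memory.move_to_end(key)
        if len(self.memory) > self.memory_slots:
            # Demote the least recently used entry instead of dropping it.
            cold_key, cold_value = self.memory.popitem(last=False)
            self.disk[cold_key] = cold_value
```

The point of the sketch is the failure mode it avoids: with memory alone, eviction means a full miss back to origin; with a disk tier, it only means a slower local hit.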

Honestly, I don’t think that we should give it more than, for example, 16 gigs per instance. And even that might be a bit big. And I’m wondering whether all regions should have the same configuration. And I’m thinking no, because maybe in South America - and I know this for a fact - there’s less traffic than, for example, in North America. And maybe Oceania – I’m sorry, Asia; let’s go with Asia. Again, it’s less traffic than we have in North America and even Europe. So then I think the same configuration across all regions doesn’t make sense. But knowing how much data is hot, I think that’s something important.

[01:38:26.28] How would we know that? Just based on like stats to the direct path itself?

Yeah. Stats from the cache, to see how much cache is being used. And are there any configurations – and I haven’t even looked into this… Are there any configurations in terms of evictions? Like, how frequently should we automatically drop content? I think this is where our cache hit ratio will come into play.

So if you don’t store enough of it in memory, you will have a lower cache hit ratio. While if you store too much, maybe you’re being wasteful. I mean, having a high cache hit ratio while a lot of that data is infrequently used - you’re paying for memory that you don’t need.
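
The trade-off being described comes down to one number. Varnish exposes hit and miss counters through varnishstat (I believe as MAIN.cache_hit and MAIN.cache_miss in modern versions); the ratio itself is trivial:

```python
def cache_hit_ratio(hits: int, misses: int) -> float:
    """Fraction of cacheable requests served from cache (0.0 to 1.0)."""
    total = hits + misses
    return hits / total if total else 0.0

# Illustrative numbers only: 9,500 hits out of 10,000 requests.
print(cache_hit_ratio(9_500, 500))  # → 0.95
```

Tracking this per backend is what would answer the open question above — how big the hot set actually is, and whether more memory per instance buys anything.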

The other question which I have is “Are the NVMe disks fast enough?” And if we think about Netflix, Netflix does the same thing. They put those big servers in ISPs, they cache the content on those big servers so that they can deliver them really quickly to customers wherever they may be.

We’re not going to go there. So this is not that. But that’s one pattern that they apply, because they realize the importance of having lots of content close to users. Memory is not big enough. You need disks. Again, we’re not there. We don’t have that problem.

Well, we’re getting there… I think we have some decisions to make as we go. But roughly speaking, I think the dog hunts: at 4,000 requests per second, well managed, we’ll be fine. And I think we’ll find out otherwise, and be able to scale one way or the other around such issues. What else is left on Pipely’s roadmap, as we look towards the future now? Because – let’s do next steps.

Yeah. So I think that now we are finding the place where we can add feeds backend. Feeds backend, and we also need the static assets one. So I would add both. When we add them, we need to figure out, do we store all that in memory? And I think the answer is no, because especially static assets, they’ll use a lot of memory… But maybe disk. And I think we should look into that. Can we configure different backends? Like, how does that work? We’re basically getting to the hard part of configuring Varnish for our various backends. And each backend needs to have a different behavior, I think. So that’s something to look into.

Logs, sending logs to Honeycomb - I think that is a much easier problem to solve, because we would be using Vector. And now we have the building blocks, which - we have the first sidecar process, if you want to think about it like that… Which means that we have – there’s Varnish and there’s a couple of other smaller processes that support it. We have TLS Exterminator, that terminates TLS to origins, to backends… The second one would be this – in my mind, it will be Vector.dev, which is what we would use for these logs. So Vector.dev would get the logs from Varnish, and send them to Honeycomb. So it’s an integration which I’ve used before, I know how it works… It’s very performant, it’s fairly easy to configure… And then we’d have another helping process that would work in combination with Varnish, to accomplish a certain task. And Honeycomb and S3, all those - it supports multiple sinks. So collecting the logs on one side, and just sending them to multiple sinks - that is very straightforward, because it just handles all of that itself.
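
The Vector piece could be as small as one source and two sinks. A hypothetical sketch of what that config might look like — Vector has shipped honeycomb and aws_s3 sinks, but the log path, dataset, bucket, and field names here are all invented, and the real setup would read Varnish’s log stream however Pipely exposes it:

```toml
# Hypothetical Vector pipeline: Varnish request logs fan out to Honeycomb
# for querying and to S3 for archival. All names below are placeholders.

[sources.varnish_logs]
type = "file"
include = ["/var/log/varnish/varnishncsa.log"]

[sinks.honeycomb]
type = "honeycomb"
inputs = ["varnish_logs"]
api_key = "${HONEYCOMB_API_KEY}"
dataset = "pipely"

[sinks.archive]
type = "aws_s3"
inputs = ["varnish_logs"]
bucket = "pipely-logs"
region = "us-east-1"

[sinks.archive.encoding]
codec = "json"
```

The “multiple sinks” point is exactly this: adding a destination is one more `[sinks.*]` table reading from the same source, with Vector handling batching and retries per sink.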

[01:42:07.12] And then really, the last hard bit is purge across all application instances. And I think that one is like maybe a step too far to think about it now. But I think the way we – so first of all, now we have an image to publish. We are deploying the application automatically through CI. That’s like some plumbing that you want to have in place… We have support for TLS backends, and that was an important one, especially when it comes to other origins… Because let’s say if we are running in Fly, we can use the private network to connect to the Changelog instance. But for external origins like feeds, we would need to go to HTTP, because we didn’t have HTTPS. Now we have HTTPS. I think that’s also an important building block.

And now we’re hitting this benchmarking – I won’t say we got sidetracked by it, but I think it’s something worth considering, because you may end up building something that won’t work. We won’t be able to use this to replace our current CDN. And the goal is to be able to say with confidence that Pipely is able to do the work that currently our CDN is doing. And what does that mean from a configuration perspective, from a resources perspective?

I think everything adds up, and it feels like we’re more than halfway there. For real. I don’t mean like “Will this work?” No, no. We’re more than halfway there to replace Fastly, with Pipely, for us.

Alright, take us to the promised land, Gerhard. Give us that toy.

It won’t be Christmas. It’ll be before Christmas, Adam. When is your birthday?

March…

March? Okay…

That’s too soon.

Yeah, that’s too soon. Jerod, yours is July.

All I want for my birthday is Pipely and a Samsung S95D. [laughter]

Alright. Well, I behaved very well, I think… And my wife must love me very much, because it was… Yeah, it was a present from her to me, so…

That’s nice.

Yeah, very nice. So she loves the nerd in me. She has no choice. They come as a package.

Yeah, I was gonna say… If she doesn’t love the nerd - I mean, what’s left? [laughter] No offense.

That’s what you see, Jerod, but this is not the show for that.

Don’t answer that. Don’t answer that.

Well, that’s cool. I love this exploration, though. I like that there’s possibility to run your own thing like this, to configure it to the way we want to. I mean, to zoom out, the challenge has been that it’s been hard to configure Fastly, not as a CDN, but as the CDN we need. Our particular use case. It’s not that Fastly is not good as a CDN. It’s just that it has not been highly configurable by us. It’s been challenging over the years, mainly just because they’re, I think, designed for different customer types. We’re a different customer type. And we’ve been holding it not so much wrong, but it’s just been a square peg in a round hole kind of thing, where it’s not perfect for our particular CDN needs.

I think we’ve had lots of cache misses over the years… Like, why is our stuff not cached…? It seems like it should be. We’re not prioritized as a thing to serve because of the way the system works, and that’s just it. And we’re designing something that serves that kind of system, where it serves the data, holds more in memory, has more available to it, and is not a miss…

That’s right.

…which I think is cool. Very cool. Man, this tooling you built is so cool, man. I can’t believe how cool this stuff is that you’ve built, man. It’s really awesome.

Thank you. It’s coming together. It’s –

And I love the TV, too.

[01:45:57.13] Yeah. It is like a well-rounded experience, right? So the idea is to be able to have the TV on… Now it’s a bit bright, and it’s running a little bit hot. I can feel it, it’s not winter yet. I don’t need heating in the office. Well, it’s the end of the winter, but… I’m able to see a lot of metrics, and I think that’s something that I always loved… To be able to see how things behave, and when they misbehave, to be able to see and understand at a glance what is wrong. Are we getting DDoSed? Am I running out of memory? Like, which instance is problematic? And I think this is just like a starting point. I literally threw the two dashboards that we have over there… But I haven’t optimized them in any way. And I think having something like this just makes it a more… I don’t know, a breathing, living system.

Well, yeah, you can see in real time what’s happening. The metrics is the life. It’s each organ, so to speak.

It’s like your own NOC.

It is super-cool. It is super-cool. I’m jealous, and I want one.

I inspired you… Christmas is far away. People can save. I know that we did… And it’s been in the making for a long time. So before I could get this, I had to get a system that is able to power it. That’s what PopOS does, one of the things which it does. It has a GPU which is powerful enough to be able to power it… I have another monitor here which does like screen mirroring, so I can change things here and set things up… A system that’s able to be on, and that it’s not too loud… That was like another consideration. A black wall, so it blends nicely… It was like so many things, like years in the making. As Pipely was years in the making…

That’s right.

And it’ll be just as beautiful as that.

I would love a tour. A homelab tour. I would love that. Coming soon to…

Well, I’m working towards that… But I can show you one more thing, which was not planned… I know we’re a bit over time, but if you want to see one more thing, I’ll show you my M25. Every year I basically take one of these machines online… And I have an M24, an M25… So this year, this is the machine that came online. It’s running TrueNAS; as you can see, it’s an i9-9900K. It has 128 gigs of RAM. So it’s like fully maxed out, basically… And this is how… I’ll just move that screen a little bit. You can see all the storage, it has two pools… It has an SSD pool and an HDD pool, so spinning disks, and some slower SSDs. They’re Samsung 870 EVOs, I think. And it’s something that you need to have in place for decent storage, so that Linux, and Windows, and Mac, and everything just works. So that was one of the projects, and I didn’t have time to talk about it. Maybe next time. I know that you are a TrueNAS user, Adam… ZFS, and all that stuff.

Yeah. Several pools, several things, similar to this… A slightly beefier machine. It’s a Xeon processor, a Silver 4210, I want to say. I think there’s 100 and some gigs of RAM, I want to say. 192 maybe, 128…

It’s something like that. It’s not 256, I know that for sure. I just don’t have a need for it. I mean, it’s a tinkerer’s dream to have lots of RAM in a ZFS system, but I just don’t need it. It’s just – I caught myself just wanting to have it just to have it, and I’m like “That doesn’t make any sense.” Like, you spend all that money on the RAM? Just spend it on the disks instead.

Because disks are expensive.

Yeah. For video storage, it’s something that you would need. If you were to do editing… And especially if you edit from multiple machines, you need a fast network, a couple of other things. But…

Yeah, my home lab is suffering right now. I don’t have 10-gigabit everywhere. I do have it in the network, I just don’t have it everywhere… So I’m in the process of fixing that. There’s some slight life updates for me that will make it more important, I should just say.

I had a flood in the studio, and…

Oh, wow. Okay…

I don’t think I can stay here anymore, let’s just say. I’ve got to go home. So I’m turning my home office into a true home lab, and work lab, and that’s in the making, so… It’s a bummer.

Techno Tim. We need more Techno Tim.

Yeah, you know… I mean, I’ll be close to the things I play with more frequently. I feel like I’ve always been like two-locationed, and it’s been challenging… Because right now I can’t access TrueNAS. It’s at home. I can’t access that Windows PC, it’s at home. It’s in the home lab, you know?

And I’ve just sort of like stripped away more and more here, to the point that it doesn’t make sense to stay here any longer.

Well, your background will change. It’s a pretty cool background that you have…

Yeah, that’s the thing. I’ve got to make sure it’s video-ready, and I’ve got a month to do that, basically.

Well, and I know we went a bit long, I know we covered a lot of stuff…

I dig this. I love it. I’m glad you showed this. I would love on Make It Work – do you mind if I promote that?

No, no. Go for it. Go for it. Yeah, go for it.

I would love to see a tour of whatever you can share. It could just be iPhone, it could be low-produced. I don’t really care. I just want to see what you’re doing… Because what I love talking to Tim about in particular - one, I like him as a human being. He’s so cool. And I truly think we’re not just friends on the podcast, but I think if he wasn’t 2,000 miles away, I would hang out with him, and spend time. Same with you. And you’re a lot more than 2,000 miles away.

But I love – there’s not many geeks I can meet that nerd out on hardware. Like you do, and like he does, and a couple others do out there that are friendly in the world, like Tom Lawrence… We met him years ago at a Microsoft, something or other, I think in New York. I haven’t reached out since then. He’s become more and more famous since then, so now I just watch him on YouTube, you know? And I appreciate his takes, and stuff like that. But there’s not a lot of geeky nerds who nerd out on hardware, for no reason, like we do. We build things we want to need, and so we make a need for it… You know what I mean? Maybe there’s some true need, but you justify it, like this TV behind you… Because why not have a NOC, like Jerod said? Why not have this big thing behind you, and let it be a green screen, let it be a real thing? Just because, you know? So makeitwork.fm, to not hide the URL…

.tv. That’s what I would say.

.tv. Sorry, .tv.

That’s all new, by the way.

Yeah. And that is – geez, I keep fat-fingering it. I’ve put a comma there instead. I haven’t been there in a bit. So this is still running from your home lab, right?

No, actually, this is running on Fly, and it has a CDN in front. Yeah.

Because last time it was on your home lab stuff.

It was, yeah.

Was it Jellyfin?

[01:53:02.10] Well, Jellyfin is still on my home lab. The media is still served from there, because of the iGPU. But actually, this one, if I would just click on this one… So there’s a couple of things here, and I’m logged in… So it’s obviously the episode, the audio, which is coming from [unintelligible 01:53:18.25] embed, and there’s obviously the embed video… And as a member, when you sign in, you get the whole thing. This is served from the CDN directly, so this is like the CDN content… And there’s also Jellyfin. So once you log in… See, the quality for this one wasn’t very good. That’s something that I’m still working on. And that’s why I mentioned that I have to record my screen locally, which is what I did for this… Because Riverside is not great with screen recording. They improved it, but it’s not there yet. The quality is not as high.

Yeah. Distributed podcasting is so hard. It really is… Because you want to share that screen with us, but then counting on Riverside to record it in a resolution that is good for long-term uses…

Yeah. Makeitwork.tv, and .fm if you want to go the audio route only… But .tv is where you said to go, so go there instead. Yeah, man… I want a studio tour, I want something… Don’t take six months. Do the simple version, Gerhard. Or just – hey, listen, we can just Zoom and you can just show me in the real, you know?

Okay. I mean, that would be –

We can just FaceTime. You just show it to me.

Yeah, we can definitely do that. That’s much easier.

Yeah, I’d love to see it.

It’s the whole backlog that I have to go through… So I’m still working on that. It took me such a long time to find a good editor, and I think I finally have him. It took me at least four months, five months of proper searching…

…to get someone that I’m also able to afford… Because this is still like everything self-funded. But it works, and first, I need to make Makeitwork work, before I can… But even like Makeitwork.tv, now there are subscribers, and there’s even members, people can pay for it… So that’s up and coming. That was something new –

I want to put you on the spot. We did talk about CPU for you, so I’m hoping you’re still excited.

I am. I am. Yeah.

We’re making steps. So maybe –

I’m keen to be part of that. I just did not have time between everything.

Same. I’ve been focused on getting the agreement solid. I wanted to make a solid promise and have it be clear to folks… It’s a simple thing, but understanding the terms between you and the people you’re going to serve – you’ve got to examine that and have clarity there.

Cool… Man, this has been a fun Kaizen, a deep Kaizen. If you stuck around to now, holy moly. I’m not sure what’s getting cut, but wow. You are a trooper, you’re a super-fan, and you should be a Plus Plus member. I’m not going to force you, but changelog.com/++, it is better. Bye, friends!

See you in the next one. See you in the next one.

Kaizen.

Kaizen.

Our transcripts are open source on GitHub. Improvements are welcome. 💚
