Security Was Already a Mess. Generative AI Is About to Prove It.

I was thinking about some of the points from the Polyglot Conf list of predictions for generative AI, titled “Second Order Effects of AI Acceleration: 22 Predictions from Polyglot Conference Vancouver”. One thing that stands out to me, and I’m sure many of you have read about the scenario, is misplaced keys, tokens, passwords, usernames, or whatever other security collateral left in a repo. It’s been such an issue that orgs like AWS have set up triggers: when they find keys on the open internet, they trace them back and try to alert their users (i.e. if a customer of theirs has stuck account keys in a repo). It’s wild how big of a problem this is.

Once you’ve spent any serious amount of time inside corporate IT, you eventually come to a slightly uncomfortable realization. Exponentially so if you focus on InfoSec or other security-related work. Security, broadly speaking, is not in a particularly great state.

That might sound dramatic, but it’s not really. It is the standard modus operandi of corporate IT. The cost of really good security is too high for most corporations to focus where they should, and when corporations do focus on security, they often miss the forest for the trees. That said, there are absolutely teams doing excellent security work, so don’t get the idea I’m saying there aren’t solid people doing the work to secure systems and environments. There are organizations that invest heavily in it, and there are people in security roles who take the mission extremely seriously and do very good engineering.

A lot of what passes for security is really just a mixture of documentation, policy, and a little bit of obscurity. Systems are complicated enough that people assume things are protected. Access is restricted mostly because people don’t know where to look. Credentials are hidden in configuration files or environment variables that nobody outside the team sees.

And that becomes the de facto security posture.

Not deliberate protection.

Just… quiet obscurity.

I’ve lost count of the number of times I’ve been pulled into a system review, or some troubleshooting session, where a secret shows up in a place it absolutely shouldn’t be. An API key sitting in a script. A database password in a config file. An environment file committed to a repository six months ago that nobody noticed.
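Catching these before they ship mostly comes down to automated scanning. As a rough sketch of how that works (the patterns below are illustrative examples of my own choosing, not a complete rule set; purpose-built scanners like gitleaks or trufflehog ship hundreds of rules plus entropy checks), a secret scan boils down to running pattern checks over every line of every file:

```python
import re

# Illustrative patterns only -- real scanners cover far more cases.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|token|secret)\s*[=:]\s*['\"][^'\"]{8,}['\"]"
    ),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for every line that looks like a secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Point something like this at a repo as a pre-commit hook or CI step and the “env file committed six months ago” scenario gets caught on day one instead of month six.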

That sort of thing happens constantly. Not out of malice. Out of convenience. But now we’ve introduced something new into the environment.

Generative AI.

More importantly though, the agentic tooling built around it. Tooling that literally takes actions on your behalf. Tools that can read entire repositories, analyze logs, scan infrastructure configuration, generate code, and help debug systems in seconds. Tools that engineers increasingly rely on as a kind of external thinking partner while they work through problems.

All that benefit comes with AI tools. However, the AI doesn’t care about the secret. It’s just processing text. The act of pasting it there is what matters. Because the moment that secret leaves your controlled environment, you no longer know exactly where it goes, how it’s stored, or how long it persists on the LLM provider’s side.

The mental model a lot of people are using right now is wrong. They treat AI like a scratch pad or an extension of their own thoughts.

It isn’t.

The more accurate model is this: an AI tool is another resource participating in your workflow. Another staff member, effectively.

Except instead of being a person sitting at the desk next to you, it’s a system operated by someone else, running on infrastructure you don’t control, processing information you send to it. Including keys and secrets.

Once you start looking at it that way, a few things become obvious. You wouldn’t casually hand a contractor your production API keys while asking them to help debug something. You wouldn’t drop a full .env file containing service credentials into a conversation with someone who doesn’t actually need those values.

Yet that is exactly the pattern that is quietly emerging with generative AI tools. Especially among new users of said tools! Developers paste configuration files, snippets of infrastructure code, environment variables, connection strings, and logs directly into prompts because it’s the fastest way to get an answer.

It feels harmless. But secrets have a way of spreading through systems once they start moving.

The real issue here is that generative AI doesn’t create security problems. It amplifies the ones that already exist. Problems that the industry has failed (miserably might I add) at solving. If an organization already has sloppy credential management, AI just gives those credentials another place to leak. If engineers already pass secrets around informally to get work done, AI becomes another convenient channel for that behavior.

And because AI tools accelerate everything, they accelerate the consequences too. What used to take hours of searching through documentation can now happen instantly. A repository full of configuration files can be analyzed in seconds. Systems that were once opaque are now far easier to reason about.

The Takeaway (Including secrets!)

The practical takeaway here isn’t that people should stop using AI tools. That’s not realistic and frankly a career-limiting maneuver at this point. The tools are genuinely useful and they’re going to become a permanent part of how software gets built.

What needs to change – desperately – is operational discipline.

Secrets should never be treated casually, and that includes interactions with generative systems. API keys, tokens, passwords, certificates, environment files, connection strings—none of those belong in prompts or screenshots or debugging sessions with external tools.

If you need to ask an AI for help, scrub the sensitive pieces first. Replace real values with placeholders. Remove anything that grants access to a system. Set up ignore rules for your env files and don’t let production env values (or vault values, whatever you’re using) leak into your generative AI systems.
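That scrubbing step can even be automated. Here’s a minimal sketch in Python (the patterns and placeholder names are my own illustrative choices, not a complete solution) that replaces secret-looking values with placeholders before text ever reaches a prompt:

```python
import re

# Illustrative redaction rules; extend these for your own config formats.
REDACTIONS = [
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "<AWS_ACCESS_KEY_ID>"),
    (re.compile(r"(?i)\b((?:password|token|secret|api[_-]?key)\s*[=:]\s*)\S+"),
     r"\1<REDACTED>"),
    (re.compile(r"(?i)\b(postgres(?:ql)?|mysql|mongodb)://[^\s'\"]+"),
     r"\1://<REDACTED>"),
]

def scrub(text: str) -> str:
    """Replace anything secret-looking with a placeholder before sharing it."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text
```

Run your config snippet or log excerpt through something like this first, and the AI still gets the structure it needs to help you debug, without ever seeing the real values.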

Treat every AI interaction the same way you would treat a conversation with another engineer outside your organization, or better yet outside the company (or Government, etc) altogether.

Someone you’d gladly collaborate with, sure, but not someone you’d hand the keys to the kingdom. Don’t hand them to your AI tooling either.

Polyglot Conference Vancouver 2025: Real Talk About AI, Industry Hubris, and the Art of Unconferencing

Just got back from another incredible Polyglot Conference in Vancouver, and I’m still processing everything that went down. There’s something magical about this event – it’s not your typical conference with polished presentations and vendor booth nonsense. It’s an unconference, which means the real magic happens in the conversations, the debates, and the genuine human connections that form when you put a room full of smart, opinionated developers together and let them talk about what actually matters.

The People Make the Conference

It was excellent to meet so many new people and catch up with friends I’ve not gotten to see in some time! This is what makes Polyglot special – it’s not just about the content, it’s about the community. I found myself in conversations with developers from startups to enterprise, from different countries and backgrounds, all bringing their unique perspectives to the table.

There’s something refreshing about being in a room where everyone is there because they genuinely want to be there, not because their company sent them or because they’re trying to sell something. The conversations flow naturally, the questions are real, and the debates are substantive. No one’s trying to impress anyone with buzzwords or corporate speak (though we’ll often laugh our asses off at the nonsense of corp speak and marketecture).

I caught up with folks I hadn’t seen since before the pandemic, met new faces who are doing interesting work, and had those serendipitous hallway conversations that often lead to the most valuable insights. The kind of conversations where you’re still talking an hour later, completely forgetting that there’s a scheduled session happening somewhere else.

The Unconference Format: Getting to the Heart of Things

The sessions were, as always with an unconference, jam-packed with content, and when we dove in we got to the heart of the topics real quick. This is the beauty of the unconference format – there’s no time for fluff or corporate posturing. People show up with real problems, real experiences, and real opinions, and we get straight to the point.

Unlike traditional conferences where you sit through 45-minute presentations that could have been 15-minute talks, unconference sessions are dynamic and responsive. Someone brings up a topic, the group decides if it’s worth exploring, and then we dive deep. If the conversation isn’t going anywhere, we pivot. If it’s getting interesting, we keep going. The format respects everyone’s time and intelligence.

The sessions I participated in covered everything from microservices architecture to team dynamics around agentic AI tooling, from introspecting databases with AI to the future of programming languages in an AI-driven world. But the most compelling discussions were around AI – not the hype, not the marketing, but the real-world implications of what we’re building and how it’s changing our industry – for better or worse – and there’s a lot of expectation it’s bringing a lot of the latter.

Coping with AI: The Real Talk

Some of the talks included coping with AI, and just the general insanity that surrounds the technology and the hubris of the industry right now. This is where things got really interesting, because we weren’t talking about AI in the abstract or as some distant future possibility. We were talking about it as a present reality that’s already reshaping how we work, think, and build software.

The “coping with AI” discussion was particularly revealing. We’re not talking about how to use AI tools effectively – that’s the easy part. We’re talking about how to maintain our sanity and professional integrity in an industry that’s gone completely off the rails with AI hype and magical thinking.

The Insanity of AI Hype

The insanity surrounding AI right now is breathtaking. Every company is trying to cram AI into every product, whether it makes sense or not. We’re seeing AI-powered toasters and AI-enhanced paper clips – products with what amounts to a boolean operation, burning through tokens to make a yes or no decision. Utter madness on that front; that’s like half a tree burned up, a windmill rotation, or a chunk or two of coal just to flip a light switch! The technology has become a solution in search of problems, and the industry is happy to oblige with increasingly absurd use cases.

But the real insanity isn’t the over-application of the technology – it’s the way we’re talking about it. AI is being positioned as the solution to every problem, the answer to every question, the future of everything. It’s not just a tool, it’s become a religion. And like any religion, it’s creating true believers who can’t see the limitations, the risks, or the unintended consequences. Maybe “cult” should be added to the “religion” moniker?

The conversations at Polyglot were refreshing because they cut through this hype. We talked about the real limitations of AI, the actual problems it creates (holy bananas there are a lot of them), and the genuine challenges of working with these systems in production. No one was trying to sell anyone on the latest AI miracle – we were trying to understand what’s actually happening and how to deal with it. Simply put, what’s our day to day action plan to mitigate these problems and what are we doing when the hubris and house of cards comes crumbling down? After all, much of the world’s economy is hinged on AI becoming all the things! Nuts!

The Hubris of the Industry

The hubris of the industry right now is staggering. We’re building systems that we don’t fully understand, deploying them at scale, and then acting surprised when they don’t work as expected. The confidence with which people make claims about AI capabilities is matched only by the lack of evidence supporting those claims.

I heard stories from developers who are being asked to implement AI solutions that don’t make technical sense, from managers who think AI can replace human judgment, and from executives who believe that throwing more AI at a problem will automatically make it better. The disconnect between what AI can actually do and what people think it can do is enormous.

The hubris extends beyond just the technology to the way we’re thinking about the future. There’s this assumption that AI will solve all our problems, that it will make us more productive, that it will create a better world. But we’re not asking the hard questions about what we’re actually building, who it serves, and what the long-term consequences might be.

The Real Challenges

The real challenges of working with AI aren’t technical – they’re human. How do you maintain code quality when your team is generating code they don’t fully understand? How do you make architectural decisions when the tools can generate solutions faster than you can evaluate them? How do you maintain professional standards when the industry is racing to the bottom in terms of quality and sustainability?

These are the questions that kept coming up in our discussions. Not “how do I use ChatGPT to write better code” but “how do I maintain my professional integrity in an environment where AI is being used to cut corners and avoid hard thinking?”

The conversations were honest and sometimes uncomfortable. People shared stories of being pressured to use AI in ways that didn’t make sense, of watching their colleagues become dependent on tools they didn’t understand, and of struggling to maintain quality standards in an environment that prioritizes speed over everything else.

The Path Forward

The most valuable part of these discussions wasn’t just identifying the problems – it was exploring potential solutions. How do we maintain our professional standards while embracing the benefits of AI? How do we educate our teams and our organizations about the real capabilities and limitations of these tools? How do we build systems that are both powerful and maintainable?

The consensus seemed to be that we need to be more thoughtful about how we integrate AI into our work. Not as a replacement for human judgment, but as a tool that augments our capabilities. Not as a way to avoid hard problems, but as a way to tackle them more effectively.

We also need to be more honest about the limitations and risks. The industry’s tendency to oversell AI capabilities is creating unrealistic expectations and dangerous dependencies. We need to have more conversations about what AI can’t do, what it shouldn’t do, and what the consequences might be when it’s used inappropriately.

The Value of Real Conversation

What struck me most about these discussions was how different they were from the typical AI conversations you hear at other conferences. There was no posturing, no trying to impress anyone with the latest buzzwords, no corporate speak about “digital transformation” or “AI-first strategies”.

Instead, we had real conversations about real problems with real people who are dealing with these issues every day. People shared their failures as well as their successes, their concerns as well as their optimism, their questions as well as their answers.

This is the value of the unconference format and the Polyglot community. It creates a space where people can be honest about what’s actually happening, where they can ask the hard questions, and where they can explore ideas without the pressure to conform to industry narratives or corporate agendas.

Looking Ahead

As I reflect on the conference, I’m struck by how much the industry has changed since the last time I was at Polyglot. AI has gone from being a niche topic to dominating every conversation. The questions we’re asking have shifted from “what is AI?” to “how do we live with AI?” and “how do we maintain our humanity in an AI-driven world?”

The conversations at Polyglot give me hope that we can navigate this transition thoughtfully. Not by rejecting AI or embracing it uncritically, but by engaging with it honestly and maintaining our professional standards and human values.

The industry needs more spaces like this – places where people can have real conversations about real problems without the hype, the marketing, or the corporate agenda getting in the way. Places where we can explore the hard questions and work together to find better answers.

The Takeaway

The biggest takeaway from Polyglot this year is that we’re at a critical juncture. The AI revolution isn’t coming – it’s here. And the choices we make now about how we integrate these tools into our work, our teams, and our industry will shape the future of software development for decades to come.

We can either let the hype and hubris drive us toward a future where software becomes disposable, quality becomes optional, and human judgment becomes obsolete. Or we can choose a different path – one where AI augments our capabilities without replacing our humanity, where we maintain our professional standards while embracing new tools, and where we build systems that are both powerful and sustainable.

The conversations at Polyglot suggest that there are people in the industry who are choosing the latter path. People who are thinking critically about AI, asking the hard questions, and working to build a future that serves human needs rather than corporate interests.

That gives me hope. And it makes me even more committed to being part of these conversations, to asking the hard questions, and to working with others who are trying to build a better future for our industry.

The Polyglot (Un)Conference, and (un)conference-like events in general, continue to be among the most valuable events in the software development community. If you’re looking for real conversations about real problems with real people, I can’t recommend it highly enough.

The conference was such a good time with such great topics, introductions, and interactions that I’ve already bought a ticket for next year. If you’re interested in joining the conversation, check out polyglotsoftware.com and grab your tickets at Eventbrite.

Next Week is Hasura Con 2021

Next week is Hasura Con 2021, which you can register here, and just attend instead of reading any further. But if you want some reasons to attend, read on, I’ll provide a few in this blog entry!

First Reason – What Have People Built w/ Hasura

You’re curious to learn about what is implemented with Hasura’s API and tooling. We’ve got several people who will be talking about what they’ve built with Hasura, including:

Second Reason – Curious About GraphQL

You’re still curious about GraphQL but haven’t really delved into what it is or what it can do. This is a chance, for just a little of your time, to check out some of the features and capabilities in specific detail. The following are a few talks I’d suggest to get an idea of what GraphQL can do and what its various aspects provide.

Third Reason – Minimal Time, Maximum Benefit

Attending the conference, which is online, will only require whatever amount of time you’d like to put into it! Registration is free, so join for the talks you want, or even join me for one of the topic tables or workshops that I’ll be hosting and teaching!

Hope to see you in the chat rooms! If you’ve got any questions feel free to reach out and ask me, my DMs are open on Twitter @Adron and you can always just leave a comment here too!

NOLA Vieux Carré Hack n’ Life n’ Lagniappe

I’ve been organizing conferences (with other awesome organizers of course, it’s never a singular person getting that work done!) for a long while now and they’re what they are. Then along came the pandemic and splat, in person conferences became extinct. I’m sure they’ll be back, but I’m not entirely sure they ought to come back. At least, they ought not come back in the same way they existed pre-pandemic time.

Mississippi River in New Orleans along the ole’ Crescent

There’s another type of get-together I’ve been thinking of that I’m really excited about, one I was fortunate enough to experience a bunch of years ago in New Orleans with an awesome group of folks. To add a little context: I lived in New Orleans for a good while and grew up about 45 minutes from the city, across the state line in Mississippi. With that, I feel like I’ve got a little bit of context for the New Orleans lifestyle. I must add, it is distinctively and specifically unique – living a New Orleans life is like nothing else in these United States, not even remotely!

When I lived in the area I loved many aspects of the city, and there were aspects I was not happy with. A few parts of the city make the famous south side of Chicago seem like a peaceful hippy village, but on the other side of the spectrum New Orleans has an intense passion and love among its people. The city is amazing, beautiful, and honestly a marvel of engineering (much of it sits below sea level). It stands as a monument to passion, music, love, and more, and that passion and love of life itself is a positive among positives that, in the end, vastly outweighs any of the negatives.

A Dose of That NOLA Life

It’s that famous street y’all!

This adventure I experienced a number of years ago went something like this. In 2010 I had a conference to attend where I was going to speak about various data analysis techniques, coding project ideas, and related technologies around web and data analytics. At the time I worked for a company called WebTrends with a solid bunch. The conference was all set and would be a great time, but it wasn’t the key experience of this trip.

Some friends with a business startup that also were attending the conference decided to rent a house down near Decatur Street. They rented this house and turned it into a coder’s house for a full week! It was a wildly entertaining, enjoyable, unique, and worthwhile experience to undertake. In addition we were wildly productive! Implementing a number of features, swarming on some ideas, and writing up a number of ideas for future implementation while thinking out the design in a great thorough way. It was spectacular!

But there was more, much more to this truly excellent trip. We had access to New Orleans after all, which is well known for truly epic food – arguably some of the best options – to explore flavors, tastes, and truly expansive ideas in foodie explorations! The local creole food, the surrounding local southern food, and the combinations therein are unto themselves not comparable in any other part of the United States. Also no, New York, San Francisco, Portland, or anywhere else doesn’t even come close in food comparisons, and I’m not even going to engage in that silliness. New Orleans food is a culinary delight in its own world ranking! As can be seen below…

In addition, since I knew the city well, there were streets to walk and places to explore: Jax Brewery, the markets, the levees along the riverfront, a great riverwalk, and steamship paddle wheelers that traverse the Mississippi River for some amazing explorations, views, and food too!

Ok, ok, ok, so that’s a lot of me telling you about the awesomeness of New Orleans. If you’re not into the idea of exploring or visiting the city, I can’t really do much more to sell you on the trip. But in the next part of this post I’ll detail an idea: forming a krewe to head south to New Orleans, build awesome software, eat wonderful food, and generally live the relaxed life for a solid week or so. The idea is that this krewe will be a parade of its own that’ll set up shop and live this for the escape, the celebration, and the experience of it all! If this sounds interesting to you, read on – here are the details.

How This Would Work

For some, the option of choice may be to fly into Louis Armstrong Airport. For others, it’d be taking the train in from Chicago, Memphis, or elsewhere onboard the City of New Orleans; out of Washington DC, Charlotte, Atlanta, Montgomery, or elsewhere onboard the Crescent; or in from the west onboard the Sunset Limited. Upon arriving we’d converge at the house or houses we’d choose for this adventure, where we’d live for the week and get set up for the projects we’d do. That night we’d gather for a grand dinner at our first excellent destination.

Day one breakfast at Lil Dizzy’s Cafe & Coding Plans

The first day we’d all get breakfast at Lil Dizzy’s Cafe or somewhere thereabouts. There we’d get fueled up with a most epic food win, then depart to plot what we’ll create for the week. This is when we’d get a full plan and some goals together as a group, and decide whether we want to break out into smaller groups (depending on our overall group size). We’d find a good place (likely organized well before the trip) and gather there, post-wicked-awesome-amazing-good breakfast, and get into all this. That would be the goal for day one!

Looking at that sinking (yes, by almost an inch per year!) Central Business District in New Orleans!

Day two rolling in… later rise, more good food, and coding time

Day two rolling in. We’d rise a bit later and get some piping hot coffee, and maybe a kicker, at Cafe Du Monde for the start of the day. Once collected we’d gather for some day hacking, or maybe check out the brewery blocks (they’re more than just breweries, just sayin’). Then we’d get in some evening coding, building, and creating, then back into some food and entertainment of whatever sort for the evening. Possibly some jazz at Julius Kimbrough’s Prime Example, the Little Gem Saloon, or the Spotted Cat. Either way, a good time and a good evening however we want to slice and dice it up.

Day three, onward and forward and advance!

Day three and onward would continue along this theme. Dynamic organization with a loosely coupled and loosely designed scheduled workflow. Mostly to keep it flexible to live NOLA while we’re there. All the while we get to build something as a krewe (team, crew, cohort, however you’d call the group)!

This would continue for the rest of the week. I’ll have more ideas, more to this proposal, more to this trip coming in subsequent blog posts. This post has one purpose, to get the idea introduced to you dear reader and to start the conversation about getting this event put together. If you’d be interested in this idea, please reach out to me via Twitter @Adron, or you can message me via my Contact Form, or if you have some other means – txt me, sms me, slack me, or whatever – that’ll work too. Whatever the medium, let’s get a conversation started about traveling down to the Crescent City for an EPIC week of food, life, music, and hacking together a solution for whatever it is we create!

For more on this, follow me on Twitter, stay tuned here on the blog, and eventually we’ll get an organizing krewe together and start getting together more specifics, like dates and travel times, core ideas, and more.

Cheers!

References:

  • New Orleans skyline as featured image above is from Wikipedia Commons.
  • I did try to make sure there wasn’t rights issues with all those glorious food pictures, but will fix if anything is contested.

TRIP REPORT: QCon SF 2019, Amtrak Coast Starlight, #Bikelife in San Francisco, and Thoughts

This past week has been QCon. I departed last Sunday on the Coast Starlight. My preference is to take the train when it’s possible. Sometimes the schedule allows it, sometimes it doesn’t. This trip, the schedule was perfect for a little coding time on the train, reading, and introspection. Taking the train always gives me a bunch of time to do these things uninterrupted while being comfortable and enjoying the countryside rolling by.

The train got out of the station and I cut some video for a VLOG episode or two. To note, I’ve got more than a few VLOGs of the week and its various adventures, some linked in this post. I hope they’re interesting and in some cases informational! Feel free to ask questions; I’m more than happy to elaborate on any of the videos, content, and related topics.

Departing Seattle for San Francisco to attend QCon

The train departs at 9:45am from King Street Station. If I had to drive or take transit I’d have to get up at about 6am to get there and fiddle with luggage and all that, but since I was cycling bikelife style to the station, I got up around 7:15. However, I didn’t follow that schedule and made a coffee stop on the way.

When I arrived at the station I saw one of those post boards that showed the old Union Station near King Street Station, and I point out a few details about the two. I included some tips for bike-life traveling via the train too. Then I rolled on out to the platform and boarded. Watch the video for a short summary of my departure and boarding the train.

The countryside is beautiful on this trip, and getting into Oakland and the ferry ride across the bay is spectacular. I had to, of course, VLOG a bit of that too.

After getting in I made my way back down via Valencia onto Market Street to the Hyatt for QCon Day 1 events. A VLOG on that run with a little montage and then some thoughts.

First thoughts: it won’t be soon enough that we get SOVs (Single Occupant Vehicles) off of Market Street altogether. The street is used in a vastly superior way having transit, active transport, and work vehicles as is. Having SOVs plying the street just makes it dangerous and clogs up the whole thing, but alas, that’s just a first thought.

I got into QCon and was super stoked to catch a few talks and talk to fellow data folks. I had noted though, even as a sponsor, our badges don’t get us access to anything really but the sponsorship hallway. That was kind of a bummer, so in the interim I had to work some magic so that I could catch some talks!

Pulumi & Languages of Infrastructure by Joe Duffy was the first talk I wanted to see. Alas, with scheduling I couldn’t make it. The description read,

“We have all become cloud developers. Every day we use the cloud to supercharge our applications, deliver new capabilities, and reach scales previously unheard of. Leveraging the cloud effectively, however, means navigating and mastering the ever-expanding infrastructure landscape, including public cloud services for compute, data, and AI; containers, serverless, and Kubernetes; hybrid environments; and even SaaS — often many at once.

Join us to learn about the modern languages, tools, and techniques that leading-edge companies are using to innovate in this world of ever-increasing cloud capabilities. We will explore: how to create, deploy, and manage cloud applications and infrastructures; approaches for cloud architectures and continuous delivery; and how modularity and reuse is being applied to infrastructure to tame the complexity, boost productivity, and ensure secure best practices.”

Hopefully we can get Joe to come speak at Seattle Scalability in the coming year! I’d even like to setup a hack day akin to a workshop to try out some of these techniques and related languages for infrastructure for the meet! Ping me Joe and we’ll make it happen!

The next talk I really wanted to catch was Lachlan’s “Helm 3: A Mariner’s Delight”.

“Adjusting your spyglass and looking out over the water, you can see how useful a package manager like Helm is. Perhaps you’ve used it to manage the fractal complexity of packages on your Kubernetes clusters (without losing track of versions stashed in the hold). But Helm 3 is rumored to be different, and you’re ready to get started on this exciting voyage – as soon as you have some idea of what’s port and what’s starboard!

In this story-fueled session, we’ll take you through differences from the Helm of yore, tips for a successful rollout or upgrade, and opportunities to shape the project’s future. The cloud native waters can be choppy, but a technical deep dive powered by open source tooling will steer you right!”

But again, my scheduling and access prevented this but I’m hopeful. This next week is KubeCon and I should be able to catch up with a number of people, maybe even Lachlan, on the Helm 3 bits!

Other talks that I might have or might not have officially attended included “Beyond Microservices: Streams, State and Scalability”, “Better Living through Software at The Human Utility”, and “Parsing JSON Really Quickly: Lessons Learned”. I hear they were all spectacular talks! 😉

Day 2 rolled in. Talked with Auth0 and Solace at their respective booths, if you’re curious.

After all that, another solid QCon, I’ll make sure to get a full pass next time if I can make it. Unless of course they fix that ranked access sponsorship pass mess, then I’d happily opt for that again. It is after all rather interesting to speak with all the companies.

After the conference I put together an exit VLOG. Enjoy! Catch everybody next time!

Next week, on to KubeCon, cuz two conferences in two weeks is like a two-fer!