Leaving the Cult

I have been trying an experiment over the past couple of months: no caffeine. More specifically, only decaf coffee. I haven’t had caffeine in a little over two months, and here are my observations.

I love the ritual of coffee. Starting the morning with a hot beverage is a lovely way to begin one’s day, and decaf continues this tradition. Depending on the type of day, the method of preparation changes: a Moccamaster or an AeroPress. The great thing about this experiment is that I do not need coffee to start my day. There is no headache if I skip my morning cup. I am also finding that my energy is more consistent through the day, with fewer frantic spikes as caffeine and cortisol flood my system.

I have found a local coffee roaster who has a decaf espresso roast that is perfect for the espresso machine, and I continue to experiment with the decaf coffee that I use for non-espresso brewing methods. Wimp Decaf is a great cup and I have been considering getting a couple of different bags of Dekaf to try.

I’ve found that “commercial coffee” from Starbucks is middling to poor, not something I generally look forward to. I do think the new Starbucks Clover brewer is an improvement over the traditional brew and boil coffee urn, and there is always decaf available. Coffee I don’t make myself is always suspect, meaning that it could be caffeinated, so when ordering I try to emphasize that the decaffeinated option is critical.

The benefit that I have noticed in removing caffeine from my system is the improvement in my sleep. With no caffeine sneaking in after noon, I am tired at bedtime and do not find myself staring at the ceiling in the dark. This improvement has been the most worthwhile reason for leaving the cult of caffeine.

A life without caffeine includes quality sleep, consistent energy, no headaches, and no addiction. Caffeine may be vital to some, but no longer to me.


Bracing

I have been lifting weights in a dedicated fashion since at least 2010, when I fell head first into the world of CrossFit. Since then, I have tried to move heavy objects several times a week, albeit with less zeal and fewer ripped calluses than in my younger days.

One important part of a weight lifting practice is learning to brace one’s core muscles and perform the Valsalva maneuver. This combination protects the spine by increasing the internal pressure of the torso and ensures that there is no weak point along the torso that a heavy weight could exploit, causing injury. Anyone who has been in a weight room knows the sound of a lifter exhaling at the end of a rep, which can be the indicator of a good brace.

I feel like I have been bracing a lot lately.

I could gesture broadly at the state of things in the world or my country and would be perfectly justified.

I could explain the stressors of the current season of work and would be justified.

I could describe the unyielding calendar that leads to the departure of my first child for college and would be justified.

Those are not the most immediate reasons for my current bracing.

The current brace is for the impending death of Earl, the family cat. He is old and it is clear his health is failing. A veterinarian would either tell us about our options for helping him along or utter the dreaded word “exploratory”, but I am unwilling to face a diagnosis of a terminal disease or entertain a rooting around and medicating before the inevitable comes, well, inevitably. He is not in pain, he is still his charming self, but he is certainly not well. We are providing him with the riches that he enjoys, tinned fish and cream along with his normal food, trying to make his last days comfortable, but understanding that this road goes one way.

I am reminded of my grandfather’s last weeks. My entire extended family gathered and spent as much time with him as we could. He was frail, not the man I remember from my childhood, not the man that I drew pictures of titled “Super Gramps”. Earl is in the same stage as my grandfather was when we gathered, not exactly on his deathbed, but certainly a shadow of himself.

The hardest part is that Earl has been, from the day we brought him home from the shelter, my daughter’s cat. She chose him. She bonded with him at the shelter and kindled that bond for the years we have lived with him. And I know his passing will devastate her.

In many ways, it is good that she took a college course last summer covering Eastern religions, including Buddhism. I think that her learning about the five remembrances may provide her a modicum of comfort or understanding.

The problem with bracing is that it isn’t possible to constantly brace. The weightlifter must release the brace. Given the multiple layers of bracing about the cat, my daughter, work, and the world, I am ready to release. I am weary. I am tired. But I must continue to brace, lest the heavy weight find a weak point and break me.

Deep breath, tighten the core.

“You can do this,” I say to myself.

Still bracing.


Why I don’t worry about the photos

John Rosenthal writing for the Hedgehog Review (one of my wife’s favorite publications):

I thought about my son’s birth. I was in the room when it happened, but was I really there, or had I been hiding behind my camera?

In reading his essay, I recalled one I had written in 2013 and thought it would be worth revisiting.


This week I went to my son’s Christmas chapel, which included him singing “Twinkle Twinkle” and “Go Tell It On The Mountain”. He was great and I thoroughly enjoyed his performance. He has really blossomed in the past few months of school.

The thing that was interesting to me was the rabid nature of the parents trying to capture photos and video of their kids. Multiple cameras, video equipment, cellphones, all being schlepped and used, with everyone jockeying for a good vantage point. They seem so focused on capturing the event that they aren’t actually there for the event, instead experiencing it through the lenses and displays of their devices, a technological intermediary overlaying battery indicators, counts of images or minutes captured, and storage space remaining. All of this is ambient data unrelated to the actual event, particularly if the one capturing is continually checking the camera’s screen to see the results of their last button press.

While I realize that there are things, illness and injury, that can take my memories from me, I would much rather be fully present at events like my son’s Christmas chapel than a slave to capturing it. Sure, I won’t remember every detail of all of the events in my kids’ lives, but I will truly experience them and enjoy them with my kids. If I forget the details, my kids or wife will help remind me. And if they can’t remember it in striking detail, I would love to hear their story about it.

The other reason why I am not fanatical about capturing every moment of my kids’ lives: the mental overhead. A woman I know takes photos at an event and then spends the moments after taking the picture trying to decide which ones to keep, which ones to delete, and which ones to post to Facebook. The moment she chooses for this ritual is immediately after capture, while the event is still in swing, while there is still a chance to be part of the experience.

The ritual of sorting, selecting, and sharing is part of the mental overhead that I am not willing to expend. In addition, I always wonder what this woman and the other voraciously capturing parents are going to do with the images and videos they take.

I am sure some make scrapbooks, and others get prints. Most sit untouched, unseen. The rare few are shared on Facebook where they get some “likes” and a couple of comments. But all of this, getting prints, long term digital storage, posting to Facebook, it is all mental overhead.

“Where did I put those pictures I got prints of?”

“Did I download those photos off of my camera?”

“Where are those pictures on my computer? Did I upload them to Facebook?”

Not to mention backing the digital versions up. Seth Clifford:

I don’t remember the first time I lost data. I don’t even remember what it was. I do remember the feeling of utter despair though, and the declaration that I wouldn’t let it happen again. Since then, so much of my time and mental energy has been spent thinking about ways to prevent this from happening and creating layers of redundancy around my data and in many cases the data of those close to me.

Seth writes later about the first of the Buddha’s Four Noble Truths: The anxiety or stress of trying to hold onto things that are constantly changing.

In trying to preserve the moment, we can destroy our actual experience of it. The overhead of deciding what to do with the captured artifacts can lead to more stress, and the loss of what was considered safe will almost always lead to suffering.

I hope never to be struck with a disease that corrupts my memory; I want to cherish the memories I have for as long as I live. I also don’t want to work myself into a basket case over what to share, where to store, and how best to capture a moment.

In a sermon at church, the assistant rector shared how, during the season of Advent, Christians should prepare to be awakened by Christ’s birth, but we have any number of things, from cellphones to calendars to todo lists, keeping us asleep to Christ’s presence in the mundane, everyday moments and interactions. Whether it is seeing Jesus in the moment or reducing dukkha in our lives, mindfulness is not just being physically present, but being fully present.

Originally published on December 13, 2013. Lightly edited for typos and clarity.


Agentfarce

Artificial intelligence is being wedged into every app at a breakneck pace. Not just general AI assistants, like Siri, but standard workplace tools like Microsoft Word and Excel. Companies are betting the farm on AI-infused software, or at least trying not to be left behind. I can see plenty of places in which a sprinkle of AI can be beneficial for users, but explaining to everyday consumers what value is being added is difficult.

Enter Salesforce, the cloud-based software behemoth that has a solution for every business problem. Salesforce powers many of the “back office” functions that people encounter every day, and it is a natural fit for the magic AI dust. The issue that I have is with how it is being advertised.

Salesforce partnered with Matthew McConaughey, giving him the title of “Brand Partner and Advisor”. I’m not sure what a Brand Partner is, but if the current spate of advertisements is anything to judge the relationship from, Salesforce got fleeced. Strong accusation, but follow along.

The first ad for Salesforce’s AI offering features Mr. McConaughey on a steam locomotive, dressed as a lawman in the Wild West. The train passengers are being robbed of their data by cowboy-tech-bro-puffer-vest-wearing types. There is a clever pun about the train already leaving the station, and a promise that Salesforce’s AI doesn’t share or steal your data. This is, on the whole, a good ad.

But the more recent incarnations have seen Mr. McConaughey play the helpless rube to Woody Harrelson’s more successful, AI-aided persona. From not knowing which gate his connecting flight is leaving from, to a dream house being sold out from underneath him, to questionable fashion choices, McConaughey is the butt of the joke because the company he is working with didn’t have Agentforce, the presumably agentic (this is the accepted jargon, not my favorite) offering from Salesforce.

The ad that has drawn my ire is known as “Dining Alfiasco,” a thirty-second spot in which Mr. McConaughey is seated outside a restaurant in the pouring rain, being served a dish that he is not interested in, all while Mr. Harrelson is across the street enjoying dinner with friends, dry and happy.

This ad is one that I cannot abide.

“The booking app I used didn’t have Agentforce. So an AI agent didn’t know to move my reservations inside or know what I like to eat, which is not that.”

The booking app didn’t have an AI agent, got it. But the restaurant didn’t have a single human working there to check the weather and adjust the seating for the evening’s dinner rush? The server who presumably seated Mr. McConaughey didn’t offer to move him inside, and he appears not to have requested it, as he is sitting outside getting soaked. He is served a plate of shrimp, which he did not order (see him grumbling about the app not knowing what he likes to eat), in the pouring rain, after which the server dashes back to the dry restaurant. And AI was supposed to solve this?

Ethan Mollick, Ralph J. Roberts Distinguished Faculty Scholar and Co-Director of the Generative AI Labs at the University of Pennsylvania’s Wharton School of Business, writes in Co-Intelligence that a human should be “in the loop” when using AI. This is to ensure that the AI is operating as expected, that there are no hallucinations (a term that is going out of style in reference to AI, but one I believe is still relevant), and that there is supervision.

This is where the Salesforce ad fails. In a restaurant there are plenty of humans; we see one who delivers the food, yet there is nothing to be done for a customer sitting in a torrential downpour. As of now, restaurants always have a human in the loop. The next ad in the series is as egregious, if not more so, with Mr. McConaughey being seen in the emergency room by an OB/GYN who puts him in stirrups and tells him from behind a curtain that it will be cold.

While I get that these are ads and are trying to be humorous, I think it is a mistake to oversimplify the use of AI to an audience that has little context for adding AI to the workplace. On one hand, the ads try to prove that AI can solve problems that could easily be resolved by a human, and on the other, they assume that all employees are robots who are unable to adapt to the situation around them.

These ads make me think that Agentforce is an agentfarce.


How do you set an alarm?

I have had an Apple Watch since the first version, later dubbed Series 0. Until the Apple Watch had a battery that could last 24 hours and fast charging, I would charge my Apple Watch at night and use an alarm clock that sat on my bedside table. At one point, I bought a Xiaomi bracelet that was the equivalent of a Fitbit: no screen, just four LEDs on the “face” of the bracelet. The one thing it had was a vibrating alarm, so I would charge my Apple Watch overnight and use the Xiaomi bracelet as my alarm clock. This setup was finally retired when the Apple Watch became more capable.

Why a vibrating alarm to wake me up?

I remember when I was required to wake up on my own. It was 6th grade. We lived in England. I had to wear a uniform with a tie, and I would be going to secondary school in the next year. It was preparing me for the unrealized sad future of living by the alarm.

And the alarm clock was a horrid object: chromed metal with a white face, shaped like the cartoon representation of an alarm clock. It required winding every night, lest the alarm die in the night due to a lack of mainspring tension. Two bells were mounted above the clock face with a small hammer between them that would, at the appropriate time, rattle between the bells and startle me from sleep.

But the appropriate time was, by default, earlier than I would have liked and, more specifically, earlier than I set. The cruelty of the clock’s mechanical nature was that the alarm rang before the time set on the clock face: the alarm would ring when the minute hand first occluded the alarm hand. It took me far too long to realize this. Every morning, before the time that I thought I would be waking up, my alarm would startle me awake with the violence of a horrible buzzing ring. Being awakened by a cymbal crashed near one’s face or a bucket of cold water would be an equivalent experience.

Why a vibrating alarm to wake me up?

I am not one who falls back asleep after the alarm goes off. If I do, it is a sign that I have not been sleeping enough and need to address it post-haste. For that reason, I do not require an elaborate series of alarms to ensure I get up. I am not one who requires a “snooze” before I am ready to rise. With that noted, I do live with a lovely person who values her rest more than I do. My wake-up time is earlier than hers.

I cannot subject the person I live with to the alarms that wake me up, because what if she is like me and cannot fall back asleep? How rude is that? To have an alarm go off that is not for you, but you’re awake anyway. Rude.

This is why I chose to use a vibrating alarm to wake me up. The Xiaomi band was not great, and the Apple Watch is better.

Except…

I charge my Apple Watch as I brush my teeth, which means putting it back on my wrist before bed. If I do not authenticate on my Apple Watch, the alarm will not ring.

The best I can figure is that the Apple Watch will ring if it is on the charger or if it is authenticated on the wrist. But with the watch on the wrist and unauthenticated, the alarm algorithm appears to think it is not supposed to reveal that personal information and thus does not wake me up. Worse, a loose strap can cause the watch to lock, resulting in no alarm. So it is not only an issue of not authenticating on the way to bed.

I was just going to write: I should start logging when I authenticate on my watch to better understand the pattern. But I paused. I should not have to log when I remember to punch in my PIN on my watch to get an alarm.

I now know that if I have a day that requires me to be up with an alarm, whether for work or to work an election, I have to set backups. I have two additional alarms on my nightstand, one plugged in, the other battery operated. I have not gone as far as the Jocko Willink “I have three alarms” approach, but I cannot lie and say I haven’t considered it.

While the Apple phrase “It just works” can be applied to many facets of my computing life, this is one instance in which more effort is required; it does not “just work.” The failure rate of my Apple Watch’s alarm is rapidly approaching the level of trust I have in Siri to do complex tasks delivered in a conversational format. This is one place where “It just works” feels like a cruel joke.


Apple Music’s Loops

My house is filled with music. Every day we have music coming from at least one HomePod Mini, and drifting between rooms with the same music playing seamlessly is a wonderful way to entertain and party. We will regularly shout into the air “Hey Siri, play…” the genre or decade that strikes the mood. I have found many new and interesting songs that I would not have found without an algorithmically generated playlist.

There are two types of Apple Music days: days in which the algorithm picks the perfect mix, and days in which the algorithm kills the vibe. In both cases, there is a similar pattern of musical loops. A musical loop is a playlist with a “seed” song that repeats; each time I hear that “seed” song, I count one complete loop. Eventually, the playlist comes back to the first song played.

The most interesting thing about these music loops is the quality of the songs in the loop. There is the “seed” song, a hit, one that everyone can tap their toe to. After the anchor, the next several songs fit the genre but are by bands that I have never heard of, not covers, but adjacent to popular songs. Then there are the B-sides, or the tracks that came before the artist’s hit on the album. After the B-sides, the loop heads into the weird part: songs that fit the genre but come from bands I have never heard of.

The other item of note is the source of the hit songs. It is rare that the hit songs in a given loop come from the original artist’s album; instead they are from a compilation album or a re-release or remaster. The number of times a hit song comes from “Soul Chillout” or “90s Hits Best 90s Music” (a real compilation album) is remarkable.

Does Apple algorithmically push the music towards songs with lower royalty payments? In a world in which Spotify has been serving listeners AI-generated music, it is not hard to imagine that Apple favors lower-cost versions of popular songs. Maybe I’m reading too much into the ambient music in my house, but I keep looping back to the same observations.


Seeing all possible futures

With reports of Alexa+, Amazon’s “next generation Alexa, powered by generative AI,” being released in such a limited fashion as to be effectively unreleased, and Apple’s Apple Intelligence-based new Siri being delayed, I think I see where the issues around widespread use of both of these services lie. Both offerings are trying to be the personal assistant they were originally billed to be, but adding artificial intelligence (a term I will use to encompass Large Language Models and other generative technologies) is turning out to be not as easy as it would seem.

Issue 1: Seeing every possible future

All software before the current AI boom was (the past tense feels weird here, but given that software developers are increasingly using AI to write software, it feels apt) deterministic, meaning that all inputs were controlled and all outputs could be determined from the logic of the software. Think of a calculator app: pressing the “2” key, then “+”, then “2” are all inputs from a known set of numbers and functions. Pressing the “=” key runs those inputs through the application’s internal logic, which results in “4”. We know what we put in, and we have a very good idea of what we will get out based on the program. This is like being able to see every possible future path.

The difference with software utilizing artificial intelligence is the non-deterministic nature of the output: you will not get the same output even with the same prompt (while there may be some technical quibbling with this point, I would argue that it is accurate). The AI is guessing the next word, given the patterns of the words before the one being generated. Non-determinism means that the software developer cannot see every, or potentially any, possible future.
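To make the contrast concrete, here is a toy sketch in Python. The function names and the word distribution are invented for illustration; the sampler is a stand-in for an LLM’s next-word step, not any real model’s code.

```python
import random

# Deterministic: the same inputs always produce the same output.
# calculator() is a toy stand-in, not any real app's code.
def calculator(a, op, b):
    if op == "+":
        return a + b
    raise ValueError("unsupported operation")

# Non-deterministic: a toy stand-in for an LLM's next-word sampling.
# Given the same "prompt", the next word is drawn from a probability
# distribution, so repeated calls can return different words.
def next_word(candidates, weights, rng):
    return rng.choices(candidates, weights=weights, k=1)[0]

print(calculator(2, "+", 2))  # always prints 4

rng = random.Random()
words = ["blue", "green", "red"]
print(next_word(words, [0.5, 0.3, 0.2], rng))  # any of the three
```

Run the deterministic function a thousand times and you get the same answer a thousand times; run the sampler a thousand times and you get a distribution.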

The difference between deterministic and non-deterministic results is the reason that companies are having a hard time harnessing the technology in their software. This is what Paul Kafasis found when asking both ChatGPT and Siri about Super Bowl winners. If the AI is guessing at the next word from the soup of knowledge it was trained on, the likelihood of picking words that make sense, like the names of teams, is high, but the validity of the statement can be low.

The problem with using AI to derive factual answers is that we know there is a factual answer. We are asking a deterministic question of a non-deterministic system. The same is true of the concept video that Apple showed at the 2024 Worldwide Developers Conference. “What time does my mom’s flight arrive?” is a question that has a deterministic answer. An intern with the right access could find the email or messages, look up flight numbers, check for delays or changes, and report the arrival time. But ask a non-deterministic system, and you may end up apologizing to your mother.

AI systems are not great for factual answers, but the AI does not know that. It will confidently answer your question, because it is simply guessing the next word based on your question. It wants to provide an answer, so it carries on, right or wrong.

Issue 2: Quality matters

“What month is it?” became another nail in the Siri coffin, but I’m not as quick to call this a failure of AI in general; it is a specific failure of Apple’s AI implementation. As noted in John Gruber’s update, ChatGPT answers correctly.

The inability to answer the question appears to be an issue of prompt engineering quality. Under the computer science (and life) principle of “Garbage In, Garbage Out,” prompt engineering is the idea that the better you form your prompt to a Large Language Model (essentially, what I have been referring to as artificial intelligence), the better the results.

(As the AI makers release more advanced models, prompt engineering may be less necessary, but on the whole it is a valuable skill set to have in working with AI.)

The immediate issue with the prompt née question is that it is not a good prompt. It is like asking an amnesiac or your “innie” what day it is. It is knowledge that requires either awareness of continuity (what was yesterday, so what is today?) or context clues. I would venture that ChatGPT was able to answer the question correctly because the submission of the prompt also includes the submission date as part of the instructions that augment the prompt. We know that the software developers trying to harness non-deterministic AI augment prompts with instructions. It is reasonable to assume that ChatGPT includes the date of the prompt in the instructions, as it would allow for prompt context relative to the model’s knowledge cutoff date (essentially, when the model stopped receiving information before being boxed up, tuned, and shipped).
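If that assumption holds, the mechanism might look something like this sketch. The function name and the instruction wording are mine, a guess at the technique, not OpenAI’s actual system prompt.

```python
from datetime import date

# Hypothetical prompt augmentation: prepend hidden instructions,
# including today's date, so the model can answer date-relative
# questions like "What month is it?".
def augment_prompt(user_prompt, today=None):
    today = today or date.today()
    instructions = f"Today's date is {today.isoformat()}."
    return f"{instructions}\n\nUser: {user_prompt}"

print(augment_prompt("What month is it?", today=date(2025, 3, 10)))
```

The model never needs continuity or awareness; the deterministic fact is simply handed to it alongside the question.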

I am skeptical that the current technology is sufficient to overcome these two issues. I do not believe secretly including “do not hallucinate” in your prompt instructions will be enough to correct for poor prompt engineering. I am equally, if not more, dubious that our current AI systems can match a well-trained intern in finding deterministic answers to complex problems reliant upon reasoning and context. Both Amazon and Apple are pouring significant resources into the problem, and they both have smart teams pushing the boundaries of what is possible alongside frontier AI companies (OpenAI, Anthropic, and the like), but it is hard to trust these tools until they can improve the way the software combines both non-deterministic, creative output and deterministic, factual results.


In the Kitchen

I’m not sure when it happened, but the start of my Siri-related woes feels tied to the rollout of Apple Intelligence. To start from the beginning…

I live in a house with other humans: my wife and two kids. Our house has an open floor plan, not to be confused with open concept; there are doorways and walls, but the rooms flow together. I have HomePods Mini in the kitchen, the dining room, and the salon (what I would have called a “formal living room” when I was young), and we regularly have music playing. The goal is to have the same music playing in all three rooms, so one can move between rooms without hearing different music, or music out of sync.

My wife will speak to the house “Siri, play jazz in the kitchen.” And as expected the kitchen HomePod Mini starts playing a “station” or “playlist” (more on that distinction in a later post).

The inclusion of “in the kitchen” has been a learned addition, the Apple-flavored version of what Panos Panay calls “Alexa speak.” Due to the shape of the house and the proximity of the HomePods Mini, it has been a common occurrence that a HomePod Mini in another room will hear the wake word and start playing. The sensitivity and zealousness of the HomePods to respond can result in the music playing across the house instead of in the room one is in.

“Siri, play jazz in the kitchen.”

Success, jazz plays on the Kitchen HomePod Mini.

Sometimes, the mood is not jazz. The mood dictates music, but there is no specific genre required. “Hey Siri, play some music,” has resulted in the “Just for you” mix being played. The “Just for you” mix is a gamble, but after years of liking songs and adding them to Apple Music libraries, the mix can be a perfect match if the mood of “the thread” is right.

The addition of “in the kitchen” is habitual but necessary “Siri speak,” again based on the sensitivity and zealous nature of Siri on the HomePod. The phrase “Hey Siri, play some music in the kitchen,” has been a success.

Until recently…

Now, the same phrase will result in the kitchen HomePod playing one of three songs: “In the Kitchen” by Renée Rapp, “La Linda” by Gabriel Morales (a presumed mishearing of “in the kitchen”), or “IN THE KITCHEN (feat. Fox BD)” by Baby Mel.

I have confirmed that the HomePod Mini being addressed is in the “kitchen” room in the Home app, and it was recently reset due to ongoing connectivity issues (which thankfully have been resolved with a slight relocation). The only change that I cannot account for is the addition of Apple Intelligence across the Apple ecosystem. Something is indeed rotten in the state of Cupertino.