AI Should Be Concise

One of the things that I’ve noticed about the rise of AI is that everything feels so wordy now. I’m sure it’s a byproduct of the popularity of ChatGPT and other LLMs that are designed for language. You’ve likely seen it too on websites that have paragraphs of text that feel unnecessary. Maybe you’re looking for an answer to a specific question. You could be trying to find a recipe or even a code block for a problem. What you find is a wall of text that feels pieced together by someone that doesn’t know how to write.

The Soul of Wit

I feel like the biggest issue with those overly wordy answers comes down to the way people feel about unnecessary exposition. AI is built to write on a topic and fill out a word count. Much like a student trying to pad out the page length of a required report, AI doesn’t know when to shut up. It adds words that aren’t really required. I realize there are modes of AI content creation that value being concise, but those aren’t the default.

I use AI quite a bit to summarize long articles, many of which I’m sure were created with AI assistance in the first place. AI is quite adept at removing the unneeded pieces, likely because it knows where they were inserted in the first place. It took me a while to understand why this bothered me so much. What is it about having a computer spend way too much time explaining answers to you that feels wrong?
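If you’re curious, that kind of summarization is easy to wire up yourself, and the brevity has to be asked for explicitly. Here’s a minimal sketch assuming the OpenAI Python client, where the system prompt is what forces the terseness; the model name and word limit are placeholders, not recommendations.

```python
# Minimal sketch: asking an LLM for a terse summary instead of the default
# wall of text. Assumes the OpenAI Python client (openai >= 1.0) and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def summarize(article_text: str, max_words: int = 75) -> str:
    """Return a short, no-fluff summary of a long article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a terse summarizer. Answer in plain sentences, "
                    f"no more than {max_words} words, no preamble, no filler."
                ),
            },
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content
```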

Enterprise D Bridge

Then it hit me. It felt wrong because we already have a perfect example of what an intelligence should feel like when it answers you. It comes courtesy of Gene Roddenberry and sounds just like his wife Majel Barrett-Roddenberry. You’ve probably guessed that it’s the Starfleet computer system found on board every Federation starship. If you’ve watched any series since The Next Generation, you’ve heard the voice of the ship computer executing commands and providing information to the crew members, guests, and even holographic projections.

Why is the Star Trek computer a better example of AI behavior to me? In part because it provides information in the most concise manner possible. When the captain asks a question the answer is produced. No paragraphs necessary. No use of delve or convolutional needed. It produces the requested info promptly. Could you imagine a ship’s computer that drones on for three paragraphs before telling the first officer that the energy pulse is deadly and the shields need to be raised?

Quality Over Quantity

I’m sure you already know someone that thinks they know a lot about a subject and is more than happy to tell you about what they know. Do they tend to answer questions or explain concepts tersely? Or do they add in filler words and try to talk around tricky pieces in order to seem like they have more knowledge than they actually do? Can you tell the difference? I’m willing to bet that you can.

That’s why GPT-style LLM content creation feels so soulless. We’re conditioned to appreciate precision. The longer someone goes on about something, the more likely we are to either tune out or suspect it’s not an accurate answer. That’s actually a way that interrogators are trained to uncover falsehoods and lies. People stretching the truth are more likely to use more words in their statements.

There’s also another reason for the padding. Think about how many ads are usually running on sites that have this kind of AI-generated content. Is it just a few? Or as many as possible, inserted between every paragraph? It’s not unlike video sites like YouTube inserting ads at certain points in the video. If an additional ad can be added once a video hits a minimum of twenty minutes, how long do you think the average video is going to be on channels that rely on ad revenue? The actual substance of the content isn’t as important as getting those extra ad clicks.


Tom’s Take

It’s unlikely that my ramblings about ChatGPT are going to change things any time soon. I’d rather have the precision of Star Trek over the hollow content that spins yarns about family life before getting to the actual recipe. Maybe I’m in the minority. But I feel like my audience would prefer getting the results they want and doing away with the unnecessary pieces. Could this blog post have been a lot shorter and just said “Stop being so wordy”? Sure. But it’s long because it was written by a human.

Butchering AI

I once heard a quote that said, “The hardest part of being a butcher is knowing where to cut.” If you’ve ever eaten a cut of meat you know that the difference between a tender steak and a piece of meat that needs hours of tenderizing is a matter of inches. Butchers train for years to be able to make the right cuts in the right pieces of meat with speed and precision. There’s even an excellent Medium article about the dying art of butchering.

One thing that struck me in that article is how the art of butchering relates to AI. Yes, I know it’s a bit corny and not an easy segue into a technical topic but that transition is about as subtle as the way AI has come crashing through the door to take over every facet of our lives. It used to be that AI was some sci-fi term we used to describe intelligence emerging in computer systems. Now, AI is optimizing my PC searches and helping with image editing and creation. It’s easy, right?

Except some of those things that AI promises to excel at doing are things that professionals have spent years honing their skills at performing. Take this article announcing the release of the Microsoft Copilot+ PC. One of the features being touted is using neural processing units (NPUs) to let applications automatically remove the background from an image in a video clip editor. Sounds cool, right? Have you ever tried to use an image editor to remove or blur the background of an image? I did a few weeks ago and it was a maddening experience. I looked for a number of how-to guides and none of them had good info. In fact, most of the searches just led me to apps that claimed to use some form of AI to remove the background for me. Which isn’t what I wanted.
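For what it’s worth, those one-click background removers are mostly a thin wrapper around a pretrained segmentation model. Here’s a minimal sketch assuming the open-source rembg library and Pillow; the file names are placeholders.

```python
# A minimal sketch of one-click background removal, assuming the open-source
# rembg library (pip install rembg) and Pillow. File names are placeholders.
from PIL import Image
from rembg import remove

frame = Image.open("clip_frame.png")          # original frame from the video clip
cutout = remove(frame)                        # returns an RGBA image with the background cleared
cutout.save("clip_frame_no_background.png")   # transparent background, ready to composite
```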

Practice Makes Perfect

Bruce Lee said, “I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.” His point was that practice of a single thing is what makes a professional stand apart. I may know a lot about history, for example, but I’ll never be as knowledgeable about Byzantine history as someone who has spent their whole career studying it. Humans develop skills via repetition and learning. It’s how our brains are wired. We pick out patterns and we reinforce them.

AI attempts to simulate this pattern recognition and operationalize it. However, the learning process we’ve simulated isn’t perfect. AI can “forget” how to do things. Sometimes this is built into the system with something like unstructured learning. Other times it’s a failure of the system inputs, such as a corrupted database or a connectivity issue. Either way, the algorithm defaults back to being a clean slate with no idea how to proceed. Even on their worst days, a butcher or a plumber never forgets how to do their job, right?

The other maddening thing is that the AI peddlers try to convince everyone that teaching their software means we never have to learn ever again. After all, the algorithm has learned everything and can do it better than a human, right? That’s true, as long as the conditions don’t change appreciably. It reminds me of signature-based virus detection from years ago. As long as the infection matched the definition you could detect it. As soon as the code changed and became polymorphic it was undetectable. That led to the rise of heuristic-based detection and eventually to the state of endpoint detection and response (EDR) we have today.
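To see how brittle that approach is, here’s a toy sketch where the “signature” is nothing more than a file hash. Real antivirus signatures were more sophisticated than this, but the failure mode is the same: change a single byte and the match disappears.

```python
# Toy illustration of signature-based detection: the "signature" here is just
# a SHA-256 hash of the file contents, so changing one byte defeats the match.
import hashlib

KNOWN_BAD_SIGNATURES = {
    # hypothetical signature database
    hashlib.sha256(b"original malware payload").hexdigest(),
}

def is_flagged(sample: bytes) -> bool:
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_SIGNATURES

print(is_flagged(b"original malware payload"))   # True  -- matches the signature
print(is_flagged(b"original malware payload!"))  # False -- one byte changed, undetected
```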

That’s a long way to say that the value in training someone to do a job isn’t in them gaining just the knowledge. It’s about training them to apply that knowledge in new situations and extrapolate from incomplete data. In the above article about the art of butchering, the author mentions that he was trained on a variety of animals and knows where the best cuts are for each. That took time and effort and practice. Today’s industrialized butcher operations train each person to make a specific cut. So the person cutting a ribeye steak doesn’t know how to make the cuts for ribs or cube steaks. They would need to be trained on that input in order to do the task. Not unlike modern AI.


Tom’s Take

You don’t pay a butcher for a steak. You pay them for knowing how to cut the best one. AI isn’t going to remove the need for professionals. It’s going to make some menial tasks easier to do, but when faced with new challenges or the need to apply skills in an oblique way, we’re still going to need to call on humans trained to think outside the box to do it without hours and days of running simulations. The human brain is still unparalleled in its ability to adapt to new stimuli and apply old lessons appropriately. Maybe you can train an AI to identify the best parts of the cow but I’ll take the butcher’s word for it.

Copilot Not Autopilot

I’ve noticed a trend recently with a lot of AI-related features being added to software. They’re being branded as “copilot” solutions. Yes, Microsoft Copilot was the first to use the name and the rest are just trying to jump in on the brand recognition, much like using “GPT” last year. The word “copilot” is so generic that it’s unlikely to be trademarked without adding more, like the company name or some other unique term. That made me wonder if the goal of using that term was simply to cash in on brand recognition or if there was more to it.

No Hands

Did you know that an airplane can land entirely unassisted? It’s true. It’s a feature commonly called Auto Land and it does exactly what it says. It uses the airport’s Instrument Landing System (ILS) to land the plane automatically. Pilots rarely use it because of a variety of factors, including the need for tiny last-minute adjustments during a very stressful part of the flight as well as the equipment requirements, such as a fairly modern ILS. That doesn’t even mention that use of Auto Land snarls airport traffic because of the need to hold other planes outside ILS range to ensure only one plane can use it.

The whole thing reminds me of when autopilot is used on most flights. Pilots usually take the controls during takeoff and landing, which are the two most critical phases of flight. For the rest, autopilot is used a lot of the time. Those are the boring sections where you’re just flying a straight line between waypoints on your flight plan. That’s something that automated controls excel at doing. Pilots can monitor but don’t need to have their full attention on the readings every second of the flight.

Pilots will tell you that taking the controls for the approach and landing is just smart for many reasons, chief among them that it’s something they’re trained to do. More importantly, it places the overall control of the landing in the hands of someone that can think creatively and isn’t just relying on a script and some instrument readings to land. Yes, that is what ILS was designed to do but someone should always be there to ensure that what’s been sent is what should be followed.

Pilot to Copilot

As you can guess, the parallels to using AI in your organization are easy to draw. AI may have great suggestions and may even come up with novel ways of making you more productive, but it’s not the only solution to your problems. I think the copilot metaphor is perfectly illustrated by the rush to have GPT chatbots write reports and articles last year.

People don’t like writing. At least, that’s the feeling that I got when I saw how many people were feeding prompts to OpenAI and having it do the heavy lifting. Not every output was good. Some of it was pretty terrible. Some of it was riddled with errors. And even the things that looked great still had that aura of something like the uncanny valley of writing. Almost right but somehow wrong.

Part of the reason for that was the way that people just assumed the AI output was better than anything they could have come up with and did no further editing to the copy. I barely trust my own skills to publish something with minimal editing. Why would I trust a know-it-all computer algorithm? Especially with something that has technical content? Blindly accepting an LLM’s attempt at content creation is just as crazy as assuming there’s no need to double-check a math calculation when the result is outside of your expectations.

Copilot works for this analogy because copilots are there to help and to be a check against error. The old adage of “trust but verify” is absolutely the way they operate. No pilot would assume they were infallible and no copilot would assume everything the pilot said was right. Human intervention is still necessary in order to make sure that the output matches the desired result. The biggest difference today is that when it comes to AI art generation or content creation, a failure to produce the desired result means wasted time. When an autopilot on an airliner makes a mistake during landing, the results are far more horrific.

People want to embrace AI to take away the drudgery of their jobs. It’s remarkably similar to how automation was going to take away our jobs before we realized it was really going to take away the boring, repetitive parts of what we do. Branding AI as “autopilot” will have negative consequences for adoption because people don’t like the idea of a computer or an algorithm doing everything for them. However, copilots are helpful and can take care of boring or menial tasks, leaving you free to concentrate on the critical parts of your job. It’s not going to replace us as much as help us.


Tom’s Take

Terminology matters. Autopilot is cold and restrictive. Having a copilot sounds like an adventure. Companies are wise not to encourage the assumption that AI is going to take over jobs and eliminate workers. The key is that people should see the solution as offering a way to offload tasks and ask for help when needed. It’s a better outcome for the people doing the job as well as the algorithms that are learning along the way.