Am I The AI Luddite?

Some thoughts on the rush to AI adoption.

I had a really good week at the Lavacon conference last month. As usual, it delivered some exceptional sessions alongside fun and engaging conversations.

Some of the sessions were informative, some educational, and many thought-provoking. And it is probably no surprise that AI was an almost constant presence in the discussions, having grown from a topic of interest last year into its own dedicated track this year.

Yet in all honesty, I remain somewhat conflicted in my feelings about both the technology and the speed of its seemingly global adoption.

In fact, partway through the first day I texted the following to my wife, Gill:

God I’m feeling like the Luddite old codger. It seems that everyone is drinking from the AI hype hose. Maybe I’m too much a writer than a techy these days.

Now don’t get me wrong, I’m not anti-AI in general terms. I’ve been honored to work on projects and products that use AI where I believe it can really help – by taking on repetitive tasks at a speed, scale, and level of accuracy that humans can’t match. AI is brilliant at pattern matching, building connections between data types, data mining, and even providing predictive analytics.

What I do have issues with is the apparent blind adoption of Generative AI. Maybe it’s because I am a writer at heart that I don’t understand the apparent rush to divest ourselves of the skill that made us human in the first place – the ability to share our personal knowledge and ideas.

As my colleague Joe Gollner pointed out on one of the slides in his Lavacon presentation, “The Unlikely (but necessary) marriage of Content & Engineering,” there is a danger he calls ‘When AI Runs Amok,’ which he defined as:

AI consuming information indiscriminately, without management guidance on objectives and guardrails, and without scalable oversight.

Organizations savor the chance to offload responsibility while harvesting superficial benefits.

Sound familiar?

As Joe pointed out:

This form of AI is as popular as it is dangerous.

Yet despite this, pretty much every other presentation I attended included phrases like “just feed content to the AI,” or “make your content AI-ready,” or “use AI to generate content…” with none of them addressing what are, to me, the underlying issues:

Legal and Moral – Where is the content that is feeding the Large Language Model (LLM) powering your AI coming from? Does your company own it, or have rights to use it? 

If you are using a public OpenAI tool, like ChatGPT, the chances are that the content driving your output is based on stolen copyrighted content just scraped from an online source without the original owner’s permission (I have found a couple of my own short stories where this has happened – so I may have a personal bias).

Then there is the fact that most platforms these days switch on AI scraping tools by default, and you have to jump through multiple hoops to disable the feature and opt out – looking at you, LinkedIn, WordPress, Meta, and others. Allowing your content to be used to train someone else’s AI should be an opt-in process, not something you have to opt out of.

Environmental – There seems to be a perception that AI is some sort of magic that just happens. In reality, the infrastructure behind it is complex and growing at an exponential rate.

The power needs and environmental impact of the huge data centers required to power AI are currently catastrophic. I’ve heard it said that power distribution systems are about five years behind the AI data centers’ current needs; we could be facing power outages caused by AI demand alone. A recent article reported that Microsoft was looking at an operating lease to recommission the infamous Three Mile Island nuclear power plant just to power its AI data centers!

Then there is the water consumption needed to cool these massive data centers. The average data center uses about 300,000 gallons of water per day – roughly the same as 100,000 homes – and each AI prompt consumes about 16 ounces of water, about the amount in a regular bottle of drinking water. Every time you make a prompt, it’s like pouring a bottle of water on the floor.

The environmental impact of a single AI prompt
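To put those numbers in perspective, here is a rough back-of-envelope sketch that uses only the figures cited above; the daily prompt volume is a hypothetical assumption for illustration, not a measured statistic.

```python
# Back-of-envelope water math using the figures cited in the article.
# The daily prompt volume is a hypothetical assumption, not a measured number.

OUNCES_PER_GALLON = 128
WATER_PER_PROMPT_OZ = 16                   # ~one bottle of water per prompt (figure above)
AVG_DATA_CENTER_GALLONS_PER_DAY = 300_000  # average data center water use (figure above)

assumed_prompts_per_day = 2_000_000        # hypothetical volume for a busy AI service

prompt_water_gallons = assumed_prompts_per_day * WATER_PER_PROMPT_OZ / OUNCES_PER_GALLON
print(f"Water for prompts alone: {prompt_water_gallons:,.0f} gallons/day")
print(f"Roughly {prompt_water_gallons / AVG_DATA_CENTER_GALLONS_PER_DAY:.2f}x "
      f"the average data center's daily water use")
```

At that assumed volume, the water attributed to prompts alone approaches the daily water use of an entire average data center.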

Business needs – Putting aside, for the moment, the dubious business practices of many of the new AI-based tech companies: if your company is using AI, you should be asking what business problem it is trying to solve. I have yet to hear an answer for using Generative AI that made me go “oh yeah, now I get it.” In many cases, what I hear is that a senior executive has told various functional groups that they need to find a place where AI can be used in the business. This is putting the cart before the horse. In my opinion, Generative AI is currently an immature solution in search of a problem.

Of course, there’s also the whole quality of the output issue – which is a topic for another long future discussion.

If you are going to implement Generative AI, then you need to be asking yourself some questions:

  • What is the problem you will solve by using it?
  • Do you know where the content is coming from?
  • Do you have the rights to it?
  • Are you OK with the environmental impact of your AI usage?
  • Are you just doing it so the CEO can say you are an AI company on the next earnings call?

In his presentation, Joe outlined how to sensibly align Content Strategy, Content Operations, and Content Engineering to form a stable ecosystem where Content and AI can work together. Now that I could get behind, provided we can address the business, governance, and environmental issues – all big asks. But unfortunately, we’ve got a long way to go to get there.

———————–

This article was first published in THE CONTENT POOL newsletter on 31 October 2024 / Header art by Tom Humberstone [first published in the MIT Technology Review].

The Man from P.O.S.T. – “The Where to Prioritize Technology Affair”

Despite the fact that for over half of my career technology companies have paid my mortgage, I have always been a longstanding, and increasingly vocal, proponent of the idea that in deciding to pursue any business-process change or innovation, the technology must come last. In fact, I devoted a whole chapter to the topic in my book The Content Pool (end of shameless plug).

At one industry conference a few years ago I even ended up getting a quick round of applause during the closing panel discussion when I said that audience members should stop talking about tools and start talking about business need.

A sign that I thought meant we were making some headway.

Another sign that we may be making headway was a recent conversation with a potential vendor for a client project I’m currently working on, where one of the first things the vendor pre-sales team asked my client for was a list of their top three business priorities for the project.

However, another conversation a few days later reminded me of a past project I worked on that was still ticking over after nearly three years without making any apparent progress. I recalled that the norm on that project was for conversations to quickly get into the weeds about the features, functionality, and development efforts needed around various alternative technology options.

When I asked the basic question of what the project’s high-level business objective was, no one could articulate it.

The whole conversation reminded me of an acronym developed by a major consulting group: P.O.S.T.

The P.O.S.T. approach was developed as part of a corporate social network strategy, but I believe it applies equally well to implementing any innovation or process improvement strategy:

  • P = People
  • O = Objectives
  • S = Strategy
  • T = Technology

Seems obvious, doesn’t it?

Start with those who have a need, figure out what you need to do to fill that need, develop a strategy to do it, and only then think about the tools you can use.

You should be thinking along the lines of “We need to decrease the time it takes to get our information into the hands of our customers,” not “We need to install Wizgadget3.0.”

Just remember that if you put the T first, all you are left with is a P.O.S.

AI’s Missing Ingredient – Intelligent Content

My Saturday mornings used to be full of artificial intelligence (AI). Thanks to the TV shows I watched and the comics and books I read, I grew up expecting to live in a world of robots that could think and talk, vehicles of all sizes that would whisk me off to far-away destinations with no need for drivers or pilots, and computers that would respond to voice commands and know the answer to just about everything.

I may not yet have that robot butler, and my first experience with a self-driving car left me more apprehensive than impressed, but in other ways artificial intelligence is now part of my everyday existence, and in ways that I don’t even think about.

One of the first things I do each morning is ask Siri for the day’s weather forecast and then check to make sure that my Nest thermostat is reacting accordingly. During the day, Pandora’s predictive analytics choose my music, and in the evening Netflix serves up my favorite shows and movies. My books arrive courtesy of Amazon, and there’s a fair chance that some of those purchases were driven by recommendations generated via AI.

And now every day I see several posts about content generated by the AI-driven chatbot ChatGPT (most of which seems very repetitive to me), while my artist friends debate the ethics of AI-generated art (or whether it is even art at all).

It seems to me that we are on the edge of a potential leap forward in the application of AI – or, perhaps more accurately, we are making noticeable strides in the application of Machine Learning (ML) rather than true AI.

Outdated practices hamper AI advances

What we have today is just a small representation of the promise of AI, and that promise has not yet been realized.

Many companies and organizations still use older technology and systems that get in the way of a truly seamless AI customer experience. When the systems we already have don’t interact, and companies continue to build point-solution silos, duplicate processes across business units, or fail to take a holistic view of their data, content, and technology assets, then AI systems will continue to pull from a restricted set of information.

Over the past several years, as I have talked and worked with companies that are pursuing AI initiatives, I have noticed that the majority of those projects fail for a common reason: AI needs intelligent content. It may not be the only reason, but it’s definitely a common denominator.

AI needs intelligent content

No artificial intelligence proof of concept, pilot program, or full implementation will scale without the fuel that connects systems to users — content. And not just any content, but the right content at the right time to answer a question or move through a process. AI can help automate mundane tasks and free up humans to be more creative, but it needs the underpinning of data in context — and that is content, specifically content that is intelligent. According to Ann Rockley and Charles Cooper, intelligent content is “content that’s structurally rich and semantically categorized and therefore automatically discoverable, reusable, reconfigurable, and adaptable.” [Ann Rockley and Charles Cooper: Managing Enterprise Content: A Unified Content Strategy, Berkeley: New Riders, 2012]

The way we deliver and interact with content is changing. It used to be good enough to create large monolithic pieces of content – manuals, white papers, print brochures, and the like – and publish them in either a traditional broadcast model or a passive mode. We would then hope that, in the best case, we could drive our customers to find our content or, in the worst case, that whoever needed it would stumble across it via search or navigation.

With the rise of new delivery channels and AI-driven algorithms, that has changed. We no longer want to just consume content, we want to have conversations with it. The broadcast model has changed to an invoke-and-respond model. To meet the needs of the new delivery models like AI, our content needs to be active and delivered proactively. We need to build intelligent content that supports an advanced publishing process that leverages data and metadata, coordinates content efforts across departmental silos, and makes smart use of technology, including, increasingly, artificial intelligence and machine learning.

In addition to Rockley and Cooper’s definition of intelligent content, our content should also be modular, coherent, self-aware, and quantum. Here are definitions of those four characteristics, followed by a small sketch of what such a content unit might look like:

  • Modular: existing in smaller, self-contained units of information that address single topics.
  • Coherent: defined, described, and managed through a common content model so that it can be moved across systems.
  • Self-Aware: connected with semantics, taxonomy, structure, and context.
  • Quantum: made up of content segments that can exist in multiple states and systems at the same time.
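To make those four characteristics a little more concrete, here is a minimal, hypothetical sketch in Python of a modular, semantically tagged content unit and a simple invoke-and-respond lookup. The field names, tags, and tiny in-memory repository are illustrative assumptions, not a standard content model or any particular product’s API.

```python
from dataclasses import dataclass, field

@dataclass
class ContentModule:
    """A small, self-contained unit of information (modular) described by a
    shared model (coherent) and carrying its own semantics (self-aware)."""
    id: str
    topic: str
    body: str
    audience: str                                       # e.g. "customer", "administrator"
    tags: list[str] = field(default_factory=list)       # taxonomy / semantic labels
    channels: list[str] = field(default_factory=list)   # systems the module can live in (quantum)

# A tiny repository of modules; in practice this would live in a CCMS or content lake.
modules = [
    ContentModule("m1", "reset-password", "To reset your password, open Settings...",
                  audience="customer", tags=["how-to", "account"], channels=["web", "chatbot"]),
    ContentModule("m2", "reset-password-admin", "Admins can force a reset from the console...",
                  audience="administrator", tags=["how-to", "account"], channels=["kb"]),
]

def respond(question_tags: list[str], audience: str, channel: str) -> list[ContentModule]:
    """Invoke-and-respond: return only the modules whose semantics match the
    asker's context, rather than broadcasting a monolithic document."""
    return [m for m in modules
            if audience == m.audience
            and channel in m.channels
            and set(question_tags) & set(m.tags)]

# A chatbot asking on behalf of a customer gets exactly the one relevant module.
print([m.id for m in respond(["account"], audience="customer", channel="chatbot")])
```

The point is not the code itself but that the semantics travel with the content: any system that understands the shared model can ask for the right content at the right time instead of searching a monolithic document.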

Intelligent content, built on a common content and semantics model that lets systems speak the same language as content moves across silos, may be the key to unlocking the technology disconnect that is holding AI back from even greater acceptance.