Some thoughts on the rush to AI adoption.
I had a really good week at the Lavacon conference last month. As usual, it delivered some exceptional sessions alongside fun and engaging conversations.
Some of the sessions were informative, some educational, and many thought-provoking. And it is probably no surprise that AI was an almost constant presence in the discussions, having grown from a topic of interest last year to having its own dedicated track this year.
Yet in all honesty, I remain somewhat conflicted in my feelings about both the technology and the speed of its seemingly global adoption.
In fact, partway through the first day I texted the following to my wife, Gill:
God I’m feeling like the Luddite old codger. It seems that everyone is drinking from the AI hype hose. Maybe I’m more of a writer than a techy these days.
Now don’t get me wrong, I’m not anti-AI in general terms. I’ve been honored to work on projects and products that use AI where I believe it can really help – taking on repetitive tasks at a speed, scale, and level of accuracy that humans can’t match. AI is brilliant at pattern matching, building connections between data types, data mining, and even providing predictive analytics.
What I do have issues with is the apparent blind adoption of Generative AI. Maybe it’s because I am a writer at heart that I don’t understand the apparent rush to divest ourselves of the skill that made us human in the first place – the ability to share our personal knowledge and ideas.
As my colleague Joe Gollner pointed out on one of the slides in his Lavacon presentation, “The Unlikely (but necessary) marriage of Content & Engineering,” there is danger in a situation ‘When AI Runs Amok’, which he defined as:
AI consuming information indiscriminately, without management guidance on objectives and guardrails, and without scalable oversight.
Organizations savor the chance to offload responsibility while harvesting superficial benefits.
Sound familiar?
As Joe pointed out:
This form of AI is as popular as it is dangerous.
Yet despite this, pretty much every other presentation I attended included phrases like “just feed content to the AI,” or “make your content AI-ready,” or “use AI to generate content…” with none of them addressing what to me are the underlying issues of:
Legal and Moral – Where is the content that is feeding the Large Language Model (LLM) powering your AI coming from? Does your company own it, or have rights to use it?
If you are using a public OpenAI tool, like ChatGPT, the chances are that the content driving your output is based on stolen copyrighted content just scraped from an online source without the original owner’s permission (I have found a couple of my own short stories where this has happened – so I may have a personal bias).
Then there is the fact that most platforms these days switch on AI scraping tools by default, and you have to jump through multiple hoops to disable the feature and opt out – looking at you, LinkedIn, WordPress, Meta, and others. Using your content to train someone else’s AI should be an opt-in process, not something you have to opt out of.
Environmental – There seems to be a perception that AI is some sort of magic that just happens. The infrastructure behind it is complex, and growing at an exponential rate.
The power needs and environmental impact of the huge data centers needed to power AI are currently catastrophic. I’ve heard it said that power distribution systems are about five years behind the AI data centers’ current needs; we could be facing power outages caused by AI demand alone. There was an article recently reporting that Microsoft was looking at an operating lease to recommission the infamous Three Mile Island nuclear power plant just to power its AI data centers!
Then there is the water consumption needed to cool these massive data centers. The average data center uses 300,000 gallons of water per day, which is about the same as 100,000 homes. Each time an AI tool is prompted, it uses about 16 ounces of water – roughly a regular bottle of drinking water – so every prompt is like pouring a bottle of water on the floor.
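For a sense of scale, here is a rough back-of-envelope sketch in Python using the figures quoted above (300,000 gallons per day for an average data center, about 16 ounces per prompt); these are the article’s numbers, not measurements of any particular facility:

```python
# Back-of-envelope check of the water figures quoted above.
# Assumptions: the article's own numbers (300,000 gallons/day per data
# center, ~16 fl oz of water per prompt); 128 fl oz per US gallon.

GALLONS_PER_DAY = 300_000   # average data center, per the article
OZ_PER_GALLON = 128         # US fluid ounces in a gallon
OZ_PER_PROMPT = 16          # roughly one bottle of water per prompt

prompts_per_day = GALLONS_PER_DAY * OZ_PER_GALLON / OZ_PER_PROMPT
print(f"One data center's daily water = {prompts_per_day:,.0f} prompts' worth")
# -> One data center's daily water = 2,400,000 prompts' worth
```

In other words, by these figures a single average data center’s daily water use is on the order of a couple of million bottle-sized prompts.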
Business needs – Putting aside, for the moment, the dubious business practices of many of the new AI-based tech companies: if your company is using AI, we should be asking what business problem it is trying to solve. I have yet to have anyone give me an answer for using Generative AI that made me go “oh yeah, now I get it.” In many cases, what I get is something along the lines of a senior executive telling various functional groups that they need to find a place where AI can be used in the business. This is putting the cart before the horse. In my opinion, Generative AI is currently an immature technology in search of a problem to solve.
Of course, there’s also the whole quality of the output issue – which is a topic for another long future discussion.
If you are going to implement Generative AI, then you need to be asking yourself some questions:
- What is the problem you will solve by using it?
- Do you know where the content is coming from?
- Do you have the rights to it?
- Are you OK with the environmental impact of your AI usage?
- Are you just doing it so the CEO can say you are an AI company on the next earnings call?
In his presentation, Joe outlined how to sensibly align Content Strategy, Content Operations, and Content Engineering to form a stable ecosystem where Content and AI can work together. Now that I could get behind, provided we can address the business, governance, and environmental issues – all big asks. But unfortunately, we’ve got a long way to go to get there.
———————–
This article was first published in THE CONTENT POOL newsletter on 31 October 2024 / Header art by Tom Humberstone [first published in the MIT Technology Review].



