As some folks already know, I will be bringing my 40-year corporate career to a close when I officially retire on December 31st, 2025.
From technical writer to marketer, company executive, strategist, business owner, and consultant, it’s been a fascinating journey through the ever-changing world of business content. I’ve worked on challenging and transformative projects in the worlds of aerospace, manufacturing, technology, and more, as well as with some of the leading brands in retail, sports, and entertainment.
I’ve been lucky that my career included the privilege of frequent travel, allowing me to experience countries and cultures around the world.
But most of all I’ve been honored to work alongside numerous amazing colleagues and clients over the decades, many of whom have remained long-term friends.
One of the things I’ve enjoyed most about my career is the sense of community. The content world is a welcoming one and I can’t thank it enough for providing support when I needed it, as well as a platform to share my thoughts and ideas as a speaker, columnist, and author. Thanks to everyone who read or listened – we had some great conversations along the way.
But what of the future?
As well as spending more time with family, I’ll be focusing more on my creative work as a writer, podcaster, and publisher, plus continuing my non-profit work as President of The Ian Fleming Foundation.
Thanks again to everyone for the memories. It’s been fun.
I had a really good week at the LavaCon conference last month. As usual, it delivered some exceptional sessions alongside fun and engaging conversations.
Some of the sessions were informative, some educational, and many thought-provoking. And it is probably no surprise that discussions and mentions of AI were an almost constant presence, having grown from a topic of interest last year to having their own dedicated track this year.
Yet in all honesty, I remain somewhat conflicted in my feelings about both the technology and the speed of its seemingly global adoption.
In fact, partway through the first day, I texted the following to my wife, Gill:
God I’m feeling like the Luddite old codger. It seems that everyone is drinking from the AI hype hose. Maybe I’m too much a writer than a techy these days.
Now don’t get me wrong, I’m not anti-AI in general terms. I’ve been honored to work on projects and products that use AI for things where I believe it can really help – taking on repetitive tasks at a speed, scale, and level of accuracy that humans can’t match. AI is brilliant at pattern matching, building connections between data types, data mining, and even providing predictive analytics.
What I do have issues with is the apparent blind adoption of Generative AI. Maybe it’s because I am a writer at heart that I don’t understand the apparent rush to divest ourselves of the skill that made us human in the first place: the ability to share our personal knowledge and ideas.
As my colleague Joe Gollner pointed out on one of the slides in his LavaCon presentation, “The Unlikely (but necessary) marriage of Content & Engineering,” there is danger in a situation ‘When AI Runs Amok,’ which he defined as:
AI consuming information indiscriminately, without management guidance on objectives and guardrails, and without scalable oversight.
Organizations savor the chance to offload responsibility while harvesting superficial benefits.
Sound familiar?
As Joe pointed out:
This form of AI is as popular as it is dangerous.
Yet despite this, pretty much every other presentation I attended included phrases like “just feed content to the AI,” or “make your content AI-ready,” or “use AI to generate content…” with none of them addressing what are, to me, the underlying issues of:
Legal and Moral – Where is the content that is feeding the Large Language Model (LLM) powering your AI coming from? Does your company own it, or have rights to use it?
If you are using a public OpenAI tool, like ChatGPT, the chances are that the content driving your output is based on stolen copyrighted content, scraped from an online source without the original owner’s permission (this has happened to a couple of my own short stories, so I may have a personal bias).
Then there is the fact that most platforms these days switch on AI scraping by default, and you have to jump through multiple hoops to opt out (looking at you, LinkedIn, WordPress, Meta, and others). If your content is going to be used to train someone else’s AI, it should be an opt-in process.
Environmental – There seems to be a perception that AI is some sort of magic that just happens. In reality, the infrastructure behind it is complex and growing at an exponential rate.
The power needs and environmental impact of the huge data centers needed to power AI are currently catastrophic. I’ve heard it said that power distribution systems are about five years behind the AI data centers’ current needs; we could be facing power outages caused by AI demand alone. A recent article reported that Microsoft was looking at an operating lease to recommission the infamous Three Mile Island nuclear power plant just to power its AI data centers!
Then there is the water consumption needed to cool these massive data centers. The average data center uses 300,000 gallons of water per day, about the same as 100,000 homes. Each AI prompt consumes about 16 ounces of water, roughly the amount in a regular bottle of drinking water. Every time you make a prompt, it’s like pouring a bottle of water on the floor.
(Image: The environmental impact of a single AI prompt)
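To put those numbers in perspective, here’s a quick back-of-envelope calculation using the figures quoted above. The daily prompt volume is a made-up round number, purely to illustrate scale:

    # Back-of-envelope water math, using the figures quoted above
    OUNCES_PER_PROMPT = 16        # roughly one regular bottle of drinking water
    OUNCES_PER_GALLON = 128

    prompts_per_day = 10_000_000  # hypothetical round number, for scale only

    gallons_per_day = prompts_per_day * OUNCES_PER_PROMPT / OUNCES_PER_GALLON
    print(f"{gallons_per_day:,.0f} gallons per day")  # 1,250,000 gallons per day

At that hypothetical volume, a single popular tool would pour away more than four times the daily water consumption of that average data center.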
Business needs – Putting aside the dubious business practices of many of the new AI-based tech companies for the moment: if your company is using AI, you should be asking what business problem it is trying to solve. I have yet to hear an answer for using Generative AI that made me go “oh yeah, now I get it.” In many cases what I get is something along the lines of a senior executive telling various functional groups that they need to find a place where AI can be used in the business. This is putting the cart before the horse. In my opinion, Generative AI is currently an immature technology in search of a problem to solve.
Of course, there’s also the whole quality of the output issue – which is a topic for another long future discussion.
If you are going to implement Generative AI, then you need to be asking yourself some questions:
What is the problem you will solve by using it?
Do you know where the content is coming from?
Do you have the rights to it?
Are you OK with the environmental impact of your AI usage?
Are you just doing it so the CEO can say you are an AI company on the next earnings call?
In his presentation, Joe outlined how to sensibly align Content Strategy, Content Operations, and Content Engineering to form a stable ecosystem where Content and AI can work together. Now that I could get behind, provided we can address the business, governance, and environmental issues (all big asks). But unfortunately, we’ve got a long way to go to get there.
———————–
This article was first published in THE CONTENT POOL newsletter on 31 October 2024 / Header art by Tom Humberstone [first published in the MIT Technology Review].
I used to be a strident proponent of breaking down system silos. Over the years, I took the stage at conferences and preached that tearing down walls between various pieces of technology across the organization would help deliver a better customer experience. But not any longer. I’ve come to realize that this was an impossible dream.
Let’s face it: no one is going to throw out their incumbent systems just because we say they should. Especially not if those systems still do the job they were purchased to do. We have all worked with systems that are good enough to fulfill a specific set of tasks. Removing and replacing existing systems isn’t quick or cheap, but the biggest hurdle isn’t budget or technology (although that’s what’s often cited) — it’s human.
Navigating the shifting customer experience waters
People often build their careers around particular systems. They develop specific areas of knowledge related to their system and their data. The idea of letting others into their area of expertise can be seen as threatening. It’s a world of what ifs. What if they (the other departments) mess up my data? What if their findings contradict my own?
Then there’s the cultural aspect. Systems become ingrained to the point that policies and procedures develop around them, and they become an essential part of the way the company operates. It may not be the best way, and often it isn’t, but if it works why change it?
Change is hard, and dramatic change is the hardest of all. But change is inevitable.
The way customers interact with companies is evolving and so are their expectations. It has always taken a combination of divisions and departments to deliver goods, services, and the desired brand experience to customers. Similarly, customers have always interacted with organizations via multiple touchpoints spread across multiple departments. Those differences in customer experience were once an accepted part of doing business, but they no longer are.
Today’s customers demand a seamless, consistent experience at every interaction with the company, both before and after purchase. Delivering a series of disparate transactions will no longer cut it. Organizations must develop ongoing relationships, and to do that, they need to take a holistic view of the customer’s data — data that resides in those siloed systems.
Build bridges between systems of record
Watching two characters from Game of Thrones negotiate across a castle drawbridge instead of just battling things out to take control of the castle got me thinking about information flow. That metaphor holds true when it comes to using data to deliver improved customer experiences.
Instead of tearing down the castle (or system) walls, look at how you can build bridges between them. That way those who have built a body of expertise can share it while still having authority over their own keep (data set). Bridges allow data to be collected once and then flow freely between systems, allowing individual system owners to use it in the way that suits their needs.
Each customer-interfacing system can still stand alone and address the needs of a particular line of business or be an enterprise-wide single source of truth. But by allowing data to pass between individual systems, including internal enterprise-wide systems, you can create a fully connected, continuous customer experience.
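To make the metaphor a little more concrete, here’s a minimal sketch of the bridge idea in code. The names are hypothetical, and the tiny in-memory bus stands in for whatever integration layer you actually use (APIs, middleware, an event stream); the point is that each system keeps authority over its own keep while sharing updates across the drawbridge:

    # A tiny publish/subscribe "bridge" between systems (illustrative only)
    from collections import defaultdict

    class Bridge:
        """Routes records between systems without either owning the other's data."""
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, record):
            for handler in self.subscribers[topic]:
                handler(record)

    bridge = Bridge()

    # The support system listens for updates without owning the CRM's data...
    bridge.subscribe("customer.updated",
                     lambda c: print(f"Support sees update for {c['id']}"))

    # ...while the CRM keeps its own records and shares changes as they happen.
    def crm_update(customer):
        bridge.publish("customer.updated", customer)

    crm_update({"id": "C-1001", "name": "Jane Doe"})

Each system still rules its own keep; the bridge just lets the news travel, so data is collected once and flows to wherever it is needed.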
Over a period of time you will find out which systems you really need and which data is central to delivering the customer experience. Some silos will gradually fall into disuse and fade away to be replaced by true systems of record. This was brought home to me recently when a VP at a large company told the story of their digital transformation project. Midway through the project, they realized that of the 30-plus systems they had in use, only nine were vital to the process of delivering customer service. Taking a customer-centric view of their data and the way it flowed around the organization helped them understand which were the real systems of value.
Building bridges between systems allows you to develop data journeys that reflect your customers’ journeys.
This will be our first time at the CEX event and we are looking forward to mixing with the new generation of content entrepreneurs to learn about building profitable content-based businesses.
The annual conference of the Society for Technical Communication has been a regular highlight of our year for many years now, and I’m delighted to once again be participating as both a speaker and industry panelist.
Despite the fact that technology companies have paid my mortgage for over half of my career, I have always been a long-standing, and increasingly vocal, proponent of the idea that in deciding to pursue any business-process change or innovation, the technology must come last. In fact, I devoted a whole chapter to the topic in my book The Content Pool (end of shameless plug).
At one industry conference a few years ago I even ended up getting a quick round of applause during the closing panel discussion when I said that audience members should stop talking about tools and start talking about business need.
A sign that I thought meant we were making some headway.
Another sign that we may be making headway was a recent conversation with a potential vendor for a client project I’m currently working on, where one of the first things the vendor pre-sales team asked my client for was a list of their top three business priorities for the project.
However, another conversation a few days later reminded me of a past project I worked on that was still ticking over after nearly three years without making any apparent progress. I recalled that the norm on that project was for conversations to quickly get into the weeds about the features, functionality, and development efforts needed around various alternative technology options.
When I asked the basic question of what was the project’s high-level business objective, no one could articulate it.
The whole conversation reminded me of an acronym developed by a major consulting group: P.O.S.T.
The P.O.S.T. approach was developed as part of a corporate social network strategy, but I believe it applies equally well to implementing any innovation or process improvement strategy:
P = People
O = Objectives
S = Strategy
T = Technology
Seems obvious, doesn’t it?
Start with those who have a need, figure out what you need to do to fill that need, develop a strategy to meet it, and only then think about the tools you can use to do it.
You should be thinking along the lines of “We need to decrease the time it takes to get our information into the hands of our customers,” not “We need to install Wizgadget3.0.”
Just remember that if you put the T first, all you are left with is a P.O.S.
I’ll confess I love origin stories. They are among the storytelling tropes that first attracted me to comics, and over the years I’ve even written a few origin stories myself. Sure they can be overdone — do we really need to revisit Batman’s or Spider-Man’s origin in every single movie incarnation? — but they can also be an effective way of defining who a character is: their motivation, moral compass, and mission.
The same is true of a brand. Knowing the story behind the brand can go a long way to establishing the brand’s culture. Thinking about this reminded me of a networking event I attended for small business owners and entrepreneurs.
In the space of two hours I must have heard about at least a dozen new businesses — what they did and what they were called. That’s a lot of information to take in in a short time. On the drive home I did a quick mental review to see if I could recall the salient points from each conversation. I managed to recall something about everyone, but what struck me was that the first two businesses that came to mind were the two that had stories attached, and one in particular that had a story attached to the brand name.
Brand names with a story behind them stick.
Several years ago I wrote a regular marketing newsletter that included the stories and histories behind some of the most well-known brand names. That section was always the most popular part of the newsletter. It gave me the idea of maybe writing a book on the subject, but then I found out that someone had already done it. Evan Morris’ fun book From Altoids to Zima is now one of the most thumbed books on my content marketing book shelf.
There is a story behind most company and brand names. I’ve worked for companies named after bags of chips, science fiction villains, a historical event, and even one that got its name from a typo.
Discover your origin story – tell your origin story. People will remember it, and they will remember you.
Back in the mists of the primordial internet when I built my first website, also known as the mid-nineties, I recall that one of the best pieces of advice I was given was: ‘don’t post anything online you don’t want shared.’
The basic purpose of that very first website was in fact to share information and establish some credible, provable subject-matter expertise for a book idea I was shopping around. But in the twenty-eight years since, across multiple websites, blogs, and a dozen different social media platforms, that single piece of advice has remained my mantra.
I am pretty open about my online activities, and I’ve written and published a lot on the web, but I give very careful consideration before I post something I consider my intellectual property (IP).
Yet I’m constantly amazed at the number of companies I talk to who, mainly unwittingly, are happy to throw the content that contains their IP into the digital world without a thought for the consequences, just to save a dollar or two.
Think about the content that is produced across your company. No matter which functional group produces it, or what its purpose is, your content contains everything a company is, knows, and does. The information contained in your content is the lifeblood of any organization; it will outlast most employees, and by default it is one of, if not the, highest-value assets a company has. Yet it is rarely thought of that way. So why do many organizations carelessly just give it away?
Over the last decade or so, the rise of social media platforms spawned another truism worth remembering: “if the platform is free, then you are the product.” While the companies that run these platforms may not really be ‘selling’ us as individuals, they are definitely selling data about us collectively, and about trends in online behavior. They need the information we put online to be able to train their tools.
The same goes for most of the free online tools that consume and manipulate content. They need to build a corpus of content to analyze and learn from.
Many of these tools can be very useful; take online automated translation tools, for instance. I use the one on my phone a lot when exchanging messages with my friends in France and Germany, when my schoolboy-level proficiency fails me. On a trip to Japan a few years back I would never have been able to navigate my way around Tokyo without such an app. The accuracy of these sorts of tools comes down to the volume and history of the translation memory that sits behind them, and each time we use them we add more data, and more examples, to that memory.
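If you have never looked inside one, a translation memory is, at its simplest, a growing store of source-to-target segment pairs. Here’s a toy sketch of the idea; it’s hypothetical and hugely simplified (real translation memories match fuzzily, not just exactly), but it shows how every use can grow the memory:

    # A toy translation memory: a growing store of source -> target pairs.
    # Real TMs do fuzzy matching; this one only reuses exact matches.
    memory = {}

    def translate(segment, human_translation=None):
        if segment in memory:              # exact match: reuse the stored pair
            return memory[segment]
        if human_translation is not None:  # new pair: the memory grows
            memory[segment] = human_translation
            return human_translation
        return None                        # no match yet; needs a translator

    translate("Where is the station?", "Où est la gare ?")  # adds a pair
    print(translate("Where is the station?"))                # reuses it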
This is where the issues with IP start. I have had way too many conversations with senior-level executives who insist that because they used a well-known free web-based translation tool on their latest European vacation, they no longer need to pay for a professional translation service. They fail to realize that by uploading their business content, which often includes company-confidential or competitive information, they are adding it to the general pool of content owned by the platform developer.
This doesn’t just apply to translation tools; it also applies to things like online grammar checkers, workflow tools, and other free productivity tools that store, move, manage, or manipulate your content.
The rapid rise in interest in artificial intelligence (AI) and Large Language Models (LLMs) has added another dimension, as the developers of AI-driven solutions are ravenous for content on which to train their platforms.
Tools we didn’t necessarily think of as content-centric now want to crawl and scrape every scrap of content they can get their hands on, and that means your content.
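For content that lives on a site you do control, one small defensive step is to tell the AI crawlers to stay out via your robots.txt file. Here’s a minimal sketch using crawler names that have been publicly documented (GPTBot is OpenAI’s crawler, CCBot is Common Crawl’s, and Google-Extended covers Google’s AI training); check each vendor’s current documentation, as the names change:

    # robots.txt - ask known AI training crawlers not to scrape this site
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

Bear in mind that robots.txt is a request, not a lock: well-behaved crawlers honor it, but nothing technically forces them to.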
At the beginning of August 2023, the leading online virtual conferencing collaboration platform was at the center of a controversy when it issued new terms and conditions that gave it the right to: “A perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content, including AI and ML training and testing.”
In short, they could take whatever you said, presented, shared, or posted through their platform and do whatever they liked with it, including any of your proprietary content. Such was the outcry once this was made public that within a few days the company concerned had retracted and issued a new Terms & Conditions document that stated: “(Company) does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train (platform name) or third-party artificial intelligence models.”
While this was the right result, it raises the question of why the company concerned felt it had the right to use its customers’ content this way in the first place, and how many other companies are rushing to do the same. I’ve seen similar discussions happening around the use of photography and artwork by a well-known supplier of creative studio software.
So how do we avoid exposing our intellectual property to potential misuse or oversharing? To quote another maxim, one I first heard from the Content Marketing Institute, ‘don’t build your content home on rented land,’ i.e. be wary of developing content on platforms you don’t control. I’d add to that: also be wary of sharing your content on platforms you don’t control, especially your high-value content.
Think about what your content contains, and what it means to you. Look at any free platform you use and ask yourself, why is it free? What am I supplying that is valuable to the company that owns it? Read the terms and conditions, and then consider, is that a place where you want to share the things that make your organization special?
Welcome to Intelligent Content for an Intelligent World – so ran the tagline for this year’s Adobe DITAWorld, which bills itself as the world’s biggest DITA online conference for marketing and technical communication professionals.
I’ll be honest: I’m not a big fan of online conferences, as I tend to find my focus drifting away to the distractions of my home office. I much prefer being at in-person events. When I’m in a session listening to a speaker, I can remain focused on them and what they are saying. Outside of the sessions I can network with other attendees or vendors, or just enjoy a little people watching.
So when I do sign up for a virtual event the pressure is on for the organizers and speakers to put together a program that will keep my attention over a span of several days. Not an easy task, but I’m glad to say that this year’s DITAWorld did just that.
It was an engaging three days with topics running from the current rise of generative AI, to deep dives into DITA coding and verification, by way of practical use cases in structured content, knowledge management, and translation. And the sidebar chat and Q&A channels were equally engaging. It was almost like being at an in-person event.
Here’s a quick rundown of the sessions that I enjoyed and found educational and thought-provoking.
Day 1
Is AI the meteor? Are we the dinosaurs? An early assessment of AI in Content Operations – Sarah O’Keefe, CEO at Scriptorium, USA
All Knowledge in Balance: EY’s Knowledge Evolution – How EY uses DITA to Drive the Evolution of Global Assurance Knowledge – Brian R. Scordinsky, Technology Director at Ernst & Young, USA – Shannon Waida, Associate Director at Ernst & Young, USA
The Best of Both Worlds – Why Mayo Clinic chose Adobe Experience Manager Guides – Sebastian Fuhrer, Director of Content Engineering & Operations at Mayo Clinic, USA – Bernard Aschwanden, EVP, Business Development at Precision Content, Canada
Content Symphony – Orchestrating Excellence – Elevating Enterprise Communication in the AI Age – Chad Dybdahl, Senior Solution Consultant at Acrolinx
From Structure to Process – Why Structured Content is more than you think! – Markus Wiedenmaier, CEO at c-rex.net, Germany
Day 2
Structured Content Horizons: Mapping the Future – Harnessing DITA, Adobe Experience Manager Guides, and AI for future experiences – Rob Hanna, President & Co-Founder at Precision Content, Canada
Using the Power of DITA – Creating content for multichannel delivery – Hanna Heinonen, Digital Content Lead at KONE, Finland – Kristian Forsman, Solution Owner Content Management at KONE, Finland
Don’t Reinvent the Wheel – Implementing Structured Authoring – Amanda Patterson, Technical Communications Manager at Hunter Douglas, USA
Intelligent Content for the Manufacturing Industry – How to digitalize content in a traditional industry – Ulrike Parson, CEO at parson AG, Germany
Day 3
Pioneering Journeys: Tracing Content Evolution – Customer Engagement Transformation – Julian Murfitt, CEO of Mekon, UK
Engineering Global Connections – Multilingual Mastery – Ronald Egle, Content Systems Administrator at Ariel Corporation, USA
Stick to Your DITA Standards – Using Schematron to add custom DITA validation – Marco Cacciacarro, Enterprise Documentation Manager at BlackBerry, USA
Recordings of the sessions are available on YouTube in the Adobe DITAWORLD 2023 Playlist.
And I must give a shout-out to Stefan Gentz, Senior Worldwide Evangelist for Technical Communication at Adobe, and Matt Sullivan, CEO at Tech Comm Tools, for organizing the event and being excellent co-hosts across the three days.