Blog

  • Do Humans Dream of Electric Minds? How language influences our thinking about AI

    Do Humans Dream of Electric Minds? How language influences our thinking about AI

    “One of our persisting challenges is excessive hallucinations.”

    I’ll cut right to the quick: AI systems are nothing like human beings, but our language makes us think they are. That’s a problem, for us and for the AI systems we’re building.

    When ChatGPT was introduced to the public in November 2022, people were baffled to discover that when you asked the system a question, it would often return text presenting information that appeared true at first blush but on further scrutiny was only partially true or even entirely fabricated. A barrage of news articles and social media commentary followed about how the AI systems were “lying” or intentionally “deceiving” us, suggesting these passive computer systems were acting out of some form of malice.

    AI experts explained this is a well-known phenomenon called “hallucinations.” The term quickly took root in the public consciousness and provided a platform from which our common understanding of these technologies would grow:

    “If a machine can hallucinate like me, it must have a mind like mine.”

    A relatable metaphor can be a useful way of explaining something complex by referring to something similar and less complex. Metaphors are found throughout our everyday language: “You are an angel for doing this”, “she was on fire today,” “I am toast,” these are all nonsensical statements with real-world meanings easily understood by people with sufficient language skills and shared cultural and societal experience. 

    Metaphor can also be a useful tool when explaining complex concepts without requiring the listener to understand the full complexity. When our son complains about his hands hurting after he’s been coloring for a long time and I tell him it’s because they are “tired” he understands he needs to give them a “rest” without needing to understand the physiological causes of muscle fatigue. When a TV show is abruptly interrupted by an error message on the TV and I tell him it’s because our TV can’t talk to the streaming service, he understands this is a communication problem without needing to understand the intricacies of HTTP protocol, DNS servers, or packet loss.

    So when seemingly all-knowing AI systems inexplicably fabricate information, it’s easier to explain what happened through the metaphor of hallucination than it is to explain the inner workings of computer systems even the people who build them do not have a complete understanding of.

    Throughout the history of the science of artificial intelligence, we’ve used metaphorical language rooted in human cognition and behaviour to explain how these systems operate. The term “artificial intelligence” is a metaphor describing systems whose capabilities go beyond traditional computer systems and are “smart” the way humans are smart. Saying AI systems “learn” about the world through “training” uses education metaphors to make simple the enormously complex machine learning algorithms and processes that go into building their models. Saying AIs have “knowledge,” “reasoning” capabilities, and the ability to “follow instructions” uses metaphor to explain their often surprising power. We use the metaphor of human communication and interaction when we tell people to “have conversations” with the systems and refer to them as individuals with human traits like attitudes and emotions.

    When we use these metaphors to describe AI, people get enough of an understanding of what’s going on to be able to speak about these systems and see how they can fit into their lives and work without having to understand their technical underpinnings.

    The problem is by using anthropomorphic language – metaphors referencing human traits – we construct an image in our minds of these systems being variants of ourselves: machines that are intelligent like us humans, that learn about the world through training like us humans, that hold knowledge and reason like us, and follow instructions like us, and have conversations like us. And when some of those systems use our own very human language as both input and output, our metaphors get validation in the real world and we start thinking of the machines and their software as living conscious agents even when we know they are not.

    No wonder then when an AI outputs information that looks true but turns out to be a fabrication we continue the pattern and describe the machine as a liar. 

    We could have, and probably should have, chosen to use more technical language for these machines, but in doing so we’d have missed out on the magic and the marketing. “This is the courtyard and Juliet is a human on the balcony” pales to the evocation of “This is the east and Juliet is the sun!” because the language we use colours and shapes our understanding of the world. So now that we’ve chosen to use human metaphors to describe systems of non-linear computing algorithms that process information and build network models, perform advanced retrieval from data graphs and calculate responses based on neural networks, take input and produce output in the form of tokens, and output statistically correct but sometimes fabricated token sequences that reproduce human language, we must always be on guard against the hallucinations our language conjures within us.

    I fear in our attempt to make AI more understandable we have committed an unintentional act of self-deception. The metaphor of humanity rides too close to our dreams of machines built in our image, and our language makes us confuse those dreams with reality.

    In the Age of AI, our biggest challenge may be overcoming our own excessive hallucinations.

    Cross-posted to LinkedIn.

  • Transformer: Year One of the ChatGPT Era

    Transformer: Year One of the ChatGPT Era

    “We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” – OpenAI blog post “Introducing ChatGPT,” November 30, 2022

    With these words, a transformation – from the time before to our time with the machines and systems we’ve chosen to call “Artificial Intelligence.”

    ChatGPT, launched a year ago today as a “research preview” of our future, was our dreams and our nightmares made real: A computer we could talk to that seemed to talk back. A mirror and a marvel, at times indiscernible for magic, brimming with peril and potential.

    Born from the transformational transformer, the Generative Pre-Trained Transformer outfitted with a chat interface would prove to be the tickler our imaginations needed after three years of pandemic trauma and looming economic uncertainty.

    Can I get this thing to do my work, my homework, my chores, can I outsource my thinking, make it my therapy? Can it be my friend, my companion, my lover, my master?

    The fenced path toward our carefully designed future suddenly opened into an infinite opportunity space with us standing at its edge, in awe, uncertain and excited and afraid to take our first steps, take any steps, to go at all because where do you go when you can go anywhere?

    Listen carefully in those days to the whispers in the halls of power and you’d find those who thought it had come too soon; this tool that broke the academic confines of AI and put it in the hands of everyone. They had not yet the time to forge our path towards their vision. No fences were ready to keep us in line. Our future with AI was suddenly here, unannounced and unbounded, with imaginary grandmothers telling children’s stories of napalm and search engines catfishing journalists and a democratization and lowering of barriers to almost every field of exploration and creativity and discovery.

    “Just a perfect day, you made me forget myself. I thought I was someone else, someone good.” – Lou Reed, Perfect Day, Transformer

    One year in and we’ve learned things; about ChatGPT; about the people who built it; about creativity and generation; about people and machines; about work and life; and about us, ourselves, the humans in the mix.

    One year ago my colleague asked me to help the world move forward with this. “Show the world how to get this right, how we can get it right together” he said, or something like it. And I said to him “I’m glad it’s us.” And I am. Glad it’s us. Not only in the narrow sense of him and me and our friends who we have the honour and privilege of working with, but in the broadest sense of all. I’m glad we, the global community who share this moment in time, get to explore this opportunity space together, unbounded by prior plans and pre-determined directions.

    With AI we have an opportunity to think anew about the things that occupy our lives: How we work, how we play, how we create, how we commune. We get to think again, or maybe for the first time, about our time and how we use it.

    We get to ask questions, about the role of technology and the role of humanity, about what we value and how value is measured, about who gets a say as we build our future with AI, and who decides who gets a say. We get to ask questions about power and privilege and access and limitations and what is good and right for us, our communities, and our world.

    The decisions we make and the steps we take into the possibility space of AI determine where we go next and where those who come after us get to go in their futures. The time before is gone. This time is something new, and we get to define it.

    This transformer in our midst, this up-ender of every process of creation and alteration and organization and interaction, is something so new, so unknown we don’t yet know what it is for. And the way we use it and its myriad of sibling cats well and truly out of the bag and on their way into the world today is the way we used the iPhone when it first arrived in our hands: As a phone with a flashlight app. The AIs brought forward and into our hands by ChatGPT are both tools for us to use and materials for us to work with. With new tools and materials in our practice we get to reflect on our old ways and find new ways – hopefully better ways – to do the old things and the new.

    As I stand here, one year into the era of ChatGPT, what I see before me in the near future and several years from now is a period of ceaseless transformation. This year was the preparation; the slow organizing of exploratory teams, the uneven distribution of resources, the first furtive steps on untouched ground. What comes next is the journey into unexplored opportunity space. What we find there and how we use it will be up to us. And if we do it right, if we care for one another and help each other and build paths everyone can follow, it might – no – it will be amazing.

    Hope is a catalyst. Build the future for everyone and for yourself.

  • Your No-Hype Guide to Everything OpenAI Announced at their DevDay

    Your No-Hype Guide to Everything OpenAI Announced at their DevDay

    OpenAI – the creators of ChatGPT and current designers of our collective futures with AI – announced a metric tonne of new updates, features, and products at their inaugural DevDay this Monday. I was there, and here is my no-hype, no-nonsense, pragmatic guide to what was released and what it means for you.

    GPT-4 Turbo: Upgraded model with upgraded speed, upgraded “knowledge,” and lower pricing

    All software has version releases. So does GPT – the underlying foundation model for ChatGPT and OpenAI’s other language-based AI services. The latest version – GPT-4 Turbo – boasts:

    • faster speed. GPT-4 was notoriously slow compared to GPT-3.5 and GPT-3.5 Turbo. GPT-4 Turbo is reportedly significantly faster meaning you don’t have to wait as long for responses. This is especially noticeable with their text-to-speech features.
    • bigger context window. GPT-4 Turbo has a 128,000 token context vs GPT-4’s 8,000 – 32,000 tokens. This means you can now provide around 300 pages of text for the system to reference during your session without it losing track of what you’re talking about. LLM systems are language transformers, and the more context you provide, the better they are able to perform tasks. This has big implications which I’ll address in the below sections on GPTs and the Assistant API.
    • updated knowledge cutoff. GPT-3, 3.5, 3.5 Turbo, and 4 were all trained on data collected before September 2021. This meant if you asked them about something that happened after that date, they would not be able to answer. GPT-4 Turbo’s knowledge cutoff is April 2023, and in the DevDay keynote OpenAI CEO Sam Altman said they will “try to never let it get that out of date again.”
    • lower cost. GPT-4 Turbo is 3x cheaper than GPT-4 for prompts, and 2x cheaper for completions. This is significant for developers who are building things with OpenAI’s API because every token costs money. This is a transparent play to get more developers to work with the platform. As for ChatGPT, the pricing stays the same, so the majority of users won’t see any pricing impact.
    • Other things: Multimodal by default in ChatGPT (Dall-E 3, web lookup, code interpreter triggers automatically), invite-only fine-tuning for GPT-4, and increased rate limits for the API.
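    The pricing change is easy to quantify. Here is a minimal sketch of a per-call cost estimate using the per-1,000-token prices announced at DevDay (GPT-4 at $0.03 prompt / $0.06 completion, GPT-4 Turbo at $0.01 / $0.03); check OpenAI’s pricing page for current numbers, as these change often:

```python
# Per-1,000-token prices in USD, as announced at DevDay (subject to change).
PRICES = {
    "gpt-4": {"prompt": 0.03, "completion": 0.06},
    "gpt-4-turbo": {"prompt": 0.01, "completion": 0.03},
}

def call_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of a single API call in USD."""
    p = PRICES[model]
    return (prompt_tokens / 1000) * p["prompt"] + (completion_tokens / 1000) * p["completion"]

# Example: a 10,000-token prompt with a 1,000-token completion.
old = call_cost("gpt-4", 10_000, 1_000)        # ≈ $0.36
new = call_cost("gpt-4-turbo", 10_000, 1_000)  # ≈ $0.13
```

At these numbers the same call drops to roughly a third of its old price, which matters a lot once an application makes thousands of calls per day.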

    What this means for us

    GPT-4 Turbo is the next version of GPT, and based on release history we can expect either a GPT-4.5 Turbo or a GPT-5 in the relatively near future. The model provides incremental and obvious improvements, and a clear pattern emerges: base models improve performance; turbo models improve speed, extend context windows, and lower cost. The real-world implications of this new model are significant:

    • ChatGPT will appear “smarter” and more “knowledgeable” meaning people will be more inclined to think of these systems as “intelligent” and neutral arbiters of the truth. This continues to be a serious societal problem and will be amplified every time the models get upgraded.
    • Using GPT models for practical things got a lot easier. Context window limits have been a major issue for use cases including knowledge retrieval from large documents, summary writing, and more. The enormous context window of GPT-4 Turbo means students can use ChatGPT to summarize academic articles and entire textbooks, writers can use it to review entire chapters and even books, and data professionals can use it to parse much larger data sets.
    • More people will be using ChatGPT and GPT-based systems for more advanced things, and lean more on the mythologized “reasoning” within these systems to make decisions, and the systems will produce completions good enough to pass a cursory overview leading people to think they are doing good work. Education is necessary to help people understand why this is not the case.

    GPTs: The first step towards GPT Agents and the sidelining of plugins

    ChatGPT users can now create so-called GPTs – effectively tailored ChatGPT versions with custom instructions, expanded “knowledge”, and specialized actions. These GPTs are built from the ChatGPT interface and programmed using natural language meaning you don’t have to be a programmer to build them. This democratizes the creation of custom GPT agents and gives people new AI capabilities.

    • Each GPT has its own custom instructions – a large system prompt where you describe what the GPT is for, what it should do, and how it should behave.
    • You can upload “knowledge” to a GPT in the form of documents and other data and the GPT will refer to this knowledge in its completions. For example, you can upload a textbook as a PDF and tell the GPT to act like a teaching assistant and it can help you learn the content of the textbook, quiz you on important topics, provide summaries, etc.
    • Actions allow you to connect GPTs to external services and customize their interactions. For example you can connect a GPT to a weather API and instruct it on how to pull real-time data from that API for accurate reporting.
    • You can create private GPTs with any content you want.
    • You can share GPTs (when you do they go through a copyright check to make sure you’re not sharing content you don’t own the rights to).
    • Enterprise users can create enterprise-only GPTs to share within their orgs.
    • There will be a future GPT marketplace where you can buy and sell GPTs with profit sharing.
    • Currently GPTs are in beta, available only to ChatGPT Plus users, and being rolled out slowly. Unclear whether they will become available to non-paying users.
    • Some mentions were made about how actions could be associated with ChatGPT plugins, but reading between the lines the message is quite clear: Plugins are being silently sidelined in favour of GPTs.
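    Actions connect a GPT to an external API by way of an OpenAPI schema describing the endpoints the GPT may call. As a rough sketch, here is a minimal, entirely hypothetical weather-lookup schema expressed as a Python dict (the URL, path, and operation name are made up for illustration):

```python
# Hypothetical OpenAPI 3.1 schema for a GPT Action that fetches current
# weather. Everything here (server URL, path, operationId) is illustrative.
weather_action_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Weather lookup", "version": "1.0.0"},
    "servers": [{"url": "https://example.com/api"}],
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getCurrentWeather",
                "summary": "Get current weather for a city",
                "parameters": [
                    {
                        "name": "city",
                        "in": "query",
                        "required": True,
                        "schema": {"type": "string"},
                    }
                ],
            }
        }
    },
}
```

The GPT reads the schema, decides when a user request calls for real-time data, and issues the HTTP request on its own; the creator’s job is describing the API accurately, not writing glue code.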

    What this means for us

    GPTs will become the new primary way people use ChatGPT because they eliminate the need to state the purpose of your interaction with each chat. GPTs will also dramatically accelerate advanced use of ChatGPT because they bring down some significant barriers to entry:

    • The massive 128,000 token limit allows you to upload entire books as “knowledge” in a GPT meaning every student can and will create a GPT for every textbook they own and use it to supercharge their learning.
    • Sharing of GPTs means as people create new capabilities with ChatGPT they’re able to give those capabilities to others. This will be especially important for things like helpdesk, documentation search, and internal enterprise operations.
    • The plugins ecosystem is fading into irrelevancy, both because GPTs take over their role and because the release of GPTs meant the death of hundreds of well-funded startups and projects built around plugins. For example, every “talk to your PDF” type plugin is now meaningless as GPTs do this by default.
    • OpenAI will have a nightmare task on their hands as they try to moderate the tsunami on top of an avalanche of GPTs people make and try to sell in their marketplace. Moderation will be key, and it will be enormously costly.

    Assistants: The programmer’s path to agents

    Along with GPTs (which belong in ChatGPT), OpenAI released the Assistants API, which provides the same family of functionality for programmers who build tools utilizing GPT services. With the Assistants API comes a bunch of features that make the work of every developer a lot easier:

    • Threaded responses for streaming so you don’t have to keep track of every prompt/response pair in your own database
    • Invoke multiple functions at once with function calling
    • “Knowledge” retrieval from documents (low-key low-investment RAG for smaller documents)
    • a stateful API, because this is 2023, not 1996
    • API access to the code interpreter (and a future path towards custom code interpreters)
    • Future promises including a multimodal API, async support, web sockets, and web hooks.
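    To see why statefulness matters, consider what developers had to do before: the chat completions endpoint is stateless, so the client must store and resend the entire conversation history with every single call. This toy thread class (my own illustration, not the real Assistants API) sketches the bookkeeping that threads now move to OpenAI’s side:

```python
class Thread:
    """Toy stand-in for server-side conversation state.

    With a stateless chat API, the client keeps this message list itself
    and resends all of it on every call; a stateful API holds it for you.
    """

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user_message(self, text: str) -> list:
        self.messages.append({"role": "user", "content": text})
        # Stateless API: this full, ever-growing list is the request payload.
        return self.messages

    def add_assistant_message(self, text: str) -> None:
        self.messages.append({"role": "assistant", "content": text})


thread = Thread("You are a helpful assistant.")
payload = thread.add_user_message("Hello!")
thread.add_assistant_message("Hi! How can I help?")
payload = thread.add_user_message("What's new in the API?")
# Four messages must now be resent just to ask the second question,
# and the payload keeps growing with every turn of the conversation.
```

Multiply that growth by every concurrent user and it’s obvious why developers ended up building their own databases of prompt/response pairs, and why threads are such a welcome change.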

    What this means for us

    If you’ve built any application on top of the OpenAI API, chances are you now have to rebuild it. Many of the new features released (threading, multi-function calling, retrieval, statefulness) replace custom features developers were forced to build due to the lack of core support from the API. This will be enormously expensive and damaging to many projects, but it is necessary to move the entire space forward. The lack of these features in the original API was a deficiency, and their introduction is long overdue. One thing I didn’t see was any mention of proper authentication. The current key-based auth in the OpenAI API is sub-optimal at best and leaves developers having to rig their own security around their apps, which is… not great.

    • Building anything with the API is now way easier.
    • This is an aggressive play to onboard more developers, and OpenAI is clearly taking slow developer adoption seriously.
    • The importance of parallel function calling cannot be overstated – this is the path to a lot of advanced functionality.
    • Building extensions to OpenAI’s features remains risky as the API and underlying services evolve, so make sure you have room to rapidly iterate and change.
    • I expect we’ll see a continuation of this rapid evolution of API features for a long time, so stay nimble.

    In summary

    OpenAI is rapidly evolving from a startup with an insanely popular experimental service to a full-fledged platform company with professional products on offer. The rapid evolution of their products shows no sign of slowing down, and I expect by next year’s DevDay what was released this week will appear quaint and old.

    If you’re working with AI in any way, get used to the constant change and uncertainty, because they are going to keep accelerating for a while.

  • AI is a Loom: The End and the New Beginning of Web Dev

    AI is a Loom: The End and the New Beginning of Web Dev

    Web dev as we know it is deprecated. We just haven’t downloaded the latest version yet. What comes next is a metamorphosis, a paradigm shift, a revolution changing how we work, how the web works, and everything we know.

    In March 2023, OpenAI President Greg Brockman drew a rough sketch of a website on a livestream, uploaded it to GPT-4, and got fully functional HTML, CSS, and JavaScript in return. That feature – dubbed “multimodal GPT” – is shipping to all ChatGPT users in the coming months. Soon you’ll be able to open an app, provide some instructions, and have a fully designed and populated website with real content, an ecommerce store, and a social media marketing campaign built in minutes with the help of AI.

    “I’m not saying coding is a dying craft; I’m saying the craft of actually writing code on a keyboard to be input into a coding environment for further processing will become a narrow specialty. Most people who write code today will not be writing code in the near future. Many of them don’t need to and shouldn’t have to write code today! Writing code instructing computers to do things when we have software that writes better code for us makes no sense, and this whole notion of everyone learning how to code feels more and more like telling highschool students their future employment hinges on their ability to master T9 texting.”

    Me, in an email dated November 2021

    Web development stands on the precipice of an AI-driven metamorphosis. In front of us, the demarcation line between two paradigms: The old world of human-written code aided by machines, and the new world of AI-generated code aided by humans. 

    For the web, AI is the Jacquard loom. 

    For most developers, this means transitioning from writing and debugging code to directing AI what to build and checking its work. AI represents a Jacquard loom moment for web development, transitioning our work from hand-coding the fabric of the web to using that fabric as material for building new experiences.

    The implications are enormous, not just for our work but for the web’s future. As AI becomes part of our practice, our role shifts from writing standards-based, accessible, progressively-enhanced code to ensuring AIs use the latest, most advanced standards to build the future. If we don’t embrace this new role, progress will stall as AI biases toward established standards and ignores new tools and best practices.

    Here’s what I see:

    Very soon the public will access AI services that create websites in minutes from prompts, sketches and assets. Wix teased this, and competitors aren’t far behind.

    I’d be shocked if Canva and Figma don’t unveil full “draw it, AI builds it” services by year’s end. Soon there will be ChatGPT plugins that build websites for you from scratch. This is inevitable.

    When I say this out loud, the immediate response is usually some version of “AI can’t write good code” or “AI doesn’t understand users” or “AI makes constant mistakes.” All true, and all irrelevant. This isn’t about AI writing code or autocorrecting our code. AI will instead use the well-documented and well-established frameworks, templates, build processes, and automation we’ve created to make our work easier to weave together the websites and apps we ask for.

    For walled gardens like Wix, this is straightforward: their back-ends, systems, and design languages allow AI to rapidly wire sites to user specifications. And that’s just the start. We’ll soon see new semi-agentive tools supporting various stacks, so you (with the help of an AI) can select frameworks, design systems, ecommerce platforms, etc. based on project needs without writing or knowing how to write code.

    Look at what the people over at Builder are doing, then add an agentive AI on top and you start getting the idea.

    What People Want, What Automation Provides

    Two massive waves of progress are converging:

    Developers have spent a decade building automation tools, frameworks, and systems to improve dev experience. You can now spin up a Next.js site in GitHub Codespaces in minutes without writing a single line of code. Component-based frameworks provide code consistency and add LEGO-like block assembly to web development. Design systems, component libraries, style guides, and tokens enable rapid prototyping and consistent UX/UI. Linting, optimization, accessibility, testing, CI/CD are largely automated. Bridging layout and code is reality. Often, we just connect these pieces. AI serves well as an automated and intelligent loom weaving these pieces together into workable materials.

    On the user side, people want friction-free, no-maintenance, always-on experiences. Faced with the choice between the DIY bazaar of the open web and the shiny mall of app-based walled gardens, they pick the moving sidewalk of least resistance. And they are willing to pay for that convenience; with money and by giving up their data and privacy to surveillance capitalism. Where publishing on the web used to mean standing up a WordPress site (or paying someone else to do it), today bloggers, creators, influencers, and small businesses opt for TikTok, Instagram, YouTube, Medium, Substack, Shopify, and Linktree. 

    The web we lost is a bygone web a larger and larger portion of the public never experienced, and concepts like self-hosting seem archaic and inefficient to the masses. Counterintuitively AI may help bridge this gap and reignite the interest in carving out your own space on the web by lowering the barriers to entry down to describing what you want and watching it manifest.

    What is pushed down as these waves converge and elevate the capabilities of the web-using public is the need for traditional developers. When the options are either an AI site from Wix built from a prompt in minutes or a complex and expensive custom build that takes months to complete, there’s no choice for most people and businesses. When the Jacquard machine automated weaving, hand-woven textiles transitioned from an essential commodity to a luxury art form, and the expertise of manual weaving morphed from a commodity skill into an artistic pursuit. Weavers still exist, and bespoke fabrics are still made, but the vast majority of textile products were made by machines guided by humans who spent their time designing the products instead of making the materials. That’s what comes next for the web. 

    AI Creates Opportunity Space

    This may sound like AI replacing humans. It’s not. Instead it’s a fundamental shift and refocusing of the role of the developer: From writing code to auditing AI-written code. From knowing how to wire together different frameworks to architecting the system that serves up the website. From fighting with CSS to fine-tuning AI-powered user experiences. 

    The people currently working as coders will take a step up the ladder to focus on higher-order capabilities, using their expertise in languages and frameworks to help AIs produce the best output instead of doing the grunt work of writing the code. 

    Web dev as we know it is dead. What comes next is a metamorphosis, a paradigm shift, a revolution changing everything we know.

    Our new human focus as we move into this future together is to ease the persistent tensions found in the intersection between technology and humanity. AI can’t conduct UX research, design novel experiences, innovate standards and best practices. That was always and will remain our territory. As AI takes over the work of weaving the fabric of the web, we do the work of making new things with those materials while improving their quality and inventing new ones.

    In the short term, we’ll become AI managers – customizing, configuring, ensuring user flows and information architectures make sense, monitoring the generated code to ensure the latest standards are in use, and counteracting the inherent bias of AI to repeat prevalent patterns even when they are outdated. We’ll shift from writing code to deciding what the code should accomplish. To do that, we must all become experts at the bleeding edge of code, and invest our time in innovating new standards, patterns, and frameworks for the AIs to use. It’s a whole different job needing a whole new version of the skills we’ve always had.

    This transformation is happening now. For consumers and SMBs, it will be lightning fast. For institutions and large enterprises it will be slower, hindered by legacy systems, institutional inertia, and resistance to change. But it’s coming. 

    For web workers, it is no longer enough to know the core languages and established best practices. UX, interaction design, accessibility, and innovation is our new bread and butter, built on a strong foundation of modern web standards and bleeding edge HTML, CSS, JavaScript.

    The future of the web belongs to those who strategically apply AI to meet user needs. With proper guidance, AI can supercharge our work, provided we put ethics, accessibility, user experience, and innovation front and center.

    We build the future with every decision we make. How we decide to work with AI decides what future we get to live in.

    Cross-posted to LinkedIn and dev.to

  • “Ice Cream So Good” and the Behavioural Conditioning of Creators

    “Ice Cream So Good” and the Behavioural Conditioning of Creators

    If you’ve been on TikTok or Instagram over the past few months, chances are you’ve come across creators exclaiming “yes, yes, yes, mmm, ice cream so good” while moving in repetitive patterns akin to video game characters. There’s also a good chance you’ve thought to yourself “This is ridiculous! I would never do something like that” even though you and I and everyone else perform the same type of alchemic incantations to please the algorithmic gods of the attention economy on a daily basis.

    Every time we use a hashtag or think about the SEO of a piece of content or create a post to match a trend or ask our viewers to “hit that bell and remember to like and subscribe,” we are acting on the behavioural conditioning social media and other platforms expose us to, changing our behaviour to get our meagre slice of the attention pie (and maybe some money to boot.) Look no further than YouTube where for every type of content there is an established style and creators mimic each other so closely it’s becoming hard to tell them apart.

    The only substantive difference between optimizing your article title for SEO and exclaiming “ice cream so good” when someone sends you a sticker on TikTok live is the latter act comes with a guarantee of financial return.

    “Yes, yes, yes, gang gang, ice cream so good”

    https://twitter.com/alsoabouteve/status/1681397885940711426

    Dubbed “NPC streaming,” the latest trend on TikTok is being met with equal parts astonishment, concern, and mimicry. The core idea is simple: TikTok has a feature where creators can host live events. During those live events, viewers can buy tokens in the form of stickers, animations, and filters they can send to the creator in real time. The creator in turn gets a tiny percentage of the profits from each sticker, animation, or filter being used.

    In other words, the more viewers a creator gets, and the more incentive they give those viewers to send them stickers and animations and filters, the more money the creator (and the platform) gets. Crafty creators have figured out the easiest way to get people to send them these digital tokens is by responding directly to them. Thus if you send an ice cream sticker, PinkyDoll will smile and say “mmmm, ice cream so good.”

    Creating live content encouraging users to send stickers is nothing new. A few years ago I remember seeing a live where a man pretended to try to have a serious conversation about something while getting more and more outraged as people applied ridiculous filters to his face. The recent invention of NPC streaming characters is the refined distillate of this insight:

    Forget about content – the easiest way for creators to earn money is by letting people control them directly through payment.

    Based on recent reporting, the most successful NPC Streamers can earn thousands of dollars per day doing this work. TikTok takes a reported 50% of their profits, so this trend is enormously lucrative for the platform even when the creators themselves don’t earn all that much.

    Please Please Me Like I Please You

    In a recent article titled “Operant Conditioning in Generative AI Image Creation,” UX pioneer Jakob Nielsen makes the following observation:

    “Generative AI for images torments users with alternating emotions of euphoria and anguish as it metes out sublime or disastrous pictures with wanton unpredictability. This makes users feel like the rat in an operant conditioning experiment, entrapping them in a ceaseless pursuit of rewards amidst sporadic punishments.”

    Replace “Generative AI for images” with “monetization schemes on social media platforms” and the observation rings just as true:

    From SEO to NPC Streaming, the opaque and ever-changing algorithms trickling out a tiny share of the enormous profits social media platforms make off their creators are giant (hopefully) accidental operant conditioning experiments demonstrating just how far we humans are willing to go in our pursuit of a promised reward.

    Social media monetization is exploitationware (aka “gamification”) in its purest form: creators are placed in an environment where, if they stroke the algorithm the exact right way at the exact right time, there may or may not be a payout at the end. As with a rigged slot machine, most creators get close enough to see the big win, but never quite close enough to grab it. As with a casino, the platforms promote the select few creators who actually hit the jackpot, making everyone else feel like if they just try one more time, they might win as well. And like every subject in an effective operant conditioning system, we alter and conform our behaviour to the conditions of the system in a never-ending chase for the dopamine fix of cracking the code and getting our reward.

    In the book “The Willpower Instinct“, author Kelly McGonigal describes how this exploit of our reward system works:

    “When dopamine hijacks your attention, the mind becomes fixated on obtaining or repeating whatever triggered it. (…) Evolution doesn’t give a damn about happiness itself, but will use the promise of happiness to keep us struggling to stay alive. And so the promise of happiness–not the direct experience of happiness–is the brain’s strategy to keep you hunting, gathering, working, and wooing. (…) When we add the instant gratification of modern technology to this primitive motivation system, we end up with dopamine-delivery devices that are damn near impossible to put down.”

    That’s creator platform monetization: A dopamine-delivery system encouraging creators to seek happiness in cracking the code, gaming the system, and chasing the promise of happiness in the form of a paycheck.

    TV-shaped eyes

    Growing up in the 1980s there was much talk among the adults about their kids developing “TV-shaped eyes” from watching too many cartoons. Never mind that in Norway in the 1980s there was only one channel, and it aired one hour of children’s programming per day, at 6pm, right before the evening news.

    The underlying concern was prescient though: Our media consumption not only consumes our time and attention; it alters our behaviour in significant ways. Social media platforms have taken this to the ultimate extreme through their incentive-based monetization systems, and we are all paying the price for it.

    SEO is about gaming the ever-changing search engine algorithms to get higher rankings. NPC streaming is about gaming the TikTok monetization system to get as much money out of it as possible. If it were easy, if the platforms shared their profits automatically with every creator, the dopamine incentive of the game would go away, we would stop posting, and shareholder profits would tank. So instead we get the attention economy and its latest, purest incarnation: the NPC Streamer.

    Breaking the cage

    The engine driving the NPC streaming trend (and every other trend on creator platforms) is monetization, and the monetization models platforms use are fundamentally inequitable to both creators and passive users. Rather than paying creators their fair share of platform profits, platforms use the gamification of payments as behavioural conditioning to get creators to make content that encourages other users to consume more content and pay money into the system. What we need is something else: a monetization system that pays creators for the quality of their content and the value and utility people derive from it.

    What got us here won’t get us anything but fake ice cream. I welcome your ideas about how we break this cage and build a better online future for us all.


    Cross-posted to LinkedIn.

  • The Zeroth Law of AI Implementation

    The Zeroth Law of AI Implementation

    “An AI may not be used to harm humanity, or, by not being used, allow humanity to come to harm.”

    As Artificial Intelligence systems (AI) like #ChatGPT enter into our lives and our work, we need some basic guidelines for how we implement them going forward. Here’s a place to start:

    The Zeroth Law of AI Implementation:

    An AI may not be used to harm humanity, or, by not being used, allow humanity to come to harm. Implement AI in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end, and treat AI always as a means to an end and never as an end in itself.

    The sufficiently esoteric sci-fi and philosophy reader will note these are rewrites and recontextualizations of Isaac Asimov’s Zeroth Law of Robotics and the second formulation of Kant’s Categorical Imperative.

    The Breakdown

    My proposed Zeroth Law of AI Implementation aims to ground us in a shared purpose and vision for how we build our future with AI. It sets forth four basic principles:

    1. Do No Harm* with AI.
    2. Harm can be caused by having a tool and refusing to use it or otherwise limiting its use. For example, harm can be caused by limiting access or capability based on factors including socio-economic status, geography, and disability.
    3. Humans are always ends in themselves, and must never be considered only means to an end (see “The Age of Surveillance Capitalism“).
    4. AIs are always means to (human) ends, and must never be ends in themselves.

    * We need a clear definition of what “harm humanity” means, including 1) who gets to determine what constitutes harm, 2) who can be harmed, and 3) who adjudicates whether harm has been caused.

    The Reason

    The goal of technology is to build futures where humans have the capabilities to be and do what they have reason to value. AI technology presents an infinite possibility space for us to make that happen. AI technology also has the potential to limit our capabilities and cause real harm.

    These statements seem self-evident, yet when technology is developed and implemented, we often forget its core purpose as we are blinded by the capabilities of the technology (it becomes an end in itself), by the power it affords us (those with access gain powers not afforded to others), and by the financial opportunities it affords us (using technology to turn humans into means for financial ends).

    Grounding ourselves in a common principle like this proposed Zeroth Law of AI Implementation reminds us of the core purpose of technological advancement. It gives us a reference point we can use to evaluate and challenge our decisions, and a shared foundational platform we can build further principles on.

    Discuss

    Right now, at the start of our future with AI, we have a unique opportunity to talk about where we want to go next and how we want to get there. That conversation starts with talking about core principles. The Zeroth Law of AI Implementation is my contribution to this conversation. I’d love to hear yours!


    Cross-posted to LinkedIn.

  • Book Review: How To Be Perfect by Michael Schur

    Book Review: How To Be Perfect by Michael Schur

    4 1/2 of 5

    While the book doesn’t teach you how to be perfect, you’ll be a better person for reading it.

    If ever I teach an intro to moral philosophy class, this book will be prerequisite reading. Sold as a fun book about ethics from the creator of the TV show “The Good Place,” this is actually a solid introduction to the academic subject of ethics, sprinkled with humour and real-life anecdotes to make it relatable.

    “How To Be Perfect” is a semi-biographical story about a TV writer who goes on a journey through moral philosophy to try to figure out how to be a better person. And maybe more importantly how to teach his young children how to be the best they can be. Not to spoil anything, but at the end of the book there’s an entire section where the author talks to his kids about how to be good people, and it is wonderful. 

    The book introduces a variety of branches of moral philosophy with questions like “Should I lie and tell my friend I like her ugly shirt?” and “Do I have to return my shopping cart to the shopping cart rack thingy?” and “Should I punch my friend in the face for no reason?” And this is where the book truly shines: It succeeds at framing real moral problems in a comedic yet relatable way and introducing ethics to people in a way that actually makes practical sense to them.

    Something we all need more of.

    I suggested “How To Be Perfect” to my design ethics book club as a light read for the holidays. Two chapters in I dreaded the comments I’d get from my friends. “Light read? I bet Kant would have some opinions on passing off a textbook as an enjoyable holiday treat!” Then I continued reading and realized I’d sold my friends and the book short.

    “How To Be Perfect” is an imperfect but damn fine effort at making the exceptionally challenging and often mind-numbingly turgid topic of ethics and moral philosophy fun and engaging. If you’re interested in ethics at all, and you’ve wondered where to start or worried it would be either too boring or too depressing, I recommend this book. In fact I recommend this book, period. And I’m not just saying that because I am a philosopher by education and deeply fascinated by ethics.

    This book sets out to do something moral philosophy sorely needs: make ethics make sense, in a human and relatable way. Moral philosophy has a bad tendency of being at the same time overbearingly moralistic (“here’s how you’re doing everything wrong in your life, and here are some impossible standards you must follow to right yourself!”), philosophically partisan (“my form of ethics, in my specific interpretation, is the only real ethics. All other ethics are wrong!”), and fundamentally unrelatable (“Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.”) Michael Schur tries (and mostly succeeds) to balance on a knife’s edge between staying true to the academic foundations of moral philosophy and framing the many theories covered in real-world scenarios, funny anecdotes, personal experiences, and a heavy helping of yelling through a bullhorn at the ivory towers of academic philosophy.

    This last point is probably best exemplified in the chapter on charity, where Schur points out how moral philosophers of different traditions will contort themselves into a Gordian knot over the moral failings of massively wealthy people using charitable giving as a self-congratulatory popularity contest while in the real world the money they raise actually does some good.

    Schur also does something extraordinary in the book: He tries (and I sincerely hope he succeeds!) to introduce a new term both to philosophy and to our common language: “Moral Exhaustion.” Let me quote from the book:

    “even if we scale the triple-peaked mountain of Daily Stress, Serious Problems, and Circumstance, and (running on 5 percent battery power) try our very best to do the harder/better thing, we often fail miserably despite our best intentions. It. Is. Exhausting.”

    Michael Schur, How To Be Perfect

    I think moral exhaustion is a great description of the malaise we are all feeling in our lives and our work today, and I’m now using the term freely in my everyday language thanks to this book.

    One major problem with moral philosophy (aka ethics) – and I say this as someone who studied moral philosophy for years at university – is its detachment from the real world and its separation into distinct traditions. You are either a Utilitarian or a Deontologist, a virtue ethicist or a Contractualist, and whatever position you hold, you must defend your tradition against the others. (I am oversimplifying here, but this is a real struggle. Call it trauma from years of being an analytical philosopher taught by a faculty composed almost entirely of Kantians.) Through the book, Schur attempts to line up these and other moral philosophy traditions and theories and thread a needle straight through them to show that rather than treating ethics as One Theory to Rule Them All, we are best served by an Everything, Everywhere, All At Once approach to our decision making.

    As an introduction to ethics and moral philosophy, “How To Be Perfect” does a good job introducing the main branches of western philosophy (Virtue, Duty, and Consequentialist ethics), newer traditions like Contractualism, and even non-western traditions including Ubuntu and Buddhist ideas. This breadth stems from the impressive research Schur did while writing the TV comedy show “The Good Place,” which in reality is a covert psy-op to secretly educate people about ethics by making ethics fun.

    Side note: Watching “The Good Place” I would typically at least once in every episode jump up and yell “ARE YOU KIDDING ME?!?!?!? They are doing a WHOLE EPISODE on [insert obscure moral philosophy thing]???!?!?!” To which my wife of endless patience would say “Sit down and watch the show.” Point being that show was astounding and if you haven’t watched it, I cannot recommend it enough. Because it is hilarious. And well written. And exceptionally acted. And also, it contextualizes ethics in a way that just makes sense.

    Another side note: I recommend getting the audiobook version of this book. It is narrated by the author and the entire leading cast of “The Good Place,” with snarky footnotes from the book’s academic advisor Todd May and even occasional cameos.

    How is “How To Be Perfect” not perfect? In brutal honesty I’ll say it reads like what it is: an introduction to moral philosophy written by someone who is at an introductory level in moral philosophy. Schur finds fascination in the typical places: the vileness and eye-watering absurdity of Ayn Rand’s Objectivism, the spectacle of Jeremy Bentham’s posthumous existence as a cadaver on display at a random university (content warning on that link), the turgidness of Immanuel Kant’s writings, etc. We’ve all been there.

    In the same vein, in my opinion he makes two significant blunders – one historical and one of lack of foresight: 

    He writes off Heidegger’s works due to their impenetrability and his much discussed association with Nazism, ignoring the enormous impact Heidegger had on moral and other philosophy. As one of the members of my book club said “I wish he (Schur) would go beyond just hints and snarky remarks to actually explain why he sidesteps Heidegger. I felt like he was making excuses for not reading the work.”

    Schur also spends a fair bit of time towards the end of the book celebrating the works of Peter Singer and the longtermism his ideas helped inspire. Anyone paying attention to the collapse of crypto and the bizarre politics driving many Silicon Valley founders will know Singer’s ideals have become a breeding ground for … let’s call them problematic ideas from white men of enormous wealth and power about how we should structure and organize our society today to protect the people of tomorrow. I can’t help but think that had “How To Be Perfect” been written in 2022, that entire section of the book would have been very different. So in honesty my critique on this point is a perfect example of an anachronism.

    Let me be perfectly clear here: I consider these issues minor to the point of being irrelevant. This book is not an academic textbook, it’s a deeply personal book about morals and ethics that tries to do right by the subject matter and the reader and succeeds more than any similar book I’ve ever read.

    Final thoughts

    If you’re still with me at this point, you’re definitely the type of person who will enjoy this book, so go out and get it in whatever format you prefer. If on the other hand you are looking for a book to give to your friend who refuses to return their shopping cart to the shopping cart shed thingy, or to subtly tell your family member that it’s not OK to tell people their shirt is ugly even if it is, chances are it’ll be a nice decoration on a shelf and will eventually end up in a donation box. “How To Be Perfect” is not light reading for an airplane ride, in spite of how it’s marketed. It is so much more, and because of this it demands much more from the reader. Just like real life demands so much more from us all. And that is why this book is worth reading.


    Cross-posted to LinkedIn.

  • The Real Impact of AI on Education will be Accessibility

    The Real Impact of AI on Education will be Accessibility

    I have a confession to make: My university degree sits atop a mountain of lies I told to mask my dyslexia. Now AI is positioned to make education more accessible so future students don’t have to lie to get where they need to go.

    I read maybe a quarter of the required materials for my university studies. My term papers are filled with quotes from books I never even opened. I became a master at convincing my fellow students to give me summaries, quick explainers, and relevant quotes from materials I knew I’d never be able to finish in time to meet the inaccessible demands of academia. And after 5 years and a degree, I abandoned my hopes for a graduate degree in philosophy to pursue other avenues where my reading disability was not a constant blocker.

    Today, as we stand at the beginning of a new era of computing, one shining beacon in the infinite possibility space of AI is making education more accessible.

    AI can make education more accessible today

    Here are a few ideas for how we can implement existing AI tools in education right now to dramatically improve accessibility:

    • AI audiobooks on demand: AI can generate natural sounding audiobooks from any written text. Audiobooks of academic texts are hard to come by and prohibitively expensive. AI can solve that problem and allow the reader to choose their preferred reading modality. This tech already exists (Apple has shipped it).
    • AI translations to any language: AI models are very good at translating text from one language to another. This means academic texts written in one language can now be accessible in any language. Again, the technology already exists in the form of Google Translate etc.
    • AI reading level adjustment: You know that “explain it to me like I’m five” meme? AI language models like ChatGPT can do that, ensuring the salient points and meaning of the text are preserved. Academic texts are often superfluously arcane and turgid. I predict in the near future we’ll have browser plugins and other AI-powered services where you can set the reading level and writing style of any text to your preference while preserving its meaning.
    • AI summaries on demand: Using summaries of long texts to reinforce learning has a long tradition in academia. There’s a lucrative industry and pop culture mythology around services like Coles Notes and CliffsNotes. AI can be used to generate custom summaries from any text, large or small, to make it more accessible to people like me who can’t read the whole thing.
    • AI assessments, flashcards, and other learning tools: Dump any text into an AI and ask it to generate assessments, flashcards, questions, examples, or other things. The possibilities here are limitless.
    • AI auto-captions and transcripts of audio, video, and in-person events: 5 years ago captions were an expensive nice-to-have very few could afford or cared about. Today, auto-captions are available on every LinkedIn and YouTube video, and most platforms also provide verbose transcripts on the fly. These same technologies are used for live captioning in video chat apps like Teams, and can be used at live events including in classrooms. The technology is available, and quite frankly I can’t think of any good reason it is not immediately implemented across all educational campuses worldwide to provide improved accessibility. Imagine knowing you will have a transcript at the end of every lecture so you can focus on understanding what is being taught instead of just writing it down!
    • The possibility space here is infinite!

    Let me guess: You have concerns. About the accuracy of AI summaries and transcripts and translations. About whether leaving AIs to do this work will take jobs away from humans. About whether students relying on AIs will result in the students not learning anything.

    Here’s my reality check to you: The alternative is students not learning at all. The alternative is students like me lying through their teeth to pass arbitrary tests of reading speed (that’s what mandatory minimum page counts are). The alternative is large numbers of people being excluded from education because we’ve chosen to tie access to education to reading ability and speed rather than literally anything that has to do with actual learning.

    As we build our future with AI, our conversations about AI in education should be about how these technologies can lower barriers and make education accessible and possible for everyone.

    Reading ≠ Learning

    Much handwringing and professional dread has been published in the past months around the risks of AI models like #ChatGPT being used by students to plagiarize materials and generally cheat on their papers and exams. Headlines like “ChatGPT passes exams for MBA courses and medical licences” are addictive clickbait in an environment of economic uncertainty and pandemic fatigue. But look beyond the immediate “the graphing calculator / internet / smartphones / YouTube / chat apps / smart watches / AI will destroy education and plunge us back into the dark ages” response and you see the potential of this technology to make education not only better, but more available and accessible to more people. If we build the tools to make that future happen.

    I have moderate dyslexia, meaning I read at about speaking pace, and when I read, the cognitive load of just getting the words right and keeping track of where I am is exhausting. Reading a regular-length novel cover to cover can take me 6 months. Reading a densely worded academic textbook is a multi-year project. But in university, the expectation was for me to consume up to 5,000 pages of material each semester. So while I never wanted to lie to my fellow students, my professors, and my academic advisors about my reading, I was forced to. Academia, and education in general, is built on a core assumption that people’s ability to read, and read fast, is tied to their ability to learn, grow, innovate, and build the future. I am walking proof this is not the case, and I’m also a prime example of how reading speed is a permanent blocker for many educational paths and the careers that follow.

    Which is absurd because it is not the reading that matters, it’s understanding the ideas captured in the reading materials.

    Enter AI

    Last week I was trying to explain the ethics of Amartya Sen’s Capability Approach to a friend. One of the core ideas of Capability Approach is that people should have the capabilities and functionings to be and do what they value and have reason to value in their lives and environments. Which begs the question: What is the difference between what we value and what we have reason to value?

    This is not something Sen answers directly. Instead it is something you glean from reading several of his published works. As with much of academic philosophy, this distinction is at the same time essential to understanding and lacking in common sense explanation.

    So I asked ChatGPT: “What is the difference between what we value and what we have reason to value?”

    Here’s what the AI answered:

    “What we value refers to the things that we personally consider important or desirable, such as material possessions, relationships, or personal achievements.

    What we have reason to value, as described by Amartya Sen in the capability approach, refers to the things that we have the capability to achieve and that are consistent with our personal values and goals. In other words, it refers to the things that we would value if we had the opportunity to achieve them.

    For example, one might value owning a house, but may not have the capability to buy one due to financial constraints. In this case, having a safe and affordable place to live would be something they have reason to value. In this way, the distinction between what we value and what we have reason to value is one of actual opportunity or capability. The capability approach takes into account the real opportunities for people to achieve the things they would value if they had the capability to do so.”

    Screenshot of the ChatGPT output quoted above.

    This is extraordinary. Notice how the system not only provides an understandable and plain language explanation of the subject, but also manages to pick up on the obscure origins of my question without the prompt providing what I consider sufficient context?

    This explanation of the difference between what we value and what we have reason to value is so good it should be used in introductory textbooks on Sen’s work. And it serves as an example of what AI can do for education if we build AI-powered systems for education.

    And Sen’s distinction between what we value and what we have reason to value is salient to this issue: We as a society value education, and we have reason to value education for all. Even so, there are millions of people around the world whose real opportunities to access education are limited for an endless list of reasons. AI can grant us all the capabilities to access education in new and better ways, allowing more of us to do and be what we have reason to value in our lives, our schools, and our work!

    Our future with AI is the future we build

    We are at the beginning of the beginning of our future with AI. Ahead of us is an infinite possibility space. And more now than at any time in human history we have the capabilities and agency and communal connection and global wealth and power to build positive and meaningful futures for ourselves and those around us together.

    It starts with looking beyond the perpetual shock of disruption every new technology brings to what those new technologies can do for us as we integrate them into our lives and our work. It starts with talking about the futures we want to build for ourselves and how we make them real. It starts with seeing the world, thinking about how to make it better, and then making it happen.


    Cross-posted to LinkedIn.

    Header image: Screenshot of the Dyslexia Simulator from Harvard University.

  • Stepping Into the Future: Pair Programming with AI

    Stepping Into the Future: Pair Programming with AI

    If we do this right, AI can make our jobs and our lives easier and give us time back to do the things we have reason to value. Pair programming with AI serves as a practical example.

    With the realization of AI’s power comes well-justified concerns about how AIs will figure into our lives – and in particular our work. Look to any media outlet and you’ll find a dense fog of articles, videos, podcasts, and think pieces about whether, when, and how AIs will take people’s jobs, and whose jobs are most at risk right now.

    In this darkness, let me put up a bright beacon on the horizon of possibility and give you a glimpse of what a future of human-AI collaboration can look like.

    Explain it to me

    You bump up against a problem at work: an Excel formula you’ve forgotten, an inscrutable data processing script written by people no longer on the team, the right way to invoke a particular JavaScript function while being mindful of state. These situations are common, and they consume significant time and cognitive resources. They are also what I call “robot work,” as in the kind of repetitive rote work you can imagine a robot doing. 

    Now imagine having a skilled co-worker on call, at all times, ready to help you find and explain the right formula, document that inscrutable script, and refactor or even build from scratch that JavaScript function you need.

    That’s what AI can be for us: Just-In-Time assistants for all the tedious, time consuming, and rote robot work taking up our valuable time and cognitive capacity.

    If you’re a developer, you can experience this future today via various AI integrations including GitHub Copilot and ChatGPT.

    GitHub Copilot Labs panel in VS Code.

    GitHub Copilot coupled with the new GitHub Copilot Labs extension in VS Code gives you a pair programming assistant right in your development environment. Highlight any block of code and in the Copilot Labs panel you can ask for an explanation of the code, have it translated into another (applicable) code language, use a series of “brushes” on it including making the code more readable, adding types, cleaning, chunking, even documenting. You can even use Copilot to write and run tests on your code.

    A myriad of ChatGPT extensions, including Ali Gençay’s ChatGPT for VS Code, do much the same via a slightly different route. Authenticate the extension with OpenAI’s ChatGPT API, highlight any code, and you can ask ChatGPT to add tests, find bugs, optimize, explain, and add comments automatically. You also get the ability to start a full chat with ChatGPT in a dedicated panel right inside the editor, where you can talk to the AI in more detail about whatever you want.

    Features from the ChatGPT VS Code extension.

    Time and Energy

    This past week I’ve been working on a large project involving code written by someone else. The JavaScript of the project is consolidated into two giant files and is an unruly mix of carbon copies of standard components and functions, modified code copied from documentation sites, and custom code. As is often the case, the documentation is lacking where it’s most needed.

    For my project, I need to refactor (rewrite code so it does the same thing in a different way) large parts of the codebase and add extensive documentation along the way. That means figuring out how the code works, adding comments to clearly document and explain it, reorganizing the code so it can be split into components, cleaning up and updating the code to modern standards, and finally splitting it up into discrete modules. This is robot work: rote, repetitive tasks consuming significant time and cognitive energy better spent focusing on what the code is for: serving as an example for a course on JavaScript build processes.

    In the past, I’d end up investing several days doing this work. Now, with the help of AI, the workload is significantly reduced. This, I think, is an instructive example of how our future with AI can unfold: with AI assisting us as we do our work rather than taking over that work.

    Refactor this

    Both GitHub Copilot and ChatGPT are conversational AIs. You chat with them as you would with a person. You can ask questions, give instructions, and ask them to perform tasks for you. Using AIs as pair programmers, you do a combination of all of this and more.

    If you’re using the VS Code extensions mentioned above, they are already set up for the correct context. In the case of ChatGPT you can also use it as a stand-alone pair-programmer with some basic setup:

    To start, set the stage (literally) by instructing the AI on how you want it to behave. In a new chat, provide a prompt similar to this:

    “You are a pair programmer specializing in JavaScript. You help refactor code to bring it up to ES6 standards. Is that understood?”

    Setting the stage for ChatGPT to provide useful feedback.

    The format here is “You are a [some role]. You [perform some skill/action]. Is that understood?” The last question gives the AI an opportunity to state how it is setting itself up based on your instructions, and gives you an opportunity to provide further instructions. In my case, ChatGPT responded as follows:

    “Yes, that is understood. As a language model, I am able to understand and provide assistance with various programming languages, including JavaScript and its latest version, ES6. I can help refactor code to bring it up to current standards, as well as provide guidance on best practices for writing maintainable, efficient code.”
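    If you find yourself setting the stage often, the template is simple enough to capture in a tiny helper. This is purely a hypothetical convenience for building the prompt string – not part of any ChatGPT or VS Code extension API:

```javascript
// Hypothetical helper: builds a stage-setting prompt from the
// "You are a [some role]. You [perform some skill/action]." template.
const stagePrompt = (role, skill) =>
  `You are a ${role}. You ${skill}. Is that understood?`;

// Example: reproduce the prompt used above.
const prompt = stagePrompt(
  "pair programmer specializing in JavaScript",
  "help refactor code to bring it up to ES6 standards"
);
```

    Paste the resulting string into a new chat, and the AI’s answer tells you how it has set itself up.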

    From here you can, for example, paste in a JavaScript function and ask the AI to help find an error and resolve it:

    ChatGPT finds and explains issues in JavaScript.

    In response, ChatGPT provides an explanation of the errors it discovered, prototype examples of solutions to the issues, and finally a full refactoring of the pasted code with the issues resolved.
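    The code from my project is too long to reproduce here, but here is a hypothetical example of the kind of before-and-after such a session produces – a classic var-in-a-loop closure bug, refactored to ES6:

```javascript
// Before: the kind of pre-ES6 code you might paste in.
// Bug: `var i` is function-scoped, so by the time any greeter runs,
// the loop has finished and every closure sees the final value of i.
function makeGreeters(names) {
  var greeters = [];
  for (var i = 0; i < names.length; i++) {
    greeters.push(function () {
      return "Hello, " + names[i];
    });
  }
  return greeters;
}

// After: an ES6-style refactor of the kind the AI might propose.
// Each arrow-function closure captures its own `name`, and a template
// literal replaces string concatenation.
const makeGreetersES6 = (names) => names.map((name) => () => `Hello, ${name}`);
```

    The buggy version greets everyone as “Hello, undefined”; the refactor greets each name correctly – exactly the kind of block-scoping fix an ES6 upgrade is for.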

    This kind of contextual response not only helps solve immediate problems, but also teaches you what’s wrong and how to fix it.

    This is invaluable both for people learning to code and for people working with code in any capacity, which is why I’d strongly discourage any teacher or manager who is right now trying to figure out how to block people from using AIs in their work. AIs reduce the need for Googling or looking up code examples on documentation sites, coding forums, and open source repositories. Instead, they give you contextual explanations and references related to your specific code, and even help you with refactoring. This is the future of work, and it gives us more capabilities as workers.

    • Have some code you can’t make heads or tails of? AI can explain what it does. Computers are much better at parsing logic-based languages than humans, and conversational AIs like ChatGPT are specifically constructed to output human-sounding language, making them ideal tools for deciphering complex code for human consumption.
    • Have some code in need of documentation? AI can write a function description, inline comments, or whatever you prefer based on your instructions.
    • Need to refactor based on specific parameters? AI can get you started.
    • I could go on but I think you get the idea.

    I’ve worked alongside these AI pair programmers for the past year and a bit, and I can say with absolute conviction these tools and materials will make our lives better if we use them right and integrate them as helpers for rather than replacements of human labor.

    In my experience, pair programming with an AI feels like working with an overly polite person with encyclopedic knowledge of coding and no awareness of what they don’t know. And these are just our first timid steps into the infinite possibility space we are entering as AIs become our assistants.

    The beginning of the beginning

    As you interact with AI today, be constantly aware of where you are: at the beginning of the beginning of a new era. While these tools are powerful, they are not omnipotent. Far from it. They are shallow, error-prone, and while they sound convincing, they cannot be trusted. A good mental model for what they produce right now is bullshit as defined by Harry G. Frankfurt: it looks true, and it may be true, but some of the time it’s just plain wrong, and the AI will still present it as the truth. While they talk like humans, AIs are not conscious or sentient or aware. They have no human comprehension of your question or their answer. They are advanced pattern recognition systems that tumble down enormously complex decision trees any time a prompt is provided, issuing human-sounding strings of text (or code) with a statistically high probability of being the kind of answer their human trainers consider correct.

    When I asked ChatGPT to correct a function containing a deprecated method, it corrected the syntax of the function but kept the deprecated method. When I told it the method was deprecated, it omitted it and refactored the code, but the result used a similar-sounding method that serves a very different purpose and was therefore non-functional and just plain wrong.
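    The method in question was specific to my project, but JavaScript’s deprecated substr offers a good hypothetical illustration of this failure mode: the similar-sounding substring looks like a drop-in replacement but interprets its second argument entirely differently.

```javascript
// Hypothetical illustration (not the actual code from my project):
// String.prototype.substr() is deprecated, and the similar-sounding
// substring() is NOT a drop-in replacement.
const s = "JavaScript";

const fromSubstr = s.substr(4, 6);       // second argument is a LENGTH    -> "Script"
const fromSubstring = s.substring(4, 6); // second argument is an END index -> "Sc"

// A behavior-preserving, non-deprecated replacement is slice():
const fromSlice = s.slice(4, 4 + 6);     // -> "Script"
```

    A naive swap compiles and runs, returns the wrong string, and still looks plausible – which is exactly why the output needs a human reviewer.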

    When I asked ChatGPT to find an error in a large body of code, it found two real errors and invented a third one, going as far as referencing use of a method that wasn’t even in the code provided.

    These examples highlight why I see AIs as tools and materials rather than replacements for human labor. They have no understanding, no contextual awareness, no ability to do creative or lateral thinking. A human still needs to be in the loop to make sure the output meets parameters, does what was intended, and follows best practices and standards (not to mention to hold ethical responsibility for the work created). These things we call “AI” are very much artificial, but they are not intelligent.

    Intelligence is added by the human operator.

    Even so, the pair programming offered by these prototype AIs is an enormous leap forward for human workers. And you can easily see how this type of AI-driven assistance can be extended to other work and other tasks. 

    I’ve come to think of them as overconfident colleagues with a lack of self-awareness. Because of how they are “trained” – being fed large volumes of data from a corpus lifted off the internet – their “knowledge” is limited to the coding world of two years ago. Therefore, when it comes to modern features, frameworks, techniques, and standards released in the past two years, our current AIs know naught, and more importantly do not know what they do not know. Therefore, if you’re writing code on the bleeding edge of standards, you’re on your own. Or better yet: You’re training your future AI pair programmer! So the pressure is on to get it right!

    The future is today

    Having seen what AIs can do today, I desperately wish I had a looking glass to see what the future of work looks like. The potential here is infinite. The very best AI tools we have today are prototypes and MVPs trained on old data and limited in their scope. The AIs we’ll have a year from now, five years from now, ten years from now will be beyond anything we can imagine. And with these tools and materials in hand we can choose to build meaningful futures for everyone where we all have the capabilities to be and do what we have reason to value.

    The future we are stepping into today is a future where AI is part of our lives, our work, our communities and our society. If you are alive today, and especially if you find yourself in a job, you are in the right place at the right time: These next few years are when we collectively get to figure out how AI fits into our lives and our work. This is when we set the stage for our futures with AI, and we all have a part to play. The work starts by asking yourself in what parts of your life you act like a robot, and whether you’re willing to part with that work and let an AI do it for you so you can do something else.

    If we do this right, AI will allow us to reclaim our time to be human.

    Cross-posted to LinkedIn.

  • Book Review: “How High We Go In The Dark” by Sequoia Nagamatsu

    Book Review: “How High We Go In The Dark” by Sequoia Nagamatsu

    (4/5)

    This is a book about death. Let me say that up front. It’s also a book about life after death – as in the lives of the people whose loved ones are facing or have passed the threshold of death. As post-pandemic novels go, this one ranks among the best I’ve read.

    A pandemic. A dying child. A VR suicide group. A slow recovery. A spaceship. Thousands of years of longing experienced in a single lifetime.

    “How High We Go In The Dark” reads like a series of short stories, tied together by time and global events. It’s contemplative in a way readers of “Station Eleven,” “Severance,” and “The Memory Police” will recognize. What sets this book apart from the others is its lack of a central protagonist or linear threaded story. In “How High We Go In The Dark” each chapter is a first-person narrative of a character met once, diary-like in its presentation. Each chapter stands alone and can be read as a singular unit. This is accentuated by the audiobook having a different narrator for each chapter.

    Every word is steeped in melancholy and longing; this book is not for those who seek joy and excitement. It is rooted in a deep sense of grief for a dying people, a dying planet, reflecting the nebulous grief and loss of past normality we’ve all experienced over the 3+ years of the COVID-19 pandemic.

    What’s missing is relief: reading “How High We Go In The Dark” is riding perpetually just behind a cresting wave – feeling its resolution within reach but never quite getting there.

    Read, with caution and emotional support.

    Inspired by Christina Stathopoulos, MSc‘s #BookAMonthChallenge and Mordy Golding‘s yearly book summary I’m committing myself to posting reviews of all the books I read in 2023. This is the first.

  • Tools and Materials: A Mental Model for AI

    Tools and Materials: A Mental Model for AI

    “Language shapes the way we think, and determines what we can think about.”

    Benjamin Lee Whorf

    Before we begin, I asked ChatGPT to rewrite this article at a 4th grade reading level. You can read the result here.

    Artificial? Yes. Intelligent? Not even close. It is not without reason things like ChatGPT are called “AI” or “Artificial Intelligence.” We humans have a propensity for anthropomorphizing – attributing human characteristics to – things that are not human. Thus, if we are told something is intelligent – let’s say a very large computer system we can submit questions to and get answers from – we look for intelligence in that thing. And if that thing is trained on our own language and art and mathematics and code, it will appear to us as intelligent because its training materials came from intelligent beings: us ourselves.

    “Artificial Intelligence” is a clever marketing term for computer models designed to appear intelligent even though they are not.

    So, as we crash headfirst into the AI present and future, we need to reset our mental model before we start believing these things we call “Artificial Intelligences” are actually intelligent (again, they are not).

    Tools and Materials

    I propose we all start thinking of these things we call “AI” as tools and materials. Because that’s what they are and that’s how we’ll end up using them.

    Sometimes we’ll use them as tools the same way we use our phones and computers and the apps on them as tools. Sometimes we’ll use them and what they produce as materials the same way we use printed fabrics and code snippets to create things. And sometimes we’ll use them as both tools and materials the same way we use word processing applications first as a tool with which we write a body of text and then a material as the thesaurus function helps us use more fanciful words and phrases.

    Here are some basic examples to help you build the mental model:

    AI as a tool performs a task for us:

    • Fill out tax forms, write contracts and legal documents.
    • Summarize text, rewrite text to a specific reading level.
    • Write code.
    • Online shopping, including booking flights, hotels, etc.
    • Any interaction with a customer service representative (CSR).
    • Magic eraser for images, video, and audio.

    AI as a material generates something for us:

    • Simple stories.
    • Plot lines for stories.
    • News articles and summaries.
    • Images and other art.
    • Variants of a layout, or a theme, or an image, or a painting.

    Thinking of AI as tools and materials rather than intelligent things with magical human-like powers is an essential mental shift as we figure out how to fit these things into our lives and our world. We have to move away from the linguistic trick their creators foisted upon us with their naming, and move towards the practical realities of what these things really are:

    AIs are if-this-then-that machines using enormously complex decision trees generated by ingesting all available writings, imagery, and other human-made materials and filtering that data through pattern-matching algorithms.

    They are regurgitation machines echoing our own works back to us.

    And just like we are drawn to our own image every time we pass a mirrored surface, we are drawn to the echoes of ourselves in the output of these machines.
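    To make the “regurgitation machine” idea concrete, here is a toy sketch: a bigram model that, for each word, simply emits the word that most often followed it in its training text. This is a deliberately crude caricature – real language models are vastly more sophisticated – but the principle of echoing statistical patterns found in the training data is the same:

```javascript
// Toy "regurgitation machine": counts which word follows which in the
// training text, then always emits the most frequent follower.
// A radically simplified caricature, NOT how real language models work.
function buildBigrams(text) {
  const counts = new Map();
  const words = text.toLowerCase().split(/\s+/);
  for (let i = 0; i < words.length - 1; i++) {
    const cur = words[i];
    const next = words[i + 1];
    if (!counts.has(cur)) counts.set(cur, new Map());
    const followers = counts.get(cur);
    followers.set(next, (followers.get(next) || 0) + 1);
  }
  return counts;
}

function generate(counts, start, length) {
  const out = [start];
  let cur = start;
  for (let i = 0; i < length; i++) {
    const followers = counts.get(cur);
    if (!followers) break; // nothing ever followed this word: stop
    // Pick the statistically most frequent follower -- nothing but an
    // echo of whatever dominated the training data.
    cur = [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
    out.push(cur);
  }
  return out.join(" ");
}

const model = buildBigrams("the cat sat on the mat the cat sat up");
const echoed = generate(model, "mat", 3); // "mat the cat sat"
```

    Every phrase it produces is stitched together from word pairs it has already seen. It cannot say anything its training text did not, in fragments, already say.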

    Shallow Work and Human Creativity

    Asked for one word to describe AIs, my immediate answer is “shallow.” You’ve probably felt this yourself without being able to put your finger on it. Let me explain:

    There is a bland uniformity to AI output. It’s easiest to notice in generative AI images. Once you’ve been exposed to enough of them, they start taking on a very specific “AI-ness.” For all their variety, there is something recognizable about them – some defining feature that sets them apart from what we recognize as human-made images. That thing is shallowness.

    AIs are conservative in the sense they conserve and repeat what already exists. They don’t come up with anything new. They are also elitist in the sense they lean towards what is predominant, what there is more of. They are swayed by trends and popularity and amplify whatever majority opinion they find in their training data.

    This makes their output bland and uniform and shallow, like a drunk first-year philosophy student at a bar: the initial conversation may be interesting, but after a few minutes you notice there’s little substance behind the bravado. I’ve been that drunk first-year philosophy student so I know what I’m talking about.

    This means while AIs are great at doing shallow rote work, they have no ability to bring anything new to the table. They lack creativity and ingenuity and lateral thinking skills because these skills require intelligence. And AIs are not intelligent; they just play intelligent on TV.

    Will an AI take my job?

    Our instinctual response to any new technology is “will it take my job?” It’s a valid question: Jobs are essential for us to be able to make a living in this free-market capitalist delusion we call “modern society,” yet job creators have a tendency to let go of expensive human workers if they can replace them with less expensive alternatives like self-checkout kiosks that constantly need to be reset by a staff member because you put the banana in the bagging area before you chose whether to donate $2 to a children’s charity, or automated “voice assistants” that never have the answers to your customer service questions and only pass you to an actual human once you’ve repeated the correct incantation of profanity (try it, it totally works!).

    So now that we have these things some clever marketing people have told us to call “AI,” are they coming for your job? Well, that depends:

    If your job is shallow and constitutes mainly rote work, there’s a good chance an AI will enter your life very soon – as in within months – and become part of the toolkit you use to get your job done quicker. And if it turns out that AI can be trained to do your job without your intervention (by having you use it and thereby training it), there’s a non-zero chance it will eventually replace you. That chance hinges more on corporate greed than it does AI ability though.

    If your job involves any type of creative, or deep, or lateral, or organizational, or original, or challenging, or novel thinking, AI will not take your job because AI can’t do any of those things. You’ll still work with AI – probably within months – and the AI may alleviate you of a lot of the rote work you are currently doing that takes your attention away from what you were actually hired to do – but the AI is unlikely to replace you. Unless corporate greed gets in the way. Which it often does because of the aforementioned free-market capitalist delusion we call “modern society.”

    What we all have to come to terms with today is we’re long past the point of no return when it comes to AI. While technology is not inevitable, technology often becomes so entrenched it is impossible to … un-entrench it. That’s where we are with AI. No matter where you live and what you do for work, for school, or in your own time, you’re already interacting with AIs in more ways than you can imagine. And these AIs are going to become part of your work, your school, and your home life whether you want them or not.

    Our job now is to talk to one another about what role these things called “AI” are going to play in our lives. How do we use them in ways that don’t take jobs away from the humans who need them the most – the historically marginalized and excluded people who tend to hold jobs comprising mainly shallow rote work? How do we build them in ways that don’t cannibalize the creative works of artists and writers and coders and teachers? How do we incorporate AI into education to improve learning outcomes for students and build a more informed and skilled populace? How do we wrench control over our AI future from the surveillance capitalists and longtermists currently building the world to their libertarian techno-utopian visions?

    How do we use AI and all technology to create human flourishing and build futures in which we all have the capabilities to be and do what we have reason to value?

    If we don’t talk about the future, the future becomes something that happens to us. Let’s have this conversation.

    Cross-posted to LinkedIn.

  • Tools and Materials: A Mental Model for AI – the ChatGPT 4th grade reading level version

    Note to the reader: I asked ChatGPT to rewrite my article “Tools and Materials: A Mental Model for AI” at a 4th grade reading level. The article below is the unedited output:

    Sometimes, we think of things called “AI” or “Artificial Intelligence” as being intelligent. But they’re not really. They’re just computers that can answer questions. We humans have a tendency to give human characteristics to things that aren’t human. So if we think something is intelligent, like a big computer system, we might see it as being smart. But really, it’s just using the things we’ve taught it, like our language, art, and math.

    That’s why it’s important to change the way we think about AI. Instead of thinking of it as being intelligent, we should think of it as tools and materials. That’s what it really is and how we’ll use it.

    Sometimes, we’ll use AI like a tool, like we use our phones and computers. Other times, we’ll use what it makes as materials, like we use printed fabrics to create something. We might even use it as both a tool and material, like when we use a word processing app to write and then use the thesaurus function to make the writing more fancy.

    Here are some examples of how we can use AI:

    • As a tool to fill out tax forms, write contracts, or create code
    • As a tool to summarize text or rewrite it for a specific reading level
    • As a tool for online shopping, like booking flights and hotels
    • As a tool to talk to customer service representatives
    • As a tool to clean up images, video, and audio
    • As a material to make simple stories, plot lines, news articles, and images
    • As a material to create variations of a layout, theme, or painting

    AI is really just a bunch of if-then statements and very complex decision-making based on patterns in the data it’s been given. It’s like a machine that repeats what we’ve already made. And just like we’re drawn to our own reflection in a mirror, we’re drawn to the things AI makes that are like us.

    But there’s a problem with AI. It’s shallow. Have you ever noticed that AI images and things it makes all kind of look the same? That’s because AI is shallow. It doesn’t come up with anything new. It just repeats what it’s already seen. It also likes what’s popular and trendy, and it amplifies the things it sees a lot of in its training data. This makes the things it makes feel boring and shallow, like a student who’s had too much to drink talking about philosophy.

    Human creativity is different. It’s deep. It comes up with new ideas and approaches things in unique ways. AI can’t do that. So we need to use it as a tool and material, not as a replacement for human creativity.