Showing posts with label AI. Show all posts

Thursday, 12 March 2026

What if robots take all the jobs? Hint: They can't.

"People have it all wrong" about AI and robots, says philosopher Harry Binswanger. 
Robots are going to take your job? No doubt.


You may not keep this job. But your next one will pay so much more.  How can we know that?  Because, he argues, "We’re all going to get richer. The more that AI and robots can do for us, the richer we will get."

How so? Because AI and robots make everyone’s labour far more productive -- and the result will be more goods produced, and hence "more wealth in the whole economy."

More wealth means more savings. More savings means more investment. And "more investment means more goods produced, which means a drop in the cost of living, which means a rise in the standard of living."

But how can he be so sure that if your job is replaced you'll be able to find a new one and "take part in this bonanza?"

The temptation is to answer by finding things robots won’t ever be able to do. “Robots will never be great chefs.” “Robots will never be venture capitalists.” “Robots will never write a first-rate symphony.”

That’s irrelevant. The point is that even if AI and robots could do everything better than any human being, that would enhance, not undermine, the value of human labour.

Why? The explanation comes from applying here an important truth discovered two centuries ago. In 1817, the great English economist David Ricardo identified “The Law of Comparative Advantage.”
Ricardo's Law of Comparative Advantage explains that no matter how poor your country may be at producing stuff, if you and others each specialise in what you do best then, at the end of the day, we are all better off. It's best, for example, if Scotland trades whisky with France for claret and burgundy, rather than the other way around. ("It is the maxim of every prudent master of a family," explained Adam Smith, "never to attempt to make at home what it will cost him more to make than to buy.")

Equally, the best way for New Zealanders to get cars and electronics is not to try making cars and electronics ourselves, but to process grass into milk powder, meat and wool so that New Zealanders can trade for those fancy devices. And when we do, we're all better off. (If you're struggling with the concept, because it is remarkably subtle, P. J. O'Rourke's short explanation is one of the funniest on record, and undoubtedly the only one using Courtney Love to help explain things.)

Recognising that the self-same principle of Comparative Advantage applies between people just as it does between countries, economist Ludwig von Mises expanded Ricardo's Law to make it "one of the most beautiful laws of the universe." Calling it the Law of Association, he showed that specialisation allows even the less productive to benefit from working with the more productive -- or what his student George Reisman characterises as 'what the productive cleaner gains from the genius inventor.'

Even if the inventor can clean faster than a given cleaner, it still pays him to hire that cleaner because off-loading the cleaning work saves him time. He can then use that saved time in the area of his comparative advantage: inventing and selling more stuff.
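The inventor/cleaner arithmetic above can be sketched in a few lines. This is a hypothetical illustration only -- all the numbers (hourly rates, rooms, wages) are invented for the sake of the example:

```python
# Comparative advantage, illustrated: the inventor is better at BOTH
# inventing and cleaning (absolute advantage), yet hiring the slower
# cleaner still leaves everyone better off.
HOURS = 8            # length of the working day
ROOMS = 8            # rooms that need cleaning
INVENT_VALUE = 500   # $ the inventor earns per hour of inventing (invented number)
CLEANER_WAGE = 25    # $ per hour paid to the cleaner (invented number)

# Option 1: the inventor cleans his own rooms at 2 rooms/hr,
# leaving only the remaining hours for inventing.
hours_cleaning = ROOMS / 2
value_solo = (HOURS - hours_cleaning) * INVENT_VALUE

# Option 2: he hires the cleaner (1 room/hr, the full day) and invents all day.
value_specialised = HOURS * INVENT_VALUE - ROOMS * CLEANER_WAGE

print(value_solo, value_specialised)  # 2000.0 3800
```

Both parties gain: the cleaner earns $200 he otherwise wouldn't, and the inventor nets $3,800 instead of $2,000 -- even though the inventor is faster at both tasks.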
Likewise, even if there comes a time when the robots can do everything better and faster than human beings, [even] more wealth will be produced if robots and humans each specialise in what they do best. Super-robots would produce more for us if we save them from having to do things that are less productive [for them].
(Of course we won’t be trading with robots: robots own nothing. Robots are owned by people, and those people will be paid for selling robots or for renting them out, just as you can rent power tools from Home Depot today.)

The Law of Comparative Advantage means humans will never run out of productive work to do. There will always be tasks that you don’t want to waste your rented or owned robots’ time in doing.

If you’ve got a robot building you a swimming pool, you don’t want him to stop to cook you dinner.

A chainsaw is a lot more efficient than a knife at cutting. But you don’t use a chainsaw to slice a loaf of bread. Particularly not if that chainsaw is being used by a robot to clear a place for a tennis court in your backyard.

So, rather than panic over “the rise of the machines,” let’s bear in mind the Law of Comparative Advantage ....
And let's recognise that "even with science-fictional super-robots, there will still be money changing hands and a price-system, just as now. You will still be paid for working in the field of your own comparative advantage."
New kinds of jobs will appear, as they always have when technology advances. Ironically, most of the jobs people are afraid of losing -- such as programming jobs or truck-driving jobs -- were themselves created by technological advances. There used to be an American saying: “Adapt or die.” Having the same kind of job as your father and grandfather did is not the American dream.

What new types of job will be created? I can no more project that than a man in 1956 could have projected that today there would be jobs in something called “social media,” or that money could be made by driving for Uber and by renting out living space through Airbnb.

The robots will make work much easier, more interesting, and much better paid.

Prepare to be enriched.

Friday, 27 February 2026

"So welcome to the lovely new economy where being human actually matters."

 

"This is the new secret strategy in the arts, and it’s built on the simplest thing you can imagine -- namely, existing as a human being. ...

"You see the same thing in media right now, where livestreaming is taking off. ...

"This return to human contact is happening everywhere, not just media and the arts. ... I see it myself in store after store. People will wait in line for flesh-and-blood clerks, instead of checking out faster at the do-it-yourself counter.

"But this isn’t happenstance -- it’s a sign of the times....

"As AI customer service becomes more pervasive, the luxury brands will survive by offering this human touch. ...

"Even tech companies [like Spotify, Apple Music, Bandcamp, and QoBuz] are figuring this out. ...

"Welcome to the new world of flesh-and-blood concierges and curators. That’s now the ultimate status symbol. ... In fact, the Silicon Valley elites forcing tech down our throats will only make us hate cold, sterile tech more than ever. And they won’t fix that problem by training AI to pretend to be human. That just adds insult to injury.

"This might even be the hot new career path -- readymade for curators, concierges, caregivers, conversationalists, and other people who love people. As the old pop song anticipated, they might just end up being the happiest people of them all.

"So welcome to the lovely new economy where being human actually matters. Go ahead, try it out. Be cool -- be a human. All the bots in botdom will never be able to take that away from you."

~ Ted Gioia from his post 'The New Cool Thing: Being Human'

Thursday, 19 February 2026

It's (still) all about the entrepreneur

"The 'AI will code for us' idea always skips over the 90% of the job that isn't coding.

"The real work is translating a vague business need into a precise, testable system. It's architecting something that won't fall over in 6 months. It's debugging a problem that only appears under a specific, bizarre set of conditions.

"Even with a perfect code generator, you still need someone who understands the problem deeply enough to tell it what to build. That part isn't getting automated."

~ Selim Erünkut commenting on the alleged obsolescence of coding [Emphasis mine.]

Friday, 13 February 2026

'The Reverse-Centaur’s Guide to AI'


"Start with what a reverse centaur is. In automation theory, a 'centaur' is a person who is assisted by a machine. You're a human head being carried around on a tireless robot body. Driving a car makes you a centaur, and so does using autocomplete.

"And obviously, a reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.

"Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras that monitor the driver's eyes and take points off if the driver looks in a proscribed direction, that monitor the driver's mouth because singing isn't allowed on the job, and that rat the driver out to the boss if they don't make quota.

"The driver is in that van because the van can't drive itself and can't get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn't just use the driver. The van uses the driver up.

"Obviously, it's nice to be a centaur, and it's horrible to be a reverse centaur. There are lots of AI tools that are potentially very centaur-like, but my thesis is that these tools are created and funded for the express purpose of creating reverse-centaurs, which is something none of us want to be. ...

"Tech bosses want us to believe that there is only one way a technology can be used. ... The promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job ... Now, if AI could do your job, this would still be a problem. We'd have to figure out what to do with all these technologically unemployed people.

"But AI can't do your job. It can help you do your job, but that doesn't mean it's going to save anyone money."
~ Cory Doctorow from his speech 'The Reverse-Centaur’s Guide to Criticising AI'

RELATED:

"You don't work less. You just work the same amount or even more."
~ Frank Landymore, 'Researchers Studied What Happens When Workplaces Seriously Embrace AI, and the Results May Make You Nervous'

Saturday, 7 February 2026

Just how reliable are AIs? A historian's examination.

A historian and cultural commentator has been examining the reliability of AIs for historical research, with thoughts on the future of AI & us. She summarises what she's discovered below, including answers to such questions as:

  • Which AIs got the highest scores overall?
  • Which AIs got the highest scores by topic: scientific/technical, historical context, creativity, historical and legal?
  • Unavoidable methodological issues with AIs
  • Lessons on use of AIs for historical research
  • Will AIs surpass and replace humans?

Dianne Durante has written several books, and maintains a historical blog. What she has used AIs for in the past, "and will still use" them for, are very specific questions:

how to trouble-shoot the document feeder on an HP 8000-series printer/scanner, where to find Gaussian blur on the Adobe InDesign menu, what stretches to use for a tight IT band, how much time to allow for a visit to the Kingsley Plantation, or what the Leopards Eating People’s Faces Party is. An AI [she says] gives me answers much, much faster than I could get them by wading through Google search results. ... 
As a historian [however], I tend to need answers to much more obscure and complex questions. When I started using Grok for such questions last summer, it gave me egregiously incorrect answers. (See Part 1 of this series.)

So I set out to discover:
Are AIs reliable for providing historical facts? Can I trust them to accurately deliver all the relevant details on matters such as Chladni figures and the Proclamation of 1763? Should I assume I always need to do further research? Should I avoid AIs altogether, and spend my research time looking for other sources?

Are AIs useful for going beyond facts to analysis? For example, are they good at providing interpretation, overviews, and/or inductive conclusions, such as a list of the most significant artworks of the 18th century, or of the major events of the 1790s?

Are some AIs better than others, in general or on specific topics?

Head to her many earlier posts (starting back in xxx 2025) to see her detailed methodology and results.

So, how did they all do?  In summary, based on the average of the scores from all 7 of her questions:

Winner: Grok, with 70%. That’s better than the others, but if you were using Grok to write your answers on an exam consisting of my 7 questions, you’d barely scrape through with a C. [That caveat is important.]

Loser: Perplexity, with 38%.

Mid-range: ChatGPT (50%), Claude (48%), and Deepseek (56%).

There was no way to ask Britannica or Wikipedia several of the questions, so I didn’t give them an overall score.

Results by category:

  • Best for Scientific and Technical: Grok and Deepseek (100% and 95% respectively; average = 81%)
  • Best for Historical Context: Claude and Deepseek (60% and 58%; average = 51%)
  • Best for Creativity: Perplexity (85%; average = 76%)
  • Best for Historical and Legal: Grok (70%; average = 52%)

Head to her post to see what specific questions she asked, and why. She has a few thoughts ("If you have limited time for research, don’t spend every minute of it with AIs"), and a reminder:

    LLMs don’t think. All the AIs I looked at except Britannica’s Chatbot are large-language models, a.k.a. LLMs (see Part 3). An LLM is fed an enormous amount of data so it can generate human-like language by predicting what words will follow a particular word or phrase. An AI doesn’t receive your question, gather data, observe how it relates what it already knows, analyze it according to scientific or philosophical principles, and then consider the most effective way to present the information to you. The AI just predicts what might come next. That’s why it can slide seamlessly from truth to hallucination. An AI will repeat any errors in the data fed into it, be it from major media, random posts on the internet, or Wikipedia. An AI is the ultimate in second-handedness.

    So do not assume accuracy in your answers, especially if it's a topic you don't know much about.
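Her point that "the AI just predicts what might come next" can be sketched with a toy word-frequency model. This is purely illustrative -- real LLMs use neural networks trained on vast token corpora, not frequency tables, but the core task of next-token prediction is the same:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most frequent follower -- no understanding, just statistics.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

A model like this will happily emit whatever continuation its counts favour, true or not -- the scaled-down version of the slide "from truth to hallucination" she describes.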

    I like her conclusion:

    Re AIs becoming indistinguishable from humans, and then making humans obsolete: if philosophers, biologists, psychologists, et al., can’t explain the mechanisms of free will, the procedure for induction, etc., then we cannot program a computer to do those things. Until and unless we can, AIs are not human-like in the ways that matter most, and cannot replace humans.

     Head to her post to read it all.

    Tuesday, 27 January 2026

    "Visual Elevator Music"

    "Generative AI was trained on centuries of art and writing produced by humans. ... When generative AI was left to its own devices [however], its outputs landed on a set of generic images – what researchers called ‘visual elevator music’ ... pleasant and polished, yet devoid of any real meaning. ...

    "The findings ... show that the default behaviour of these systems is to compress meaning toward what is most familiar, recognisable and easy to regenerate ... [resulting in a form of] cultural stagnation. ...

    "It’s the slow flattening of creativity into polished sameness.

    "AI is like a robot that learns by looking at lots of pictures, stories and songs. But it mostly remembers the ones it sees the most. So when it makes new things, it keeps making very similar stuff again and again.

    "It’s why so much AI imagery looks the same.

    "The algorithm just doesn’t know how to be weird and creative like humans do."

    ~ Ahmed Elgammal

    Thursday, 18 December 2025

    "The AI era is one of mythology ... a dynasty of bullshit"

    "We are in the dynasty of bullshit, a deceptive epoch where analysts and journalists who are ostensibly burdened with telling the truth feel the need to continue pushing the Gospel According To Jensen. When all of this collapses there must be a reckoning with how little effort was made to truly investigate the things that executives are saying on the television, in press releases, in earnings filings and even on social media, all because the market consensus demanded that The Number Must Continue Going Up.

    "The AI era is one of mythology, where billions in GPUs are bought to create supply for imaginary demand, where software is sold based on things it cannot reliably do, where companies that burn billions of dollars are rewarded with glitzy headlines and not an ounce of cynicism, and where those that have pushed back against it have been treated with more skepticism and ire than those who would benefit the most from the propagation of propaganda and outright lies."
    ~ Ed Zitron from his post 'Mythbusters - AI Edition'

    Thursday, 6 November 2025

    "An AI developer who trains on pirated or paywalled material can’t launder infringement through the word 'training' "

    "Every few months, an AI company wins a procedural round in court or secures a sympathetic sound bite about 'transformative fair use.' Within hours, the headlines declare a new doctrine of spin: the right to train AI on copyrighted works. But let’s be clear — no such right exists and probably never will. That doesn’t mean they won’t keep trying. ...
    "Fair use is a case-by-case defence to copyright infringement, not a standing permission slip. ... But AI companies are trying to convert that flexible doctrine into a brand new safe harbour: a default assumption that all training is fair use unless proven otherwise. ...

    "That’s exactly backward. The Copyright Office’s own report makes clear that the legality of training depends on how the data was acquired and what the model does with it. A developer who trains on pirated or paywalled material -- like Anthropic, Meta, and probably all of them, to one degree or another -- can’t launder infringement through the word 'training.' "

    Friday, 31 October 2025

    "This is [is this?] the sound of a bubble popping."

    "Mark Zuckerberg had exciting news to share yesterday. His company Meta had finished a great quarter—and would continue to increase spending on AI.

    "He said that yesterday afternoon. But when the market opened this morning, Meta shares dropped more than $80. That’s $200 billion in market cap wiped out in an instant. 

    Meta’s share price this week 
    "Why don’t investors like AI? Only a few months ago, companies saw their shares skyrocket when they made AI investments.

    "In September, Oracle’s stock shot up 36% in just one day after announcing a huge deal with OpenAI. The share price increase was enough to make the company’s founder Larry Ellison the richest man in the world.

    "But then investors changed their mind. Since that big day, Oracle shares have fallen $60. Larry Ellison is no longer the richest man in the world.

    "This is [is this?] the sound of a bubble popping."

    ~ Ted Gioia from his post 'The Bubble Just Burst'
    "Mark Zuckerberg’s Meta is spending untold billions on infrastructure and top talent for its AI ambitions.

    "In fact, the CEO announced during the company’s earnings call on Wednesday, Meta will be spending between $70 billion and $72 billion on AI this year — up from its previous estimate of $66 billion to $72 billion, as CNBC reports.

    "Unsurprisingly, that cash bonfire isn’t going over well with investors. Meta’s shares slid by more than 11 percent on Thursday, indicating widespread skepticism about the company’s ability to stop bleeding billions of dollars as it races to keep up with the AI industry’s ever-escalating expenditure commitments.

    "That’s particularly striking because the drop comes in spite of Meta’s revenues exceeding Wall Street’s estimates. In other words, out of control AI spending is starting to rattle investors. 'The total dollar spend is just kind of what hangs us up a little bit,' [said one]...

    "The AI industry is seemingly approaching a major inflection point, with Meta competitors Alphabet and Microsoft tripling down on AI by increasing their planned spending to even loftier heights, fuelling fears of a growing AI bubble that could take down the entire US economy with it if it ever pops."

    Thursday, 23 October 2025

    "That’s the real lesson. Market power in technology is temporary because the underlying technology isn’t."

    "History, it turns out, didn’t end for Big Tech. ...

    "Take Alphabet, which took plenty of flak for its control of the search engine market. Dominance, sure. But forever dominance?

    "OpenAI’s new AI-enabled Atlas browser directly threatens Google’s Chrome browser, as well as its search business, by replacing the URL bar with conversational AI. What Washington lawyers couldn’t do to Google, technological competition just might. ...

    "The cycle endures: IBM begat Wintel, which begat Google—and now OpenAI is queuing up next. These 'forever companies' are discovering that in tech, forever lasts about 20 years, and the bill for staying that long runs to roughly half a trillion bucks a year. ...

    "That’s the real lesson. Market power in technology is temporary because the underlying technology isn’t.

    "Even if these winners of the past are also the winners in futurity, they will find themselves utterly transformed by the AI revolution as they provide users with new kinds of value."

    Tuesday, 30 September 2025

    "...a deeper issue about how our educational institutions have prepared (or misprepared) us for life in the adult world."

    "[T]his, to me, seems related to a deeper issue about how we feel our educational institutions have prepared (or misprepared) us for life in the adult world. ...

    "The truth is that, when people complain about the 'Gen Z stare,' 'quiet cracking,' and Gen Z being difficult to work with, those issues started long before the workplace. We went through school feeling like we were being taught one set of rules that applied to our pedagogy and another that belonged to the actual world and workforce. 

    "All my life I’ve surrounded myself with ambitious people, but I noticed that their ambitions often didn’t align with the hoops they were expected to jump through. One thing I noticed about my friends in high school and college is that they were always half-assing assignments and quizzes so they could do something that they felt mattered. They were exhausted. They might sleep through math class so they could teach underprivileged children robotics or skip meetings so they could build their nonprofits. In that environment, it seemed very natural to look for shortcuts ...

    "I went to [uni] to learn, but the same dynamics repeated themselves there. In my classes, I was often left unchallenged. At one point, I worked three part-time jobs and ran three student organisations alongside the maximum number of credit hours. I wouldn’t have done all of that if my classes occupied and challenged me appropriately. ... I was bored by it; professors didn’t emphasise that the essays were important to our education or that they were excited to read them, and I knew I could easily spend that time elsewhere, building things in the world that I felt mattered.

    "Frankly, it’s obvious that many teachers and professors don’t believe their own bullshit anymore. It was an open secret that we weren’t getting a good education in college, and the students were not entirely to blame. Everything became about meeting the next deadline, passing the class, and getting the credits. The professors were often buried in deadlines for their latest 'publish or perish' project. I don’t think anyone ever asked if I learned anything. .... One professor even had us assign our own grades, which he said proudly that he never rounded down.

    "The education system hasn’t measured real learning in a long time. In academia, measures have become goals—or in the case of the professor who had us assign our own grades, measures were thrown out entirely. For generations, students have been telling professors what they want to hear, but it’s been getting worse ...

    "AI is disruptive. It’s moving much faster than any of us can keep up with. But it’s also an invitation to get serious about our measures of success. ...

    "Convince students that their ideas matter; ask them what they think; and listen, not for a correct answer, but an original one. Teach them how to build research projects and business plans from scratch. Ask them to provide feedback and revise their work more than once. Take this as the opportunity to see where the education system is failing and to embark on wholehearted reform."

    Monday, 29 September 2025

    Have you been workslopped?

    "A confusing contradiction is unfolding in companies embracing generative AI tools: while workers are largely following mandates to embrace the technology, few are seeing it create real value. Consider, for instance, that the number of companies with fully AI-led processes nearly doubled last year, while AI use has likewise doubled at work since 2023. Yet a recent report from the MIT Media Lab found that 95% of organizations see no measurable return on their investment in these technologies. So much activity, so much enthusiasm, so little return. Why?

    "In collaboration with Stanford Social Media Lab, our research team at BetterUp Labs has identified one possible reason: Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers. On social media, which is increasingly clogged with low-quality AI-generated posts, this content is often referred to as 'AI slop.' In the context of work, we refer to this phenomenon as 'workslop.' We define workslop as AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.

    "Here’s how this happens. As AI tools become more accessible, workers are increasingly able to quickly produce polished output: well-formatted slides, long, structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code. But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver. [Just as over-use of acronyms will — Ed.]

    "If you have ever experienced this, you might recall the feeling of confusion after opening such a document, followed by frustration—'Wait, what is this exactly?'—before you begin to wonder if the sender simply used AI to generate large blocks of text instead of thinking it through. If this sounds familiar, you have been workslopped."
    ~ Kate Niederhoffer, Gabriella Rosen Kellerman, Angela Lee, Alex Liebscher, Kristina Rapuano & Jeffrey T. Hancock from their study summary: 'AI-Generated “Workslop” Is Destroying Productivity' [hat tip Gary Marcus]

    Thursday, 25 September 2025

    He's right, you know

    "Leading British artists including Mick Jagger, Kate Bush and Paul McCartney have urged [UK Prime Minister] Keir Starmer to stand up for creators’ human rights and protect their work ahead of a UK-US tech deal during Donald Trump’s visit.

    In a letter to the prime minister, they argued Labour had failed to defend artists’ basic rights by blocking attempts to force artificial intelligence firms to reveal what copyrighted material they have used in their systems. ...

    “ 'The government’s formal position has exhibited a shocking indifference to mass theft, and a complete unwillingness to enforce the existing law to uphold the human rights stipulated by the ICESCR, the Berne Convention and the ECHR,' said the letter. ...

    "Elton John, one of the letter’s signatories, said government proposals to let AI companies train their systems on copyright-protected work without permission 'leaves the door wide open for an artist’s life work to be stolen.' ”

    "Do I think it’s a good idea to scrap art history? No, I think it’s a terrible, tragic idea."

     

    "In a statement to The Post, Campion said art history was the only subject she looked forward to during sixth form (year 12), and that the subject was 'a crucial step' towards her creative life in film. It was at art school that Campion started making films.

    “ 'It was so helpful to discover I had visual acuity and I was actually good at something. Do I think it’s a good idea to scrap art history? No, I think it’s a terrible, tragic idea. Students like myself deserve a chance to discover themselves [and] find something they feel passionate about and can pursue to enrich their lives.'

    "Art history could lead to satisfying careers in architecture, interior design, graphic design, theatre, painting, art restoration, community art, photography or cinematography, Campion said.

    “ 'We are moving at rocket speed into a world of AI. How will future New Zealanders communicate with their AI bots if they have no general knowledge of art? ... It’s important to have a framework of knowledge in subjects to be able to drive AI.

    “ 'It is my hope the Government reverses this decision.' "

    ~ Jane Campion in the article 'Art history will no longer be a school subject in New Zealand'

    Saturday, 13 September 2025

    Never trust a lawyer ...

     ... even when they're (supposed to be) on your side.

    Ted Gioia has the breaking news:

    Authors win a big lawsuit against AI—but the judge says they may not be able to trust their own lawyers. 

    He explains that the high-tech plagiarism modus of these "large language models" (LLMs) simply means that the models are "trained" on thousands of books, and millions of articles and blog posts -- each written by an actual person. A person holding copyright in that work.

    So when authors, in a class action, won an ironclad case against AI company Anthropic for violating their copyrights ...

     "some thought that this might result in “more than a trillion dollars in damages.” That would put Anthropic in bankruptcy and send a message to the entire AI industry: Don’t mess with creators!

    Yay! 

    But ...

    Instead the lawyers negotiated a quick deal for $1.5 billion—and Anthropic didn’t even need to admit wrongdoing. But the penalty was so light that the judge has refused to accept it. Instead he expresses concern that the settlement will be forced “down the throat of authors.”

    How is this possible? Their own lawyers negotiated the deal.

    But listen to the judge. He admits that class members often “get the shaft” in situations like this. And he adds: “I have an uneasy feeling about hangers-on with all this money on the table.”

    Simply put, lawyers want their commission more than they care about their clients. Or their case.

    This is the sad reality of copyright litigation to protect human creators. My copyrights as an author have been violated and I don’t want a cash settlement—I want the stealing stopped. I want a Napster-style shutdown, and there’s legal precedent to support this. But what lawyer can I trust? They make money on a cash settlement, not on stopping AI use of my book.

    Expect to see similar settlements in music copyrights. A few people will get a nice payday, but nothing else will change.

    Wednesday, 3 September 2025

    AI's Bubble. Ready to burst yet?

    While politicians here in NZ bicker about who should get credit for an Amazon data centre that either is (or isn't) opening, over in the States they're already wondering whether these data centres are part of an AI bubble that's starting to show clear signs of being about to burst. 

    "Even OpenAI boss Sam Altman is now talking about an AI bubble," notes Ted Gioia. "Of course, he knows better than anyone because he is seeing it up close—the disappointing release of ChatGPT-5 played a key role in setting off the current turmoil."

    Consider this: 

    AI buildout is contributing more to measured US economic growth than all of consumer spending.

    I want you to look long and hard at this chart, and consider the implications.

    Another sign? Mark Zuckerberg just paid US$14 billion for a stake in Scale AI, the data-labelling startup that's never made a dollar.

    Meanwhile in the real world, McDonald’s CFO told Bloomberg that the company is struggling because many customers are now too poor to afford breakfast. And this isn’t some isolated anecdote—it’s a data-driven report from the biggest restaurant chain in the world. ...

    There’s a mismatch here between two visions of the emerging economy. So which one is real? Are we entering an AI-driven boom time like an out-of-control Monopoly game? Or will [Americans] be too broke to eat breakfast?
    There are signs, perhaps, that both are happening: businesses struggling and closing, unemployment and debt rising, customers at any price simply disappearing. And meanwhile, 
    • half the gains in the stock market come from betting on the shares of five companies, who are themselves betting everything on building out AI data centres
    • consumers, however, are spending so little that this "investment" spending on AI by just four CEOs (two of whom make their money mostly by selling ads) has totalled more over the last six months than all the spending by all those consumers
    • the energy grid simply can't support this growth in AI data centres, and there’s no indication that consumers are willing to pay for the enormous infrastructure. 
    That last is the biggest sign right there. 
    Fewer than 1% of ChatGPT users are paid business accounts. That total is no larger than the number of paid Substack subscribers (but what a difference in company valuation!).

    In fact, most of ChatGPT’s traffic disappears when students go on summer vacation.

    That tells you how wide the chasm is between reality and the crazy claims of AI fanboys—but many of them (I bet) are also reluctant to pay for AI. ... The tech simply doesn’t live up to the hype. The more people deal with it, the less they like it. That’s why AI companies must give it away (or bundle it into an already successful product) in order to gain any reasonable usage.

    So everywhere I go online, companies are touting free AI. That’s funny. It doesn’t fit the narrative of a transformative technology.
    "But even four billionaires can’t change reality," warns Gioia. 
    Yes, they are spending like drunken sailors, but that just makes the bubble bigger. It can’t stop it from bursting. The crazy level of investment only makes the eventual fallout all the worse.

    How much longer can it last? Maybe a few weeks or a few months or a few quarters. Billionaires often throw good money after bad. But the whole economy is fragile—or beyond fragile—right now. And that’s the bigger reality.

    By any reasonable measure, the current trend is unsustainable. And there’s one thing I know about unsustainable trends—there’s a day of reckoning, and it’s not a happy one for the people who caused it. But, even sadder, they take down a lot of others with them when the bubble bursts.
    Read the whole thing here. (NB: He's opened up the article from behind the paywall.)

    PS: How is this AI capital malinvestment even possible? Because of absurdly low interest rates set by the state's economic planners at the US Federal Reserve — rates that are so "economically absurd" they are only made possible "because the monetary fraudsters on the Fed's Open Market Committee (FOMC) had their big fat thumbs on the scales in the bond pits."
    And we do mean fraud: The Fed’s balance sheet rose by $1.2 trillion or 17% during the 12-month period ending on July 7, 2021, and at a time, as we will amplify below, when the Fed’s balance sheet should have actually grown by essentially zero.

    That is to say, the FOMC was buying government debt and GSE paper hand-over-fist with fiat credits snatched from thin digital air, thereby starkly falsifying yields and prices in the bond pits. There is not a chance in the hot place that tax-paying, real money savers left to their own devices would accept such niggardly real yields.
    David Stockman explains the Fed's fraud.

    Ludwig von Mises explains the inevitable results of malinvestment — "meaning bad investment in lines of production that would not otherwise take place."

    Tuesday, 12 August 2025

    "This case is of exceptional importance, addressing the legality of using copyrighted works for generative AI"

    "A single lawsuit raised by three authors over Anthropic's AI 'training' now threatens to 'financially ruin' the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement. ... [That's] 'up to seven million potential claimants, whose works span a century of publishing history,' each possibly triggering a $150,000 fine.

    "Confronted with such extreme potential damages, Anthropic may lose its rights to raise valid defenses of its AI training, deciding it would be more prudent to settle, the company argued. And that could set an alarming precedent, considering all the other lawsuits generative AI (GenAI) companies face over training on copyrighted materials ..."

    ~ Ashley Belanger from her article 'AI industry horrified to face largest copyright class action ever certified' [hat tip Artists Against Generative AI]

    Thursday, 7 August 2025

    "Is art produced by non-humans actually art?"

    Question for the Day, from Mark Silva:
    "Is art produced by non-humans actually art?  
    "I think not. 
    "I know it's often said that AI is just another tool, but it's more than that, it makes decisions..."

    Saturday, 19 July 2025

    "This is called failure. There’s no other name for it."

    "2025 has been the year of garbage culture. ...

    "But something has changed in the last few days. ...

    "[P]eople are disgusted, and finally pushing back. And they are doing so with such fervor that even the biggest AI companies are now getting nervous and pulling back. ...

    "I’m focused here on AI’s destructive impact on culture, but there are other signs that growing AI resistance is now forcing companies to reconsider their bot mania.

    "'An IBM survey of 2,000 chief executives found three out of four AI projects failed to show a return on investment, a remarkably high failure rate,' reports Andrew Orlowski. 'AI agents fail to complete the job successfully about 65 to 70 percent of the time, says a study by Carnegie Mellon University and Salesforce.'

    "He also shared the results of a devastating test that debunked AI’s status in its favorite field, namely writing code. This study reveals that software developers think they are operating 20% faster with AI, but they’re actually running 19% slower.

    "Some companies are bringing back human workers because AI can’t deliver positive results. Even AI researchers are now expressing skepticism. And only 30% of AI project leaders can say that their CEOs are happy with AI results.

    "This is called failure. There’s no other name for it."

    ~ Ted Gioia from his post 'We Are Winning!'