Make Bell Labs an internet museum

I wrote an op-ed for NJ.com and the Star-Ledger in New Jersey proposing that the soon-empty Bell Labs should become a Museum and School of the Internet. Here, for those outside the Garden State, is the text:

Bell Labs, the historic headwaters of so many inventions that now define our digital age, is closing in Murray Hill, its latest owners moving to more modern headquarters in New Brunswick. The Labs should be preserved as a historic site and more. I propose that Bell Labs be opened to the public as a museum and school of the internet.

The internet would not be possible without the technologies forged at Bell Labs: the transistor, the laser, information theory, Unix, communications satellites, fiber optics, advances in chip design, cellular phones, compression, microphones, talkies, the first digital art, and artificial intelligence — not to mention, of course, many advances in networks and the telephone, including the precursor to the device we all carry and communicate with today: the Picturephone, displayed as a futuristic fantasy at the 1964 World’s Fair.

There is no museum of the internet. Silicon Valley has its Computer History Museum. New York has museums for television and the moving image. Massachusetts boasts a charming Museum of Printing. Search Google for a museum of the internet and you’ll find amusing digital artifacts, but nowhere to immerse oneself in and study this immensely influential institution in society.

Where better to house a museum devoted to the internet than New Jersey, home not only of Bell Labs but also at one time the headquarters of the communications empire, AT&T, our Ma Bell?

I remember taking a field trip to Bell Labs soon after this web site, NJ.com, started in 1995. I was an executive of NJ.com’s parent company, Advance. My fellow editors and I felt we were on the sharp edge of the future in bringing news online.

We thought that earned us kinship with the invention of that future that went on at Bell Labs, so we arranged a visit to the awe-inspiring building designed by Stephen F. Voorhees and opened in 1941. The halls were haunted with genius: lab after lab with benches and blackboards and history within. We must not lose that history.

We also must not lose the history of the internet as it passes us by in present tense. In researching my book, “The Gutenberg Parenthesis: The Age of Print and its Lessons for the Age of the Internet,” I was shocked to discover that there was not a discipline devoted to studying the history and influence of print and the book until Elizabeth Eisenstein wrote her seminal work, “The Printing Press as an Agent of Change,” in 1979, a half-millennium after Gutenberg. We must not wait so long to preserve memories and study the importance of the net in our lives.

The old Bell Labs could be more than a museum, preserving and explaining the advances that led to the internet. It could be a school. After leaving Advance in 2006, I became a journalism professor at CUNY’s Newmark School of Journalism, from which I am retiring.

I am less interested now in studying journalism than in the greater, all-enveloping subject: the internet. My dream is to start a new educational program in Internet Studies, to bring the humanities and social sciences to research the internet, for it is much more than a technology; it is a human network that reflects both human accomplishment and human failure.

Imagine if Bell Labs were a place where scholars and students in many disciplines — technologies, yes, but also anthropology, sociology, psychology, history, ethics, economics, community studies, design — could gather to teach and learn, discuss and research.

Imagine, too, if a New Jersey university could use the space for classes and events.

There is a model for this in New Jersey in what Montclair State University is doing in Paterson, developing and operating a museum devoted to the history of Negro League baseball in the historic Hinchliffe Stadium. This is the kind of university-community collaboration that could enrich the space of Bell Labs with energy and life.

There is some delicious irony in proposing that the internet be memorialized in what was once an AT&T facility, for the old telephone company resisted the arrival of the internet, hoping we would pay by the minute for long-distance calls forever.

In 1997, David Isenberg, a 12-year veteran of Bell Labs, wrote an infamous memo telling his bosses they were wrong to build intelligent networks and should instead learn the value of the stupid network that anyone could connect to: the internet.

Isenberg’s web site says the memo “was received with acclaim everywhere in the global telecommunications community with one exception — at AT&T itself! So Isenberg left AT&T in 1998.”

How wonderful if, in the end, Bell Labs could claim to become a forever home for that network that has changed the world.

In the echo chamber

Well, that was surreal. I testified in a hearing about AI and the future of journalism held by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Here is my written testimony and here’s the Reader’s Digest version in my opening remarks:

It was a privilege and honor to be invited to air my views on technology and the news. I went in knowing I had a role to play, as the odd man out. The other witnesses were lobbyists for the newspaper/magazine and broadcast industries and the CEO of a major magazine company. The staff knew I would present an alternative perspective. My fellow panelists noted before we sat down — nicely — that they disagreed with my written testimony. Job done. There was little opportunity to disagree in the hearing, for one speaks only when spoken to.

What struck me about the experience is not surprising: They call the internet an echo chamber. But, of course, there’s no greater echo chamber than Congress: lobbyists and legislators agreeing with each other about the laws they write and promote together. That’s what I witnessed in the hearing in a few key areas:

Licensing: The industry people and the politicians all took as gospel the idea that AI companies should have to license and pay for every bit of media content they use. 

I disagree. I draw the analogy to what happened when radio started. Newspapers tried everything to keep radio out of news. In the end, to this day, radio rips and reads newspapers, taking in and repurposing information. That’s to the benefit of an informed society.

Why shouldn’t AI have the same right? I ask. Some have objected to my metaphor: Yes, I know, AI is a program and the machine doesn’t read or learn or have rights any more than a broadcast tower can listen and speak and vote. I spoke metaphorically, for if I had instead argued that, say, Google or Meta has a right to read and learn, that would have opened up a whole can of PR worms. The point is obvious, though: If AI creators would be required by law to license *everything* they use, that grants them lesser rights than media — including journalists, who, let’s be clear, read, learn from, and repurpose information from each other and from sources every day. 

I think there’s a difference in using content to train a model versus producing output. It’s one matter for large language models to be taught the relationship of, say, the words “White” and “House.” I say that is fair and transformative use. But it’s a fair discussion to separate out questions of proper acquisition and terms of use when an application quotes from copyrighted material from behind a paywall in its output. The magazine executive cleverly conflated training and output, saying *any* use required licensing and payment. I believe that sets a dangerous precedent for news media itself. 

If licensing and payment are required for all use of all content, then I say the doctrine of fair use could be eviscerated. The senators argued just the opposite, saying that if fair use is expanded, copyright becomes meaningless. We disagree. 

JCPA: The so-called Journalism Competition and Preservation Act is a darling of many members of the committee. Like Canada’s disastrous Bill C-18 and Australia’s corrupt News Media Bargaining Code — which the senators and the lobbyists think are wonderful — the JCPA would allow large news organizations (those that earn more than $100,000 a year, leaving out countless small, local enterprises) to sidestep antitrust and gang together and force platforms to “negotiate” for the right to link to their content. It’s legislated blackmail. I didn’t have the chance to say that. Instead, the lobbyists and legislators all agreed how much they love the bill and can’t wait to try again to pass it. 

Section 230: Members of the committee also want to pass legislation to exclude generative AI from the protections of Section 230, which enables public discourse online by protecting platforms from liability for what users say there while also allowing companies to moderate what is said. The chair said no witness in this series of hearings on AI has disagreed. I had the opportunity to say that he has found his first disagreement.

I always worry about attempts to slice away Section 230’s protections like a deli bologna. But more to the point, I tried to explain that there is nuance in deciding where liability should lie. In the beginning of print, printers were held liable — burned, beheaded, and behanded — for what came off their presses; then booksellers were responsible for what they sold; until ultimately authors were held responsible — which, some say, was the birth of the idea of authorship. 

When I attended a World Economic Forum AI governance summit, there was much discussion about these questions in relation to AI. Holding the models liable for everything that could be done with them would, in my view, be like blaming the printing press for what is put on and what comes off it. At the event, some said responsibility should lie at the application level. That could be true if, for example, Michael Cohen was misled by Google when it placed Bard next to search, letting him believe it would act like search and giving him bogus case citations instead. I would say that responsibility generally lies with the user, the person who instructs the program to say something bad or who uses the program’s output without checking it, as Cohen did. There is nuance.

Deep fakery: There was also some discussion of the machine being used to fool people and whether, in the example used, Meta should be held responsible and expected to verify and take down a fake video of someone made with AI — or else be sued. As ever, I caution against legislating official truth.  

The most amusing moment in the hearing was when the senator from Tennessee complained that media are liberal and AI is liberal and for proof she said that if one asks ChatGPT to write a poem praising Donald Trump, it will refuse. But it would write a poem praising Joe Biden and she proceeded to read it to me. I said it was bad poetry. (BTW, she’s right: both ChatGPT and Bard won’t sing the praises of Trump but will say nice things about Biden. I’ll leave the discussion about so-called guardrails to another day.)

It was a fascinating experience. I was honored to be included. 

For the sake of contrast, in the morning before the hearing, I called Sven Størmer Thaulow, chief data and technology officer for Schibsted, the much-admired (and properly so) news and media company of Scandinavia. Last summer, Thaulow called for Norwegian media companies to contribute their content freely to make a Norwegian-language large language model. “The response,” the company said, “was overwhelmingly positive.” I wanted to hear more. 

Thaulow explained that they are examining the opportunities for a native-language LLM in two phases: first research, then commercialization. In the research phase now, working with universities, they want to see whether a native model beats an English-language adaptation, and in their benchmark tests, it does. As a media company, Schibsted has also experimented with using generative AI to allow readers to query its database of gadget reviews in conversation, rather than just searching — something I wish US news organizations would do: Instead of complaining about the technology, use it to explore new opportunities.

Media companies contribute their content to the research. A national organization is making a blanket deal and individual companies are free to opt out. Norway being Norway — sane and smart — 90 percent of its books are already digitized and the project may test whether adding them will improve the model’s performance. If it does, they and government will deal with compensation then. 

All of this is before the commercial phase. When that comes, they will have to grapple with fair shares of value. 

How much more sensible this approach is than what we see in the US, where technology companies and media companies face off, with Capitol Hill as their field of play, each side trying to play the refs there. The AI companies, to my mind, rushed their services to market without sufficient research about impact and harm, misleading users (like hapless Michael Cohen) about their capabilities. Media companies rushed their lobbyists to Congress to cash in the political capital earned through journalism to seek protectionism and favors from the politicians their journalists are supposed to cover, independently. Politicians use legislation to curry favor in turn with powerful and rich industries. 

Why can’t we be more like Norway?

Journalism and AI

Here are my written remarks for a hearing on AI and the future of journalism for the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, on January 10, 2024.

I have been a journalist for fifty years and a journalism professor for the last eighteen.

  1. History

I would like to begin with three lessons on the history of news and copyright, which I learned researching my book, The Gutenberg Parenthesis: The Age of Print and its Lessons for the Age of the Internet (Bloomsbury, 2023):

First, America’s 1790 Copyright Act covered only charts, maps, and books. The New York Times’ suit against OpenAI claims that, “Since our nation’s founding, strong copyright protection has empowered those who gather and report news to secure the fruits of their labor and investment.” In truth, newspapers were not covered in the statute until 1909 and even then, according to Will Slauter, author of Who Owns the News: A History of Copyright (Stanford, 2019), there was debate over whether to include news articles, for they were the products of the institution more than an author. 

Second, the Post Office Act of 1792 allowed newspapers to exchange copies for free, enabling journalists with the literal title of “scissors editor” to copy and reprint each other’s articles, with the explicit intent to create a network for news, and with it a nation. 

Third, exactly a century ago, when print media faced their first competitor — radio — newspapers were hostile in their reception. Publishers strong-armed broadcasters into signing the 1933 Biltmore Agreement by threatening not to print program listings. The agreement limited radio to two news updates a day, without advertising; required radio to buy its news from newspapers’ wire services; and even forbade on-air commentators from discussing any event until twelve hours afterwards — a so-called “hot news doctrine,” which the Associated Press has since tried to resurrect. Newspapers lobbied to keep radio reporters out of the Congressional press galleries. They also lobbied for radio to be regulated, carving an exception to the First Amendment’s protections of freedom of expression and the press. 

Publishers accused radio — just as they have since accused television and the internet and AI — of stealing “their” content, audience, and revenue, as if each had been granted them by royal privilege. In scholar Gwenyth Jackaway’s words, publishers “warned that the values of democracy and the survival of our political system” would be endangered by radio. That sounds much like the sacred rhetoric in The Times’ OpenAI suit: “Independent journalism is vital to our democracy. It is also increasingly rare and valuable.” 

To this day, journalists — whether on radio or at The New York Times — read, learn from, and repurpose facts and knowledge gained from the work of fellow journalists. Without that assured freedom, newspapers and news on television and radio and online could not function. The real question at hand is whether artificial intelligence should have the same right that journalists and we all have: the right to read, the right to learn, the right to use information once known. If it is deprived of such rights, what might we lose?

  2. Opportunities

Rather than dwelling on a battle of old technology and titans versus new, I prefer to focus here on the good that might come from news collaborating with this new technology. 

First, though, a caveat: I argue it is irresponsible to use large language models where facts matter, for we know that LLMs have no sense of fact; they only predict words. News companies, including CNET, G/O Media, and Gannett, have misstepped, using the technology to manufacture articles at scale, strewn with errors. I covered the show-cause hearing for a New York attorney who (like President Trump’s former counsel, Michael Cohen) used an LLM to list case citations. Federal District Judge P. Kevin Castel made clear that the problem was not the technology but its misuse by humans. Lawyers and journalists alike must exercise caution in using generative AI to do their work. 

Having said that, AI presents many intriguing possibilities for news and media. For example:

AI has proven to be excellent at translation. News organizations could use it to present their news internationally.

Large language models are good at summarizing a limited corpus of text. This is what Google’s NotebookLM does, helping writers organize their research. 

AI can analyze more text than any one reporter. I brainstormed with an editor about having citizens record 100 school-board meetings so the technology could transcribe them and then answer questions about how many boards are discussing, say, banning books. 

I am fascinated with the idea that AI could extend literacy, helping people who are intimidated by writing tell and illustrate their own stories.

A task force of academics from the Modern Language Association concluded AI in the classroom could help students with word play, analyzing writing styles, overcoming writers’ block, and stimulating discussion. 

AI also enables anyone to write computer code. As an AI executive told me in a podcast about AI that I cohost, “English majors are taking the world back… The hottest programming language on planet Earth right now is English.” 

Because LLMs are in essence a concordance of all available language online, I hope to see scholars examine them to study society’s biases and clichés.

And I see opportunities for publishers to put large language models in front of their content to allow readers to enter into dialog with that content, asking their own questions and creating new subscription benefits. I know an entrepreneur who is building such a business. 

Note that in Norway, the country’s largest and most prestigious publisher, Schibsted, is leading the way to build a Norwegian-language large language model and is urging all publishers to contribute content. In the US, Aimee Rinehart, an executive student of mine at CUNY who works on AI at the Associated Press, is also studying the possibility of an LLM for the news industry. 

  3. Risks

All these opportunities and more are put at risk if we fence off the open internet into private fortresses.

Common Crawl is a foundation that for sixteen years has archived the entire web: 250 billion pages, 10 petabytes of text made available to scholars for free, yielding 10,000 research papers. I am disturbed to learn that The New York Times has demanded that the entire history of its content — that which was freely available — be erased. Personally, when I learned that my books were included in the Books3 data set used to train large language models, I was delighted, for I write not only to make money but also to spread ideas. 

What happens to our information ecosystem when all authoritative news retreats behind paywalls, available only to privileged citizens and giant corporations able to pay for it? What happens to our democracy when all that is left out in public for free — to inform both citizens and machines — is propaganda, disinformation, conspiracies, spam, and lies? I well understand the economic plight of my industry, for I direct a Center for Entrepreneurial Journalism. But I also say we must have a discussion about journalism’s moral obligation to an informed society and about the right not only to speak but to learn.

  4. Copyright

And we need to talk about reimagining copyright in this age of change, starting with a discussion about generative AI as fair and transformative use. When the Copyright Office sought opinions on artificial intelligence and copyright (Docket 2023-6), I responded with concern about an idea the Office raised of establishing compulsory licensing schemes for training data. Technology companies already offer simple opt-out mechanisms (see: robots.txt).
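For readers unfamiliar with the mechanism: a publisher can decline AI crawlers with a few lines in the robots.txt file at its site’s root. As a hedged sketch — GPTBot is OpenAI’s stated training crawler and CCBot is Common Crawl’s, though site owners should confirm the current user-agent strings with each company:

```
# robots.txt — decline AI training crawlers, leave the rest of the web open
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers (search engines, archives) remain welcome
User-agent: *
Disallow:
```

Compliance is voluntary, but the major AI companies have publicly committed to honoring these directives — which is the point: an opt-out regime already exists without compulsory licensing.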

Copyright at its origin in the Statute of Anne of 1710 was enacted not to protect creators, as is commonly asserted. Instead, it was passed at the demand of booksellers and publishers to establish a marketplace for creativity as a tradeable asset. Our concepts of creativity-as-content and content-as-property have their roots in copyright. 

Now along come machines — large language models and generative AI — that manufacture endless content. University of Maryland Professor Matthew Kirschenbaum warns of what he calls “the Textpocalypse.” Artificial intelligence commodifies the idea of content, even devalues it. I welcome this. For I hope it might drive journalists to understand that their value is not in manufacturing the commodity, content. Instead, they must see journalism as a service to help citizens inform public discourse and improve their communities. 

In 2012, I led a series of discussions with multiple stakeholders — media executives, creative artists, policymakers — for a project with the World Economic Forum on rethinking intellectual property and the support of creativity in the digital age. In the safe space of Davos, even media executives would concede that copyright is outmoded. Out of this work, I conceived of a framework I call “creditright,” which I’ve written is “the right to receive credit for contributions to a chain of collaborative inspiration, creation, and recommendation of creative work. Creditright would permit the behaviors we want to encourage to be recognized and rewarded. Those behaviors might include inspiring a work, creating that work, remixing it, collaborating in it, performing it, promoting it. The rewards might be payment or merely credit as its own reward.” It is just one idea, intended to spark discussion. 

Publishers constantly try to extend copyright’s restrictions in their favor, arguing that platforms owe them the advertising revenue they lost when their customers fled for better, competitive deals online. This began in 2013 with German publishers lobbying for a Leistungsschutzrecht, or ancillary copyright, which inspired further protectionist legislation, including Spain’s link tax, articles 15 and 17 of the EU’s Copyright Directive, Australia’s News Media Bargaining Code, and most recently Canada’s Bill C-18, which requires large platforms — namely Google and Facebook — to negotiate with publishers for the right to link to their news. To gain an exemption from the law, Google agreed to pay about $75 million to publishers — generous, but hardly enough to save the industry. Meta decided instead to take down links to news rather than being forced to pay to link. That is Meta’s right under Canada’s Charter of Rights and Freedoms, for compelled speech is not free speech. 

In this process, lobbyists for Canada’s publishers insisted that their headlines were valuable while Meta’s links were not. The nonmarket intervention of C-18 sided with the publishers. But as it turned out, when those links disappeared, Facebook lost no traffic while publishers lost up to a third of theirs. The market spoke: Links are valuable. Legislation to restrict linking would break the internet for all. 

I fear that the proposed Journalism Competition and Preservation Act (JCPA) and the California Journalism Protection Act (CJPA) could have similar effect here. As a journalist, I must say that I am offended to see publishers lobby for protectionist legislation, trading on the political capital earned through journalism. The news should remain independent of — not beholden to — the public officials it covers. I worry that publishers will attempt to extend copyright to their benefit not only with search and social platforms but now with AI companies, disadvantaging new and small competitors in an act of regulatory capture. 

  5. Support for innovation

The answer for both technology and journalism is to support innovation. That means enabling open-source development, encouraging both AI models and data — such as that offered by Common Crawl — to be shared freely. 

Rather than protecting the big, old newspaper chains — many of them now controlled by hedge funds, which will not invest or innovate in news — it is better to nurture new competition. Take, for example, the 450 members of the New Jersey News Commons, which I helped start a decade ago at Montclair State University; and the 475 members of the Local Independent Online News Publishers; the 425 members of the Institute for Nonprofit News; and the 4,000 members of the News Product Alliance, which I also helped start at CUNY. This is where innovation in news is occurring: bottom-up, grass-roots efforts emergent from communities. 

There are many movements to rebuild journalism. I helped develop one: a degree program called Engagement Journalism. Others include Solutions Journalism, Constructive Journalism, Reparative Journalism, Dialog Journalism, and Collaborative Journalism. What they share is an ethic of first listening to communities and their needs. 

In my upcoming book, The Web We Weave, I ask technologists, scholars, media, users, and governments to enter into covenants of mutual obligation for the future of the internet and, by extension, AI. 

There I propose that you, as government, promise first to protect the rights of speech and assembly made possible by the internet. Base decisions that affect internet rights on rational proof of harms, not protectionism for threatened industries and not media’s moral panic. Do not splinter the internet along national borders. And encourage and enable new competition and openness rather than entrenching incumbent interests through regulatory capture. 

In short, I seek a Hippocratic Oath for the internet: First, do no harm.

A journalism of belief and belonging


I increasingly come to see that we are not in a crisis of information and disinformation or even of misguided beliefs, but instead of belonging. I wonder how to reimagine journalism to address this plight.

Belonging is a good. The danger is in not belonging, and filling that void with malign substitutes for true community: joining a cult of personality or conspiracies, an insurrection, or some nihilistic, depraved perversion of a religion.

What role might journalism play to fill that void instead with conversation, connection, understanding, collaboration, enlightened values, and education?

Hannah Arendt teaches us that amid the thrall and threat of totalitarianism, some people belong to nothing, and so they are vulnerable to the lure of joining a noxious cause manufactured of fear. In The Gutenberg Parenthesis, I quote her:

“But totalitarian domination as a form of government is new in that it is not content with this isolation but destroys private life as well. It bases itself on loneliness, on the experience of not belonging to the world at all, which is among the most radical and desperate experiences of man.” For Arendt, to be public is to be whole, to be private is to be deprived; to be without both is to be uprooted, vulnerable, and alone.

Arendt found in Nazi and Soviet history “such unexpected and unpredicted phenomena as the radical loss of self-interest, the cynical or bored indifference in the face of death or other personal catastrophes, the passionate inclination toward the most abstract notions as guides for life, and the general contempt for even the most obvious rules of common sense.” The lessons for these populist times are undeniable as Trump’s base shows a loss of self-interest (what did he accomplish for them over the rich?), an indifference to death (defiantly burning masks at COVID superspreader rallies), a passionate inclination toward abstract notions (are abortion and guns truly more important to their everyday lives than jobs and health?), and contempt for common sense (see: science denial and conspiracy theories).

Later in my book, I call upon the theories of sociologist William Kornhauser, who contends that the solution to such alienated mass society is to support a pluralistic society of belonging, in which people connect with communities — they “possess multiple commitments to diverse and autonomous groups” — and are less vulnerable to, or at least feel a competitive tug away from, the siren call of populist movements. I write:

A pluralistic society is marked by belonging — to families, tribes (in the best and most supportive sense, which Sebastian Junger defines as “the people you feel compelled to share the last of your food with”), clubs, congregations, organizations, communities. A pluralistic society is more secure and less vulnerable to domination as a whole, as a mass. In such associations we do not give up our individuality; we gain individual identity by connecting, gathering, organizing, and acting with others who share our interests, needs, goals, desires, or circumstances. When that occurs, in Kornhauser’s view, elites become accessible as “competition among independent groups opens many channels of communication and power.” Then, too, “the autonomous man respects himself as an individual, experiencing himself as the bearer of his own power and as having the capacity to determine his life and to affect the lives of his fellows.” In short, a pluralistic society is a diverse society.

Of course, it is diversity that most threatens the autocrats, populists, racists, and fascists who in turn imperil our nation and democracy around the world. That is why they condemn “identity politics.” The internet, I theorize, enabled voices too long not represented in so-called mainstream — i.e., old, white — mass media to at last be heard. That is what the would-be tyrants and cultists use to stir fear and recruit their rudderless hordes, preaching that the Others — Blacks, Hispanics, LGBTQ people, immigrants, “woke mobs,” and lately trans people — will come steal their jobs, homes, history, security, society, and even children.

Journalism brings information to the fight for their very souls. We stand outside reactionary revival tents with slips of paper bearing facts, thinking that can compete with the heart-thumping hymns of fear within.

In 2022 in Paris, a group of scholars gathered at the International Communication Association for a preconference that asked, “What comes after disinformation studies?” In a paper reporting on the discussion, Théophile Lenoir and Chris Anderson conclude: “Fact-checking our way out of politics will not work.”

Journalists want to believe that we are in a crisis of disinformation because they think the cure must be what they offer: information. The mania around disinformation after 2016 led to what Joe Bernstein in Harper’s calls Big Disinfo, a veritable industry devoted to dis-dis-information. I was part of that effort, having raised money after 2016 to support such projects. I’m certainly not opposed to reporting information and checking facts! But we need to concede that these are insufficient ends.

If the problem is not disinformation, then it must be belief, we say, pointing to opinion polls in which shocking numbers of citizens say they subscribe to insane ideas and conspiracy theories. Regarding such polls, I will forever return to the lessons of the late James Carey: “Public life started to evaporate with the emergence of the public opinion industry and the apparatus of polling. Polling … was an attempt to simulate public opinion in order to prevent an authentic public opinion from forming.”

Polls are fatally and fundamentally flawed because they reflect the biases of the pollsters, who insist on sorting us into their buckets, leaving no room for nuance or context. Worse than that, polls have become a mechanism for signaling belonging in some rebellious, defiant cause. Writes Reece Peck, another scholar at the ICA Paris preconference, “Political scientists have come to understand that voting is less a cool-headed deliberation on how specific policies help or hurt the voter’s material economic interest and more an occasion for expressing the voter’s cultural attachments and group loyalties.” Fringe opinions are a means for these citizens to tell pollsters, media, and authority: ‘You can’t sort us. We’ll sort ourselves.’ As researchers Michael Bang Petersen, Mathias Osmundsen, and Kevin Arceneaux have found, people who circulate hostile political information do so out of a “Need for Chaos,” a desire to “‘burn down’ the entire political order in the hope they gain status in the process.” In the hope, that is, that they will find a place to belong in their posse, their institutional insurrection. See again: Arendt.

I believe there is only one true hope to cure vulnerability to such performative belief: education. By that I do not mean media- or news-literacy, the hubristic assertion that if only people understood how journalism works and consumed its products, all would be well. I mean education, period: in the humanities, the social sciences, and science. As I write in my upcoming book, The Web We Weave, I taught in a public university because I believe education is our best hope. But universities — particularly their humanities departments — are being starved of resources and attacked by populist, right-wing forces that view education as their enemy because it is through education that they lose voters and power. This is where our underlying crisis and solution lie.

What can journalism do? I am not sure.

In any discussion of the crisis in democracy, someone will pipe up with banalities about the internet segregating us in filter bubbles and echo chambers. But research by Petersen and Axel Bruns shows that — as Petersen says — “the biggest echo chamber that we all live in is the one we live in in our everyday lives,” in the towns, jobs, and congregations we seek out to be around people like us. Journalist Bill Bishop said it well in the subtitle of his 2008 book, The Big Sort: “The clustering of like-minded America is tearing us apart.” The internet doesn’t cause filter bubbles, it punctures them, confronting people with those they are told to fear. The internet does not cause division. It exposes it.

Thus I have argued that one mission for journalism (and, for that matter, social networks) should be to make strangers less strange. At the Tow-Knight Center, I funded research to that end by Caroline Murray and Talia Stroud, who found 25 inspiring projects in newsrooms attempting to do just that; look at their list. I find that work heartening, yet still insufficient.

Journalism is flawed at its core. It is built to seek out, highlight, and exploit — and cause — conflict. Political journalism is engineered to predict, which does nothing to inform the electorate. Instead, in the words of Jay Rosen, it should focus on what is at stake in the choices citizens make. Journalism has done tremendous harm to countless communities that have never trusted its institutions. Journalism — just like the internet companies it criticizes — is built on the economics of attention.

I do not, of course, reject all of journalism. Yes, I criticize The Times and The Post because they have been our biggest and best and we need them to be better. I also praise excellent reporting there and support it with my subscriptions. I think it is important to understand our history sans the sacred rhetoric publishers use to lobby politicians and courts for protection against new competitors, from radio to television to the internet to AI. James Gordon Bennett, the early newspaper titan said to be the father of modern journalism — thus mass media — once said to an upstart in the field: “Young man, ‘to instruct the people,’ as you say, is not the mission of journalism. That mission, if journalism has any, is to startle or amuse.” There are our roots in mass media. Hear Carl Lindstrom writing in The Fading American Newspaper:

In its hunger for circulation it has sought status as a mass medium to the point where it is a hollow attempt to be all things to all men. It has scorned competition as an evil, and cultivated monopoly as a virtue. While claiming a holy mission with constitutional protection, it has left great vacuums of journalistic obligation into which competing mediums have moved with impunity and public acceptance. Today journalism is on the move at an ever-accelerating rate with the daily press showing no apparent concern. This indifference is in accord with its incapacity for relentless self-examination. In this vacant place self-delusion has built itself a nest.

He wrote that in 1960.

There are movements to address the mission void in present-day journalism. I helped start one in Engagement Journalism, with my colleague Carrie Brown. There is Solutions Journalism, Collaborative Journalism, Constructive Journalism, Reparative Journalism, Dialog Journalism, Deliberative Journalism … and others. I would like to bring these various ’ives together in a room to see what links them. I think it will be this: They start with listening.

Journalism is terrible at listening. We train reporters to hit the streets with premade narratives and predictions, looking for quotes to fulfill them. In Engagement Journalism, we teach journalists instead to hear the communities they serve. That does not mean we must listen to every cultist’s crazy theories and fears concocted for media attention. Journalists give them plenty of oxygen already. No, I mean that we need to allow people to be heard regarding their real lives and actual circumstances and concerns. That is a necessary start.

How do we then reimagine journalism built around helping people understand that they can belong to positive communities of understanding and empathy, they can build bridges to other communities through listening and learning, they can find fulfillment in their own identities without excluding or denigrating the identities of others?

A few years ago, I participated in valuable diversity training. In one exercise, our trainer told each of us to reflect on our own cultures. I demurred, saying that I had no culture as I am of boring, generic, white-bread, American, suburban stock. She told me I was wrong. Upon reflection, I saw that she was right. She forced me to recognize the power of the cultural default. I’ve learned that lesson, too, from André Brock, whom I quote in The Gutenberg Parenthesis:

In Distributed Blackness, his trenchant analysis of African American cybercultures … Georgia Tech Professor André Brock Jr. sought to understand Black Twitter on its own terms, not in relation to mass and white media, not in the context of aiming to be heard there. “My claim is ecological: Black folk have made the internet a ‘Black space’ whose contours have become visible through sociality and distributed digital practice while also decentering whiteness as the default internet identity.” That is to say that it is necessary to acknowledge the essential whiteness of mass media as well as the internet. “Despite protestations about color-blindness or neutrality,” Brock wrote, “the internet should be understood as an enactment of whiteness through the interpretive flexibility of whiteness as information. By this, I mean that white folks’ communications, letters, and works of art are rarely understood as white; instead, they become universal and are understood as ‘communication,’ ‘literature,’ and ‘art.’”

Brock helped me see where journalism is “whiteness as information.” So have Wesley Lowery and Lewis Raven Wallace in their criticism of journalistic objectivity (works I assigned and taught every year).

Brock also made me see how the internet has helped me belong. I long was a loner; journalists fancy themselves that: separate, apart (and let’s admit it, above). I live in a town disconnected from many of my neighbors. But on the internet, I have found myself connected with many communities.

Every year in the Engagement Journalism class I had the privilege of teaching with Carrie Brown, we would ask students what communities they belong to. The answers inevitably began with the obvious: “I’m a student.” “I live in Brooklyn.” But then someone might say, “I struggle with mental health issues.” A few students later in the circle, another student would echo that. Thus a connection was made, empathy established, a community enabled. Not all communities are bounded by geography; online, they might exist in any definition, anywhere.

Such conversation and connection can occur only in an environment of trust, but today we live in an environment of distrust — and that is the fault, in great measure, of media and politics manufacturing disconnection and fear. That is what journalism must fight against: a darkness not of information but of the soul. I return to Lenoir and Anderson in Paris:

Technical solutions to political problems are bound to fail. Historical, structural, and political inequality — and especially race, ethnicity, and social difference — needs to be at the forefront of our understanding of politics and, indeed, disinformation. The challenge for researchers, and our field broadly, is to engage in politics by generating ideas and crafting narratives that make people want to live in a more just world, not just a more truthful one.

The same should be said of journalism. How might we do that?

Journalists might see ourselves as conveners of conversation (see, for example, Spaceship Media).

We might see ourselves as educators, defenders of — yes, advocates for — enlightened values of reason, liberty, equality, tolerance, and progress. It is not enough to expose inequality; we must defend equality.

We might see it as our task to build bridges among communities — to make strangers less strange, to help people escape the filter bubbles in their real lives.

We might understand the imperative to fight — not neutrally amplify — the dark forces of hate, fear, and fascism.

We must pay reparations to the communities our institutions have damaged by finally assuring that their stories are told — by themselves — and heard.

We could reject the economics of attention and scale of mass media and rebuild journalism at human scale, valuing our work not through our metrics of audience but instead as the public values us.

As I leave my last job and the last year, I am reflecting on where to turn my attention next. I spent a dozen years at the end of my time in the industry working to make journalism digital, a task that should be self-evident but, even so, is far from done. I spent eighteen years in a university exploring new business models for news, though I fear that trying to save established journalism ends in protectionism. My proudest work has been teaching and learning Engagement Journalism and it is there — in listening to communities — where I wish to devote myself.

I also believe it is critical that we understand journalism now in the context of a connected world and call upon other disciplines — history, ethics, psychology, community studies, anthropology, sociology — to understand the internet not as a technology but as a human network. That is the subject of my next book. That is what I have been calling Internet Studies: examining how we interact now and what reimagined and reformed institutions we need to help us do that better. Somewhere in there, I believe, is the essence of a new journalism, a journalism of education, a journalism of belonging.

Artificial general bullshit

I began writing this as a report from a useful conference on AI that I just attended, where experts and representatives of concerned sectors of society had serious discussion about the risks, benefits, and governance of the technology.

But, of course, I first must deal with the ludicrous news playing out now at leading AI generator, OpenAI. So let me begin by saying that in my view, the company is pure bullshit. Sam Altman’s contention that they are building “artificial general intelligence” or “artificial superintelligence”: Bullshit. Board members’ cult of effective altruism and AI doomerism: Bullshit. The output of ChatGPT: Bullshit. It’s all hallucinations: Pure bullshit. I even fear that the discussion of AI safety in relation to OpenAI could be bullshit. 

This is not to say that AI and its capabilities, as practiced there and elsewhere, are not something to be taken seriously, even with wonder. And we should take seriously discussion of AI impact and safety, its speed of development and adoption, and its governance.

These topics were on the agenda of the AI conference I attended at the San Francisco outpost of the World Economic Forum (Davos). Snipe if you will at this fraternity of the rich and powerful, but this is one thing the Forum does consistently well: convene multistakeholder conversations about important topics, because people accept its invitations. At this meeting, there were representatives of technology companies, governments, and the academy. I sat next to an honest-to-God philosopher who is leading a program in ethical AI. At last.

I knew I was in the right place when I heard AGI brought up and quickly dismissed. Artificial general intelligence is the purported goal of OpenAI and other boys in the AI fraternity: that they are so smart they can build a machine that is smarter than all of us, even them — a machine so powerful it could destroy humankind unless we listen to its creators. I call bullshit. 

In the public portion of the conference, panel moderator Ian Bremmer said he had no interest in discussing AGI. I smiled. Andrew Ng, cofounder of Google Brain and Coursera, said he finds claims of imminent AGI doom “vague and fluffy…. I can’t prove that AI won’t wipe us out any more than I could prove that radio waves won’t attract aliens that would wipe us out.” Gary Marcus — a welcome voice of sanity in discourse about AI — talked of offering Elon Musk a $100,000 bet to make good on his prediction that AGI will arrive by 2029. What exactly Musk means by that is no clearer than anything he says. Keep in mind that Musk has also said that by now cars would drive themselves and Twitter would be successful and he would soon (not soon enough) be on his way to Mars. One participant not only doubted the arrival of AGI but said large language models might prove to be a parlor trick.

With that BS out of the way, this turned out to be a practical meeting, intended to bring various perspectives together to begin to formulate frameworks for discussion of responsible use of AI. The first results will be published from the mountaintop in January.

I joined a breakout session that had its own breakouts (life is breakouts all the way down). The circle I sat in was charged with outlining benefits and risks of generative AI. Its first order of business was to question the assignment and insist on addressing AI as a whole. The group emphasized that neither benefits nor risks are universal, as each will fall unevenly on different populations: individuals, organizations (companies to universities), communities, sectors, and society. They did agree on a framework for that impact, asserting that for some, AI could:

  • raise the floor (allowing people to engage in new skills and tasks to which they might not have had access — e.g., coding computers or creating illustrations);
  • scale (that is, enabling people and organizations to take on certain tasks much more efficiently); and
  • raise the ceiling (performing tasks — such as analyzing protein folding — that heretofore were not attainable by humans alone). 

On the negative side, the group said AI would:

  • bring economic hardship; 
  • enable evil at scale (from exploding disinformation to inventing new diseases); and
  • for some, result in a loss of purpose or identity (see the programmer who laments in The New Yorker that “bodies of knowledge and skills that have traditionally taken lifetimes to master are being swallowed at a gulp. Coding has always felt to me like an endlessly deep and rich domain. Now I find myself wanting to write a eulogy for it”).

This is not to say that the effects of AI will fit neatly into such a grid, for what is wondrous for one can be dreadful for another. But this gives us a way to begin to define responsible deployment. While we were debating in our circle, other groups at the meeting tackled questions of technology and governance. 

There has been a slew of guidelines for responsible AI — most lately, the White House issued its executive order, and tech companies, eager to play a game of regulatory catch-up, are writing their own. Here are Google’s, these are Microsoft’s, and Meta has its own pillars. OpenAI has had a charter built on its hubristic presumption that it is building AGI. Anthropic is crowdsourcing a “constitution” for AI, filled with vague generalities about AI characterized as “reliable,” “honest,” “truth,” “good,” and “fair.” (I challenge either an algorithm or a court to define and enforce the terms.) Meanwhile, the EU, hoping to lead in regulation if not technology, is writing its AI Act.

Rather than principles or statutes chiseled permanently on tablets, I say we need ongoing discussion to react to rapid development and changing impact; to consider unintended consequences (of both the technology and regulation of it); and to make use of what I hope will be copious research. That is what WEF’s AI Governance Alliance says it will do. 

As I argue in The Gutenberg Parenthesis regarding the internet — and print — the full effect of a new technology can take generations to be realized. The timetable that matters is not so much invention and development but adaptation. As I will argue in my next book, The Web We Weave: Why We Must Reclaim the Internet from Moguls, Misanthropes, and Moral Panic (out from Basic Books next year), this debate must occur less in the context of technology than of humanity, which is why the humanities and social sciences must be in the circle.

At the meeting, there was much discussion about where we are in the timeline of AI’s gestation. Most agreed that there is no distinction between generative AI and AI. Generative AI looks different — momentous, even — to those of us not deeply engaged in the technology because now, suddenly, the program speaks — and, more importantly, can compute — our language. Code was a language; now language is code. Some said that AI is progressing from its beginning, with predictive capabilities, to its current generative abilities, and next will come autonomous agents — as with the GPT store Altman announced only a week before. Before allowing AI agents to go off on their own, we must trust them. 

That leads to the question of safety. One participant at WEF quoted Altman in a recent interview, saying that the company’s mission is to figure out how to make AGI, then figure out how to make it safe, and then figure out its benefits. This, the participant said, is the wrong order. What we need is not to make AI safe but to make safe AI. There was much talk about “shifting left” — not a political manifesto but instead a promise to move safety, transparency, and ethics to the start of the development process, rather than coming to them as afterthoughts. I, too, will salute that flag, but….

I have come to believe there is no sure way to guarantee safety with the use of this new technology — as became all too clear to princes and popes at the birth of print. “What is safe enough?” asked one participant. “You give me a model that can do anything, I can’t answer your question.” We talk of requiring AI companies to build in guardrails. But it is impossible for any designer, no matter how smart, to anticipate every nefarious use that every malign actor could invent, let alone every unintended consequence that could arise.

That doesn’t mean we should not try to build safety into the technology. Nor does it mean that we should not use the technology. It just means that we must be realistic in our expectations, not about the technology but about our fellow humans. Have we not learned by now that some people will always find new ways to do bad things? It is their behavior more than technology that laws regulate. As another participant said, a machine that is trained to imitate human linguistic behavior is fundamentally unsafe. See: print. 

So do we hold the toolmaker responsible for what users have it do? I know, this is the endless argument we have about whether guns (and cars and chemicals and nukes) kill people or the people who wield them do. Laws are about fixing responsibility, thus liability. This is the same discussion we are having about Section 230: whom do we blame for “harmful speech” — those who say it, those who carry it, those who believe it? Should we hold the makers of the AI models themselves responsible for everything anyone does with them, as is being discussed in Europe? That is unrealistic. Should we instead hold to account users — like the schmuck lawyer who used ChatGPT to write his brief — when they might not know that the technology or its makers are lying to them? That could be unfair. There was much discussion at this meeting about regulating not the technology itself but its applications.

The most contentious issue at the event was whether large language models should be open-sourced. Ng said he can’t believe that he is having to work so hard to convince governments not to outlaw open source — as is also being bandied about in the EU. A good number of people in the room — I include myself among them — believe AI models must be open to provide competition to the big companies like OpenAI, Microsoft, and Google, which now control the technology; access to the technology for researchers and countries that otherwise could not afford to use it; and a transparent means to audit compliance with regulations and safety. Others fear that bad actors will take open-source models, such as Meta’s LLaMA, and detour around guardrails. But see the prior discussion about the ultimate effectiveness of such guardrails.

I hope that not only AI models but also data sets used for training will be open-sourced and held in public commons. (Note the work of MLCommons, which I learned about at the meeting.) In my remarks to another breakout group about information integrity, I said I worried about our larger knowledge ecosystem when books, newspapers, and art are locked up by copyright behind paywalls, leaving machines to learn only from the crap that is free. Garbage in; garbage multiplied. 

At the event’s opening reception high above San Francisco in Salesforce headquarters, I met an executive from Norway who told me that his nation wants to build large language models in the Norwegian language. That is made possible because — this being clever Norway — all its books and newspapers from the past are already digitized, so the models can learn from them. Are publishers objecting? I asked. He thought my question odd; why would they? Indeed, see this announcement from much-admired Norwegian news publisher Schibsted: “At the Nordic Media Days in Bergen in May, [Schibsted Chief Data & Technology Officer Sven Størmer Thaulow] invited all media companies in Norway to contribute content to the work of building a solid Norwegian language model as a local alternative to ChatGPT. The response was overwhelmingly positive.” I say we need a similar discussion in the anglophone world about our responsibility to the health of the information ecosystem — not to submit to the control and contribute to the wealth of AI giants but instead to create a commons of mutual benefit and control.

At the closing of the WEF meeting, during a report-out from the breakout group working on governance (where there are breakout groups, there must be report-outs; it’s the law), one professor proposed that public education about AI is critical and media must play a role. I intervened (as we say in circles) and said that first journalists must be educated about AI because too much of their coverage amounts to moral panic (as in their prior panics about the telegraph, talkies, radio, TV, and video games). And too damned often, journalists quote the same voices — namely, the same boys who are making AI — instead of the scholars who study AI. The issue of The New Yorker I referenced above has yet another interview with former Google computer scientist Geoffrey Hinton, who has already been on 60 Minutes and everywhere.

Where are the authors of the Stochastic Parrots paper, former Google AI safety chiefs Timnit Gebru and Margaret Mitchell, along with linguists Emily Bender and Angelina McMillan-Major? Where are the women and scholars of color who have been warning of the present-tense costs and risks of AI, instead of the future-shock doomsaying of the AI boys? Where is Émile Torres, who studies the faux philosophies that guide AI’s proponents and doomsayers, which Torres and Gebru group under the acronym TESCREAL? (See the video below.)

The problem is that the press and policymakers alike are heeding the voices of the AI boys who are proponents of these philosophies instead of the scholars who hold them to account. The afore-fired Sam Altman gets invited to Congress. When UK PM Rishi Sunak held his AI summit, whom did he invite on stage but Elon Musk, the worst of them. Whom did Sunak appoint to his AI task force but another adherent of these philosophies. 

To learn more about TESCREAL, watch this conversation with Torres that Jason Howell and I had on our podcast, AI Inside, so we can separate the bullshit from the necessary discussion. This is why we need more meetings like the one WEF held, with stakeholders besides AI’s present proponents so we might debate the issues, the risks — and the benefits — they could bring.