Benn Stancil:

That omniscient Oracle seems like basically what GPT is becoming? It obviously struggles with some stuff and isn't omniscient in a lot of ways and all of that, but it's basically 1) a giant database of everything ever written for which you can 2) write kind of generic queries like

SELECT summarize(themes) FROM books WHERE author = 'shakespeare'

It's not literally that, and you can't be that precise, and all those sorts of things, but if you squint, that seems roughly how it works?
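(To make that concrete: a rough sketch, assuming the OpenAI Python SDK and an illustrative model name, of how a "query" like that is really just a prompt under the hood:)

from openai import OpenAI

# Hypothetical helper: the SQL-ish query above collapses into a plain
# natural-language prompt. Assumes OPENAI_API_KEY is set; the model name
# is illustrative, not a claim about what actually backs ChatGPT.
client = OpenAI()

def query_oracle(select: str, source: str, where: str) -> str:
    """Fake 'SQL over everything ever written' -- really just one prompt."""
    prompt = f"From {source} where {where}: {select}."
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(query_oracle(
    select="summarize the major themes",
    source="the complete plays",
    where="the author is Shakespeare",
))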

anzabannanna:

> That omniscient Oracle seems like basically what GPT is becoming?

*Kiiiinda*....but this is a bit different from what I have in mind.

Regardless: ChatGPT and the others will be what they are, and ~everyone will have access to "it" (which overlooks that some people will have access to non-neutered versions, in addition to entirely novel models not available to the public).

And even in the best case, where these things really do turn out to be highly beneficial to humanity, everyone overlooks one problem (the main one): we are still stuck with all of the biological LLMs running loose on the planet, and if one thinks that a (neutered) silicon-based LLM will be able to coordinate these maniacs (especially when some of them secretly have their fingers on the scales), I think they are going to be severely disappointed.

> SELECT summarize(themes) FROM books WHERE author = 'shakespeare'

> It's not literally that, and you can't be that precise, and all those sorts of things, but if you squint, that seems roughly how it works?

Very much agree.....but you only get what it has to give....and what it has to give is a function of both what it was designed to give and what it is allowed to give (I assume you realize that reps from the various three-letter agencies will be well embedded within OpenAI in some manner by this point - it would be dereliction of (mostly undocumented) duty to do otherwise).

Having all of that power, plus something similarly powerful (that also addresses the biological AI problem, and is beyond the control of bad actors), seems like basic prudent gameplay strategy to me. God knows humanity needs someone on its side for a change.

Benn Stancil:

Yeah, so in this post (https://benn.substack.com/p/tribal-accelerationism), there was a third section that I cut that was related to this. Basically, another interpretation of the OpenAI board firing Altman is that it shows how OpenAI is a company controlled entirely by a very tiny group of people who can do very irrational things, in secret, without any explanation. In this case, the irrational thing they did was intentionally step on a rake, but it certainly could've been something more nefarious. And really, if we take anything away from this at all, it should be that: very large tech companies have the capability to concentrate enormous amounts of power in the hands of a half-dozen people. And for foundational model providers, that seems particularly fraught. From here https://benn.substack.com/p/the-public-imagination#:~:text=We%E2%80%99ll%20also%20have,do%20about%20it.:

We’ll also have to grapple with one very messy issue that cloud computing can ignore: AI is opinionated. Though today’s cloud providers have tremendous power, it’s almost entirely economic. Adam Selipsky and Thomas Kurian can extract rents, but EC2 and Google Compute Engine can’t outright manipulate us.

Public AI providers can do both. If nudging Facebook users towards more positive or negative content can change their emotions, imagine the effect of public AI providers turning up the temperature on their core models. That single parameter could control how polite or rude we are to each other in billions of emails and text messages. Other parameters could turn every company’s support staff into agents of chaos, or embed political bias in every generated piece of text.

It’s a terrifying amount of power—far bigger than Elon Musk controlling our Twitter feeds, far more direct than TikTok putting its thumb on its algorithmic scales, and far more precise than Russia’s disinformation campaigns. And I have no idea what to do about it.
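(For anyone who hasn't seen the mechanics, here's a minimal sketch, with toy numbers, of how that single temperature parameter works: next-token logits are divided by the temperature before the softmax, so a higher value flattens the distribution and makes otherwise-unlikely tokens far more probable:)

import numpy as np

def sample_probs(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Convert raw next-token logits into sampling probabilities."""
    scaled = logits / temperature
    scaled -= scaled.max()  # subtract the max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

# Toy example: three candidate next tokens -- polite, neutral, rude.
logits = np.array([2.0, 1.0, -1.0])

print(sample_probs(logits, temperature=0.7))  # ~[0.80, 0.19, 0.01]: polite dominates
print(sample_probs(logits, temperature=1.5))  # ~[0.61, 0.31, 0.08]: rude gets real weight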

anzabannanna:

Oh, no disagreement here, other than (as I imagine you know) you're being reductive....there are all these risks, and many others...there are essentially an infinite number of possibilities. And if you look at humanity's gong show of a performance during COVID (particularly that of our Dear Leaders and The Experts), a rather minor event on a historic/absolute basis, or during similarly silly things like the Fake News / Censorship / Russian Propaganda drama (or is this "threat" still ongoing? lol) meme war, what are the odds we're going to transition into the AI world without major turbulence, if not a meltdown? And I think most people don't appreciate in detail what you're getting at: if Facebook/Twitter/Reddit/etc, platforms that have a textbox, some clicky arrows/pictures, and a few buttons, can cause all the problems they do, what might an AI that can borderline ~think manage to do, even leaving aside that multiple bad actors are going to be playing various games behind the scenes (while telling a different story to the public, as is always the case)?

> And I have no idea what to do about it.

Well, as luck would have it, I have been working on this general problem for quite some time. I suspect part of the problem is that you mainly/entirely experience reality from the first-person perspective of your in-game character, like this:

https://qualiacomputing.com/2022/12/28/cartoon-epistemology-by-steven-lehar-2003/

Take a top-down "god view" approach (say, the way a World of Warcraft administrator has to, watching all the action & nonsense the in-game characters get up to) and things become much simpler, at least with practice. Think of humans as semi-intelligent, semi-conscious agents in a video game, with all activity, *each individual action*, powered by this:

https://i.imgur.com/wiFCZsZ.jpg

...according to all the borderline insane training each agent has received (which they each then proceed to misinterpret in various hilarious ways), and the whole thing starts to make a lot of sense. Remove yourself from the system, start with a completely blank slate, add planet Earth and a couple billion agents, and move forward thinking from first principles, making no silly errors along the way. What is going on emerges, clearly and simply.

What to do about all this, though....well, that's another story. Maybe what the world needs is a competitor (to literally everything) platform? I think once you understand the field well enough, it becomes less a question of how this could succeed and more one of *how could this possibly fail*?

That'd cost a lot of money though, and I have approximately none.

Benn Stancil:

I have no idea if any of that would work (in theory or in practice), but it highlights another problem to me, which is that any sort of solution would probably take a fair amount of effort, coordination, will, etc. And none of those things seem realistic, so instead we'll have a very chaotic transition, where a lot of the power is held by a handful of companies that are mostly fighting to make money. And even if they aren't outright nefarious or anything, it seems really hard for that not to devolve into a bunch of dark patterns like those in social media.

Benn Stancil:

I'm not an engineer (more engineer-adjacent), but yeah, I know that one.