[This is exhausting, but I started. Might as well finish.]
>>
Wait what -- now they're talking seriously about "late-stage AGI development"?
>>
Here's a bunch of promises about future oversight by unnamed independent auditors and also "major world governments" (who counts as major? who decides?). Also, how about just DOCUMENTING YOUR DAMN DATA for everyone to see?
>>
"Continuum of intelligence" is gross, not least for the suggestions of ableism, eugenics, transhumanism etc. But also "rate of progress [of] the past decade"? Progress towards what? Ever larger carbon footprints? More plausible fake text?
>>
What's needed is regulation about: how data can be collected and used, transparency of datasets, models and the deployment of text/image generation systems, recourse and contestability of any automated decision making, etc.
Talking about text synthesis machines as if they were "AI" muddies the waters and hampers effective discussions about data rights, transparency, protection from automated decision systems, surveillance, and all the rest of the pressing issues.
>>
The problem isn't regulating "AI" or future "AGI". It's protecting individuals from corporate and government overreach using "AI" to cut costs and/or deflect accountability.
>>
Similarly here, this seems designed to promote the idea that the models they have already put into their API (GPT-2, GPT-3, ChatGPT) are the early stages of "AGI" being "stewarded into existence".
>>
Then there's a glib paragraph about how "most expert predictions have been wrong so far" ending in footnote 2.
Paraphrasing: "Our experts thought we could do this as a non-profit, but then we realized we wanted MOAR MONEY. Also we thought we should just do everything open source but then we decided nah. Also, can't be bothered to even document the systems or datasets."
>>
Hey OpenAI, I'm speaking to you from 2018 to say: DOCUMENT YOUR DAMN DATASETS. Also, to everyone else: If you don't know what's in it, don't use it.
Source: https://aclanthology.org/Q18-1041.pdf
>>
Okay, back to Altman. "As our systems get closer to AGI" -- here's a false presupposition again. Your system isn't AGI, it isn't a step towards AGI, and yet you're dropping that in as if the reader is just supposed to nod along.
Oh, and did you all catch that shout out to xrisk? Weirdo longtermist fantasy indeed.
>>
As I said in my earlier short thread on this blog post, I wish I could just laugh at these people, but unfortunately they are attempting (and I think succeeding) to engage the discussion about regulation of so-called AI systems.
>>
<recordscratch> hang on: did he just say "maximally flourish in the universe"? What kind of weirdo longtermist, space colonizing fantasy is that coming from?
>>
What's in fn1? A massive presupposition failure: The GPTs are learning information about word distributions in lots and lots of text + what word patterns are associated with higher scores (from human raters). That's it.
>>
Then a series of principles for how to ensure that AGI is "beneficial". This includes "governance of AGI" as something that is "widely and fairly shared", but I've seen exactly nothing from OpenAI about or advocating for building shared governance structures.
Meanwhile, "continuously learn and adapt by deploying less powerful versions of the technology" suggests that they think that the various GPTs are "less powerful versions of AGI".
>>
Then Altman invites the reader to imagine that AGI ("if successfully created") is literally magic. Also, what does "turbocharging the economy" mean, if there is already abundance? More $$$ for the super rich, has to be.
Also, note the rhetorical sleight of hand there. Paragraph 1 has AGI as a hypothetical ("if successfully created") but by para 2 it already is something that "has potential".
>>
But oh noes -- the magical imagined AGI also has downsides! But it is so so tempting and important to create, that we can't not create it. Note the next rhetorical sleight of hand here. Now AGI is an unpreventable future.
>>
Put this up on Twitter over the weekend, but I guess it should go here, too:
Okay, I read it so you don't have to. Here's a reaction thread to OpenAI / Sam Altman's blog post from Friday "Planning for AGI and beyond":
https://openai.com/blog/planning-for-agi-and-beyond/
From the get-go this is just gross. They think they are really in the business of developing/shaping "AGI". And they think they are positioned to decide what "benefits all of humanity".
>>
The NYT, in addition to famously printing lots of transphobic nonsense (see the brilliant call-out at nytletter.com), also decided to print an enormous collection of synthetic (i.e. fake) text today.
Why the NYT and Kevin Roose thought their readers would be interested in reading all that fake text is a mystery to me --- but then again (as noted) this is the same publication that thinks its readers benefit from reading transphobic trash, so ¯\_(ツ)_/¯
>>
Beyond the act of publishing chatbot (here BingGPT) output as if it were worth anyone's time, there are a few other instances of #AIHype in that piece that I'd like to point out.
First, the headline. No, BingGPT doesn't have feelings. It follows that they can't be revealed. But notice how the claim that it does is buried in a presupposition: the headline asserts that the feelings are revealed, but presupposes that they exist.
>>
And then here: "I had a long conversation with the chatbot" frames this as though the chatbot was somehow engaged and interested in "conversing" with Roose so much so that it stuck with him through a long conversation.
It didn't. It's a computer program. This is as absurd as saying: "On Tuesday night, my calculator played math games with me for two hours."
Professor, Linguistics, University of Washington. Faculty Director, Professional MS Program in Computational Linguistics (CLMS). Before sending me a DM here, see my contacting me page: http://faculty.washington.edu/ebender/contacting-me.html