For some context, see: https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo
So that already tells you something about where this is coming from. This is gonna be a hot mess.
>>
Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with AI hype. Here's a quick rundown.
First, for context, note that URL? The Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations.
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
>>
“I think we need to clarify accountability. If ChatGPT puts some non-information out into the world, who is accountable for it? OpenAI would like to say they aren’t. I think our government could say otherwise.”
me to Mack DeGeurin at @Gizmodo
https://www.gizmodo.com.au/2023/03/how-a-senators-misguided-tweet-can-help-us-understand-ai/
“We desperately need smart regulation around the collection and use of data, around automated decision systems, and around accountability for synthetic text and images. But the folks selling those systems (notably #OpenAI) would rather have policymakers worried about doomsday scenarios involving sentient machines.” -- me to Tony Ho Tran at the Daily Beast
Ugh -- I'm seeing a lot of commentary along the lines of "'stochastic parrot' might have been an okay characterization of previous models, but GPT-4 actually is intelligent."
Spoiler alert: It's not. Also, stop being so credulous.
(Some of this I see because it's tweeted at me, but more of it comes to me by way of the standing search I have on the phrase "stochastic parrots" and its variants. The tweets in that column have been getting progressively more toxic over the past couple of months.)
What's particularly galling about this is that people are making these claims about a system that they don't have anywhere near full information about. Reminder that OpenAI said "for safety" they won't disclose training data, model architecture, etc.
https://twitter.com/emilymbender/status/1635697381244272640
But people want to believe SO HARD that AGI is nigh.
Remember: If #GPT4 or #ChatGPT or #Bing or #Bard generated some strings that make sense, that's because you made sense of them.
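Here's a toy version of that point, a minimal sketch of my own (tiny invented corpus, nothing to do with any real model): sample each next word from bigram statistics and you get fluent-looking strings with no meaning behind them. Whatever sense the output has, the reader supplies.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot" (my sketch, not from the paper): each next
# word is sampled from bigram statistics of a tiny invented corpus.
# No meaning, no intent, no world model -- only the likelihood of
# word sequences.
corpus = (
    "the model generates text . the model predicts the next word . "
    "the reader makes sense of the text ."
).split()

bigrams = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1].append(w2)

def parrot(start="the", length=12):
    words = [start]
    for _ in range(length):
        words.append(random.choice(bigrams[words[-1]]))
    return " ".join(words)

print(parrot())  # fluent-looking output; the sense-making is yours
```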
Reading about #ChatGPT plug-ins and wondering why this is framed as plug-ins for #ChatGPT (giving it "capabilities") rather than #ChatGPT as a plug-in to provide a conversational front-end to other services.
Never mind, I know why: This is #OpenAI yet again trying to sell their text synthesis machine as "an AI".
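To spell out that reframing with a hypothetical sketch (every name here is invented): the capability is an ordinary service API, and the text synthesis machine is wired in as a conversational front-end to it. Described that way, it's much harder to sell as "giving an AI capabilities".

```python
def weather_service(city: str) -> str:
    # The actual capability: a plain old service API.
    return f"Forecast for {city}: rain, bring an umbrella."

def text_synthesizer(user_message: str) -> str:
    # Stand-in for the LLM: it only emits plausible text, here a
    # canned service call (a real model would synthesize this string).
    return "weather_service('Seattle')"

def conversational_frontend(user_message: str) -> str:
    drafted = text_synthesizer(user_message)
    if drafted.startswith("weather_service("):
        city = drafted.split("'")[1]  # parse the drafted call
        return weather_service(city)  # the service does the work
    return drafted

print(conversational_frontend("Do I need an umbrella in Seattle?"))
```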
Another metaphor I'm curious about: "AI" as "fuel" or "power" --- when people talk about "AI-powered technology" or "AI that fuels your creativity/curiosity". This seems to suggest that the AI is autonomously producing something...
Where are my metaphor theorists at?
Apropos #openai refusing to disclose any information about the training data for #GPT4 and #Google being similarly cagey about #Bard...
From the Stochastic Parrots paper, written in late 2020 and published in March 2021:
I see people asking: How else will we critically study GPT-4 etc then?
Don't. Opt out. Study something else.
GPT-4 should be assumed to be toxic trash until and unless #OpenAI is *open* about its training data, model architecture, etc.
I rather suspect that if we ever get that info, we will see that it is toxic trash. But in the meantime, without the info, we should just assume that it is.
To do otherwise is to be credulous, to serve corporate interests, and to set terrible precedent.
Folks, I encourage you to not work for @OpenAI for free:
Don't do their testing
Don't do their PR
Don't provide them training data
https://dair-community.social/@emilymbender/110029104362666915
Oh look, #openAI wants you to test their "AI" systems for free. (Oh, and to sweeten the deal, they'll have you compete to earn #GPT4 access.)
https://techcrunch.com/2023/03/14/with-evals-openai-hopes-to-crowdsource-ai-model-testing/
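And to be concrete about what "testing for free" means here: as I read the openai/evals repo, a basic eval is just a JSONL file of prompts paired with graded ideal answers, roughly like this (field names from the repo as I understand them; the content is invented).

```python
import json

# Roughly the shape of one sample in a basic match-style eval in the
# openai/evals framework, as I read the repo (content invented).
sample = {
    "input": [
        {"role": "system", "content": "Answer with one word."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "ideal": "Paris",
}
print(json.dumps(sample))  # one such line per sample in samples.jsonl
```

In other words: labeled data, contributed free of charge.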
A cynical take is that they realize that without info about data, model architecture & training setup, we aren't positioned to reason about how the model produces the results that it does… and thus are more likely to believe claims of "AGI" and buy what they're selling.
But given all the xrisk rhetoric (and Altman's blog post from Feb), it may also be that at least some of the authors of this thing actually believe their own hype and really think they are making choices about "safety".
Okay, taking a few moments to read (some of) the #gpt4 paper. It's laughable the extent to which the authors are writing from deep down inside their xrisk/longtermist/"AI safety" rabbit hole.
Things they aren't telling us:
1) What data it's trained on
2) What the carbon footprint was
3) Architecture
4) Training method
>>
But they do make sure to spend a page and a half talking about how they vewwy carefuwwy tested to make sure that it doesn't have "emergent properties" that would let it "create and act on long-term plans" (sec 2.9).
>>
I also lol'ed at "GPT-4 was evaluated on a variety of exams originally designed for humans": They seem to think this is a point of pride, but it's actually a scientific failure. No one has established the construct validity of these "exams" vis-à-vis language models. (Toy illustration of this point below.)
For more on missing construct validity and how it undermines claims of 'general' 'AI' capabilities, see:
>>
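A toy illustration of the construct-validity point (everything here is invented, and the secrecy about training data is exactly what makes the real version untestable): if the exam items, or near duplicates of them, sit in the training data, a plain lookup table aces the test while measuring nothing but memorization.

```python
# Toy illustration (mine): a perfect exam score without the construct
# the exam was designed to measure.
exam = [
    ("If all widgets are gadgets and x is a widget, is x a gadget?", "yes"),
    ("Is 17 a prime number?", "yes"),
    ("Does a square have four equal sides?", "yes"),
]

memorized = dict(exam)  # the "training data" happens to contain the exam

def lookup_model(question: str) -> str:
    return memorized.get(question, "no idea")

score = sum(lookup_model(q) == a for q, a in exam) / len(exam)
print(f"exam score: {score:.0%}")  # 100%, construct validity: zero
```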
Also LOL-worthy, against the backdrop of the utter lack of transparency, was "We believe that accurately predicting future capabilities is important for safety. Going forward we plan to refine these methods and register performance predictions across various capabilities before large model training begins, and we hope this becomes a common goal in the field."
Trying to position themselves as champions of the science here & failing.
>>
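For reference, the "performance prediction" machinery they're gesturing at is scaling-law extrapolation: fit a power law to smaller training runs and extrapolate to the big one before training it. A minimal sketch, with every number invented:

```python
import numpy as np

# Minimal sketch of scaling-law extrapolation (all numbers invented):
# fit loss ≈ a * compute**slope + c on small runs, then predict the
# loss of a much larger run before it is trained.
compute = np.array([1e18, 1e19, 1e20, 1e21])  # FLOPs of small runs
loss = np.array([3.10, 2.62, 2.28, 2.05])     # their observed losses

c = 1.7  # assumed irreducible-loss floor
slope, log_a = np.polyfit(np.log(compute), np.log(loss - c), 1)
a = np.exp(log_a)

target = 1e24  # compute budget of the big run
predicted = a * target**slope + c
print(f"predicted loss at {target:.0e} FLOPs: {predicted:.2f}")
```

Note what this does and doesn't buy you: a predicted loss curve is not "capabilities", and it certainly isn't transparency.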
Feeling exhausted by the #AIhype press cycles? Finding yourself hiding from GPT-4 discourse? Longing for a dose of reality?
Join us on Friday for Stochastic Parrots Day!
A journalist asked me to comment on the release of GPT-4 a few days ago. I generally don't like commenting on what I haven't seen, but here is what I said:
#DataDocumentation #AIhype #OpenAI #GPT4
>>
"One thing that is top of mind for me ahead of the release of GPT-4 is OpenAI's abysmal track record in providing documentation of their models and the datasets they are trained on. Since at least 2017 there have been multiple proposals for how to do this documentation, each accompanied by arguments for its importance.
>>
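For anyone wondering what those proposals actually ask for: data statements, datasheets for datasets, model cards, they all amount to answering a fixed set of questions before release. A rough sketch of the kind of record at issue (my paraphrase, not a quote from any of those papers):

```python
# A rough sketch (my paraphrase) of the questions that documentation
# proposals like data statements and datasheets for datasets ask
# dataset creators to answer before release.
dataset_documentation = {
    "curation_rationale": "Why these sources? Why this data?",
    "language_varieties": "Which languages and dialects are represented?",
    "speaker_demographics": "Whose words are these?",
    "collection_process": "How was it gathered, and with what consent?",
    "preprocessing_and_filtering": "What was removed, by what criteria?",
    "known_limitations": "Documented biases, gaps, toxic content?",
}

for field, question in dataset_documentation.items():
    print(f"{field}: {question}")
```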