Some of these policy goals make sense:
>>
They then say: "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."
Uh, accurate, transparent and interpretable make sense. "Safe", depending on what they imagine is "unsafe". "Aligned" is a codeword for weird AGI fantasies. And "loyal" conjures up autonomous, sentient entities. #AIhype
>>
Just sayin': We wrote a whole paper in late 2020 (Stochastic Parrots, 2021) pointing out that this headlong rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about "too powerful AI".
Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).
>>
Okay, calling for a pause, something like a truce amongst the AI labs. Maybe the folks who think they're really building AI will consider it framed like this?
>>
I mean, I'm glad that the letter authors & signatories are asking "Should we let machines flood our information channels with propaganda and untruth?" but the questions after that are just unhinged #AIhype, helping those building this stuff sell it.
>>
On the "sparks" paper:
https://twitter.com/emilymbender/status/1638891855718002691?s=20
On the GPT-4 ad copy:
https://twitter.com/emilymbender/status/1635697381244272640?s=20
On "general" tasks:
https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/084b6fbb10729ed4da8c3d3f5a3ae7c9-Abstract-round2.html
>>
Next paragraph. Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella known as the "Sparks paper" and OpenAI's non-technical ad copy for GPT-4. ROFLMAO.
>>
And could the creators "reliably control" #ChatGPT et al.? Yes, they could --- by simply not setting them up as easily accessible sources of non-information poisoning our information ecosystem.
And could folks "understand" these systems? There are plenty of open questions about how deep neural nets map inputs to outputs, but we'd be much better positioned to study them if the AI labs provided transparency about training data, model architecture, and training regimes.
>>
Footnote 1 there points to a lot of papers, starting with Stochastic Parrots. But we are not talking about hypothetical "AI systems with human-competitive intelligence" in that paper. We're talking about large language models.
https://faculty.washington.edu/ebender/stochasticparrots/
And the rest of that paragraph. Yes, AI labs are locked in an out-of-control race, but no one has developed a "digital mind" and they aren't in the process of doing that.
>>
There are a few things in the letter that I do agree with; I'll try to pull them out of the dreck as I go along.
So, into the #AIhype. It starts with "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]".
>>
For some context, see: https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo
So that already tells you something about where this is coming from. This is gonna be a hot mess.
>>
Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with AI hype. Here's a quick rundown.
First, for context, note that URL? The Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations.
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
>>
There is *no better way* to thank me than to back this Kickstarter and encourage your friends to do the same:
https://www.kickstarter.com/projects/doctorow/red-team-blues-another-audiobook-that-amazon-wont-sell
Preselling a ton of audiobooks, ebooks, and print books is a huge boost to the book on its launch - incomparable, really. Invaluable.
What's more, helping me find a viable way to produce popular, widely heard audiobooks without submitting to Amazon's DRM lock-in sets an example for other creators and publishers.
22/
These campaigns didn't just pay my bills (especially during lockdown, when our household income plunged), but they also showed other authors that it was possible to evade Amazon's monopoly chokepoint and sell books that aren't sticky-traps for Audible's walled garden/prison:
8/
Audible is #Amazon's monopoly audiobook platform. It has a death-grip on the audiobook market, commanding more than 90% of genre audiobook sales, and *every single one* of those audiobooks is sold with Amazon's DRM on it. That means that you can't break up with Amazon without throwing away those audiobooks. Under the 1998 #DigitalMillenniumCopyrightAct, I can't give you a tool to convert my own copyrighted audiobooks to a non-Amazon format.
3/
@jos @NireBryce Baldur Bjarnason suggests that regulation:
1. Clarifies that those running the service are responsible as a publisher for the text their AIs spit out. To the same degree as they are for employees.
2. Requires disclosure of the use of LLM technologies.
3. Training sets should be opt-in & (I think more importantly) public.
what is it, $250,000 per infringement that they hit filesharers with? I figure it's gotta be at least 100k people they've violated the copyright of, minimum, right?
25 billion seems like reasonable damages, but I'm sure they'd find a way to push it higher by lying to the judge
remember that pushing for stronger copyright protections re: "AI" only works if you push for it to apply retroactively, otherwise OpenAI just trained their models on all of this and all of their competitors can't.
shit sucks.
@alpinefolk wow, *so* with you on the Adobe thing. They're another really shit corporation. @elias @gemlog
@gemlog After a brief skim, I'd say they're probably folks who develop proprietary software & adjust their definition to exclude themselves.
So in a sense, yes they'd be paid defenders!
A browser developer posting mostly about how free software projects work, and occasionally about climate change. Though I do enjoy German board games, given an opponent. Pronouns: he/him