“Whatever, brains are just neurons and so is this, therefore human-like AI is still forthcoming.”
Both rather miss the point: AI systems harbor surprising failure modes. Yes, humans will patch this particular problem, and yes, AIs will keep winning at Go.
But those failure modes that •would have been easily detectable by humans• indicate intrinsic gaps in the •structure• of these “deep learning” systems, not just their scale.
Commenters correctly pointed out that the specific AIs in question (KataGo and Leela Zero) did not use human play as training data, only AI play. Thanks! Edited posts to fix.
A correction to some replies:
None of the variants of AlphaGo are the AI in question.
The point of my thread was that deep learning has weird blind spots that a human would be unlikely to develop, and that these blind spots are intrinsic to the approach.
A further insight: the fact that it took computer analysis to find the AI’s weakness suggests that our human metacognition makes deep learning’s blind spots hard for us to spot. Again, we’re too easily bamboozled.
From @psu_13: This paper gets at the problems underlying that now-embarrassing Minsky quote. It covers many of the things I was talking about upthread, and much more besides. Thanks for the recommendation! https://pgh.social/@psu_13/109892784391376542
I tend to think that Roomba had the right idea with respect to AI: build stupid-ish machines that assist humans by handling the easy 80% of dreary tasks, and complement humans instead of trying to mimic humans.
The mistake in that line of reasoning is imagining that “intelligence” (whatever that means) is a linear continuum. Computers are already far “smarter” than humans in specific ways. (Can you sum a billion numbers in one second?)
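(A throwaway illustration of that parenthetical, assuming NumPy is installed; timing varies by machine, but this is the kind of thing a computer finishes before you’ve read the first number:)

```python
# Summing a billion numbers: trivial for a computer, impossible for a human.
# (Assumes NumPy; uses roughly 1 GB of RAM. Timing varies by machine, but
# this is typically on the order of a second.)
import numpy as np

nums = np.ones(1_000_000_000, dtype=np.int8)  # a billion ones
print(int(nums.sum()))                        # 1000000000
```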
Part of my point is that what we call AI won’t progress toward human intelligence like going up a staircase. It will be •weird• the whole way, and will thwart our intuitions.
Please don’t read my thread to mean “Machines can’t handle novel situations,” or “Machines can’t generalize,” or “Machines can’t identify underlying patterns.” All those statements are false! There is no such bright line to be found here.
My argument is a fuzzier one: software is surprising; it’s hard to know what the hard part is; we should expect to see ML impress us but then faceplant on the seemingly obvious.
The Go AI was trained by feeding it a huge number of Go games. It built a model based on w̶h̶a̶t̶ ̶h̶u̶m̶a̶n̶s̶ ̶d̶o̶. CORRECTION: KataGo is trained by playing against itself; the model input is “past AI games.”
The human beat it by doing something so obvious that no human (and no sensible AI) would ever do it, so it wasn’t in the training data and the AI never learned to counter it.
(Basically, the human forms a conspicuous giant capture ring while distracting the AI with tactical battles the AI knows how to counter.)
The recent eye-popping advances in AI have come from models that scan huge datasets, either generated by humans (e.g. “all competitive Go games” or “all the text we could find on the web”) or by computer (“the AI plays itself a billion times”), and imitate the patterns in that dataset: no underlying model of meaning, no experience to check against, no theory formation, just parroting. 3/
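(If “imitate the patterns, no model of meaning” sounds abstract, here’s a deliberately tiny caricature: a bigram word model. It is nothing like what KataGo or GPT actually do internally, but it shows what “learn the dataset’s statistics and parrot them back” means:)

```python
# A toy "parrot": it learns only which word tends to follow which in its
# training text, then generates by sampling those patterns. No meaning anywhere.
import random
from collections import defaultdict

corpus = (
    "the human played a strange move and the machine missed it "
    "the machine played a strong move and the human resigned"
).split()

followers = defaultdict(list)           # word -> words that followed it
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def parrot(word, length=8):
    out = [word]
    for _ in range(length):
        if word not in followers:       # dead end: nothing ever followed this word
            break
        word = random.choice(followers[word])
        out.append(word)
    return " ".join(out)

print(parrot("the"))  # plausible-looking phrases, zero understanding behind them
```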
These systems produce such striking results that it raises the question of whether our brains are any different. Are we also just fancy pattern mimics?
Would an LLM gain human-like intelligence if only we had more processing power?
This result suggests that no, there’s more to it: the tactic the AI missed would be painfully obvious to a human player. There’s still something our brains do — theorizing, generalizing, reasoning through the unfamiliar — that these AIs don’t. 4/
But does this matter? Sure, the AI did a faceplant on some bizarro strategy that would never fool a competent human. So what? It’s just a board game.
OK, what if it’s a self-driving car?
Have you ever encountered a traffic situation that was just totally bizarre, but had a common-sense solution like “wait” or “just go around?” What would an AI do in that situation?
Think of the reports of self-driving Teslas suddenly swerving or accelerating straight into an obvious crash. 6/
This is probably what’s going on with the hilarious ChatGPT faceplants making the rounds on social media.
People try to fool GPT with esoteric questions, but those are easy for it: if anybody anywhere on the web already answered the question, no problem — and making it esoteric just narrows the search space.
But give it a three-digit addition problem, and there’s no single specific example to match. And GPT can’t turn all those examples into a generalized theory of how to do addition. 8/
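(For contrast, the “generalized theory of how to do addition” a human carries around is just the grade-school carrying algorithm: a few lines that handle any sum, including ones never seen before. A minimal sketch:)

```python
# Digit-by-digit addition with carry: works for any number of digits,
# no memorized examples required.
def add_by_digits(a: str, b: str) -> str:
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_digits("487", "659"))  # 1146
```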
3. If AI ever does become intelligent (whatever that means), it’s going to be •weird• for a long time first.
4. Beware your own heuristics about what “intelligence” looks like, because •that• is a place where our brains are as easily fooled as the Go AI was. Our brains didn’t evolve around things like GPT, and we’re easily bamboozled by them. /end
Computers have been beating the best human Go players since 2016. The Go world champion retired in part because AI is “an entity that cannot be defeated.”
I think this news story is more interesting than it might first appear (without knowing details, so grain of salt). It isn’t just a gaming curiosity; it points to a fundamental flaw with “deep learning” approaches in general. 1/