@megmac Maybe it was a mistake to have all the mobile OSes in the world be made in a ten mile radius in a part of the south San Francisco bay area where it is impossible to do so much as buy a jug of milk without a car.
Earlier today I edited my (small) set of Stack Overflow posts to add the sentence "I do not consent to my words being used to train OpenAI" to the end. Within hours, all these edits were reversed and I got a warning email for "removing or defacing content". I did not remove any content. If this small sentence is "defacing", it is a very minor defacement. In no way was the experience of other users made worse by me adding one sentence.
To Stack Overflow, you are not a person. You are "content".
At an event called "New Demos 2" in Toronto. We're gonna watch some demoscene stuff on an actual movie screen at a movie theater. This is the programme:
The command-line frontend to Factorio did not appear to be publicly available (so no URL), and it was also very difficult to photograph. So you'll have to take my word for it: it was cool
Open-ended quantified-self tracker app. This was also pretty hard to photograph. https://futureland.tv
What they did:
- Extracted balance data from the Factorio app
- Made a mechanics-only recreation of Factorio
- Made a web service frontend to the Factorio mechanics reproduction
- Made a command-line frontend to the web service
- Made a "speedrun" of Factorio (IGT: a little under 8 hours; RTA: about 30 seconds) consisting of about 1100 commands you pipe to the command-line frontend
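The piping setup described above can be sketched in shell. The actual tools weren't public, so `factorio_cli` here is a hypothetical stand-in that just echoes each command it would relay to the web service; the real speedrun would be ~1100 such lines piped in at once.

```shell
# Hypothetical stand-in for the command-line frontend: read game
# commands from stdin, one per line, and "relay" each to the backend
# (here we just echo what would be executed).
factorio_cli() {
  while IFS= read -r cmd; do
    echo "executed: $cmd"
  done
}

# The speedrun file would be ~1100 lines of commands like these:
printf 'build miner at 10 5\nresearch automation\n' | factorio_cli
```

The neat property of this design is that the whole run is just a text file: in-game time is hours, but real time is however long it takes the pipe to drain, hence the ~30 second RTA.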
I'm really concerned about the effect "generative AI" is going to have on the attempt to build a copyleft/commons.
As artists/coders, we saw that copyright constrains us. So we decided to make a fenced-off area where we could make copyright work for us in a limited way, with permissions for derivative works within the commons according to clear rules set out in licenses.
Now OpenAI has made a world where rules and licenses don't apply to any company with a valuation over $N billion.
Some jerks did mass scraping of open source projects, putting them in a collection called "The Stack", which they specifically recommend other people use as machine learning training sources. If you look at their "GitHub opt-out repository" you'll find just page after page of people asking to have their stuff removed:
In a world where copyleft licenses turn out to restrict only the small actors they were meant to empower, and don't apply to big bad-actor "AI" companies, what is the incentive to put your work out under a license that will only serve to make it a target for "AI" scraping?
With NFTs, we saw people taking their work private, because putting something behind a clickwall/paywall was the only way to keep it from being stolen for NFTs. I assume the same process will accelerate in an "AI" world.
So… what is happening here? All these people are opting out of having their content recorded as part of a corpus of open source code. And I'll probably do the same, because "The Stack" is falsely implying people have permission to use it for ML training. But this means "The Stack" has put a knife in the heart of publicly archiving open source code at all. Future attempts to preserve OSS code will, if they base themselves on "the stack", not have any of those opted-out repositories to draw from.
…but wait! If you look at what they actually did (correct me if I'm wrong), they aren't actually doing any machine learning in the "Stack" repo itself. The "Stack" just collects zillions of repos in one place. Mirroring my content as part of a corpus of open source software, torrenting it, putting it on microfilm in a seed bank: that's the kind of thing I want to encourage. The problem is that they then *suggest* people create derivative works of those repos in contravention of the license. (2/2)
Like, heck, how am I *supposed* to rely on my code getting preserved after I lose interest, I die, BitBucket deletes every bit of Mercurial-hosted content it ever hosted, etc? Am I supposed to rely on *Microsoft* to responsibly preserve my work? Holy crud no.
We *want* people to want their code widely mirrored and distributed. That was the reason for the licenses. That was the social contract. But if machine learning means the social contract is dead, why would people want their code mirrored?
@foone This is encouraging to know because I'm currently setting up SyncThing.
On a cursory search I found lots of people going "it's easy! you just set up an rsync server on your phone! download [link to one of 3 rsync server apps which apparently existed in 2012 but are no longer on the Play Store]"
Why is Windows so bad at basic file operations? Isn't this supposed to be a Disk Operating System? Copying files is slow and unpredictable. *Canceling* file copies is bizarrely slow. If a file is added to a directory, I have to manually refresh for Windows Explorer to notice.
There is, in my downloads folder, a file named "2019-07-02 gutenberg.org.tar.part". I don't remember downloading this. Windows claims it's several hundred gigabytes larger than my hard drive.