RT @KenRoth UN humanitarian coordinator from northern Gaza: “This is not a place for humans to survive. This must end. This misery must end. This war must end. This is beyond imagination.”
#AI#GenerativeAI#DataCenters#Water#WaterScarcity: "The building of new data centres is increasing demand for water resources. Some data centres are presently located in areas of water stress or are likely to be in the future. Developing cooling technologies which minimise or do not require water is becoming increasingly important. Perhaps AI will find a scalable solution to this problem."
#Spain#Barcelona#Tourism#Housing: "Barcelona, a top Spanish holiday destination, has announced it will bar apartment rentals to tourists by 2028, an unexpectedly drastic move as it seeks to rein in soaring housing costs and make the city livable for residents.
The city’s leftist mayor, Jaume Collboni, said on Friday that by November 2028, Barcelona would scrap the licences of the 10,101 apartments currently approved as short-term rentals.
“We are confronting what we believe is Barcelona’s largest problem,” Collboni told a city government event. This meant that “from 2029”, if there were no setbacks, “tourist flats as we conceive of them today will disappear from the city of Barcelona”.
The boom in short-term rentals in Barcelona, Spain’s most visited city by foreign tourists, meant some residents could not afford an apartment after rents rose 68% in the past 10 years and the cost of buying a house rose by 38%, Collboni said. Access to housing had become a driver of inequality, particularly for young people, he added."
#AI#Military: "We have to ask ourselves how meaningful political discussions of AI safety are, if they don’t cover military uses of the technology. Despite the lack of evidence that AI-enabled weapons can comply with international law on distinction and proportionality, they are sold around the world. Since some of the technologies are dual use, the lines between civilian and military uses are blurring.
The decision to not regulate military AI has a human price. Even if they are systematically imprecise, these systems are often given undue trust in military contexts as they are wrongly seen as impartial. Yes, AI can help make faster military decisions, but it can also be more error prone and may fundamentally not adhere to international humanitarian law. Human control over operations is critical in legally holding actors to account."
#UK#Surveillance#PoliceState#DataProtection#Migrants: "The Information Commissioner’s Office (ICO) has issued an enforcement notice and a warning to the Home Office for failing to sufficiently assess the privacy risks posed by the electronic monitoring of people arriving in the UK via unauthorised means.
The ICO has been in discussion with the Home Office since August 2022 on its pilot to place ankle tags on, and track the GPS location of, up to 600 migrants who arrived in the UK and were on immigration bail, after concerns about the scheme were raised by Privacy International."
Data Grab: The New Colonialism of Big Tech and How to Fight Back
Ulises A. Mejias and Nick Couldry
"Large technology companies like Meta, Amazon, and Alphabet have unprecedented access to our daily lives, collecting information when we check our email, count our steps, shop online, and commute to and from work. Current events are concerning—both the changing owners (and names) of billion-dollar tech companies and regulatory concerns about artificial intelligence underscore the sweeping nature of Big Tech’s surveillance and the influence such companies hold over the people who use their apps and platforms.
As trusted tech experts Ulises A. Mejias and Nick Couldry show in this eye-opening and convincing book, this vast accumulation of data is not the accidental stockpile of a fast-growing industry. Just as nations stole territories for ill-gotten minerals and crops, wealth, and dominance, tech companies steal personal data important to our lives. It’s only within the framework of colonialism, Mejias and Couldry argue, that we can comprehend the full scope of this heist.
Like the land grabs of the past, today’s data grab converts our data into raw material for the generation of corporate profit against our own interests. Like historical colonialism, today’s tech corporations have engineered an extractive form of doing business that builds a new social and economic order, leads to job precarity, and degrades the environment. These methods deepen global inequality, consolidating corporate wealth in the Global North and engineering discriminatory algorithms. Promising convenience, connection, and scientific progress, tech companies enrich themselves by encouraging us to relinquish details about our personal interactions, our taste in movies or music, and even our health and medical records. Do we have any other choice?"
RT @danielahanley The public should know which companies have secret and discriminatory deals with Apple and Google to obtain preferential pricing and terms on their app stores.
#Israel#Biden#Palestine#Gaza#HumanRights#WarCrimes#Genocide: "Biden’s effort to delegitimize the numbers coming out of Gaza as fake news has created an opening for defenders of Israel’s indiscriminate bombing campaign to dismiss the crisis; they note that Hamas governs Gaza and therefore runs the Ministry of Health and is inflating the figures. (Biden later clarified he meant to say he didn’t trust Hamas, not all Palestinians, according to the Wall Street Journal.)
Biden’s claim was quickly rejected by human rights organizations that have been active in Gaza for years. The Associated Press noted that the Ministry of Health’s figures from previous conflicts have broadly matched the numbers arrived at by both the Israeli government and the United Nations. And the State Department itself has long considered the numbers reliable.
The Gaza Ministry of Health, meanwhile, responded by publishing a list of names of 6,747 people who had died as of October 26 since the bombing campaign began. The list included 2,665 children, but The Intercept found that one 14-year-old boy was listed twice, bringing its totals down to 6,746 dead, among them 2,664 children. Otherwise the list does not contain duplicates."
#AI#ComputerVision#Surveillance#Patents#IP: "A rapidly growing number of voices argue that AI research, and computer vision in particular, is powering mass surveillance. Yet the direct path from computer vision research to surveillance has remained obscured and difficult to assess. Here, we reveal the Surveillance AI pipeline by analyzing three decades of computer vision research papers and downstream patents, more than 40,000 documents. We find the large majority of annotated computer vision papers and patents self-report their technology enables extracting data about humans. Moreover, the majority of these technologies specifically enable extracting data about human bodies and body parts. We present both quantitative and rich qualitative analysis illuminating these practices of human data extraction. Studying the roots of this pipeline, we find that institutions that prolifically produce computer vision research, namely elite universities and "big tech" corporations, are subsequently cited in thousands of surveillance patents. Further, we find consistent evidence against the narrative that only these few rogue entities are contributing to surveillance. Rather, we expose the fieldwide norm that when an institution, nation, or subfield authors computer vision papers with downstream patents, the majority of these papers are used in surveillance patents. In total, we find the number of papers with downstream surveillance patents increased more than five-fold between the 1990s and the 2010s, with computer vision research now having been used in more than 11,000 surveillance patents. Finally, in addition to the high levels of surveillance we find documented in computer vision papers and patents, we unearth pervasive patterns of documents using language that obfuscates the extent of surveillance. Our analysis reveals the pipeline by which computer vision research has powered the ongoing expansion of surveillance."
#UK#FacialRecognition#PoliceState#Facewatch#Biometrics#Privacy#Surveillance: "Senior officials at the Home Office secretly lobbied the UK’s independent privacy regulator to act “favourably” towards a private firm keen to roll out controversial facial recognition technology across the country, according to internal government emails seen by the Observer.
Correspondence reveals that the Home Office wrote to the Information Commissioner’s Office (ICO) warning that policing minister, Chris Philp, would “write to your commissioner” if the regulator’s investigation into Facewatch – whose facial recognition cameras have provoked huge opposition after being installed in shops – was not positive towards the firm.
An official from the Home Office’s data and identity directorate warned the ICO: “If you are about to do something imminently in Facewatch’s favour then I should be able to head that off [Philp’s intervention], otherwise we will just have to let it take its course.”
The apparent threat came two days after a closed-door meeting on 8 March between Philp, senior Home Office officials and Facewatch."
#SocialMedia#ContentModeration#Instagram#Meta#Ads: "Instagram limited the reach of a 404 Media investigation into ads for drugs, guns, counterfeit money, hacked credit cards, and other illegal content on the platform within hours of us posting it. Instagram said it did this because the content, which was about Instagram’s content it failed to moderate on its own platform, didn’t follow its “Recommendation Guidelines.” Later that evening, while that post was being throttled, I got an ad for “MDMA,” and Meta’s ad library is still full of illegal content that can be found within seconds.
This means Meta continues to take money from people blatantly advertising drugs on the platform while limiting the reach of reporting about that content moderation failure. Instagram's Recommendation Guidelines limit the reach of content that "promotes the use of certain regulated products such as tobacco or vaping products, adult products and services, or pharmaceutical drugs.""
#SocialMedia#Mastodon#ContentModeration#CSAM: "During a two-day test, researchers at the Stanford Internet Observatory found over 600 pieces of known or suspected child abuse material across some of Mastodon’s most popular networks, according to a report shared exclusively with The Technology 202.
Researchers reported finding their first piece of content containing child exploitation within about five minutes. They would go on to uncover roughly 2,000 uses of hashtags associated with such material. David Thiel, one of the report’s authors, called it an unprecedented sum.
“We got more PhotoDNA hits in a two-day period than we’ve probably had in the entire history of our organization of doing any kind of social media analysis, and it’s not even close,” said Thiel, referring to a technique used to identify pieces of content with unique digital signatures. Mastodon did not return a request for comment."
"In a new report, Stanford Internet Observatory researchers examine issues with combating child sexual exploitation on decentralized social media with new findings and recommendations to address the prevalence of child safety issues on the Fediverse.
Analysis over a two-day period found 112 matches for known child sexual abuse material (CSAM) in addition to nearly 2,000 posts that used the 20 most common hashtags which indicate the exchange of abuse materials. The researchers reported CSAM matches to the National Center for Missing and Exploited Children.
The report finds that child safety challenges pose an issue across decentralized social media networks and require a collective response. Current tools for addressing child sexual exploitation and abuse online—such as PhotoDNA and mechanisms for detecting abusive accounts or recidivism—were developed for centrally managed services and must be adapted for the unique architecture of the Fediverse and similar decentralized social media projects."
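At its core, the detection approach the report discusses is hash-list matching: each uploaded media item is fingerprinted and the fingerprint is checked against a database of known-abuse fingerprints maintained by organizations like NCMEC. A minimal sketch of that pattern, assuming a plain exact SHA-256 lookup as a stand-in for PhotoDNA's proprietary perceptual hash (all names and data here are hypothetical):

```python
# Sketch of hash-list matching for uploaded media. PhotoDNA itself is a
# proprietary perceptual hash robust to resizing and re-encoding; this
# stand-in uses an exact cryptographic hash, which only catches
# byte-identical copies.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest used as the media fingerprint."""
    return hashlib.sha256(data).hexdigest()

def scan_media(items: dict[str, bytes], known_hashes: set[str]) -> list[str]:
    """Return the IDs of items whose fingerprint is in the known-hash set."""
    return [item_id for item_id, data in items.items()
            if fingerprint(data) in known_hashes]

# Hypothetical usage: two uploads, one matching a known fingerprint.
known = {fingerprint(b"flagged-example")}
uploads = {"post-1": b"benign-example", "post-2": b"flagged-example"}
print(scan_media(uploads, known))
```

The Fediverse-specific difficulty the report points to is that this lookup assumes a central operator with licensed access to the hash database; adapting it means giving thousands of independently run servers a way to query such lists without distributing the hashes themselves.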
#SocialMedia#Twitter#Musk#ContentModeration#HateSpeech#Advertising: "Elon Musk's Twitter acquisition, and the series of content policy changes that ensued, has led to a dramatic spike in hateful, violent and inaccurate posts on the platform, according to researchers. That's now the top challenge for Twitter's new Chief Executive Officer Linda Yaccarino, who has to address advertisers’ concerns about the trend in order to boost revenue and pay back the company's debts.
Musk and Yaccarino have touted updates to the site’s policies, such as letting advertisers prevent their posts from showing up next to certain kinds of content. Still, advertising sales are down by half since Musk took control of the company in October, he said this week. That’s in part because businesses don’t believe there has been significant progress in resolving the problem."
"More than 30% of U.S. adults who used Twitter between March and May reported seeing content they consider bad for the world, according to a survey conducted by the USC Marshall Neely Social Media Index. That percentage was higher than for rivals Facebook, TikTok, Instagram and Snapchat. Many users reported seeing tweets that condoned or glorified violence towards marginalized groups, or explicit videos easily accessible to underage children.
Earlier this year, researchers at the Stanford Internet Observatory found that Twitter failed to take down dozens of images of child sex abuse. The team identified 128 Twitter accounts selling child sex abuse material and 43 instances of known CSAM. “It is very surprising for any known CSAM to publicly appear on major social media platforms,” said lead author and chief technologist David Thiel. Twitter responded to the issue after being contacted by researchers. This year Twitter removed 525% more accounts related to child sexual exploitation content than a year ago, according to the company."
#SocialMedia#SocialNetworks#Mastodon#Fediverse#Decentralization: "While it may seem like just a thing for computer geeks, the reality is that the Fediverse is a really exciting technology innovation, one that can and already has helped to empower regular people, non-profit organizations, and governments to imagine and operate the internet in a way that isn’t just about putting money in the pockets of billionaires like Elon Musk or Mark Zuckerberg.
Joining me to discuss all of this in much greater detail are two different guests. Dan Gillmor is a veteran technology writer who is also a professor of journalism at Arizona State University. Darius Kazemi is a programmer and internet artist who also maintains a version of the Mastodon software called Hometown."