Here's a half-formed thought I need to mull over a bit:
Somehow, algorithmic (and especially "AI-driven") decision-making tends to be proposed only in contexts where it can only, or mostly, affect those with the least power in the system.
Migrants and asylum seekers.
Prisoners.
Families using any form of state support (child benefits, food stamps, etc.).
Palestinians in Gaza.
It somehow never gets proposed for use cases where it might affect the wealthy and powerful.
One wonders why. 🤔
🧵/1