I don't even know where to start with this one. Apparently the mental health service kokocares.org ran a multi-day trial on 4,000 users without their informed consent, in which Koko's "peer supporters" used ChatGPT to craft their responses to the people they were providing with mental health support.

What makes this different from your average tech-company-does-very-shitty-thing-using-AI-without-informed-consent story is that the co-founder, Rob Morris, was so oblivious to any notion of research ethics, consent, mental health ethics, etc., that he wrote a long thread describing the experiment and was caught completely off guard when people reacted with shock.
I think this example tells us a lot about whether we can trust tech companies to act ethically: it looks like they don't know how, even when they want to.
Morris later tried to walk back his original claims and suggested that he had informed consent from all participants.
But from his own account, it appears that only the "peer supporters" were informed.
According to Business Insider, "Koko users were not initially informed the responses were developed by a bot, and 'once people learned the messages were co-created by a machine, it didn't work,' Morris wrote on Friday."