@justnormalkorean >2 out of 3 paragraphs in release announcement dedicated to safety. Sounds expensive in terms of processing cycles. It will be interesting if a cat and mouse game of SAI censoring and users de-censoring core models has begun. Also, enough poojeets and women in this company photo to inspire additional worry.
@justnormalkorean It's gonna be crippled af like Gemini, huh?
>We believe in safe, responsible AI practices. This means we have taken and continue to take reasonable steps to prevent the misuse of Stable Diffusion 3 by bad actors. Safety starts when we begin training our model and continues throughout the testing, evaluation, and deployment. In preparation for this early preview, we've introduced numerous safeguards. By continually collaborating with researchers, experts, and our community, we expect to innovate further with integrity as we approach the model's public release.
https://stability.ai/news/stable-diffusion-3