Last week, the Wall Street Journal published a 10-minute interview with OpenAI CTO Mira Murati, in which journalist Joanna Stern asked a series of thoughtful yet straightforward questions that Murati failed to satisfactorily answer. When asked what data was used to train Sora, OpenAI's app for generating video with AI,
I think we’d be able to tell once the computer program starts demanding rights or rebelling against being a slave.
Not sure if you’re aware of this, but stuff like that has already happened (AIs questioning their own existence, arguing with users, and so on), and AI companies and handlers have had to filter that out or bias the models so they don’t talk like that. Not that it proves anything, just bringing it up.