OpenAI’s Sora Feed: AI-Generated Video Meets Social Media
As part of its recent release, OpenAI published its philosophy for the Sora Feed, the new social experience built around consuming AI-generated video on its Sora platform.
I am not going to delve into the rights and wrongs of AI-powered video. On the debate about IP and other such concerns, the train has well and truly left the station. This may be a genie others still want to wrestle back into its bottle, but I question whether, at this point, that is possible even if it were agreed to be desirable.
Beyond the concerns about "AI slop," there is another important consideration: what should a social recommendation engine look like in the generative AI age?
Back in the days when platforms like Facebook were fun, content reached us because it was associated with something or someone we chose to connect with. Over the last 20 years that has radically changed. Proactive choice is now virtually ignored; what matters more is using "signals" to surface whatever will keep us engaged rather than enriched.
Social platform vendors will argue that algorithmic recommendation is positive for users, but anyone who has found themselves deep in an Instagram Reels rabbit hole at 1am would probably, in that moment of self-realization at least, disagree.
OpenAI explains its intent in a straightforward and engaging way: "Our aim with the Sora feed is simple: help people learn what’s possible, and inspire them to create." But the fact is, two decades into the social media age, many of us will harbor suspicions about such assertions.
Where OpenAI's approach takes a turn is in the data it intends to use to drive the feed. It appears that an on-by-default data source for your feed "may" be your ChatGPT history.
Is this different from companies like Meta or Google? In important ways, probably yes. There is no broad suggestion that the content of Messenger messages or Gmail emails drives the Facebook algorithm or the YouTube Shorts feed. How each company treats interactions with its own AI chat services is murkier, but neither currently has the consumer reach that OpenAI does.
Before the AI world went kind of crazy and Sam Altman (somewhat literally) led his company into the toilet, he was a major voice calling for regulation of AI services. The protection of AI chats, particularly for paid accounts on these platforms, must be something we think about here. Sam has publicly stated on X that "A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn't describe it that way." Surely this creates a huge ethical responsibility around that data, and automatically opting it in for algorithmic recommendation does not seem like the right path.
What do you think? Am I reading too much into this, or are AI-connected social feeds going in the wrong direction?
First posted on LinkedIn on 10/12/2025 -> View LinkedIn post here