OpenAI is exploring the idea of developing a social networking platform that would focus on reducing automated and fake accounts by ensuring users are real individuals, according to people familiar with the matter. The project remains at an early conceptual stage and is reportedly being handled by a small internal team. The discussions reflect growing concern within the technology sector about the scale of bot-driven activity on existing social platforms and its impact on online discourse, market behavior, and user trust. While no formal product announcement has been made, the reported initiative highlights OpenAI’s interest in rethinking how digital identity and authenticity could be managed in social spaces increasingly shaped by artificial intelligence.
According to reporting by Forbes, one of the ideas under consideration involves requiring proof of personhood, potentially through biometric identity checks. These could include tools such as Apple’s Face ID or the World Orb, an iris-scanning device operated by Tools for Humanity, an organization chaired by OpenAI chief executive Sam Altman. The central goal would be to ensure that each account on the platform is linked to a single real person, a departure from current industry norms. Most major social networks today rely on combinations of email addresses, phone numbers, and behavioral analysis to identify users and flag suspicious activity, rather than direct biometric verification. Proponents of stronger verification argue that such measures could significantly reduce spam, coordinated manipulation, and the influence of non-human actors, which have become persistent challenges across social media ecosystems.
The concept has also raised questions around privacy and data protection. Privacy advocates have long warned that biometric identifiers such as iris scans or facial data carry unique risks because they cannot be changed if compromised. Concerns have been raised about how such sensitive information would be stored, secured, and governed over time, particularly if a platform were to scale rapidly. Critics note that misuse or breaches involving biometric data could have long-term consequences for users, making transparency and safeguards critical if such systems were ever implemented. At this stage, however, there is no indication of what specific privacy frameworks or technical safeguards OpenAI might adopt, and sources caution that the concept could evolve significantly or be shelved entirely before any public release.
Details on how a potential social network would connect with OpenAI’s existing products also remain unclear. People familiar with the discussions said the platform could allow users to generate and share content using AI tools, including images and videos, which would place it in direct competition with established services such as Instagram, TikTok, and X. Such a move would extend OpenAI’s presence beyond productivity and creative tools into consumer social platforms, an area where competition is intense and regulatory scrutiny is increasing. OpenAI declined to comment on the report, though The Verge previously reported in April that the company was working on a social networking product, suggesting internal exploration has been ongoing for some time.
The reported initiative comes against the backdrop of a broader debate about bots and automated behavior online. Bot accounts have been used to amplify spam, influence financial markets, and distort political or social conversations. The issue has been especially visible on X, where enforcement against automated accounts has varied over recent years despite periodic large-scale removals. Sam Altman has publicly criticized the growing presence of AI-driven accounts on social platforms, stating previously that online spaces increasingly feel artificial. He has also referenced the so-called dead internet theory, which argues that a significant and growing share of online activity is generated by non-human actors rather than real users.
OpenAI’s track record shows it can scale consumer-facing products rapidly once a concept moves beyond experimentation. ChatGPT reached 100 million users within two months of its launch and has continued to grow, while the company’s video generation application Sora recorded more than one million downloads within days of release. Whether a social network built around verified human identity would see similar adoption remains uncertain, particularly given the technical, ethical, and regulatory complexities involved. For now, the idea underscores how concerns about authenticity and trust are shaping the next wave of conversations around social platforms in an era increasingly influenced by artificial intelligence.