UpScrolled struggles to moderate hate speech after rapid growth

UpScrolled, a rapidly growing social network, is facing significant challenges in moderating hate speech and harmful content after attracting over 2.5 million users. Reports reveal that usernames and hashtags containing racial slurs and extremist phrases have gone unchecked. The company says it is working to expand its moderation capabilities, but many accounts with offensive content remain online, raising concerns about user safety and the platform's policies.
Key Points
- UpScrolled experienced explosive growth, surpassing 2.5 million users in January 2026, following changes in TikTok's ownership.
- Users reported a proliferation of usernames containing racial slurs and hate speech, and TechCrunch confirmed these findings.
- Despite user reports, offensive accounts remained online days later, pointing to strained moderation capacity.
- The ADL highlighted UpScrolled as hosting antisemitic content and connections to foreign terrorist organizations.
- Founder Issam Hijazi acknowledged the moderation issues and stated plans to expand the moderation team and upgrade technology.
Relevance
- The situation with UpScrolled mirrors issues faced by other platforms like Bluesky, which also struggled with hate speech moderation after a surge in user numbers.
- Social media platforms are increasingly under scrutiny regarding their ability to manage harmful content amidst debates over free speech and user safety.
- The rise of social networking services has led to an emphasis on balancing user expression with responsible content moderation, especially as communities become more diverse.
The moderation challenges at UpScrolled illustrate a dilemma common to rapidly expanding social networks: balancing free speech with user safety. To prevent the further spread of hate speech and harmful content, the company must act decisively to strengthen its moderation processes.
