
Shadowban Without Signals:
UX Failures in X’s Moderation Design

Role

Research, Product Design

Timeline

1 week

Platform

Mobile

Background 👀

In my fandom, we host trending parties during new series episodes—fans post hashtags, keywords, photos, and videos to push the topic onto X’s trending tab, boosting visibility for the show and its artists.

During one event, my engagement suddenly dropped. Suspecting a shadowban, I searched for answers but found no official guidance on why it happened or how to fix it.

This revealed a transparency gap in X’s moderation—a UX loophole where users are penalized without signals or clear recovery steps.

What Is a Shadowban and How It Affects Users?

A shadowban—also known as stealth or ghost banning—is when X (formerly Twitter) restricts the visibility of your content without informing you. Your posts remain posted but become nearly invisible to others, resulting in a significant drop in engagement and reach.


There are several types of visibility restrictions on X:

  • Search Suggestion Ban

  • Search Ban

  • Ghost Ban

  • Reply Deboosting

This case study focuses on the Search Suggestion Ban (SSB), widely considered the most difficult to recover from. Unlike other restrictions, which may lift within a day of inactivity, SSB can last a week or longer—significantly limiting your discoverability even when using hashtags or joining trending topics.

Although X publicly denied shadowbanning in 2018, many users continue to experience unexplained visibility loss—suggesting that shadowban-like behavior still exists under different terminology.

Who is usually affected by shadowbans?

Fans who host trending parties as part of sports, music, or entertainment fandoms.

Users who frequently post about political views have also reported repeated shadowbans.

Problem Statement

How might we give X users timely, clear, and actionable insights when their engagement or visibility drops, so they know why it happened and how to recover?

Goal
To identify gaps in X’s current shadowban experience and deliver a solution that improves user awareness, trust, and control.
Research 🕵
How Shadowbans Are Detected

Because X doesn’t notify users when they’re shadowbanned, most only realize it through indirect signs such as:

  • A sudden, unexplained drop in likes, retweets, and impressions

  • Their profile not appearing in search results for non-followers

  • Posts not showing under hashtags

To confirm a ban, users rely on third-party checkers; the one the community considers most accurate is https://shadowban.yuzurisa.com
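The core idea behind these checkers is simple: search for the exact @handle as a logged-out user and see whether the profile ever appears in the suggestions. A minimal sketch of that decision logic is below — the function name is hypothetical, and the suggestion list would come from a real search query that this sketch does not perform.

```python
# Hypothetical sketch of the logic behind third-party shadowban checkers:
# if an exact @handle match never appears in the search suggestions shown
# to a logged-out user, a Search Suggestion Ban (SSB) is likely.
def is_search_suggestion_banned(handle: str, suggestions: list[str]) -> bool:
    """Return True if `handle` is absent from the suggestion results.

    `suggestions` holds the screen names a logged-out search for `handle`
    returned — normally an exact match ranks first for an unrestricted account.
    """
    normalized = handle.lstrip("@").lower()
    return all(s.lstrip("@").lower() != normalized for s in suggestions)

# The handle is missing from its own suggestion results → likely SSB.
print(is_search_suggestion_banned("@fan_account", ["fan_acct2", "fanpage"]))    # True
# The handle appears (case-insensitive match) → no SSB.
print(is_search_suggestion_banned("@fan_account", ["@Fan_Account", "fanpage"]))  # False
```

A real checker repeats this against X's live search while unauthenticated, which is why results can change as the platform's filtering changes.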

Some users go as far as creating alternate accounts just to confirm their content is hidden, as user @GoddessKatya00 described.
Why It Happens — And What Might Trigger It

Shadowbanning is a form of soft moderation designed to quietly limit content that may violate platform rules (e.g., misinformation, hate speech, spam) without alerting the user or causing public backlash. However, algorithmic enforcement makes it hard to distinguish between harmful and harmless content—leading to false positives and confusion.

While X provides no transparency or feedback, users believe SSB may be triggered by:

  • Posting explicit content without flagging it as sensitive

  • Using offensive language or flagged keywords

  • Overuse of hashtags, frequent posting, or external links

  • Posting promotional or spam-like content
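For illustration, the community-reported triggers above can be expressed as a simple self-audit heuristic. Everything here — the function name, the thresholds, and the word list — is a hypothetical sketch of what users believe, not anything X has published.

```python
import re

# Hypothetical self-audit over the community-reported SSB triggers:
# flagged keywords, hashtag overuse, and external links. The threshold
# values are illustrative guesses, not documented platform limits.
def audit_post(text: str, flagged_words: set[str], max_hashtags: int = 3) -> list[str]:
    """Return the community-reported risk factors found in `text`."""
    risks = []
    words = {w.lower().strip("#.,!?") for w in text.split()}
    if words & {w.lower() for w in flagged_words}:
        risks.append("flagged keyword")
    if len(re.findall(r"#\w+", text)) > max_hashtags:
        risks.append("hashtag overuse")
    if re.search(r"https?://", text):
        risks.append("external link")
    return risks

print(audit_post("Stream now! #tag1 #tag2 #tag3 #tag4 https://example.com",
                 flagged_words={"spamword"}))
# → ['hashtag overuse', 'external link']
```

Because X gives no feedback, a heuristic like this is the best users can do: it can only flag what the community suspects matters, never confirm what the platform actually penalizes.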

Getting Unshadowbanned: Community Speculation and Platform Response

There are no official guidelines on how to lift a shadowban, but based on shared user experiences, recovery may involve:

  • Deleting tweets with explicit content, flagged words, or excessive hashtags

  • Avoiding external links and slowing down posting frequency

  • Taking a short break from posting

Even with these efforts, recovery is inconsistent—and often slow.

Internally, X refers to shadowban mechanisms as "visibility filtering", a term surfaced during the 2022 Twitter Files investigation. These files revealed internal labels such as “Search Blacklist”, “Do Not Amplify”, and “Trends Blacklist”, all used to reduce user visibility without notification (Breitbart).

In 2023, X began testing labels for tweets that were visibility-restricted due to policy violations—starting with hateful conduct (The Verge). While a step toward transparency, these labels are not widely implemented, leaving most users unaware when they’re affected and with no clear resolution path.

Customer Journey
How others do it
Public Sentiment

Most users only discover they’re shadowbanned after noticing a sudden drop in engagement. With no official notice or recovery guidance from X, many turn to third-party tools like shadowban.yuzurisa.com to confirm if they’ve been affected.

Even verified users who reach out via Premium Support or the Help Desk report receiving vague responses and non-working troubleshooting steps—adding to the frustration.

Key Takeaways:

  • Users feel penalized without explanation.

  • There’s no clear feedback loop or appeal process.

  • Lack of transparency drives confusion.

Proposed Solution 🛠️
Earn trust through transparency

In this solution, I use X’s existing notifications and settings pages as entry points to a new “Account Status” page, which helps users monitor their account’s health standing.

If they violate a guideline, it aims to help them identify...

👍🏼

Which post broke the rules, and which rule was it?

👍🏼

What are the consequences and implications of the violation?

👍🏼

How can you restore your account, and what should you do next?

Positive Reinforcement

When a user’s account has no guideline violations, we could display a celebratory illustration to acknowledge and reinforce their positive standing.

Next steps ➡️

Conduct user testing with individuals who have experienced shadowbans to gather feedback on the effectiveness and usability of the proposed solutions.

Thank you!

Created by Elaine, 2025
