Automated Silent Moderation
Cliqstr was designed to be a safe, private space for everyone: families, friends, and communities.
Automated Silent Moderation is one of the tools we use to support safety, quietly and responsibly. It is designed to run in the background and does not speak to users or replace human judgment.
Important Notice
Cliqstr is not a crisis hotline, therapist, or emergency response service. Cliqstr does not contact 911 or local emergency services on a user's behalf.
If you or someone else may be in immediate danger, call 911 (or local emergency services).
Automated systems can produce false positives and false negatives, and they cannot detect every harmful situation. Notifications (including parent/guardian notifications in eligible child safety scenarios) may fail or be delayed.
Safety and moderation features rely in part on third-party systems and may be unavailable, delayed, or degraded. Cliqstr does not promise continuous availability of automated moderation features or detection of all harmful content.
What Is Automated Silent Moderation?
Automated Silent Moderation is a background safety system that helps identify potential risks earlier without interrupting anyone or replacing human judgment. It runs behind the scenes and does not chat with, advise, or interact directly with users.
For youth accounts, it surfaces signals so parents or guardians can decide; it does not interact with minors.
Think of it as a smoke detector, not a surveillance camera.
What Automated Silent Moderation Does
- Runs silently in the background
- Does not interrupt normal activity
- Helps identify patterns that may indicate safety concerns
- Uses rule-based safety checks, behavioral pattern detection, and automated classifiers
- Supports safety features such as Red Alert resources and, for youth accounts, Parent HQ (in supported flows)
- Alerts the right people when something may need attention (e.g., cliq admins, or parents/guardians for youth accounts)
- Helps humans intervene earlier, not automatically
- Can route flagged content into human review workflows before actions are taken
Automated Silent Moderation exists to support our users and communities, not to control them.
What Automated Silent Moderation Does NOT Do
This is just as important.
- Does not chat with users
- Does not give advice to anyone
- Does not respond to messages
- Does not interpret emotions
- Does not replace parents, guardians, or account holders
- Does not make enforcement decisions or issue punishments
- Does not spy on or monitor anyone in real time
- Does not build psychological or behavioral profiles
Automated systems on Cliqstr are designed to support decisions, not become an authority over any user.
Who Is in Control?
When our rules are violated, Cliqstr is responsible for enforcement decisions, not the automated system, and not parents or guardians.
Automated Silent Moderation:
- can surface safety signals
- can flag potential patterns
- can notify the right people (e.g., cliq admins, or parents/guardians for youth accounts)
But it cannot act on its own. Only human moderators at Cliqstr decide whether to remove content, warn, or suspend.
For youth accounts, we may notify parents or guardians so they can respond with their child; that does not change the fact that enforcement decisions (what stays on the platform, who keeps access) are made by Cliqstr.
Cliqstr is intentionally designed so humans at Cliqstr remain the decision-makers on rule enforcement.
How Cliqstr Is Different from Other Platforms
Some platforms allow AI systems to:
- talk directly with users
- give advice or simulate emotional support
- replace human judgment
Cliqstr does not allow this.
Automated Silent Moderation is designed to run in the background and does not speak to users or replace human judgment; it flags patterns so people can decide.
Transparency
We know people have concerns about AI and safety. Cliqstr was intentionally designed so our moderation supports you: it does not talk to users, does not give advice, and does not replace human judgment.
Automated Silent Moderation is:
- quiet: users don't see it
- conservative: errs on the side of caution
- transparent: we tell you exactly what it does
- human-first: you and your community stay in control
It exists to provide peace of mind, not anxiety.
For more on how we protect data, see our Privacy Policy.
Frequently Asked Questions
Is AI watching or monitoring me on Cliqstr?
No. Automated Silent Moderation does not watch, listen to, or interact with you. It runs in the background and flags patterns; humans review and decide what to do.
Does automated moderation talk to users or give advice?
No. Automated systems on Cliqstr do not talk to users or provide advice to anyone, including minors.
Who makes decisions if something is flagged?
Cliqstr is the final decision maker when our rules are violated. Human moderators review flagged content and decide whether to remove content, warn users, or suspend accounts. Moderation applies to all users; our process may include parent or guardian notification for youth accounts so they can respond with their child, but enforcement decisions (what stays up, who stays on the platform) are made by Cliqstr.
Can I turn off automated moderation?
No. Automated Silent Moderation is a core safety feature for all Cliqstr users. It runs quietly in the background and does not change how you use the app; you won't see it.
Cliqstr believes safety works best when technology supports people and communities, not when it replaces them.
