
Community Guidelines

Effective April 25, 2026

How we keep boopr a good place to be.


These guidelines are the rules that apply to everyone on boopr, alongside our Terms of Service and Privacy Policy. They exist to keep boopr a place worth inviting someone into.

Who boopr is for:

boopr is for users 13 and over, in compliance with COPPA. Users between 13 and 17 confirm that a parent or guardian has reviewed these guidelines with them. boopr does not collect a date of birth at registration; you are responsible for representing that you meet the age requirement when you create your account. If we discover that an account belongs to someone under 13, we terminate it and, upon confirmation, delete the data within 30 days. Full age policy lives in the Terms of Service.

A note on how boopr works:

Everything you share on boopr (posts, photos, your profile) is encrypted on your device before it ever reaches our servers. We cannot see your content, and we do not scan it. Moderation here has to work a little differently than on Instagram, X, or Reddit, where the company reads what you post.

We rely on you to report violations and describe what happened, on your community to hold one another accountable, and on our invite system to keep bad actors out. When you report, we receive only metadata and your description; no encrypted content is decrypted or sent to our servers.

1. The Short Version

Be a good friend. Don't use boopr to hurt, harass, exploit, or deceive people. Don't share content that's illegal or harmful. If someone is making boopr worse, report them. If you get reported, we'll look into it fairly.

That's the spirit of it. The rest of this page fills in the specifics.

2. Prohibited Content

Do not create, share, or store the following on boopr:

Illegal Content

  • Child sexual abuse material (CSAM). Zero tolerance. Accounts involved in CSAM or the sexual exploitation of minors are immediately and permanently terminated, and we report to the National Center for Missing & Exploited Children (NCMEC) and law enforcement as required by 18 U.S.C. § 2258A. Reported material is preserved for at least one year as required by the REPORT Act (Pub. L. 118-59).
  • Non-consensual intimate imagery (NCII). This includes real, AI-generated, computer-generated, or digitally altered intimate images shared without the subject's consent. This is prohibited under federal law (the TAKE IT DOWN Act, Pub. L. 119-12, codified at 47 U.S.C. § 223a) and will result in immediate account termination. Valid removal requests are processed within 48 hours; see Terms of Service Section 7 for the designated point of contact and request format.
  • Sale or facilitation of controlled substances. Selling, offering to sell, distributing, or facilitating the transfer of controlled substances — including but not limited to fentanyl, fentanyl analogues, opioids, and counterfeit pills purporting to be prescription medication — is prohibited. Accounts associated with such activity are subject to immediate permanent termination, and we will cooperate with law enforcement consistent with 21 U.S.C. § 841 and § 843(b).
  • Sex trafficking and commercial sexual services. Content or conduct that promotes or facilitates prostitution or sex trafficking is prohibited and will result in immediate permanent termination, consistent with 18 U.S.C. § 2421A.
  • Other illegal content. Content that violates applicable law, including but not limited to: credible threats of violence, terrorist content, or content that facilitates human trafficking.

Harmful Content

  • Hate speech. Content that attacks, dehumanizes, or incites violence against individuals or groups based on race, ethnicity, national origin, religion, sex, gender identity, sexual orientation, disability, or serious medical condition.
  • Harassment and bullying. Sustained, targeted, or severe abuse directed at specific individuals. This includes repeated unwanted contact, intimidation, and dogpiling.
  • Threats of violence. Direct or indirect threats of physical harm to any person.
  • Self-harm and suicide promotion. Content that promotes, glorifies, or provides instructions for self-harm or suicide. Sharing personal struggles or seeking support is not a violation — encouraging others to harm themselves is. If you or someone you know is in crisis: the 988 Suicide & Crisis Lifeline is available 24/7 by calling or texting 988, or chatting at 988lifeline.org. The Crisis Text Line is reachable by texting HOME to 741741. boopr is not a crisis service and is not a substitute for professional help, emergency services, or 911.
  • Dangerous misinformation. Deliberately false content likely to cause imminent physical harm, such as dangerous medical hoaxes or fabricated emergency instructions.

3. Prohibited Behavior

The following behaviors are not allowed on boopr:

  • Stalking and surveillance. Using boopr's features (including location sharing) to monitor, track, or surveil someone without their ongoing, voluntary consent.
  • Doxxing. Sharing someone's private personal information (home address, phone number, workplace, real name if not publicly known) without their consent, regardless of intent to cause harm.
  • Coordinated harassment. Organizing or encouraging others to target, mass-report, or pile onto another user.
  • Impersonation. Pretending to be someone else (another user, a public figure, or a boopr team member) with the intent to deceive.
  • Spam and platform manipulation. Sending bulk unsolicited content, creating multiple accounts, using automation to interact with the platform, or manipulating the invite system.
  • Invite abuse. Selling, publicly distributing, or trading invite codes. Invites are personal; you vouch for the people you bring in.
  • Circumventing enforcement. Creating new accounts to evade a ban, using modified clients to bypass safety controls, or helping banned users regain access.

4. Invite Accountability

boopr is invite-only by design. Every person on the platform was invited by someone who vouched for them. This creates a chain of accountability that most platforms don't have.

When you invite someone, you're vouching for them. If someone you invite seriously or repeatedly violates these guidelines, we may consider that context when reviewing your account.

This doesn't mean you're liable for everything your invitees do. It means we expect you to invite people you actually know and trust, not strangers from the internet.

Invite accountability in practice:

We look at patterns, not isolated incidents. If one person you invited gets reported for a minor issue, that's normal. If multiple people you invited are involved in serious violations, or if you're distributing invites in bulk to people you don't know, that's a different situation.

5. How to Report

Because your posts, photos, and profile are encrypted on your device before they reach us, we can't see your content. Reporting is how you let us know something is wrong; the rest of this section explains what a report contains and what happens after you file one.

Reporting in the App

boopr provides in-app reporting on posts, profiles, and boops. When you report, you'll select a reason (harassment, spam, inappropriate content, impersonation, CSAM, NCII, or other) and write a description of what happened (up to 1,000 characters). Your description is required and is the primary information we use to assess the situation.

We record your user ID, the reported user's ID, the content ID, the reason you selected, your description, and the time of the report. No encrypted content is decrypted or sent to our servers. We make enforcement decisions based on your description, report patterns across multiple users, and account-level metadata.

Reporting Outside the App

You can also email [email protected] with details about the issue. For CSAM or child exploitation, email [email protected] with the subject line "CSAM Report" and report to the NCMEC CyberTipline.

What Happens After You Report

A member of the boopr team reviews your report. We may reach out to you or the reported user for more context. We aim to act on reports within 48 hours, though complex cases may take longer. You won't always hear back with details about what action was taken because we balance transparency with the reported user's privacy.

Protecting reporters:

We do not reveal who filed a report to the person being reported. Retaliating against someone for filing a report is itself a violation of these guidelines.

6. Self-Protection Tools

You don't have to wait for us to act. boopr gives you tools to protect yourself:

  • Block: Blocked users are hidden from your feed, friend list, and notifications. Blocking is enforced on your device today and persists until you choose to unblock; the other user is not notified. Blocking does not on its own sever the friend connection or rotate your profile encryption key — if you also want to revoke that person's access to your profile and future posts, use Remove Friend (below). We may add server-side block enforcement in the future.
  • Remove friend: Removes someone from your friend list. They lose access to your profile and posts, and your profile encryption key automatically rotates so they can't decrypt future content.
  • Screenshot detection: boopr detects when someone screenshots your content and notifies you. On sensitive screens, content is also blanked during screen capture.

These tools work instantly and don't require a report. Use them freely.

7. Enforcement

When we determine that a violation has occurred, we take action proportional to its severity. We consider the nature of the violation, whether it's a first offense or a pattern, the impact on other users, and any relevant context.

Warning

For first-time or minor violations, we may issue a warning explaining what was wrong and what we expect going forward. Warnings are documented on your account.

Temporary Restriction

For repeated or more serious violations, we may temporarily restrict your account. This could include limiting your ability to post or invite new users for a set period.

Permanent Ban

For severe violations, repeated offenses after warnings, or any involvement with CSAM or NCII, we permanently terminate your account. Banned users may not create new accounts.

What we can and cannot do:

Because your content is encrypted with keys only you and your friends hold, we cannot proactively scan for violations. We rely on your reports and descriptions to understand what happened. We make enforcement decisions using report descriptions, patterns across multiple reporters, account trust signals (such as the number of distinct reporters, the enforcement history of accounts in your invite chain, your account age, and your recent activity rates), and behavioral metadata. We can take action at the account level: warning you, restricting features, or permanently banning the account. We never access or decrypt your content as part of this process.

8. Appeals

If you believe an enforcement action against your account was made in error, you can appeal.

How to Appeal

Email [email protected] with your username and a clear explanation of why you believe the decision was wrong. Include any relevant context or evidence.

What to Expect

  • We'll acknowledge your appeal within 48 hours.
  • A team member other than the one who made the original decision will review your case.
  • We aim to resolve appeals within 7 business days.
  • You'll receive a final decision with an explanation. Appeal decisions are final.

What's Not Appealable

Account terminations for CSAM, child exploitation, or non-consensual intimate imagery are not eligible for appeal.

9. Our Moderation Approach

Moderation on an encrypted platform works differently than what you're used to on Instagram, X, or Reddit. Those platforms can read what you post and run automated systems to scan for violations. We can't, and we don't want to build the infrastructure that would let us.

The model leans on four things that don't require us to read your content. The invite-only signup cuts off most drive-by abuse, spam, and bot activity before it starts, since every user was brought in by someone who already had skin in the game. In-app tools let you block (currently enforced on your device), remove friends (which rotates your profile key so they can't decrypt anything you post afterward), and pick the audience for each post.

When something does go wrong, reports are how we learn about it. A human on our team reads every one. We base decisions on what you describe, patterns across multiple reporters, and account-level signals (such as the number of distinct reporters, the enforcement history of accounts in your invite chain, your account age, and recent activity rates) — none of which require decrypting content. Enforcement outcomes are also recorded against the inviter's chain: people who repeatedly invite bad actors may see their account flagged for review. The invite system is the feedback loop.

This approach has a real cost: proactive content scanning and automated takedowns aren't options for us, so we won't catch things a scanning system would. That's the trade we made when we decided we couldn't read your posts, and we think an invite-only community with human moderators is the right side of it.

10. Changes to These Guidelines

We may update these guidelines as boopr grows and as we learn from how the community evolves. When we make changes, we'll update the effective date at the top of this page and notify users through the app. Continued use of boopr after changes take effect constitutes acceptance of the updated guidelines.

11. Contact Us

Questions about these guidelines? Get in touch:

  • General questions: [email protected]
  • Safety concerns: [email protected]
  • CSAM or child exploitation: [email protected]
  • Appeals: [email protected]
© 2026 Boopr LLC