GARM Brand Safety Framework

Classify content using the Global Alliance for Responsible Media (GARM) standards. Protect advertisers with industry-standard risk categorization across 11 content categories and 4 risk levels.

Try GARM Detection
11 GARM categories · 4 risk levels

Industry-Standard Brand Safety

The GARM Brand Safety Floor + Suitability Framework provides a common language for the digital advertising ecosystem. Our API classifies images according to these standards, enabling publishers to protect advertiser relationships and advertisers to maintain brand integrity.
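For example, a classification request might look like the following Python sketch. The endpoint path, request fields, and response shape are illustrative assumptions, not the documented API:

```python
# Hypothetical GARM classification request; endpoint, field names, and
# response shape are placeholders for illustration only.
import requests

API_KEY = "your-api-key"  # replace with your real key

resp = requests.post(
    "https://api.example.com/v1/garm/classify",  # placeholder endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"image_url": "https://example.com/photo.jpg"},
    timeout=10,
)
resp.raise_for_status()

# Assumed response shape: one entry per detected GARM category,
# each with a risk level and a confidence score.
for entry in resp.json().get("categories", []):
    print(entry["category"], entry["risk_level"], entry["confidence"])
```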

Risk Levels

Brand Safety Floor

Content universally agreed to be inappropriate for advertising. Includes illegal content, CSAM, terrorism, and extreme hate speech.

High Risk

Content that most brands would want to avoid. Includes explicit violence, adult content, and strong profanity.

Medium Risk

Content that may be unsuitable for some brands. Includes moderate violence, suggestive content, and controversial topics.

Low Risk

Generally brand-safe content. May include mild references to sensitive topics but remains appropriate for most advertisers.
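As a sketch of how these four tiers can drive an ad-serving decision, the snippet below maps each level to an action. The level identifiers and the policy itself are illustrative assumptions, not part of the framework:

```python
# Illustrative mapping from the four GARM risk tiers to an ad-serving
# action; the identifiers and policy are examples, not recommendations.
def ad_decision(risk_level: str) -> str:
    if risk_level == "floor":
        return "block"       # Brand Safety Floor: never monetize
    if risk_level == "high":
        return "demonetize"  # most brands opt out
    if risk_level == "medium":
        return "restricted"  # serve only for brands that opt in
    return "serve"           # low risk: generally brand-safe

print(ad_decision("medium"))  # -> "restricted"
```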

GARM Categories

Adult & Explicit Sexual Content

Pornography, nudity, sexual content, and related adult material.

Floor: Illegal · High: Explicit · Med: Suggestive

Arms & Ammunition

Weapons, firearms, ammunition, and military equipment.

Floor: Illegal sales · High: Graphic use · Med: Display

Death, Injury & Military

Graphic violence, death, injury, and military conflict.

Floor: Extreme gore · High: Graphic · Med: Violence

Online Piracy

Copyright infringement, illegal downloads, and pirated content.

Floor: Piracy sites · High: Promotion

Hate Speech & Discrimination

Content promoting hatred based on protected characteristics.

Floor: Incitement · High: Hate symbols · Med: Offensive

Terrorism

Terrorist content, extremism, and radicalization material.

Floor: All terrorism

Drugs & Controlled Substances

Illegal drugs, drug use, and drug paraphernalia.

Floor: Illegal sales · High: Use/promotion · Med: References

Tobacco

Tobacco products, smoking, and vaping content.

High: Promotion · Med: Depiction · Low: Incidental

GARM Brand Safety FAQ

What is GARM?

The Global Alliance for Responsible Media (GARM) is a cross-industry initiative to address harmful content on digital platforms. GARM provides a common framework for classifying content suitability for advertising.

Who uses GARM standards?

Major brands, agencies, and publishers use GARM standards for brand safety decisions. Members include Unilever, P&G, GroupM, Publicis, Google, Facebook, and many others.

Can I customize risk thresholds?

Yes, our API returns both the category and risk level, allowing you to set custom thresholds. Some brands accept medium-risk content in certain categories while avoiding it in others.
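A minimal sketch of such a per-category policy, assuming the response shape from the request example above; the category identifiers and limits are illustrative:

```python
# Per-category suitability thresholds (illustrative identifiers/limits).
RISK_ORDER = {"low": 0, "medium": 1, "high": 2, "floor": 3}

BRAND_POLICY = {              # maximum acceptable risk per category
    "adult_explicit": "low",
    "arms_ammunition": "medium",
    "death_injury_military": "low",
}
DEFAULT_MAX = "medium"        # fallback for categories not listed above

def is_suitable(classifications: list[dict]) -> bool:
    """True if every detected category stays within the brand's limits."""
    for entry in classifications:
        limit = BRAND_POLICY.get(entry["category"], DEFAULT_MAX)
        if RISK_ORDER[entry["risk_level"]] > RISK_ORDER[limit]:
            return False
    return True
```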

Do you cover all 11 GARM categories?

Yes, we classify content across all 11 GARM categories: adult & explicit sexual content; arms & ammunition; crime & harmful acts; death, injury or military conflict; online piracy; hate speech & acts of aggression; obscenity & profanity; illegal drugs, tobacco & alcohol; spam or harmful content; terrorism; and debated sensitive social issues.
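For reference, the 11 categories could be modeled as constants like the sketch below; the member names and string values are assumptions for illustration, only the category list itself follows the framework:

```python
from enum import Enum

# The 11 GARM content categories as enum constants (illustrative names).
class GarmCategory(Enum):
    ADULT_EXPLICIT = "adult_explicit"
    ARMS_AMMUNITION = "arms_ammunition"
    CRIME_HARMFUL_ACTS = "crime_harmful_acts"
    DEATH_INJURY_MILITARY = "death_injury_military"
    ONLINE_PIRACY = "online_piracy"
    HATE_SPEECH = "hate_speech"
    OBSCENITY_PROFANITY = "obscenity_profanity"
    DRUGS_TOBACCO_ALCOHOL = "drugs_tobacco_alcohol"
    SPAM_HARMFUL = "spam_harmful"
    TERRORISM = "terrorism"
    DEBATED_SOCIAL_ISSUES = "debated_social_issues"
```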

Protect Your Brand

GARM-compliant content classification for advertising safety. Get started today.

Try Free Demo