Classify content using Global Alliance for Responsible Media standards. Protect advertisers with industry-standard risk categorization across 11 content categories and 4 risk levels.
The GARM Brand Safety Floor + Suitability Framework provides a common language for the digital advertising ecosystem. Our API classifies images according to these standards, enabling publishers to protect advertiser relationships and advertisers to maintain brand integrity.
Floor: Content universally agreed to be inappropriate for advertising. Includes illegal content, CSAM, terrorism, and extreme hate speech.
High Risk: Content that most brands would want to avoid. Includes explicit violence, adult content, and strong profanity.
Medium Risk: Content that may be unsuitable for some brands. Includes moderate violence, suggestive content, and controversial topics.
Low Risk: Generally brand-safe content. May include mild references to sensitive topics but is appropriate for most advertisers.
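The four tiers above form an ordered scale, which makes gating logic simple to express. The sketch below shows one way to map GARM risk levels to a brand-safety decision; the lowercase level names and the `max_acceptable` parameter are illustrative assumptions, not the actual API schema.

```python
# Sketch only: GARM risk levels ordered from safest to most severe.
# The string names here are assumptions, not the API's exact values.
RISK_ORDER = ["low", "medium", "high", "floor"]

def is_brand_safe(risk_level: str, max_acceptable: str = "medium") -> bool:
    """Return True if content is at or below the brand's maximum
    acceptable risk level. Floor content is never acceptable."""
    if risk_level == "floor":
        return False
    return RISK_ORDER.index(risk_level) <= RISK_ORDER.index(max_acceptable)

print(is_brand_safe("low"))            # within the default threshold
print(is_brand_safe("high"))           # exceeds the default threshold
print(is_brand_safe("floor", "high"))  # floor is always excluded
```

Because the scale is ordinal, a single index comparison is enough; only the Brand Safety Floor needs a special case, since no threshold should ever admit it.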
Adult & Explicit Sexual Content: Pornography, nudity, sexual content, and related adult material.
Arms & Ammunition: Weapons, firearms, ammunition, and military equipment.
Death, Injury or Military Conflict: Graphic violence, death, injury, and military conflict.
Online Piracy: Copyright infringement, illegal downloads, and pirated content.
Hate Speech & Acts of Aggression: Content promoting hatred based on protected characteristics.
Terrorism: Terrorist content, extremism, and radicalization material.
Illegal Drugs: Illegal drugs, drug use, and drug paraphernalia.
Tobacco & Vaping: Tobacco products, smoking, and vaping content.
What is GARM?
The Global Alliance for Responsible Media (GARM) is a cross-industry initiative to address harmful content on digital platforms. It provides a common framework for classifying content suitability for advertising.
Who uses GARM standards?
Major brands, agencies, and publishers use GARM standards for brand-safety decisions. Members include Unilever, P&G, GroupM, Publicis, Google, Facebook, and many others.
Can I set custom risk thresholds?
Yes. Our API returns both the category and the risk level, allowing you to set custom thresholds. Some brands accept medium-risk content in certain categories while avoiding it in others.
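Per-category thresholds like those described above can be expressed as a small lookup table. In this sketch, the category keys and the shape of the classification result are hypothetical examples of what a per-category GARM result might look like, not the actual API response format.

```python
# Sketch only: risk levels ordered from safest to most severe.
RISK_ORDER = ["low", "medium", "high", "floor"]

# Hypothetical brand policy: medium-risk tolerated for some
# categories, strict low-risk-only elsewhere.
BRAND_THRESHOLDS = {
    "death_injury_military": "medium",
    "adult_explicit": "low",
    "arms_ammunition": "low",
}
DEFAULT_THRESHOLD = "low"

def acceptable(classification: dict) -> bool:
    """classification maps a GARM category name to its detected
    risk level; reject if any category exceeds its threshold."""
    for category, level in classification.items():
        threshold = BRAND_THRESHOLDS.get(category, DEFAULT_THRESHOLD)
        if RISK_ORDER.index(level) > RISK_ORDER.index(threshold):
            return False
    return True

result = {"death_injury_military": "medium", "adult_explicit": "low"}
print(acceptable(result))  # medium is within this brand's tolerance here
```

Keeping the policy as data rather than code means each advertiser's tolerances can be adjusted without changing the gating logic.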
Do you cover all GARM categories?
Yes. We classify content across all 11 GARM categories: adult & explicit sexual content; arms & ammunition; crime & harmful acts; death, injury or military conflict; online piracy; hate speech & acts of aggression; obscenity & profanity; illegal drugs, tobacco, vaping & alcohol; spam or harmful content; terrorism; and debated sensitive social issues.
GARM-compliant content classification for advertising safety. Get started today.
Try Free Demo