User agent parser. Browser, OS, device, bot.
Paste any user-agent string. We parse browser + version, OS + version, rendering engine, and device class, plus a bot-detection layer for the common crawlers (Google, Bing, AI bots, monitoring tools). The "load my UA" button auto-fills your current user-agent. Pure browser regex.
Paste from server logs, Postman, browser DevTools (navigator.userAgent), or anywhere a UA appears.
Privacy: parsing happens in your browser. Nothing is sent or logged.
Why is it so messy? Compatibility chains.
The historical chain. Netscape (codename Mozilla) shipped UAs beginning 'Mozilla/' in the 1990s. Sites started detecting 'Mozilla' to enable rich-content features. Internet Explorer wanted those features too, so it shipped a UA that started 'Mozilla/4.0 (compatible; MSIE...)'. Safari shipped with 'Mozilla/5.0 ... AppleWebKit ... Safari'. Chrome wanted compatibility with Safari-targeted sites, so it shipped 'Mozilla/5.0 ... AppleWebKit ... Chrome ... Safari'. Chromium Edge appended 'Edg' to the end of the Chrome UA: a deliberately new token, so that sites with workarounds keyed to the legacy 'Edge' (EdgeHTML) token wouldn't apply them, while sniffers looking for 'Chrome' keep working. Every browser inherits the entire prior chain. The string looks paranoid because it is.
Modern reduction. Chrome's User-Agent Reduction initiative (rolled out 2022-2024) strips fine-grained version + device info from the UA. So Chrome 120's UA might say Chrome/120.0.0.0 with the minor versions zeroed; the actual full version is in the Sec-CH-UA-Full-Version-List header if the site requests it via Client Hints. Goal: less fingerprinting surface. For analytics segmentation by major version, the reduced UA is enough.
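Server-side, the opt-in is an Accept-CH response header; a minimal Node sketch of the round-trip (the port and response body are illustrative, and real deployments need HTTPS for high-entropy hints):

```ts
import { createServer } from "node:http";

const server = createServer((req, res) => {
  // Ask Chromium to include the full version list on subsequent requests.
  res.setHeader("Accept-CH", "Sec-CH-UA-Full-Version-List");
  // Low-entropy hints (Sec-CH-UA, Sec-CH-UA-Platform) arrive by default;
  // the full version list only shows up after the client has seen Accept-CH.
  const full = req.headers["sec-ch-ua-full-version-list"];
  res.end(typeof full === "string" ? full : "no high-entropy hint yet");
});

server.listen(8080);
```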
The pieces we extract. Browser name (the last identifying token in the chain: Chrome, Firefox, Edge). Browser version (the number after the slash). OS family (the Windows NT token maps to a release name; 'Mac OS X', 'iPhone', and 'iPad' tokens identify Apple platforms). OS version (parsed from the OS section of the string). Rendering engine (Blink, WebKit, Gecko, inferred from token presence). Device class (Mobile / Tablet / Desktop, based on Mobile / Tablet keywords in the UA).
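As a rough illustration of that extraction order, a deliberately simplified TypeScript sketch (the token tables are a tiny sample, OS extraction is omitted, and real parsing needs many more rules):

```ts
type Parsed = { browser: string; version: string; engine: string; device: string };

function parseUA(ua: string): Parsed {
  // Most-specific token first: 'Edg' must be checked before 'Chrome',
  // and 'Chrome' before 'Safari', because each UA contains its ancestors' tokens.
  const m =
    ua.match(/Edg\/([\d.]+)/) ??
    ua.match(/Chrome\/([\d.]+)/) ??
    ua.match(/Firefox\/([\d.]+)/) ??
    ua.match(/Version\/([\d.]+).*Safari/); // Safari carries its version after 'Version/'
  const names: Record<string, string> = { Edg: "Edge", Chrome: "Chrome", Firefox: "Firefox", Version: "Safari" };
  const browser = m ? names[m[0].split("/")[0]] : "Unknown";
  // Engine is inferred from token presence, not stated directly.
  const engine = /Firefox\//.test(ua) ? "Gecko"
    : /Chrome\/|Edg\//.test(ua) ? "Blink"
    : /AppleWebKit/.test(ua) ? "WebKit" : "Unknown";
  // Tablet before Mobile: iPad UAs historically contain 'Mobile' too.
  const device = /Tablet|iPad/.test(ua) ? "Tablet" : /Mobile/.test(ua) ? "Mobile" : "Desktop";
  return { browser, version: m?.[1] ?? "", engine, device };
}
```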
Bots get a separate path. Most bots add their own identifier (Googlebot, GPTBot, Bingbot, AhrefsBot) inside a parenthesized comment near the start. We detect ~25 common bots first, before browser parsing. The bot's UA may also contain a 'compatible' Chrome / Safari token for sites that gate by browser; we ignore that and report the bot identity as the canonical answer.
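The ordering matters, so here is a sketch of the bot-first branch (a handful of patterns standing in for the full ~25-entry table):

```ts
// Illustrative sample of bot patterns; the real table is larger.
const BOTS: Array<[RegExp, string]> = [
  [/Googlebot/i, "Googlebot"],
  [/Bingbot/i, "Bingbot"],
  [/GPTBot/i, "GPTBot"],
  [/ClaudeBot/i, "ClaudeBot"],
  [/AhrefsBot/i, "AhrefsBot"],
];

function identify(ua: string): string {
  // Bots first: a bot UA often carries 'compatible' Chrome/Safari tokens,
  // so browser detection alone would misreport it as a human browser.
  for (const [pattern, name] of BOTS) {
    if (pattern.test(ua)) return `bot: ${name}`;
  }
  return "browser (continue with normal UA parsing)";
}
```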
Four jobs this tool covers.
Job 1: Log analysis. Server logs, CDN access logs, and application logs all contain UA strings. Paste a representative sample to confirm what client made the request. Useful for incident triage: was the request that triggered an alert a legitimate browser, an unfamiliar bot, or a monitoring tool you forgot about?
Job 2: Bot triage in analytics. Analytics shows a traffic spike. Pull the most frequent user-agent strings from the spike and paste them here to identify whether it's human (a marketing campaign worked) or bot (Common Crawl, a new monitoring vendor, scraping). Bot traffic should be excluded from conversion-rate denominators; missing this distorts every CRO calculation downstream.
Job 3: Browser-bug investigation. A user reports a layout bug. Their UA tells you the exact browser + version + engine — you can then reproduce in the same browser, check the engine's known issues at bugs.chromium.org or bugzilla.mozilla.org, or apply browser-specific CSS. Pair with our HTTP Headers Checker for the response side of the bug investigation.
Job 4: AI-crawler audit. The growing list of AI training and live-fetch crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended) deserves explicit handling in your robots.txt and analytics. Paste any unfamiliar UA from your logs to identify whether it's an AI crawler — then decide whether to allow, block, or rate-limit per your AI-content-licensing stance. Pair with our SEO service for the crawler-policy strategy conversation.
Six questions users ask.
Why are user-agent strings such a mess?
Historical compatibility. Netscape (the original 'Mozilla' browser) shipped UAs like 'Mozilla/4.0', and every browser since has prepended 'Mozilla' for compatibility with sites that detect 'Mozilla' to enable rich features. Then Chrome added 'AppleWebKit' for compatibility with Safari-targeted sites, 'Safari' for the same reason, and 'Chrome' for the actual identification, so a modern Chrome UA reads 'Mozilla/5.0 ... AppleWebKit/537.36 ... Chrome/X ... Safari/537.36'. The cumulative compatibility chain produces strings that look paranoid. Use a parser; don't try to read them by eye.
Why do you detect bots?
Most analytics implementations exclude bot traffic from their reports; Googlebot's billions of monthly crawl requests would otherwise drown out the human signal. The bot-detection layer here matches the common patterns: Googlebot, Bingbot, AI training crawlers (GPTBot, ClaudeBot, Google-Extended, CCBot, PerplexityBot, Applebot), social link previewers (facebookexternalhit, Twitter / X bot, LinkedIn), monitoring tools (UptimeRobot, Pingdom), and SEO crawlers (AhrefsBot, SemrushBot). Knowing whether a request is a bot is the first step in segmenting analytics.
What are Client Hints and why do modern browsers send 'Chrome 120' instead of 'Chrome 120.0.6099.71'?
User-Agent Reduction is a Chrome initiative (rolled out 2022-2024) that strips fine-grained version and device info from the UA string and moves it to opt-in HTTP request headers (User-Agent Client Hints / UA-CH). The goal is reducing fingerprinting. So Chrome 120's UA might say 'Chrome/120.0.0.0' (zeroed minor versions); the actual full version is in the Sec-CH-UA-Full-Version-List header if you request it. For most analytics use cases the major version is enough. For browser-bug investigation, you may need the full version via Client Hints.
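Client-side, Chromium exposes the high-entropy values through navigator.userAgentData; a feature-detected sketch (Firefox and Safari don't implement it, so the fallback path matters):

```ts
async function fullBrowserVersion(): Promise<string | undefined> {
  // navigator.userAgentData is Chromium-only; typed loosely here because
  // the UA-CH types aren't in every TypeScript lib version yet.
  const uaData = (navigator as any).userAgentData;
  if (!uaData) return undefined; // Firefox/Safari: fall back to the UA string
  const { fullVersionList } = await uaData.getHighEntropyValues(["fullVersionList"]);
  // e.g. [{ brand: "Google Chrome", version: "120.0.6099.71" }, ...]
  return fullVersionList?.find((b: { brand: string }) => b.brand.includes("Chrome"))?.version;
}
```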
Can a UA string be faked?
Trivially. The UA is a request header set by the client; any HTTP client can send any string. Browsers let users override it via DevTools (Device Mode in Chrome) or extensions. Bots routinely fake legitimate browser UAs to bypass simple bot filters. For real bot detection, combine UA + behavioral signals (request patterns, JS execution, IP reputation) — UA alone is a hint, not a proof. Same for browser-version detection: trust the UA for analytics segmentation, distrust it for security gating.
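For a sense of how little it takes, a Node sketch sending a fake Googlebot UA (example.com stands in for any target; Node's fetch allows arbitrary User-Agent values, unlike browser fetch, which treats it as a forbidden header):

```ts
// Claim to be Googlebot from any script: the server can't tell from the UA alone.
const res = await fetch("https://example.com/", {
  headers: {
    "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
  },
});
console.log(res.status); // served as if we were Googlebot, absent IP verification
```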
What's the difference between 'browser' and 'engine'?
Browser is the user-facing app (Chrome, Firefox, Safari, Edge). Engine is the rendering engine that interprets HTML / CSS / JavaScript (Blink for Chrome / Edge / Opera, Gecko for Firefox, WebKit for Safari). Two browsers on the same engine render pages similarly because the engine does the work. So Chrome bug fixes show up in Edge a few weeks later because they share Blink; Safari bug fixes don't transfer to Chrome because they use different engines. For compatibility testing, the engine matters more than the browser brand.
Is the UA I paste sent anywhere?
No. Parsing happens entirely in your browser via local regex. The page is static HTML; the only network request is the initial page load. Safe for parsing UAs from internal logs, partner integrations, or any context where you don't want the data shared. We never see your input.
Three mistakes we see most.
User-agent strings are trusted far more often than they deserve to be, and blocked far more often than they should be. Roughly 30% of bot-management configurations we audit are blocking GPTBot, ClaudeBot, PerplexityBot, or CCBot wholesale, which means the site has voluntarily exited AI Overviews citation eligibility, Perplexity answer-cards, and Claude search results.
Mistake 1, blocking all AI crawlers in robots.txt: a knee-jerk Disallow: / for User-agent: GPTBot in 2023-2024 made some sense when "AI training" was the only narrative. In 2026, blocking AI crawlers also blocks AI-Overview citation eligibility, Perplexity answer-cards, and ChatGPT search results, all of which now drive measurable referral traffic. Google's special-crawlers documentation distinguishes Google-Extended (training) from Googlebot (search); block one, not both. The same logic applies across vendors.
Mistake 2, trusting the Googlebot UA without IP verification: any client can spoof "Googlebot" in the User-Agent header. Per Google's Verifying Googlebot documentation, the authoritative checks are a forward-confirmed reverse DNS lookup or a match against Google's published IP ranges. Treating the UA alone as truth means letting through scrapers that pretend to be Googlebot to bypass rate-limits. The parser on this page flags claimed bots and links to the verification process; never gate access on UA alone for security-sensitive paths.
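A minimal Node sketch of the forward-confirmed reverse DNS check (IPv4 only; production code should also consult Google's published IP list and cache results):

```ts
import { promises as dns } from "node:dns";

async function isVerifiedGooglebot(ip: string): Promise<boolean> {
  try {
    // Step 1: reverse lookup. The PTR must be under googlebot.com or google.com.
    const [hostname] = await dns.reverse(ip);
    if (!hostname || !/\.(googlebot|google)\.com$/.test(hostname)) return false;
    // Step 2: forward-confirm. The hostname must resolve back to the same IP.
    const addresses = await dns.resolve4(hostname);
    return addresses.includes(ip);
  } catch {
    return false; // no PTR record or lookup failure: treat as unverified
  }
}
```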
Mistake 3, regex-parsing UA strings in application code: UA strings are compatibility chains (the User-Agent header itself is defined in RFC 9110) that have grown organically since 1993. "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" mentions Mozilla, Apple, Chrome, and Safari in one Chrome string. Hand-rolled regex eventually breaks. Use a maintained library (ua-parser-js, uap-core, or a platform-native equivalent) and treat the UA as a hint, with Client Hints (Sec-CH-UA) as the structured replacement going forward.
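For example, with ua-parser-js (a sketch; check the library's docs for the exact import shape in your installed version):

```ts
import { UAParser } from "ua-parser-js";

// getResult() returns structured browser / engine / os / device fields,
// so none of the compatibility-chain regex lives in your application code.
const ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36";
const { browser, engine, os } = new UAParser(ua).getResult();
console.log(browser.name, browser.version); // Chrome 120.0.0.0
console.log(engine.name, os.name);          // Blink Windows
```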
When to actually use this: debugging analytics anomalies (sudden spike in "unknown browser" usually means a UA-detection regex broke), forensics on suspicious requests in server logs, auditing a robots.txt or WAF rule, and explaining a strange-looking visitor to a non-technical stakeholder. Our SEO and Web development engagements run a quarterly AI-crawler audit on every client to keep AIO citation eligibility intact.
Related Digital Heroes services + reading: See our SEO service for crawler-policy work and our Web development service for bot-management architecture. Sibling tools: HTTP Headers Checker and Robots.txt Tester.