
4.2 User Agent Filtering



📖 Overview

The User Agent Filtering module allows you to control access to your website or application based on the User-Agent header included in HTTP requests. This header typically identifies the browser, tool, bot, or crawler making the request. By allowing or blocking specific User Agents, you gain powerful control over how your application interacts with browsers, bots, and potential attackers.
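
For context, here are a few representative User-Agent values of the kind the rules below are matched against. These are illustrative examples only; real strings vary by client and version.

    # Representative User-Agent strings (illustrative; actual values vary by version):
    BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36")
    CRAWLER_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    TOOL_UA = "curl/8.5.0"
    SCRIPT_UA = "python-requests/2.31.0"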


✅ Allowed User Agent List

This section enables you to define trusted User Agents that should always be allowed access, even if other security mechanisms are in place.

Use Cases:

  • Allowing access to legitimate bots:

    • Googlebot

    • Bingbot

    • Slackbot

  • Permitting internal testing tools or scanners

Configuration Options:

  • Enter a regex pattern to match the User-Agent string

  • Select a sensitivity level from the dropdown

Example: Allowing ^Googlebot.* ensures only Google's official crawler is accepted.
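
To illustrate how such an allow rule behaves, the sketch below applies the ^Googlebot.* pattern with Python's re module. It is a simplified model of the matching step, not ShieldsGuard's own code, and the sample strings and fall-through behavior are assumptions for demonstration.

    import re

    # Hypothetical allow rule as configured above: regex, case sensitive.
    allow_rule = re.compile(r"^Googlebot.*")

    samples = [
        "Googlebot/2.1 (+http://www.google.com/bot.html)",  # starts with "Googlebot" -> allowed
        "curl/8.5.0",                                        # no match -> other rules apply
    ]

    for ua in samples:
        if allow_rule.search(ua):
            print(f"ALLOW             {ua}")
        else:
            print(f"OTHER RULES APPLY {ua}")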


🚫 Blocked User Agent List

This section allows you to define unauthorized User Agents that should be completely blocked from accessing your system.

Use Cases:

  • Blocking known scraping tools or headless browsers:

    • curl

    • python-requests

    • Scrapy

  • Preventing spam bots or fake search engine crawlers

Configuration Options:

  • Define a regex pattern for partial or full User-Agent strings

  • Select a sensitivity level from the dropdown

Example: Blocking .*bot.* will catch most generic bots and automated tools.
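
A similar sketch for a block rule, again using Python's re module purely as an illustration; the helper function and sample strings are not part of the product.

    import re

    # Hypothetical block rule as configured above: ".*bot.*", case insensitive.
    block_rule = re.compile(r".*bot.*", re.IGNORECASE)

    def is_blocked(user_agent: str) -> bool:
        """Return True if the User-Agent matches the block pattern."""
        return block_rule.search(user_agent) is not None

    print(is_blocked("SemrushBot/7~bl"))         # True: contains "Bot"
    print(is_blocked("python-requests/2.31.0"))  # False: would need its own rule

Note that a pattern this broad also matches legitimate crawlers such as Googlebot, so trusted bots should be covered by entries in the Allowed User Agent List described above.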


πŸ” Regex Matching & Sensitivity Options When creating a rule, you must select a sensitivity level. The following four options are available:

  1. Case Insensitive: Matches plain text without considering letter case. Example: curl will match Curl, cURL, or CURL.

  2. Case Sensitive: Matches plain text exactly as typed, including letter case. Example: curl matches only curl and not CURL or Curl.

  3. Regex Case Sensitive: Allows full regular expression usage and respects letter casing. Example: ^SlackBot.* matches SlackBot/1.0, but not slackbot/1.0.

  4. Regex Case Insensitive: Enables full regular expression support while ignoring letter case. Example: ^slackbot.* matches SlackBot/1.0, SLACKBOT/2.0, etc.
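
To make the four modes concrete, here is one possible mapping onto Python matching primitives. It is an interpretation of the options above for illustration only; in particular, modelling the two plain-text modes as substring checks is an assumption.

    import re

    def matches(rule: str, user_agent: str, sensitivity: str) -> bool:
        """Approximate the four sensitivity levels described above."""
        if sensitivity == "case_insensitive":        # plain text, ignore case
            return rule.lower() in user_agent.lower()
        if sensitivity == "case_sensitive":          # plain text, exact case
            return rule in user_agent
        if sensitivity == "regex_case_sensitive":    # full regex, exact case
            return re.search(rule, user_agent) is not None
        if sensitivity == "regex_case_insensitive":  # full regex, ignore case
            return re.search(rule, user_agent, re.IGNORECASE) is not None
        raise ValueError(f"unknown sensitivity: {sensitivity}")

    print(matches("curl", "cURL/8.5.0", "case_insensitive"))                  # True
    print(matches("curl", "cURL/8.5.0", "case_sensitive"))                    # False
    print(matches(r"^SlackBot.*", "slackbot/1.0", "regex_case_sensitive"))    # False
    print(matches(r"^slackbot.*", "SlackBot/1.0", "regex_case_insensitive"))  # True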


βš™οΈ How to Add a Rule

  1. Navigate to Security Rules > User Agent Filtering

  2. Choose either the Allowed or Blocked tab

  3. Click Add User Agent

  4. In the popup:

    • Enter a regex pattern to match the User-Agent string

    • Choose a sensitivity level from the dropdown

  5. Click Save; your rule is instantly applied

You can manage existing rules, search them by their regex pattern, or delete them at any time.


🚨 Warning

User-Agent headers can be easily spoofed. While User Agent Filtering is effective against basic bots and misconfigured clients, advanced attackers may forge trusted User-Agent strings.

For enhanced security:

  • Combine this feature with IP Reputation databases

  • Enforce rate limiting

  • Use Web Application Firewall (WAF) rules
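
As a quick demonstration of why the header alone cannot be trusted, any HTTP client can present an arbitrary User-Agent string. The sketch below uses the third-party requests library and a placeholder URL purely for illustration.

    import requests  # third-party package: pip install requests

    # A client can claim to be any browser or crawler simply by setting the header.
    spoofed_headers = {
        "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    }

    # "https://example.com" is a placeholder target, not a real endpoint to test against.
    response = requests.get("https://example.com", headers=spoofed_headers, timeout=10)
    print(response.status_code)

This is why the layered measures listed above (IP reputation, rate limiting, WAF rules) are recommended alongside User Agent Filtering.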


🎯 Conclusion

User Agent Filtering is a lightweight, efficient, and flexible way to block malicious bots, reduce system noise, and protect public-facing endpoints, especially APIs and marketing pages.