- Updated on 26 Nov 2023
To identify automated traffic in real time, the cloud-based HUMAN Detector processes hundreds of live signals collected on the client by the HUMAN Sensor. The Detector calculates a risk score for every request, which is embedded into a cookie that is later processed by the HUMAN Enforcer.
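The score-in-a-cookie flow above can be sketched as follows. This is purely illustrative: HUMAN does not publish its cookie format, scoring logic, or thresholds, so every name, field, and value here is a hypothetical stand-in for the real Detector/Enforcer exchange.

```python
# Illustrative sketch only: all names, thresholds, and the cookie format
# are hypothetical; the real Detector uses proprietary ML-based scoring.
import base64
import hashlib
import hmac
import json

SECRET = b"shared-detector-enforcer-key"  # hypothetical shared secret


def detector_issue_cookie(request_signals: dict) -> str:
    """Compute a toy risk score and embed it in a signed cookie payload."""
    # Toy scoring: fraction of suspicious signals that fired.
    score = sum(1 for v in request_signals.values() if v) / max(len(request_signals), 1)
    payload = json.dumps({"risk": round(score, 2)}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig


def enforcer_check(cookie: str, block_threshold: float = 0.8) -> str:
    """Verify the cookie signature and allow or block the request."""
    encoded, sig = cookie.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return "block"  # tampered or forged cookie
    return "block" if json.loads(payload)["risk"] >= block_threshold else "allow"
```

Signing the payload is what lets the Enforcer trust a score it did not compute itself; a cookie whose signature fails verification is treated as hostile.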
Bot Defender detection relies on machine learning (ML) and predictive models that are continuously updated based on the analysis of vast amounts of data and signals. As of Winter 2022, over 200 ML algorithms and models power the predictive analysis to provide optimized bot precision. The Sensor collects hundreds of features, billions of anonymized data points, and historical data about bot and human behavior. The Detector uses this activity data to dynamically and accurately predict whether a request is coming from a malicious bot, and correlates it with known good and bad bots. These capabilities include tagging spoofed identifiers, automatically surfacing malicious patterns, spotting user-interaction anomalies in mouse clicks, screen touches, cadence, and timing, and self-tuning to the customer’s website structure and business metrics.
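One of the user-interaction anomalies mentioned above, cadence and timing, can be illustrated with a minimal sketch. The real models are proprietary; this only demonstrates the underlying idea that scripted events tend to be unnaturally periodic, while human timing is noisy. The function name and scoring scale are invented for this example.

```python
# Hypothetical illustration of one behavioral signal: timing regularity.
# Real Bot Defender models combine hundreds of such features via ML.
import statistics


def cadence_anomaly(click_times_ms: list[float]) -> float:
    """Return a 0..1 bot-likeness score from click-timing regularity.

    Human clicks have noisy inter-event gaps; scripted clicks are often
    near-perfectly periodic, so their coefficient of variation is near zero.
    """
    gaps = [b - a for a, b in zip(click_times_ms, click_times_ms[1:])]
    if len(gaps) < 2:
        return 0.0  # not enough events to judge
    cv = statistics.stdev(gaps) / statistics.mean(gaps)  # relative spread
    return max(0.0, 1.0 - cv)  # low variation -> high bot-likeness
```

A perfectly periodic click stream scores 1.0; irregular human-like timing scores much lower.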
HUMAN invests significant resources in threat intelligence and advanced research. Its research team constantly investigates evolving attack vectors and new threat actors, tools, and techniques. Using manual and automated tools, and internal and external resources, HUMAN researchers derive insights that are translated into new and improved detection algorithms. These algorithms provide the Detector with enriched behavioral patterns for bots of all sophistication levels and address diverse use cases and attacks across several verticals.
In addition to the ML-based techniques and behavioral analysis, the Detector uses hundreds of browser, mobile, and network indicators collected by the Sensor. These indicators include HUMAN ID (cookie-based), device features such as visual and audio rendering capabilities, traffic source and type, window objects, attacker-specific signatures, and browser plugins and extensions. The indicators are compared to a growing library of bad-actor profiles built on anonymized and aggregated customer data sets and multiple external resources.
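The comparison of collected indicators against a profile library can be sketched as a simple matching step. The profile names, indicator fields, and matching rule below are all invented for illustration; the actual library and its matching logic are proprietary.

```python
# Hypothetical sketch: matching collected device/browser indicators
# against known bad-actor profiles. All profile data is invented.
BAD_ACTOR_PROFILES = [
    {"name": "headless-scraper", "indicators": {"webdriver": True, "plugin_count": 0}},
    {"name": "audio-spoofer", "indicators": {"audio_fingerprint": "null-render"}},
]


def match_profiles(collected: dict) -> list[str]:
    """Return the names of profiles whose every indicator matches."""
    return [
        profile["name"]
        for profile in BAD_ACTOR_PROFILES
        if all(collected.get(key) == value
               for key, value in profile["indicators"].items())
    ]
```

In practice such a library grows continuously as new actor signatures are derived from aggregated customer traffic and external intelligence feeds.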
Good bots are beneficial automated traffic sources, such as Google and Bing search engine crawlers and website monitoring services. In addition to the customer’s defined list of good bots, Bot Defender automatically flags good bots based on a constantly updated live feed derived from internal and external sources.
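Merging a customer-defined good-bot list with a live feed can be sketched as below. The token lists and matching-by-user-agent approach are simplifying assumptions for illustration; robust good-bot verification also validates the requester (for example, via reverse DNS or published IP ranges), since user-agent strings are trivially spoofed.

```python
# Hypothetical sketch: classifying good bots from a merged allowlist.
# The feed contents stand in for HUMAN's internally/externally derived feed.
CUSTOMER_GOOD_BOTS = {"uptimerobot"}          # customer-defined monitors
LIVE_FEED_GOOD_BOTS = {"googlebot", "bingbot"}  # stand-in for the live feed


def is_good_bot(user_agent: str) -> bool:
    """Flag a request as a good bot if its user agent names a known crawler."""
    ua = user_agent.lower()
    return any(token in ua for token in CUSTOMER_GOOD_BOTS | LIVE_FEED_GOOD_BOTS)
```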