Website Reputation and Risk

You can use website reputation to prevent attacks by limiting end users' exposure to inappropriate or malicious web content. Websites are scored to help identify potentially problematic sites: the lower the score, the greater the risk the website poses to users.

Each reputation score falls into one of five risk zones, with a recommended action for each:

1 - 20: High Risk
  • Recommended action: Block. Anything in the High Risk zone should be blocked.
  • Description: Sites with a high predictive risk that the user will be exposed to malicious links or payloads.

A previously infected website poses a greater risk than one with a clean record. When a website is found to be distributing malicious content, it is labeled high-risk and remains so until it is cleaned. Even after cleaning, its score is lower than before it was infected because its infection history is taken into consideration; its original score is only restored over a period of time.

21 - 40: Suspicious
  • Recommended action: Unless shown to be legitimate, anything in the Suspicious zone should be blocked.
  • Description: Sites in this zone are generally new and have never been reviewed. Because the classification engine scans thousands of URLs per second, any legitimate URL will usually have already been scanned and rated, so sites in this zone should be treated as suspicious.

41 - 60: Moderate Risk
  • Recommended action: Proceed with caution.
  • Description: Sites that may present a risk because they were either infected in the past year or have exhibited some form of risky characteristic. They should only be visited by users who are knowledgeable in the subject area and who will not expose others to the risk of infection.

61 - 80: Low Risk
  • Recommended action: None.
  • Description: Sites that are benign and rarely exhibit characteristics that expose the user to security risks. There is a low predictive risk of malicious links or payloads.

81 - 100: Trustworthy
  • Recommended action: None.
  • Description: Sites that are highly trusted, have exhibited little to no risk, and are therefore safe to use.
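These zone boundaries translate directly into a simple policy lookup. The following minimal sketch maps a reputation score to its risk zone and recommended action; the boundaries come from the table above, but the function name and return format are illustrative assumptions, not part of Web Protection's API.

```python
def classify_reputation(score: int) -> tuple[str, str]:
    """Return (risk_zone, recommended_action) for a 1-100 reputation score.

    Zone boundaries follow the table above; higher scores are safer.
    """
    if not 1 <= score <= 100:
        raise ValueError(f"reputation score must be 1-100, got {score}")
    if score <= 20:
        return "High Risk", "block"
    if score <= 40:
        return "Suspicious", "block unless shown to be legitimate"
    if score <= 60:
        return "Moderate Risk", "proceed with caution"
    if score <= 80:
        return "Low Risk", "none"
    return "Trustworthy", "none"

zone, action = classify_reputation(17)
print(zone, "->", action)  # High Risk -> block
```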

To calculate website reputations, Web Protection uses a self-learning network that continuously scans the internet using a combination of global threat sensors and machine learning algorithms. The calculation is based on more than 400 weighted variables, including the factors below; a simplified aggregation sketch follows the list.

Website location

  • Geography
  • ISP/WebHost
  • IP neighborhood

Website behavior and content

  • JavaScript content and characteristics
  • Popups
  • Redirects
  • Executable file downloads

Legitimacy

  • Domain/Category Age
  • Number of IP addresses for a site
  • Top Level Domain

Threat History

  • Past safety record (history of infections in the last 3- or 12-month period)
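To make the idea of weighted variables concrete, here is a simplified sketch of how per-factor risk values might be combined into a 1-100 reputation score, including a decaying penalty for infection history. The feature names, weights, scaling, and decay rate are all invented for illustration; the actual engine uses more than 400 proprietary weighted variables and machine learning models.

```python
import math

# Hypothetical feature weights spanning the factor groups listed above
# (location, behavior/content, legitimacy). Values here are illustrative.
WEIGHTS = {
    "geo_risk": 0.15,          # website location
    "ip_neighborhood": 0.10,
    "js_risk": 0.25,           # behavior and content
    "redirect_risk": 0.10,
    "domain_age_risk": 0.20,   # legitimacy
    "tld_risk": 0.20,
}

def base_score(features: dict[str, float]) -> float:
    """Combine per-feature risk values (0 = safe, 1 = risky) into a
    reputation score on the 1-100 scale, where higher means safer."""
    risk = sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
    return 1 + 99 * (1 - risk)

def apply_infection_history(score: float, days_since_cleaned: float | None) -> float:
    """Apply a decaying penalty for a previously infected site. A clean
    record (None) means no penalty; otherwise the penalty shrinks as the
    infection recedes into the past."""
    if days_since_cleaned is None:
        return score
    penalty = 40 * math.exp(-days_since_cleaned / 90)  # illustrative decay rate
    return max(1.0, score - penalty)

features = {"geo_risk": 0.2, "js_risk": 0.4, "domain_age_risk": 0.6}
print(round(apply_infection_history(base_score(features), days_since_cleaned=30)))
```

The decaying penalty mirrors the note in the High Risk entry above: a cleaned site's score is not immediately restored, but recovers gradually over time.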