SecurityScorecard on the Principles for Fair & Accurate Security Ratings: A Focus on Accuracy and Validation

Our initiative with the US Chamber of Commerce to release the Principles for Fair and Accurate Security Ratings started with defining and publishing the principles. Now SecurityScorecard is fostering this initiative by continuing to educate current and future users of our product about how we adhere to these principles.

We recently took a deep dive into the principle of Dispute, Correction, and Appeal, and this week we’re continuing our series by analyzing the principle of Accuracy and Validation, which reads:

“Accuracy and Validation: Ratings should be empirical, data-driven, or notated as expert opinion. Rating companies should provide validation of their rating methodologies and historical performance of their models. Ratings shall promptly reflect the inclusion of corrected information upon validation.”

The underlying goal of this principle is to ensure that a security rating is accurate and reliable. The SecurityScorecard platform puts each component of the Accuracy and Validation principle into practice.


Section 1: Ratings should be empirical, data-driven, or notated as expert opinion.

At SecurityScorecard, scoring is a data-driven process that ensures that lower scores are always more predictive of breach than higher scores. Put simply, an “F” company has a higher likelihood of getting breached than an “A” company. The scoring methodology has several steps which all preserve this data-driven approach:


  • Issue Type Weighting. SecurityScorecard tracks a multitude of security issues. In calculating an organization’s factor score (a score indicating how a company is doing in a particular security category), these issues are weighted to account for differences in severity. The severity of an issue is defined by an industry-accepted standard, such as the Common Vulnerability Scoring System (CVSS) v2 used in NIST’s National Vulnerability Database. In the event that an issue does not have an industry severity ranking available, SecurityScorecard uses recognized authorities and internal resources to determine severity, considering the opinions of multiple experts to correct for any bias. It’s important to note that once these weights are established for each issue type, they do not change and are the same for all companies. This allows for consistency and reliability in scoring all the way down to the issue level.


  • Factor Level Weighting. All issue types are classified into broad risk categories such as Application Security, Malware, Patching Cadence, Network Security, Hacker Chatter, Social Engineering, and Leaked Information. Factor-level weights are determined using machine learning. While all factors have been found to be predictive of breach, SecurityScorecard uses cyber breach data and machine learning algorithms to quantify and rank which factors are most predictive of a cyber breach event. In this process, a breach-likelihood ratio is determined for each factor by calculating the ratio of the conditional probability of a breach given a poor factor score (C, D, or F) to the conditional probability of a breach given a good factor score (A or B). The greater the likelihood ratio, the more predictive that factor is of a cyber breach, and the higher the weight it is assigned. Just as with issues (and for the same reason), these weights are the same for all companies and remain fixed between periodic re-evaluations, when SecurityScorecard updates them based on new cyber breach data and changes in the underlying issue types within a factor. To ensure statistical significance, the machine learning process described above is performed at the aggregate level, across all industries and company sizes.


  • Overall Score. The weighted factor scores described above are rolled up into a total score on a scale of 50 to 100. Per the descriptions above, the score is statistically significant and obtained through data-driven processes.
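The three steps above can be sketched end-to-end in a short script. All issue weights, breach probabilities, and the mapping onto the 50–100 scale below are invented placeholders for illustration; SecurityScorecard’s actual weights and formulas are proprietary and not published here.

```python
# Hypothetical sketch of the three-step scoring pipeline described above.
# Every number and formula here is illustrative, not SecurityScorecard's.

# Step 1: fixed issue-type weights (e.g., derived from CVSS-style severity).
ISSUE_WEIGHTS = {"open_port": 4.3, "expired_cert": 5.0, "malware_beacon": 9.1}

def factor_score(issues):
    """Score a factor 0-100: start at 100, subtract severity-weighted findings."""
    penalty = sum(ISSUE_WEIGHTS[i] for i in issues)
    return max(0.0, 100.0 - penalty)

# Step 2: factor weights proportional to breach-likelihood ratios, i.e.
# P(breach | poor factor grade) / P(breach | good factor grade).
def likelihood_ratio(p_breach_given_poor, p_breach_given_good):
    return p_breach_given_poor / p_breach_given_good

FACTOR_RATIOS = {
    "network_security":     likelihood_ratio(0.040, 0.010),  # ratio 4.0
    "patching_cadence":     likelihood_ratio(0.060, 0.012),  # ratio 5.0
    "application_security": likelihood_ratio(0.030, 0.015),  # ratio 2.0
}
total = sum(FACTOR_RATIOS.values())
FACTOR_WEIGHTS = {f: r / total for f, r in FACTOR_RATIOS.items()}

# Step 3: roll weighted factor scores into an overall score on a 50-100 scale
# (the linear mapping is an assumption for this sketch).
def overall_score(factor_scores):
    weighted = sum(FACTOR_WEIGHTS[f] * s for f, s in factor_scores.items())
    return 50.0 + weighted / 2.0

scores = {"network_security": 90.0, "patching_cadence": 70.0,
          "application_security": 85.0}
print(round(overall_score(scores), 1))  # → 90.0
```

Because the factor weights are fixed once derived, two companies with identical findings always receive identical scores, which is the consistency property the bullets above emphasize.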


Section 2: Rating companies should provide validation of their rating methodologies and historical performance of their models.

SecurityScorecard conducts checks of its ratings against its breach prediction model to ensure the model's stability. The major takeaway from our breach prediction model is that companies with lower grades (Cs, Ds, and Fs) are consistently more likely to be breached than those with higher grades (As and Bs).


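The validation described above can be illustrated with a toy check: given historical grade/breach records, confirm that the empirical breach rate for poor grades exceeds the rate for good grades. The sample records below are invented for illustration; the actual validation uses SecurityScorecard's cyber breach dataset.

```python
# Toy validation of the claim that lower grades are more breach-prone.
# Each record is (grade, was_breached); the data are invented.
records = [
    ("A", False), ("A", False), ("B", False), ("B", True),
    ("C", True), ("C", False), ("D", True), ("F", True), ("F", True),
]

def breach_rate(records, grades):
    """Empirical breach frequency among companies whose grade is in `grades`."""
    outcomes = [breached for grade, breached in records if grade in grades]
    return sum(outcomes) / len(outcomes)

good_rate = breach_rate(records, {"A", "B"})       # 1/4 = 0.25
poor_rate = breach_rate(records, {"C", "D", "F"})  # 4/5 = 0.80
print(poor_rate > good_rate)  # → True; a stable model keeps this holding over time
```

Re-running a check like this as new breach data arrives is one simple way to monitor the historical performance of the model.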
Section 3: Ratings shall promptly reflect the inclusion of corrected information upon validation.

As we referenced in our last post on this topic, SecurityScorecard allows any company to provide corrected or clarifying data about its digital assets in order to correct its rating. Additionally, when a company requests a recalculation of its score, the rating is updated within 24 hours.


Want to learn more? Check out our post about Security Ratings or our Focus on Transparency post.
