How It Works

Understanding our methodology and data sources

What is Review Disparity?

Review disparity measures the difference between how professional game critics score a game versus how regular players rate it. A positive disparity means critics scored a game higher than players did, while a negative disparity means critics scored lower than players.

+15: Positive Disparity

Critics rated the game 15 points higher than players on average. This could indicate critic bias, marketing influence, or different evaluation criteria.

-15: Negative Disparity

Critics rated the game 15 points lower than players. This could mean critics were harsher, or players found unexpected value in the game.

Note: What matters most is the magnitude of the disparity (how far the critic score is from the user score), not the direction. Disparities of +15 and -15 both indicate a significant gap between critics and players.

Understanding Our Colors

Our color system is based on the magnitude of disparity—how far the critic score is from the user score, regardless of direction. This helps you quickly identify how aligned or divergent a critic is with players:

  • ±0-5 (Aligned): Critic is closely aligned with user opinions
  • ±5-10 (Moderate): Some divergence from user scores
  • ±10-15 (High): Significant divergence from users
  • ±15+ (Extreme): Major divergence from user opinions

The +/- sign still tells you the direction (positive means the critic scored higher), but the color indicates magnitude. A +3 and a -3 are both green because both are closely aligned with users.
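Here's a minimal sketch of that banding logic in Python (the function name and the exact handling of boundary values are illustrative assumptions, not taken from our actual implementation):

```python
def disparity_band(disparity: float) -> str:
    """Classify a disparity by magnitude, ignoring direction.

    Band names and ranges mirror the list above; treating each upper
    bound as exclusive is an assumption.
    """
    magnitude = abs(disparity)
    if magnitude < 5:
        return "Aligned"
    if magnitude < 10:
        return "Moderate"
    if magnitude < 15:
        return "High"
    return "Extreme"

# A +3 and a -3 land in the same band:
assert disparity_band(+3) == disparity_band(-3) == "Aligned"
```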

The Formula

Disparity = Critic Score − User Score

We calculate disparity for each review by subtracting the user score from the critic score. For journalists and outlets, we average all their individual review disparities to get an overall disparity score.
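For illustration, here's a minimal sketch of that calculation (function names are ours for the example; the real pipeline also applies the timing and threshold rules described below):

```python
def review_disparity(critic_score: float, user_score: float) -> float:
    """Disparity for a single review: critic minus user, both on 0-100."""
    return critic_score - user_score

def average_disparity(reviews: list[tuple[float, float]]) -> float:
    """Unweighted mean of per-review disparities for a journalist or outlet."""
    disparities = [review_disparity(c, u) for c, u in reviews]
    return sum(disparities) / len(disparities)

# Three reviews with disparities of +20, -5, and +3:
print(average_disparity([(90, 70), (60, 65), (83, 80)]))  # 6.0
```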

Review Timing Categories

We categorize reviews based on when they were published relative to a game's release date. This helps identify review patterns and ensures fair disparity calculations.

Early Review

Reviews published before the game's official release date. These are typically from reviewers who received early access copies from publishers.

Launch Window

Reviews published within 60 days of the game's release. These are used for the primary disparity score shown on profiles.

Late Review

Reviews published more than 60 days after release. These count toward the overall disparity as a secondary metric.

Why 60 days? This window captures the period when most professional reviews are published and when user scores are most actively being submitted. It also prevents journalists from gaming their scores by selectively reviewing older games where user sentiment has shifted or stabilized.

About early reviews: Early reviews are included in disparity calculations and count toward the launch window. However, they're marked separately so you can see which critics frequently receive early access—a potential indicator of closer publisher relationships.
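A minimal sketch of this categorization (whether day 60 itself counts as in-window is an assumption):

```python
from datetime import date

LAUNCH_WINDOW_DAYS = 60

def timing_category(review_date: date, release_date: date) -> str:
    """Bucket a review relative to the game's release date.

    Early reviews are flagged separately but still count toward the
    launch window for disparity calculations.
    """
    days_after = (review_date - release_date).days
    if days_after < 0:
        return "early"          # pre-release copy from the publisher
    if days_after <= LAUNCH_WINDOW_DAYS:
        return "launch_window"
    return "late"

release = date(2024, 3, 15)
print(timing_category(date(2024, 3, 1), release))   # early
print(timing_category(date(2024, 4, 1), release))   # launch_window
print(timing_category(date(2024, 6, 1), release))   # late
```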

Primary vs. Overall Disparity

Each journalist and outlet has two disparity metrics calculated:

Launch Window Disparity (Primary)

Calculated only from reviews published within 60 days of each game's release. This is the main metric displayed on profile pages and used for leaderboard rankings.

Overall Disparity (Secondary)

Calculated from all reviews, including late reviews. Used as a fallback when a journalist has no qualifying launch window reviews. When shown, it's marked with an asterisk (*).

Transparency note: On journalist profiles, you'll see a breakdown showing how many of their reviews fall within the launch window vs. late reviews. If a journalist's disparity is calculated from overall reviews rather than launch window, this is clearly indicated.
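As an illustrative sketch, the fallback logic might look like this (names and the return shape are assumptions for the example):

```python
def profile_disparity(launch_disparities: list[float],
                      all_disparities: list[float]) -> tuple[float, bool]:
    """Return (disparity, is_fallback) for display on a profile.

    Launch-window reviews are primary; the overall figure is used only
    when no qualifying launch-window reviews exist, and the flag lets
    the UI mark it with an asterisk. A sketch, not the real rules.
    """
    if launch_disparities:
        return sum(launch_disparities) / len(launch_disparities), False
    return sum(all_disparities) / len(all_disparities), True

score, is_fallback = profile_disparity([], [12.0, -4.0])
print(f"{score:+.1f}{'*' if is_fallback else ''}")  # +4.0*
```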

Quality Thresholds

To ensure statistical reliability and prevent manipulation, we apply minimum thresholds:

50

Minimum User Reviews

Games must have at least 50 user reviews on Steam or 20 user reviews on Metacritic to be included in disparity calculations. This ensures we're comparing against a meaningful sample of player opinions, not just a handful of potentially biased early reviewers.

10

Minimum Critic Reviews for Journalists/Outlets

Journalists and outlets must have at least 10 scored reviews to appear on the leaderboards. This prevents reviewers with only a handful of reviews from dominating the rankings due to small sample sizes.

10

Minimum Critic Reviews for Games

Games must have at least 10 critic reviews to appear on the games leaderboard. This ensures we have enough professional opinions to calculate a meaningful disparity score.

10

Minimum Score Spread

Journalists and outlets must have a score spread (standard deviation) of at least 10 to appear on leaderboards. This filters out reviewers who use binary scoring (only 0 or 100) or an extremely narrow scoring range, which can artificially inflate disparity metrics.

Note: Individual journalist and game profiles are still accessible even if they don't meet the leaderboard thresholds—they just won't appear in the ranked lists.
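Here's a sketch of how the journalist/outlet leaderboard check could be expressed (whether the population or sample standard deviation is used is an assumption):

```python
from statistics import pstdev

MIN_REVIEWS = 10   # minimum scored reviews for journalists/outlets
MIN_SPREAD = 10    # minimum score spread (standard deviation)

def leaderboard_eligible(scores: list[float]) -> bool:
    """Check the journalist/outlet leaderboard thresholds above.

    Uses the population standard deviation; the real check may differ.
    """
    return len(scores) >= MIN_REVIEWS and pstdev(scores) >= MIN_SPREAD

print(leaderboard_eligible([80, 82, 81, 79, 80, 81, 82, 80, 79, 81]))  # False (narrow range)
print(leaderboard_eligible([40, 95, 70, 85, 55, 90, 60, 75, 88, 50]))  # True
```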

Score Normalization

Different outlets use different scoring scales. To compare apples to apples, we normalize all scores to a 0-100 scale:

Original Format      | Example       | Normalized
Out of 10            | 8.5 / 10      | 85
Out of 5             | 4 / 5         | 80
Out of 100           | 85 / 100      | 85
Letter Grade         | B+            | 87
Steam (% positive)   | 85% positive  | 85
Metacritic User      | 7.5 / 10      | 75
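A minimal normalization sketch follows; only the B+ → 87 mapping comes from the table above, and the other letter-grade values are illustrative assumptions:

```python
# Only B+ -> 87 comes from the table; other grades are assumed values.
LETTER_GRADES = {"A": 93, "A-": 90, "B+": 87, "B": 83}

def normalize(value: float, scale_max: float) -> float:
    """Convert a numeric score on any scale to 0-100."""
    return value / scale_max * 100

print(normalize(8.5, 10))    # 85.0  (out of 10)
print(normalize(4, 5))       # 80.0  (out of 5)
print(normalize(7.5, 10))    # 75.0  (Metacritic user score)
print(LETTER_GRADES["B+"])   # 87
```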

Data Sources

OpenCritic

Professional critic reviews, scores, and journalist profiles. OpenCritic aggregates reviews from major gaming publications and provides standardized critic scores.

Steam

Player reviews and ratings from the world's largest PC gaming platform. Steam scores are based on the percentage of positive user reviews.

Metacritic

User scores from Metacritic, which collects ratings from registered users on a 0-10 scale. We multiply by 10 to normalize to our 0-100 scale.

Three Disparity Scores

We calculate and display disparity separately for each user score source, giving you a complete picture of how critics compare to different player communities:

Steam Disparity

Critic Score − Steam User Score
Compares critic reviews against Steam's PC gaming community. Steam users tend to be dedicated PC gamers who may have different expectations than the general gaming audience.

Metacritic Disparity

Critic Score − Metacritic User Score
Compares critic reviews against Metacritic's cross-platform user ratings. Metacritic includes console and PC players, often reflecting a broader gaming audience.

Combined Disparity

Critic Score − Average(Steam, Metacritic)
When both sources are available, we also show a combined disparity using the average of Steam and Metacritic user scores. This provides an overall view across platforms.

Why show all three? Steam and Metacritic audiences can differ significantly. A game might have a +10 disparity on Metacritic but -5 on Steam, revealing that PC players loved it more than the broader audience. Seeing each source independently helps you understand the full picture.
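Putting the three formulas together, a sketch (the handling of missing sources is an assumption):

```python
def three_disparities(critic: float,
                      steam: float | None,
                      metacritic: float | None) -> dict[str, float]:
    """Compute the per-source disparities; combined requires both sources."""
    result: dict[str, float] = {}
    if steam is not None:
        result["steam"] = critic - steam
    if metacritic is not None:
        result["metacritic"] = critic - metacritic
    if steam is not None and metacritic is not None:
        result["combined"] = critic - (steam + metacritic) / 2
    return result

# The example above: PC players loved it more than the broader audience.
print(three_disparities(critic=85, steam=90, metacritic=75))
# {'steam': -5, 'metacritic': 10, 'combined': 2.5}
```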

Score Spread vs. Disparity

On journalist and outlet profiles, you'll see both Disparity and Score Spread. These measure different things:

Disparity

How far the critic's scores are from user scores (Steam/Metacritic).

Critic Score − User Score

Example: A critic gives 90, users give 70 → Disparity is +20

Score Spread

How varied the critic's own scores are (variance in their scoring).

Standard Deviation of Critic's Scores

Example: Scores range from 40 to 95 → High spread (uses full range)

High Score Spread (10+): The critic uses a wide range of scores, differentiating between games they love and games they don't. This suggests thoughtful, discriminating reviews. These reviewers appear on leaderboards.

Low Score Spread (<10): The critic gives similar scores to most games, or uses binary scoring (only 0s and 100s). This can artificially inflate disparity metrics and make their scores less meaningful.
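To make the distinction concrete, here's a sketch computing both metrics from the same set of reviews (the sample data is invented for the example):

```python
from statistics import mean, pstdev

critic_scores = [90, 40, 75, 95, 60]
user_scores = [70, 50, 72, 80, 65]

# Disparity: how far the critic sits from users, on average.
disparity = mean(c - u for c, u in zip(critic_scores, user_scores))

# Score spread: how varied the critic's own scores are.
spread = pstdev(critic_scores)

print(f"Disparity: {disparity:+.1f}")    # +4.6
print(f"Score spread: {spread:.1f}")     # ~20.1, well above the 10 threshold
```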

How we handle low score spread reviewers:

  • Leaderboards: Journalists and outlets with score spread below 10 are excluded from leaderboard rankings to prevent skewed data.
  • Profile pages: Their individual profiles remain fully accessible with all their reviews and statistics.
  • Warning indicator: A warning message appears on profiles when score spread is below 10, helping you understand why their disparity might be less reliable.
  • Search results: They still appear in search results on the Journalists and Outlets listing pages.

Data Coverage

Time Period

We track all available review data from OpenCritic, going back to the earliest reviews in their database. This gives us comprehensive coverage of gaming journalism history and allows tracking of long-term reviewer patterns.

Update Frequency

Critic reviews are synced continuously from OpenCritic. User scores from Steam and Metacritic are updated regularly. Historical disparity data powers our trend charts, showing how journalists' alignment with users has changed over time.

Interpreting the Data

High disparity doesn't mean "wrong": Critics and players often have different priorities. Critics may weigh innovation, artistic merit, and technical achievement more heavily, while players focus on fun, value, and replayability.

Direction vs. magnitude: The sign (+/-) tells you whether the critic scored higher or lower than users, but the magnitude (how far from zero) is what matters most. A critic with +12 and one with -12 both have significant divergence from users—they just diverge in opposite directions.

Sample size matters: A journalist with 5 reviews will have a less reliable disparity score than one with 500 reviews. We display review counts so you can judge the statistical significance yourself. Journalists need at least 10 reviews to appear on leaderboards.

Check score spread: A journalist's "Score Spread" tells you how varied their own scores are. Low spread means they give similar scores to most games (or use binary 0/100 scoring), which can make their disparity less meaningful. We filter out very low spread reviewers from leaderboards.

Launch window focus: Our primary disparity score uses only reviews published within 60 days of a game's release. This ensures we're measuring how aligned critics are with players when it matters most—at launch.

Context is key: Some genres naturally have higher disparities. Niche games may be loved by their target audience but rated lower by critics reviewing for a general audience.

Compare sources: A journalist might be aligned with Steam users but divergent from Metacritic users, or vice versa. Checking all three disparity scores (Steam, Metacritic, Combined) gives you the complete picture.

Ready to explore the data?