CXR Chess Rating Formulas Explained

The CXR Rating System

INTRODUCTION

Chess Express Ratings, Inc. ("CXR"), uses a rating system developed by its president, Russell Mollot, in the early 1980s, which has been adopted by chess clubs and scholastic chess organizations across North America. The "CXR System" is actually an entire arsenal of performance statistics -- ratings, percentages, and other metrics. The ratings -- which are central to the CXR System -- are a variant of the widely-used system developed by Professor Arpad Elo (so-called "Elo ratings"). However, ours have two BIG advantages over the formal Elo system used by USCF and FIDE:

1. The CXR ratings are much simpler to calculate -- in most cases you can do it in your head.  Anyone who has seen the formulas, clauses, and sub-clauses of other popular systems can tell you that they will make your head spin. This makes the other systems virtually non-verifiable -- you cannot know whether they've made a mistake or not.

2. The CXR formulas allow for ratings to be updated after individual games -- multi-round events are not required.  This means that tournaments are not the only way to get rated action. Your local CXR chess club can report results of individual games you play there. CXR's ratings and statistics are updated within 24 hours of receiving game results.

The CXR system has been thoroughly tested over many years, thousands of players, and tens of thousands of games.

BASIC PREMISE
 
The idea behind CXR ratings is that you gain points for good performance and lose points for bad performance.

BASIC DEFINITIONS

  • Rated Player: A player who has an established CXR rating.
  • Provisionally-rated Player: A player who has a tentative rating based on an estimated initial rating and the results of fewer than 5 games against Rated Players.

RATING FORMULAS

There are three basic situations when two players meet over-the-board (certain adjustments are made for scholastic ratings, which are discussed later in this document):

I. Both players are RATED

II. Neither player is RATED (both are PROVISIONALLY-RATED)

III. One player is RATED and the other is PROVISIONALLY-RATED

For situations I and II, the following formula is used:

Formula 1:   Rnew = Rold + (S x 21) + (Ropponent - Rold) / 25

where:

   Rnew = the new rating for either player

   Rold = his or her old (pre-game) rating

   S = his or her SCORE for the game: +1 for a WIN, 0 for a DRAW, -1 for a LOSS

   Ropponent = the pre-game rating of the opponent

EXAMPLE 1:

A player whose rating is 1500 defeats an opponent who is rated 1650.

   Rnew = 1500 + (+1 x 21) + (1650 - 1500) / 25
        = 1500 + 21 + 150 / 25
        = 1500 + 21 + 6
        = 1527

EXAMPLE 2:

A player who is provisionally-rated 1600 draws an opponent who is provisionally-rated 1400.

   Rnew = 1600 + (0 x 21) + (1400 - 1600) / 25
        = 1600 + 0 + (-200) / 25
        = 1600 - 8
        = 1592

EXAMPLE 3:

A player rated 1714 loses to an opponent rated 2007.

   Rnew = 1714 + (-1 x 21) + (2007 - 1714) / 25
        = 1714 - 21 + 293 / 25
        = 1714 - 21 + 12    (293 / 25 = 11.72, rounded to the nearest whole number)
        = 1705

NOTE: When the difference in the ratings of the two players is very large (e.g., 500 points or more), the formula needs the following three overriding RULES -- otherwise, for example, a 2100-rated player could LOSE 3 points for defeating a 1500-rated opponent!

OVERRIDING RULES:

(for games where Formula 1 is used)

RULE R1: THE WINNING PLAYER ALWAYS MUST GAIN AT LEAST 2 POINTS.

RULE R2: THE LOSING PLAYER ALWAYS MUST LOSE AT LEAST 2 POINTS.

RULE R3: NEITHER PLAYER MAY GAIN OR LOSE MORE THAN 41 POINTS.

In normal tournaments and matches, the rating difference between opponents will rarely exceed 400 points.
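Formula 1 together with Rules R1-R3 can be sketched in a few lines of Python. The function name and the round-to-nearest division are our assumptions (the worked examples above imply rounding to the nearest whole number); CXR's internal implementation may differ.

```python
def formula1(r_old: int, r_opp: int, score: int) -> int:
    """Formula 1 with overriding Rules R1-R3.

    score is +1 for a WIN, 0 for a DRAW, -1 for a LOSS.
    Division is rounded to the nearest whole number, as the
    worked examples imply (an assumption on our part).
    """
    delta = score * 21 + round((r_opp - r_old) / 25)
    if score == 1:
        delta = max(delta, 2)             # R1: winner gains at least 2
    elif score == -1:
        delta = min(delta, -2)            # R2: loser loses at least 2
    delta = max(-41, min(41, delta))      # R3: change capped at 41 points
    return r_old + delta

# Examples 1-3 above:
print(formula1(1500, 1650, +1))  # 1527
print(formula1(1600, 1400, 0))   # 1592
print(formula1(1714, 2007, -1))  # 1705
```

The NOTE above checks out: without R1, a 2100-rated player beating a 1500-rated player would get 21 + round(-600 / 25) = -3 points; with R1 the function returns 2102 instead.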

For situation III, where one player is rated and the other is only provisionally-rated, Formula 2 is used for the rated player, and Formula 3 for the provisionally-rated player.

Formula 2:   Rnew = Rold + (S x 6) + (Ropponent - Rold) / 100

This has the effect of giving the game less weight than normal because the rating of the opponent has less statistical significance, since it is based on few games.

Formula 3:   Rnew = (4 / 5) x Rold + (1 / 5) x Ropponent + (S x 80)

This has the effect of heavily weighting both the result of the game and the strength of the rated opponent, thus making a coarse adjustment to the provisional rating. After the fifth such match-up (that is, after 5 games versus rated opponents) the rating is no longer regarded as provisional but, rather, as an established rating.

EXAMPLE 4:

A player whose provisional rating is 1325 defeats an opponent who is rated 1650.

   Rnew = (4 / 5) x 1325 + (1 / 5) x 1650 + (+1 x 80)

   Rnew = 1060 + 330 + 80

= 1470

For his rated opponent, Formula 2 is employed.

   Rnew = 1650 + (-1 x 6) + (1325 - 1650) / 100
        = 1650 - 6 + (-325) / 100
        = 1650 - 6 - 3    (-3.25, rounded to -3)
        = 1641

As with Formula 1, Formulas 2 and 3 may need adjustment when there is a very large difference in the ratings of the combatants. Therefore RULES R1, R2, and R3 also apply to Formula 2, and the overriding RULES R4 and R5 are observed when applying Formula 3, for the provisional player:

(Rules R1, R2, R3 restated for games where Formula 2 is used)

RULE R1: The winning Rated player MUST GAIN AT LEAST 2 POINTS.

RULE R2: The losing Rated player MUST LOSE AT LEAST 2 POINTS.

RULE R3: The Rated player MAY NEITHER GAIN NOR LOSE MORE THAN 41 POINTS.

(for games where Formula 3 is used)

RULE R4: The Provisionally-Rated player CANNOT GAIN POINTS for a LOSS.

RULE R5: The Provisionally-Rated player CANNOT LOSE POINTS for a WIN.

EXAMPLE 5:

A player whose provisional rating is 1470 loses to an opponent having an established rating of 2050. Using Formula 3 for the provisionally-rated player, we would get:

   Rnew = (4 / 5) x 1470 + (1 / 5) x 2050 + (-1 x 80)
        = 1176 + 410 - 80
        = 1506, a gain of 36 points for losing!

But RULE R4 applies, so his or her rating remains 1470.

For the victorious opponent, Formula 2 is used:

   Rnew = 2050 + (+1 x 6) + (1470 - 2050) / 100
        = 2050 + 6 + (-580) / 100
        = 2050 + 6 - 6
        = 2050, or no gain!

But Rule R1 applies, so he or she gains 2, to 2052.
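Situation III can be sketched the same way. Again, the function names and the rounding behavior are our assumptions; Formula 3 is computed here as (4 x Rold + Ropponent) / 5, which is algebraically identical to the published form but avoids floating-point surprises.

```python
def formula2(r_old: int, r_opp: int, score: int) -> int:
    """Formula 2 (for the Rated player), with Rules R1-R3."""
    delta = score * 6 + round((r_opp - r_old) / 100)
    if score == 1:
        delta = max(delta, 2)             # R1: winner gains at least 2
    elif score == -1:
        delta = min(delta, -2)            # R2: loser loses at least 2
    delta = max(-41, min(41, delta))      # R3: change capped at 41 points
    return r_old + delta

def formula3(r_old: int, r_opp: int, score: int) -> int:
    """Formula 3 (for the Provisionally-rated player), with Rules R4-R5."""
    r_new = round((4 * r_old + r_opp) / 5) + score * 80
    if score == -1:
        r_new = min(r_new, r_old)   # R4: no gain for a loss
    elif score == 1:
        r_new = max(r_new, r_old)   # R5: no loss for a win
    return r_new

# Examples 4 and 5 above:
print(formula3(1325, 1650, +1))  # 1470
print(formula2(1650, 1325, -1))  # 1641
print(formula3(1470, 2050, -1))  # 1470 (R4 overrides the raw 1506)
print(formula2(2050, 1470, +1))  # 2052 (R1 overrides the raw 2050)
```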

INITIAL RATING

The initial rating of a player is the responsibility of the authorized official of the league, team, or club. At present, Chess Express Ratings will accept an established rating from the USCF, FIDE, BALNY, CCLNY, CICL, LIICL, or CCA (in that order). This becomes the initial CXR rating for the player, and is regarded as an established rating.

If the player has no established rating but has a provisional rating from the aforementioned organizations, then that will be accepted as the initial CXR rating for the player, but it will be regarded as a provisional rating.

If a player has no known rating, the authorized official may assign an estimated initial rating between 800 and 2000. If the player has an internet chess rating from a website such as Yahoo, Chess.net, or the Internet Chess Club, this may be used as a rough guideline. A rating from a reliable chess program such as CHESSMASTER 9000 (or an earlier release) may also give a good estimate. If the official has no idea of the strength of the player, an initial default rating of 1200 should be assigned. In any event, after a few games, the new player's rating will "home in" on the appropriate level.

ACHIEVING RATED STATUS

As mentioned earlier, after 5 games have been played against rated opponents, the provisionally-rated player achieves rated status. One objective we consider vital is providing ratings to clubs located in areas where there are few rated players. It is rather difficult for provisionally-rated players to achieve rated status if they cannot find opponents with established ratings. CXR has therefore introduced a second path to reach rated status, even if only provisionally-rated opponents are available. This is by means of Experience Points. Here is how it works.

Experience Points (EP) are accumulated as games are played. When 200 EP have been accumulated by a player, his or her provisional rating at that point becomes the initial established rating. The number of EP gained for playing a particular game is 32 EP if the opponent was rated; otherwise 15 percent of the EP of the provisionally-rated opponent (with a minimum of 5 points for a win, or 2 points otherwise). Note that BOTH players gain Experience Points -- neither takes away EP from the other. The CXR System takes care of all the calculations and keeps track of experience points. Once the player has achieved Rated status, CXR begins tracking many additional metrics, and new statistics will soon appear in the player's Folio.
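The EP rule above could be sketched as follows (the rounding of the 15-percent share is our assumption, as the source does not specify it):

```python
def ep_gain(opponent_is_rated: bool, opponent_ep: int, won: bool) -> int:
    """EP earned for one game, per the rule described above."""
    if opponent_is_rated:
        return 32
    share = round(0.15 * opponent_ep)   # 15% of the provisional opponent's EP
    return max(share, 5 if won else 2)  # floor of 5 for a win, 2 otherwise

def is_established(total_ep: int) -> bool:
    """200 accumulated EP converts a provisional rating to established."""
    return total_ep >= 200
```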

SCHOLASTIC RATINGS

Two interrelated and serious problems have evolved in chess ratings as the importance of scholastic chess has grown. The first problem involves the improper assignment of extremely low initial ratings to scholastic players. This is completely contrary to the ELO model (and other systems). The second problem is caused by the first: an accelerating trend toward massive deflation in the rating system as a whole. As scholastic players graduate into the world of "adult chess", their low ratings cause deflationary distortions which cascade throughout the entire rated population.

The correct approach, under the ELO model, requires evaluation of performance in a series of tournaments where most of the players already have established ratings. However, in the world of scholastic chess, that approach is totally impractical. The numbers of children learning chess outnumber the population of rated adult players by something like 10-to-1. Most of these children have no way of competing against rated adult players, as there are almost no events which mix the two populations. Scholastic tournaments typically involve dozens or hundreds of youngsters playing in a closed population with little or no exposure to anyone with an "accurate" rating established in adult tournaments.

What has evolved is a nearly separate rating pool of scholastic players. New students are usually assigned low initial ratings related to their age or their grade. For example, 1st graders are given an initial provisional rating of 100; 2nd graders, 200; 3rd graders, 300; and so on. While it may seem logical, on the surface, to assign low ratings to people who don't know a pawn from a bishop, the ratings tend to remain much too low even as these same children study, practice, and improve. A somewhat simplified example will serve to illustrate. Imagine a group of 25 second graders starting a chess program. They are all rated 200. By the end of their course of learning, they are knowledgeable in openings, fighting tactics, long-term strategy, and have each played at least 100 practice games. However, the average rating of the group remains precisely 200! Even the weakest player in the group is a much better player than when the program first began. This situation makes no sense at all. These children are likely playing closer to the 1500 level than the 200 level.

CXR has developed a solution to this serious "hole" in scholastic ratings. In recognition of the fact that playing a game is itself a learning experience for youngsters, 2 rating points ("practice points") are awarded to each player's rating regardless of the outcome. This adjustment is applied only to scholastic players rated below 1000, and only for their first 100 games. In addition, the CXR system recognizes that, in order to actually win a game, a greater amount of knowledge must have been assimilated by the student. In recognition of this small demonstration of increased skill, 3 rating points ("victory points") are awarded to the winner of a game. Again, this adjustment only applies to scholastic players rated below 1000, and only for their first 100 wins. Thus, scholastic players can, theoretically, pick up 200 practice points and 300 victory points if they play enough games. In the example cited above, the group of 25 students who all started with a rating of 200 could end up with an average rating of 700. The more successful players in the group may have reached the 1000 mark or even higher; and even the weakest players in the group would have higher ratings than when they started out, knowing nothing about chess.
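The practice-point and victory-point adjustment can be sketched as follows (the function name and the exact order of the eligibility checks are our assumptions):

```python
def scholastic_bonus(rating: int, games_played: int, wins: int, won: bool) -> int:
    """Extra rating points for a scholastic player, per the rule above."""
    if rating >= 1000:
        return 0          # adjustment applies only to players rated below 1000
    bonus = 0
    if games_played < 100:
        bonus += 2        # practice points: first 100 games only
    if won and wins < 100:
        bonus += 3        # victory points: first 100 wins only
    return bonus

# A beginner rated 200 winning an early game picks up 2 + 3 = 5 extra points:
print(scholastic_bonus(200, 10, 4, True))  # 5
```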

TOURNAMENT STRENGTH INDEX

The Tournament Strength Index is an indication of an event's significance in the overall chess-playing community.  For example, an event involving 25 players is generally more "significant" than a Quad.  An event with 100 players is generally more important than one with 25 players, and so on.  Also, an event among high-rated players is considered more important (all other things being equal) than one among low-rated players.  An event with more rounds (more games played) has more significance than one with fewer rounds (all other things being equal).  Finally, an event with a slow time control would generally be considered more important than one with a fast time control.

A Significance Index is calculated separately for each Section (Section Significance Index - "SSI"). The highest SSI in any tournament is called the TSI. This is often the strongest section's SSI, but not necessarily so, as a lower section may have much larger participation or other overriding factors. Sections with fewer than 3 rounds, an effective* average rating below 400, or an effective* number of players less than 4 do not qualify for an SSI.

*Effective numbers reflect games actually played in a section. For example: suppose that 16 players have registered for a 5 round event. Suppose one of the players had to leave suddenly, before round 1. With 15 remaining players, there are going to be several BYEs. Also, some of the players may show up too late to play round 1, or some players may have to leave early, not playing the last round or two. Thus, instead of the theoretical maximum of 40 games, the event might have only, say, 30 games actually played. The effective number of games is 30, not 40. If only 30 actual games were played in a 5 round event, that means only 6 games were taking place, on average, in each round. Since 6 games involves 12 players, the effective number of players in this section becomes 12, not 16. Similarly, suppose that the average rating of the 16 registered players is 1622. What if some of the top-rated players were those (for whatever reason) playing fewer rounds than anticipated? The effective average rating is calculated based upon games actually played. In the situation suggested above, the effective average rating might be something like 1593 instead of 1622, since fewer of the games actually played involved the higher-rated players.
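The effective-number-of-players arithmetic in the footnote can be sketched directly (the helper name is ours, not CXR's):

```python
def effective_players(games_played: int, rounds: int) -> int:
    """Effective number of players, per the footnote's reasoning:
    average games actually played per round, times two players per game."""
    avg_games_per_round = games_played / rounds
    return round(avg_games_per_round * 2)

# The footnote's example: 30 actual games over 5 rounds
print(effective_players(30, 5))  # 12
```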

TSI is a proprietary metric, and CXR is the only rating authority that provides this measure of tournament significance.  Please ask your local tournament director to contact us to obtain a TSI for his or her events.

STABLE RATINGS

When the CXR rating system was designed, the advantages and shortcomings of other rating systems were examined from many perspectives. Two properties of these systems, it turns out, were linked: The ability to correct errors in the reporting of game results, and rating stability.

Errors in reporting game results have always occurred, and will always occur. Some errors have never been detected, and some have been detected but never corrected; errors of both kinds are therefore embedded in the ratings of every rating system.

In some systems, errors from the not-so-recent past are corrected, and then ratings are recalculated from the game date onward. This correction and recalculation of ratings then introduces its own problem: The ratings of a player whose game results were entirely correctly reported to the system may change, as he or she played another player whose erroneous game results were corrected. Thus, one correction may result in a "cascade" of rating adjustments. Further, a player whose rating has been, say, 1650 for three months, and has not played in those months, may discover that his or her rating changed last week to 1640, a mysterious development indeed. Such a change may bring into question the stability or credibility of a player's rating in that system.

To balance the need for accuracy and stability, the CXR rating system requires chess officials who report games to make corrections to those games within 14 days of the game date. This encourages both officials and players to examine the ratings and wallcharts of events promptly so that the officials may make corrections. From the time a game is initially reported, its results are regarded by CXR as being "unofficial". Once 14 days have elapsed from the game date, the results are "official".

Under the CXR rating system, players and officials can trust a rating once the games on which it is based have become official.

Thus, the CXR rating system provides a reasonable and standardized timeframe to correct errors while at the same time providing a stable rating to players.

NEW DEVELOPMENTS

Chess Express Ratings, Inc., has a corporate policy of being responsive to the needs and suggestions of the chess community, our customers. We are making available a number of chess performance statistics which were not available before, and will continue to develop more useful measurements of performance.

 


Copyright © Chess Express Ratings, Inc.