Two interrelated and serious problems have evolved in chess ratings as the importance of scholastic chess has grown. The first is the improper assignment of extremely low initial ratings to scholastic players, which runs completely contrary to the Elo model (and other rating systems). The second problem is caused by the first: an accelerating trend toward massive deflation in the rating system as a whole. As scholastic players graduate into the world of "adult chess", their artificially low ratings cause deflationary distortions that cascade throughout the entire rated population.
The correct approach, under the Elo model, requires evaluating performance in a series of tournaments where most of the opponents already have established ratings. In the world of scholastic chess, however, that approach is totally impractical. Children learning chess outnumber the population of rated adult players by something like 10 to 1, and most of these children have no way of competing against rated adults, as there are almost no events which mix the two populations. Scholastic tournaments typically involve dozens or hundreds of youngsters playing in a closed population, with little or no exposure to anyone holding an "accurate" rating established in adult tournaments.
What has evolved is a nearly separate rating pool of scholastic players. New students are usually assigned low initial ratings tied to their age or grade: 1st graders are given an initial provisional rating of 100; 2nd graders, 200; 3rd graders, 300; and so on. While it may seem logical, on the surface, to assign low ratings to people who don't know a pawn from a bishop, the ratings remain much too low even as these same children study, practice, and improve. A somewhat simplified example will serve to illustrate. Imagine a group of 25 second graders starting a chess program, all rated 200. By the end of their course of learning, they are knowledgeable in openings, fighting tactics, and long-term strategy, and each has played at least 100 practice games. Yet the average rating of the group remains precisely 200! The reason is that Elo rating exchanges are zero-sum: every point a winner gains is a point the loser gives up, so the average of a closed pool can never rise, no matter how much every member improves. Even the weakest player in the group is a much better player than when the program first began. This situation makes no sense at all. These children are likely playing closer to the 1500 level than the 200 level.
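A minimal simulation makes the closed-pool problem concrete: no matter how the games come out, the group's average cannot move, because each Elo update transfers points from one player to another. The update below uses the standard Elo expected-score formula with an assumed K-factor of 32; the random pairings and results are illustrative, not a claim about any real tournament.

```python
import random

def expected(ra, rb):
    """Expected score for player A under the standard Elo formula."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

def play(ratings, a, b, score_a, k=32):
    """Apply one game's Elo update; with a shared K the exchange is zero-sum."""
    delta = k * (score_a - expected(ratings[a], ratings[b]))
    ratings[a] += delta
    ratings[b] -= delta

random.seed(1)
ratings = [200.0] * 25              # closed pool: 25 students, all rated 200

for _ in range(1250):               # ~100 games per player, random pairings
    a, b = random.sample(range(25), 2)
    play(ratings, a, b, random.choice([0.0, 0.5, 1.0]))  # loss/draw/win for a

avg = sum(ratings) / len(ratings)
print(f"average after 1250 games: {avg:.2f}")    # still 200.00
print(f"spread: {min(ratings):.0f} .. {max(ratings):.0f}")
```

Individual ratings spread out as stronger players accumulate points, but the average is conserved exactly, which is the deflation trap the article describes.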
CXR has developed a solution to this serious "hole" in scholastic ratings. In recognition that playing a game is itself a learning experience for youngsters, 2 rating points ("practice points") are awarded to each player regardless of the outcome. This adjustment applies only to scholastic players rated below 1000, and only for their first 100 games. In addition, the CXR system recognizes that, in order to actually win a game, a greater amount of knowledge must have been assimilated by the student. In recognition of this small demonstration of increased skill, 3 rating points ("victory points") are awarded to the winner of a game. Again, this adjustment applies only to scholastic players rated below 1000, and only for their first 100 wins. Thus, scholastic players can, theoretically, pick up 200 practice points and 300 victory points if they play enough games. In the example cited above, the group of 25 students who all started with a rating of 200 could end up with an average rating of 700. The more successful players in the group may have reached the 1000 mark or even higher; and even the weakest players would hold higher ratings than when they started out, knowing nothing about chess.
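The bonus rules above can be sketched in a few lines. The function below is an illustrative reading of the stated rules (2 points per game for the first 100 games, 3 points per win for the first 100 wins, only while rated below 1000); it is not CXR's actual implementation, and it deliberately omits the normal Elo exchange that would also occur in each game.

```python
PRACTICE_BONUS = 2      # "practice points" per game, win or lose
VICTORY_BONUS = 3       # "victory points" per win
GAME_CAP = 100          # practice points only for the first 100 games
WIN_CAP = 100           # victory points only for the first 100 wins
CEILING = 1000          # bonuses apply only while rated below 1000

def cxr_bonus(rating, prior_games, prior_wins, won):
    """Bonus points for one game under the rules described above.

    A hypothetical sketch of the stated rules, not CXR's actual code;
    the normal Elo exchange for the game itself is omitted.
    """
    if rating >= CEILING:
        return 0
    bonus = PRACTICE_BONUS if prior_games < GAME_CAP else 0
    if won and prior_wins < WIN_CAP:
        bonus += VICTORY_BONUS
    return bonus

# A student who starts at 200, plays 200 games, and wins every other one:
rating, games, wins = 200, 0, 0
for g in range(200):
    won = (g % 2 == 0)
    rating += cxr_bonus(rating, games, wins, won)
    games += 1
    wins += int(won)

print(rating)   # 200 start + 200 practice + 300 victory = 700
```

The hypothetical student ends at 700, matching the article's figure for the group average; a player who hits 100 wins sooner, or whose rating crosses 1000, simply stops collecting bonuses.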