Within a few years, a deficiency in the system became apparent. Rapidly improving young players would take points away from established players as they improved. This phenomenon became known as "rating deflation".
This process continued and, by 1956, all of the top players had lost points. The rating of the average player had gone down from 2000 to about 1750. As a result, the standards were lowered, so that over 2600 was grandmaster, over 2400 was senior master and over 2200 was master. It was understood by 1956 that points had to be injected into the rating system to compensate for the points being taken out of the system by these rapidly improving young players. Modifications to the system were introduced by Professor Elo in 1960, but they did not adequately address the problem of rating deflation.
Since then, there has been an ongoing effort to put points into the system to balance exactly the points being taken out by the natural process of deflation. Bonus points, feedback points and even fiddle points have been introduced. The meddling of chess politicians has contributed to the problem: if the average player is allowed to gain rating points without any real improvement, players will be led to believe that they are getting better at chess. There has thus been a constant struggle between the forces of deflation and the forces of inflation.
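The mechanics of deflation can be seen in the standard Elo update, which is zero-sum between the two players of a game. The sketch below uses illustrative ratings and a K-factor of 32, not the USCF's actual parameters: an improving junior who is rated 1600 but already plays at 2000 strength wins more often than the formula expects, and so drains points from the established players he faces.

```python
# Sketch of the zero-sum Elo update and how an underrated,
# improving player drains points from the rating pool.
# Ratings and K-factor are illustrative, not USCF's actual values.

def expected_score(r_a, r_b):
    """Expected score for the first player, per the Elo logistic curve."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, score_a, k=32):
    """Zero-sum update: whatever the first player gains, the second loses."""
    delta = k * (score_a - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta

# A junior rated 1600 whose true strength is already 2000 plays
# one game against each of 20 established 2000-rated players.
junior, pool = 1600.0, [2000.0] * 20
total_before = junior + sum(pool)
for i in range(20):
    # He scores at his true (2000) strength, i.e. 0.5 against each
    # 2000-rated opponent, not the ~0.09 his 1600 rating predicts.
    result = expected_score(2000, pool[i])
    junior, pool[i] = update(junior, pool[i], result)

total_after = junior + sum(pool)
# Total points are conserved between the players of each game ...
assert abs(total_before - total_after) < 1e-6
# ... but the established players have collectively lost points
# without getting any weaker: deflation.
print(f"junior: {junior:.0f}, pool average: {sum(pool) / 20:.1f}")
```

The junior ends well above 1600 and the pool average ends below 2000, even though no established player's strength changed, which is the effect the bonus, feedback and fiddle points were meant to offset.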
Contributing to the problem has been the awarding of cash prizes to class players. In one event, a $5000 (five thousand dollar) prize was awarded to the top Class E player, a player with a rating below 1200. The masters object to this, saying that players are being rewarded for being weak and are discouraged from improving at chess. This has also led to a spate of 2600 players from Moldova and elsewhere coming over and establishing themselves as 1600 players.
Serious mathematicians have devoted hard work to the rating system trying to improve it, only to have their recommendations ignored.
Until now, reforms of the rating system have come about as follows: A group of committee members or board members or delegates sit around a room discussing this problem. Somebody raises his hand and offers a suggestion. After that, somebody else has a suggestion. After a long night, a vote is taken. Somebody's suggestion is approved. The new "reform" goes into effect.
Sometimes, it happens a different way. The rating statistician himself changes the system. Sometimes he forgets to tell anybody about it. Players get ratings which they were not expecting to receive, but are unable to question because they do not know how the ratings are calculated.
In about 1980, George Cunningham, a man with limited mathematical background, became the rating statistician. He introduced what he himself called "fiddle points". Everybody's rating went up about 100 points. Everybody seemed happy to learn that they had gotten a lot better at chess.
I believe that the rating system should be reformed. However, I do not have any specific proposals for change. Rather, I propose a scientific method which should be followed when considering and implementing any changes, as follows:
First, a large database of chess tournament results should be created. Thousands of tournament results have already been submitted to the USCF for rating. Unfortunately, what has happened in the past is that shortly after the result has been rated, it has been thrown in the trash. Yes, you heard that right: THROWN IN THE TRASH.
I understand that any tournament result rated before about 1990 has been discarded. I regard this as almost criminal destruction of corporate documents. I find it unbelievable that this could have happened, but it has happened.
What needs to be done is that important historical tournaments should be re-entered into the system. For example, the US Open, previously known as the Western Open, has been played every year since about 1900. The players have tended to be the same year after year and all of the results of this event have been preserved somewhere. All of these results should be entered into the ratings database.
After this has been done, we will have a corpus of tournament results going back 100 years. Then, with computers, we can start testing various rating formulas and systems. We can ask: if such-and-such had been the formula in 1900, what would the ratings look like now?
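A "what if" test of this kind amounts to replaying the corpus through a candidate formula, in chronological order, and inspecting the ratings that come out. The sketch below is hypothetical: the record format, the initial rating, and the candidate formula are all assumptions for illustration, not an actual USCF schema.

```python
# Hypothetical sketch of replaying a historical corpus through a
# candidate rating formula. The (white, black, white_score) record
# format and the K=32 Elo formula are illustrative assumptions.
from collections import defaultdict

def expected(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_k32(r_a, r_b, score):
    """One candidate formula: plain Elo with K=32."""
    delta = 32 * (score - expected(r_a, r_b))
    return r_a + delta, r_b - delta

def replay(games, formula, initial=1500.0):
    """Feed a chronological list of (white, black, white_score)
    records through a candidate formula; return final ratings."""
    ratings = defaultdict(lambda: initial)
    for white, black, score in games:
        ratings[white], ratings[black] = formula(
            ratings[white], ratings[black], score)
    return dict(ratings)

# Toy corpus: Alice beats Bob twice, then draws Carol.
corpus = [("Alice", "Bob", 1.0),
          ("Alice", "Bob", 1.0),
          ("Alice", "Carol", 0.5)]
final = replay(corpus, elo_k32)
print(final)
```

Swapping `elo_k32` for any other proposed formula and rerunning `replay` over the full 100-year corpus is exactly the kind of computer test proposed here.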
We can also answer some questions which have long been intriguing chess players. One question is: are the players better today, or are they about the same?
The answer to this question is not obvious. Of course, it is obvious that the players at the top are better, simply because there are more of them. However, is the average player who finishes the US Open with an even score of 6-6 today stronger than the player who finished with an even score 50 years ago, in 1950?
Some people think that it is impossible to know the answer to this question, but actually it is quite possible. For example, Grandmaster Paul Keres was approximately the number three player in the world from the mid-1930s until his death 40 years later. We know that his playing strength was about the same during that entire time. To take a lower-rated example, Dr. Ariel Mengarini was a master rated on average about 2250 from the early 1940s until his death in 1998, more than 50 years later. His individual results varied widely. I remember a time when his rating dropped to 2100; he was also as high as 2350 occasionally. Overall, however, he remained the same strength for more than 50 years.
If we take a few hundred players like that at all levels of play and run statistical tests on their results over a long period of time, we will eventually come up with a solid basis for reforming the ratings.
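The idea can be sketched as follows: if the "anchor" players' true strength really is constant, any systematic trend in their published ratings measures drift in the pool itself. The data below is invented for illustration; a real study would pull the anchors' rating histories out of the corpus.

```python
# Sketch of measuring rating drift with "anchor" players --
# players (like Keres or Mengarini) whose true strength is
# believed constant over decades. The histories below are
# invented for illustration.

def average_drift(anchor_histories):
    """Each history is a chronological list of (year, rating) pairs
    for one stable player. If the anchors' true strength is constant,
    the average trend in their ratings is pool drift in points per
    year: negative means deflation, positive means inflation."""
    slopes = []
    for history in anchor_histories:
        (y0, r0), (y1, r1) = history[0], history[-1]
        slopes.append((r1 - r0) / (y1 - y0))
    return sum(slopes) / len(slopes)

# Invented example: three stable masters whose published
# ratings slid downward over 30 years.
anchors = [
    [(1950, 2250), (1960, 2230), (1980, 2190)],
    [(1950, 2400), (1980, 2350)],
    [(1950, 2100), (1980, 2080)],
]
drift = average_drift(anchors)
print(f"estimated drift: {drift:+.2f} points/year")
```

With a few hundred anchors instead of three, and a proper fit instead of endpoint slopes, the same calculation would give a statistically solid estimate of deflation or inflation at each level of play.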
We should take as much data as we can. All tournament results provide data, including international tournaments. All this data should be preserved and not thrown in the trash as before. It should all be made available on the World Wide Web.
From then on, any time some brainy person suggests a way to change the rating system, tests should be run by computer to see what result this would have had. Eventually, all theories will have been tested, and informed, reasonable decisions can be made as to how to change and improve the rating system.
Furthermore, and most importantly, these results should be made available to the general membership. Every time a tournament is completed, the results should be entered into the USCF database using a format provided by the USCF on the World Wide Web. Through the Internet, the entire results of every tournament and the provisional post-tournament rating of every player should become immediately available.
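A submission format of the kind described could be as simple as a structured text record. The sketch below is purely illustrative: the field names, ID numbers and layout are invented, and the USCF would define the actual schema.

```python
# Hypothetical sketch of a machine-readable tournament report.
# All field names and ID numbers are invented for illustration;
# the actual schema would be defined by the USCF.
import json

report = {
    "tournament": "Example Open 2000",
    "sections": [{
        "name": "Open",
        "games": [
            {"round": 1, "white": "12345678", "black": "87654321",
             "result": "1-0"},
        ],
    }],
}

# Serialized once, the same record can be rated, published on the
# web, and archived permanently -- nothing thrown in the trash.
payload = json.dumps(report, indent=2)
print(payload)
```

Because the record is structured, the provisional post-tournament ratings can be computed and posted the moment it is received.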
Not only will this enable every player to see his or her new rating, but it will reduce the possibility of fraud. There have been several scandals in US chess involving the fraudulent submission of rating reports. The names of players who have not played in many years or who are even dead have appeared in tournament results along with the results of fictional persons created to get established ratings and then to lose a lot of points to those who are manipulating the ratings.
There are right now two scandals which are being investigated by FIDE. These include a Romanian player who gained the Grandmaster title and a FIDE rating of 2635, making him the number 33 player in the world, without playing any actual games at all, and a group of players in Burma who made the top ten in the world without anybody knowing how they did it.
If all these questionable results had been posted on the Internet as soon as they came in, they would have been spotted immediately.
None of the ideas I have suggested are new. I myself have been talking about doing this for years. I had a long conversation about doing this back in 1976 with the USCF rating statistician at the time. He rejected all of my ideas, as have many others since.
It is not only I who have these ideas. Richard Koepcke, a chess master (whom I once beat at the US Open in ten moves), is a programmer for Sun Microsystems, the inventor and developer of the Java programming language. He made a presentation to the USCF Policy Board proposing the development and introduction of a system almost exactly the same as the one I have long been suggesting. The Policy Board thought that this was really a great idea and said that a volunteer should be found to do the work.
This is one of the problems with the idiots we have had on our policy board. They have the USCF pay for their own air tickets to fly around, while insisting that everybody on the USCF staff use a Juno e-mail address because it is free, not recognizing that you get what you pay for.
Koepcke said that a fully functional system enabling tournament organizers to enter their results on the World Wide Web and get the ratings back immediately has already been substantially developed, and could be debugged and made fully functional in two weeks' time.
On the other hand, Koepcke also mentioned that no programmer of the quality needed to do this kind of work is going to be willing to do this work for free. The programmer is going to expect to be paid actual money.
The USCF spends $230,000 per year to run the US Closed Championship. It spends who knows exactly how much to fly USCF politicians to political meetings (a closely guarded secret). One of the biggest income items the USCF has is rating fees of ten cents per game for every game rated.
I believe that it would be a worthwhile expenditure for the USCF to pay for two weeks of a programmer's time (and not just any programmer, or somebody's brother) to complete this valuable and important work, which will benefit all chess players and lead to substantial improvement and reform of the chess rating system.