I've been traveling, back for a week and out of the country next week, so my postings may lag a bit until I return and catch up.
Television shows often live or die based on their Nielsen ratings, primarily because the networks can charge higher advertising rates on higher-rated shows. Because of this, TV producers go to great lengths to manipulate their ratings with hyped advertising and "cliffhanger" episodes timed for the ratings cycle.
The true equivalent of the Nielsen rating for journals, at least in terms of advertising revenue, is the circulation volume. However, in the world of scientific publications, it's the Science Citation Index (SCI) Impact Factor (IF) that gets much more attention. The IF was created by Eugene Garfield in 1960. On the surface, the IF seems like a simple thing. In a given year, the IF of a journal is the average number of citations received per paper published in that journal during the two preceding years. If a journal published 250 articles in 2006 and 250 articles in 2007, and these articles were cited a total of 2000 times in 2008, then the IF for the journal for 2008 would be 2000/(250+250) = 4.0.
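The arithmetic above is simple enough to sketch in a few lines of Python (the function and variable names here are my own, purely for illustration; this is just the ratio described above, not the official calculation procedure):

```python
def impact_factor(citations_this_year, papers_prev_two_years):
    """The IF for a given year: citations received that year to papers
    from the two preceding years, divided by the number of papers
    published in those two years."""
    return citations_this_year / papers_prev_two_years

# The worked example from the text: 250 articles in 2006 plus 250 in
# 2007, cited a total of 2000 times in 2008.
if_2008 = impact_factor(2000, 250 + 250)
print(if_2008)  # prints 4.0
```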
Not surprisingly, IFs vary tremendously according to medical specialty. Also not surprising is the fact that editors can adopt policies that affect the impact factor. Review articles, for example, are typically cited more frequently, so including reviews is likely to raise a journal's IF. "Self-citation" is another way to raise the IF. Articles published early in the year are available for citation for a longer time, so placing review articles in the January and February issues is particularly effective.
Interestingly, not everything published in a journal is a "citable item," meaning not everything goes into the denominator of the IF calculation. Editorials and abstracts are NOT considered citable items, so they do not figure into the denominator. HOWEVER, when editorials and abstracts are cited, those citations still count in the numerator. Thus, a journal that publishes large numbers of abstracts once a year (you know who you are!) gets no "penalty" for publishing them and gets "credit" every time one is cited.
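This numerator/denominator asymmetry is easier to see with a toy calculation (the counts below are invented for illustration; only the item categories come from the discussion above):

```python
# Hypothetical journal content from the two preceding years.
items = [
    {"kind": "article",   "count": 400, "citations": 1600},
    {"kind": "review",    "count": 100, "citations": 900},
    {"kind": "editorial", "count": 50,  "citations": 100},  # not "citable"
    {"kind": "abstract",  "count": 500, "citations": 250},  # not "citable"
]

CITABLE = {"article", "review"}

# Numerator: citations to EVERYTHING the journal published,
# citable items or not.
numerator = sum(i["citations"] for i in items)

# Denominator: only the "citable items" are counted.
denominator = sum(i["count"] for i in items if i["kind"] in CITABLE)

print(numerator / denominator)  # 2850 / 500 = 5.7
```

Counting every citation but only some items means the 550 editorials and abstracts in this toy example cost the journal nothing while adding 350 citations to the top of the fraction.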
More recently, Thomson-Reuters, the current owner of the "impact factor" has begun publishing a 5-year as well as a 2-year impact factor. It can be argued that the 5-year IF is a better reflection of long-term as opposed to the "flash in the pan" value of a publication.
There have been many arguments raised regarding the validity and reproducibility of the IF, and these are discussed in a short Wikipedia review.
Other metrics have been proposed to better evaluate "impact." These include "immediacy index," "eigenfactor," "aggregate impact factor," "page rank" and others.
It's important to remember that the IF measures how often an article is cited in subsequent publications over a relatively short period of time. It DOES NOT measure how often an article is read or how valuable the journal's readership considers the article to be.
At a Lippincott Williams & Wilkins sponsored symposium that I attended last fall, a representative from Thomson Reuters discussed the IF and what it does and does not do. Her conclusion was that editors should publish the "best" articles that they can for their readership and let the IF fall where it may.