Google's look into document scoring by historical data
UPDATE: After following the comments below and taking WG's advice to read the whole document, a little title change was necessary. As always, the meat is in the comments, so read on....
Brian sent me a note about this thread at SEW, started by MSGraph, which points to this patent filed by Google. It would appear that this document confirms what many have suspected (or known...) for some time regarding Google's infamous Sandbox.
The interesting bit, apart from those individuals credited with inventing the system, is this:
Consider the example of a document with an inception date of yesterday that is referenced by 10 back links. This document may be scored higher by search engine 125 than a document with an inception date of 10 years ago that is referenced by 100 back links because the rate of link growth for the former is relatively higher than the latter. While a spiky rate of growth in the number of back links may be a factor used by search engine 125 to score documents, it may also signal an attempt to spam search engine 125. Accordingly, in this situation, search engine 125 may actually lower the score of a document(s) to reduce the effect of spamming.
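The patent describes the idea only in words and gives no formula, so here's a purely hypothetical sketch of what that logic might look like. Everything in it (the links-per-day rate, the spam threshold, the penalty factor) is my own assumption, not anything Google has disclosed:

```python
# Toy illustration of the patent's idea: rank by back-link growth rate,
# but dampen suspiciously spiky growth. The rate formula, threshold,
# and penalty below are invented for illustration only.

def link_velocity_score(back_links: int, age_days: int,
                        spam_threshold: float = 50.0,
                        spam_penalty: float = 0.5) -> float:
    """Return a toy score based on back links gained per day."""
    rate = back_links / max(age_days, 1)  # links per day since inception
    if rate > spam_threshold:
        # A spiky growth rate may signal link spam, so lower the score.
        return rate * spam_penalty
    return rate

# The patent's example: 10 links in a day can outscore
# 100 links accumulated over 10 years.
new_doc = link_velocity_score(back_links=10, age_days=1)      # 10.0/day
old_doc = link_velocity_score(back_links=100, age_days=3650)  # ~0.03/day
```

With these made-up numbers, `new_doc` scores higher than `old_doc` despite having a tenth of the links, while a document gaining thousands of links overnight would trip the spam check and get its score cut.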
Don't gain links too fast!
Well, you knew that, right? But this doesn't go all the way to explaining the Sandbox; there was a little workaround not so long ago, until Google nuked it, allegedly taking 1.2 million small businesses out as collateral damage in its attempt to thwart a handful of spammers. It's an interesting read nonetheless (yes, I *did* only read the interesting bit :-) and it will no doubt go some way to helping all those poor MF's mired in the dreaded Sandbox.
Feel free to correct me if I got any of that wrong; I should be banned from writing techy stuff....