Why Using A Static IP Address is Beneficial... Google Engineer Explains
It's very interesting, gives some insight into Google's spidering architecture and makes damn fine sense from both a business and an SEO point of view, but it does have some subtle inaccuracies I want to cover to save confusion.
The article talks about how and why a dedicated IP address for a website is beneficial to SEO compared to what Barry calls a dynamic IP address.
Sorry to pull you up on this, Barry, but I think you mean a dedicated IP address per hostname compared to a shared IP address per hostname. A dynamic IP address, one that changes regularly, relates more closely to the way an individual connects to their ISP than to how you host a website.
It is possible to host a website/hostname on a dynamic IP address, though, using either your own DNS with a low TTL or commercial services (I'm pretty sure there are some free ones as well) such as www.dynip.com, no-ip.com etc., but this is unusual and unlikely if I understand the context correctly.
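For anyone curious what the low-TTL approach actually involves, here is a minimal Python sketch of a dynamic-DNS update client. Every URL and parameter in it is a placeholder I've made up for illustration; real services (no-ip.com, www.dynip.com etc.) each have their own API, so check their docs rather than copying this verbatim.

    import urllib.request

    HOSTNAME = "www.example.com"                      # site being hosted on a dynamic IP
    UPDATE_URL = "https://dyndns.example.net/update"  # hypothetical update endpoint

    def current_public_ip():
        # Ask an external "what is my IP" service (again, a placeholder URL).
        with urllib.request.urlopen("https://ip.example.net/plain") as resp:
            return resp.read().decode().strip()

    def push_update(ip):
        # Tell the DNS provider to point HOSTNAME at the new address.  Because
        # the zone has a low TTL, resolvers pick the change up within minutes.
        with urllib.request.urlopen(f"{UPDATE_URL}?hostname={HOSTNAME}&ip={ip}") as resp:
            print(resp.status, resp.read().decode())

    if __name__ == "__main__":
        push_update(current_public_ip())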
I hope this doesn't become too boring, but I'll try to explain what HTTP 1.0 and HTTP 1.1 mean in relation to IP addresses from a technical standpoint, and then hopefully the article will make a lot more sense.
Many moons ago every hostname (think domain name for now) mapped directly to a single IP address. As more and more services, people and websites started appearing on the new commercial internet, it became apparent that the limited pool of IP addresses could run out, and ways of preserving them were thought up.
IPv6 became the longer-term answer, but in the short to medium term there was an addition made on top of the HTTP 1.0 specification that basically said: lots of websites can share a single IP address, and the only change needed is that the browser asks for a specific hostname as well as connecting to the IP address, and the server understands this new request. This new specification, HTTP 1.1 name-based hosting, became the norm (sensibly, from an IP addressing point of view) for hosting websites, and you will now see most websites sharing their IP with many many MANY other websites.
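To make that concrete, here's a little Python sketch of the two styles of request. The IP and hostname are documentation placeholders rather than a real shared-hosting box, so read it as an illustration of what goes over the wire, not something to point at a live server.

    import socket

    SHARED_IP = "203.0.113.10"   # placeholder IP standing in for a shared hosting box

    def fetch(request):
        """Send a raw HTTP request to the shared IP and return the start of the reply."""
        s = socket.create_connection((SHARED_IP, 80), timeout=5)
        s.sendall(request)
        reply = s.recv(1024)
        s.close()
        return reply

    # Old style: the server only knows which IP it was contacted on, so it can
    # only sensibly serve one website per IP address.
    print(fetch(b"GET / HTTP/1.0\r\n\r\n"))

    # HTTP 1.1 name-based hosting: the Host header names the site we want, so
    # hundreds of hostnames can happily share the one IP.
    print(fetch(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n"))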
Fast forward to now :)
Now, IP addresses themselves don't directly cost ISPs a penny. They apply for them to RIPE, the organisation that administers who owns which IP addresses, and as long as you fulfil their criteria you get given a new batch. The downside is that the paperwork and hassle factor DOES have a cost for an ISP, and many will simply not allow you to have more IP addresses for that reason.
On top of this, RIPE's / ARIN's etc. policy runs along the lines that HTTP 1.1 web hosting is the desired and sensible way to allocate IP addresses, and requesting IP addresses solely for websites is not looked upon favourably when HTTP 1.1 name-based hosting is the "correct route". I do not believe SEO reasons will persuade RIPE (and therefore your ISP) to assist in getting an allocation.
What will help, though, is if you decide that every one of your websites should run in a secure SSL (https://) manner as well as the insecure (http://) way. The reason for this is that every SSL certificate needs to have a dedicated IP address, and HTTP 1.1 name-based hosting will not work with it. Don't worry about the costs as they are minimal in the larger picture (from about £20 per annum at the moment), and you can reduce that to nothing if you self-sign a certificate (too techy for here but manageable for most system admins).
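For the curious, here's a rough Python sketch of why the certificate and the IP end up tied together: the certificate is handed over during the TLS handshake, before the browser has said which hostname it wants at the HTTP level. The hostname below is a placeholder, and later TLS extensions (SNI) do relax this, so treat it purely as an illustration of the ordering.

    import socket
    import ssl

    HOST = "secure.example.com"   # placeholder hostname for illustration only

    raw = socket.create_connection((HOST, 443), timeout=5)
    ctx = ssl.create_default_context()

    # Step 1: the TLS handshake.  The certificate arrives here, before any HTTP
    # data (and therefore before the Host header) is sent.  Without the later
    # SNI extension the server knows only the IP/port it was contacted on, so
    # it can only present one certificate -- hence one certificate per IP.
    # (server_hostname below is what modern Python uses for certificate checks.)
    tls = ctx.wrap_socket(raw, server_hostname=HOST)
    print("certificate subject:", tls.getpeercert().get("subject"))

    # Step 2: only now does the HTTP request, Host header and all, go over the wire.
    tls.sendall(b"GET / HTTP/1.1\r\nHost: secure.example.com\r\nConnection: close\r\n\r\n")
    print(tls.recv(200))
    tls.close()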
Now, with all that aside, back to Barry's piece and the way that G's spiders work. It confirms what many of us have thought for years but, as far as I can recall, has never publicly been confirmed.
The spiders find links to your site and keep trying to get the content with a gradually increasing, technologically aware spidering system.
They start at HTTP 1.0 with basic parsing ability and gradually climb the list of HTTP and HTML specifications until they understand the content inside the page. I think it is fair to say (although it doesn't say this in the article) that eventually MozGoogleBot thingymajig will come along and parse all the content on the page, including JS (though probably not VBScript), Flash etc. etc.
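Purely to illustrate the idea, and emphatically not Google's actual code, here's a toy Python sketch of a crawler that climbs a ladder of parser capability, only reaching for a heavier parser when the simpler one can't make sense of the page.

    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        """Tier 2: strip the markup and keep the visible text."""
        def __init__(self):
            super().__init__()
            self.chunks = []
        def handle_data(self, data):
            if data.strip():
                self.chunks.append(data.strip())

    def parse_plain(body):
        # Tier 1: treat the page as plain text; give up if it looks like markup.
        return None if "<" in body else body

    def parse_html(body):
        extractor = TextExtractor()
        extractor.feed(body)
        return " ".join(extractor.chunks) or None

    def parse_scripted(body):
        # Tier 3 placeholder: a real crawler would execute JS, read Flash, etc.
        return parse_html(body)

    def spider(body):
        # Climb the ladder: stop at the first tier that yields usable content.
        for tier in (parse_plain, parse_html, parse_scripted):
            content = tier(body)
            if content:
                return content
        return None

    print(spider("<html><body><p>Hello, crawler</p></body></html>"))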
Interesting, and I for one wanna thank Barry for this info that seems to have missed the mainstream (here at TW at least), and also Nick for chucking in the delicious thing on the right, and whoever tagged the story :)