Netimperative reports on a new beta search engine specific to the UK - they're making some fairly large claims, not least of which is the following:
Seekport Internet Technologies’ managing director Joachim Kreibich said: “We can remove material from our index within half an hour of receiving a request. For European users this is a key issue, particularly as deletion requests from US search vendors typically have to be routed back through the US and it can take weeks for anything to happen.”
Only search engine to support its technology with local index teams in each country
They will show only 1 in 10 US results, as opposed to an average of 1 in 3 for US engines, when serving UK-specific searches
Its German site (it also has a French one: .de and .fr) says it's one of Espotting's largest customers
It's in Beta now, so go have a play and tell us what you make of it...
Threadwatch member Anthony has added an RSS feed to his superb DirectoryList.org - you can grab the feed here.
If you want to keep up with the seemingly thousands of new directories that pop up on an almost daily basis then that's one tool you'll most likely enjoy. You can find a discussion of the RSS addition at the seozip post threadlinked above.
Rice University Computer Scientists Find a Flaw in Google's New Desktop Search Program
The New York Times reports on a security flaw found in the Google Desktop Search tool, which already has a fix.
The glitch, which could permit an attacker to secretly search the contents of a personal computer via the Internet
In a statement over the weekend, the company said that it had been notified of the flaw by the computer researchers in late November and had begun distributing a new version of the desktop search engine that repairs the potential security hole
Will news like this damage Google's reputation, as similar flaws have damaged Microsoft's with IE?
Targeting Small Screens
Douglas Bowman's essay, threadlinked above, is a wonderful primer on the state of mobile browsing and how we, as web developers, might be able to accommodate the small screen whilst the medium is in flux.
By "in flux" I mean that, as Bowman points out, the handheld CSS media type is not supported on most mobile devices, and all manner of solutions and suggestions are cropping up on an almost weekly basis as to how to handle mobile content - even down to hosted top-level domains that detect mobile devices for you and handle rendering! It just ain't easy...
It's a technical article for those comfy with CSS, but everyone interested in mobile should at least have a skim - it'll be interesting even if you're still designing your sites with tables...
Clearly, this mobile browsing thing isn’t another flash in the pan. It’s here to stay and it’s only getting more popular. We need to account for the millions of mobile devices attempting to hit our sites. And we need to be designing and building our sites to work everywhere. This includes devices for people with disabilities as well as mobile and all other forms of utility browsers we haven’t even seen come to market yet.
We don’t want to go back to 1997, where we had to build different versions of our sites for each new browser that entered the market. True, in some cases, sites may need to be customized for the best mobile experience, and this may mean a completely different architecture, let alone HTML structure. But when we have the chance, we want to optimize the design or presentation of content based on what type of device is used to view that content.
Things need to change. It requires action from both browser makers and the designers and developers creating the leading-edge sites. Those sites set examples for everyone else. Neither side needs to think about whether or not to go first, they just need to go.
Scoble: Even I want an iPod.
You gotta admit Apple has done a good job of starting a market, but they've done that before and lost their lead. Personally, I see too many significantly cheaper (better?) alternatives to the iPod coming onto the market to think Apple can keep itself from being swept into the just-one-of-many category.
Digital Point's Cooperative Ad Network
Aaron Wall of SEOBook.com has a nice write-up, threadlinked above, of Shawn from DigitalPoint's Coop Ad Network, where members post code to their pages that displays links to other members' sites.
It's a little more complex than just that, of course; here's a snippet from Aaron's post:
Coop Ad Network Rating as Currency:
The Coop Ad Network rating is actually becoming a currency...
Where there is Value...
When other people sign up under your account you gain added network credits. Some people are sending out affiliate link embedded emails recommending the coop ad network.
Now, I know the network is working well because I read over at DP quite a bit, time permitting, but in an amazing coincidence glengara over at SEW posted a warning (apparently out of the goodness of his heart...) about the network just an hour or so later.
I have to say that such a network would indeed worry me. So the questions are:
Is the COOP Ad Network potentially dangerous with regard to Google/Yahoo! link scheme penalties?
Are all such networks to be avoided?
Is it a great idea that benefits everyone?
Tell us what you think of ad networks in general and specifically the COOP Network...
Calling All Advertisers
Forbes report on Fox's new mobile initiative via Vodafone.
Remember the cool TV series "24"? Of course you do... Well, they're going to run a series of mobisodes based on 24, with different actors, distributed and streamed initially via Vodafone's new 3G network in the UK from January 2005. Vodafone customers will just have to sign up to receive them - free.
This is the type of permission-based viral marketing that seems to fit mobile well - you could do this with practically anything by marrying entertainment and information with advertising. Fox get to promo the fourth season of 24, Vodafone get to parade their new network - yummy yummy in the consumer tummy, eh?
Something fishy with Google library project
In the threadlink above, king of the Google Conspiracy Theory™ Everyman aka Daniel Brandt makes some interesting observations as to how long it would take Google to complete their library project.
Let's run 24-hours a day (three shifts of temp workers at minimum wage!) and assume that the wizards at the Googleplex will never have any down time. How many days is this? 383,969 / 24 = 15,999 days.
How many years is this? 15,999 / 365.25 = 43.8 years. Even their cookie won't last that long!
Followed by NFFC quoting a Times article that raises the issues surrounding copyright and Google's new project:
There is, of course, a more worrying possibility. By the act of converting printed books to digital form Google will be creating a new copyright.
Works in the public domain will effectively be privatised. Whether or not Google chooses to exercise its rights, it and its library partners will be owners of the newly processed property. So the vast reservoir of material in the out-of-copyright public domain will become “proprietary”, or pay-per-view. If we get access, it will be because we are “allowed”, not because we have the right.
Daniel's points are interesting, but the Times piece's questions about copyright concern me far more than the mathematics and logistics of the task ahead.
John Battelle also had some interesting thoughts on monetization of Google Library and Google Print:
In other words, this could well be a step toward diversifying Google's revenue streams away from advertising and into direct sales and/or subscriptions - ie, the content business. As one source who is familiar with the industry tells me, Google is not doing this only out of the kindness of its heart - there is a lot of money to be made in selling books, in particular books with no copyright.
Comment Spammers Have Blogs of Their Own
In the threadlink above, Jeremy Zawodny of Yahoo is talking about solutions to the ever-increasing blog spam problem. Recently SixApart, makers of MovableType, have been experiencing server load problems due to the voracious appetite of spambots employed by hardcore search marketers (spammers) in an effort to get top rankings in competitive areas.
Jeremy won't link to the spammer's site - maybe he means DaveN's site?
Jeremy's solution is this: assuming that 80% of bloggers use the same major blog software and that 80% don't change the default templates, just have search engine spiders look at the code and differentiate between the original post and the comments. Don't count comment links at all.
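To make the idea concrete, here's a minimal sketch of what such a spider-side filter might look like. It assumes a blog template whose comments live in a `<div class="comments">` container - a made-up stand-in for whatever footprint a real default template leaves - and simply files links found inside that container separately so they could be ignored for ranking:

```python
from html.parser import HTMLParser

class CommentLinkFilter(HTMLParser):
    """Separate links in the original post from links left in comments.

    Assumes the (hypothetical) template wraps all comments in
    <div class="comments">; a real spider would need a catalogue of
    such footprints, one per popular blog package.
    """

    def __init__(self):
        super().__init__()
        self.depth = 0           # div-nesting depth inside the comments block
        self.post_links = []     # links the author placed in the post
        self.comment_links = []  # links left by commenters (to be discounted)

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "div":
            if self.depth > 0:
                self.depth += 1          # nested div inside comments
            elif a.get("class") == "comments":
                self.depth = 1           # entering the comments block
        if tag == "a" and "href" in a:
            bucket = self.comment_links if self.depth else self.post_links
            bucket.append(a["href"])

    def handle_endtag(self, tag):
        if tag == "div" and self.depth:
            self.depth -= 1              # leaving a div inside comments
```

Even this toy version hints at the cost: every crawled page needs an extra parse pass against every known template footprint, which is part of why the computational-overhead objection below has some teeth.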
Why that isn't a great idea...
I think Jeremy's solution is a poor one for a few reasons:
It will kill a good many great links - Comments are used for discussion of the original post, and as here on Threadwatch, the discussion that follows often produces some outstanding links to great resources that the original poster never knew about. I'd hate to see those sites not get the full benefit of a link from us.
Computational overhead - I'm no search engineer, but I'm reasonably certain that comparing code on pages to look for MT (or other blog) footprints and then weeding out the comment links would require a fair bit of extra computation, and this may not be doable from an SE standpoint.
I'm not convinced that the search engines should be responsible for finding a solution - I'm not saying it's SixApart's fault, just that they're in a better position to find a solution to this.
So, What's on the Table?
I think the solution lies with the software producers, and that the company that comes up with the best solution and can demonstrate figures to prove it works will have an excellent selling point for their product. As it stands, MT's MT-Blacklist is crap: it's a constant "bucket and bail" effort that's reactive rather than proactive and falls way short of deserving the label "solution". Other blog platforms, such as bBlog, have implemented captchas - where you have to enter the digits shown in a graphic to comment - which is better, but it's not unbreakable.
CrispAds for Blogs, RSS & Atom Feeds
I first saw CrispAds, a new contextual ad network, a week or so ago and didn't really think it too newsworthy at the time. Since speaking to the Queen of Contextual, Jennifer Slegg of Jensense, though, I've finally gotten my head around it.
Jen knows I'm a simple chap and can explain things to me on a level I understand (not too many syllables...)
From Jen's post, threadlinked above:
CrispAds - text ads for blogs as well as RSS and Atom feeds, are jumping on the contextual advertising bandwagon, but by offering something that AdSense currently does not - the ability to advertise on RSS feeds.
From the advertiser perspective, it offers advertisers a very tight niche community - blog writers and readers - and the ability to market related products to them. CrispAds does not disclose what the CPC is for advertisers until after signing up (you are charged an initial $5 for credit card verification).
After a quick IM session I think we've both agreed to be "unsure" but kinda keen on it :)
I've noticed that many tools like "GooSug scraper" pull data from Google by running through proxy servers. Although I understand the method conceptually, I would appreciate knowing more about what kind of specific resources are available. Can anyone suggest any proxy services (free or paid), software, on-line tutorials, etc.?
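For what it's worth, the basic mechanics of routing a request through a proxy can be sketched with nothing but the standard library - the proxy addresses below are made-up placeholders, so you'd substitute whatever free or paid proxies you turn up (free lists rot quickly; paid services tend to be more reliable):

```python
import urllib.request

# Hypothetical proxy addresses -- replace with real ones.
PROXIES = ["203.0.113.10:8080", "198.51.100.7:3128"]

def fetch_via_proxies(url, proxies):
    """Try each proxy in turn until one returns the page."""
    for proxy in proxies:
        handler = urllib.request.ProxyHandler({"http": f"http://{proxy}"})
        opener = urllib.request.build_opener(handler)
        try:
            with opener.open(url, timeout=10) as resp:
                return resp.read()
        except OSError:
            continue  # dead or blocked proxy, move on to the next
    raise RuntimeError("no working proxy")
```

Tools like the scraper mentioned above presumably rotate through a much larger pool than this, precisely so no single address hammers Google hard enough to get blocked.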
I'm sure it can't be just me that's beginning to find the enormously over-inflated hype surrounding blogging somewhat amusing. In fact, I know it's not. Simon Waldman of the Guardian recently posted about the niche "industries" springing up around blogs and blogging. I couldn't help but read a little humour into his post. Follow the title link above for the full post.
Your market holds the final answer
Here's a short but interesting discussion between Dana Van Den and Jay Lipe on how companies should be looking to engage their customers and market to them according to what they discover. You may have noticed, if you read Threadwatch regularly, that many of the posts here recently touch on the general theme of community, discussion and feedback as it relates to marketing online. It's not just because I'm interested in this, you know... :-) The trends developing in blogs, open source marketing, citizen journalism and social networking are where the web as we know it is headed.
Pay attention at the back!
Interactive dialogues between customers and vendors, fueled by blogs, can help both parties have a say in the marketing process.
In the end, the flavor that the customer creates (and likes) is the right one.
(Without breaking NDAs, this'll have to be generic)
I'm now in the contract-writing stage of selling a group of my domains that are deep-content local sites. Though they are repurposing the domains' primary target, their market is allied with the current userbase so there is no need to abandon the content and ~perhaps~ the new owners will be able to continue some portion of the revenue, though that is not mission-critical. It just seems a waste to walk away from something that is working and I'm willing to put some time into the transition.
I hate the phrase, but this is a win-win situation and I'd like to see the relationship go forward after I've turned the sites over to them, but so often I see buy-outs go sour in the transition. Also, I have some content on the domains that they are acquiring that will need time to move in the SEs, so I'm going to need to be involved for a while. So, what are the things that need to be ironed-out beforehand, i.e., in the contract?
Assume you're buying a strongly branded set of domains in a market niche. What would you want/expect from the seller?