Google News under Fire by French Press

37 comments
Thread Title:
Agence France Presse sues Google over news site
Thread Description:

CNET and The Inquirer are reporting that Google are being sued for $17.5 million by Agence France Presse for aggregating their news stories on Google News.

Google have been in hot water with France for some time now; just recently the French President initiated a European rival to Google Print.

Non, non!

Comments

Google will lose it

The only thing that might prevent it is that the case is filed in California. If it was filed anywhere in Europe they would most definitely lose it - others have already lost similar cases.

I am still looking for just one brave (large) company that wants to challenge search engines' claimed right, in general, to steal our content, manipulate it and serve it up (in part) on their own sites only to make money off it. Basically I still don't find that to be totally fair. At least, the way it works today it is definitely far from a "fair deal" - and publishers are never asked if they want to take part in the deal. Yes, you can opt out with robots.txt, but there is hardly any other place in law where you have to opt out in order not to be abused. In the cases I know of in Europe, robots.txt has actually more or less been laughed out of the courtroom with arguments such as: I don't have to wear a sign telling anyone not to beat me up - the law protects me from that, and from many other kinds of abuse.

Most of life is not "opt-in"

Your phone company undoubtedly adds your listing to its phone book unless you ask it not to.

Most organizations will include your name on their internal member roster unless you ask them not to.

In school, you'll be in the yearbook unless you ask not to be.

At your company, you'll be in the company directory unless you ask to be removed (and they'll really look at you funny).

When you write a letter to the editor of your local newspaper, it's probably going to be available in LexisNexis and other databases for all eternity.

By placing content on the PUBLIC World Wide Web, without a password or other authentication, you're clearly saying (IMHO): "Hi, I'm here, please index me." And frankly, I fail to see a significant problem with that.

If Google and other sites were to flagrantly disregard robots.txt, I think folks'd have a case. As it stands, given Europe's often paranoid-schizophrenic approach to privacy (in contrast with the U.S. wacko you've-got-no-privacy-at-all extreme), I wouldn't be surprised if Google *DOES* get spanked.

But it'd be a lousy decision. Horrible for consumers and possibly devastating for Google.

There's a fundamental difference ...

... in that you have legally binding (and reviewable) contracts with phone companies, schools, business bureaus, etc., all of which are regulated by law.

You've got no such thing with the search engines (PFI and PPC aside, of course). Simply because the WWW is "public" (whatever that may mean - or however it may translate into legalese) doesn't grant anyone license to steal other people's copyright protected content.

If you put up a billboard on a public road, it's viewable by the "public", too (as this is obviously the whole point of this exercise) - you may even photograph it for private usage, quote or reprint it within limits in the course of academic research, etc. Yet, you don't own it or its contents - the minute you start distributing it without the rightful owners' permission you're in deep water legally.

Moreover, AFP said that they explicitly asked Goo to stop displaying their material, without their reacting in any way. If this should prove to be true, they've probably won their case even before it starts. And it would constitute a flagrant disregard of owners' legally protected intellectual property. This isn't a mere misdemeanor, either: it's a criminal act, period. (Again: we only have one side of the story available currently, so let's give Goo - however grudgingly - the benefit of the doubt until proven otherwise.)

Mikkel's road example is 100% to the point: we, as webmasters, don't need to opt out of anything - it is they, the search engines, that have to ask us for permission if they want to monetize on our content. Like it or not, that's the way the capitalist agenda is organized. And so are libertarian economics and free enterprise.

Europe's "often paranoid-schizophrenic approach to privacy" has nothing to do with it. (In view of Europe's long, ignominious history of political and warmongering strongmen and nanny/Big Brother states, the EU's currently prevailing privacy policy is, if anything, certainly not paranoid - it's sheer realism. However, it's also quite hypocritical - and, thus, I agree, schizophrenic -, but that's another story ...)

Adam

By going out on the public highway, anyone's children are clearly saying: "Take pictures of me and use them on your affiliate zit-removal site". Or not?

By putting your content and photos on your site, you are clearly saying to others: "Copy them and replace me in the search engine listings as duplicate content". Or not?

The reason why AFP is suing, and I strongly support their case, is the same reason why there are photographic rights and model releases. I appreciate that many bloggers have a rather lax approach to the use and abuse of other people's copyrighted work, but there are actually more important legal principles here than "you shouldn't have left it on display".

(Which, somewhat ironically, is the same argument that you and others fulminate against in the blog spam discussions...)

So, I'll be round your house with the digital camera on Tuesday, OK?

Good post, fantom!

Adam, I think your weak arguments have already been killed by previous posters just fine, so I won't spend more time going over all your wrong assumptions.

I used to run my own publishing company for years, so I do know copyright law here (in Denmark) VERY well, and I am almost 100% sure that any search engine would today lose a case on the way they work: stealing webmasters' content with no agreement and reusing it for financial gain. Caching and image search would be easy to win on, and even the regular index could be won with the right legal team and knowledge.

The copyright laws of Denmark are to a great extent built on the same international treaties that most other European countries' laws (and some outside Europe) are built on. So even though I am no expert in such law outside Denmark, I am pretty sure that similar cases could be won there.

Adam, you may not like that, but it is still a legal fact. Your arguments just don't count for anything in such a case.

Google came here and asked

members of this site not to scrape their index. There was no lawsuit, but GoogleGuy made it clear that the company did not want people in this forum to use a tool to take information from their site.

I think, what is good for the goose.....

Quote:
I would ask that it not scrape Google. Doing so is clearly outside Google's guidelines: http://www.google.com/webmasters/guidelines.html#quality
(the part about software querying Google)

It's also a load on our servers from bots rather than from the people for whom our services are intended. In addition to being rude and using our services without permission, it may well open you up to liability. Again, this is just a courtesy heads-up that such software would violate our guidelines. Please consider this a polite request not to produce software that scrapes Google without our permission.

Adam, you must also realise that copyright works the other way around, one has to ask permission to use someone's copyrighted material. That is the law around the world, not only in the US.

Just before I left New York on Friday evening, I noticed that FuxNews were reporting the story as "the French hate us", rather than asking whose copyrighted material was being used.

Google sued by French Press

Agence France Presse is suing Google for syndicating their news stories on Google News. I found out about this via this post at ThreadWatch. See also this CNet News.com article. This lawsuit cuts to the heart of fair use and...

How AFP can Win

Dana Blankenhorn has absolutely outdone himself on a couple of posts in the last few days, this one on how AFP can win the case is a classic:

Quote:
For instance, Google News does not believe Corante is a news site. This story will not be spidered there. This is an editorial judgement.

The plaintiff can show Google News is arbitrary in these judgements. For instance, Exhibit A is on Google News today, a speculative story on the new Treo from EnGadget. Google thinks this speculation is newsworthy.

Note that EnGadget does not kill trees. EnGadget is a blog site, just as this is a blog site. It is no different in fact from Corante. But Google News has made an editorial decision to spider EnGadget, and not to spider Corante. This is an arbitrary decision. The source and process behind this decision is closed. But it is enforced, technically, so Google's response that it has no technical means to grant the plaintiff relief is false.

and

Quote:
So you see, your honor, Google News can make editorial decisions. It does make editorial decisions. Google News decides, and enforces, editorial decisions on which sites it thinks publish news and which don't. Its process for doing this is opaque, not visible to those affected by it, and thus arbitrary in nature.

Funny

It is kind of funny: not so long ago Google defended the opposite principle in the SearchKing case - that they had the right, under freedom of speech, to make any kind of adjustments or penalize sites they just don't like. Now it seems they will argue they have no editorial control. Make up your mind, Google :)

Goo opportunism

As we all know, it's not about "principles" (moral, ethical, take your pick); it's about making money, period. I.e. utterly opportunistic. "Do no evil", he he ...

Just a few thoughts

Appropriate outcomes are not always the same as possible or likely outcomes. As I suggested above, I think Google may get spanked. But I strongly believe they should not.

---

Good point re: reviewable contracts. No one really "owns" the Web, so there's no entity to make a contract with in this context.

---

AFP says they asked Google to stop. Google said that publishers can opt out anytime. What we have -- and what, I think, hasn't been communicated well in this thread -- is Google News indexing articles posted by AFP *affiliates*. AFP's got a problem with that? Have them take it up with their affiliates. Their affiliates are the ones breaking a contract, not Google. Robots.txt -- easy, effective, low-fat...

---

One could argue that including snippets and thumbnail photos is fair use, at least by American law. Google's not republishing entire news stories (nor is it, to my knowledge, caching them either). Is this any different than other aggregators, such as RottenTomatoes.com, which takes snippets from movie reviews? For crying out loud, Google links to the original article. What is AFP's problem? (literally and figuratively speaking)

---

"By going out on the public highway, anyone's children are clearly saying: "Take pictures of me and use them on your affiliate zit-removal site". Or not?"

Depends on the scope. If the kids are pictured in a public place and there aren't closeups, then I'd say the company's likely not in hot water. "73% of children today suffer from acne" [cut to photo of kids walking off a school bus in the distance].

---

You put your stuff on the Web (HTML, music, PDF, whatever), it's fair game for quoting. Again, that's American law, at least... I can't speak for European law.

---

By the way, if you're gonna link to someone, at least don't be disingenuous about their stance. While Dana does think that the AFP case is winnable, he starts off his entry with a rather telling statement: "As I noted yesterday Agence France-Presse's suit against Google News is silly."

I'd add to that: misguided and obnoxious. AFP's asked Google to stop? Yeah, right. As Dana points out, there's nary a robots.txt file to be found on the 'stolen' documents. So AFP is either a liar or clueless or both. Any sensible judge or jury should have zero sympathy for 'em.

 

AFP says they asked Google to stop. Google said that publishers can opt out anytime. What we have -- and what, I think, hasn't been communicated well in this thread -- is Google News indexing articles posted by AFP *affiliates*. AFP's got a problem with that? Have them take it up with their affiliates. Their affiliates are the ones breaking a contract, not Google. Robots.txt -- easy, effective, low-fat...

Frankly, your logic beats me: AFP's clients are paying through the nose to get those newsfeeds on their sites. (Guess how they probably came up with that $17.5 million figure in the first place – something they will have to substantiate in court, after all.) If they don't protect their pages via robots.txt, this may perhaps be dumb - but does it put Goo in the right? And, probably even more important: does it put the AFP licensees in the wrong? Why should it?

Ok, as of now we don't really know what the suit is supposed to be about in detail (and, just as an aside, neither does Dana Blankenhorn, so judging it "silly" based solely on the paltry evidence at hand, i.e. on a mere news snippet or two aka hearsay, is in itself a sublime example of editorial silliness ...). Is it about stealing massive chunks of copyright-protected material, or about legally quoting by way of "fair usage"?

Though the latter is by no means a given, mind you. E.g. I doubt that even a fairly liberal-minded U.S. court would necessarily subscribe to the view that extensively quoting news feeds on a regular, probably hourly or so, basis is still protected by fair-usage principles. There is also the potential issue of what exact status a news item's headline has. After all, many if not most people probably read little else, at least as far as the majority of news items is concerned. For one, headlines are copyrighted to a large extent (unless fairly generic) - you cannot just go ahead and cut and paste them on top of your own piece with impunity.

So, provided this doesn't end in an out-of-court settlement (highly likely), chances are it will turn out to be a very interesting case indeed, more probable than not setting a precedent on a number of scores, at least within the U. S. jurisdiction.

While it may admittedly be of all-decisive importance to this particular case, I don't actually know whether they cached those particular pages, and frankly, I couldn't care less: even if not, they're caching practically everything else in sight, and that, IMV, is a blatantly illegal act - and no benefit of the doubt required as far as this issue goes.

But no matter how allegedly flagrant the violation...

"Frankly, your logic beats me: AFP's clients are paying through the nose to get those newsfeeds on their sites."

Indeed, and that's why I'm puzzled that my logic isn't more clear here.

AFP undoubtedly makes a contract with publishers/clients along the lines of: "In exchange for lots o' moolah, here, have access to these articles. DON'T SHARE THEM WITH ANYONE!!!"

The publishers then turn around and say "Hey Google! Hey Yahoo! Hey everyone... want to use excerpts of the stories on our site? Go right ahead, we don't mind!" (and yes -- while I realize some here may disagree -- not having a robots.txt file is akin to telling the search engines to grab and use snippets of your content)

A logical AFP would scream at the publishers: "You morons! We gave you permission to USE, not SHARE those articles! You owe us damages for not adequately protecting the content that you got from us." Some heads would roll at the publishers ("Uh, Mr. Webmaster... just how stupid ARE you for not adding a 5 line robots.txt file?!"), they'd fork out some dough, the articles'd be blocked by Google, and life would then be hunky dory.
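For scale, the "5 line robots.txt file" really is about that size. A minimal sketch - the /wire/ path is a hypothetical placeholder for wherever a publisher keeps its licensed feed copy:

```
# robots.txt - hypothetical example; "/wire/" stands in for the feed section
User-agent: Googlebot
Disallow: /wire/

User-agent: *
Disallow:
```

That's the whole file: Googlebot stays out of the feed pages, while every other crawler - and every human reader - is unaffected.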

But no, AFP decides to greedily go for the deep pockets. And, let's face it, that's all it's about here... AFP wanting some extra cash.

And, reiterating my earlier point, no sane court is going to have sympathy for such a blisteringly obvious, simple, and cost-free self-help remedy. "Let me see if I understand this, Plaintiff: You knew that Google was publishing headlines of your content from your publishers' sites, and you knew that the publishers just had to add a snippet of code to their sites, yet instead of enforcing that simple action contractually, you decided to sue the party with deep pockets."

Not quite as simple

I'm afraid it's more complicated than that: AFP's clients may or may not be at fault in terms of contractually specified due diligence etc., but the copyright remains AFP's and the (purported) violator remains Goo. And if Goo is indeed in violation, according to your rationale it's a free-for-all to go after both: collect damages from their clients and cash in on Goo proper for actually perpetrating the violation. The best of two pockets, wouldn't you say?

Also, if AFP does not go after Goo, they risk setting a precedent themselves, after which it will be a lot harder to pursue other parties' violations. ("Why didn't you take on Goo when you had the chance? Are you discriminatory, or what?") You see a lot of this in trademark violation cases. They may also owe it to their stockholders, they may not want to piss off their existing client base (who will probably be expecting them to act in this manner, too), etc. etc.

Yes, of course it's all about money on AFP's part - but in a way which can easily be cloaked :-) to make it appear 100% legit, with the good guys and the baddies clearly defined, etc.

Which, IMV, is good, because otherwise Goo's never going to get it - with a major, deep-pocketed plaintiff for an adversary, perhaps they'll start rethinking their whole cavalier approach towards other people's protected content. And about time, too.

Vigilance is required

Very true. Fight it or lose it.

Regardless of our differing opinions, this is definitely going to be an interesting case to watch!

the blog spam comparison is really ridiculous

By the way, in response to this:

---
I appreciate that many bloggers have a rather lax approach to the use and abuse of other people's copyrighted work, but there are actually more important legal principles here than "you shouldn't have left it on display".

(Which, somewhat ironically, is the same argument that you and others fulminate against in the blog spam discussions...)
---

1) The attack on bloggers / blogging is really getting old.

2) This is totally comparing apples and oranges:

- Adding a robots.txt file has no adverse effects, takes about 2 minutes, and ensures, with practical certainty, that Google will not index or quote from your stuff.

- Preventing blog spam typically has adverse effects, and is impossible to do with reasonable effectiveness while still keeping communications open.

- Google's use of headlines and body snippets from AFP publications resulted in little, if any, hardships to AFP, IMHO. This is purely a matter of greed, semantics, obscure legalities.

- In contrast, blog spam is a huge, painful problem. It hurts small publishers, large publishers, and everyone in between.

- The quoting from AFP publications helped consumers and -- I'd argue -- AFP, by leading likely thousands of readers to AFP publications and information.

- Blogspam helps no one except for asshole online poker sites (and the people who really WANT to play online poker aren't looking in blog comments for recommendations).

Without emotion, it's not ridiculous at all

"By going out on the public highway, anyone's children are clearly saying: "Take pictures of me and use them on your affiliate zit-removal site". Or not?"

Depends on the scope. If the kids are pictured in a public place and there aren't closeups, then I'd say the company's likely not in hot water. "73% of children today suffer from acne" [cut to photo of kids walking off a school bus in the distance].

No, that would be equivalent to Google's "ransom note" (and, by the way, the children would have to be unidentifiable, as would any trademarks in the photo). I wouldn't argue against the snippet.

What is at stake here is a) Google News' presentation of copyrighted, licensed news-feed content as news items, b) the Google cache of stories and photos, and c) the Google image cache.

1) The attack on bloggers / blogging is really getting old.

2) This is totally comparing apples and oranges:

1) Blogs and forums are top culprits for hotlinking, for example. The term "bloggers/blogging" does not just consist of the "blogerati" (whatever that may be defined as), it includes anyone who has a blog. As far as content goes, off the top of my head I can think of a few SEO blogs who are guilty of scraping content and ideas.

2) Take out the emotional baggage and the logic is exactly the same. If all blogs used no-follow tags, there wouldn't be a blog spam problem. If all sites using copyrighted news feeds and images used robots.txt to disallow search engines, AFP wouldn't have a problem. The likelihood of that happening in either case is zero...

Sorry, I still don't buy it

What is at stake here is a) Google News' presentation of copyrighted, licensed news-feed content as news items, b) the Google cache of stories and photos, and c) the Google image cache.

First of all, I don't think it's sensible to combine the two issues. I have mixed feelings about the legality of search engines' caching of pages, but as I've stated earlier, Google does not (to my knowledge) cache news photos or stories, so that's really a moot point.

And what IS at stake here? If Google gets spanked, how will that serve the common good? Whereas, in contrast (emotion totally out of the equation), I don't think anyone can argue with a straight face that blogspam serves the common good.

If all blogs used no-follow tags, there wouldn't be a blog spam problem.

That's a strawman argument. Compare the situations. AFP decides: hey, we don't want our publishers' stuff used by Google. Insert robots.txt file, problem solved.

I, as a blog owner, decide that I don't want my blog spammed, so I add nofollow. I still get spammed to hell. The "problem" in situation #1 is solved by individuals doing something ridiculously easy. The problem in situation #2 is not rectified until there's a greater than [x]% of blogs using NoFollow, with [x] I assume being 99%, including all inactive blogs. Not going to happen.
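For anyone comparing the two mechanisms: nofollow works per link, not per site, which is why it can't stop the spam from arriving - it only stops the spammer's link from counting. A sketch (the URL is made up):

```html
<!-- A spammed blog-comment link carrying the nofollow hint: search
     engines are asked not to credit the target site for this link. -->
<a href="http://example.com/poker" rel="nofollow">great poker site!!!</a>
```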

The only relation between the two situations is you and others here think Google is being a jerk and I think (and, I'd guess, 99% of netizens) blogspammers are big jerks.

 

>>Adding a robots.txt file has no adverse effects, takes about 2 minutes and results, with practical certainty, that Google will not index or quote from your stuff.

no it doesn't; http://www.afp.com/robots.txt

Quote:
User-Agent: *
Disallow: /beta
Disallow: /francais/news
Disallow: /english/news

A robots.txt ban generally still allows Google to index the URL, include it in 'similar sites' and, clearly, include it in News.
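For what it's worth, the crawling side of those rules can be checked with Python's standard-library robots.txt parser. A small sketch using the afp.com rules quoted above (the test URLs are made-up illustrations, not real AFP pages):

```python
# Parse AFP-style robots.txt rules and check what a compliant crawler
# may fetch. The rules mirror the afp.com file quoted above.
from urllib.robotparser import RobotFileParser

rules = """\
User-Agent: *
Disallow: /beta
Disallow: /francais/news
Disallow: /english/news
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A compliant crawler must skip the disallowed news sections ...
print(rp.can_fetch("Googlebot", "http://www.afp.com/english/news/story.html"))  # False
# ... but the rest of the site remains crawlable.
print(rp.can_fetch("Googlebot", "http://www.afp.com/english/about.html"))       # True
```

Note that this only governs fetching: whether a disallowed URL can still be *listed* (without content) is exactly the grey area the poster above is pointing at.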

I think we're talking past each other here...

The problem AFP has is not with its own content, but with content for which it has licensed use.

You may argue that all it has to do is require its clients to place a robots.txt on their pages before they use AFP feeds.

I would ask why they should have to do that and possibly place themselves at a competitive disadvantage just because Google has decided that it has a right to re-present other people's work.

The cache, whether of images or web pages, has long been a grey area. AFP is standing up and saying that they gave rights to one party to publish their work but did not give rights for a third party to copy that work.

The presentation of copyrighted news stories and snippets is a slightly different argument which is based more on its presentation within the confines of Google News.

I quite happily use Google News - but whether we like or dislike something has nothing to do with the issues at stake.

What competitive disadvantage?

I would ask why they should have to do that and possibly place themselves at a competitive disadvantage

Can you clarify what you mean here? Are you suggesting that the site would want to have Google SORT of scrape their contents but only in the way they'd like? (e.g., put it high up in SERPs for Web, but don't include its stuff in News?!) Doesn't sound very reasonable or logical to me.

An entity should either be able to say "Hi, Google, come grab fragments of my stuff to display" or "Hi Google, leave all or some of the parts of my site completely alone!"

If an entity is contractually forced to block out Google, well then, it has a problem with the contract, not with Google, IMHO.

Or is there some competitive disadvantage that I'm missing?

Agence France Presse sells its services

The "they" was referring to AFP.

By following your suggestion and asking its customers to place a robots.txt exclusion on their sites before they receive AFP copy, they would possibly be placing themselves at a competitive disadvantage against Reuters or Getty Images or whoever - who may not require such a restriction from their customers, either through ignorance or choice.

Okay, but still...

Why isn't that merely a contractual situation best solved between AFP and their publishers? And hey, they wouldn't have to block all robots, just Google... and even then, not their entire site, just the sections with AFP content in them. Publishers would have two choices:
1) Tell AFP to stick it and find news from a less restrictive source.
2) Do a cost-benefit analysis and accept the AFP agreement.

Beyond all this, I don't even understand why AFP is peeved. Sites using their content get extra exposure by being linked in Google News, thus helping the individual sites better afford AFP fees. It all seems stupidly shortsighted to me...

Astoundingly

I find myself (almost) agreeing with Adam on this IF the content is actually being used via third parties (I've skimmed the stories and can't see where they mention affiliate sites, so I don't think you can make that assumption - I can guarantee that Google still takes notice of robots.txt'd sites, so for all I know it could be pulling them direct).

However, I would say that Google have no responsibility or ability (at present) to take into account the original source of the story when taking news from affiliate sites. It makes no sense that Google, while taking news from one site, should have to assess each news item and trace it back to the source before deciding if it can be listed or not. That's like SEW excluding Google and then Google having to check each page at TW and only index those that don't quote SEW.

That's neither reasonable nor, I imagine, possible with the current news algo (of course it could be done, or a tag could be created to tell the newsspider not to list certain stories, but this case is based upon current facts and I would therefore say Google can't currently do that)

As Adam says, the sites involved should, if their contract with AFP demands, exclude the bot. If they're already doing that then there's a case against Google.

To agree with Gurtie

...if any part of Google (News, Images, Web, etc.) disregarded the robots.txt file of a site, then they should clearly be held accountable.

You are arguing that the end justifies the means...

Google, or any other search engine, is a "good thing", so you say, and therefore it is immaterial if it caches or puts on public display content that it has spidered/scraped. This is part of the devil's bargain that webmasters have made with the search engines.

But in this case, AFP is saying "no". They own the copyright to their stories and pictures and they sell restricted usage to their customers.

They have, apparently, made no general use deal with Google and thus the lawsuit.

We shall see (or maybe not if the lawsuit is settled) what the courts think of an argument that says "if you want me to stop breaching your rights, then tell me".

I see that quite frequently on sites possessing stolen images ("if the copyright holder wishes me to remove this picture please contact me and I will be willing to take it down once I am confident of the ownership").

I know what I think of it there and I see no ethical (oops, that word again) distinction between that and the "good thing" that Google is presumed to do.

To be fair, Brett Tabke (WMW) has been a long-time proponent of this argument.

To pick gratuitously on bloggers once again: if I and a large group of like-minded friends decide to DoS attack every blog that contains the letters "c", "a" and "t" in close proximity, we would probably be doing the world a "good thing". And all that they would have to do is not use the word "cat" in their blogs. Easy for them to do, eh? I don't see the problem here - it's a simple step for any blog-owner to take?

 

well I don't know what anyone else is arguing Stever but I'm not arguing that at all :)

What I am saying is that in order for Google not to show this copyrighted news, the spider has to be told, somehow, that it mustn't spider it. It isn't a mind reader, and it isn't just a case of hard-coding somewhere a list of websites that must never have content drawn from them, as the affiliates may be multiple and ever-changing, and this same case could apply to many content providers.

If Google is taking the content from affiliate sites and not direct from AFP, then Google is honouring AFP's instructions. AFP have no right to instruct Google not to take content from anyone else's site (for example their affiliates) - that's up to the affiliates to do, and if they don't, then that is a contractual issue between AFP and the affiliate.

AFP have every right to have their copyright respected (I don't agree with Adam's comments on Google doing them a favour), but I don't think in this instance Google could reasonably be expected to identify unlabelled/untagged content as something that they shouldn't show. How do you suggest they do that? Index all AFP stories and then compare every news item they pick up with this index to see the likelihood of the item having originated with AFP, and then not show any item with, say, more than a 90% similarity? Sorry, but it just doesn't work on a practical level IMHO.
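To make the comparison scheme concrete: the kind of similarity check being dismissed above could look like word-shingling plus Jaccard overlap. This is only an illustrative sketch - the story texts and the threshold are invented, not anything AFP or Google actually uses:

```python
# Sketch of the duplicate-detection idea: shingle each story into word
# trigrams and score the overlap between shingle sets.
def shingles(text, n=3):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Similarity between two shingle sets: |A & B| / |A | B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

afp_story = "Agence France Presse sues Google over aggregation of news stories"
rewrite   = "Agence France Presse sues Google over aggregation of news items today"
unrelated = "Local bakery wins award for best croissant in Paris this year"

print(jaccard(shingles(afp_story), shingles(rewrite)) > 0.5)    # near-duplicate
print(jaccard(shingles(afp_story), shingles(unrelated)) > 0.5)  # unrelated
```

Even this toy version shows the practical problem Gurtie raises: Google would have to keep a live index of every AFP story and score every incoming item against it, at news-crawl volume.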

There's a broader and potentially good argument that Google shouldn't index any site unless the site owner specifically says they can, but this specific case has to take place in the context of the current climate. AFP are clearly prepared to use robots.txt to disallow Google from certain areas of their own site, and the pivotal question here is whether affiliates are not honouring the affiliate agreement or whether Google are not honouring robots.txt instructions.

That whole 'are SEs ethical' argument is another one entirely, and my take on that is that I think they're technically illegal; but considering most of the people who read TW spend huge amounts of time encouraging the SEs to list their sites, illegal or not, I don't think we're in too much of a position to take the moral high ground on that issue, to be honest......

I agree (kind of!!)

There's a broader and potentially good argument that Google shouldn't index any site unless the site owner specifically says they can...

That's what I think this case is coming down to when you reduce the arguments.

Google has had a free run because it is the elephant in the jungle and ants are too ineffective and frightened of being stepped on. Now a leopard has got involved, and the other lions (and smaller animals) are watching curiously.

What I am saying is that in order for Google not to show this copyrighted news, the spider has to be told, somehow, that it mustn't spider it.

OK, here I part ways with you. Yes, I agree that that is what has been happening and this may well be a (debatable) argument for mitigation of any damages that Google may or may not have caused.

But it is the injuring party's responsibility to stop the injurious behaviour - or not start it in the first place - not the victim's responsibility to halt it.

In other words, it is Google's responsibility to construct its spider in such a way that it does not spider content that it has no rights to, as, allegedly, in this case.

That may well mean an opt-in tag (which Google, after having built its multi-million dollar empire on "getting away with it", may now be able to implement successfully).

too late for that argument perhaps?

well if we think that we really should have protested earlier :)

(and this, for anyone who still has trouble grasping the basics, is why it's important to make AutoLink opt-in now and not whinge in 5 years when it starts linking out on every word)

I agree opt-in is best in principle, but I think Google will argue that there is a mechanism to opt out and it isn't being used, so the problem isn't that important. I have some sympathy with the argument that Google (and others) would be less valuable if everyone were required to opt in rather than opt out - many of the really valuable and/or interesting sites would possibly not be included if they had to opt in.
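For anyone unfamiliar with the opt-out mechanism everyone keeps referring to: it's the robots exclusion protocol, a plain-text robots.txt file served from the site root. A minimal example (the path is made up) that tells Google's crawler to stay out of a news section:

```
User-agent: Googlebot
Disallow: /news/
```

A crawler that honours the protocol will skip every URL under /news/ - but honouring it is voluntary, which is exactly why "opt-out exists, therefore no harm" is a contested argument.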


Quote:
if Google is taking the content from affiliates' sites and not direct from AFP, then Google is honouring AFP's instructions. AFP have no right to instruct Google not to take content from anyone else's site (for example their affiliates)

Not a lawyer (insert fine print disclaimer here).

Sorry Gurtie, but in my understanding that is not how copyright works. It does not matter whose site Google is getting it from: they know they did not write it, because Google does not create any of its own content (see AutoLink). It is also my understanding that copyright protection is automatic upon creation.

I would argue that other web-based news services like Yahoo went out and asked permission or paid a fee before including wire service news in their news pages, so one cannot say that it cannot be done. Google has chosen to scrape for free what others have chosen to pay for, or at least gain prior consent to use.

Others can do the hard work of getting prior consent, why can't Google?

no no nooooo

my argument is not that Google have a right to ignore copyright protection (of course they don't), but you can't argue this case in isolation.

either all site content is subject to copyright and must be opted in to be listed on any Google service (by tags, Creative Commons, or whatever), or we're happy with what they're doing and have been doing for years, provided they honour meta instructions and robots.txt

Given the current generally accepted practice, Google are right in saying they can't be expected to know this is copyrighted and deal with it. The site they're taking it from is not telling them to go away, and therefore they're taking the content.

I personally feel that, whatever the rights and wrongs of the greater context, it's too late and there are too many companies involved to retrospectively implement opt-in only for SE's. Unless, perhaps, the law is changed to dictate it.

Given that the situation is as it is, it's actually not relevant how copyright works - what's relevant is that Google have no accepted way of knowing the content they're pinching is subject to it.

Which brings me back to are they sourcing the disputed content from sites with or without robots.txt?
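That question is at least mechanically checkable: Python's standard library ships a parser for the robots exclusion protocol. A sketch of the check (the robots.txt content, domain, and paths are all made up for the example):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt an affiliate site might serve.
robots_txt = """\
User-agent: Googlebot
Disallow: /afp/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A crawler honouring the file must skip /afp/ but may fetch elsewhere.
print(parser.can_fetch("Googlebot", "https://example.com/afp/story.html"))    # False
print(parser.can_fetch("Googlebot", "https://example.com/local/story.html"))  # True
```

If the disputed affiliate pages carried no such disallow rule, Google's "we were never told to go away" position gets a lot stronger on the technical facts, whatever the copyright position.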

CIO Today joins the Fray

They have a big article today on their site "Chirac Plans French 'Counter-Offensive' on Internet Culture"

Quote:
Jeanneney drew as an example the 1989 celebrations to mark the two hundredth anniversary of the French revolution -- which he himself was personally in charge of.

Them, there's fightin words.


Gurtie, yours is a fait accompli argument. What you're basically saying is: "Goo's got away with so many billions of copyright infringements it's turned into a cultural constant, so let's put up with it." That's a pretty weird take on what "law and order" is supposed to be about in the first place. Megabucks more often than not hit it off nicely with the courts, so I wouldn't be too surprised to actually see some U.S. judge waving the whole matter off or even deciding in their favor. But that's not the whole world, mind you, least of all Europe, where common sentiment is strikingly different in most countries.

The point isn't the lawsuit AFP is pursuing - it's the countless corporate vultures who'll blithely dive into this perceived carcass named Goo now the avalanche has been set in motion. No undue alarmism, but this issue may very well threaten Goo's very existence some day.

well

more 'we have to put up with it' than 'let's put up with it', but yes, basically I am.

I'm not arguing it's right, or moral, or how it should be, but I think it's what a judge will say, and I think it's the only outcome, because to decide otherwise is basically to dismantle, at a stroke, the business model of all the SE's and many other online 'services'. Quite apart from the community and economic factors of that, I suspect the judge will have Google shares.

Prove me wrong though and I'll be perfectly happy and joining the celebration :)

Nathan's "Inside Google"...

...offers some thoughtful commentary (both in the body and in the comments) here.

Google starts removing AFP content

Already linked here
