Archive for February, 2008


I’ve been reading a lot about copyright for a while now. Intellectual property. Does something analogous to a property right make sense in a digital world?

It’s hard. As near as I can tell, we’re seeing two fundamental changes.

First, what are we to make of scarcity just vanishing? What’s a newspaper to do when I don’t have to buy their paper or watch their program because I can find the same information ten or twenty other places online? Or, just as importantly, when I can find other information that’s just as interesting to me a hundred or a thousand places online? This is important, for when I hit a paywall or am obnoxiously prompted to log in, I close the window or click a link and find something else that suits my tastes at least nearly as well in twenty seconds. Sure, your article about Barack Obama would have been great, but I can find others elsewhere, and I like reading about Hillary Clinton too.

Second, what are we to make of the plummeting costs of duplication? What’s a record label to do when I don’t have to buy their music because I can download it? What’s a newspaper to do when I can easily replicate their content in my feed reader by scraping their site? Or when a splogger does something actually harmful?

There may be some answers.

To the first, many propose inventing new business models around goods and services that are necessarily scarce. Bands, for instance, should let go of making money off CDs and embrace concert tours and t-shirts. Kevin Kelly writes about eight other ideas, which he calls generatives. Make your goods and services premium or easier to find or personalized, etc. Good ideas.

To the second, there’s something like Attributor, which could let us track our copyrighted material and force re-publishers to share the monetization. Copyright is still the basis here; without copyright, there would be no legal footing for technologies like Attributor anyhow.

Are there more problems? I’m sure there are. But fighting ubiquity is a losing battle. Why not encourage it, track it, add up the duplications, and create something that tells us what’s most duplicated? Aggregate the publishing and the re-publishings. Then we’d know what to read or watch—that something is more duplicated indicates some kind of relevant popularity and interestingness (one hopes).
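The “add up the duplications” idea can be sketched in a few lines. This is a toy model, not anything Attributor actually does: each published copy is fingerprinted by a hash of its normalized text, and the fingerprints are tallied to surface the most-duplicated content. The sample strings are hypothetical.

```python
import hashlib
from collections import Counter

def fingerprint(text):
    """A crude content fingerprint: hash of lowercased, whitespace-normalized text."""
    normalized = " ".join(text.split()).lower()
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

def most_duplicated(copies):
    """Tally fingerprints across all copies; most-duplicated content first."""
    counts = Counter(fingerprint(text) for text in copies)
    return counts.most_common()

# Three copies found "in the wild": two are the same article, reformatted.
copies = [
    "Obama wins   the Iowa caucuses.",
    "obama wins the iowa caucuses.",
    "Clinton takes New Hampshire.",
]
ranking = most_duplicated(copies)
```

A real system would need fuzzier matching than exact normalized-text hashing, of course, but the aggregation step is this simple.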

Re-publishers can each have some slice of the pie they helped grow. They keep a share of the ad revenue, and original authors get the rest. This should make everyone happy as long as the copyright owner’s slice of the new, larger pie is larger than the whole of the original, smaller pie. It’s win-win.
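The win-win condition above is just arithmetic, and a tiny sketch makes it concrete. All the numbers here are hypothetical:

```python
def owner_share(total_revenue, owner_fraction):
    """Revenue the copyright owner keeps under a sharing deal."""
    return total_revenue * owner_fraction

# Original, smaller pie: the owner keeps all of it.
original_pie = 100.0

# New, larger pie grown by re-publishers, split 60/40 (owner / re-publishers).
new_pie = 200.0
owner_cut = owner_share(new_pie, 0.6)

# Everyone wins so long as the owner's slice of the larger pie
# exceeds the whole of the original pie.
everyone_happy = owner_cut > original_pie
```

Here the owner nets 120 against an original 100, and the re-publishers split the remaining 80 they helped create.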

So, yeah, I suspect copyright’s still a useful legal construct. It can still promote economic efficiency. But it’s foolish to rely on copyright to enforce scarcity. Instead, embrace ubiquity and monetize it.



There are more than a few ways to remind yourself to read something or other later.

Browsers have bookmarks. Or you can save something to delicious, perhaps tagged “toread,” like very many people do. You can use this awesome Firefox plugin called “Read It Later.”

But I like to do my reading inside Google Reader; others like their reading inside their fave reader.

So what am I to do? My first thought was Yahoo Pipes. It’s a well-known secret that Pipes makes screen-scraping around partial feeds as easy as pie. So I thought I could maybe throw together a mashup of delicious and Pipes to get something going.

My idea was to save my to-be-read-later pages to delicious with a common tag—the common “toread,” maybe. I could then have Pipes fetch the delicious feed for that tag. The main url of each delicious post points to the original webpage, so, with the loop operator, I could have Pipes auto-discover the feed associated with each of those urls and then locate, within each feed, the post corresponding to the page to be read later.

Well, I don’t think it can be done so easily. (Please! Someone prove me wrong!)

Meantime, I’ll just use my handy Greasemonkey plug-in that lets me “preview” posts inside the Google Reader wrapper—so that I don’t have to hop from tab to tab like a lost frog.

Meantime, someone should really put together this app. Of course, it would work simply only with pages that have rss analogues in a feed. But if, through Herculean effort, you found some practicable way to handle a page that doesn’t—parsing out the junk and serving me only the text—you’d be a real hero. Otherwise, just tell me that the page I’m trying to read later doesn’t have an rss analogue, give me an error message, and I’ll move on…assured in the knowledge that it soon will.

Gatherers and Packagers: When Product and Brand Cleave 4 Realz

Jeff Jarvis writes about the coming economics of news:

When the packager takes up and presents the gatherer’s content in whole and monetizes it—mostly with advertising—they share the revenue. When the gatherer just links, the gatherer monetizes the traffic, likely as part of an ad network as well.

I think this is right. In the first case, the content is on the “packager’s” page or in its feed; in the second, the content is on the “gatherer’s” page or in its feed. In both cases, advertising monetizes the content (let’s say), and readers or viewers find it by way of the packager’s brand (a coarse but inevitable word).

To me, however, the location of the user’s experience seems unimportant—in fact, the whole point of disaggregating journalism into two functions, imho, is to free up the content from the chains of fixed locations. Jarvis writes, “The packagers’ job would be to find the best news and information for their audience no matter where it comes from.” I agree, but why not let it go anywhere too—anywhere, that is, where the packager can still monetize it? (See Attributor if that sounds crazy.)

Couple this with the idea that rss-like subscriptions are emerging as the mechanism by which we get our content, replacing search in part. (As has been said before, there’s no spam on twitter. Why not? Followers just unsubscribe.) The result is that the packager still maintains his incentive to burnish his reputation and sell his brand. After all, that’s what sploggers are: packagers without consciences who get traffic via search.

So I agree with Jarvis: “reliably bringing you the best package and feed of news that matters to you from the best sources” is how “news brands survive and succeed.” That’s how “the packagers are now motivated to assure that there are good sources.”

Give me tags, Calais!

Who needs to think about buying tags when Reuters and its newly acquired company are giving them away?

The web service is free for commercial and non-commercial use. We’ve sized the initial release to handle millions of requests per day and will scale it as necessary to support our users.

I mean, Jesus, it’s so exciting and scary (!) all at once:

This metadata gives you the ability to build maps (or graphs or networks) linking documents to people to companies to places to products to events to geographies to … whatever. You can use those maps to improve site navigation, provide contextual syndication, tag and organize your content, create structured folksonomies, filter and de-duplicate news feeds or analyze content to see if it contains what you care about. And, you can share those maps with anyone else in the content ecosystem.

More: “What Calais does sounds simple—what you do with it could be simply amazing.”
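The “maps” Reuters describes are, at bottom, an index from entities to documents. Here’s a minimal sketch of that idea—the actual Calais response format isn’t shown in the post, so the extracted tags are modeled as plain document-to-entity lists, and all names are hypothetical:

```python
from collections import defaultdict

def build_entity_map(tagged_docs):
    """Map each extracted entity (person, company, place...) to the
    set of documents that mention it."""
    entity_map = defaultdict(set)
    for doc_id, entities in tagged_docs.items():
        for entity in entities:
            entity_map[entity].add(doc_id)
    return entity_map

tags = {
    "article-1": ["Barack Obama", "Reuters"],
    "article-2": ["Reuters", "New York"],
}
index = build_entity_map(tags)
# index["Reuters"] == {"article-1", "article-2"}
```

From an index like this, the navigation, de-duplication, and filtering uses the quote lists are all lookups and set operations.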

If the world were smart, there would be a gold rush to be first to build the killer app. Mine will be for serving the information needs of communities in a democracy—in a word, news. Who’s coming with me?

PS. Good for Reuters. May its bid to locate itself at the pulsing informational center of the semantic web and the future of news prove as ultimately lucrative as it is profoundly socially benevolent.
