Archive for the 'rss' Category

Not by Links Alone

At this unthinkably late hour, many of even the most recalcitrant journalists and newsy curmudgeons have given themselves over, painfully, to the fundamentally important fact that the economics of abundance now govern their world.

For many, of course, stemming that tide is still paramount. Their goal, as David Carr writes, is to squelch the “new competition for ads and minds.” Thus Walter Isaacson’s “E-ZPass digital wallet” and Alan Mutter’s “Original Sin.” Thus Michael Moran’s obnoxious “NOPEC.” Thus Journalism Online. And, of course, thus we have David Simon’s recent call for Congress to “consider relaxing certain anti-trust prohibitions” or this call in the Washington Post to rework fair use. I wish them all good luck, but mostly good night.

There are others, though, who think it’s great that the Internet and Google are opening up the news to competition. In fact, “Google is good” strikes me as nearly orthodox among the basically Internet-savvy set of news talkers. Marissa Mayer crows about how Google delivers newspapers’ Web sites one billion clicks a month, and Arianna Huffington insists that the future of news is to be found in a “linked economy” and “search engines” like Google.

In this narrative, Google’s the great leveler, ushering the world of journalism out of the dark, dank ages of monopoly and into the light, bright days of competition, where all news articles and blog posts stand on their own pagerank before the multitude of users who judge with their links and their clicks. Its ablest defender is probably Jeff Jarvis, author of What Would Google Do? Jarvis was relatively early in pointing out that “Google commodifies the world’s content by making it all available on a level playing field in its search.” In that and other posts at Buzz Machine, his widely read blog, Jarvis allows that Google “can make life difficult” but insists, “that’s not Google’s fault.” The reverence for Google is thick: “The smart guys are hiring search-engine optimization experts and trying to figure out how to get more people to their stuff thanks to Google.”

But defenders of Google’s influence on the broader market for news and newspapers themselves make a striking error in believing that the market for content is competitive. That belief is wrong—not just a little bit or on the margin, but fundamentally, and importantly, wrong.

Which is not to say that news publishers aren’t competing for readers’ eyeballs and attention. Publishers compete with one another all day long, every day—with some local exceptions, the news has always been competitive like a race, and is now more competitive like a market than ever before. But the market for that news—the place where consumers decide what to read, paying with their attention—is not competitive. Google may well be the great leveler, but down to how low a field?

To be very clear, this is far from a neo-classical purist’s critique that picks nits by abusing uselessly theoretical definitions. I am not a purist, an economist, or a jerk. This is reality, as best as I know it. Nevertheless, to say that the market for content is competitive is just to misunderstand what a competitive market actually entails. The market for news content as it currently stands, with Google in the middle, is a profoundly blurry, deeply uncompetitive space.

*    *    *

“The difficulty of distinguishing good quality from bad is inherent in the business world,” Nobel laureate George Akerlof wrote in the kicker of his most famous paper, published in 1970. “This may indeed explain many economic institutions and may in fact be one of the more important aspects of uncertainty.”

Akerlof fired an early shot in a scholarly marathon to study the effects of asymmetric information in markets. What do parties to a potential transaction do when they know different sets of facts? Maybe that seems like an obvious question, but economists in the middle of the twentieth century had been pretty busy worrying about perfecting complicated models despite their grossly simplistic assumptions.

So Akerlof set about to write about how markets can fail when some of those assumptions turn out to be bunk. The assumption he tested first, in “The Market for ‘Lemons,’” was certainty, and he showed that when sellers know more about the goods being sold than the buyers do, sellers abuse their privileged position and buyers leave the market.

Writing in the same year, the economist Phillip Nelson studied the differences between what he called “search goods” and “experience goods.” Search goods and experience goods express a certain kind of asymmetry. For search goods, consumers can overcome the asymmetry before the point of purchase by doing their homework; for experience goods, consumers can judge quality only by investing the time to consume them.

A pair of pants, for instance, is a search good—you can try before you buy, and shop around for the pants that fit you best. An apple, on the other hand, is an experience good—you don’t know whether you’ll like one until you consume it, and you can’t really try before you buy.

News articles are experience goods. Just as with an apple, you need to consume the story, reading the article or watching the video or so on, in order to judge its quality. “Stories can vary in length, accuracy, style of presentation, and focus,” writes economist James Hamilton in All the News That’s Fit to Sell. “For a given day’s events, widely divergent news products are offered to answer the questions of who, what, where, when, and why.” We can’t know which one’s best till we’ve read them all, and who’s got time for that?

Moreover, a multitude of subjective editorial decisions produce the news. Each reporter’s practices and habits influence what’s news and what’s not. Their learned methods, their assigned beats, and even their inverted pyramids shape what we read and how. Reporters’ and editors’ tastes, their histories, or their cultures matter, as do their professional ethics. Each article of news is a nuanced human document—situated aesthetically, historically, culturally, and ethically.

Ultimately, the news is afflicted with the problem of being an experience good more than even apples are. At least Granny Smiths don’t vary wildly from farmer to farmer or from produce bin to produce bin. Sure, some may be organic, while others are conventional. One may be tarter or crispier than another, but tremendous differences from the mean are very unlikely. With the news, though, it’s hard even to think of what the mean might be. It may seem obvious, but articles, essays, and reports are complex products of complex writerly psychologies.

For a long time, however, as readers, we were unaware of these nuances of production. That was, in some sense, the upshot: our experience of the journalism was relatively uncomplicated, and so its profound lack of context mattered much less.

Call it the myth of objectivity maybe, but what NYU professor Jay Rosen has labeled the “mask of professional distance” meant that we didn’t have much of a chance to bother with a whole world of complexities. Because everyone usually wore a mask, and because everyone’s mask looked about the same, we ignored—indeed, we were largely necessarily ignorant of—all the unique faces.

For a long time, therefore, the orthodox goal of American newspapers virtually everywhere was news that really wasn’t an experience good. When news existed only on paper, it hardly mattered what news was, because we had so few, and such seemingly monochrome, choices about what to read. We returned to the same newspapers and reporters behind the same masks over and over again, and through that repetition, we came subtly to understand the meaning and implications of their limited degrees of “length, accuracy, style of presentation, and focus.”

As a result, we often grew to love our newspaper—or to love to hate it. But even if we didn’t like our newspaper, it was ours, and we accepted it, surrendering our affection either way, even begrudgingly. The world of news was just much simpler, a more homogeneous, predictable place—there were fewer thorny questions, fewer observable choices. There was less risk by design. Our news was simpler, or it seemed to be, and we had little choice but to become familiar with it anyhow. One benefit of the View from Nowhere, after all, is that basically everyone adopted it—that it basically became a standard, reducing risk.

But a funny thing happened in this cloistered world. Because it seemed only natural, we didn’t realize the accidental nature of the understanding and affection between readers and their newspapers. If, as the economists would have it, the cost of a thing is what we’ve sacrificed in order to achieve it, then our understanding and affection were free. We gave nothing up for them—for there was scarcely another alternative. As a result, both readers and publishers took those things for granted. This point is important because publishers are still taking those things for granted, assuming that all people of good faith still appreciate and love all the good things that a newspaper puts on offer.

*    *    *

But when our informational options explode, we can plainly, and sometimes painfully, see that our newspapers aren’t everything. Different newspapers are better at answering different questions, and some answers—some as simple as what we should talk about at work tomorrow—don’t come from newspapers at all. So we go hunting on the Internet. So we gather. So we Google.

We have now spent about a decade Googling. We have spent years indulging in information, and they have been wonderful years. We are overawed by our ability to answer questions online. Wikipedia has helped immensely in our efforts to answer those questions, but pagerank elevated even it. Newspapers are just one kind of Web site among the many that have plunged into the scrum of search engine optimization. Everyone’s hungry for links and clicks.

And Google represents the Internet at large for two reasons. For one, the engine largely structures our experience of the overall vehicle. More importantly, though, Google’s organization of the Internet changes the Internet itself. The Search Engine Marketing Professional Organization estimates, in this PDF report, that North American spending on organic SEO in 2008 was about $1.5 billion. But that number is surely just the tip of the iceberg. Google wields massive power over the shape and structure of the Internet’s general landscape of Web pages, Web applications, and the links among them. Virtually no one builds even a semi-serious Web site without considering whether it will be indexed optimally. For journalism, most of the time, the effects are either irrelevant or benign.

But think about Marissa Mayer’s Senate testimony about the “living story.” Newspaper Web sites, she said, “frequently publish several articles on the same topic, sometimes with identical or closely related content.” Because those similar pages share links from around the Web, neither one has the pagerank that a single one would have. Mayer would have news Web sites structure their content more like Wikipedia: “Consider how the authoritativeness of news articles might grow if an evolving story were published under a permanent, single URL as a living, changing, updating entity.”

Setting aside for the moment whatever merits Mayer’s idea might have, imagine the broader implications. She’s encouraging newspapers to change not just their marketing or distribution strategies but their journalism because Google doesn’t have an algorithm smart enough to determine that they should share the “authoritativeness.”

At Talking Points Memo, Josh Marshall’s style of following a story over a string of blog posts, poking and prodding an issue from multiple angles, publishing those posts in a stream, and letting the story grow incrementally and cumulatively, might be disadvantaged because those posts are, naturally, found at different URLs. His posts would compete with one another for pagerank.

And maybe it would be better for journalism if bloggers adopted the “living story” model of reporting. Maybe journalism schools should start teaching it. Or maybe not—maybe there is something important about what the structure of content means for context. The point here isn’t to offer a substantive answer to this question, but rather to point out that Mayer seems unaware of the question in the first place. It’s natural that Mayer would think that what’s good for Google is good for Internet users at large. For most domestic Internet users, after all, Google, which serves about two-thirds of all searches, essentially is their homepage for news.

But most news articles, of course, simply aren’t like entries in an encyclopedia. An article of news—in both senses of the term—is substantially deeper than the facts it contains. An article of news, a human document, means substantially more to us than its literal words—or the pageranked bag of words that Google more or less regards it as.

Google can shine no small amount of light on whether we want to read an article of news. And, importantly, Google’s great at telling you when others have found an article of news to be valuable. But the tastes of anonymous crowds—of everyone—are not terribly good at determining whether we want to read some particular article of news, particularly situated, among all the very many alternatives, each particularly situated unto itself.

Maybe it all comes down to a battle between whether Google encourages “hit-and-run” visits or “qualified leads.” I don’t doubt that searchers from Google often stick around after they alight on a page. But I doubt they stick around sufficiently often. In that sense, I think Daniel Tunkelang is precisely correct: “Google’s approach to content aggregation and search encourages people to see news…through a very narrow lens in which it’s hard to tell things apart. The result is ultimately self-fulfilling: it becomes more important to publications to invest in search engine optimization than to create more valuable content.”

*    *    *

The future-of-news doomsayers are so often wrong. A lot of what they said at Kerry’s hearing was wrong. It’s woefully wrongheaded to call Google parasitic, if only because the Internet without it would be a distinctly worse place. There would be, I suspect, seriously fewer net pageviews for news. And so it’s easy to think that they’re wrong about everything—because it seems that they fundamentally misunderstand the Internet.

But they don’t hold a monopoly on misunderstanding. “When Google News lists one of ours stories in a prominent position,” writes Henry Blodget, “we don’t wail and moan about those sleazy thieves at Google. We shout, ‘Yeah, baby,’ and start high-fiving all around.” To Blodget, “Google is advertising our stories for free.”

But life is about alternatives. There’s what is, and there’s what could be. And sometimes what could be is better than what is—sometimes realistically so. So however misguided some news executives may have been or may still be about their paywalls and buyouts, they also sense that Google’s approach to the Web can’t reproduce the important connection the news once had with readers. Google just doesn’t fit layered, subtle, multi-dimensional products—experience goods—like articles of serious journalism. Because news is an experience good, we need really good recommendations about whether we’re going to enjoy it. And the Google-centered link economy just won’t do. It doesn’t add quite enough value. We need to know more about the news before we sink our time into reading it than pagerank can tell us. We need the news organized not by links alone.

What we need is a search experience that lets us discover the news in ways that fit why we actually care about it. We need a search experience built around concretely identifiable sources and writers. We need a search experience built around our friends and, lest we dwell too snugly in our own comfort zones, other expert readers we trust. These are all people—and their reputations or degrees of authority matter to us in much the same ways.

We need a search experience built around beats and topics that are concrete—not hierarchical, but miscellaneous and semantically well defined. We need a search experience built around dates, events, and locations. We need a search experience that’s multi-faceted and persistent, a stream of news. Ultimately, we need a powerful, flexible search experience that merges automation and human judgment—that is sensitive to the very particular and personal reasons we care about news in the first place.

The people at Senator Kerry’s hearing last week seemed either to want to dam the river and let nothing through or to whip its flow up into a tidal wave. But the real problem is that they’re both talking about the wrong river. News has changed its course, to be sure, so in most cases, dams are moot at best. At the same time, though, chasing links and clicks, with everyone pouring scarce resources into an arms race of pagerank while aggregators direct traffic and skim a few page views, isn’t sufficiently imaginative either.

UPDATE: This post originally slipped out the door before it was fully dressed. Embarrassing, yes. My apologies to those who read the original draft of this thing and were frustrated by the unfinished sentences and goofy notes to self, and my thanks to those who read it all the same.

The Great Unbundling: A Reprise

This piece by Nick Carr, the author of the recently popular “Is Google Making Us Stupid?” in the Atlantic, is fantastic.

My summary: A print newspaper or magazine provides an array of content in one bundle. People buy the bundle, and advertisers pay to catch readers’ eyes as they thumb through the pages. But when a publication moves online, the bundle falls apart, and what’s left are just the stories.

This may no longer be a revolutionary thought to anyone who knows that Google is their new homepage, from which people enter their site laterally through searches. But that doesn’t mean it’s not the new gospel for digital content.

There’s only one problem with Carr’s argument, though. Because it focuses on the economics of production, I don’t think its observation of unbundling goes far enough. Looked at another way—from the economics of consumption and attention—not even stories are left. In actuality, there are just keywords entered into Google searches. That’s increasingly how people find content, and in an age of abundance of content, finding it is what matters.

That’s where our under-wraps project comes into play. We formalize the notion of people finding content through simple abstractions of it. Fundamentally, from the user’s perspective, the value proposition lies with the keywords, or the persons of interest, not the piece of content, which is now largely commodified.

That’s why we think it’s a pretty big idea to shift the information architecture of the news away from focusing on documents and headlines and toward focusing on the newsmakers and tags. (What’s a newsmaker? A person, corporation, government body, etc. What’s a tag? A topic, a location, a brand, etc.)

The kicker is that, once content is distilled into a simpler information architecture like ours, we can do much more exciting things with it. We can extract much more interesting information from it, make much more valuable conclusions about it, and ultimately build a much more naturally social platform.

People will no longer have to manage their intake of news. Our web application will filter the flow of information based on their interests and the interests of their friends and trusted experts, allowing them to allocate their scarce attention most efficiently.

It comes down to this: Aggregating documents gets you something like Digg or Google News—great for attracting passive users who want to be spoon-fed what’s important. But few users show up at Digg with a predetermined interest, and predetermined interest is how Google monetized search ads over display ads to bring Yahoo to its knees. Aggregating documents makes sense in a document-scarce world; aggregating the metadata of those documents makes sense in an attention-scarce world. When it comes to the news, newsmakers and tags comprise the crucially relevant metadata, which can be rendered in a rich, intuitive visualization.
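To make the document-versus-metadata distinction concrete, here is a minimal sketch in Python; the article records and field names are hypothetical stand-ins for whatever extraction pipeline would actually produce them.

```python
from collections import Counter

# Hypothetical article records; each carries its metadata alongside the text.
articles = [
    {"headline": "Fed cuts rates again",
     "newsmakers": ["Ben Bernanke", "Federal Reserve"],
     "tags": ["economy", "interest rates"]},
    {"headline": "Bailout vote fails in the House",
     "newsmakers": ["Congress"],
     "tags": ["economy", "bailout"]},
    {"headline": "New smartphone released",
     "newsmakers": ["Apple"],
     "tags": ["gadgets"]},
]

# Aggregating metadata rather than documents: count how often each
# newsmaker and tag appears across the stream of articles.
newsmaker_counts = Counter(n for a in articles for n in a["newsmakers"])
tag_counts = Counter(t for a in articles for t in a["tags"])

# A user with a predetermined interest in "economy" gets only matching documents.
economy = [a["headline"] for a in articles if "economy" in a["tags"]]

print(tag_counts.most_common(3))
print(economy)
```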

Which isn’t to say that passive users who crave spoon-fed documents aren’t valuable. We can monetize those users too—by aggregating the interests of our active users and reverse-mapping them, so to speak, back onto a massive set of documents in order to find the most popular ones.

Whither Tag Clouds?

A few weeks ago, one could hardly click around the interwebs without noticing the tear of pretty tag clouds powered by Wordle. Bloggers of all stripes posted a wordle of their blog. Some, like Jeff Jarvis, mused about how the visualizations represent “another way to see hot topics and another path to them.”

For as long as tag clouds have been a feature of the web, they’ve also been an object of futurist optimism, kindling images of Edward Tufte and notions that if someone could just unlock all those dense far-flung pages of information, just present them correctly, illumed, people everywhere would nod and understand. Their eyes would grow bright, and they would smile at the sheer sense it all makes. The headiness of a folksonomy is sweet for an information junkie.

It’s in that vein that ReadWriteWeb mythologizes the tag cloud as “buffalo on the pre-Columbian plains of North America.” A reader willing to cock his head and squint hard enough at the image of tag clouds “roaming the social web” as “huge, thundering herds of keywords of all shades and sizes” realizes that Rob Cottingham would have us believe that tag clouds were graceful and defenseless beasts—and also now on the verge of extinction. He’s more or less correct.

I used to mythologize the tag cloud too, but let’s be honest: tag clouds were never actually useful. You could never drag and drop one word in a tag cloud onto another to get the intersection or union of pages with those two tags. You could never really use a tag cloud to subscribe to RSS feeds of only the posts with a given set of tags.

A tag also never told you whether J.P. Morgan was a person or a bank. A tag cloud on a blog was never dynamic, never interactive. The tag cloud on one person’s blog never talked to the tag cloud on anyone else’s. I could never click on one tag and watch the cloud reform and show me only related tags, all re-sized and -colored to indicate their frequency or importance only in the part of the corpus in which the tag I clicked on is relevant.
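And yet none of those missing features would be hard to build. A minimal sketch, assuming a hypothetical index mapping each tag to the set of posts carrying it:

```python
# Hypothetical index from tags to the posts that carry them.
tag_index = {
    "semantic web": {"post1", "post2", "post5"},
    "rss":          {"post2", "post3", "post5"},
    "google":       {"post1", "post4"},
}

# "Dragging one tag onto another": the intersection or union of tagged pages.
both   = tag_index["semantic web"] & tag_index["rss"]  # pages with both tags
either = tag_index["semantic web"] | tag_index["rss"]  # pages with either tag

def related_cloud(clicked, index):
    """Re-count every other tag's frequency within just the posts that
    carry the clicked tag, so the cloud can re-form around it."""
    subset = index[clicked]
    return {tag: len(pages & subset)
            for tag, pages in index.items()
            if tag != clicked and pages & subset}

print(both, either)
print(related_cloud("rss", tag_index))
```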

But there are also cooler-headed thoughts to have here. If tag clouds don’t work, what will? What is the best way to navigate around those groups of relatively many words called articles or posts? In the comments to Jarvis’s post, I asked a set of questions:

How will we know when we meet a visualization of the news that’s actually really useful? Can some visualization of the news lay not just another path to the “hot topic” but a better one? Or will headlines make a successful transition from the analog past of news to its digital future as the standard way we find what we want to read?

I believe the gut-level interest in tag clouds comes in part from the sense that headlines aren’t the best way to navigate around groups of articles much bigger than the number in a newspaper. There’s a real pain point there: scanning headlines doesn’t scale. Abstracting away from them, however, and focusing on topics and newsmakers in order to find what’s best to read or watch just might work.

I think there’s a very substantial market for a smarter tag cloud. It might look very different from what we’ve seen, but it will let us see lots of information at a glance and help us get to the best stuff faster. After all, the articles we want to read, the videos we want to watch, and the conversations we want to have around them are what’s actually important.

Unbundling Traditionally Editorial Value-Adds

Felix Salmon does a brilliant job of deconstructing what a great newspaper does once it’s got “just-the-facts” news in hand:

  1. It turns news into stories: well-written, well-edited, not-too-long pieces which provide perspective and context and a bit of analysis too.
  2. It takes those stories and prioritizes them: important stories get big headlines on the front page; less-important stories are relegated to the back. A newspaper provides a crucial editing-down function, providing a way of navigating the sea of news by pointing out the most significant landmarks.
  3. It takes those prioritized stories and turns them into a finely-honed object, a newspaper. That’s what Thomson is talking about when he praises the Spanish newspapers—they’re very good at intuitively guiding the reader around the universe of news, making full use of photography, illustration, typography, white space, and all the other tools at a newspaper designer’s disposal.

Felix then goes on to write a post about customization—defined as giving value-adds (2) and (3) over to readers—and how it hasn’t worked when newspapers have tried it.

I agree with Felix that customization at the level of one publication isn’t terribly useful. That’s because there’s just not that much to customize. Relative to the universe of news—or even just the galaxy of financial news, say—someone who customizes the Journal just doesn’t hide all that much bad stuff or make it all that much easier to find the good stuff.

Let me be clear: I agree that customization can be a lot of upfront work, and I agree that that amount of work will ward off many readers, but it’s not at all clear that there isn’t a relatively small (but absolutely substantial) group of users whose tastes editors and designers miss or don’t appreciate.

But is customization at the level of an aggregator equally suspect? I find that once a reader is reading dozens of sources anyhow, value-adds (2) and (3) are more of a hindrance than a help. Whatever value a reader gets out of them is often overwhelmed by the simple inconvenience of having to jump around many different websites.

On the other hand, a customizable aggregator represents a return to the convenient one-stop shop. The reader’s customization may seem like a source of value-add (2), but there’s nothing to say the Journal couldn’t serve a feed of articles with “big headlines.” This is what’s going on with the list of most-emailed articles—”emasculating” the editors. That’s essentially what Google News and Digg and their distant cousins do. And even if we don’t particularly like how they do it (I don’t), we should both respect that this project of figuring out alternative ways to accomplish value-add (2) is very young.

In the end, there’s nothing about value-adds (1) through (3) that requires them to come together. Why not unbundle them? Why not give some users the ability not to care about what an editor cares about? After all, an editor is just offering a guess—an intelligent one, to be sure—about what readers want. But who even knows which readers—the mean, the median, the mode, the ones on Wall Street, the ones on Main Street?

Twine Beta

I read an awful lot of RSS feeds. Not a record-shattering amount, but enough that it’s hard for me to keep them all organized in Google Reader.

Despite my efforts to keep them in “folders” of different kinds—some organized by topic, others by how frequently I’d like to read them—I lose track of feeds for days or weeks on end sometimes. Then, when I do get a firm grip on all my feeds, I find that I’ve spent several hours of time I could’ve spent actually reading. That maintenance is getting to be a pain.

I’m hopeful that Twine can help me add a permanent, smarter layer of organization to all my feeds. That smarter layer could be sensitive to my evolving reading habits. I’m also hoping that Twine can help me group topically similar posts across scattered blogs on the fly.

So early access to the beta would be awesome!

Spiffy Concept

Caveat user: RSS lava lamps

So be good at long-term trends, not just short-term ones. And situate your visualization in the user’s context—different users see different visualizations depending on their differences. Also, make it easy, not difficult, to combine different data sources. Finally, make them actually social and easy to share.

Wow, sounds hard.

Programmable Information

From Tim O’Reilly:

But professional publishers definitely have an incentive to add semantics if their ultimate consumer is not just reading what they produce, but processing it in increasingly sophisticated ways.

In the past and present days of the web and media, publishers competed on price. If your newspaper or book or CD was the cheapest, that was a reason for someone to buy it. As information becomes digital, and the friction of exchange wears away, information will tend to be free. (See here, here, and here—and about a million other places.) That makes competing on price pretty tough.

Of course, publishers also competed, and still do, on quality. As they should. I suspect that readers will never stop wanting their newspaper articles well sourced, well argued, and well written. Partisan readers will never stop wanting their news to make the good guys look good and the bad guys look bad. That’s all in the data.

The nature of digital information, however, changes what information consumers will find high-quality. Now readers want much more: they want metadata. That’s what O’Reilly’s talking about. That’s what Reuters was thinking when it acquired ClearForest.

Readers won’t necessarily look at all the metadata the way they theoretically read an entire article. Instead readers might find the article because of its metadata, e.g., its issues, characters, organizations, or the neighborhood it was written about. Or they might find another article because it shares a given metadatum or because its set of metadata is similar. Or, another step out, they might find another reader who’s enjoyed lots of similar articles.
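As a rough sketch of that “similar set of metadata” idea: treat each article’s metadata as a plain set and rank candidates by overlap. The titles and sets below are invented.

```python
# Metadata-driven discovery: rank articles by the overlap of their metadata
# sets (Jaccard similarity). All data here is made up for illustration.
def jaccard(a, b):
    return len(a & b) / len(a | b)

just_read = {"Treasury", "bailout", "Wall Street"}
candidates = {
    "Fed weighs new lending facility": {"Federal Reserve", "bailout", "credit"},
    "Treasury revises bailout terms":  {"Treasury", "bailout", "TARP"},
    "Yankees sign ace pitcher":        {"Yankees", "baseball"},
}

# Rank candidates by how much metadata they share with the article just read.
for title, meta in sorted(candidates.items(),
                          key=lambda kv: jaccard(just_read, kv[1]),
                          reverse=True):
    print(round(jaccard(just_read, meta), 2), title)
```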

The point is that, if your newspaper has metadata that I can use, that is a reason for me to buy it (or to look at the ad next to it).

Actually, it’s not that simple. The New York Times annotates its articles with a few tags hidden in the html, and almost no one pays any attention to those tags. Few would even if the tags were surfaced on the page. Blogs have had tags for years, and no one’s really using that metadata, however meager, to great effect.

When blogs do have systematic tags, the way I take advantage of them is by way of an unrelated web application, namely, Google Reader. I can, for instance, subscribe to the RSS feed on this page, which aggregates all the posts tagged “Semantic Web” across ZDNet’s family of blogs. Without RSS and Google Reader, the tags just aren’t that useful. The metadata tells me something, but RSS and a feed reader allow me to lump and split accordingly.
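In code, that lumping and splitting takes only a few lines. Here is a minimal sketch using Python’s feedparser library; the tag-feed URL is illustrative, not the real one.

```python
import feedparser  # assumes the feedparser library is installed

# An illustrative tag-feed URL, standing in for the real ZDNet feed.
FEED_URL = "http://blogs.zdnet.com/tag/semantic-web/feed"

def entries_tagged(url, tag):
    """Yield (title, link) for feed entries carrying the given tag.
    feedparser exposes <category> elements as entry.tags, each with a .term."""
    feed = feedparser.parse(url)
    for entry in feed.entries:
        labels = {t.term.lower() for t in entry.get("tags", [])}
        if tag.lower() in labels:
            yield entry.title, entry.link

for title, link in entries_tagged(FEED_URL, "semantic web"):
    print(title, link)
```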

Google Reader allows consumers to process ZDNet’s metadata in “sophisticated ways.” Consumers can’t do it alone, and there’s real opportunity in building the tools to process the metadata.

Without the tools to process the metadata, the added information isn’t terribly useful. That’s why it’s a big deal that Reuters has faith that, if it brings forth the metadata, someone will build an application that exploits it—or that slices and dices it interestingly.

In fact, ClearForest already tried to entice developers with a contest in 2006. The winner was a web application called Optevi News Tracker, which isn’t very exciting to me for a number of reasons. Among them is that I don’t think it’s a good tool for exploiting metadata. I just don’t really get much more out of the news, although that might change if it used more than MSNBC’s feed of news.

My gut tells me that what lies at the heart of News Tracker’s lackluster operation is that it just doesn’t do enough with its metadata. I can’t really put my finger on it, and I could be wrong. Am I? Or should I trust my gut?

So what is the killer metadata-driven news application going to look like? What metadata are important, and what are not? How do we want to interact with our metadata?

B00km4rkToReadL8r

There are more than a few ways to remind yourself to read something or other later.

Browsers have bookmarks. Or you can save something to delicious, perhaps tagged “toread,” like very many people do. You can use this awesome Firefox plugin called “Read It Later.”

But I like to do my reading inside Google Reader; others like their reading inside their fave reader.

So what am I to do? My first thought was Yahoo Pipes. It’s a well-known secret that Pipes makes screen-scraping around partial feeds as easy as pie. So I thought I could maybe throw together a mashup of del.icio.us and Pipes to get something going.

My idea was to save my to-be-read-later pages to delicious with a common tag—the common “toread,” maybe. I could then have Pipes fetch from delicious the feed based on that tag. The main URL for each delicious post points to the original webpage, and so, with the loop operator, I could have Pipes auto-discover the feed associated with each of those URLs and then re-use the original URLs to locate, within each discovered feed, the post corresponding to the page to be read later.

Well, I don’t think it can be done so easily. (Please! Someone prove me wrong!)
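Outside Pipes, though, the flow is short enough to sketch in Python. This is a rough sketch only: the delicious feed URL is a hypothetical stand-in, and the feed “auto-discovery” here is a naive shortcut rather than real parsing of a page’s rel="alternate" links.

```python
import feedparser  # assumes the feedparser library is installed

# Hypothetical delicious feed for one user's "toread" tag.
TOREAD_FEED = "http://del.icio.us/rss/someuser/toread"

def read_later_posts():
    bookmarks = feedparser.parse(TOREAD_FEED)
    for mark in bookmarks.entries:
        page_url = mark.link  # the original page the bookmark points to
        # Naive stand-in for feed auto-discovery: try a conventional feed
        # path. Real auto-discovery would fetch the page and parse its
        # <link rel="alternate"> elements.
        site_feed = feedparser.parse(page_url.rstrip("/") + "/feed")
        match = next((e for e in site_feed.entries if e.link == page_url), None)
        if match:
            yield match.title, match.link
        else:
            # The error case described below: no RSS analogue found.
            print("no RSS analogue for", page_url)

for title, link in read_later_posts():
    print("read later:", title, link)
```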

Meantime, I’ll just use my handy Greasemonkey plug-in that lets me “preview” posts inside the Google Reader wrapper—so that I don’t have to hop from tab to tab like a lost frog.

Meantime, someone should really put together this app. Of course, it would really only work simply with pages that have RSS analogues in a feed. But if, through Herculean effort, you found some practicable way to inform me that a given page doesn’t, and you could parse out the junk and serve me only the text, you’d be a real hero. Otherwise, just tell me that the page I’m trying to read later doesn’t have an RSS analogue, give me an error message, and I’ll move on…assured in the knowledge that it will have one soon enough.

Gatherers and Packagers: When Product and Brand Cleave 4 Realz

Jeff Jarvis writes about the coming economics of news:

When the packager takes up and presents the gatherer’s content in whole and monetizes it—mostly with advertising—they share the revenue. When the gatherer just links, the gatherer monetizes the traffic, likely as part of an ad network as well.

I think this is right. In the first case, the content is on the “packager’s” page or in its feed; in the second, the content is on the “gatherer’s” page or in its feed. In both cases, advertising monetizes the content (let’s say), and readers or viewers find it by way of the packager’s brand (a coarse but inevitable word).

To me, however, the location of the user’s experience seems unimportant—in fact, the whole point of disaggregating journalism into two functions, imho, is to free up the content from the chains of fixed locations. Jarvis writes, “The packagers’ job would be to find the best news and information for their audience no matter where it comes from.” I agree, but why not let it go anywhere too—anywhere, that is, where the packager can still monetize it? (See Attributor if that sounds crazy.)

Couple this with the idea that RSS-like subscriptions are on the rise as the mechanism by which we get our content, replacing search in part. (As has been said before, there’s no spam on Twitter. Why not? Followers just unsubscribe.) The result is that the packager still maintains his incentive to burnish his reputation and sell his brand. After all, that’s what sploggers are: packagers without consciences who get traffic via search.

So I agree with Jarvis: “reliably bringing you the best package and feed of news that matters to you from the best sources” is how “news brands survive and succeed.” That’s how “the packagers are now motivated to assure that there are good sources.”

Give me tags, Calais!

Who needs to think about buying tags when Reuters and its newly acquired company are giving them away?

The web service is free for commercial and non-commercial use. We’ve sized the initial release to handle millions of requests per day and will scale it as necessary to support our users.

I mean, Jesus, it’s so exciting and scary (!) all at once:

This metadata gives you the ability to build maps (or graphs or networks) linking documents to people to companies to places to products to events to geographies to … whatever. You can use those maps to improve site navigation, provide contextual syndication, tag and organize your content, create structured folksonomies, filter and de-duplicate news feeds or analyze content to see if it contains what you care about. And, you can share those maps with anyone else in the content ecosystem.

More: “What Calais does sounds simple—what you do with it could be simply amazing.”
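As a rough sketch of what those maps might look like in code, assuming Calais-style extraction has already reduced each document to typed entities (the triples below are invented):

```python
from collections import defaultdict

# Invented (document, entity_type, entity_name) triples, standing in for
# the output of an entity-extraction service like Calais.
extracted = [
    ("doc1", "Person",  "Ben Bernanke"),
    ("doc1", "Company", "Citigroup"),
    ("doc2", "Company", "Citigroup"),
    ("doc2", "City",    "New York"),
]

# Build the map: each typed entity points to the documents mentioning it.
entity_docs = defaultdict(set)
for doc, etype, name in extracted:
    entity_docs[(etype, name)].add(doc)

def linked_docs(doc):
    """Documents linked to `doc` through any shared entity; the raw material
    for contextual syndication, de-duplication, or navigation."""
    return {d for ents in entity_docs.values() if doc in ents
              for d in ents if d != doc}

print(dict(entity_docs))
print(linked_docs("doc1"))  # {'doc2'}, via the shared Citigroup entity
```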

If the world were smart, there would be a gold rush to be first to build the killer app. Mine will be for serving the information needs of communities in a democracy—in a word, news. Who’s coming with me?

PS. Good for Reuters. May its bid to locate itself at the pulsing informational center of the semantic web and the future of news prove as ultimately lucrative as it is profoundly socially benevolent.

