Archive for the 'Google Reader' Category

Unbundling Traditionally Editorial Value-Adds

Felix Salmon does a brilliant job of deconstructing what a great newspaper does once it’s got “just-the-facts” news in hand:

  1. It turns news into stories: well-written, well-edited, not-too-long pieces which provide perspective and context and a bit of analysis too.
  2. It takes those stories and prioritizes them: important stories get big headlines on the front page; less-important stories are relegated to the back. A newspaper provides a crucial editing-down function, providing a way of navigating the sea of news by pointing out the most significant landmarks.
  3. It takes those prioritized stories and turns them into a finely-honed object, a newspaper. That’s what Thomson is talking about when he praises the Spanish newspapers—they’re very good at intuitively guiding the reader around the universe of news, making full use of photography, illustration, typography, white space, and all the other tools at a newspaper designer’s disposal.

Felix then goes on to write a post about customization—defined as giving value-adds (2) and (3) over to readers—and how it hasn’t worked when newspapers have tried it.

I agree with Felix that customization at the level of one publication isn’t terribly useful. That’s because there’s just not that much to customize. Relative to the universe of news—or even just the galaxy of financial news, say—someone who customizes the Journal just doesn’t hide all that much bad stuff or make it all that much easier to find the good stuff.

Let me be clear: I agree that customization can be a lot of upfront work, and I agree that amount of work will ward off many readers, but it’s not at all clear that there isn’t a relatively small (but absolutely substantial) group of users who have tastes that editors and designers miss or don’t appreciate.

But is customization at the level of an aggregator equally suspect? I find that once a reader is reading dozens of sources anyhow, value-adds (2) and (3) are more of a hindrance than a help. Whatever value a reader gets out of them is often overwhelmed by the simple inconvenience of having to jump around many different websites.

On the other hand, a customizable aggregator represents a return to the convenient one-stop shop. The reader’s customization may seem like a source of value-add (2), but there’s nothing to say the Journal couldn’t serve a feed of articles with “big headlines.” This is what’s going on with the list of most-emailed articles—”emasculating” the editors. That’s essentially what Google News and Digg and their distant cousins do. And even if we don’t particularly like how they do it (I don’t), we should respect that this project of figuring out alternative ways to accomplish value-add (2) is very young.

In the end, there’s nothing about these main value-adds (1)-(3) that requires them to come together. Why not unbundle them? Why not give some users the ability not to care about what an editor cares about? After all, an editor is just offering a guess—an intelligent one, to be sure—about what readers want. But who even knows which readers—the mean, the median, the mode, the ones on Wall Street, the ones on Main Street?


Twine Beta

I read an awful lot of RSS feeds. Not a record-shattering amount, but enough that it’s hard for me to keep them all organized in Google Reader.

Despite my efforts to keep them in “folders” of different kinds—some organized by topic, others by how frequently I’d like to read them—I sometimes lose track of feeds for days or weeks on end. Then, when I do get a firm grip on all my feeds, I find that I’ve spent several hours of time I could’ve spent actually reading. That maintenance is getting to be a pain.

I’m hopeful that Twine can help me add a permanent, smarter layer of organization to all my feeds. That smarter layer could be sensitive to my evolving reading habits. I’m also hoping that Twine can help me pull together groups of topically similar posts across scattered blogs on the fly.

So early access to the beta would be awesome!

Programmable Information

From Tim O’Reilly:

But professional publishers definitely have an incentive to add semantics if their ultimate consumer is not just reading what they produce, but processing it in increasingly sophisticated ways.

In the past and present days of the web and media, publishers competed on price. If your newspaper or book or CD was the cheapest, that was a reason for someone to buy it. As information becomes digital, and the friction of exchange wears away, information will tend to be free. (See here, here, and here—and about a million other places.) That makes competing on price pretty tough.

Of course, publishers also competed, and still do, on quality. As they should. I suspect that readers will never stop wanting their newspaper articles well sourced, well argued, and well written. Partisan readers will never stop wanting their news to make the good guys look good and the bad guys look bad. That’s all in the data.

The nature of digital information, however, changes what information consumers will find high-quality. Now readers want much more: they want metadata. That’s what O’Reilly’s talking about. That’s what Reuters was thinking when it acquired ClearForest.

Readers won’t necessarily look at all the metadata the way they theoretically read an entire article. Instead readers might find the article because of its metadata, e.g., its issues, characters, organizations, or the neighborhood it was written about. Or they might find another article because it shares a given metadatum or because its set of metadata is similar. Or, another step out, they might find another reader who’s enjoyed lots of similar articles.
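If I were to sketch that in code (just a toy, with made-up articles and tags, not anything a publisher actually ships), finding “similar” articles by shared metadata could be as simple as comparing tag sets:

    # A toy illustration: rank articles as "related" by how much metadata they share.
    # The articles and their tags below are invented for the example.

    def jaccard(a, b):
        """Overlap between two tag sets: |intersection| / |union|."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    articles = {
        "Fed cuts rates again": {"federal-reserve", "interest-rates", "economy"},
        "Dollar slides on rate news": {"currency", "interest-rates", "economy"},
        "Local team wins opener": {"sports", "baseball"},
    }

    # Given the article I just read, find the ones whose metadata overlaps most.
    current = "Fed cuts rates again"
    related = sorted(
        (title for title in articles if title != current),
        key=lambda title: jaccard(articles[current], articles[title]),
        reverse=True,
    )
    print(related)  # the rates/economy story outranks the sports story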

The point is that, if your newspaper has metadata that I can use, that is a reason for me to buy it (or to look at the ad next to it).

Actually, it’s not that simple. The New York Times annotates its articles with a few tags hidden in the HTML, and almost no one pays any attention to those tags. Few would even if the tags were surfaced on the page. Blogs have had tags for years, and no one’s really using that metadata, however meager, to great effect.

When blogs do have systematic tags, the way I take advantage of them is by way of an unrelated web application, namely, Google Reader. I can, for instance, subscribe to the RSS feed on this page, which aggregates all the posts tagged “Semantic Web” across ZDNet’s family of blogs. Without RSS and Google Reader, the tags just aren’t that useful. The metadata tells me something, but RSS and a feed reader allow me to lump and split accordingly.
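Here’s the sort of lumping and splitting I mean, sketched with the feedparser library. The feed URL is a stand-in for whatever tag feed a site exposes, and I’m assuming it’s ordinary RSS or Atom:

    # Sketch: pull one tag's feed and "lump" its posts by the blog they came from.
    # The URL is a placeholder for a real tag feed; assumes plain RSS/Atom and
    # the feedparser library (pip install feedparser).
    import feedparser
    from collections import defaultdict

    TAG_FEED = "http://blogs.example.com/tag/semantic-web/rss"  # hypothetical

    feed = feedparser.parse(TAG_FEED)

    by_source = defaultdict(list)
    for entry in feed.entries:
        # Aggregated feeds often carry the originating blog in entry.source.
        source = entry.get("source", {}).get("title", "unknown blog")
        by_source[source].append(entry.title)

    for blog, titles in by_source.items():
        print(blog, "->", len(titles), "posts tagged 'Semantic Web'")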

Google Reader allows consumers to process ZDNet’s metadata in “sophisticated ways.” Consumers can’t do it alone, and there’s real opportunity in building the tools to process the metadata.

Without the tools to process the metadata, the added information isn’t terribly useful. That’s why it’s a big deal that Reuters has faith that, if it brings forth the metadata, someone will build an application that exploits it—an application that slices and dices it interestingly.

In fact, ClearForest already tried to entice developers with a contest in 2006. The winner was a web application called Optevi News Tracker, which isn’t very exciting to me for a number of reasons. Among them is that I don’t think it’s a good tool for exploiting metadata. I just don’t really get much more out of the news, although that might change if it used more than MSNBC’s feed of news.

My gut tells me that what lies at the heart of News Tracker’s lackluster operation is that it just doesn’t do enough with its metadata. I can’t really put my finger on it, and I could be wrong. Am I? Or should I trust my gut?

So what is the killer metadata-driven news application going to look like? What metadata are important, and what are not? How do we want to interact with our metadata?

B00km4rkToReadL8r

There are more than a few ways to remind yourself to read something or other later.

Browsers have bookmarks. Or you can save something to delicious, perhaps tagged “toread,” as very many people do. You can use this awesome Firefox plugin called “Read It Later.”

But I like to do my reading inside Google Reader; others like their reading inside their fave reader.

So what am I to do? My first thought was Yahoo Pipes. It’s a well-known secret that Pipes makes screen-scraping around partial feeds as easy as pie. So I thought I could maybe throw together a mashup of del.icio.us and Pipes to get something going.

My idea was to save my to-be-read-later pages to delicious with a common tag—the common “toread,” maybe. I could then have Pipes fetch from delicious the feed for that tag. The main URL of each delicious post points to the original webpage, and so, with the loop operator, I could have Pipes auto-discover the feed associated with each of the URLs in the delicious feed. Then, original URLs in hand, I could locate within each feed the post corresponding to the page to be read later.

Well, I don’t think it can be done so easily. (Please! Someone prove me wrong!)
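For what it’s worth, here’s roughly what that pipeline would have to do, sketched in Python rather than Pipes. It’s just a sketch: I’m assuming a delicious-style RSS feed for my “toread” tag (the username is a placeholder), standard feed auto-discovery via <link rel="alternate">, and the feedparser, requests, and BeautifulSoup libraries.

    # Rough sketch of the read-it-later pipeline, outside of Pipes.
    # Assumes: a delicious-style RSS feed for one user's "toread" tag (the
    # username is a placeholder), feed auto-discovery via <link rel="alternate">,
    # and the feedparser, requests, and beautifulsoup4 libraries.
    import feedparser
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    TAG_FEED = "http://del.icio.us/rss/someuser/toread"  # hypothetical

    def discover_feed(page_url):
        """Find a page's RSS/Atom feed via <link rel="alternate">, if it has one."""
        html = requests.get(page_url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        link = soup.find("link", rel="alternate",
                         type=lambda t: t and ("rss" in t or "atom" in t))
        return urljoin(page_url, link["href"]) if link else None

    def find_post(feed_url, page_url):
        """Return the feed entry whose link matches the bookmarked page."""
        for entry in feedparser.parse(feed_url).entries:
            if entry.get("link", "").rstrip("/") == page_url.rstrip("/"):
                return entry
        return None

    to_read = []
    for bookmark in feedparser.parse(TAG_FEED).entries:
        page_url = bookmark.link
        feed_url = discover_feed(page_url)
        if feed_url is None:
            print("No RSS analogue for", page_url)  # the error case I ask for below
            continue
        post = find_post(feed_url, page_url)
        if post is not None:
            to_read.append(post)  # full post, ready to drop into a reader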

Meantime, I’ll just use my handy Greasemonkey plug-in that lets me “preview” posts inside the Google Reader wrapper—so that I don’t have to hop from tab to tab like a lost frog.

Meantime, someone should really put together this app. Of course, it would only work simply with pages that have RSS analogues in a feed. But if, through Herculean effort, you found some practicable way to handle a page that doesn’t—parsing out the junk and serving me only the text—you’d be a real hero. Otherwise, just tell me that the page I’m trying to read later doesn’t have an RSS analogue, give me an error message, and I’ll move on…assured in the knowledge that it will soon enough.

If you got excited about Streamy…

…then you should check out FeedEachOther. That’s what Marshall Kirkpatrick of R/WW says. If you were let down by Streamy, on the other hand, it looks like you will also be let down by FeedEachOther.

What’s the bummer? These “feature-rich super-social RSS readers” just aren’t that feature-rich or social. They’re just not so different from Google Reader. They’re still RSS readers.

But first, the good news. The thing pulls comments from the original blog into the reader. That’s awesome. Multiple kinds of relationships are good too.

I don’t want to subscribe to “similar” feeds according to some recommendation that’s a huge black box. In fact, it doesn’t really even work, and its black-boxiness prevents me from knowing why. Why, for instance, does FeedEachOther only give me recommendations based on the whole feed? Why not on each post? Whole feeds contain posts way too diverse to derive sufficiently specific semantic patterns from them.

It’s not okay to look at all of Jeff Jarvis’s feed and offer me this string of banal tags: “advertizing – buzz – internet – news – technology – blogs – daily – marketing – politics – web – blog – commentary – jarvis – online – trends – business – imported – media – tech – web2.0 – blogging – culture – journalism – opinion – tv.” Setting aside the problem of blogs-blog-blogging, it’s not okay because they’re so generic and because I can’t stack them up and take their intersections. I can’t use these tags the way the people who created them use them. When someone in delicious tags something “journalism,” they might also tag it “trends.” Neither topic is interesting alone; only together are they interesting. (Indeed, ‘trends in journalism’ is very interesting.)
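To put it in code: a toy sketch (posts and tags invented) of why the intersection, not the single generic tag, is where the interesting filtering lives:

    # Toy sketch: single generic tags are noise; their intersections carry signal.
    # The posts and tags are invented for the example.
    posts = {
        "Newsrooms experiment with live blogging": {"journalism", "trends", "blogs"},
        "Why I hate banner ads": {"advertising", "web"},
        "The slow death of the evening paper": {"journalism", "business"},
        "What readers click on, and why": {"journalism", "trends", "media"},
    }

    def tagged_with_all(posts, wanted):
        """Keep only the posts carrying every tag in `wanted` (set intersection)."""
        wanted = set(wanted)
        return [title for title, tags in posts.items() if wanted <= tags]

    print(tagged_with_all(posts, {"journalism"}))            # too broad: three posts
    print(tagged_with_all(posts, {"journalism", "trends"}))  # 'trends in journalism': two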

Plus! On top of reading each post’s comments alongside the feed, I can share notes and items within the system. But wait! “The only thing better” would be to post comments from the web app to the original post? Actually, that’s a lot better. That’s worlds and worlds better. A web app is still just a basic RSS reader until it can weave itself into the same cloth of which the many, many thousands of blogs with their comments are made.

So, no, “the absence of offline and mobile modes, weaker analytics than Google Reader offers and a limit of 500 feeds by OPML import” are not the “only shortcomings.” Someone’s seriously drinking the RSS Reader Kool-Aid. And that’s too bad—because RSS itself is so many times greater and more magnificent.

In the end, Google Reader, Streamy, and FeedEachOther are bastions of only ONE component of networked news. They allow readers to network the news by publisher. Sure, they do more than dabble in allowing readers to network by fellow readers. There’s got to be more though—comments from reader to blog would be a big step. Lastly, both Streamy and FeedEachOther just don’t have the necessary kind of semantic (or “Semantic”) insight into their content yet. The three components of networked news must be as one for any to be truly worthwhile.

When will my news platform serve me up content that’s from my favorite author and recommended by my good buddy and about my favorite subject or story or beat? When that happens, we’ll not only all be reading our own really interesting stuff—we’ll care enough about it to get into even more interesting conversations.
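If I had to pin that wish down, the filter is just a conjunction of the three components: favorite author, a friend’s recommendation, and my beat. A toy version (every name below is invented):

    # Toy predicate for the news platform I'm wishing for: an item gets through
    # only when it satisfies all three components at once. All names are invented.
    from dataclasses import dataclass

    @dataclass
    class Item:
        author: str
        topics: set
        recommended_by: set  # readers who shared or recommended it

    favorite_authors = {"Felix Salmon"}
    good_buddies = {"alice"}
    favorite_topics = {"networked news"}

    def keep(item):
        return (item.author in favorite_authors
                and bool(item.recommended_by & good_buddies)
                and bool(item.topics & favorite_topics))

    item = Item("Felix Salmon", {"networked news", "economics"}, {"alice", "bob"})
    print(keep(item))  # True: favorite author, recommended by a buddy, on my beat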

Digg Adds Depth

Digg just added social networking to its position as the leading player in submit-and-vote news! Yes, Digg added the second component of networked news to the first.

I’m not sure enough people will have enough friends to end up caring more about what their friends think about the news than about what the universe of diggers thinks about the news. I, for one, as a twenty-something workaday guy, just don’t know enough people who use Digg to slurp up their news efficiently.

But maybe there are fifteen-year-olds who use Digg to get all their news. And maybe there are enough who have lots of other friends who use Digg similarly. If so, the submit-and-vote version of the first component of networked news could be on its way.

Many people, including me, don’t use Digg because its content—often dominated, they say, by upper-middle-class geeky white dudes—just doesn’t cut it. I’ll stick with hours upon hours in front of Google Reader, backed up by AideRSS, of course. But with networks of friends, like-minded intellectuals no doubt, Digg could really scratch my itch for content on the impending collapse of the dollar or Barack Obama’s position on chatting with foreign leaders or this conference I badly want to go to. (They say there’s so little room! They say Dave Winer may show!)

Anyhow, when are we going to be able to digg stories from outside digg.com? When am I going to install on my Facebook profile a Digg application in which I can choose to see everyone’s diggs, just my friends’ diggs, just diggs of certain topics, just my own diggs going back through history, etc.? When, indeed, am I going to be able to vote from Facebook? Stick an ad in your widget and be done with it, Mr. Rose, who’s a near-hero of mine, for his lack of technical skills, mostly. (He paid a guy—someone else, someone who could code—ten bucks an hour to develop the site.)

PS. Mr. Cohn, toss me an invitation to the conference you and Mr. Jarvis are doing God’s, or at least the Republic’s, work to organize! And ask the top diggers whether, or under what conditions, they think their role could shrink as people like me shift our attention away from the Digg homepage to our own friend-centered niches now that Digg has brought on the second component of networked news!

Google Reader Counts Past One Hundred

That’s awesome. Whew, I shall remember these halcyon days warmly.

I can’t find the official word, however, so I can’t put a link on offer. You’ll just have to log in and check—if you’re like me and can now fret that the number of posts you have yet to read seems to have leaped by an order of magnitude, now up to “1000+” and beyond.

Actually, it’s great knowing the difference between 103 and 803.

