Archive for the 'NovaSpivack' Category

Why Socialmedian, Twine, and Others Don’t Get the News

More than a year ago, I asked, “What Is Networked News?” I was thinking about how people really, actually want to get their news, and my answer came in three parts.

Let’s focus briefly on the first two. (1) People care about who writes it or creates it. In other words, people want their news from trusted publishers. (2) People also care about who likes it. In other words, people want their news from trusted consumers—their “friends.”

News in the modern era has naturally revolved around publishers. That part’s old-hat, and so people need little help from innovators in getting their news from publishers. But innovators have made tremendous strides in helping people get their news from their friends. This is largely the story of the success of Web 2.0 so far, and many startups have engineered ingenious systems for delivering news to people because their friends like it.

FriendFeed is one such awesome story. Twitter’s another. Google Reader’s “share” feature and its openness, which has allowed others to build applications on top of it, make for another perfect example. The ethic of the link among bloggers is, in a very real way, central to this concept: one person referring others to someone else’s thoughts.

But I also wrote about a third way. (3) People want their news about what interests them. This may seem like a trivial statement, but it is deeply important. There is still tons of work to be done by innovators in engineering systems for actually delivering news to people because they want exactly what they want and don’t want any of the rest.

Twine‘s “twines” come close. Socialmedian‘s “news networks” come close. They’re both examples of innovation moving in the right direction.

But they don’t go nearly far enough. Twine looks like it’s got significant horsepower under the hood, but it lacks the intuitive tools to deliver. Frankly, it’s badly burdened by its overblown vision of a tricked-out Semantic Web application that’s everything to all people all the time. Twine is, as a result, an overcomplicated mess.

Socialmedian’s problems are worse, however. It’s simply underpowered. Nothing I’ve read, including its press release reproduced here, indicates the kind of truly innovative back-end that can revolutionize the news. Socialmedian wraps a stale social donut around Digg, and I’m afraid that’s about it.

When it comes to the news, people demand (1), (2), and (3). They want their most trusted publishers and their most trusted friends, and they want to personalize their interests with radical granularity. That takes an intense back-end, which Socialmedian simply lacks. That also takes an elegant user-facing information architecture, which Twine lacks.

We’ve had (1) for years, and I’m thrilled at the advances I see made seemingly every day toward a more perfect (2). But a killer news web application has yet to deliver on (3). When it does, we’ll have something that’s social and powerful and dead-simple too.

Twine Beta

I read an awful lot of RSS feeds. Not a record-shattering amount, but enough that it’s hard for me to keep them all organized in Google Reader.

Despite my efforts to keep them in “folders” of different kinds—some organized by topic, others by how frequently I’d like to read them—I sometimes lose track of feeds for days or weeks on end. Then, when I do get a firm grip on all my feeds, I find that I’ve spent several hours I could’ve spent actually reading. That maintenance is getting to be a pain.

I’m hopeful that Twine can help me add a permanent, smarter layer of organization to all my feeds. That smarter layer could be sensitive to my evolving reading habits. I’m also hoping that Twine can help me group topically similar posts across scattered blogs on the fly.
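To make the wish concrete, here’s roughly the kind of on-the-fly grouping I mean: a quick Python sketch with made-up post titles, using a crude word-overlap test where Twine would presumably use something far smarter.

    def tokenize(title):
        """Lowercase a title and split it into a set of words."""
        return set(title.lower().split())

    def similar(a, b, threshold=0.3):
        """Jaccard similarity of two titles: shared words over total words."""
        wa, wb = tokenize(a), tokenize(b)
        return len(wa & wb) / len(wa | wb) >= threshold

    def group_posts(posts):
        """Greedily bucket posts whose titles look alike."""
        groups = []
        for post in posts:
            for group in groups:
                if similar(post["title"], group[0]["title"]):
                    group.append(post)
                    break
            else:
                groups.append([post])
        return groups

    posts = [
        {"feed": "Read/Write Web", "title": "Twine opens its private beta"},
        {"feed": "TechCrunch", "title": "Twine beta opens to more users"},
        {"feed": "Nick Carr", "title": "The amorality of Web 2.0"},
    ]

    for group in group_posts(posts):
        print([p["title"] for p in group])
    # [['Twine opens its private beta', 'Twine beta opens to more users'],
    #  ['The amorality of Web 2.0']]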

So early access to the beta would be awesome!

Sell me tags, Twine!

How much would, say, the New York Times have to pay to have the entirety of its newspaper analyzed and annotated every day?

The question is not hypothetical.

The librarians could go home, and fancy machine learning and natural language processing could step in and start extracting entities and tagging content. Hi, did you know Bill Clinton is William Jefferson Clinton but not Senator Clinton?! Hey there, eh, did you know that Harlem is in New York City?! Oh, ya, did you know that Republicans and Democrats are politicians, who are the silly people running around playing something called politics?!
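To make the idea concrete, here’s a toy sketch of that kind of annotation. The alias table and relations below are my own made-up examples, nothing Twine has published, and a real system would use statistical entity extraction rather than a hand-rolled lookup:

    ALIASES = {
        "Bill Clinton": "William Jefferson Clinton",
        "President Clinton": "William Jefferson Clinton",
        "Senator Clinton": "Hillary Rodham Clinton",  # not the same Clinton!
    }

    RELATIONS = {
        ("Harlem", "located_in"): "New York City",
        ("Republicans", "instance_of"): "politicians",
        ("Democrats", "instance_of"): "politicians",
    }

    def annotate(text):
        """Return the canonical entities and facts a passage mentions."""
        tags = []
        for alias, canonical in ALIASES.items():
            if alias in text:
                tags.append(("entity", alias, canonical))
        for (subject, relation), obj in RELATIONS.items():
            if subject in text:
                tags.append((relation, subject, obj))
        return tags

    print(annotate("Senator Clinton spoke at a Harlem restaurant."))
    # [('entity', 'Senator Clinton', 'Hillary Rodham Clinton'),
    #  ('located_in', 'Harlem', 'New York City')]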

Twine could tell you all that. Well, they say they can, but they won’t invite me to their private party! And maybe the librarians wouldn’t have to go home. Maybe they could monitor (weave?) the Twine and help it out when it falls down (frays?).

I want to buy Twine’s smarts, its fun tags. I’d pay a heckuva lot for really precociously smart annotation! They say, after all, that it will be an open platform from which we can all export our data. Just, please, bloat out all my content with as much metadata as you can smartly muster! Por favor, sir! You are my tagging engine—now get running!

What if Twine could tag all the news that’s fit to read? It would be a fun newspaper. Maybe I’d subscribe to all the little bits of content tagged both “Barack Obama” and “president.” Or maybe I’d subscribe to all the local blog posts and newspaper articles and videos tagged “Harlem” and “restaurant”—but only if those bits of content were already enjoyed by one of my two hundred closest friends in the world.
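That subscription is easy to state precisely. Here’s a minimal Python sketch of the filter I have in mind; the items, tags, and friend list are invented for illustration, not anything Twine or Socialmedian actually exposes:

    FRIENDS = {"alice", "bob"}  # my two hundred closest friends, abridged

    items = [
        {"title": "New soul-food spot on Lenox Avenue",
         "tags": {"Harlem", "restaurant"}, "liked_by": {"alice"}},
        {"title": "Obama clinches the nomination",
         "tags": {"Barack Obama", "president"}, "liked_by": {"carol"}},
        {"title": "A history of Harlem jazz clubs",
         "tags": {"Harlem", "music"}, "liked_by": {"bob"}},
    ]

    def my_news(items, wanted_tags, friends):
        """Keep items carrying every wanted tag and liked by at least one friend."""
        return [item for item in items
                if wanted_tags <= item["tags"] and item["liked_by"] & friends]

    for item in my_news(items, {"Harlem", "restaurant"}, FRIENDS):
        print(item["title"])  # -> New soul-food spot on Lenox Avenue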

I’d need a really smart and intuitive interface to make sense of this new way of approaching the news. Some online form of newsprint just wouldn’t cut it. I’d need a news graph, for sure.

See TechCrunch’s write-up, Read/Write Web’s, and Nick Carr’s too.

PS. Or I’ll just build my own tagging engine. It’ll probably be better because I can specifically build it to reflect the nature of news.

Spivack Gets the Semantic Web, But the Analogy Eludes

I simultaneously envy and fret over Nova Spivack’s style. I’m deeply sympathetic to his recent brain metaphor—in no small part because I’m a sucker for the killer analogy. Spivack’s analogy is catchy and seems useful: “I believe that collective intelligence primarily comes from connections—this is certainly the case in the brain where the number of connections between neurons far outnumbers the number of neurons; certainly there is more ‘intelligence’ encoded in the brain’s connections than in the neurons alone.” Then, bringing it home, “Connection technology…is analogous to upgrading the dendrites in the human brain; it could be a catalyst for new levels of computation and intelligence to emerge.” Ultimately, Spivack claims, “By enriching the connections within the Web, the entire Web may become smarter.”

There’s great stuff packed in here—frustratingly great stuff. Is there really more “intelligence” encoded in the brain’s connections than its neurons? What does it mean to believe that collective intelligence comes from connections? Or are we talking tautology (in which “intelligence” + “connections” = “connected” or “collected” or “collective intelligence”)? And what could it ever mean to upgrade, or enrich, our dendrites, the byzantine tree-like conductors of electrical inputs to our neurons? How would we be more intelligent?

Why not rehearse an argument that defends the aptness of this analogy? Why leave that chore—the really hard part—to me, to the reader? Unless they’re trivial or obvious, even rigorous analogies alone cannot be more than invitations to real arguments. Don’t invite me to the party and tell me to bring the champagne!

“The important point for this article,” Spivack writes, “is that in this data model rather than there being just a single type of connection”—the present Web’s A-to-B hotlink—”the Semantic Web enables an infinite range of arbitrarily defined connections to be used.” Bits of information, people, and applications “can now be linked with specific kinds of links that have very particular and unambiguous meaning and logical implications. … Connections can carry more meaning, on their own. It’s a new place to put meaning in fact—you can put meaning between things to express their relationships.”
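A bare-bones illustration of the difference, with predicates and items I’ve invented rather than any particular vocabulary: today’s hyperlink records only that one page points at another, while a typed connection says exactly why two things are related, which lets simple questions become simple queries.

    # The present web: one undifferentiated kind of connection.
    hyperlinks = [
        ("http://example.com/harlem-review", "http://example.com/sylvias"),
    ]

    # A semantic web: subject-predicate-object triples, where the predicate
    # carries the particular, unambiguous meaning of the connection.
    triples = [
        ("Sylvia's", "is_a", "restaurant"),
        ("Sylvia's", "located_in", "Harlem"),
        ("Harlem", "part_of", "New York City"),
        ("harlem-review", "reviews", "Sylvia's"),
    ]

    def objects_of(subject, predicate, graph):
        """Everything the graph says stands in `predicate` to `subject`."""
        return [o for s, p, o in graph if s == subject and p == predicate]

    print(objects_of("Sylvia's", "located_in", triples))  # ['Harlem']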

Yes, when connections can carry arbitrarily more meaning, the human-relevant reasons for them to exist grow arbitrarily large—or, at least, as arbitrarily large as we bandwidth-bounded humans can handle. Only this kind of virtuous semantic circle, it seems to me, can radically improve the intelligence of the web as a whole. What’s important are not just connections with more meaning (“upgraded” dendrites, I suppose). What’s important is that connections with more meaning promise a blossoming of the total number of connections (more “dendrites”)—each of which can itself have more meaning.

The web will become more intelligent, or just more useful, when projects like Spivack’s and like Freebase—which I’ve checked out a bit (facebook me for an invitation to the private alpha)—expand the scope of reasons for connections among bits of information, people, and applications. Of course, that’s the whole idea for the semantic web. With more reasons for connections, we get more meaning for connections. With more meaning for connections, we get more connections. In the end, we get more connections with more meaning—a kind of semantic multiplier effect.

It’s just that we’re talking about the Internet here. Brains are still a few years out.

