I, Kickstarter


Mathew Ingram’s Orthodoxy

Writing at gigaom, Ingram has become the orthodox voice of criticism of know-nothing newspapers and their superficial attempts at innovation in journalism. His frequent cases of “Don’t get me wrong…” make his tone sound apologetic to me. My guess is that he’s moderating his frustration and anger in order to keep the readership of the editors he critiques. Reasonable enough.

For instance, check out this post, called “Memo to media: A Facebook app is not innovation.”

Pick two

Journalism’s challenge: “No one has figured out a way to present lots and lots of constantly updated information in a way that (a) is beautiful, (b) is effective at story discovery, and (c) privileges editorial control.”

Josh Benton has the goods: http://bit.ly/pFZsAI

Pure Money Gifts Are Basically a Bad Idea

The broad thesis under which I like the pwyw (pay-what-you-want) model is that there is a huge positive externality that goes to the payers. They look noble, the same way the endowers of chairs at universities look noble.

Now, I’m not saying that all the payers in a pwyw scheme will be rich and pay vast sums. That’s the facet of the analogy I don’t want to import. The relevant facet is that people are aspirational. We want to help the world, and we want our friends and colleagues and peers to know about it when we do. If that’s narcissism, so be it. We’re all narcissists then. There’s a reason we say “thank you.” It’s that we think we’re obliged to recognize the person who did the good thing. And there’s a reason we’re irked when we’re not thanked. It’s that we think others are obliged to recognize us when we do the good thing.

The point is that organizations charitable and otherwise have tried to tap that externality — in one way or another — for ages and ages and ages. Take the girl scouts. Delicious as they may be, people don’t buy thin mints because they deem them (the thin mints!) to be the right quality at the right price. People buy thin mints because they want to support the kids. The girl scouts organization frames the act of sales as a leadership activity for the girls. You go door to door, and you introduce yourself to strangers, and you pitch your product and your mission with poise. And cookies aren’t an arbitrary choice. Everyone knows what a cookie is. Cookies are a very easy thing for thousands and tens of thousands of super diverse children to sell all over everywhere and then some.

Which is precisely why it pisses me off when the parent sells his kid’s cookies in the office. It fucks up the whole jam. It devalues the cookies and saps the strength of the positive externality. But that’s the exact reason this example resonates resonates resonates. Would people buy more cookies if the girls themselves did all the selling, assuming that the girls were just as good at sales as their parents are? Yes, absolutely. That assumption is doing a ton of work for me, obvs. In fact, offices are basically the perfect place to sell charitable cookies for a bazillion reasons, but it’s hard for the kid to get to the parent’s office and walk cubicle to cubicle or send an all-staff email with a bubbly tone. And but so yet imagine the kid doing that, putting herself in view of her dad’s colleagues and saying, “hey, i’m a girl scout, and i’m selling yummy cookies for my troop, so what kind would you like?” Irresistible.

And not just irresistible-because-cute, although that’s the undeniable packaging. It’s ultimately irresistible because supporting a kid who’s being ambitious and working toward a goal is a good thing to do. And when one colleague shares his thin mints with another colleague, he’s saying, in part, “hey, i’m a good person who supports kids when they work toward goals.” And that’s awesome. That’s fucking close to magic. “Thank you,” says the other colleague.

And but so yet obviously the colleague who shares his cookies doesn’t actually say anything like, “hey look at how morally awesome i am!” That would be weird for reasons that are as obvious as they are complicated. That’s the whole point of the cookies! Sublimation! The cookies do the talking. They themselves are the communications vector for screaming, “i am a good person! please don’t cc my boss the next time i fuck up that weekly report you asked for!” And, really, can you imagine a better, sweeter, more delicious way to show the world how awesome you are than freely passing out charity-minded finger food that has tons of sugar to bleary-eyed and bored-stiff adults who are sick and bored to death of emails and meetings and other fluorescent drudgeries?!

So, at long last, it comes to this: news organizations don’t need pure hand-outs. They need their own cookies. The cookies unlock the huge positive externality. Give people something to talk about and share — something more than “hey, i just made a goddamn mensch of myself by funding the news.” Let them say something like “oh, did you hear that crazy tidbit about X? i just found that out from Jane Journalist, that one awesome expert reporter whose club i’m in.”

So, to repeat, the broad thesis under which I like the pwyw model is that there is a huge positive externality that goes to the payers. But there are all kinds of weird cultural and ethical norms around activating that externality. A great end result and donor list doesn’t cut it. Cookies work for the girl scouts because they sublimate the virtue of giving and because they’re super shareable. And, for the news, intriguing facts or interesting tidbits or smart opinions will work because they sublimate the virtue of giving and because they’re super shareable.

People will pay to be cool or morally good. You just can’t be super obvious about it with outright pure money gifts.

Speculation on Links, Traffic, and Authority

We can say this: traffic flows along links that we click. For a few years—before google—we could even say this: a link is not a link until we click it.

But now that is wrong because google made links really something else—meaningful signals, not just infrastructure. Links have a deeply important role in pagerank, the backbone of google’s mighty search engine.

Thus the giver of a link tells google that the recipient of a link is notable or significant or worth your time and attention and consideration or engagement. This is authority—on average, at the very least.

Links are signals for authority. That authority is distributed throughout the network, and given Igon values, google built a magnificent business detecting, computing, and centralizing that authority.
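To make the pagerank idea concrete, here’s a toy version—nothing like google’s production system, just the textbook power iteration over a made-up three-page link graph:

```python
# Toy pagerank: a page's authority is the authority shed onto it by
# the pages that link to it, plus a small baseline for everyone.
damping = 0.85  # the damping factor from the original pagerank paper

# An invented three-page link graph: each page lists its outbound links.
links = {
    "blog": ["paper", "aggregator"],
    "aggregator": ["paper"],
    "paper": ["blog"],
}

ranks = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # power iteration: repeat until the ranks settle
    ranks = {
        page: (1 - damping) / len(links)
        + damping
        * sum(ranks[src] / len(outs) for src, outs in links.items() if page in outs)
        for page in links
    }

print(ranks)  # "paper", with two inbound links, ends up on top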

* * *

We are not entitled to our own understanding of facts, which take root in the universe. Thus we call facts objective. But we are entitled to our own appreciations of authority. Indeed, appreciation for authority can only take root in ourselves as individuals and groups of individuals. Thus we call authority subjective.

There are very many facts that I will never need to learn or remember. I will rely on google to detect those answers. Like just-in-time inventory, I will have answers only when I need them, when I borrow them, avoiding the mental costs of carrying them in my jammed-up memory.

Likewise, there are very many authorities that I will never need to appreciate. I will rely on google to detect those signals. But unlike facts as stored in someone else’s inventory, something changes about authority when I don’t carry it with me. Something’s lost when I borrow authority—just in time.

Google delivers facts. And facts are facts. But google doesn’t really deliver authorities. It co-opts them.

Maybe this is why Clay Shirky calls it “algorithmic authority.”

So if I were settling a bar bet, I might well say, “Yes, you can trust me. I found that claim by clicking on the top google search return.” The page on which I found the claim doesn’t enter my justification. “Dude, I googled it” might not work for very many justifications today, but Shirky’s quite right that there’s a “spectrum” and that “current forces seem set to push [algorithmic authority] further up the spectrum to an increasing number and variety of groups that regard these kinds of sources as authoritative.”

The authority belongs to the algorithm that found the source, not the source itself. Traffic flows along links out to the edges of the network, but authority pulls inward to the center.

* * *

And this is why it seems unfair for folks like Jeff Jarvis to make claims like, “The recipient of links is the party responsible for monetizing the audience they bring.”

News sites should certainly be trying to establish engagement and trust and authority with users who come from google. But insisting that this task is an imperative of the link economy seems to under-appreciate that algorithmic authority centralizes authority. Google pushes the traffic but keeps the trust—or much of it, anyhow.

Maybe the best answer to “What Would Google Do?” goes something like this: build an algorithm that detects and hoards an elusive and highly diffuse resource distributed across a network.

* * *

So Danny Sullivan can jump up and down and yell about WSJ and google and bing: “Do something. Anything. Please. Survive. But there’s one thing you shouldn’t do. Blame others for sending you visitors and not figuring out how to make money off of them.”

Sullivan can exhort newspapers to see google referrals as an opportunity. And they are. Moreover, I have little doubt that many newspapers should be optimizing their pages depending on the referrer, whether that’s google or facebook or twitter or stumbleupon or whatever. But let’s also remember that google changed links. A different kind of traffic now flows along them. And that traffic is fickler—and, yes, less valuable—than we might first imagine.

Picture this! The news graphed.

Slate made a curious addition to its site last week. Its heart is in the right place, and this is a good experiment, not a silly one.

I believe there’s extraordinary value to be unlocked by mapping a world of articles onto the social graph that they describe textually. I’ve written about graphing the news before, awkwardly describing a scheme here and geeking out over a pretty picture here. Yes, there is a funny thing about social networks: they often describe the real world even as they sometimes exist as worlds of their own. Done right, Slate’s graph could be an eye-opening mechanism for aggregating, sorting, discovering, following, sharing, and discussing the news.

But I don’t think Slate has quite done it right. In short, there’s too much information in the nodes, or the “dots,” as Slate calls them, and there’s too little information in the edges, or the links that connect up those dots. So permit me a little rambling.

We don’t really care all that much about the differences between a person, a group, and a company—not at this top level of navigation, anyhow. There are too many dots, and it’s too hard to keep all the colors straight. Of course, it’s not that hard, but if Slate’s project is fueled by bold ambition rather than fleeting plaudits, it’s just not easy enough. They’re actors. They’re newsmakers. They are entities that can be said to have a unified will or agency. And that’s enough. Make up a fancy blanket term, or just call them “people” and let smart, interested users figure out the details as they dive in.

Moreover, assistant editor Chris Wilson confuses his own term “topic.” At first, he writes, “News Dots visualizes the most recent topics in the news as a giant social network.” Also, “Like a human social network, the news tends to cluster around popular topics.” In this sense, a “topic” is an emergent property of Slate’s visualization. It’s the thing that becomes apparent to the pattern-seeking, sense-making eyes of users. So “one clump of dots might relate to a flavor-of-the-week tabloid story” or “might center on Afghanistan, Iraq, and the military.”

But then Wilson makes a subtle but ultimately very confusing shift. Explaining how to use the visualization, he writes, “click on a circle to see which stories mention that topic and which other topics it connects to in the network.” Problem is, these “topics” are what he has just called “subjects.” As emergent things, or “clumps,” his original “topics” can’t be clicked on. On the contrary, “subjects—represented by the circles below—are connected to one another,” and they’re what’s clickable.

To make matters worse, Wilson then, below the visualization, introduces more confusing terms, as he describes the role played by Open Calais (which is awesome). It “automatically ‘tags’ content with all the important keywords: people, places, companies, topics, and so forth.” The folks at Thomson Reuters didn’t invent the term “tag,” of course; it’s a long-standing if slippery term that I’m not even going to try to explain (because it really, really is just one of those cases in which “the meaning of the word presupposes our ability to use it”). At any rate, Wilson seems like he’s using it because Open Calais uses it. That’s fine, but a bit more clarity would be nice, given the soup of terms already around. And there’s really little excuse for dropping in the term “keywords” because, with his technical hat on, it’s just wrong.

I’m terribly sorry to drag you, dear reader, through that intensely boring mud puddle of terminology. But it’s for good reason, I think. Graphing the news is supposed to be intuitive. The human mind just gets it. A picture is worth a thousand words. Taken seriously, that notion is powerful. At a very optimistic level, it encourages us to let visualizations speak for themselves, stripped of language all too ready to mediate them. But at a basic level, it warns us writers not to trample all over information expressed graphically with a thousand textual words that add up to very little—or, worse, confusion.

But, yes, okay, about those prenominate edges, or the links that connect up those dots! I wrote about this long ago, and my intuition tells me that it doesn’t make sense to leave edges without their own substance. They need to express more than similarity; they can do more than connect like things. If they were to express ideas or events or locations while the nodes expressed topics, it seems to me that the picture would be much more powerful. Those ideas, events, or locations wouldn’t sit in light blue “Other” nodes, as Slate has them; instead they would directly link up the people and organizations. The social network would be more richly expressed. And topics, in Wilson’s original sense, wouldn’t be emergent “clumps” but actually obvious connections.

All in all, the visualization is “depressingly static,” as a friend of mine remarked. There may be two levels of zoom, but there’s no diving. There’s no surfing, no seeing a list of stories that relate to both topic x AND topic y. There’s no navigation, no browsing. There’s no search—and especially none involving interaction between the human and computer. There’s no news judgment beyond what newspaper editors originally add. And the corpus is small—tiny, really, representing only 500 articles each day, which isn’t so far from being a human-scale challenge. Visualizations hold the most promise for helping us grapple with truly internet-scale data sets—not 500 news articles a day but 500,000 news articles and blog posts.

It seems unfair to hold Slate to such a high standard, though. It’s very clear that they were shooting for something much more modest. All the same, maybe modesty isn’t what’s called for.

Curating the News Two Ways

There are two relatively new efforts to curate the best links from twitter. They’re both very simple tools, and their simplicity is powerful.

As with any good filter of information, there’s a simple, socially networked mechanism at play, and analyzing how that mechanism works helps us predict whether a tool will thrive. The name of the game is whether a mechanism fits social dynamics and harnesses self-interest but also protects against too much of it. (This kind of analysis has a storied history, btw.)

First came 40 twits, Dave Winer’s creation, with instances for himself, Jay Rosen, Nieman Lab, and others. It’s powered by clicks—but not just any clicks on any links. First Dave or Jay picks which links to tweet, and then you and I and everyone picks which links to click. There are two important layers there.

Like the others, Dave’s instance of 40 twits ranks his forty most recent tweets by the number of clicks on the links those tweets contain. (Along the way, retweets that keep the original short URL presumably count.) The result is a simple list of tweets with links. But if you’re reading Dave’s links, you know Dave likes the links by the simple fact that he tweeted them. So the real value added comes from how much you trust the folks who are following Dave to choose what’s interesting.

Note well, though, that those self-selected folks click before they read the thing to which the link points. They make some judgment based on the tweet’s snippet of text accompanying the links, but they may have been terribly, horribly disappointed by the results. Of course, this presumably doesn’t happen too too much since folks would just unfollow Dave in the longer term. In equilibrium, then, a click on a link roughly expresses both an interest generated by the snippet of text and a judgment about the long-term average quality of the pages to which Dave’s or Jay’s links point. Dave adds the data (the links), and his followers add the metadata (clicks reveal popularity and trust).
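The core of that mechanism fits in a few lines. A minimal sketch, with invented tweets and click counts (Dave’s actual implementation surely differs):

```python
# A minimal 40-twits-style ranker: take the most recent tweets and
# order them by clicks on the links they contain. Data invented.
recent_tweets = [
    {"text": "smart take on the link economy", "link": "http://short.example/a", "clicks": 112},
    {"text": "river-of-news demo worth a look", "link": "http://short.example/b", "clicks": 487},
    {"text": "notes on rssCloud", "link": "http://short.example/c", "clicks": 59},
]

# Dave supplies the data (the links); followers supply the metadata (the clicks).
ranked = sorted(recent_tweets, key=lambda tweet: tweet["clicks"], reverse=True)

for tweet in ranked[:40]:  # only the forty most recent tweets are in play
    print(tweet["clicks"], tweet["text"], tweet["link"])
```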

Are there features Dave could add? Or that anyone could add, once Dave releases the source? Sure there are. For one, it doesn’t have to be the case that all clicks are created equal. I’d like to know which of those clicks are from people I follow, for instance. I might also like to know which of those clicks are from people Dave follows or from people Jay follows. Their votes could count twice as much, for instance. This isn’t a democracy, after all; it’s a webapp.

But think a bit more abstractly. What we’re really saying is that someone’s position in the social graph—maybe relative to mine or yours or Dave’s—could weight their click. Maybe that weighting comes from tunkrank. Or maybe that weighting comes from something like it. For instance, if tunkrank indicates the chance that a random person will see a tweet, then I might be interested in the chance that some particular person will see a tweet. Maybe everyone could have a score based on the chance that their tweet will find its way to Dave or to me.
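Sketched as code, the change is tiny. The weighting function below is a pure stand-in—tunkrank or anything like it could slot in instead:

```python
# Not all clicks created equal: weight each click by who clicked.
# This weighting is hypothetical, a placeholder for tunkrank or the like.
def click_weight(clicker, my_follows, daves_follows):
    if clicker in my_follows or clicker in daves_follows:
        return 2.0  # people we follow count double
    return 1.0

def weighted_score(clickers, my_follows, daves_follows):
    return sum(click_weight(c, my_follows, daves_follows) for c in clickers)

# A link clicked by three strangers and one person I follow:
print(weighted_score(["ann", "bob", "cal", "jay"], {"jay"}, set()))  # 5.0
```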

Second came the Hourly Press, with an instance Lyn Headley calls “News about News.” It’s powered not by clicks—but by tweets. And, again, not just any tweets. Headley picked a set of six twitter users, called “editors,” including C.W. Anderson, Jay Rosen, and others. And those six follow very many “sources,” including one another. There are two important layers there, though they overlap in that “editors” are also “sources.”

“News about News,” a filter after my own heart, looks back twelve hours and ranks links both by how many times they appear in the tweets posted by a source and also by the “authority” of each source. Sources gain authority by having more editors follow them. “If three editors follow a source,” the site reads, “that source has an authority of 3” rather than just 1. So, in total, a link “receives a score equal to the number of sources it was cited by multiplied by their average authority.” Note that what this does, in effect, is rank links by how many times they appear before the eyes of an editor, assuming all editors are always on twitter.
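That rule is concrete enough to write down. A minimal sketch, with invented editors and sources:

```python
# Hourly Press scoring as the site describes it, with invented names.
# A source's authority is the number of editors who follow it.
editors_follow = {
    "editor_one": {"source_a", "source_b"},
    "editor_two": {"source_a", "source_c"},
    "editor_three": {"source_a"},
}

def authority(source):
    return sum(1 for followed in editors_follow.values() if source in followed)

def link_score(citing_sources):
    # Number of citing sources times their average authority.
    avg = sum(authority(s) for s in citing_sources) / len(citing_sources)
    return len(citing_sources) * avg

# A link tweeted by source_a (authority 3) and source_b (authority 1):
print(link_score(["source_a", "source_b"]))  # 2 * 2.0 = 4.0
```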

The result is a page of headlines and snippets, each flanked by a score and other statistics, like how many total sources tweeted the link and who was first to do so. If you’re already following the editors, as I am, you know the links they like by the simple fact that they tweeted them. But no editor need have tweeted any of the links for them to show up on the Hourly Press. Their role is just to look at the links—to spend their scarce time and energy following the best sources and unfollowing the rest. There are incredible stores of value locked up in twitter’s asymmetrical social graph, and the Hourly Press very elegantly taps them.

Note well, though, that editors choose to follow sources before those sources post the tweets on the Hourly Press. Editors may be terribly, horribly disappointed by the link that any given tweet contains. But again, this presumably doesn’t happen too too much since those editors would unfollow the offending sources. In equilibrium, then, a tweet by a source roughly expresses the source’s own interest and the editor’s judgment about the long-term average quality of the pages to which the source’s links point. Sources add the data (the links), and editors add the metadata (attention reveals popularity and trust).

There’s so much room for the Hourly Press to grow. Users could choose arbitrary editors and create pages of all kinds. There’s a tech page just waiting to happen, for instance. Robert Scoble, Marshall Kirkpatrick, and others would flip their lids to see themselves as editors—headliners passively curating wave after hourly wave of tweets.

But again, I think there’s a more abstract and useful way to think about this. Why only one level of sources? Why not count the sources of sources? Those further-out, or second-level, contributing sources might have considerably diminished “authority” relative to the first-level sources. But not everyone can be on twitter all the time. I’m not always around to retweet great links to my followers, the editors, so giving some small measure of authority to the folks I follow (reflecting the average chance of a retweet, e.g.) makes some sense.

But also, editors themselves could be more or less relatively important, so we could weight them differently, proportionally to the curatorial powers we take them to have. And those editors follow different numbers of sources. It means one thing when one user of twitter follows only fifty others, and it means something else altogether when another user follows five hundred. The first user is, on average, investing greater attention into each user followed, while the second is investing less. Again, this is the attention economics that twitter captures so elegantly and richly.
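One crude way to cash out that intuition, with purely illustrative numbers:

```python
# Attention is scarce: the fewer sources an editor follows, the more
# attention each follow represents, so each follow should count for more.
def attention_per_source(num_followed):
    return 1.0 / num_followed

print(attention_per_source(50))   # 0.02  -- a heavy, deliberate follow
print(attention_per_source(500))  # 0.002 -- a much lighter one
```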

But it’s important to circle back to an important observation. In both apps, there are two necessary groups. One is small, and one is large. One adds data, and the other adds metadata. The job of the builder of these apps is to arrive at a good filter of information—powered by a simple, socially networked mechanism. That power must come from some place, from some fact or other phenomenon. The trick, then, is choosing wisely. Social mechanisms that work locally often fail miserably globally, once there’s ample incentive to game the system, spam its users, or troll its community.

But not all filters need to work at massive scale either. Some are meant to be personal. 40 twits strikes me as fitting this mold. I love checking out Dave’s and Jay’s pages, making sure I didn’t miss anything, but if I thought tens of thousands of others were also doing the same, I might feel tempted to click a few extra times on links I want to promote. I don’t think a 40 twits app will work for a page with serious traffic. And, ultimately, that’s because it gets its metadata from the wrong source: clicks that anyone can contribute. If the clicks were somehow limited to coming from only a trusted group, or if the clicks weren’t clicks at all but attention, then maybe 40 twits could scale sky-high.

Hourly Press—which I don’t think is terribly well suited to being called a “newspaper,” because the moniker obscures more than it adds—doesn’t face this limitation. The fact that Hourly Press is powered by attention, which is inherently scarce, unlike clicks, is terribly powerful, just as the fact that twitter is powered by attention is terribly powerful. Writ large, both are incredibly wise, and they contain extraordinarily important lessons in the mechanism design of social filters of information.

The Wall Street Journal Isn’t Starbucks

I am at pains here not to seem like a big, gruesome troll. I am therefore going to avoid anything that could be even reasonably construed as an argument anything close to “information wants to be free.” That would give lazy opponents a too easy strawman, which is too bad, because what I’m really giving up, it seems, is arguments stemming from vanishingly small marginal costs. Oh well, such seems to be the price of admission to conversations about the future of news in which curmudgeons may lurk, which is certainly to say nothing at all about whether Mr. Murray is curmudgeonly. (It’s far too early in this post to poison that particular well.)

And so but my question is, “At a human level, why would @alansmurray push us into a paywall when he could avoid it?”

And Mr. Murray’s answer is, “I feel the same way about the folks at Starbucks.”

So let’s take a look at whether it’s an appropriate argument by analogy. Let’s see where it holds up and where it’s weak.

First, the folks at Starbucks rarely know their customers. No denigration to them at all—I’ve put in my time working the Dairy Queen in the mall’s food court—but they have a rote job. Starbucks the corporation may wish it hired pleasant workers, but in truth it doesn’t want to pay for them. Call me cynical or call me a member of Gen M, but low-level food-service workers are not in anything near even quasi-social relationships with buyers of coffee. It’s not their fault; they’re not really paid for their social graces or interpersonal talents. It’s a structural problem.

But Mr. Murray is in an altogether different space. He’s in a space quite literally defined by its human connections. There is little reason to be on twitter at all if it’s not to be social at some level.

And I can say, from my not-so-remote experience in food service, that when folks like the folks at Starbucks do find themselves in a social context with customers, they’re deeply tempted to give away product. When I was a kid, working the blizzard machine at the tender age of fourteen, I gave away way more product than I’d like to admit. There was too much soft-serve on my cones. There was too much candy or cookies whipped into my blizzards. And I also just gave it away. Maybe it was part of a swap with the pizza guys or the sandwich guys or the taco guys. Or maybe I just handed out blizzards to all my pals, when the boss wasn’t looking. This corporate-profit-be-damned attitude was rampant across my food court on the west side of Madison, Wisconsin, in the second half of the 1990s. It’s called a principal-agent problem, and although it’s not unreasonable for Mr. Murray, an agent, to side with his principal, his analogy hides the difference, pretending it doesn’t exist. (NB. I haven’t a clue whether Mr. Murray is an equity holder of News Corp.)

Also, it’s illegal to give away someone else’s coffee. As best I can tell, however, it’s perfectly within the bounds of the law to encode a long google link within the bit.ly URLs Mr. Murray uses. It’s not against the law for Mr. Murray to route us around inconvenience rather than push us into a paywall. In fact, the route-around is perfectly normal and appropriate. Again, there’s nothing wrong or shady or sketchy about routing around the Wall Street Journal’s paywall. You don’t have to be a hacker; you only have to be frugal and spend a few extra seconds and clicks.

But maybe it’s against the rules. Maybe Mr. Murray’s boss has decreed that WSJ employees shall not distribute links that route around the paywall. That doesn’t answer the question, however; it just passes the buck. For why would Mr. Murray’s boss—who is probably Robert Thomson, though I’m not certain—authorize or oblige Mr. Murray’s twittering of paywalled links if he hadn’t deemed it appropriate? Does Robert Thomson believe it makes business sense to twitter paywalled links?

Maybe he does. Maybe Mr. Thomson believes that, if Mr. Murray twittered route-around links to normally abridged articles, then fewer people would pay for subscriptions. And maybe fewer people would. It’s not impossible. Note well, however, that I’m not saying Mr. Murray should hurt his company’s finances by twittering route-around links to normally abridged articles. I’m saying that Mr. Murray might consider twittering only links to normally unabridged WSJ articles and other content around the web. But that would be odd, wouldn’t it? That would be awkward, silly even.

The Wall Street Journal leaves the side-door wide open, hidden only by slight obscurity, but charges at the front door. The Wall Street Journal is wide open. The fact that google indexes its content fully is dispositive—it’s all the proof we need. Let’s try a good old counterfactual conditional: Were the route-around not legitimate, then google would ding the WSJ’s pagerank. But google clearly hasn’t, so the route-around is legitimate.

The point requires an underline lest we succumb to a kind of anchoring cognitive bias. The paywall is not normative. You are not stealing content by refusing to be treated differently from google. In fact, the use of terms like “front door” and “side door” subtly, but completely inappropriately, encodes moral judgments into the discussion. In fact, there are—rather obviously, come to think of it—no “doors” at all. There are, in technical reality, only equal and alternative ways of reading the news. One’s convenient, and one’s not. One’s free, save the attention extracted by on-site advertising, and the other’s not. Maybe one cushions News Corp.’s bottom line, and maybe the other doesn’t. Maybe one supports civically important journalism, and maybe one doesn’t.

At bottom, though, there’s this. Mr. Murray is a human interacting socially with other humans on twitter, saying, “Hey, read this! Trust me: it’s good!” He gestures enthusiastically toward a bolted door, his back disguising an open gateway. “Please, ignore the actually convenient way to take my suggestion that you read this really interesting piece.” Mr. Murray would rather you remain ignorant of a loophole his paper exploits in order to maintain its googlejuice but keep its legacy subscribers. (Note that I’ve pointed out the loophole to several fellow mortgage traders, asking whether they would consider dropping their subscriptions. They all declined, saying they prefer to pay rather than take the time to make the additional clicks.)

I’m not saying it doesn’t make business sense. Businesses are free to capture whatever “thin value” they can, Umair Haque’s warnings notwithstanding. I am saying it doesn’t make human sense. I am saying that particular business practice looks silly, awkward, and disingenuous on twitter. And, ultimately, that’s Umair’s point. In a world of exploding media (PDF), we’re inevitably going to come to rely more on human connections, based on real trust, in order to make choices about how we allocate our attention. Mr. Murray’s cold business logic may work, but I suspect it won’t.

The Wall Street Journal’s Fancy SEO Tricks

I’m not an SEO expert. So if there were a group of SEO experts standing in the corner, I wouldn’t be among them. I would be among the mere mortals, who basically apply their common sense to how search engines work.

All that said by way of longwinded preamble, I did happen upon a fun realization this morning, in the spirit of “The internet routes around….”

The WSJ does this thing called cloaking. It essentially means they show google a different website from what they show you. The googlebot sees no paywall and waltzes right in. You hit a “subs only” paywall and get frustrated. Or maybe you pay for the subscription. Still, though, I doubt google pays the subscription, and so even if you see the whole website too, you see a costly website, whereas google sees a free website.
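Mechanically, the branching is trivial. Here’s a caricature—emphatically not the WSJ’s actual code—of a server deciding which version of an article to serve:

```python
# A caricature of cloaking: serve the crawler (or visitors arriving
# from google) the full article, and everyone else the teaser.
def render_article(article, user_agent, referrer, is_subscriber):
    looks_like_google = "Googlebot" in user_agent or "google.com" in referrer
    if looks_like_google or is_subscriber:
        return article["full_text"]
    return article["teaser"] + "\n[... subscribers only ...]"

article = {"full_text": "the whole story ...", "teaser": "the first two grafs ..."}
print(render_article(article, "Googlebot/2.1", "", False))  # full text
print(render_article(article, "Mozilla/5.0", "", False))    # paywall teaser
```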

The net result for the WSJ is that it cleverly gets its entire articles indexed, making them easier to find in google, but is able to maintain its paywall strategy. The net result for you and me is that it’s sometimes a pain in the neck to read the WSJ—which is too bad, because it’s a great read. It’s also a pain in the neck to share WSJ articles, as Deputy Managing Editor and Executive Editor Online @alansmurray’s sometimes plaintive “subs only” tweets evince.

But there’s a way around the mess. Actually, there are a couple ways around. One involves the hassle of teaching my mom how to waltz in like google does, and one involves me doing it for her. I prefer the latter.

But let’s rehearse the former first. Let’s say you hit the paywall. What do you do? You copy the headline, paste it into google, and hit enter. This works way better if you’ve got a search bar in your browser. Once you hit enter, you come to a search results page. You’ll know which link to click because it won’t be blue. Purple means you’ve been there before, so click that link. It will take you back to your article, but you’ll be behind the paywall, gazing at unabridged goodness. It’s not too hard, and the upside is terrific. That said, this procedure is much easier to perform than it is to explain, and the whole thing is pretty unintuitive, so my efforts to spread the word have led to little.

But there’s a better way, for the sharing, at least—a way that involves letting the geekiest among us assume the responsibility of being geeky. It’s natural, and you don’t have to rely on your mother’s ability to route around. Instead, once you decide you want to share a WSJ article, grab the really long URL that sits behind google’s link on its search returns page. It looks something like this:

http://www.google.com/url?sa=t&source=web&ct=res&cd=2&url=http%3A%2F%2Fonline.wsj.com%2Farticle%2FSB125134056143662707.html&ei=4oiWSouFJIuGlAez86GqDA&usg=AFQjCNEhRb_n571tSnJZrK-uru_0owFz9g&sig2=3rZbZnhOu11lo3bOUojDfA

Then push that horribly long URL—itself unfit for sharing in many contexts—into your favorite URL shortener. Send that shortened URL to your mom, or post it to twitter.
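For the curious, the magic of that long URL is just that the click bounces through google before landing at the WSJ, so the WSJ sees a google referral. A few lines of Python show the destination buried in its “url” parameter (using the example URL above):

```python
from urllib.parse import urlparse, parse_qs

google_link = (
    "http://www.google.com/url?sa=t&source=web&ct=res&cd=2"
    "&url=http%3A%2F%2Fonline.wsj.com%2Farticle%2FSB125134056143662707.html"
    "&ei=4oiWSouFJIuGlAez86GqDA&usg=AFQjCNEhRb_n571tSnJZrK-uru_0owFz9g"
    "&sig2=3rZbZnhOu11lo3bOUojDfA"
)

# The destination hides in the "url" query parameter; the click
# itself still bounces through google first, referrer and all.
params = parse_qs(urlparse(google_link).query)
print(params["url"][0])  # http://online.wsj.com/article/SB125134056143662707.html
```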

No one will ever know the article you’re sharing sits behind WSJ’s grayhat paywall.

LATE UPDATE: I’ve written a follow-up post prompted by @alansmurray’s response, comparing his situation to the one occupied by the folks at Starbucks.

LATER UPDATE: Alex Bennert from the WSJ points out that the WSJ’s fancy trick is in fact sponsored by google and called First Click Free. See her link below and my reply.

Parasites, readers, and value

The idea that some sites, like blogs and aggregators or whatever they’re called, are parasites on traditional news is interesting. It’s not crazy.

Those who run traditional news sites see aggregators benefiting from the resources of the traditional players and worry that they, the traditional players, may be hurt by that use. The worriers say, “Digital vampires” are “sucking the blood” out of traditional news players. (That’s a shibboleth, not a fair rehearsal of a smart argument.)

Some decry the notion that traditional players are hurt or harmed or injured by that use. The decriers say, “Wait! Vanquish your backwards self-pity because aggregators actually help you via the link economy.” (That’s a shibboleth, not a fair rehearsal of a smart argument.)

My sympathies lie deeply with the decriers. But I wonder whether they are right. I’m not sure they are—and, probably more importantly, I don’t see their argument convincing everyone it intends to, especially the worriers. So let me take a different tack.

What if it were the case that aggregators were parasites in the way the worriers worry about? But what if it were also the case that readers or users or whatever they’re called were actually better off as a result? What kind of parasite hurts one host in order to help another? And what might it mean if the help is greater than the hurt? What then?

Would we cheer the gains of the readers? Would we feel bad for the worriers? Would we despise the aggregators? And here’s the real question: Would we forsake the gains of readers in order to prevent the harm felt by worriers and brought about by aggregators?

I don’t know the answer to that question. For one, it’s really hard to imagine what we’d even mean by “gains of the readers.” Would we mean total utility people derive from news, however we define it? That seems empirically pretty impossible to measure. But could we use total traffic or pageviews of traditional news sites and blogs and aggregators as a proxy? But would all pageviews be created equal, as it were, or would we care about the loss of hard news if it were replaced by soft? How would we even know what blend of hard and soft news—serious and light-hearted, intellectual and whimsical—is ideal?

Or maybe we reject the paternalism inherent in claiming the right to answer the question about what blend of hard and soft news is ideal. Maybe all pageviews are created equal, or about equal, or about equal within some bounds of reasonability.

*    *    *

Blogs and aggregators or whatever they’re called as a group add value to the news on the web in a few ways. They add reporting, analysis, and context. They mobilize advocates; they amuse and entertain. They also decrease the uncertainty inherent in experience goods like the news—in other words, they add trust. They increase social capital.

There’s only so much attention in the world. The outfits that help allocate it efficiently—to content, communication, games, etc.—will win it, even if it’s at the expense of civically important news, ceteris paribus. Worriers worry because they see their slice of the pie decreasing. And maybe it is. Maybe the theory of the link economy is wrong! But maybe the pie’s changing in other ways too.

Maybe the slice owned by traditional news sites is decreasing while the size of the whole pie is increasing. Maybe users are better off. That would be good, right?

*    *    *

And yet we’re not one inch closer to persuading worriers worried about their own demise. No, what we have is possibly an argument that lets us look beyond their worries to a bigger picture in which it might well be the case that their worries will never go away till they themselves are gone. We may have freed ourselves from that responsibility, and maybe that’s important. After all, it’s unreasonable to blame a worrier for worrying about his own death. It’s folly to try to persuade a worrier to sacrifice herself.

