
The Wall Street Journal Isn’t Starbucks

I am at pains here not to seem like a big, gruesome troll. I am therefore going to avoid anything that could be even reasonably construed as an argument anything close to “information wants to be free.” That would give lazy opponents a too easy strawman, which is too bad, because what I’m really giving up, it seems, is arguments stemming from vanishingly small marginal costs. Oh well, such seems to be the price of admission to conversations about the future of news in which curmudgeons may lurk, which is certainly to say nothing at all about whether Mr. Murray is curmudgeonly. (It’s far too early in this post to poison that particular well.)

And so but my question is, “At a human level, why would @alansmurray push us into a paywall when he could avoid it?”

And Mr. Murray’s answer is, “I feel the same way about the folks at Starbucks.”

So let’s take a look at whether it’s an appropriate argument by analogy. Let’s see where it holds up and where it’s weak.

First, the folks at Starbucks rarely know their customers. No denigration to them at all—I’ve put in my time working the Dairy Queen in the mall’s food court—but they have a rote job. Starbucks the corporation may wish it hired pleasant workers, but in truth it doesn’t want to pay for them. Call me cynical or call me a member of Gen M, but low-level food-service workers are not in anything near even quasi-social relationships with buyers of coffee. It’s not their fault; they’re not really paid for their social graces or interpersonal talents. It’s a structural problem.

But Mr. Murray is in an altogether different space. He’s in a space quite literally defined by its human connections. There is little reason to be on twitter at all if it’s not to be social at some level.

And, I can say from my not-so-remote experience in food service that when folks like the folks at Starbucks do find themselves in a social context with customers, they’re deeply tempted to give away product. When I was a kid, working the blizzard machine at the tender age of fourteen, I gave away way more product than I’d like to admit. There was too much soft-serve on my cones. There was too much candy or cookies whipped into my blizzards. And I also just gave it away. Maybe it was part of a swap with the pizza guys or the sandwich guys or the taco guys. Or maybe I just handed out blizzards to all my pals, when the boss wasn’t looking. This corporate-profit-be-damned attitude was rampant across my food court on the west side of Madison, Wisconsin, in the second half of the 1990s. It’s called a principal-agent problem, and although it’s not unreasonable for Mr. Murray, an agent, to side with his principal, his analogy hides the difference, pretending it doesn’t exist. (NB. I haven’t a clue whether Mr. Murray is an equity holder of News Corp.)

Also, it’s illegal to give away someone else’s coffee. As best I can tell, however, it’s perfectly within the bounds of the law to encode a long google link within the bit.ly URLs Mr. Murray uses. It’s not against the law for Mr. Murray to route us around inconvenience rather than push us into a paywall. In fact, the route-around is perfectly normal and appropriate. Again, there’s nothing wrong or shady or sketchy about routing around the Wall Street Journal’s paywall. You don’t have to be a hacker; you only have to be frugal and spend a few extra seconds and clicks.

But maybe it’s against the rules. Maybe Mr. Murray’s boss has decreed that WSJ employees shall not distribute links that route around the paywall. That doesn’t answer the question, however; it just passes the buck. For why would Mr. Murray’s boss—who is probably Robert Thomson, though I’m not certain—authorize or oblige Mr. Murray’s twittering of paywalled links if he hadn’t deemed it appropriate? Does Robert Thomson believe it makes business sense to twitter paywalled links?

Maybe he does. Maybe Mr. Thomson believes that, if Mr. Murray twittered route-around links to normally abridged articles, then fewer people would pay for subscriptions. And maybe fewer people would. It’s not impossible. Note well, however, that I’m not saying Mr. Murray should hurt his company’s finances by twittering route-around links to normally abridged articles. I’m saying that Mr. Murray might consider twittering only links to normally unabridged WSJ articles and other content around the web. But that would be odd, wouldn’t it? That would be awkward, silly even.

The Wall Street Journal leaves the side-door wide open, hidden only by slight obscurity, but charges at the front door. The Wall Street Journal is wide open. The fact that google indexes its content fully is dispositive—it’s all the proof we need. Let’s try a good old counterfactual conditional: Were the route-around not legitimate, then google would ding the WSJ’s pagerank. But google clearly hasn’t, so the route-around is legitimate.

The point requires an underline lest we succumb to a kind of anchoring cognitive bias. The paywall is not normative. You are not stealing content by refusing to be treated differently from google. In fact, the use of terms like “front door” and “side door” subtly, but completely inappropriately, encodes moral judgments into the discussion. In fact, there are—rather obviously, come to think of it—no “doors” at all. There are, in technical reality, only equal and alternative ways of reading the news. One’s convenient, and one’s not. One’s free, save the attention extracted by on-site advertising, and the other’s not. Maybe one cushions News Corp.’s bottom line, and maybe the other doesn’t. Maybe one supports civically important journalism, and maybe one doesn’t.

At bottom, though, there’s this. Mr. Murray is a human interacting socially with other humans on twitter, saying, “Hey, read this! Trust me: it’s good!” He gestures enthusiastically toward a bolted door, his back disguising an open gateway. “Please, ignore the actually convenient way to take my suggestion that you read this really interesting piece.” Mr. Murray would rather you remain ignorant of a loophole his paper exploits in order to maintain its googlejuice but keep its legacy subscribers. (Note that I’ve pointed out the loophole to several fellow mortgage traders, asking whether they would consider dropping their subscriptions. They all declined, saying they prefer to pay rather than take the time to make the additional clicks.)

I’m not saying it doesn’t make business sense. Businesses are free to capture whatever “thin value” they can, Umair Haque’s warnings notwithstanding. I am saying it doesn’t make human sense. I am saying that particular business practice looks silly, awkward, and disingenuous on twitter. And, ultimately, that’s Umair’s point. In a world of exploding media (PDF), we’re inevitably going to come to rely more on human connections, based on real trust, in order to make choices about how we allocate our attention. Mr. Murray’s cold business logic may work, but I suspect it won’t.

The Wall Street Journal’s Fancy SEO Tricks

I’m not an SEO expert. So if there were a group of SEO experts standing in the corner, I wouldn’t be among them. I would be among the mere mortals, who basically apply their common sense to how search engines work.

All that said by way of longwinded preamble, I did happen upon a fun realization this morning, in the spirit of “The internet routes around….”

The WSJ does this thing called cloaking. It essentially means they show Google a different website from what they show you. The googlebot sees no paywall and waltzes right in. You hit a “subs only” paywall and get frustrated. Or maybe you pay for the subscription. Still, though, I doubt google pays for a subscription, so even if you see the whole website too, you see a costly website, whereas google sees a free one.
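
If you want a picture of the mechanics, here’s a minimal sketch in Python. To be clear, I haven’t seen the WSJ’s code, so this is just the crudest possible version of the general technique; a real implementation would verify the crawler’s IP address rather than trust the user-agent header, among other things.

    # A minimal sketch of cloaking, not the WSJ's actual code: serve the
    # crawler the full article and serve everyone else the abridged,
    # "subs only" version. Trusting the user-agent header is the crudest
    # possible check; real sites verify the bot's IP address too.
    def serve_article(request_headers, full_text, teaser_length=300):
        user_agent = request_headers.get("User-Agent", "")
        if "Googlebot" in user_agent:
            return full_text  # the googlebot waltzes right in
        return full_text[:teaser_length] + "... (subscribers only)"  # you hit the wall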

The net result for the WSJ is that it cleverly gets its entire articles indexed, making them easier to find in google, but is able to maintain its paywall strategy. The net result for you and me is that it’s sometimes a pain in the neck to read the WSJ—which is too bad, because it’s a great read. It’s also a pain in the neck to share WSJ articles, as Deputy Managing Editor and Executive Editor Online @alansmurray’s sometimes plaintive “subs only” tweets evince.

But there’s a way around the mess. Actually, there are a couple ways around. One involves the hassle of teaching my mom how to waltz in like google does, and one involves me doing it for her. I prefer the latter.

But let’s rehearse the former first. Let’s say you hit the paywall. What do you do? You copy the headline, paste it into google, and hit enter. This works way better if you’ve got a search bar in your browser. Once you hit enter, you come to a search results page. You’ll know which link to click because it won’t be blue. Purple means you’ve been there before, so click that link. It will take you back to your article, but you’ll be behind the paywall, gazing at unabridged goodness. It’s not too hard, and the upside is terrific. That said, this procedure is much easier to perform than it is to explain, and the whole thing is pretty unintuitive, so my efforts to spread the word have led to little.

But there’s a better way, for the sharing, at least—a way that involves letting the geekiest among us assume the responsibility of being geeky. It’s natural, and you don’t have to rely on your mother’s ability to route around. Instead, once you decide you want to share a WSJ article, grab the really long URL that sits behind google’s link on its search returns page. It looks something like this:

http://www.google.com/url?sa=t&source=web&ct=res&cd=2&url=http%3A%2F%2Fonline.wsj.com%2Farticle%2FSB125134056143662707.html&ei=4oiWSouFJIuGlAez86GqDA&usg=AFQjCNEhRb_n571tSnJZrK-uru_0owFz9g&sig2=3rZbZnhOu11lo3bOUojDfA

Then push that horribly long URL—itself unfit for sharing in many contexts—into your favorite URL shortener. Send that shortened URL to your mom, or post it to twitter.
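
If you’re curious what’s actually inside that monster link, the article’s address is sitting right there in its url parameter, percent-encoded. Here’s a quick sketch in Python, standard library only, that decodes the example above just to show the anatomy:

    # Decode a google redirect link to reveal the WSJ URL embedded in its
    # "url" query parameter. The link is the example from this post.
    from urllib.parse import urlparse, parse_qs

    google_link = (
        "http://www.google.com/url?sa=t&source=web&ct=res&cd=2"
        "&url=http%3A%2F%2Fonline.wsj.com%2Farticle%2FSB125134056143662707.html"
        "&ei=4oiWSouFJIuGlAez86GqDA&usg=AFQjCNEhRb_n571tSnJZrK-uru_0owFz9g"
        "&sig2=3rZbZnhOu11lo3bOUojDfA"
    )

    params = parse_qs(urlparse(google_link).query)
    print(params["url"][0])  # http://online.wsj.com/article/SB125134056143662707.html

Note that the trick is to shorten the google link itself, not the WSJ link buried inside it: as best I can tell, it’s the click arriving by way of google that opens the side door.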

No one will ever know the article you’re sharing sits behind WSJ’s grayhat paywall.

LATE UPDATE: I’ve written a follow-up post prompted by @alansmurray’s response, comparing his situation to the one occupied by the folks at Starbucks.

LATER UPDATE: Alex Bennert from the WSJ points out that the WSJ’s fancy trick is in fact sponsored by google and called First Click Free. See her link below and my reply.

Parasites, readers, and value

The idea that some sites, like blogs and aggregators or whatever they’re called, are parasites on traditional news is interesting. It’s not crazy.

Those who run traditional news sites see aggregators benefiting from the resources of the traditional players and worry that they, the traditional players, may be hurt by that use. The worriers say, “Digital vampires” are “sucking the blood” out of traditional news players. (That’s a shibboleth, not a fair rehearsal of a smart argument.)

Some decry the notion that traditional players are hurt or harmed or injured by that use. The decriers say, “Wait! Vanquish your backwards self-pity because aggregators actually help you via the link economy.” (That’s a shibboleth, not a fair rehearsal of a smart argument.)

My sympathies lie deeply with the decriers. But I wonder whether they are right. I’m not sure they are—and, probably more importantly, I don’t see their argument convincing everyone it intends to, especially the worriers. So let me take a different tack.

What if it were the case that aggregators were parasites in the way the worriers worry about? But what if it were also the case that readers or users or whatever they’re called were actually better off as a result? What kind of parasite hurts one host in order to help another? And what might it mean if the help is greater than the hurt? What then?

Would we cheer the gains of the readers? Would we feel bad for the worriers? Would we despise the aggregators? And here’s the real question: Would we forsake the gains of readers in order to prevent the harm felt by worriers and brought about by aggregators?

I don’t know the answer to that question. For one, it’s really hard to imagine what we’d even mean by “gains of the readers.” Would we mean total utility people derive from news, however we define it? That seems empirically pretty impossible to measure. But could we use total traffic or pageviews of traditional news sites and blogs and aggregators as a proxy? But would all pageviews be created equal, as it were, or would we care about the loss of hard news if it were replaced by soft? How would we even know what blend of hard and soft news—serious and light-hearted, intellectual and whimsical—is ideal?

Or maybe we reject the paternalism inherent in claiming the right to answer the question about what blend of hard and soft news is ideal. Maybe all pageviews are created equal, or about equal, or about equal within some bounds of reasonability.

*    *    *

Blogs and aggregators or whatever they’re called as a group add value to the news on the web in a few ways. They add reporting, analysis, and context. They mobilize advocates; they amuse and entertain. They also decrease the uncertainty inherent in experience goods like the news—in other words, they add trust. They increase social capital.

There’s only so much attention in the world. The outfits that help allocate it efficiently—to content, communication, games, etc.—will win it, even if it’s at the expense of civically important news, ceteris paribus. Worriers worry because they see their slice of the pie decreasing. And maybe it is. Maybe the theory of the link economy is wrong! But maybe the pie’s changing in other ways too.

Maybe the slice owned by traditional news sites is decreasing while the size of the whole pie is increasing. Maybe users are better off. That would be good, right?

*    *    *

And yet we’re not one inch closer to persuading worriers worried about their own demise. No, what we have is possibly an argument that lets us look beyond their worries to a bigger picture in which it might well be the case that their worries will never go away till they themselves are gone. We may have freed ourselves from that responsibility, and maybe that’s important. After all, it’s unreasonable to blame a worrier for worrying about his own death. It’s folly to try to persuade a worrier to sacrifice herself.

Calling bottoms, calling tops, calling danger!

Cody Brown just wrote a piece bashing twitter, getting some decent play on a day twitter and other sites took a bruising.

He lodged a few complaints:

  • Its 140 character restriction is a blunt instrument. The site does not reflect the potential or nuance in which a public can speak to itself online.
  • Usernames are inconsistent and confusing. Twitter is mobbed by impersonators.
  • Twitter will either perpetually be simple insofar as its millions of users will have to hack the service to reflect their own values or it will roll the dice on a focus, put the site through chronic redesigns, and risk a mass user exodus.

I don’t know what to say. I just disagree. I mean, twitter’s not perfect, but it’s so open and promising that many very smart people are building it out, cleaning up messes, solving problems, adding value. I guess if folks are really concerned about why I disagree, we can flesh it out in the comments.

Here’s what I tweeted yesterday:

By @CodyBrown, a twitter story of wildly exaggerated problems and wildly vague promises of infrastructure and elegance http://bit.ly/18FxmI

And here’s what I tweeted this morning:

Really, @muratny? I’m sorry, but I think @CodyBrown’s piece is overblown and overwrought. http://bit.ly/dwdG9 And I’m sympathetic!

For me, this is one of those tough cases when you don’t want to blow your credibility—whatever you may have—by sounding shrill or acerbic. But let’s call a spade a spade: the piece is mostly vapid. Its reasoning just doesn’t follow.

And when it does make good points, they’re hardly original. Who isn’t gazing deeply into twitter, wondering which of its deep properties is driving its success now and will drive it in the future? Content delivered by streams defined in terms of (mostly) people? Or asymmetrical relationships? Overlapping publics? The collapse of the distinction between discourse and content? And who isn’t gazing deeply into twitter, looking for what will follow? Brown’s answer: something real-time and more elegant like facebook, picking up Dave Winer’s idea and trading in Jeff Jarvis’s words. Okay, great, thanks for the insight! Like facebook! Elegant!

They say calling a market bottom is like trying to catch a falling knife. It’s dangerous, and you never really know. Calling a top on twitter is like trying to predict with the naked eye when a rocket’s upward arc will turn back toward earth. What goes up must come down, right? We all know it’s bound to happen at some point, or maybe not, but no one really knows, and, if the rocket’s hurtling your way, insisting that it will fall is just dangerous. Or just silly.

Links as Property? Coase to the Rescue!

The conservative scholar and Seventh Circuit federal appeals court judge Richard Posner has brought to bear on the news business a flavor of analysis that has won him wide renown in legal and intellectual circles.

I’m not a scholar, but it’s a kind of analysis that I happen to love, called law and economics. Others, like Erick Schonfeld of TechCrunch, have hit back with sound free-speech arguments. On their own, I find those basically persuasive.

But I don’t think Posner can win his argument on his own terms. Even if we remain within the dry, dismal realm of law and economics, Posner’s suggestion that a radically stronger version of copyright might save the traditional news companies and let them carry on as they have is wrong.

Posner’s claim:

Expanding copyright law to bar online access to copyrighted materials without the copyright holder’s consent, or to bar linking to or paraphrasing copyrighted materials without the copyright holder’s consent, might be necessary to keep free riding on content financed by online newspapers from so impairing the incentive to create costly news-gathering operations that news services like Reuters and the Associated Press would become the only professional, nongovernmental sources of news and opinion.

What would happen if, tomorrow, we woke up and couldn’t link or paraphrase without the consent of the copyright holder? Let’s game it out.

First, big news companies might rejoice, but their joy would be short-lived. For soon, all across the interwebs, on smaller websites and services, announcements would begin to pop up. Sites like TechCrunch and Talking Points Memo, to name just a couple, would start screaming as loudly as they can, “Please! Link here. Have your discourse about my content if you can’t have it about theirs! We hereby offer blanket permission to link and to paraphrase to anyone and everyone.”

In other words, as George Frink wrote on twitter, the plan “would give enormous competitive advantage to sources granting blanket copyright permission & for all fair-use links.”

As Posner’s beloved Coase theorem holds, “bargaining will lead to an efficient outcome regardless of the initial allocation of property rights.” Now, there are important caveats like zero transaction costs, but again from Wikipedia: “While the exact definition of the Coase theorem remains unsettled, there are two issues or claims within the theorem: the results will be efficient and the results in terms of resource allocation will be the same regardless of initial assignments of rights/liabilities.”

So then, of course, medium-sized sites would look in envy at smaller sites’ success. It wouldn’t be long before they too joined in on the fun, grabbing traffic that the bigger, perhaps more prestigious news companies formerly lapped up.

Now, consumers of news would still visit, e.g., the New York Times to get some news. But most of their news—and the opportunity to talk about it and feel like they’re part of the conversation—would come from sites that just aren’t as good as the New York Times. So the world’s aggregate utility would be lower. Which is sad.

But, alas, it wouldn’t be long, once again, before bigger news companies felt left out too. Soon they, too, in most cases, would post the same permissions. Little surprise, right? A few mostly misguided holdouts notwithstanding, the Internet would revert right back to where it started. Free links are the equilibrium—the Nash equilibrium.
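
To make the equilibrium claim concrete, here’s a toy two-publisher game in Python. The payoff numbers are invented; all they encode is the story above, namely that allowing free links grabs traffic, especially when your rival forbids them.

    # A toy 2x2 game: each publisher chooses to ALLOW or FORBID free links.
    # Payoffs are made up for illustration; they just say that allowing
    # links wins traffic, most of all when the other publisher forbids them.
    payoffs = {  # (row_choice, col_choice) -> (row_payoff, col_payoff)
        ("allow", "allow"): (3, 3),
        ("allow", "forbid"): (5, 1),
        ("forbid", "allow"): (1, 5),
        ("forbid", "forbid"): (2, 2),
    }

    def is_nash(row, col):
        r, c = payoffs[(row, col)]
        row_stays = all(payoffs[(alt, col)][0] <= r for alt in ("allow", "forbid"))
        col_stays = all(payoffs[(row, alt)][1] <= c for alt in ("allow", "forbid"))
        return row_stays and col_stays

    print([cell for cell in payoffs if is_nash(*cell)])  # [('allow', 'allow')]

Allowing strictly dominates forbidding for both players, so everyone allowing free links is the unique Nash equilibrium—no matter who holds which rights at the start.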

Free links will happen. It doesn’t matter whether we start with them and refuse to change because we know better or whether we start without them and quickly all freely alienate our property rights not to be subject to them. I think Posner might actually agree with this analysis. I’d hope that he’d be open to the empirical possibility anyhow.

Free market for the news? Please.

A free market for news would work about as well as a free market for kidneys. So it’s pretty seriously unhelpful to reach for the invisible hand to solve the news business’s problems, as this FT editorial does.

Of all the defects news possesses, here’s but a short, partial list:

Goods with characteristics like these are prone to market failures of all kinds. So the FT should probably feel free to exercise a bit more creativity.

A twig in the eye for sure.

Not by Links Alone

At this unthinkably late hour, many of even the most recalcitrant journalists and newsy curmudgeons have given themselves over, painfully, to the fundamentally important fact that the economics of abundance now govern their world.

For many, of course, stemming that tide is still paramount. Their goal, as David Carr writes, is to squelch the “new competition for ads and minds.” Thus Walter Isaacson’s “E-ZPass digital wallet” and Alan Mutter’s “Original Sin.” Thus Michael Moran’s obnoxious “NOPEC.” Thus Journalism Online. And, of course, thus we have David Simon’s recent call for Congress to “consider relaxing certain anti-trust prohibitions” or this call in the Washington Post to rework fair use. I wish them all good luck, but mostly good night.

There are others, though, who think it’s great that the Internet and Google are opening up the news to competition. In fact, “Google is good” strikes me as nearly orthodox among the basically Internet-savvy set of news talkers. Marissa Mayer crows about how Google delivers newspapers’ Web sites one billion clicks a month, and Arianna Huffington insists that the future of news is to be found in a “linked economy” and “search engines” like Google.

In this narrative, Google’s the great leveler, ushering the world of journalism out of the dark, dank ages of monopoly and into the light, bright days of competition, where all news articles and blog posts stand on their own pagerank before the multitude of users who judge with their links and their clicks. Its ablest defender is probably Jeff Jarvis, author of What Would Google Do? Jarvis was relatively early in pointing out that “Google commodifies the world’s content by making it all available on a level playing field in its search.” In that and other posts at Buzz Machine, his widely read blog, Jarvis allows that Google “can make life difficult” but insists, “that’s not Google’s fault.” The reverence for Google is thick: “The smart guys are hiring search-engine optimization experts and trying to figure out how to get more people to their stuff thanks to Google.”

But defenders of Google’s influence on the broader market for news and newspapers themselves make a striking error in believing that the market for content is competitive. That belief is wrong—not just a little bit or on the margin, but fundamentally, and importantly, wrong.

Which is not to say that news publishers aren’t competing for readers’ eyeballs and attention. Publishers compete with one another all day long, every day—with some local exceptions, the news has always been competitive like a race, and is now more competitive like a market than ever before. But the market for that news—the place where consumers decide what to read, paying with their attention—is not competitive. Google may well be the great leveler, but down to how low a field?

To be very clear, this is far from a neo-classical purist’s critique that picks nits by abusing uselessly theoretical definitions. I am not a purist, an economist, or a jerk. This is reality, as best as I know it. Nevertheless, to say that the market for content is competitive is just to misunderstand what a competitive market actually entails. The market for news content as it currently stands, with Google in the middle, is a profoundly blurry, deeply uncompetitive space.

*    *    *

“The difficulty of distinguishing good quality from bad is inherent in the business world,” Nobel laureate George Akerlof wrote in the kicker of his most famous paper, published in 1970. “This may indeed explain many economic institutions and may in fact be one of the more important aspects of uncertainty.”

Akerlof fired an early shot in a scholarly marathon to study the effects of asymmetric information in markets. What do parties to a potential transaction do when they know different sets of facts? Maybe that seems like an obvious question, but economists in the middle of the twentieth century had been pretty busy worrying about perfecting complicated models despite their grossly simplistic assumptions.

So Akerlof set about to write about how markets can fail when some of those assumptions turn out to be bunk. The assumption he tested first, in “The Market for ‘Lemons,’” was certainty, and he showed that when sellers know more about the goods being sold than the buyers do, sellers abuse their privileged position and buyers leave the market.
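
The unraveling is easy to see with a toy model, so let me invent some numbers purely for illustration. Say quality is uniform between 0 and 1, a seller will part with a car for a price at least equal to its quality, and a buyer values any car at 1.5 times its quality but can’t observe quality before buying:

    # Akerlof's lemons market with made-up numbers: buyers pay the expected
    # value (to them) of the cars actually for sale, and sellers whose cars
    # are worth more than that price exit. The market unravels to nothing.
    def market_price(rounds=20):
        ceiling = 1.0  # only cars with quality <= ceiling remain for sale
        price = 0.0
        for _ in range(rounds):
            avg_quality = ceiling / 2.0  # mean of uniform [0, ceiling]
            price = 1.5 * avg_quality    # buyers' expected value of what's left
            ceiling = price              # cars better than this leave the market
        return price

    print(market_price())  # ~0.003 and still falling: the market has unraveled

Each round, the price buyers will pay falls short of what the best remaining cars are worth to their sellers, so the best sellers exit, average quality drops, and the price falls further, all the way to zero. The buyers leave; the market fails.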

Writing in the same year, the economist Phillip Nelson studied the differences between what he called “search goods” and “experience goods.” Search goods and experience goods express a certain kind of asymmetry. For search goods, consumers can overcome the asymmetry before the point of purchase by doing their homework, while for experience goods, consumers must take their time and invest.

A pair of pants, for instance, is a search good—you can try before you buy, and shop around for the pants that fit you best. An apple, on the other hand, is an experience good—you don’t know whether you’ll like one until you consume it, and you can’t really try before you buy.

News articles are experience goods. Just as with an apple, you need to consume the story, reading the article or watching the video or so on, in order to judge its quality. “Stories can vary in length, accuracy, style of presentation, and focus,” writes economist James Hamilton in All the News That’s Fit to Sell. “For a given day’s events, widely divergent news products are offered to answer the questions of who, what, where, when, and why.” We can’t know which one’s best till we’ve read them all, and who’s got time for that?

Moreover, a multitude of subjective editorial decisions produce the news. Each reporter’s practices and habits influence what’s news and what’s not. Their learned methods, their assigned beats, and even their inverted pyramids shape what we read and how. Reporters’ and editors’ tastes, their histories, or their cultures matter, as do their professional ethics. Each article of news is a nuanced human document—situated aesthetically, historically, culturally, and ethically.

Ultimately, the news is afflicted with the problem of being an experience good more than even apples are. At least Granny Smiths don’t vary wildly from farmer to farmer or from produce bin to produce bin. Sure, some may be organic, while others are conventional. One may be tarter or crispier than another, but tremendous differences from the mean are very unlikely. With the news, though, it’s hard even to think of what the mean might be. It may seem obvious, but articles, essays, and reports are complex products of complex writerly psychologies.

For a long time, however, as readers, we were unaware of these nuances of production. That was, in some sense, the upshot: our experience of this journalism was relatively uncomplicated. This profound lack of context mattered much less.

Call it the myth of objectivity maybe, but what NYU professor Jay Rosen has labeled the “mask of professional distance” meant that we didn’t have much of a chance to bother with a whole world of complexities. Because everyone usually wore a mask, and because everyone’s mask looked about the same, we ignored—indeed, we were largely necessarily ignorant of—all the unique faces.

For a long time, therefore, the orthodox goal of American newspapers virtually everywhere was news that really wasn’t an experience good. When news existed only on paper, it hardly mattered what news was, because we had so few seemingly monochrome choices about what to read. We returned to the same newspapers and reporters behind the same masks over and over again, and through that repetition, we came subtly to understand the meaning and implications of their limited degrees of “length, accuracy, style of presentation, and focus.”

As a result, we often grew to love our newspaper—or to love to hate it. But even if we didn’t like our newspaper, it was ours, and we accepted it, surrendering our affection either way, even begrudgingly. The world of news was just much simpler, a more homogeneous, predictable place—there were fewer thorny questions, fewer observable choices. There was less risk by design. Our news was simpler, or it seemed to be, and we had little choice but to become familiar with it anyhow. One benefit of the View from Nowhere, after all, is that basically everyone adopted it—that it basically became a standard, reducing risk.

But a funny thing happened in this cloistered world. Because it seemed only natural, we didn’t realize the accidental nature of the understanding and affection between readers and their newspapers. If, as the economists would have it, the cost of a thing is what we’ve sacrificed in order to achieve it, then our understanding and affection were free. We gave nothing up for them—for there was scarcely another alternative. As a result, both readers and publishers took those things for granted. This point is important because publishers are still taking those things for granted, assuming that all people of good faith still appreciate and love all the good things that a newspaper puts on offer.

*    *    *

But when our informational options explode, we can plainly, and sometimes painfully, see that our newspapers aren’t everything. Different newspapers are better at answering different questions, and some answers—some as simple as what we should talk about at work tomorrow—don’t come from newspapers at all. So we go hunting on the Internet. So we gather. So we Google.

We have now spent about a decade Googling. We have spent years indulging in information, and they have been wonderful years. We are overawed by our ability to answer questions online. Wikipedia has helped immensely in our efforts to answer those questions, but pagerank elevated even it. Newspapers constitute just one kind of Web site to have plunged into the scrum of search engine optimization. Everyone’s hungry for links and clicks.

And Google represents the Internet at large for two reasons. For one, the engine largely structures our experience of the overall vehicle. More importantly, though, Google’s organization of the Internet changes the Internet itself. The Search Engine Marketing Professional Organization estimates, in this PDF report, that North American spending on organic SEO in 2008 was about $1.5 billion. But that number is surely just the tip of the iceberg. Google wields massive power over the shape and structure of the Internet’s general landscape of Web pages, Web applications, and the links among them. Virtually no one builds even a semi-serious Web site without considering whether it will be indexed optimally. For journalism, most of the time, the effects are either irrelevant or benign.

But think about Marissa Mayer’s Senate testimony about the “living story.” Newspaper Web sites, she said, “frequently publish several articles on the same topic, sometimes with identical or closely related content.” Because those similar pages share links from around the Web, neither one has the pagerank that a single one would have. Mayer would have news Web sites structure their content more like Wikipedia: “Consider how the authoritativeness of news articles might grow if an evolving story were published under a permanent, single URL as a living, changing, updating entity.”

Setting aside for the moment whatever merits Mayer’s idea might have, imagine the broader implications. She’s encouraging newspapers to change not just their marketing or distribution strategies but their journalism because Google doesn’t have an algorithm smart enough to determine that they should share the “authoritativeness.”

At Talking Points Memo, Josh Marshall’s style of following a story over a string of blog posts, poking and prodding an issue from multiple angles, publishing those posts in a stream, and letting the story grow incrementally, cumulatively, might be disadvantaged because those posts are, naturally, found at different URLs. His posts would compete for pagerank.
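
Mayer’s point, and the potential problem for Marshall, is mechanical, and a toy PageRank calculation shows it. Here’s a minimal power-iteration sketch in Python with an invented graph: in one world the story lives at a single permanent URL that collects both inbound links, and in the other the same story is split across two URLs that divide them.

    # Toy PageRank by power iteration. The damping factor is the textbook
    # 0.85; the pages and links are invented for illustration.
    def pagerank(graph, damping=0.85, iters=50):
        pages = list(graph)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iters):
            new = {p: (1.0 - damping) / n for p in pages}
            for p, outlinks in graph.items():
                targets = outlinks or pages  # a dangling page spreads rank evenly
                for q in targets:
                    new[q] += damping * rank[p] / len(targets)
            rank = new
        return rank

    # One permanent URL: both bloggers link to the same story.
    merged = {"blog_a": ["story"], "blog_b": ["story"], "story": []}

    # The same story split across two URLs, dividing the inbound links.
    split = {"blog_a": ["story_v1"], "blog_b": ["story_v2"],
             "story_v1": [], "story_v2": []}

    print(pagerank(merged)["story"])  # ~0.57
    print(pagerank(split)["story_v1"], pagerank(split)["story_v2"])  # ~0.32 each

The consolidated page winds up ranking well above either fragment, and that’s all “competing for pagerank” means.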

And maybe it would be better for journalism if bloggers adopted the “living story” model of reporting. Maybe journalism schools should start teaching it. Or maybe not—maybe there is something important about what the structure of content means for context. The point here isn’t to offer a substantive answer to this question, but rather to point out that Mayer seems unaware of the question in the first place. It’s natural that Mayer would think that what’s good for Google is good for Internet users at large. For most domestic Internet users, after all, Google, which serves about two-thirds of all searches, essentially is their homepage for news.

But most news articles, of course, simply aren’t like entries in an encyclopedia. An article of news—in both senses of the term—is substantially deeper than the facts it contains. An article of news, a human document, means substantially more to us than its literal words—or the pageranked bag of words that Google more or less regards it as.

Google can shine no small amount of light on whether we want to read an article of news. And, importantly, Google’s great at telling you when others have found an article of news to be valuable. But the tastes of anonymous crowds—of everyone—are not terribly good at determining whether we want to read some particular article of news, particularly situated, among all the very many alternatives, each particularly situated unto itself.

Maybe it all comes down to a battle between whether Google encourages “hit-and-run” visits or “qualified leads.” I don’t doubt that searchers from Google often stick around after they alight on a page. But I doubt they stick around sufficiently often. In that sense, I think Daniel Tunkelang is precisely correct: “Google’s approach to content aggregation and search encourages people to see news…through a very narrow lens in which it’s hard to tell things apart. The result is ultimately self-fulfilling: it becomes more important to publications to invest in search engine optimization than to create more valuable content.”

*    *    *

The future-of-news doomsayers are so often wrong. A lot of what they said at Kerry’s hearing was wrong. It’s woefully wrongheaded to call Google parasitic simply because the Internet without it would be a distinctly worse place. There would be, I suspect, seriously fewer net pageviews for news. And so it’s easy to think that they’re wrong about everything—because it seems that they fundamentally misunderstand the Internet.

But they don’t hold a monopoly on misunderstanding. “When Google News lists one of our stories in a prominent position,” writes Henry Blodget, “we don’t wail and moan about those sleazy thieves at Google. We shout, ‘Yeah, baby,’ and start high-fiving all around.” To Blodget, “Google is advertising our stories for free.”

But life is about alternatives. There’s what is, and there’s what could be. And sometimes what could be is better than what is—sometimes realistically so. So however misguided some news executives may have been or may still be about their paywalls and buyouts, they also sense that Google’s approach to the Web can’t reproduce the important connection the news once had with readers. Google just doesn’t fit layered, subtle, multi-dimensional products—experience goods—like articles of serious journalism. Because news is an experience good, we need really good recommendations about whether we’re going to enjoy it. And the Google-centered link economy just won’t do. It doesn’t add quite enough value. We need to know more about the news before we sink our time into reading it than pagerank can tell us. We need the news organized not by links alone.

What we need is a search experience that lets us discover the news in ways that fit why we actually care about it. We need a search experience built around concretely identifiable sources and writers. We need a search experience built around our friends and, lest we dwell too snugly in our own comfort zones, other expert readers we trust. These are all people—and their reputations or degrees of authority matter to us in much the same ways.

We need a search experience built around beats and topics that are concrete—not hierarchical, but miscellaneous and semantically well defined. We need a search experience built around dates, events, and locations. We need a search experience that’s multi-faceted and persistent, a stream of news. Ultimately, we need a powerful, flexible search experience that merges automatization and human judgment—that is sensitive to the very particular and personal reasons we care about news in the first place.

The people at Senator Kerry’s hearing last week seemed either to want to dam the river and let nothing through or to whip its flow up into a tidal wave. But the real problem is that they’re both talking about the wrong river. News has changed its course, to be sure, so in most cases, dams are moot at best. At the same time, though, chasing links and clicks, with everyone pouring scarce resources into an arms race of pagerank while aggregators direct traffic and skim a few page views, isn’t sufficiently imaginative either.

UPDATE: This post originally slipped out the door before it was fully dressed. Embarrassing, yes. My apologies to those who read the original draft of this thing and were frustrated by the unfinished sentences and goofy notes to self, and my thanks to those who read it all the same.

