Archive for the 'possible worlds' Category

When the Future Is Unlike the Past

First, pick an arbitrary point in time. Pick a year, any year A.D. up till your birth year or the year you turned twelve—this is your year. If you can, conjure up some idealized image of your year in your mind’s eye—nothing terribly analytic for now, just something holistic.



Second, pick a year a few years back from your year, and pick a year a few years forward from it. And, if you would, conjure up the same kind of images for these two years. Again, these need not be exacting—think of them as blurry, semi-liminal collections of facts and themes and truths and so on. You don’t need to memorize every detail, in other words—just some vague but substantive handle on each year.

Third, get ready to perform some mildly unusual comparisons—not impossible, just a little strange, a little odd, but highly interesting, I promise. Try to imagine the difference between your year and the year a few years back. And while you’re at it, try to imagine the difference between your year and the year a few years forward too.

It’s a funny thing to try to imagine differences in the world over time. But that’s why we’re starting with this pretty easy exercise. It’s pretty easy to imagine the differences between things that are pretty similar—whether those things are different kinds of four-legged mammalian farm animals or different states of the world a few years apart.

Still, it may not be obvious to you what the differences are (1) between the world in your year and the world a few years before that or (2) between the world in your year and the world a few years after that. Given that you only care about the holistic view, you might be tempted to conclude that there are no substantive differences looking forward or backward. That’s fine—maybe there aren’t any.

But if there are differences within (1) or within (2), you should then be able to compare those differences themselves. Think of it this way. I grew up in Madison, Wis., which is about two hours from Chicago, Ill. Thus one difference between Madison and Chicago is location, and we can measure that difference in distance quantified by how long the drive is. Madison is about five hours from Minneapolis, Minn. Put this way, it’s natural to compare the differences between Madison-Chicago (two hours) and Madison-Minneapolis (five hours). That difference—that difference between differences—is three hours.

We can do roughly the same thing for your year looking back and your year looking forward. So you might say the difference between your year and the one a few years before is “small.” And you might say the difference between your year and the one a few years after is “small.” If so, the difference between the differences is zero. If the differences are “small” and “medium,” though, the difference between the differences might be, e.g., “modest” or even just “small.” And so on.
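To make the comparison concrete, here is a minimal sketch in Python—the ordinal scale and its labels are my own illustrative assumptions, nothing rigorous:

```python
# Illustrative only: map fuzzy difference labels onto a rough ordinal scale,
# then compare the backward-looking and forward-looking differences.
SCALE = {"zero": 0, "small": 1, "modest": 2, "medium": 3, "large": 4}

def diff_of_diffs(backward: str, forward: str) -> int:
    """Forward-looking minus backward-looking difference, on the scale."""
    return SCALE[forward] - SCALE[backward]

# Madison-Chicago vs. Madison-Minneapolis, measured in drive-time hours:
madison_chicago, madison_minneapolis = 2, 5
print(madison_minneapolis - madison_chicago)  # 3 hours between the two differences

print(diff_of_diffs("small", "small"))   # 0: the usual, reassuring case
print(diff_of_diffs("small", "medium"))  # 2 on this made-up scale
```

When the result is large and positive, the future is unlike the past in this essay’s sense.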

Whew. So the reason all this is important is that the differences between looking back a bit and looking forward a bit are usually zero or small. Usually, change is constant. Our intuitions are largely built on this premise. In fact, we get terribly confused and distraught and sometimes incoherent when the difference looking forward is much greater than the difference looking back.

Consider the notion of precedent in legal philosophy. Scholars argue that precedents as reasons make the law more predictable—litigants can better predict the outcomes of lawsuits if they have some handle on the kind of reasoning judges will bring to bear. But why should lawsuits be predictable in the first place? That assumption reflects a deeper belief that similarities between previous cases and present or future cases are relevant at all. It reflects a belief that lawyers and judges can emphasize the relevant similar facts and ignore the relevant distinguishing facts. But why should lawyers and judges be able to make value judgments about which similar and distinguishing facts matter across different cases? It can only make sense to conform to precedent inasmuch as it makes sense—on average, I suppose, though even that is tricky given fat-tail risk—to take history as a competent guide to the future. Mere differences in time mean nothing.

But what if something so essential to our ability to reason did mean something? What if mere differences in time mattered legally? Or what if some seemingly arbitrary variable about the world mattered? What if, for example, we thought that events that obtained under a full moon were different from events that obtained under a new moon? We’d have at least two entirely different sets of legal precedents—one for full moons, one for new moons, and maybe one for other times. The legal world would be turned upside-down, inside-out. It would be nearly unrecognizable next to what we actually have. The arguments that work under a full moon in our actual world almost certainly wouldn’t work in this crazy possible world. Not only would they almost certainly fail to persuade, but they would almost certainly seem deluded or insane—probably as insane as arguments highly sensitive to the lunar calendar would seem in the actual world.

The good news is that upside-down, inside-out changes are rare. The bad news is that their rarity doesn’t equip us well for when they inevitably crop up. We forget that there are any such changes, especially when they’re not attended by frighteningly salient facts, like nuclear weapons. It is easier, in other words, to wrap our minds around how thoroughgoing theories like mutually assured destruction change our reasoning, forcing us to question very basic assumptions, if we can at the same time point to devastating bombs and mushroom clouds.

But for the news business, there is nothing so salient. Layoffs and newspaper closings amid a wider economic downturn just don’t cut the mustard. Sometimes the differences looking forward a decade dwarf the differences looking back a decade.

When that happens, when the near history no longer contains implicit clues about the near future, we are unmoored, and we look to the differences between differences as a partial guide—or just to come to terms with our own imaginations, just to maintain some footing amid upheaval. In order to grasp some sense of how sweeping the next decade of changes in the news business will be, we’ve got to keep inching back through time and technology till we arrive at the gut feeling that the differences are equal. When the differences looking forward and the differences looking backward are equal—idealized, vaguely but substantively—we can look to see what is common between the past world and the future one. And those common facts or truths are the only facts and truths we can carry forward as precedent, more or less unquestioned.

As with many businesses facing disruption from the internet, it is far from clear that there is anything common between what the news business will see a decade forward and what it saw even a century back. This is a muddled exercise in which we accomplish little more than calibrating our intuitions about what to discard and what to keep. But there is so much to discard that our intuitions are critical.

My gut tells me this, nuclear holocaust notwithstanding: It is no longer reasonable to carry facts true about the history of the news business into the future without detouring through first principles about journalism and why it’s important. Nothing true of journalism in a decade’s time will turn out also to have been true of journalism at any time in the past except those facts that will always be true.

All else is gone—that is what Shirky means when he writes, “There is no general model for newspapers to replace the one the internet just broke.” All else is gone, but first principles remain. And grasping first principles is why it’s imperative that “we shift our attention from ‘save newspapers’ to ‘save society.’” But don’t take “unthinkable” too literally; the future is thinkable. Shirky’s is a terribly useful figure of speech, but it is false. We cannot know or predict what the world will look like, but we can and should conduct experiments thoughtfully, not wildly. If we clear our minds of accumulated implicit assumptions about the newspaper business cloaked as timeless verities of journalism, we can arrive at a clean slate of first principles and begin to rebuild.

Thinking the Unthinkable Parable of the Future of News

Most of us humans profoundly exaggerate the powers of our imagination. Indeed, I submit that we’re out-and-out horrible at imagining possible worlds even modestly different from our own.

Ask yourself, “Seriously, what would the world be like had John McCain been elected president of the United States?” If you’re American, your answer is not at all easy to come by. You’ve got a whole host of possibilities and their possible ramifications to think about.

In some ways, the country would be a very different place. For one, many of those who were thrilled at Obama’s election would be depressed, while many of the rest would be elated. All kinds of conversations between friends and colleagues would be dramatically different—and not only those about politics. Of course, all manner of domestic policy would be different, as would international politics.

But in other ways, the country would be nearly identical. We’d still have a credit crisis, generally. We’d still drive on the right-hand side of the road. We’d very likely still have fifty states. We would still be Christians, Jews, Quakers, Muslims, and atheists in roughly the same numbers.

We’d still have a mostly temperate climate, with cities, towns, and rural communities scattered throughout. We’d still have a basically functional economy, with poor, middling, wealthy, and super-wealthy folks for whom it works unevenly. Our taxes might be somewhat higher or lower, but we’d still have a populace that generally believes in paying its taxes. I’m risking a good flaming, but I submit that, in our hypothetical John McCain America, the rule of law would basically still prevail.

Up would still be up, and down would still be down. We’d still have hipsters. Red would still be a different color from blue. Time would still march forward, not backward. It would still make no sense to hear your pal assert, “It is the case that A and not-A.” And so on. Some things never change. Or they seem not to, anyway.

* * *

It’s kind of like DNA. We humans are radically different from one another. We’re tall and short, weak and strong, bright and dull. We’re creative and analytic, fast and slow. I’m quite certain I can do little justice to the bewildering diversity among us.

And yet we share some overwhelming percentage of DNA. We all, generally, have brains, lungs, and bones. We eat and sleep. Even the dullards among us laugh from time to time, privately. We all, generally, recoil at morbidity and fear pain. Exceptions tend to prove the rule here, to the extent that we consider someone who never laughs alien and someone who doesn’t flinch at the prospect of death superhuman.

Now consider yourself: you. Changing around your DNA within the tiny fraction that makes you unique—i.e., that you don’t share with other humans—is akin to America electing John McCain. You’d pretty much be a different country—maybe better, maybe worse, depending on your views and whether they’re wrong—but at least you’re still here on earth. At least you’d still have a circulatory system and a central nervous system. You’ve got a home. You have friends, if you’re nice, though they’re probably different friends. You still have or had parents. If you were born here, you speak some dialect of English, though you may say “pop” instead of “soda.” You might not be as attractive or witty, but you know what beauty is and you have some grasp on the levity of brevity. If you’re the right age and able-bodied, you’ve got a job. In short, your hypothetical life is very different, but it’s still roughly normal. Because these changes are relatively modest, they’re said to be the stuff of close possible worlds.

Mucking around with the rest of the DNA that you do share with others is like imagining the Soviets won the Cold War or that cold fusion was perfected years ago. It might be good or bad—utopian, dystopian, something odd in between, or something wildly outlandish—but, most importantly, it’s very likely simply radically different. It’s tough to imagine possible worlds like this. Not only would you pretty much be a different country, as above, but it’s not even clear that you would still be on earth—or on an earth in a form anything like what actually prevails today.

Your whole biological nature could be different—no blood, no bones, no brain. You might not be carbon-based. You might be part of a hive-mind. All manner of good and bad science-fictional possibilities abound. Because these changes are severe, they’re said to be the stuff of distant possible worlds.

* * *

Since it’s very hard to imagine such far-out possible worlds, good storytellers have developed rhetorical devices to help us broaden our view. They put us in the mood, push us toward an open mind, offer us the widest frame.

We need the widest frame in order to think about the future of news. Recently, @cshirky and @jayrosen_nyu have offered us just that.

Shirky asks us, more than mildly paradoxically, to consider an unthinkable scenario. At first, he puts his scenario on offer as a hypothetical possible world, someone else’s nightmare, suggesting just that we peer into its void as they do, vicariously.

“As these ideas were articulated, there was intense debate about the merits of various scenarios. … In all this conversation, there was one scenario that was widely regarded as unthinkable, a scenario that didn’t get much discussion in the nation’s newsrooms, for the obvious reason.”

Oh, and what might that nightmare look like to them? What possibilities do they see? Well—still in the mind’s eye of newspaper executives—it “unfolded something like this….”

Then Shirky warns us about being closed-minded. “Revolutions create a curious inversion of perception,” he writes. When the Soviets win the Cold War or when you wake up in the Matrix, the world is sharply different, and concocting explanations about how it’s actually the same doesn’t work. “When reality is labeled unthinkable, it creates a kind of sickness in an industry.”

Only then, after hundreds of words of set-up, do we get the punch: “One of the effects on the newspapers is that many of their most passionate defenders are unable, even now, to plan for a world in which the industry they knew is visibly going away.” Your world is going away.

Shirky takes a stroll through some history, pointing out a previous occasion when the future broke from the past, and comes back with devastation. “When someone demands to know how we are going to replace newspapers, they are really demanding to be told that we are not living through a revolution. … They are demanding to be lied to.”

Then, ultimately, we get the distinction here. These newspaper folks know something’s got to give, but they’re still only willing to imagine close possible worlds. They can handle John McCain. They can handle being taller or shorter, leaner or fatter.

From the perspective of industrial newspapering—in which “the core problem publishing solves” is “the incredible difficulty, complexity, and expense of making something available to the public”—the internet might as well be Jupiter. It is a distant possible world.

“Society doesn’t need newspapers. What we need is journalism.”

* * *

Imagine a world, if you feel sufficiently creative, without newspapers. And imagine a world without newspaper companies—or with companies whose DNA used to be newspapers but is now seriously different. Still, though, imagine that world needs journalism nevertheless.

In other words, imagine a world that is distant but not so very, very distant that we don’t need journalism. That makes our creative job easier. In fact, Jay Rosen reminds us that not all is lost. He intends to give us a head start in imagining exactly this possible world—in which newspapers are out but journalism is still very, very in.

Whereas Shirky jerks and drags our imaginations to think the unthinkable, Rosen encourages us to look inward, contemplatively, offering a simple parable of a fishing village. He does it with @davewiner in a podcast the two have lately taken to recording on Sundays, and it’s worth taking in as a whole:

I like to try to understand things at their origins. When I think about news and the collection of news, I try to go back and imagine the conditions in human affairs and human settlements that cause people to need news that is collected by somebody, as an occupation.

If you think about a small fishing village, with several hundred people, around a harbor, there’s news every day. But it is communicated naturally, as it were. That is, people going about their day will find out when a new ship is in, and at the end of the day, they’ll know what’s happened in that town. There doesn’t have to be an articulated social function of news gatherer because people do it themselves.

If you imagine that town expanding in its social scale so that it’s not just a fishing village anymore, but a big metropolis, you realize that, at a certain point, the only way you can have news about your own environment—not a distant land, but your own environment—is if somebody actually collects it. The need for news is intimately related to the scale on which we live. As we live on a bigger and bigger scale—not just metropolitan but a national and global scale—our needs for news grow because we are not self-informing.

But, if tools of awareness grow, like we had when we were a fishing village, then the idea of the self-informing public, which was operable at a certain scale, is perhaps operable again. And so if you understand news not as an industrial product or the handiwork of a profession, but as intimately related to human settlement and the social scale people live on, we’ll be able to navigate better in the future of news.

When they gave birth to the United States, a huge experiment in scale, they imagined that part of the reason that you could have a voted-in government over a territory stretching from New England to Georgia was the press, which gave us ways of connecting. So when we try to reboot news, don’t think about rebooting the Cleveland Plain Dealer. Go back to the origins of why people need news in the first place and your own experience with news hunger.

Yes, go back to your own personal news hunger. Shirky admirably yanks our imaginations out of their slumber. That’s the real merit of his piece. Now, however, think not of the more or less terrifying abyss Shirky points at, yelling, “Wake up!” Instead, for now, consider yourself and your community. Consider that we are just groups of people, overlapping social circles composing different human settlements, conducting our own affairs. Consider that we always live on some scale. Sometimes it’s big, as now. Other times it’s small, as it was long ago, and as it is in Rosen’s parable.

And remember that the scale on which we live matters relative to our everyday “tools of awareness.” Better tools mean a self-informing public at larger scale.

* * *

Consider that sources, authors, and readers are all people. Consider that the internet gives us tools so that one person might be all three. When Winer says, “Sources go direct,” he’s pointing out that one person can be both the source and the author of a story. But we’re readers and authors simultaneously too.

Consider that people are busy, that our time is scarce. We make decisions about allocating our attention on the margin. In a fishing village, we’d love it if all our friends could find a central place to gather in order to swap stories at the same time, efficiently. All our friends, yes, but probably not all our fellow villagers, some of whom we don’t like or don’t trust. In other words, we like to aggregate our news, but mostly among our friends and trusted experts.

Consider that people like hearing the news from their friends or from experts whose judgment they trust on particular matters. We like to trust the news and want to engage with its storytellers to cultivate that trust. To the extent that we can only get a piece of news from a fellow villager we dislike, we appreciate it when a trusted friend verifies the facts or shores up the analysis. So, too, do our friends appreciate it when we return the favor.

Consider that social relationships are sometimes one-way. We often have less time for others than they have for us. This is especially the case for widely trusted experts on particular matters. This is the general asymmetrical social stuff of celebrity, which is surely an ancient notion, inherent in even the simplest of villages. As society scales, moreover, consider the natural—or, potentially, the morally optimal—distribution of those asymmetries of attention.

Consider that people like the news new. We want to hear what’s happening now, not what happened last week or yesterday or an hour ago. But we also want our facts to be true and our analysis to be sound, so we’re willing to wait for real verification and for wise interpretation. We’re imperfect, though, so sometimes excess haste or caution will blind us to better priorities.

Consider that we mostly don’t really care whether we get our news as a written note or as a verbalized recounting. We care about the topics and events the story discusses. We care about the people, businesses, and other organizations it mentions—the “newsmakers,” as it were. Politics exists even in modest fishing villages, and we care about the political persuasions of our storytellers. That knowledge helps us bring the appropriate level of trust to our use of their story.

Consider that people are social. We like to gossip about trite matters, and we like to debate serious affairs. We like to consume the news, sure, but we also like to spread it around and add our own perspective. We also like to use the news as a medium for our wills—as a kind of substratum for own meaning. We like to be heard, respected, admired, and loved. We also like to be paid.

Consider all of this and more. Consider how distant the relevant possible worlds may be, and then consider all of them in that sphere. Consider the Cluetrain too. Consider that people, governments, and corporations will always be able to profit from secrecy. Even if we come to demand, and even very naturally expect, transparency as a broad ethical matter, powerful operators will have an incentive to fake it. That seems true even of modest fishing villages, in which a tribal or quasi-political elder may benefit from offering false reasons for important decisions. Consider that people spreading the news about powerful operators make friends with them in so doing. There’s potentially less baked-in profit motive.

Consider that information is an experience good. Consider that it’s a public good. Consider (again) that news is non-durable. Consider that one person’s report of a story has very close substitutes in others’ reports on the same story or nearly equally interesting stories.

Some things change. And some things stay the same the more everything else changes around them. So, most of all, consider dropping the fabulist notion that the future will look very much like the past. The time has gone when we can offer arguments aimed at the future but grounded in the present and the past. Aside from what we share with distant worlds—including my considerations above—the tastes, habits, and patterns of readers, journalists, and newspaper companies are moot. Your world is departing, and a fishing village is arriving.

Taking Twitter Seriously: What if it really were a really big deal?

Maybe @davewiner does wring his hands too violently about twitter’s recommended users. Maybe it is too early to worry about unintended consequences.

But maybe not. Either way, if we take a slightly different view of his worries, I think we can take them to heart much more easily. If we can shift tenses, it might help.

When @davewiner talks about twitter, he talks about it in the present tense. Let’s try another: a kind of conditional tense. Let’s try a counterfactual conditional: Would this thing work if it were the case that…?

After all, to detect a problem in any system, we’ve got to imagine that system working at full scale. Whether it’s a database, a message board, or a social network like twitter, we’ve got to imagine its ideal—when everyone’s using it for any purpose that’s difficult to police cheaply.

When @davewiner worries about twitter’s editorial adventures, as he does here and here in conversation with @jayrosen_nyu, he’s taking it extraordinarily seriously. It’s a great compliment, I think. He sees a twitter that’s currently critical to very many people. That’s the present tense.

OK, so some of us don’t yet share that view. But I bet we can offer our own great compliment and imagine very many people using it—or maybe even virtually everyone using it. At the end of every day, I think many of us have less and less trouble imagining that.

So, if virtually everyone were using twitter—if it really were the “Future News System of the World,” again, as difficult as that might be to imagine—we might really insist that it refrain from the editorial business. If twitter really were that big, then it really would be critical. And if it really were critical, its closed nature would probably violate all kinds of praise-Murphy rules about leaving our data, our businesses, and our lives in the hands of a for-profit company, its secret business plan, and its fallible servers.

We’re not casting aspersions on what most everyone regards as an essentially fair and just company. Of course, that goes for @me too; I love twitter.

This is simply why we have the notion of “common carriage.” For centuries, we’ve demanded ultra-reliable commodity transportation services. We’ve been so insistent on the reliability and the even-handedness of transportation that we’ve often saddled the carrier with the de facto burden of liability for losses, which raises its price to us. This is why we care about network neutrality.

If we really take twitter seriously, then we think it’s possible that twitter could be the next big deal. The trouble is that, at scale, big deals attract all manner of mischief—with potentially everyone using them for all things selfish and spammy.

If twitter could be the next big deal, we need to start thinking about safeguarding it now.

PS. That’s what tunkrank, which was conceived by @dtunkelang, is for.

Do journalists have enough time for trust?

Steve Outing’s reaction was representative of the responses to my proposal, which is just a deepening and an extension of Mitch Ratcliffe’s idea:

I’ve thought about that idea too, but I can’t [get] past the problem of the journalists you (reader/user) want to interact with will mostly be too busy to participate. Some do interact, but it’s more because they want to and feel some passion for engaging directly with their fans and followers and readers. Many journalists I know resist the idea because they’re “already too busy.” (Bad attitude, IMO, but not easy to change.)

At one level, Steve is obviously correct: no one wants more work, and to the extent that my proposal involves interaction between the journalist and the user, there’s more work. Fine. No one’s arguing that it wouldn’t be different, unfamiliar, tough, risky, etc.

But at another level, the journalist would be paid, potentially a big chunk of his income, by offering special access to some users. Is it really the case that journalists think of themselves as so busy that they can’t imagine a (potentially very) different way of doing business?

The actually good argument one might offer against my proposal is this: “Look, journalists only have so many hours in the day. Users will pay them for some things that don’t require additional work, but users will also expect some of their time directly. That means a journalist either loses sleep or has to cut back on reporting. Lost sleep isn’t an option. And although cutting back on reporting might seem plausible, it’s really not, because it would dilute the other side of reporters’ value proposition to their users so much that their users wouldn’t really want to pay enough anymore. The market’s just not there.”

Of course, I happen not to think that argument has much purchase. Arguing about how busy with reporting journalists are now fails to locate my proposition in the relevant context, which could look more or less radically different from now. (It’s all about the counterfactual conditional.)

The amount of reporting per journalist might decrease, but that’s not a reason in itself that the aggregate amount of reporting would decrease. There could simply be more reporters! So if the average reporter had to reallocate twenty percent of her time to reader interaction, a twenty-five percent increase in reporters would fill the gap.
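As a sanity check on that arithmetic (a sketch; the twenty-percent figure is just the hypothetical above):

```python
# If each reporter reallocates `fraction` of her time from reporting to
# reader interaction, compute the proportional headcount increase needed
# to keep the aggregate amount of reporting constant.
def required_headcount_increase(fraction: float) -> float:
    remaining = 1.0 - fraction       # reporting time left per reporter
    return 1.0 / remaining - 1.0     # growth needed in number of reporters

print(round(required_headcount_increase(0.20), 2))  # 0.25, i.e., 25% more reporters
```

The gap grows nonlinearly: reallocating half of every reporter’s time would require doubling the headcount, not adding fifty percent.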

Of course, the whole proposition is that there’s a real human value proposition—trust between creator and user—that Kachingle’s kind of charity simply lacks. It’s certainly true that my proposition would be a big flop in the market if it turned out that users were only willing to pay creators for interaction that amounted to BFFs, which would prevent creators from actually creating. But it’s not at all clear that users wouldn’t tolerate somewhat less reporting in exchange for access to and some connection with creators, especially in light of the fact that trust is sorely lacking between journalists and readers today.

The upside to a bit less reporting and a bit more trust-building is that society as a whole might have more regard for journalism. The hope is that journalism experiences a net gain in readership and mindshare.

PS. This post is repurposed from a comment left at Steve Outing’s further thoughts on Kachingle and voluntary monthly content payments, which he does not want you or Alan Mutter to compare to a tip jar. That comment is awaiting moderation at the time this post is being published.

Innovation, Gladwell, and Those Nasty Things Called Patents

Gladwell’s article is a good read, and his prose is as tightly threaded as ever, even if it sometimes colors itself purple come paragraph end. Gladwell is the prince of grand one-liner metaphors that bring down the curtain on his sections’ endings, and I don’t mind that one bit.

Stop reading this, and go read that. A duplicative summary is a waste.

I read Mike Masnick regularly and consistently enjoy his writing on the nature of innovation and how it happens. But I’m not sure about one of his main criticisms of Gladwell’s piece—that, “if these ideas are the natural progression, almost guaranteed to be discovered by someone sooner or later, why do we give a monopoly on these ideas to a single discoverer?”

So let me make a surprise leap to the defense of Gladwell. Let’s suppose, probably correctly, that the purpose of intellectual property is to stimulate innovation.

Sometimes this premise gets misunderstood. Lessig explains how this misunderstanding works in Free Culture:

Creative work has value; whenever I use, or take, or build upon the creative work of others, I am taking from them something of value. Whenever I take something of value from someone else, I should have their permission. The taking of something of value from someone else without permission is wrong. It is a form of piracy. … This view runs deep within the current debates. It is what NYU law professor Rochelle Dreyfuss criticizes as the “if value, then right” theory of creative property—if there is value, then someone must have a right to that value.

This concept of “if value, then right” gets causation exactly backwards. In these terms, the statement formulated correctly would look more like “iff right, then value.”

What does that mean? It means that there should be this set of intellectual property rights if and only if there would not be the value without them. Of course, this works on the margin and actually looks something like this: The actual world should be the possible world with some set of intellectual property rights if and only if that set of rights means greater value than what’s in some other possible world that’s identical but for lacking that set of rights.

In other words, we ask, Is the value of the stimulated innovation greater than the deadweight loss from monopolistic price-setting of the innovation that would have happened anyhow?
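That comparison can be made concrete with a toy sketch. The function names and the numbers below are hypothetical placeholders, not real estimates—the point is only the shape of the test across the two possible worlds.

```python
def net_gain_from_rights(stimulated_value, deadweight_loss):
    """Value created only because the rights exist, minus the monopoly
    deadweight loss on innovation that would have happened anyhow."""
    return stimulated_value - deadweight_loss

def rights_justified(stimulated_value, deadweight_loss):
    # "Iff right, then value": keep the set of rights only if the world
    # with them is worth more than the otherwise-identical world without.
    return net_gain_from_rights(stimulated_value, deadweight_loss) > 0

# Hypothetical figures, in arbitrary units of social value:
print(rights_justified(stimulated_value=100, deadweight_loss=60))  # True
print(rights_justified(stimulated_value=40, deadweight_loss=60))   # False
```

The hard empirical work, of course, is in estimating those two quantities at all, which is exactly what the counterfactual framing demands.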

“Multiplicity,” however, seems to make for stark evidence against the need to stimulate. It seems to say that the value would be around even in the absence of the right: “The sheer number of multiples could mean only one thing,” Gladwell writes. “Scientific discoveries must, in some sense, be inevitable. They must be in the air, products of the intellectual climate of a specific time and place.”

Multiplicity says that possible worlds that are like ours but that don’t have something like our set of intellectual property rights probably have about as much value—if not more.

But why is this so? The mere fact that people in our actual world tend to have the same ideas at the same time surely doesn’t imply as much. So what if nine people invented the steam engine and two invented the telephone at once?

They were all presumably looking to create value for the world and keep a part of it for themselves—probably enough, on average, that they could consider themselves rich. Alexander Graham Bell and Elisha Gray worked within the same legal incentive structure. As Gladwell writes, “the two filed notice with the Patent Office in Washington, D.C., on the same day.”

Even if the ideas themselves are inevitable, it’s not at all clear that the pace at which we as humans arrive at them is also inevitable.

Do patents speed up that pace? How much? Enough, on average, to compensate for the deadweight loss they create at any given time-slice T compared to T-1?

And what about pure scientific discoveries? Gladwell quotes William Ogburn and Dorothy Thomas, who compiled one of the first lists of “multiples”: “The law of the conservation of energy, so significant in science and philosophy, was formulated four times in 1847…. They had all been anticipated by Robert Mayer in 1842.”

That’s all fine and good, but none of them was chasing a Nobel. Had the prize been around in the middle of the nineteenth century, is it possible that one of them, or someone else, would have endured a few more sleepless nights, hustled a bit more, and put together the law earlier?

Of course it’s possible. But the way to know about whether intellectual property regimes, or fancy academic prizes, stimulate creative and innovative thinking must be comparative. It must be measured across possible worlds.

In the end, Masnick helps me make my point: “Yet, if Gladwell’s premise is correct (and there’s plenty of evidence included in the article),” then inventors’ “efforts shouldn’t be seen as a big deal. After all,” he continues, if it weren’t for some inventors, “others would very likely come up with the same thing sooner or later.”

I take no position on whether our set of intellectual property rights is ultimately helpful (though I doubt it is). The point is that there probably is some set of rights that is helpful—precisely because other inventors really would come up with the ideas later without it.

Right and Wrong on Attention

Wrong: Our attention spans are hopelessly on the fritz.

Right: The internet has brought our world more information choices. Sure, we give the average choice less attention because it’s competing with a larger number of alternatives. But we abandon reading one newspaper article not because it bores us to death but because an alternative article in some alternative publication presents itself as more interesting.

So we may read less of your newspaper article before we decide that another one looks better. The switch results from a marginal cost-benefit judgment between alternatives, not from a stunted conclusion that whatever is in front of us is beneath us.

In other words, a fancy counterfactual: Imagine a possible world much like the one in which you posit that people still have healthy attention spans—a world circa 1958, for example, fifty years ago. Now imagine that your possible world is different from the actual 1958-world only insofar as the people who inhabit it have as many (analog) sources of information at their fingertips as we do (analog and digital) sources of information. I claim that the people in your world give their average information choice about as much attention, not much more and not much less, as we do ours in our actual 2008-world.

Thanks to Jeff Jarvis for inspiring this post.
