As Paul Miller explains, “Members can access background material on stories, submit additional resources of their own, and comment on the content they find.” The central unit of organization is the “topic,” which both the BX staff and members of the community can create. Miller writes that he gets “the impression that topics tend to be approved” if they’re “in-scope” and “actively discussed out on the open Web.”
Given that these are the interwebs we’re talking about here, my mind immediately races to worries about spam. Does BWBX have controls to disincentivize and sideline spam? How do they work? Are they effective?
I’ve had these questions for a while now, but I’ve kept them to myself while observing BWBX’s initial growth. Today, I saw that Paul Miller, the widely respected Semantic Web evangelist, wrote a post praising the news platform. So I pinged him on twitter:
@PaulMiller Great write-up of #bxbw! Curious about how articles get assigned to topics. Users push articles to topics? Isn’t that spammy?
Then he forwarded the question:
@jny2cornell Thanks Joshua. :-) Yes, users assign articles to topics. COULD be spammy. Doesn’t seem to be. Comment, @bwbx @roncasalotti
The folks at BWBX tweeted that they had answered the question in the comments on Miller’s post. I’ve excerpted the relevant parts of their comment:
We track several user actions on each item and use a weighted algorithm to score both users and the articles/blog posts. We monitor those scores to not only determine top users or most valuable items in a topic … but also to determine gaming within the system. We also crowd-source user activity via a full reporting system and back-office moderation team.
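I can only guess at what BWBX’s “weighted algorithm” actually does, but the general shape of what they describe—weighted user actions rolled up into item and user scores—is easy to sketch. The action names and weights below are my own invention, not anything BWBX has disclosed:

```python
# Toy sketch of a weighted scoring scheme like the one BWBX describes.
# These action names and weights are hypothetical, not BWBX's.
ACTION_WEIGHTS = {
    "submit": 3.0,
    "comment": 2.0,
    "view": 0.5,
    "flag": -4.0,  # crowd-sourced reports count against an item
}

def score_item(actions):
    """Sum the weighted user actions recorded against one article."""
    return sum(ACTION_WEIGHTS.get(a, 0.0) for a in actions)

def score_user(item_scores):
    """A user's reputation: the average score of the items they submitted."""
    return sum(item_scores) / len(item_scores) if item_scores else 0.0

article = ["submit", "view", "view", "comment", "flag"]
print(score_item(article))  # 3.0 + 0.5 + 0.5 + 2.0 - 4.0 = 2.0
```

Even in a toy like this you can see where “determining gaming” would come in: a user whose submissions attract mostly flags drags their own reputation down, which presumably lets the back office find them.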
Now, I’m no expert on “back-office moderation,” but that answer left me scratching my head. So I pinged again:
@PaulMiller What do you make of @bwbx’s comment on your post? http://bit.ly/hTL1 I must admit, I’m having a difficult time parsing it.
Miller answered my question quite aptly, I think:
@jny2cornell seems clear… “back office magic keeps it clean”… ;-) You should try #BWBX, and see how the site performs to your own needs
Yes, it does seem clear: clear as mud. And that strikes me as a problem. If I’m thinking about joining BWBX, I’d like some assurance that the effort I pour into it won’t go to waste as usage scales up and abuse inevitably creeps, or floods, in. I’d be worried, for instance, if I knew that the “back office moderation” were mostly human. Of course, I’d be just as worried if I knew that the automated processes were simply unfit for the job.
Peer-to-peer moderation doesn’t work by magic. Take the quintessential case of wikipedia. It’s got a small and hierarchical army of editors. More importantly, though, it may be the first human community in which vandalism is cheaper to clean up than it is to create. That ain’t trivial. It’s not just an important disincentive against spam; it’s arguably the critical one.
I wouldn’t have this level of concern were it not apparent that “push” logic drives BWBX. Consider a contrasting example: twitter works by “pull” logic and is therefore mercifully free of spam. I don’t worry about spammy content wasting my attention because nobody can put content in front of me unless I invite it. And I can un-invite, or un-follow, very easily. This isn’t earth-shattering thinking; it’s virtually as old as the internet, as old as spam itself.
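The pull model I’m describing fits in a few lines of code. This is a generic sketch of the idea, not twitter’s actual implementation:

```python
class PullFeed:
    """Minimal pull-model feed: I only see authors I've chosen to follow."""

    def __init__(self):
        self.following = set()

    def follow(self, author):
        self.following.add(author)

    def unfollow(self, author):
        # Un-inviting a source is one cheap, unilateral call.
        self.following.discard(author)

    def timeline(self, all_posts):
        # all_posts: list of (author, text) pairs.
        # Spam from strangers never reaches my timeline at all.
        return [text for author, text in all_posts
                if author in self.following]

feed = PullFeed()
feed.follow("PaulMiller")
posts = [("PaulMiller", "Try #BWBX"), ("spambot", "BUY NOW")]
print(feed.timeline(posts))  # ['Try #BWBX']
```

The design point: the filter runs on the subscription edge, before content costs me any attention, whereas a push system has to score and moderate content after it’s already in front of everyone.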
So if we’re still getting it wrong, why? And if we’re getting it right, why can’t we be more transparent about it? We know that pagerank is the beating heart of google’s effort to out-engineer spam, and some argue even that isn’t enough.
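And google’s transparency about pagerank is instructive: the core of it is just a few lines of power iteration, publicly described since 1998. A simplified sketch on a toy three-page graph (uniform damping, my own toy data):

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration pagerank on a dict {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page: spread its rank over everyone.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" ends up ranked highest: both "a" and "b" link to it.
```

Publishing the mechanism didn’t hand spammers the keys; the ranking depends on link structure the spammer doesn’t control. That’s the kind of transparency I’d like from a back office.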
In fact, I encourage the folks at BWBX to give a close read to Daniel Tunkelang’s post, which asks, “Is there a way we can give control to users and thus make the search engines objective referees rather than paternalistic gatekeepers?” What goes for search engines ought to go for back office magicians as well.