
Followup to my experiment in community process

Posted by robogeek on October 18, 2005 at 4:42 PM PDT

A couple weeks ago I did a little experiment in community processes. Supposedly community-driven processes produce better quality because there are more eyeballs on the work. That's an interesting claim, and I wanted to test it.

The Register has an article along the same lines as my test: Wikipedia founder admits to serious quality problems

My test is written up here: An experiment in community process

Basically, I was reading a book about collaborative development processes and remembered an article I saw several months ago about a test of Wikipedia's ability to fix its encyclopedia project through its own community process. In that test someone posted a bogus article expecting the Wikipedia community to notice it and do something about it. When nobody noticed it after a week, the guy went "hmmm...".

In my test I posted a bogus article, specifically a randomly generated computer science paper. I wanted to see how responsive that community is. This time the Wikipedia community came through, and within 18 hours my article was gone.

The results are mixed: in some cases bogus articles stay in place, and in others they get deleted. However, the Register article weighs in with the analysis that there are a lot of problems with Wikipedia's community process, citing the large number of questionable articles as evidence.

Well, I haven't made a deep study of Wikipedia, but the articles I've looked at were generally good. That is, except for the ones related to a peculiar niche interest I have in "energy healing" (see peaceguide.com or reiki.7gen.com for some resources that demonstrate what energy healing is). The related Wikipedia entries are rather spotty, but it's such an out-of-mainstream niche that one would expect poorly informed articles.

One thing the Register article talks about is the tortured quality of the writing in general. I suppose that's a direct result of the communal process, since you've got hundreds of volunteer editors running around each tweaking what other people have already tweaked. That's surely a recipe for strangely written prose if I've ever heard of one.

Looking at it from my perspective in the Java team, I can't help but think about different software development models. You've got open source projects and their collaborative development model, and you've got the Java team with its long-standing, largely closed development model that we're searching for a way to open to the larger community. The open source advocates claim the collaborative model is better, repeating mantras like "with enough eyeballs, all bugs are shallow".

I'm interested in actually testing this claim rather than accepting it on blind faith. The results on Wikipedia's quality give me reason to doubt that a collaborative process will always result in high quality.

As another test I'm taking a look at the Hibernate project, using the FindBugs tool. I've been getting familiar with FindBugs as part of a normal software development process, and I thought I'd also run it on several versions of the Hibernate build. I've only just begun this, but I did find something very interesting.

Namely ... in Hibernate 3.0 there are several very shallow bugs ... which all the eyeballs in that project did not find. For example, code like:

    if (obj == null) {
        throw new SomeSpecificException("unknown object " + obj.toString());
    }

Clearly that code is going to throw a NullPointerException rather than the specific exception the author of that code expected it to throw.
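To illustrate, here's a minimal, self-contained sketch of the pattern and a corrected version. The SomeSpecificException class here is a hypothetical stand-in I made up for the example, not the actual Hibernate exception type:

```java
// A minimal sketch of the null-check bug FindBugs flags, with a fix.
// SomeSpecificException is a hypothetical stand-in, not Hibernate code.
public class NullCheckDemo {

    static class SomeSpecificException extends RuntimeException {
        SomeSpecificException(String msg) { super(msg); }
    }

    // Buggy version: inside the branch we already know obj is null,
    // so obj.toString() throws NullPointerException before the
    // SomeSpecificException is ever constructed.
    static void buggyCheck(Object obj) {
        if (obj == null) {
            throw new SomeSpecificException("unknown object " + obj.toString());
        }
    }

    // Fixed version: string concatenation converts a null reference to
    // the text "null" instead of dereferencing it, so the intended
    // exception is actually thrown.
    static void fixedCheck(Object obj) {
        if (obj == null) {
            throw new SomeSpecificException("unknown object " + obj);
        }
    }

    public static void main(String[] args) {
        try {
            buggyCheck(null);
        } catch (NullPointerException e) {
            System.out.println("buggy: NullPointerException");
        }
        try {
            fixedCheck(null);
        } catch (SomeSpecificException e) {
            System.out.println("fixed: " + e.getMessage());
        }
    }
}
```

The fix is trivial once seen: drop the explicit toString() call and let concatenation handle the null. It's exactly the kind of shallow bug the "many eyeballs" mantra says shouldn't survive.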

Clearly no software development model is perfect, and bugs slip through the cracks of every software development model.
