Open-Source Factors of Success
Open source projects live or die by contributions. There is the occasional project like Open Office that can get by on financial backing alone, but most of us have to survive on collaboration and communication.
What Makes a Successful Open Source Project?
Most Open Source projects simply have to follow a basic formula:
- Do something worthwhile
- Accept contributions
- Make contributions easy
Now this is only a rule of thumb; you can make something useless and easy to contribute to and still have a good time. There is utility in fun.
I am tempted to list an Open-Development process as essential as well (it seems to be for building a community, but is not always required for a project). I will put it down as a tool for building trust, which makes the decision to contribute easier for people.
Do Something Worthwhile
Now from where I am sitting the first one is always easy: either you are doing it or you are not. Not very helpful; let me try again.
Since you wrote the software chances are it does something useful for you and hopefully others (with the exception of the anti-pattern of Resume Ware).
If you are having trouble:
- Have an actual itch (something that bugs you)
- and scratch it (solve it)
We all have plenty of fascinating ideas; actual need is generally the focus required to make something worthwhile. (You can also try to find someone with the actual need and get them to pay you money.)
Accept Contributions
This second point is surprisingly difficult. It is one thing to give out access to the source code; it is another to survive in the face of changes.
For me this comes down to two points:
- Is it clear how I can submit a change, or join the project and get commit access?
- Does the project support the idea of a community at the Design & Architecture level?
At a Design & Architecture level the concept of a Plug-in based system has proven itself again and again as the way to do things. The success of Firefox with respect to Mozilla is one small example.
The strength of the plug-in model in open source projects rests on two benefits. At a community/people level, a plug-in based system provides a sense of ownership and responsibility over a section of code. It also keeps developers from tripping over each other quite so much. At a technical level it does its job: allowing the system to be extended.
This magic combination of an architecture matching both the technical needs and the social needs of an open source community means I look a bit sideways at any open source project that is not plug-in based (often they are the product of a single developer). If a project (like GeoTools below) combines plug-ins with well-known hoops to jump through for inclusion, things start to look pretty good.
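The plug-in boundary described above can be as small as a single interface: the host knows only the contract, and each contributor owns an implementation behind it. A minimal sketch (all names here are hypothetical, and a real project would discover the plug-ins rather than hard-code them):

```java
import java.util.List;

// The contract the host exposes; this is all a plug-in author must learn.
interface RenderPlugin {
    String name();
    String render(String feature);
}

// Two independently owned plug-ins; neither needs to know the other exists.
class WktPlugin implements RenderPlugin {
    public String name() { return "wkt"; }
    public String render(String feature) { return "WKT(" + feature + ")"; }
}

class SvgPlugin implements RenderPlugin {
    public String name() { return "svg"; }
    public String render(String feature) { return "<svg>" + feature + "</svg>"; }
}

public class PluginDemo {
    public static void main(String[] args) {
        // In a real project this list would come from a discovery mechanism.
        List<RenderPlugin> plugins = List.of(new WktPlugin(), new SvgPlugin());
        for (RenderPlugin p : plugins) {
            System.out.println(p.name() + ": " + p.render("POINT(1 2)"));
        }
    }
}
```

The social benefit falls out of the structure: a contributor touching `SvgPlugin` cannot break `WktPlugin`, so code ownership and review boundaries come for free.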
The range of responses to these forces varies from project to project:
- GeoTools takes a two-pronged approach: a Developers Guide and a strong, supportive design and architecture. The Developers Guide outlines the procedure for getting commit access, submitting changes and all that good stuff. The design and architecture is basically layered, with support for additive plug-ins at each layer of abstraction. It currently makes use of SPI for plug-in services; simple enough, if a little long in the tooth.
- GeoServer is strong on doing something worthwhile, supporting R&D branches and leveraging a host of standard J2EE design tricks and technologies like STRUTS. It is showing signs of developing a more supportive framework, if you follow the right mailing lists.
- GeoAPI is on the worthwhile spectrum (common interfaces for use by many toolkits), but has recently had such an influx of process that nobody is really sure how to contribute right now.
- UDIG is trying to hit all the points. The Hacking Guidelines are refreshingly small (three guidelines), and a Community section outlines how to take part. For a supportive design we are making use of the Eclipse Platform.
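The SPI mechanism GeoTools leans on is `java.util.ServiceLoader`: a plug-in jar ships an implementation plus a text file under `META-INF/services/` naming the class, and the host discovers it at runtime. A minimal sketch (the `DataStoreFactory` interface here is hypothetical; run standalone with no registration file, discovery simply yields nothing, which a robust host must tolerate):

```java
import java.util.ServiceLoader;

// A hypothetical service contract. A plug-in jar would provide an
// implementation plus a file META-INF/services/DataStoreFactory
// containing the implementation's fully qualified class name.
interface DataStoreFactory {
    String displayName();
}

public class SpiDemo {
    static int countProviders() {
        int count = 0;
        // ServiceLoader scans the classpath for registered implementations.
        for (DataStoreFactory f : ServiceLoader.load(DataStoreFactory.class)) {
            System.out.println("found: " + f.displayName());
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // With no META-INF/services entry on the classpath, discovery
        // finds nothing and the host degrades gracefully.
        System.out.println("providers: " + countProviders());
        // prints: providers: 0
    }
}
```

The appeal for an open source project is that registration is additive: dropping a new jar on the classpath adds a provider without anyone editing the host's code.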
Making Contributions Easy
There are technical aspects to this problem, but mostly it is a social issue that must be thought through. Are people willing to put up with the license? Can someone learn enough to contribute to the project? This is all about the people.
Often it is easier to maintain an external fork than to put up with an unappealing license or too much process. GeoTools has only recently escaped this, by being active enough that maintaining a fork is more expensive than feeding the changes back to the community. (This is about the only benefit to API churn that I can imagine. Ever.)
The technical issues are there all right, and one can get them wrong: using an unfamiliar version control system, not providing source code downloads, not supporting the IDE used by the majority of those interested in contributing, requiring unit tests that take 15 minutes to run before accepting a commit. Heck, I have made each and every one of these mistakes (most in the last week).
But they all fade in comparison to the learning curve. What is the learning curve like? Will contributors lose motivation before being able to fix the thing they wanted?
Documentation and Learning Curve
Of the projects mentioned above only two make use of an external framework (GeoServer and UDIG). In terms of learning, what would the advantage of doing so be?
There is a chance (if you choose the right technology) that contributors will already be familiar with the ins & outs and will have an easier go of it. This is of course the ideal. The risk is that if you choose wrong people will just have two things to learn.
I chose wrong with STRUTS for GeoServer (most contributors simply have to learn two things). The other downside to STRUTS, or our inexperienced use of it, is a lack of support for the plug-in style of contribution. Branch and merge simply takes longer than plug-in based alternatives. The success of Spring among open source projects is indicative of this tradeoff.
I suspect I am choosing right with uDig and RCP; now if only I can help people learn it.
One thing all these projects have in common is the use of industry standards. While we may talk about standards being wonderful and trump up the idea of interoperability, one of their main practical advantages to an open source project is documentation.
(There is still room to argue over the standards, but they at least give everyone a common language to debate with.) The ability to print out and read a 100-page document about what is going on should not be underrated.
This is a problem with ISO-based standards (where it is pay to play), and it is one of the driving factors in the adoption of OGC standards by the Java GIS community.
One benefit to using both standards and frameworks is the possibility of buying or finding existing documentation. Open source projects are suckers for even bad documentation, whether in the form of books or websites; anything that helps will be of use.
A Reading List for UDIG
I found something out as I assembled the list: I don't use all these tutorials and books. Don't get me wrong, they each saved days of work, but their usefulness entered in at a specific point on the learning curve.
Please check out the above reading list; feedback is welcome. Taking my own advice (this being a form of contribution), please add a comment to the bottom of the page.