Humane interfaces, simplisticity, and domain languages
One facet of the debate is an example comparing the equivalent "list" classes from Ruby (Array) and Java (java.util.List). Java's List has 25 methods while Ruby's Array has 78. Martin uses that fact to conclude that Ruby's list class is somehow more "humane", while Elliotte thinks it's just bloated and that a minimal interface better serves how people actually work.
Martin's primary argument for the "more is better" approach is that:
Humane interfaces do more work so that clients don't have to. In particular the human users of the API need things so that their common tasks are easy to do - both for reading and writing.
while Elliotte's "less is more" approach is that:
More buttons/methods does not make an object more powerful or more humane, quite the opposite in fact. Simplicity is a virtue. Smaller is better.
As you might have guessed, I think both of them are partially right and that there's something even more important that they are really bringing up that should be discussed.
list.first() is actually more humane/usable/readable/etc. than list.get(0). Why? Because the intent needs to be clear and obvious to humans, not just the compiler. Even worse, crap like list.get(list.size() - 1) is just plain wretched compared to list.last() -- the intent is murkier, it's more complicated, and it's easy to get wrong (off by one). Also, look at how many parts of the list abstraction are "leaked" in just those two examples: linear positioning, indexing, zero-based indices, the first element being "always" at position zero, reliable sizing, etc.
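To make the comparison concrete, here is a short Java sketch (the class name IntentDemo is mine, purely for illustration) showing how much index arithmetic the reader has to decode just to get the first and last elements:

```java
import java.util.Arrays;
import java.util.List;

public class IntentDemo {
    public static void main(String[] args) {
        List<String> list = Arrays.asList("alpha", "beta", "gamma");

        // Java's minimal interface: the reader must verify the index math.
        String first = list.get(0);
        String last = list.get(list.size() - 1); // off-by-one territory

        // Ruby would express the same intent as list.first and list.last.
        System.out.println(first); // alpha
        System.out.println(last);  // gamma
    }
}
```

Each call compiles down to the same work; the difference is entirely in what the human reading it has to reconstruct.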
However, Elliotte is completely right that having 78 methods in any class is an atrocity. Anything with that much surface area is way too complicated for humans to keep in their heads. It also sets a bad example for coders learning the recommended way of doing things -- i.e., "just throw in anything you feel like."
Going to the opposite extreme of a bare-minimum, strictly necessary set of methods is also too simplistic. For example, Elliotte throws down an image of various remote controls: two fairly complicated "universal" remotes and a minimalistic one from Apple. But who gets to choose what that minimal set will be for everybody? In software, almost everyone will end up wasting time and introducing bugs by writing their own versions of truly common bits of code. Software is, in this regard, much more of an engineering practice than a mathematical reduction.
Hopefully it's obvious that there's a reasonable middle ground. Like any good standard, a core library should codify truly common behavior rather than whatever sounds good to any one special-interest group. Other important tools in this effort are good design principles and refactoring. [I find it ironic that Elliotte brought up the issue of refactoring in a debate with Martin.] Also, both extremes miss addressing the specific needs of the context in which the code is being used: a pro using something every day vs. a serious hobbyist vs. a random user vs. a half-blind grandmother with rheumatoid arthritis vs. .... Context matters.
Alas, arguing back and forth over those sorts of details makes it easy to miss a fundamental, crucial point: no software (library, application, language, operating system, or whatever) can be all things to all people. Fighting that war is not only pointless but is one of my definitions of insanity. The point of a chunk of good software is to enable the effective and efficient creation of more good software and to help inhibit the creation of bad software.
So, how then do we build up our own code to fix whatever shortcomings we find? By building our own libraries on top of whatever the core gives us -- libraries that provide the clean abstractions and domain-specific languages we need to get our jobs done, and done well.
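As one small sketch of that idea (the Lists helper class here is hypothetical, not part of any standard library): rather than waiting for java.util.List to grow a humane surface, a team can layer intent-revealing names on top of the minimal core it already has.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical utility layer: restores intent-revealing names
// on top of java.util.List without bloating the core interface.
final class Lists {
    private Lists() {}

    static <T> T first(List<T> list) {
        if (list.isEmpty()) throw new IllegalArgumentException("empty list");
        return list.get(0);
    }

    static <T> T last(List<T> list) {
        if (list.isEmpty()) throw new IllegalArgumentException("empty list");
        return list.get(list.size() - 1);
    }
}

public class ListsDemo {
    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 3);
        System.out.println(Lists.first(xs)); // 1
        System.out.println(Lists.last(xs));  // 3
    }
}
```

The off-by-one arithmetic now lives in exactly one audited place, and every call site reads like the domain rather than like index bookkeeping.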