If mice could program: Some thoughts on software engineering research

Last updated on 2012-12-10

One topic that interests me a lot is software development. I also do research on it (in my PhD), but my interest is not new; it goes back to my first management meeting. In that meeting, my boss showed us lots of graphs demonstrating how we managed to catch bugs early, how this reduced the cost of the project, and so on. Nice things, but I thought it was problematic to call the project well managed when we deployed the system about a year late and with many features not implemented. My manager, who was a genius at producing all kinds of graphs (bugs found at each development phase, requirements changed each iteration, etc.), thought this didn’t matter because we delivered a very good working product, and that was what counted. I thought differently. But could I prove it in some way?

And this problem of “proving” that one software development methodology is better than another still haunts me. “What is the problem?” you are probably asking yourselves. “Obviously agile is better than waterfall, and OOP is better than functional.” But is it really that way? Ambysoft, Scott Ambler‘s company, has run a number of surveys (2007, 2008, 2010, 2011) which clearly show that the success rate of iterative and agile projects is higher than that of waterfall projects. But, and this is very important, this doesn’t mean that all agile (or iterative) software development is inherently better than waterfall! Why? Because correlation does not imply causation.

How could we scientifically prove that agile is better than waterfall, or OO better than functional? Easy: by doing controlled experiments and comparing the results. For example, take the requirements of a system and two teams of developers; let one team implement the requirements using agile and the other using waterfall. Repeat the experiment enough times until you have enough data. Write a paper with your results. Receive a PhD. Easy, right? No… very, very hard.
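For concreteness, here is what “comparing the results” could look like once the data is in. This is a minimal sketch in Python with entirely invented delivery times and a simple permutation test; a real experiment would need a proper outcome measure and far more care:

```python
import random
import statistics

random.seed(42)

# Hypothetical experiment: weeks until delivery for each team.
# These distributions are invented purely for illustration.
agile = [random.gauss(40, 8) for _ in range(30)]
waterfall = [random.gauss(45, 8) for _ in range(30)]

# Observed difference in mean delivery time (waterfall minus agile).
observed = statistics.mean(waterfall) - statistics.mean(agile)

# Permutation test: if methodology made no difference, relabeling the
# teams at random should often produce differences this large.
pooled = agile + waterfall
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[30:]) - statistics.mean(pooled[:30])
    if diff >= observed:
        extreme += 1

print(f"observed difference: {observed:.1f} weeks")
print(f"p-value (one-sided): {extreme / trials:.4f}")
```

Note that each data point here is an entire team completing an entire project, which is exactly why gathering “enough data” is so expensive.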

First of all, to get good experimental results, the two groups would have to be fairly similar. But how the hell do you get two “similar” groups of (real) programmers to invest so much time and effort just for the sake of research? And there is no such thing as “similar” groups! Skill among programmers is not normally distributed: picking a random set of programmers and dividing them into two groups will not give you two similar groups (I had a paper that showed this somewhere… still searching for it). Just one Linus Torvalds, Doug Lea, Joshua Bloch or similar programmer in one of the groups and your experiment is completely screwed.
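To see why one exceptional programmer can wreck random assignment, consider a quick simulation. The heavy-tailed (log-normal) “skill” distribution below is an assumption for illustration, not measured data:

```python
import random
import statistics

random.seed(7)

# Hypothetical "skill" scores drawn from a heavy-tailed (log-normal)
# distribution: a few programmers are far more productive than the rest.
programmers = [random.lognormvariate(0.0, 1.0) for _ in range(20)]
total = sum(programmers)

# Randomly split into two teams of ten, many times, and measure how
# unevenly the total skill ends up distributed between them.
imbalances = []
for _ in range(10_000):
    random.shuffle(programmers)
    team_a, team_b = programmers[:10], programmers[10:]
    imbalances.append(abs(sum(team_a) - sum(team_b)) / total)

print(f"median imbalance: {statistics.median(imbalances):.1%} of total skill")
print(f"splits off by more than 20%: "
      f"{sum(i > 0.2 for i in imbalances) / len(imbalances):.1%}")
```

With a heavy tail, a non-trivial fraction of random splits concentrate most of the total skill in one team, so the two groups are not comparable from the start.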

But assume that you can find the developers and that they are divided evenly. You must also control for different project types: a very UI-centered project will surely behave differently from a server-side project with strong algorithmic requirements. Yet more variables to check.

So what do we do in academia? We experiment with students. But sadly, students are not real programmers, and they are not developing real-world projects. So our results, while great for publishing purposes, are most of the time not representative. And that is the reason why I think we should teach mice to program. If we achieve that goal, we could experiment with them and finally understand how we should develop software!

Disclaimer: I think agile is a great development methodology, but waterfall is also good if implemented as originally proposed by its creator, Winston Royce, with backtracking when necessary. He even wrote: “I believe in this concept, but the implementation [no backtracking]… is risky and invites failure.”
