Today’s guest blog post was written by Phil Wilson, who works in the Web Services Team at the University of Bath. Phil ran a workshop session at the IWMW 2007 event on “The Eternal Beta – Can it Work in an Institution?” in which he addressed the question of whether the Web 2.0 development philosophy of ‘always beta’ is applicable within the educational sector:
Google’s famous for it, Flickr’s moved to Gamma, Moo are on an eternal 1.0 – yet still in institutions we plod on with a tired, slow-moving and opaque process for developing and enhancing applications. From our closed support lines to official notices on unread websites and applications mysteriously changing in front of a user’s very eyes, we look staid and tedious. But it doesn’t have to be like that – we could be fast-paced and interactive – but at what cost? Continuity? Uptime?
I could ramble on about this for thousands of words, but I’ll try and keep it brief (for me):
- you take too long rolling out software
- you don’t do enough unit testing or user testing
One of the leading ideas of eternal beta is small improvements all the time. It’s the preferred model for developing Web 2.0 applications (just look at Google, Yahoo, Microsoft and about a billion Silicon Valley startups). The essence is that if you’ve changed something small and you’re waiting for the next milestone before you release, you’re crazy – just deliver it. If it turns out to be wrong or broken in some way, you can just change it again.
There are a couple of objections people typically raise:
One of the big fears is that it hasn’t been user-tested enough. Well, in institutions we’ve got thousands of technically-minded members – staff and students alike; what do you think the odds are on being able to make, say, twenty of them beta testers? (It’s critical to get testers from outside your team; your team are effectively the alpha testers.) I mean, you’ve probably got bloggers, Facebook group founders and tech contacts everywhere. See who you can find to test your apps – it doesn’t have to be the same people for all of them. Make it worth their while, either by delivering a better application to them than to everyone else, or by giving them some mark of kudos inside the application that everyone else can see.
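One simple way to deliver “a better application” to your beta testers than to everyone else is a per-user feature flag. A minimal sketch (the tester names and feature names here are purely hypothetical, and in practice the lists would live in a database or config file rather than in code):

```python
# Per-user feature flags: beta testers see new features before everyone else.
# Tester usernames and feature names below are made-up examples.

BETA_TESTERS = {"alice", "bob"}  # e.g. your twenty recruited staff/students

FEATURES = {
    "new-search": {"beta_only": True},   # still being tested
    "timetable":  {"beta_only": False},  # rolled out to everyone
}

def feature_enabled(feature: str, username: str) -> bool:
    """Return True if this user should see the given feature."""
    flag = FEATURES.get(feature)
    if flag is None:
        return False          # unknown feature: hide it
    if flag["beta_only"]:
        return username in BETA_TESTERS
    return True               # released feature: visible to all
```

When a beta feature turns out to be wrong or broken, you flip `beta_only` back (or drop the flag) instead of rolling back a whole release – which is exactly the “just change it again” idea above.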
This does rely on being able to get good feedback from your testers – hey, you’d hope that if your software is good enough they’ll be telling you anyway, but you can use incentives or whatever floats their feedback-giving boat. The important part is exposing the feedback communication channel: maybe it’s a forum, maybe it’s a blog where you post the new features and they add comments, maybe it’s a weekly meetup in the bar. Whatever you do, it’s absolutely critical to talk to those people and to make sure they can see that there are other active testers whom you’re listening to and actually replying to. No trust == no good feedback.
The other big fear is that this basically throws traditional software development and delivery out of the window (farewell, cruel Gantt chart). When a team suddenly has deliverable dates measured in days rather than months, priorities change and you start getting people-focussed software rather than something focussed on year-old requirements. This is where agile techniques start kicking in: pair programming, continuous integration and automated deployment are all your friends. Methodologies like PRINCE2 and Scrum are there to pick up the rest of the slack.
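The heart of continuous integration plus automated deployment is a “test, then ship” gate: every small change runs the automated checks, and only a green build goes out. A minimal sketch (the check and deploy commands are placeholders you would replace with your own test suite and deployment step):

```python
# Sketch of a "test, then ship" gate: the release only goes out if every
# automated check passes. Commands below are placeholders, not a real CI tool.
import subprocess

def run_checks(commands: list[list[str]]) -> bool:
    """Run each check command in turn; True only if all exit with 0."""
    for cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            return False
    return True

def deploy_if_green(checks: list[list[str]], deploy_cmd: list[str]) -> str:
    """Deploy only when every check passes; otherwise block the release."""
    if run_checks(checks):
        subprocess.run(deploy_cmd, check=True)
        return "deployed"
    return "blocked"
```

Wire something like this to run on every commit and the “small improvements all the time” model stops being scary: a broken change is blocked before users see it, and a good one reaches them in minutes rather than at the next milestone.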
In the real world, although my team isn’t quite there yet (notably with the feedback), we’re trying hard and it’s paying dividends in terms of delivered software and happier users.
University of Bath
Phil’s blog: http://philwilson.org/blog/