In my previous article I described my first ever agile project as a Java developer. I highlighted some of the challenges we faced in the requirements gathering and analysis work.
In this article I’m going to talk about my first agile project as a business analyst, and how I applied some of the lessons learned on that earlier project.
I was hired for this project in Summer 2009 (a very rainy summer here in the UK, since you asked – 2010 was soooo much better). When I arrived I discovered that there was no project manager to speak of and no particular process I had to follow. The assumption was that I would take a standard waterfall approach. However, the project was relatively small (1 application, 4 developers, 8 months), and the lead developer was open to trying new things. So I decided we would instead take a ride down Agile Avenue.
Defining the Agile Business Analyst Role
As mentioned in my previous article, there is much debate over the role (if any) of a BA on an agile project. Writing myself out of a job didn’t seem like such a smart option, so I decided that we would indeed have a BA on this project. And if any agilists asked, I would describe myself as the (Scrum-style) Product Owner in order to put them off the scent.
Articulating the Vision
The first artifact I produced was the Vision. The terminology is taken from RUP, but basically it’s a Terms of Reference or Project Initiation Document by another name – background, objectives, high level scope, methodology, timelines, stakeholders, assumptions, constraints, risks and issues.
The key ‘agile’ thing I did with this document was to produce it as a slide deck instead of a standard text-rich document. My main objective here was to avoid all the ‘window dressing’ that usually goes with a written document (you know – full grammatical sentences, fonts, formatting, alignment and so on). But it had a very useful side effect too. On numerous occasions I needed to give a project overview to various interested parties. The Vision was a ‘ready to go’ presentation perfectly suited to this purpose.
You can download a copy of the Vision.
Moving from document to slide deck was also the start of an important mindset shift for me. In the past I’d been used to producing high quality documentation, and I took pride in my work, often spending valuable time fiddling around with the wording and sentence structure to get it ‘just right’. In slide deck format, with the window dressing stripped away, and the content laid bare, my artifact was revealed for what it really was: a means to an end, not an end in its own right.
Identifying Scope and Priorities
The Vision identified 5 or 6 key scope areas that the project would deliver. During the scoping stage I engaged with the various stakeholders (interviews, workshops, questionnaires, the usual stuff) and put together what we called the Feature List. Each feature was a single paragraph description of some desired system behaviour. The Feature List was a spreadsheet so, as with the Vision, there was a focus on content over format.
You can download a copy of the Feature List.
The Feature List was (deliberately) unprocessed. Some of the features were very small and specific. Others were huge and rather woolly. In some cases the features looked like they would probably overlap, and others looked like they might conflict. I tried not to let any of that worry me too much at this stage – I was trying to avoid doing too much detailed analysis up-front. I called them features rather than the more agile term user stories because I didn’t feel they were concrete enough to be classified as stories just yet. This was probably a mistake, of which more later.
I worked with the stakeholders to prioritise the features using MoSCoW (Must have, Should have, Could have, Won’t have this time). I repeatedly made it very clear that the project was to be time-boxed (there was, in any case, an immovable go-live date) and that we would deliver features in priority order until we ran out of time.
I then worked with the lead developer to estimate each of the features. Of course he complained that some of the features were too woolly and gave correspondingly high estimates. And as is common with developers, he also put high estimates against the features he didn’t agree with. We added a contingency of 50% to each estimate to account for the lack of detail.
We used the estimates to determine how many developers we needed to deliver all the ‘must’ and ‘should’ priority features in the available time for go-live (the answer was 4), and sized the development team accordingly.
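For illustration, here’s a minimal sketch of that sizing calculation in Python. The figures are invented – the article doesn’t reproduce the real estimates:

```python
import math

# Invented figures for illustration only - not the project's actual estimates.
base_estimate_days = 300      # summed estimates for 'must' and 'should' features (developer-days)
contingency = 0.5             # the 50% uplift added to cover woolly, under-specified features
capacity_days_per_dev = 120   # working days each developer has before the go-live date

total_effort = base_estimate_days * (1 + contingency)   # 450 developer-days
developers_needed = math.ceil(total_effort / capacity_days_per_dev)

print(developers_needed)  # 4, with these invented numbers
```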
We then organised the remainder of the project duration into 8 three-week increments, and that’s where the real fun began!
Detailing the Functional Design
From my previous experience on an agile project, I knew I wanted to go into each increment with more than just a list of features – I’d seen that approach cause problems last time. With this in mind, I deliberately planned in a couple of weeks before the first increment started, to give me time to get ahead of the game.
I worked with the business team to elaborate the features into something more concrete. I called these more concrete things user stories, and used the standard “As a…I want…so that…” format. I captured the detail of each user story in a fairly cunning use case/acceptance criteria hybrid notation (which I have discovered recently is not dissimilar to that used in Behaviour Driven Development).
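The article doesn’t reproduce a real story, but a hypothetical one in that hybrid format might look something like this (the Given/When/Then phrasing is the BDD-style part):

```
As a conference delegate
I want to register for an event online
So that I don't have to post a paper booking form

Acceptance criteria:
  Given I am on the registration page
  When I submit my name and a valid email address
  Then my registration is confirmed on screen
  And I receive a confirmation email
```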
I elaborated the top priority features first and after two weeks I had enough detailed stories to take into the first increment with a few to spare. This ‘just in time’ approach to elaboration was my attempt to capture the benefits of agile (deliver benefit early, respond to change etc.) whilst avoiding the problems raised by starting a development increment with incomplete requirements.
Rather than writing the user stories on index cards, I put them all together in a single spreadsheet and called it the Functional Specification, or FS for short. The FS was a living document and it was also a shared document (on a shared network drive) – I was adding new stories to it whilst at the same time the developers were reading it and ticking off acceptance criteria as they coded them.
I think it worked really well. The increment planning sessions ran smoothly because most of the tricky questions had already been answered and so the estimates produced were relatively accurate. The developers were able to get on with coding pretty much as soon as the planning session was over.
As ever, once development started on a story, questions were unearthed and further detail required. The benefit of the living document was that I could update it very quickly and with very little effort. Change control was managed simply via a column in the spreadsheet which the developers used to ‘sign off’ each acceptance criterion. Any gaps in the sign off column indicated a new or amended criterion. I used the same technique for business sign-off too – no need for lengthy and repeated review cycles after every change.
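To give a flavour of the technique (with invented story numbers, initials and dates), a fragment of the FS might have looked like this – the empty cells in the sign-off columns are what flagged a new or amended criterion:

```
Story  | Acceptance criterion                        | Dev sign-off | Business sign-off
US-12  | Registration confirmed on screen            | JW 14/09     | SB 16/09
US-12  | Confirmation email sent to delegate         | JW 15/09     | SB 16/09
US-12  | (new) Email address validated before submit |              |
```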
Prototyping the User Interface
I also built a prototype of the user interface. As this was a web application, I produced the prototype as HTML, using Adobe Dreamweaver.
I built the prototype in parallel with the user stories – i.e. ‘just in time’ – adding new pages to it as and when required. For a given feature, I would sometimes do the prototype first and other times I would write the user stories first. Generally, one informed the other, so I might start with a story, then do the prototype, then go back and amend the story with something I learned during prototyping. Sometimes I didn’t bother with a prototype for a story if I didn’t feel it added any value.
The prototype was pivotal to the stakeholder workshops. I would display the prototype on the projector screen and have the FS on my laptop screen in front of me (good old dual-screen technology!). This allowed me to talk through the user stories using the prototype as a visual focus. If I’d had enough coffee that morning and was especially on the ball, I could even make changes to the prototype on the fly.
This method was a real success – stakeholder participation was high and we evolved and refined the prototype (and associated stories) over the course of a few workshops, even before ‘proper’ development had begun on those stories. Again, this was done ‘just in time’ rather than all up front.
The prototype also made life easier for the developers because they were able to lift the prototype HTML directly into the application (I made sure the prototype used all the same layout and format as the application itself).
Read more about using wireframes to elicit, analyze and validate requirements.
Creating Other Artifacts
In the earlier increments, I produced two other artifacts: a Logical Data Model and a Screen Navigation diagram. Again, I took a ‘just in time’ approach and only included the changes that were relevant for the upcoming increment. By increment 3 or 4 it became apparent (during the ‘retrospective’ session) that neither artifact was being used: the developers were able to infer the Logical Data Model from the FS and the Screen Navigation from the prototype. The artifacts were duly discontinued, and a whole load of unnecessary work avoided – a real triumph of the agile approach.
Incorporating User Feedback and Evolution
One of the stated key benefits of incremental development is the ability to get user feedback early, and to evolve the system based on that feedback. Ideally this is done by actually putting the system live as soon and as frequently as possible. At the very least you are supposed to ‘showcase’ the system at the end of each increment.
Showcasing the system turned out to be unnecessary – the feedback users would normally give at that point had already surfaced during the ‘just in time’ prototyping workshops. But I was keen to get user feedback based on actual system use, and sooner rather than later.
We were restricted to a single go-live at the end of the project, so releasing early and often was unfortunately not an option.
Instead we conducted two ‘mini’ phases of User Acceptance Testing (UAT) – after increments 4 and 7. Each UAT phase was focused on specific areas of functionality that we felt would most benefit from hands-on user feedback. We got feedback from around 30 ‘friendly’ users, looked for common gripes and scheduled extra user stories into later increments for the most important ones.
Tracking Progress and Managing Change
I used burn-down charts to track progress through the project. I had one chart per increment (tracking against user stories) and also a high-level chart for the project as a whole (tracking against the Feature List).
My charts were actually inverted burn-down charts (burn-up charts?) in that the progress line worked its way upwards over time towards a ‘100% scope complete’ target line (a normal burn-down chart works downwards towards the x axis). This allowed me to show scope creep on the chart by moving the 100% line upwards (e.g. 20% scope creep would take the line up to 120%).
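As a rough sketch of that kind of chart (with invented data rather than the project’s real figures), something like the following Python would do the job:

```python
import matplotlib.pyplot as plt

# Invented data for illustration - cumulative % of originally estimated scope.
increments = list(range(9))                              # project start plus 8 increments
delivered = [0, 10, 22, 35, 46, 60, 74, 88, 100]         # progress line, working upwards
target = [100, 100, 100, 110, 110, 120, 120, 120, 120]   # 'complete' line, raised as scope creeps in

plt.plot(increments, delivered, marker='o', label='Delivered')
plt.step(increments, target, where='post', linestyle='--', label='100% scope (incl. creep)')
plt.xlabel('Increment')
plt.ylabel('% of estimated scope')
plt.title('Project burn-up')
plt.legend()
plt.show()
```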
The chart was an excellent tool for managing stakeholder expectations and worked hand-in-hand with the MoSCoW prioritisation – every time a new feature or change was requested mid-project (including those arising out of the UAT phases), I would ask the stakeholders to prioritise it, get it estimated, then show them the increased gap between current progress and the ‘complete’ line. At one point the gap got too big and we spent some time re-prioritising all features to make sure we were definitely focusing on the right things.
By the end of the project we had hit the (original) 100% line. We had delivered all of the ‘musts’, most of the ‘shoulds’ and a few of the ‘coulds’. Most importantly, the stakeholders were happy because they had been involved in deciding what to deliver every step of the way.
You can see the overall project burn-up chart on the ‘Progress’ tab of the Feature List. You can see the per-increment burn-up charts on the ‘Stats (i1)’-‘Stats (i8)’ tabs of the Functional Specification.
Learning from the Retrospective
Overall, I was really pleased with how this project went. By pretty much any measure it was a success. The system went live and is being used today by around 30,000 users.
In terms of managing the analysis artifacts, I did have one major headache: I was maintaining two separate lists of scope items in parallel – the Feature List and the user story list (in the FS). To keep track of progress (and to keep the various burn-up charts accurate) I constantly had to make sure the two were in sync.
With hindsight, it might have been better to combine the two into a single list. I had wanted to keep a distinction between ‘woolly’ high-level features and detailed, elaborated user stories. But really I think that the latter are just a progression from the former. Agilists commonly refer to large, high-level stories as ‘epics’ and maybe I could have done that too.
I appreciate that I’ve glossed over some of the juicier details in this article – it would have been too long otherwise. In future articles I hope to deep-dive into specific aspects of the artifacts and techniques I used on this project. If they worked for me, they might work for you too, so if there’s anything you’d particularly like to hear about, please leave a comment.