How much documentation do you need?

Stacks of documents being measured with a tape measure
Used under a Creative Commons License courtesy of gadl @ Flickr

As a business systems analyst, one of the things I struggle with is finding the right balance of documentation.  Oftentimes documentation feels like a chore, and as Modern Analyst points out, it can seem downright counter-productive in an Agile environment.  Yet, in many situations documentation is necessary for other project stakeholders to do their jobs and to ensure that someone can learn how the system works once you’re gone.  The tricky part is figuring out how much, and what type of, documentation is required.

Probably the most important consideration when determining documentation efforts is your stakeholders.  Who will read this documentation, and how will they use it?  Here are a couple of thoughts on how different stakeholders use documentation and descriptions of their unique needs.

Software Developers: If the main users of your documentation are going to be software developers, you might as well not write it.  I kid, but seriously, developers do not really need detailed specification documentation, and generally won’t read it anyway.  In my experience, the best artifacts for conveying business requirements and rules to developers are simple ones like wireframes and flow charts.  Those, along with a handful of conversations and some whiteboard time, are generally all you need to get developers building to spec.

Testers: The underpaid, understaffed, and underappreciated grunts of the software world ensure that our software works in all those edge cases and produces predictable results.  It’s been my experience that these guys and gals often need highly detailed documentation to get the job done, because it gives them the specifics they need to write test cases with predictable outcomes.  Furthermore, testers are often located offshore, which complicates direct communication and makes detailed documentation all the more necessary.
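To make that concrete, here is a small sketch of how a precise spec translates into test cases with predictable outcomes.  The rule, the function, and the dollar amounts are all hypothetical, invented for illustration:

```python
# Hypothetical spec rule: "Orders of $100.00 or more ship free;
# otherwise shipping is a flat $7.95."  A spec this specific lets
# a tester write boundary and edge cases with exact expected results.

def shipping_cost(order_total):
    """Reference implementation of the hypothetical spec rule."""
    return 0.00 if order_total >= 100.00 else 7.95

def test_free_shipping_at_threshold():
    # Boundary case taken directly from the spec's "$100.00 or more"
    assert shipping_cost(100.00) == 0.00

def test_flat_rate_just_below_threshold():
    # Edge case one cent under the boundary
    assert shipping_cost(99.99) == 7.95

test_free_shipping_at_threshold()
test_flat_rate_just_below_threshold()
```

Without the exact threshold and rate written down, a tester would have to guess at both, and the test outcomes would no longer be predictable.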

Business Users: When process owners change, aside from wanting to scrap the entire process, the new owners will generally want some documentation that explains both the purpose of the business processes and how the software works.  Ideally, process owners and business stakeholders can provide much of their own documentation.  To meet these needs, a business case can explain the purpose and goals of the process/system, and some operating procedures or guidelines can provide an overview of the business process.  Finally, the analyst might want to provide a flow chart and some notes explaining how the process and system fit together, along with references to the more detailed documentation if it exists.

Future Analysts: Oftentimes I have found myself cursing the people who left behind no documentation on the workings of the system I am expected to modify.  Don’t be that guy.  When reading historical documentation, analysts mostly need to be able to glean the business rules, the project justification, and some idea of the system processes.  These can generally be derived from some combination of the other types of documentation listed above.  As long as the other documents cover those subjects, there probably is no need for documentation specifically intended for future analysts.

Unfortunately, each set of stakeholders requires a different type of documentation.  Sometimes this means that we analysts need to write multiple sets of documentation for the same project.  In the end, analysts should aim for just enough documentation to get projects accurately built and tested, and enough to ensure future stakeholders can understand what was done.

If you’re interested in more on this subject, Modern Analyst has a great article on requirements specification in an Agile environment that touches on this, as well as recommendations for methods to communicate requirements.


Lessons Learned on Design, Analysis, and Agile

Sticky notes clustered together for an agile planning meeting

When I was at InfoCamp a couple weeks ago, Erin Hawk and Samantha Starmer both spoke about how they’re working to integrate design into the Agile Software Development process.  This got me thinking about some of the successes and failures I’ve seen in Agile projects and how those related back to the integration of the analyst role into those projects.

Since Agile is a methodology created by developers for developers, it tends to be fairly development-centric, and a lot of Agile implementations neglect both design and requirements processes.  As someone who has been in the development, analysis, and design roles of systems implementations, I have seen Agile applied with a variety of results.  Overall, the projects that successfully integrated designers and analysts into the Agile process tended to have much better results.

Many projects are unsuitable for developers to be the primary group interacting with stakeholders (that’s another post altogether).  Integrating designers and analysts into an Agile project helps ensure that the end product meets both business and user needs the first time (i.e., on time and on spec).  Not to mention, it makes the developers’ job easier by cutting through a lot of the ambiguity around a project and reducing the number of times developers have to change what they’ve made.  Unfortunately, since Agile stresses speed and iteration over carefully planned designs, integrating design and analysis into the process is not necessarily easy.

Here are four of the lessons I’ve learned on how to succeed in Agile development from an analyst’s point of view.

1 – Be an agile waterfall

Agile development focuses on rapidly developing, testing, and releasing a product, which has a tendency to squeeze out designers and analysts.  However, many projects and products are so complex, and have so many stakeholders, that they require specialists (a.k.a. designers and analysts) to handle that complexity.

The best Agile projects I’ve worked on kept all of the phases of the Systems Development Life Cycle, but compressed them down into small, fast chunks.  By reducing the project to a cluster of discrete but related deliverables, i.e., user stories, you can keep all the phases of the SDLC but still move at a rapid, iterative pace.

2 – Use staggered phases to keep designers & analysts ahead of developers

One of the main reasons Agile has gained so much popularity is that developers and organizations became fed up with lengthy requirements gathering and design processes.  Such processes tend to produce specs that are out of date by the time the developers get around to them.

By having designers and analysts work one phase (or two at most) ahead of the developers, organizations can deliver just-in-time specifications.  Ideally, the development team can put together a rough set of initial stories and prioritize work accordingly at the beginning of the project.  After that, an initial sprint (called sprint zero by some) gives the designers and analysts time to get ahead of the developers while the developers set up dev environments and take care of any other start-up items.  Sprint length will vary from organization to organization, but in my experience two weeks is a good length of time for each group to do a thorough job with their current tasks.

3 – Coordination & communication are key

I can’t stress enough that everybody needs to be in constant communication throughout the process.  As analysts are working on requirements, they should be giving developers a heads up on what’s coming down the pipe.  This keeps the developers abreast of changes, and there will always be changes.  While developers are programming a current set of tasks, analysts can provide feedback and integrate any changes the developers want back into the requirements documents.  A good project manager is often key to this process, as project managers are skilled at keeping up communications and preventing the geeks from siloing up.

4 – You’re either all-in or you’re all-out

Either everybody is working on the project, or nobody should be.  One of the biggest wastes of time is when development resources get pulled from a project but the analysts and designers keep going.  In the best case, it takes six months for the developers to come back, and keeping the rest of the team on the project just produces a waterfall-style mountain of obsolete design documents.  In the worst case, the project gets cancelled, and all the design and analysis effort is wasted.  If one team gets pulled from the project, management should sideline the whole project until everyone can get back to it.

Those are my 2 cents on the matter, and I’d love to hear about your experiences in integrating design and analysis into Agile projects.

What I Learned at Info Camp

Word cloud of session descriptions for InfoCamp: Information, Design, Research, User, IA, etc.

Last weekend, I attended InfoCamp, a community-organized “unconference” for people excited about information and how we manage and consume it.  This was its 4th year in Seattle, and I must say, it blew me away.

At first, I was pretty skeptical of the whole “unconference” idea.  There weren’t any predefined topics or sessions other than the keynote and plenary speakers, and all the breakout sessions were run by conference participants.  I thought this meant the breakout sessions would be poorly prepared, and thus I wouldn’t like them.  As it turned out, there was a pretty good mix of highly polished presentations and more spontaneous ones.  The polished ones were good, and the spontaneous ones seemed to morph into enjoyable and informative Q&A sessions.

Overall InfoCamp gets an A in my book and I’m certainly going back next year.  And who knows, maybe I’ll even volunteer to run a session myself.  Without further ado, here’s a quick synopsis of the sessions I attended.

Content Management Strategy – More than just words

Vanessa Casavant gave an interesting and entertaining talk about where content management strategy fits into an organization.  The answer seemed to be right in the middle: she explained that the role of a content strategist is to ensure that the information and features an organization publishes clearly align with the organization’s mission, and that they aren’t sending mixed signals about the organization to the end consumer of that information.

My key takeaways were that content on the site needs to align with the organization’s strategy, and that messaging should be consistent across the board rather than a haphazard spray of content all over the organization’s website.

DAM Systems for Creative Agencies

Tracy Guza gave an overview of how creative agencies (mostly advertising firms) can employ Digital Asset Management (DAM) systems to manage their pictures.  This included file storage, searchability, metadata, and the whole nine yards.

I was particularly interested in this session because I work for a large stock photography company, and I wanted to learn more about how creative agencies use and manage the images they buy from us.  It was enlightening to learn some of the challenges facing a firm that manages 50,000 images, which are very different from the challenges my company faces in managing 14 million images.

Economics of Online Advertising

Jeff Huang led a discussion on how the economics of search advertising work.  This included the auction model and fraud prevention.  Much of this was old hat to me, but it was interesting to get questions answered by an expert who has worked at all three of the big search engine companies.

Priority Inbox

Ario Jafarzadeh, one of Google’s designers on Gmail, gave a talk about the design process behind Priority Inbox and all the iterations they went through to get to the current design.  It was pretty interesting to hear him describe the decisions they made, from what type of icons to use to whether they should use a video or a written document to explain the feature to users (it turns out they decided to make one of each).

This session was like manna falling from the sky.  I wrote a post about two weeks ago describing how much I liked Priority Inbox and speculating as to why it had taken so long for someone to do this.  I got my question answered, or at least got his answer to it.  He explained that email is so personal that email providers have to get this sort of thing exactly right, or else they’re going to upset their users pretty badly.  Apparently the machine learning (AI) behind the feature was so complicated that it took Google until now to build it.  While I’m sure that’s true, I still think part of the answer is that nobody thought of doing this for email, or at least that nobody was willing to invest the time and money to make it happen until now.

If you’re interested in the keynote and plenary speeches, which were both good, you can find out more here.

Priority Inbox = Awesome, but why did it take so long?

Google recently launched a new feature in Gmail called “Priority Inbox”.  It’s an amazingly simple but powerful concept for dealing with email overload, and I’m surprised that this is the first time I’m seeing it.  Basically, Google’s using some predictive software to analyze your email use behavior and use that data to present you with the emails that should be most important to you.  Their explanation is better than mine, so check it out.

What I’m interested in is why this is happening now instead of two to four years ago.  People have been suffering from email overload for a long time, and while there are strategies for coping with it, none of the ones I’m familiar with have been truly smart solutions until now.  Folders and filters give users some control over how to deal with incoming email, but they rely on static rules that can’t adapt to change and require active effort from the user to set up.  Conversely, Priority Inbox adapts to users’ actual email reading habits to build dynamic rules that change along with the user’s behavior and email content, and it requires no more effort than turning the feature on.  For me, this is a great leap forward in email.
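The static-versus-dynamic distinction can be sketched in a few lines of code.  This is purely an illustration of the idea, not Google’s actual system; the class, the engagement-based scoring, and the threshold are all my own invention.  The point is that nothing here is a hand-written rule: the “rules” are just counts that shift as the user’s behavior shifts.

```python
# A toy behavior-based email prioritizer.  Instead of a static filter
# ("if sender == X, move to folder Y"), it scores each sender by how
# often the user has actually engaged with (read or replied to) that
# sender's past messages, so priorities adapt automatically over time.
from collections import defaultdict

class PriorityScorer:
    def __init__(self):
        self.seen = defaultdict(int)     # messages received, per sender
        self.engaged = defaultdict(int)  # messages read/replied to, per sender

    def record(self, sender, was_engaged):
        """Observe one incoming message and whether the user engaged with it."""
        self.seen[sender] += 1
        if was_engaged:
            self.engaged[sender] += 1

    def score(self, sender):
        # Laplace smoothing: an unknown sender starts at a neutral 0.5
        return (self.engaged[sender] + 1) / (self.seen[sender] + 2)

    def is_priority(self, sender, threshold=0.5):
        return self.score(sender) > threshold

scorer = PriorityScorer()
for _ in range(5):
    scorer.record("boss@example.com", was_engaged=True)       # always read
for _ in range(5):
    scorer.record("newsletter@example.com", was_engaged=False)  # always ignored

assert scorer.is_priority("boss@example.com")            # score 6/7 ≈ 0.86
assert not scorer.is_priority("newsletter@example.com")  # score 1/7 ≈ 0.14
```

If the user suddenly starts ignoring the boss and reading the newsletter, the scores drift accordingly with no reconfiguration, which is exactly what static folder rules can’t do.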

So, why the heck did it take so long?  I can think of two reasons why this development may have taken so long to emerge.  First, the email market has been lacking in strategic innovation for quite a while.  Email is email.  It’s a well-known space, and email providers and designers have become complacent, offering a constant series of small improvements aimed at moving unwanted content out of view rather than bringing wanted content to the forefront.  Second, this is a space where people are pretty sensitive about their privacy.  Many people would be pretty appalled at the idea of Google or another email provider reading all of their messages.  Never mind that it already happens to provide targeted advertising.  So privacy concerns may have previously prevented email providers from implementing such a feature.

Those are my two cents on why this is a great leap forward for the inbox and why this innovation has taken so long.  Please feel free to comment and let me know if you think this is as great an idea as I do, and why you think it may have taken this long.

And, why exactly is this a good idea?


We’ve all experienced it before.  Some VP or director has a brilliant idea for a new distribution method, a plan to launch a new product line, or any number of other schemes that have the potential to be really good ideas.  The idea makes its way onto the company’s strategic road map, and resources start getting thrown at it to complete the project in some crazy time frame.

At some point the project will make its way to some smart analyst’s desk.  He starts to do some investigation and quickly finds that either no work has been done to look into the potential costs and benefits of the project, or that the figures being thrown about are little more than someone’s educated guess.  As an analyst, this is when I usually ask, “Why exactly is this a good idea?”  But by then it’s too late.

Businesses often leap before they look, jumping into projects both large and small without taking the time to delve into the data and use facts to determine whether the benefits justify the costs.  For large decisions, such as new product launches and entering new distribution channels, such an analysis is crucial to the well-being of the company, but it is often overlooked or done superficially.  The analysis is equally, if not more, important for small projects, where it is so often missing entirely.

Taking the time to investigate the value of a project helps reduce the amount of money, effort, and energy we waste on projects that just aren’t worth it.  So, why do businesses so often jump into projects without taking the time to look at the data and make well informed decisions?  I think the answer is twofold.

First, businesses are run by people, and people are motivated by more than the best interest of the business.  The personal (not business) cost of answering these questions can be high.  It takes time and effort to do the research required to make an informed decision, and not everyone is going to be inclined to put in that kind of effort for an idea they already believe to be a winner.  Furthermore, answering these questions upfront runs the risk of having to kill one’s own project, which can create a conflict of interest between the individual’s ego and the business’s best interest.  The business often loses out.

Second, managers empowered to make decisions do not always have the right skills, tools, or data to identify and access the information needed to make these decisions.  Even if I had the best possible intentions to thoroughly examine my next initiative, if the data didn’t exist or I didn’t know it existed, I would have to rely on my gut or just make up some numbers.  This is often the case in projects with no existing analog.

Alternatively, when dealing with projects that extend or modify existing methods, the information to drive such decisions should exist.  However, the needed data may be incomplete or missing because the system’s designers didn’t bother to store it.  This is a problem of design intention and a failure to think adequately about future needs.

In other cases, the data may exist but be buried deep in the unearthly bowels of several databases, where only an expert can retrieve it.  This is a situation where business decision makers need strong relationships with the analytical and technical folks who know how to get at that information.

The solution to the problem of making informed decisions is a two-way street: it is equally incumbent upon analytical and technical folks to build relationships with the people running the line of business.  Such relationships could save us all from having to ask, “Why, exactly, is this a good idea?”