Bob, thanks for your two cents. The kitty is now up to a commendable $.06.
You mentioned: "If we decided to measure across teams, maybe we would want to look at the number of User Stories Planned vs. User Stories Delivered, or maybe the number of User Stories Delivered vs. the number of User Stories accepted by the customer."
I agree this is a more appropriate metric than using points to gauge performance across the PMO, since a point value from one project means something different from a point value from another. However, my comment about using points was strictly about measuring the project itself, not comparing it against other projects, which Mike rightly called out as a potential pitfall. I do like the metrics you chose, provided they are calculated as percentages. That way, we can compare performance across all projects without concern for project size.
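To make that concrete, here is a quick back-of-the-envelope sketch (the numbers are invented) of how a percentage washes out project size:

    # Invented numbers for two projects of very different sizes
    planned_a, delivered_a = 120, 96    # large project
    planned_b, delivered_b = 10, 8      # small project

    delivery_rate_a = delivered_a / planned_a * 100    # 80.0%
    delivery_rate_b = delivered_b / planned_b * 100    # 80.0%

Both projects land at 80%, so they can sit side by side on a report even though one is twelve times the size of the other.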
I would hesitate to sign on to the "stories delivered vs. stories accepted by customer" metric, only because it muddies the picture of team performance. It's one thing for a team to complete all of its work on time, but adding the variable of "the customer" complicates deriving a team performance metric. A customer introduces a new bias: project performance now depends in part on whether or not the customer chose to accept the completed work.
Branching off on this thought for a moment…
One of the things I think we're doing well in our current reporting platform is defining project health through color-coded labels. From what I've observed over the years, Library projects are exceedingly dynamic, which makes it difficult to establish a baseline of health across multiple projects. For example, some projects are external, while most others are internal. Some operate within fixed budgets, while others are treated as maintenance projects with varying
access to additional funding or resources. Some projects are software-based, others are implementation-, event-, or process-based, meaning certain projects can be tracked for the quality of output, while others cannot. I’m sure there are better examples,
but my point is that for each variation in project type, we reduce the number of metrics that can uniformly apply across all PMO projects. As our project centralization efforts continue, the problem compounds.
This observation leads me back to our current color-coded system. I feel it has been a decent solution for deriving project health across all project types because it hits upon the triple constraint of scope, schedule, and budget/resources. My suggestion would be to find ways to make this reporting method work more effectively than it does in its existing form. In speaking with Lisa about how the Project Access reporting tool works,
I speculated that it could be very helpful in a future iteration of the tool to offer PMs the ability to complete an online form asking for specific, uniform questions about each project being managed. These “yes/no” type questions would tabulate and automatically
affect the status of the scope, schedule, and resources of each project. This is already happening in the tool with respect to the project schedule: extend the project completion date and the status escalates to yellow. Why not agree to a set of project metrics
that are not tied to any given project management framework, and base our performance and success measures on them? Bob, this is where we could also capture the metric you suggested by asking in the form, "How many tasks/stories were planned?" as well as, "How many of those tasks/stories were completed?" Beyond this, we would need to explore whether other reporting criteria are truly necessary or useful to management.
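To make the idea concrete, here is a rough sketch of how the tabulation might work. The questions, thresholds, and color rules below are all hypothetical (this is not how Project Access behaves today), just an illustration of the kind of logic the form could drive:

    # Hypothetical yes/no answers a PM might submit for one project
    answers = {
        "scope":     {"new_work_added_outside_plan": True,
                      "deliverables_removed_or_deferred": False},
        "schedule":  {"completion_date_extended": False,
                      "milestone_missed_this_period": False},
        "resources": {"budget_or_staffing_shortfall": True,
                      "unplanned_spend_this_period": True},
    }

    def status(flags):
        # Hypothetical rule: 0 "yes" answers -> green, 1 -> yellow, 2 or more -> red
        raised = sum(flags.values())
        return "green" if raised == 0 else "yellow" if raised == 1 else "red"

    report = {area: status(flags) for area, flags in answers.items()}
    # -> {'scope': 'yellow', 'schedule': 'green', 'resources': 'red'}

The point is simply that uniform questions can roll up to the same three colors we already report, without caring what framework the project runs under.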
By reporting on projects using these common-denominator metrics (none of them point-based, to address Mike's concern), we can future-proof ourselves against changing project management models and varying project
types that would otherwise make health and performance tracking impractical.
This is quite an unusual ramble for me, but I believe it’s one of the few ways to enforce accountability across a diverse set of project types, and I hope it can stir some ideas on how we might tackle the impending
need to take on all the Library’s medium and large projects.
Steve
From: Agile at Library [mailto:[log in to unmask]]
On Behalf Of Shirley, Robert L.
Sent: Wednesday, August 23, 2017 1:32 PM
To: [log in to unmask]
Subject: Re: Not Everything... & Story Point Usage
Great thoughts – I’d like to throw in my 2-cents…
IMO, we should certainly not use story points or velocity in performance comparisons across teams. If we decided to measure across teams, maybe we would want to look at the number of
User Stories Planned vs. User Stories Delivered, or maybe the number of User Stories Delivered vs the number of User Stories accepted by the customer. These rate measures would have
some validity across teams and provide useful insight into the effectiveness and quality of different teams. But, sure…every project is a snowflake.
:)
Also, IMO…Release burn-up charts are a good gauge of a project's health over time. They're the closest analogy to a Gantt chart you're gonna get in Agile. I know the chart relies on some "estimates", like the number of story points that must be delivered to reach a specific point of functionality (e.g., an MVP), and it also assumes a steady velocity (which we know isn't so easy to calculate or sustain), but it can be used by the PM as a data point to help
in understanding and communicating project status – which is what the stakeholders need most.
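For what it's worth, the forecasting math behind the burn-up line is simple enough to sketch (the figures are invented, and it leans on the steady-velocity assumption noted above):

    import math

    # Invented figures: scope needed for MVP, work completed so far, recent velocity
    mvp_scope_points = 400
    completed_points = 220
    avg_velocity = 30   # points per sprint, assumed to hold steady

    remaining = mvp_scope_points - completed_points        # 180 points
    sprints_left = math.ceil(remaining / avg_velocity)     # 6 sprints

    print(f"Roughly {sprints_left} more sprints to reach MVP at the current velocity")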
Bob Shirley
Team Lead, PMO
Library of Congress
(202) 707-0166
"how would analysis without points serve management in a meaningful way”
That's the rub: I don't believe that Story Points are there to support management. They are there to support the team, and I don't believe they should be used as a health gauge; as a measure of health they tell you essentially nothing.
I have asked for a long time that Release Burn Down charts be used to identify health issues within a project. That gives an actual look at the current (and forecasted) status. You can actually forecast potential missed dates into the future. The release burn chart is the Agile way of reporting project health; that is what it was created for. Since story points are chosen by the team, and can fluctuate, they are not a good measure of "health", only "effort". Using points this way also promotes the idea of comparing across projects. I know that folks say they won't do that, but it's a natural instinct. Why does Project A average 120 points/sprint, where Project B only averages 18? There are so many possible answers to this, and none of them speak to the health of the project. (One team may simply size its stories in 20s and 40s while the other sizes them in 1s and 2s.)
Software Engineering Manager
Mike,
Great to hear we can tweak the default templates to accommodate estimating tasks. But I found your comment at the end of the last message the most interesting, concerning the use of
points as a reporting metric. I think we are now getting into a different conversation, but one that's worth delving into.
From what I’ve seen so far (in my own weekly reporting through the Project Access tool), our status metrics are summarized as color-coded labels. It’s only when these metrics escalate
out of the green category and into the yellow or red that they become actionable by someone other than the PM. If it weren’t for the burnup chart or version report that we PMs include as part of our reporting, how would analysis without points serve management
in a meaningful way? Thanks for correcting me if I misinterpreted your comment.
We can certainly bake it into the project creation process.
We also need to discuss the importance of story points. I see them as a project level tool, and they should not be used as a project
reporting or status metric.
I can confirm we are tracking points through Tasks in JIRA, and you're right: it's a deviation from the default config. Would we need to reach a consensus before making this a permanent part of the default project template, or is this something we can simply bake into our default project templates?
Agree, Stories are not the only buckets we should be using. The only issue with using Tasks in Jira is that they don't track Story Points by default. That makes it difficult to track a team's velocity, something we would like to do in the future. I believe we can add Story Points to a Task in Jira, but I'm not sure…
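If we do get Story Points onto Tasks, pulling a team's velocity out afterwards would be straightforward. Here is a rough sketch using the Python jira package; the server URL, project key, sprint name, and the Story Points custom field ID are placeholders (the field ID varies by instance), so treat it as illustrative only:

    # Illustrative sketch only; requires the "jira" package (pip install jira).
    # customfield_10016 stands in for the Story Points field, which varies by instance.
    from jira import JIRA

    client = JIRA(server="https://jira.example.gov", basic_auth=("user", "api-token"))
    issues = client.search_issues(
        'project = ABC AND sprint = "Sprint 12" AND status = Done',
        fields="customfield_10016", maxResults=200)

    velocity = sum(issue.fields.customfield_10016 or 0 for issue in issues)
    print(f"Sprint 12 velocity: {velocity} points")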
Software Engineering Manager
Thanks, Bob.
I’m definitely an advocate for alternatives to formal story writing. On the projects I’m on, we will only write stories when the
unit of work can be formally tested by our QA staff. Formal story writing forces us to provide an Acceptance Criteria value, which QA relies on to ensure the story can be successfully reproduced and tested. The model we use to determine the Acceptance Criteria is "Given-When-Then" (e.g., given a patron is logged in, when they submit a request, then a confirmation message is displayed). QA can integrate properly formatted tickets into their regression test workflows with automated testing frameworks that consume the Gherkin syntax. Even the G-W-T model can be made more efficient: in many cases, we use "Given-Then" success factors. In other words, an explicit trigger ("When") is not always necessary for the work to be successfully reproduced.
Conversely, when we have system-level and infrastructure tasks, investigative tasks, or project activities involving planning, we
deliberately categorize the work as “tasks” instead of “stories”. As noted above, tasks are distinguished from stories because they do not need to be peer reviewed or formally tested by QA. Significantly less effort is required to write an “Action/Expected
Result" ticket than a formal agile story. "Task tickets" greatly reduce the consensus-building that would otherwise go into writing thorough and complete stories, as well as the story grooming that would take place in a separate ceremony. Task tickets do not need the same level of group scrutiny when it comes to estimation, since most of our task-level tickets involve only one or two people. Perhaps this is different in other projects here? By not having to write formal stories
all the time, we can allocate any saved meeting time towards more useful group discussions/activities.
We have not consciously adopted the FDD model for writing task-level tickets, but it appears we naturally gravitate towards its principles by capturing the most important aspects of any planned work: what the action is and what its expected result should be.
Just our experience so far and two cents!
https://www.mountaingoatsoftware.com/blog/not-everything-needs-to-be-a-user-story-using-fdd-features
Here’s an older article by Mike Cohn where he talks about how to handle user stories that are more focused on back-end features and
do not have customer-facing components. He advocates for a different syntax for these stories, leveraging FDD as an example. Instead of saying "As a developer…" or "As a product owner…", say something like "Merge the data for duplicate transactions" (<action> the <result> <by|for|of|to> <object>).
What does everyone think of this approach? How do you handle these types of user stories in your backlog?
Thanks!
Bob Shirley
Team Lead, PMO
Library of Congress
(202) 707-0166