Playbook for Client Troubleshooting
Client-based work is very often tied to deliverables and deadlines, which can sometimes come into conflict with Truss recommended practices for Agile software delivery. This is one of a number of common consulting challenges that teams encounter on our projects. While we work with our clients shoulder to shoulder, and deliver great work, it’s important that we advocate for continuously improving practices, including but not limited to agile delivery and working in the open. This continuous improvement also applies to this document, and it will be updated over time (a “living document”) as we continue to learn and grow.
Purpose:
This playbook provides guiding principles and Truss recommended practices for how we manage client requests and/or working styles that are different from our own. Successful consulting means we will need to meet our customers where they are and bring them along the path of understanding and embracing the practices that we use. It should be expected that our customers will be a limiting factor in just how Agile we can be – that is ok. Embrace this constraint and always work to show how Agile and other Truss values will bring value to our customers, their stakeholders, and their constituents.
Below, please find:
- Section 1: Overarching principles for client management
- Section 2: Tactical plays for troubleshooting difficult circumstances we commonly encounter
- Appendix: A shared lexicon of terms related to agile and waterfall
Section 1: Overarching principles
Meet the client where they are
Our clients have a range of capabilities and constraints when it comes to using both agile and waterfall approaches. We work to understand not only their approaches and constraints, but also their appetite for change.
We ask questions of clients to understand how they are working and what’s affecting that way of working. Often, it’s not that clients don’t appreciate the value of working in other ways, but there are blockers that we may be able to influence.
We then set reasonable and strategic targets for shifting their approach to support better outcomes.
Create some amount of predictability for our clients
Though agile processes are often described as trading predictability for flexibility, most clients need some amount of predictability. After all, we’ve been contracted to deliver something before the end of the contract term. Whether it’s to plan and budget effectively, or to report to higher-ups (including, for many of our government clients, legislative bodies like Congress), all our clients need some level of predictability. There are several ways we create this for our clients:
- **By being reliable.** We create predictability by doing what we say we’re going to do. We set realistic iteration targets and we meet them (most of the time).
- **By being honest and transparent.** We use candor, rather than reassurance, to build trust. We surface risks as soon as we discover them, so that plans can be adapted accordingly. We make our work visible by maintaining an updated sprint board and using it in a transparent way (Hierarchy of Work; Definition of Ready; Definition of Done).
- **By conducting planning activities.** Planning is an important part of all software development processes, and we leverage it to create shared understanding, and communicate information and changes. Our plans are always presented as snapshots representing the best known information at the time, and we update them regularly.
- **By offering choices.** Our clients can count on having a say in making decisions about the project. As we learn new information or as circumstances change, we iterate on our plans. We engage our clients in this process by presenting meaningful options, articulating and visualizing tradeoffs, and making recommendations based on our expertise.
- **By delivering some quick wins for the client upfront.** By providing value early on in the engagement, we can earn their trust as a partner who delivers. Later on if issues arise, these early wins have already helped establish a strong foundation upon which we can negotiate.
Support our clients with outstanding stakeholder communication on the benefits of our approach
Our day-to-day client contacts may be interested in working in more agile ways, but may struggle to gain buy-in throughout their organization. We support our clients by providing outstanding stakeholder communication, ghost-writing materials, and generally working to make our clients look good to their colleagues.
There are several categories of information we communicate to help shift our client stakeholders towards a greater appetite for agile and other processes.
- We communicate the things we’re learning through user and stakeholder research, technical work, and communication from our day-to-day stakeholders. This can be in-depth research synthesis presentations, or simply bullet points included in weekly ship notes.
- Rather than a waterfall-based schedule, we create a product roadmap as our go-to playbook that serves as the source of truth for our product, the direction we’re heading, what we’re prioritizing, and how far we’ve come. This helps us and the client stay in sync with both the immediate and the big picture goals for our projects.
- We communicate **how our roadmap has changed as a result** of new learnings. By frequently and regularly sharing updated roadmaps, we normalize the concept of iterative changes.
- We keep our product and sprint backlog updated and accessible to stakeholders to provide further visibility.
- We also communicate how learnings have de-risked the project.
- When possible, we share how much time or money has been saved in the long-run by spending extra effort now to de-risk the project through user research and iterative practices.
Section 2: The Plays (troubleshooting common challenges)
When a client faces significant barriers to working in agile ways (e.g. deeply siloed organizational structures, heavy ATO processes, budget cycles, timelines or product requirements written into policy)
Plays:
- Conduct an audit of the client’s approaches to software development and appetite for change, and as a team, strategically choose one to three areas to influence over the course of the project.
- Use the Leader Lab’s “Unfreeze, Change, Refreeze” approach to leading change.
- Use a risk registry to communicate to the client the risks inherent in their approach (see the sketch after this list for one lightweight way to keep and rank a registry).
- Demonstrate the value of working in a more agile way on a smaller stage first (e.g., choose something low-stakes for the client). Once the client sees success there, ramp up this approach to more consequential areas.
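As a rough illustration of the risk-registry play above, here is a minimal sketch of keeping a lightweight, client-facing registry as structured data and ranking it by exposure. The fields, the 1–5 scoring, and the example risks are illustrative assumptions, not a prescribed Truss format; many teams keep the same information in a shared spreadsheet.

```python
# Illustrative risk registry entries; the fields, scoring scale, and example risks
# are placeholders, not a prescribed format.
risks = [
    {"risk": "ATO review adds weeks of lead time before launch",
     "likelihood": 4, "impact": 5,
     "mitigation": "Start ATO documentation in the first sprint"},
    {"risk": "Requirements fixed in policy conflict with research findings",
     "likelihood": 3, "impact": 4,
     "mitigation": "Brief policy owners on findings at a regular cadence"},
]

# Rank by exposure (likelihood x impact) so the riskiest items lead the client conversation.
for entry in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    exposure = entry["likelihood"] * entry["impact"]
    print(f"[{exposure:>2}] {entry['risk']} -> {entry['mitigation']}")
```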
When a client is inflexible about both scope and deadline
Plays:
- Share risks to the deadline and/or scope early, and keep the client informed as these change.
- When an obstacle to budget, timeline, or scope surfaces, always provide options and articulate tradeoffs to clients. Visualize external impacts (such as delayed client decision-making) and facilitate tradeoff discussions with something that visually demonstrates impact, such as a sprint board.
- Reference: The Iron Triangle
- Use candor, rather than reassurance, as a trust-building tool.
- Do a prioritization exercise with your client contacts. For example, work with your clients to prioritize features/work within the constraints of a finite budget of effort (points). This can educate clients about developing a realistic plan, and prompt them to work collaboratively to set priorities and make trade-offs.
- Use a consistent interval (or sprint) planning process to determine your team’s velocity, and make it a priority to calibrate your velocity estimates over a few intervals, so that you are consistently achieving your planned outputs within the interval. The longer the interval, the greater the unknowable risk, so the estimate/variability buffer needs to be bigger as well. A consistent task (story) size can be very helpful in improving estimates and predictability. (See the sketch after this list for one way to calibrate.)
- Be especially wary of changes to a sprint while it is in progress, and be candid with the client that this basically “breaks everything”.
- Make work and progress visible to the client and to other teams. This includes making bottlenecks, risks, and discovered work visible, as soon as they are known.
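As one way to make the velocity-calibration play above concrete, the sketch below turns a few sprints of completed points into an average velocity plus a variability buffer. The numbers and the one-standard-deviation buffer are illustrative assumptions; calibrate against your own team’s history rather than treating this as a rule.

```python
from statistics import mean, stdev

# Hypothetical completed story points from the last few sprints (illustrative numbers).
completed_points = [21, 18, 24, 19, 22]

velocity = mean(completed_points)       # average output per sprint
variability = stdev(completed_points)   # how much actual output swings sprint to sprint

# Commit to slightly less than the average so the team consistently hits its plan;
# a one-standard-deviation buffer is just one reasonable starting point.
planned_commitment = velocity - variability
print(f"Average velocity: {velocity:.1f} points")
print(f"Suggested commitment with buffer: {planned_commitment:.1f} points")
```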
When the client requires extensive upfront planning
Plays:
- Use ranges for time estimates, providing best and worst case options, with clarity around what would result in a worst case, and what would result in a best case. Adjust the ranges as part of your just-in-time planning. (See the sketch after this list for one way to record ranges and confidence levels.)
- Another option is to provide just a best case, and clearly outline the prerequisites or assumptions for that estimate to be true (for example, that certain systems will be in place, or that the client will make decisions by a given date).
- Attach confidence levels to time estimates, based on how much information you have about the work to be done, and be able to explain why. Adjust the confidence levels as part of your just-in-time planning.
- Visually signal the emergent nature of roadmaps. For example, prominently include information about dependencies and unknowns as well as must haves vs. nice to haves.
- Use roadmaps primarily to align on and sequence priorities, and visualize relative effort via time and resources applied.
- Default to a framework like “Now, Soon, Later.” If your roadmap references any specific dates, include information about how often the dates will be re-evaluated.
- Use spikes when a user story cannot be well estimated until the development team does some work to resolve a technical question or design problem.
- For projects with long-term horizons, it may be useful to conduct periodic planning throughout the project—for example, on a quarterly cadence.
- Complement your upfront planning with “just in time” planning where you continuously refine the backlog and communicate out changes. Break upcoming user stories into implementation tasks—ideally at a fairly granular level of less than a day’s work.
- In addition to just-in-time planning, projects of sufficient duration and scale often have a need for parallel tracks for discovery and delivery, as noted in the Truss Playbook.
- Add the risks that stem from upfront planning to a client-facing risk registry, and discuss them with the client, so the risks around dates or the need to change scope are known.
- Be sure to measure and demonstrate the amount of time and effort dedicated to planning activities, so the client is aware of the impact to the timeline.
- Use a Hierarchy of Work (e.g., task→story→epic→project→milestone) in client communications, to provide explicit and direct mapping from the customer’s high-level requirements all the way down to progress during each interval. If tasks are sized at a day of work or less, the Hierarchy of Work can be used to monitor and show daily progress toward even the highest-level goals.
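To make ranged, confidence-rated estimates (the first plays in this list) easy to share and update, here is a minimal sketch of recording an estimate with its best case, worst case, confidence level, and assumptions. The structure, field names, and example values are illustrative assumptions, not a required format.

```python
from dataclasses import dataclass, field

@dataclass
class Estimate:
    """A client-facing time estimate expressed as a range with a confidence level."""
    work_item: str
    best_case_weeks: float
    worst_case_weeks: float
    confidence: str                       # e.g. "high", "medium", "low"
    assumptions: list[str] = field(default_factory=list)

    def summary(self) -> str:
        lines = [
            f"{self.work_item}: {self.best_case_weeks}-{self.worst_case_weeks} weeks "
            f"({self.confidence} confidence)"
        ]
        lines += [f"  assumes: {a}" for a in self.assumptions]
        return "\n".join(lines)

# Illustrative example; the item name, numbers, and assumptions are placeholders.
login_flow = Estimate(
    work_item="Login flow integration",
    best_case_weeks=2,
    worst_case_weeks=4,
    confidence="medium",
    assumptions=[
        "Client identity provider is available in the test environment",
        "Client decision on MFA policy is made before the sprint starts",
    ],
)
print(login_flow.summary())
```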
When the client considers estimates to be commitments
- The first 6 plays mentioned above are good first steps to address this challenge.
- Make work and progress visible to the client and to other teams. This includes making bottlenecks, risks, and discovered work visible, as soon as they are known.
- Conduct client education about the planning fallacy and how it poses a risk to project success
- Run the client through the following scenario from Mike Cohn:
- To help a stakeholder understand the concept of estimating, ask them to estimate the median amount of time it will take them to drive home. Then pose different scenarios: what if there was a flood? A car wreck? Construction? (source)
- If a contract or client expectation is for commitments to cover a longer period of time (such as a program increment), additional dedicated time to plan must be incorporated into the process AND the client must commit to not introducing new scope within the committed period, just as if that period were a sprint.
- Warn the client early if a target is at risk of being missed, and explain why, so it doesn’t come as a surprise. Often the cause is increased complexity or newly identified details that ultimately benefit the client. Framed this way, it isn’t that we missed the target; rather, we did more work than expected on X, and that work produced this benefit.
- If a target was missed, own up to it to build trust. Then highlight the reasons it was missed. Often the cause is increased complexity or newly identified details that ultimately benefit the client: we did more work than expected on X, and that work resulted in a specific benefit.
When a client won’t allow for user research or usability/product testing
Before you select a play, reach for understanding
Sometimes there are legal reasons we cannot talk directly to users or test products with them. There may also be very real, valid reasons that are not related to policy or legal constraints. It’s helpful to understand why a client is not allowing access to users before selecting the right play to employ. Lean into your curiosity and leave room for missing information. Most clients do see the value of hearing from users directly, but might be blocked or have different experiences that are affecting their guidance. And remember to adapt your strategies and plays along the way: what might work with one stakeholder or situation might not work for another.
Plays for being blocked on user research:
- Desk research: not the most glamorous, but often a client will have documentation about a process or customer feedback. This might be in call logs to a service center or standard operating procedures (SOPs) or training materials. Teams have looked up Glassdoor reviews or job descriptions when they haven’t been able to talk directly to users. Do some digging and you might find nuggets of information that are helpful.
- Bonus points for: Reviewing what you find with people who might work with users or who developed documentation. Often they will have stories or context behind how that documentation came to be.
- Risks: desk research can take a lot of time to yield useful information. Skills needed include deep reading, parsing relevant information, and synthesis.
- **Proxy users:** Who are the people who interact most closely with your target user group? Examples might be customer service representatives, clients, or supervisors. Ask them about their direct experience with users and to share examples of what they have seen; make clear they are representing their own observations, not speaking for users.
- Risks: They are not the users, even if they know the users best. Their own bias and ideas might influence their answers. One way to mitigate this is to give them separate opportunities to think about something from a user’s perspective and then provide their own feedback and thoughts. It’s not perfect and carries a higher risk that we miss an important nuance to a user’s experience.
- Mitigations are to release an alpha or beta version of a product to actual users to get feedback (a kind of late usability test), and build in enough time to iterate on their feedback before a product is released to everyone or becomes a system of record.
- Shadowing people who are user-facing: Sometimes teams are blocked from directly talking to users, but we encourage you to ask if you are able to shadow someone who talks with users often. For example, listening to customer service representatives answering phone calls or emails from direct users can be informative, and then you can ask them questions about how they resolve issues and what issues they run into.
Plays for being blocked with usability/product testing:
It’s important to understand ‘why’ we might be blocked on usability testing before choosing your play. Is it due to time constraints? Are stakeholders doubtful you will find anything meaningful from it?
- Stakeholders as proxy users: when blocked from reaching out to the real end users, ask stakeholders to run through the product as if they were a user, as a minimal form of testing. You might position this as a time to find bugs or blockers in the workflow, as part of natural quality assurance. At this point you might identify questions that neither you nor a stakeholder can answer, and you can include them in brainstorming how you could find the answers. This might lead to them being open to you reaching out to users for additional information and testing.
- **Go lean on testing:** Often teams are limited in time/capacity to do usability testing. Conspire with your team to find very lean ways to get some feedback. Instead of going to 10 users, find 5. Instead of testing all features/workflows, prioritize the most important or most risky and get to the others if you have time. If not, flag those for future testing or to inform using another method (e.g. analytics).
- In the government client context, there are other reasons to go small. Going beyond 9 users in a study can trigger the need to get Paperwork Reduction Act signoff from OMB. This can pose significant delays.
- Share a real example of a product (ideally from a similar org to the client’s) that was not validated through testing with users, and how that wound up costing that client much more in the long run.
- Risks: Clients do not always respond well to “here’s what failed somewhere else.” Assess what kind of data this client responds to and leverage that.
- Share a real example of a product (ideally from a similar org to the client’s) where validating with user research or usability testing had an outsized positive outcome.
- Lean on analytics: Determine an analytics plan with key user-facing indicators and clear research cycles for digging into the quantitative data.
- Risk: these often come after a product is released, meaning there is a higher risk of not meeting user (and thus business) needs before release. This creates a longer feedback loop and more work to iterate.
When a client won’t adjust plans based on new information
- Make risks known, offer options, and make recommendations that reflect our expertise and our understanding of the client’s goals and constraints.
- Make a best effort to persuade the client to adopt our position, using language that will resonate with their needs.
- Look for allies and champions in positions of influence at our client organizations.
- If after making a good faith effort to bring along the client, they decide to go against our recommendation:
- Document our recommendations
- Fully commit to their decision
- (If the team is unable to fully commit to the decision, bring it to the leadership team for discussion of next steps.)
When a client is reluctant to update artifacts to reflect status / convey what will be perceived as bad news
- Provide talking points to the client that help reframe the story. Talking points can focus on unexpected benefits of the project status (e.g. we missed this delivery date, but this is because we wound up focusing our time on addressing a critical security issue).
- Focus on what is needed in order to change the status. This way there is a clear action that can be taken, vs. simply delivering bad news.
- Offer to partner with the client to deliver the news, or to work with them to practice how to deliver the news.
When a client requires a “switch flipping” approach
Plays:
- Propose a soft launch. As long as the soft launch happens ahead of the planned timeline (as opposed to extending the timeline to support a soft launch), you can potentially build support for a quiet launch with minimal fanfare.
- User-segment your soft launch with feature flags to reduce risk or build buy-in, if not blocked by security requirements for out-of-the-box solutions. (A minimal sketch of this kind of flag appears after this list.)
- Approach this as you would a migration challenge—build temporary systems to bridge the gap between the old and new systems, then finish out the long tail so that you can sunset the old system.
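To illustrate the feature-flag play above, here is a minimal sketch of deterministic, percentage-based user segmentation, assuming you control the application code and can key flags off a stable user identifier. The flag name, percentage, and hashing scheme are illustrative assumptions; a hosted feature-flag service would work just as well.

```python
import hashlib

# Hypothetical rollout configuration: flag name and the share of users who see the new flow.
ROLLOUT_PERCENTAGE = {"new_checkout_flow": 10}  # start the soft launch with 10% of users

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically assign a user to a rollout bucket for a flag.

    Hashing flag + user id gives each user a stable bucket, so the same people
    stay in the soft-launch segment as the percentage is gradually increased.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < ROLLOUT_PERCENTAGE.get(flag, 0)

# Example: route each user to the new or existing experience based on the flag.
for user in ["user-123", "user-456", "user-789"]:
    experience = "new checkout flow" if is_enabled("new_checkout_flow", user) else "existing flow"
    print(f"{user}: {experience}")
```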
When a client can’t or won’t actively participate in making decisions about the product
Plays:
- Identify and establish regular touch points with someone influential in the client’s hierarchy who is invested in the success of the project.
- Consistently articulate how lack of specific support or engagement from the client side is impacting the team’s ability to deliver. Having a clear paper trail can be useful if new client stakeholders come onboard.
- Create a stand-alone list of blocked items to review regularly in existing, recurring conversations with the client. This can serve as a prompt and forcing function for resolving the items. Put the most urgent items at the top of the list, so you get the most value out of the conversation even if they cut it short.
- Create a visual artifact (like a client-facing roadmap, sprint board with blocked items, product backlog) that highlights the ways lack of client engagement is blocking the team. Show the blockers in bright red so it is unmistakable how this puts the overall project success at risk.
- Check for hard-to-verbalize impediments: Are we having an hour-long status meeting with someone who can’t focus for that long? Are we trying to discuss decisions at a bad time of day for them? Are there multiple stakeholders at the same meeting who are afraid to commit to anything in front of each other?
- As a last resort, refer to language in the contract that requires client involvement.
When all else fails
If you try the plays listed here, and are still struggling to sufficiently address the situation, please reach out to your CEM and your project’s Executive Sponsor for help. They are here to help you navigate any seemingly intractable scenarios.
Appendix
Broad terms like “waterfall” or “agile” suggest a variety of approaches, some of which can work in harmony. This blurring of language can lead to confusion and unnecessary dismissals of compromise approaches that might meet the needs of a given project.
In order to avoid this, we use more specific terminology when discussing issues related to Agile/Waterfall project delivery.
| Instead of saying… | We’ll say… | When we mean… |
| --- | --- | --- |
| Waterfall | Fixed scope | A predetermined outcome, including potentially specific requirements or deliverables |
| | “Up front” or periodic planning | An approach to software development that requires a lot of requirements gathering, story and task definition, and sequencing of work up front, typically for the purpose of creating predictability or the perception of predictability |
| | “Big bang” or “Switch flip” release | An approach to development where a new system is adopted all at once, with no transition period between the old and new systems. |
| | Fixed deadline | The client has a deadline that would be difficult to change; for example, one that is legislatively mandated |
| | Stakeholder-driven | An approach to development that assumes stakeholders can stand in for end users |
| Agile | Emergent requirements | An approach that centers around a desired outcome, but allows for specific requirements or deliverables to be discovered through user research, usability testing, technical discovery, and more |
| | “Just-in-time” planning | An approach to planning that reduces waste by conducting in-depth planning activities just prior to implementation |
| | Iterative releases | A development approach where gradual improvements to functionality are regularly released |
| | Incremental releases | A development approach where the product is sliced into fully working functional areas that are released consecutively |
| | Scrum | A framework, defined at scrum.org, that prescribes a set of ceremonies, artifacts, and roles for delivering products. A defining feature of scrum is a timeboxed interval (sprint), and a “push” system where tasks are generally assigned ahead of time. |
| | Kanban | A method for visualizing work, workflows, priority, work in progress limits, throughput, and bottlenecks. Uses a “pull” system where prioritized tasks are picked up by team members as needed. |
| | Scrumban | A hybrid of Scrum and Kanban that uses some or all of the scrum ceremonies and artifacts (and sometimes the roles), but without an explicit interval goal/target and with an emphasis on just-in-time planning. Typically uses a “pull” system for work assignments. |
| Best Practices | Industry Standard Practice | Methods that have been carefully designed and published through a standards body. For example, NIST 800-63B |
| | Common Practice | Widely-used, but not necessarily consistent or well documented processes |
| | Truss Playbook Practice | Methods based on lessons we have learned from previous engagements and bring with us. |