The single most costly – and sadly very common – outcome for a data team building decision support tools is building things their stakeholders don’t want. (In fact, this is the single most costly outcome for any team.) Everyone has seen it: a dashboard is useless or unusable; a Jupyter notebook is impossible to decipher without a PhD in physics; a derived table is claimed to be dangerously hard to work with. When teams make these artifacts with the wrong mindset, the project still gets delivered, the stakeholders who need it to make product decisions are underwhelmed, and the data team is demoralized by the reception. But management rewards the people who complete the project, so the problem persists (not to mention there’s new code to maintain). And sadly, no one learns how to do better next time, because the reasons everything turned out not-quite-right remain invisible.
A few years ago I began to notice that those invisible reasons, when made explicit, tended to look more or less the same. When things go well, it's because the team has a small set of good "product" habits that motivate them to build stuff people might actually use. When things don't go well, those habits are absent.
I find the "product" framing useful because these habits are easily learned from working with product managers, reading the PM literature, or working at a startup that has a strong product culture. Of course, data teams don’t usually have product managers, so it’s really on the shoulders of the data scientists, engineers, and managers who become project leads to fill in these critical "product"-shaped roles.
So I thought I’d write something for my colleagues to make those habits visible, as I’ve experienced them. I was able to boil them down to five essential ones. Practicing them leads to projects that deliver internal stakeholder value more successfully, making everyone much happier. Think of them as complements to your team's data skills, not alternatives. They are:
- focusing on understanding your stakeholder’s problems;
- talking & listening to everyone (a lot);
- prototyping early and often;
- ruthlessly prioritizing the felt experience; and
- thinking big-picture.
Of course, your team may already have some of these habits. Hopefully they seem obvious when I write them out like this! But honestly, I haven't seen anyone describe them holistically as they relate to building tools for internal stakeholders. I think this is because we don't really hire for these skills, and they're rarely called out or rewarded in a way that promotes the habits.
But when a team doesn't practice these – and again, that's often the default – projects end up on one or more of these bad paths (paraphrasing a brilliant Shreyas Doshi insight whose source I cannot for the life of me find):
- The team doesn’t build any kind of relationship with their stakeholders, so they’re disconnected from the problem that needs to be solved and from the people they’re serving, often leading to a big empathy gap that makes matters worse;
- Their project doesn't solve the stakeholder’s problems because the team doesn’t talk to them enough to understand those problems, leading to the big problem of building something the stakeholders don’t actually need.
- The thing the team is actually building takes forever to finish. A project team can spec out how they plan on building the thing, but they haven’t done the work of understanding the problem they’re solving for stakeholders, so it’s unclear how to shift gears when the requirements inevitably seem to change. And they will change if they haven’t actually figured out the stakeholder’s problem.
- The stakeholders are underwhelmed by how confusing and unusable the output is. Data teams sometimes overvalue technical excellence (resilient & tested systems, clever conference-talk-worthy methodology) and undervalue good UX (good written communication, easy-to-grasp data visualization, documentation, a usable interface, extensibility). Stakeholders need easily understandable and impactful insights, and they don’t care as much about the engineering and data science “signals of excellence.”
- The stakeholders stop trusting the team (and data) because they don’t deliver. Poorly-serviced stakeholders will find ways around underperforming data teams – either they stop making decisions with data, or they hire their own analysts & pay for new tools without having the expertise to use them as intended, depending on the size and structure of the company. This often leads to a lack of confidence in data as a concept, which leads to bad decisions, which leads to bad products.
- The team is demoralized because building something useless sucks the life force out of them. People do less-good work, some people get burnt out, and you end up having to rehire to backfill those frustrated ICs who leave.
I've seen all of these bad paths before. Maybe you have too. But fear not; teams that have these habits usually have a higher chance of building the right things and avoiding these bad paths entirely. These habits are teachable, and once you see the value in them, they practically become instinctual.
Let’s go into more detail.
¶ (1) focusing on your stakeholders' problems
When planning for the quarter, data teams often set as a goal some kind of project output: a dashboard for this, a pipeline for that. Senior ICs & project leads write a spec that mitigates the technical risks, and the team implements and delivers.
But this kind of project thinking tends to leave out the stakeholder’s problems entirely – that is, the people you’re building these projects for. How do you know that the project actually meets the stakeholder’s needs? Isn't that the goal?
A data team with good product habits tends to focus on understanding the stakeholder’s problem as much as they focus on implementing the solution. Without a thorough understanding of the problem facing stakeholders, they’re less likely to build a solution that meets the need. They shift their line of questioning from “how?” to “why?”, and then ask over and over again until there are no more "why?"s.
When you give proper weight to the stakeholder’s problem, it becomes clear that building the wrong thing, even if you actually finish it, simply won’t cut it. It helps curb your natural impulse to just build-build-build. A problem-focus will change your incentives, and you are more prone to make better decisions, even if those decisions are hard ones like “this thing we’ve built is taking us in the wrong direction.”
Another nice side-effect of focusing on problems is that it clears any confusion about secret goals that might serve an IC's tinkering instinct, such as trying a new technology or methodology. That is not to say that a team should never experiment with new technologies; but there is a time and place, and if the solution to the stakeholder's problems has a high return, it might not be sensible to bet the farm on an unproven technology.
Learn more: When Coffee & Kale Compete by Alan Klement.
If you'd like to learn a mental model for solving stakeholder problems, I recommend When Coffee & Kale Compete. This book outlines the popular "jobs to be done" approach of consumer product development, but the lessons are no different for internal tools.
¶ (2) talking & listening to everyone (a lot)
A data team with good product habits explores their stakeholders’ problems by talking & listening to them. A lot. It's an exercise in relentless empathy, and it's not always comfortable, but it's absolutely vital to the whole endeavor.
When I say “talk to them”, I really mean you need to understand that your stakeholder’s success is your success. They're your captive customers, & you’re more likely to have a successful project if you cultivate a good professional relationship with them. That relationship enables candor, which leads to real discussions about their needs, which leads to better solutions, which leads to happy, successful stakeholders (and a happy, successful data team).
“Talk to them a lot” is obvious advice to people who like talking to others, but it’s unevenly valued as a skill or practice among data teams, so it often goes unmentioned and unrewarded when a project lead does the work to talk to everyone and unblock the flow of information. For data scientists and engineers especially, talking to stakeholders a lot isn’t what inspired most of us to pursue this career in the first place.
Yet communication skills are crucial to being effective in nearly all careers. It’s no different for data workers.
Similarly, the “talk to people” rule critically applies to your team, the people who are with you to explore the problem and find solutions. When the project lead understands their responsibility is to make sure everyone on the team is aligned, they go out of their way to achieve that alignment through conversation. And when the team is talking to each other regularly and all voices are heard, new and better solutions usually come up that leverage everyone’s talent and expertise.
All of this is to say that regular communication transforms the project lead into a context machine. When someone knows how all the pieces of a project work together, they help fill in gaps as they see them. Talking a lot both creates the context and passes it along.
Learn more: Just Enough Research by Erika Hall.
It can be hard to figure out how to begin talking to people. Perhaps your stakeholders are busy and feel they don't have time for relationship building. Or maybe you don't feel you have the time – or, importantly, you don't know how to do it. No matter: just conduct an interview with them to begin the dialog about the problems they're having, their own stakeholder landscape, and how they're currently solving their problems today. If you are unsure (or anxious) about how to begin, I'd recommend reading Chapter 4 of Just Enough Research, on conducting organizational research. The idea of stakeholder interviews may feel a little formal, but in reality, formality and process are great entrypoints for shy, introverted, or green tool makers.
Talking to your stakeholders requires a mindset shift. Especially toward the beginning of the dialogue, your goal is to talk to them about their problems. You should not be explaining solutions you have in mind; you should be probing for information and listening. This Startup School video on talking to your customers is a great introduction to customer-centric interviews.
As for improving the communication flow within your team, the two tools you have are straightforward: meetings & document-writing. For meetings, a good guide is "Effective Meetings at Spotify". Meetings can be awful – especially in the Zoom era – if they're not well-designed. And as for the other tool, many tech companies have some kind of workplace writing culture that enables the free-flow of ideas in an async fashion. I wrote something about Mozilla Data Engineering's proposal-writing culture, if you'd like to explore the consequences of communicating & aligning on decisions without having to be in the same room.
¶ (3) prototyping early and often
A data team with good product habits is always looking for ways to reduce the risk of building the wrong thing through techniques like product prototyping.
This is quite different from technical prototyping. Technical prototypes get team members to think through the mechanics of the proposed solution to make sure it's feasible. They also tend to create more alignment among team members, since everyone can see the same solution.
product prototyping ≈ artisanal gradient descent
While technical prototypes are invaluable, they don't inherently reduce the risk of building a solution no one needs. That's why you should be making product prototypes. Think of them as iterative loops through the solution space: show just enough of a solution to stakeholders as early as you can, gauge whether it solves their problems the way you'd hoped, and repeat – all before spending time building the full solution. This approach optimizes for feedback speed, so each prototype should be as cheap & efficient to produce as possible. The stakeholders will get a better understanding of what the real problem is – and so will you – with each prototype you put in front of them.
This product prototyping ethic isn't just about figuring out what big thing to build. It's about inexpensive validation in any context. Here are four big benefits to pursuing the product prototyping loop I've encountered in my career:
- You reduce the total cost of building since the foundational assumptions about UX and value are tested in prototyping and not while implementing the full solution;
- You reach a better solution much faster, which reduces the risk of investing too much effort into the wrong solution;
- You produce more actual stakeholder value over the lifecycle of the project, since you’re more likely to reach a higher-utility solution in the first place (and your functional prototypes will often deliver useful insights); and
- You build a stronger relationship with your stakeholders because you demonstrate that you're committed to collaboratively working toward good solutions with them. They will have a clearer perception of the status & trajectory of the project than otherwise.
It's worth juxtaposing this product prototyping iteration loop against the common bias-for-building fallacy. Even if you’ve talked to your stakeholders once or twice, it doesn’t mean you actually know how to solve their problems. We’ve all seen too many cases where engineers do in fact talk to stakeholders early on, go off and scratch their building-itch for months without getting any further feedback, then get frustrated by “changing requirements” messing up the thing they’ve built once the stakeholder sees the solution for the first time. It’s clear, when I put it in these terms, that it’d be ridiculous to be upset about this outcome if you didn’t spend enough time validating that your solution solved the problem. It doesn't have to be this way. Just prototype.
Some resources on the topic:
- Prototyping at Slack – a nice writeup by Kyle Stetz about how Slack uses prototypes to explore solution spaces.
- Flavors of Prototypes – Marty Cagan delves into the different types of prototypes practiced at strong product companies.
- Minimum Desirable Product – Andrew Chen writes about building first for desirability (user) rather than viability (business) or feasibility (engineering).
¶ (4) ruthlessly prioritizing the felt experience
A data team with good product habits tends to be ruthless about deciding what a finished project should feel like. This requires a mindset shift toward prioritizing the felt experience of using & getting value from the project above all else. This is, of course, not to say that you shouldn’t make sure your data insights are accurate, or that you are free to be a sloppy engineer. It is to say, though, that a technically impressive project doesn't inherently solve your stakeholder’s needs.
Left to their own devices, data teams might feel the urge to meet perceived technical requirements that are not always necessary to deliver the solution to stakeholders. These requirements may be more relevant in other contexts they’re familiar with (consumer products, research, more robust systems) yet are often misapplied either without critical consideration or as a way to produce a “signal of excellence” to peers. Here are a few examples of requirements I have seen relaxed before, in the right context:
- Comprehensive ETL where a much simpler, potentially slower query against the database tables may suffice. Maybe the data doesn't need to be up-to-the-minute accurate; maybe it doesn't need to be optimized for fast querying. Maybe the data set won't be relevant in two months; maybe you could sample in some way. Good data teams know how to build the tool that meets the need.
- Building for web scale – servers that could easily scale to millions of users, when you only have 10-100. You don’t want something that breaks on people, but you don’t want to spend cycles building for resiliency and scale when it won’t make any practical difference.
- New, unproven technology – There absolutely is a time and place to try a new tech stack, but that time and place probably isn’t in the pursuit of solving your very important stakeholder’s problems. I recommend watching my boss Dan McKinley’s popular talk “Choose Boring Technology” for the well-known version of this argument.
- Novel, conference-worthy methodology & language that impresses other data scientists, when a much simpler analysis will provide the same value with less friction.
- Bespoke custom dashboards that take several months to produce, when a pre-made tool might meet the stakeholders’ needs more quickly.
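To make the first trade-off above concrete, here's a minimal sketch of answering a stakeholder question with one direct, sampled query instead of a full ETL pipeline. The table, columns, and 10% sampling scheme are all hypothetical; a real team would point a query like this at their actual warehouse.

```python
import sqlite3

# Hypothetical example: instead of building a comprehensive ETL pipeline,
# answer the stakeholder's question with one direct (if slower) query
# against the source table. Table and column names here are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event_type TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(i, "purchase" if i % 3 == 0 else "view") for i in range(300)],
)

# A rough 10% sample of users is often enough for a directional answer;
# the boolean comparison averages to the purchase rate within the sample.
rate = conn.execute(
    """
    SELECT AVG(event_type = 'purchase')
    FROM (SELECT * FROM events WHERE user_id % 10 = 0)
    """
).fetchone()[0]
print(f"sampled purchase rate: {rate:.2f}")
```

If a quick number like this answers the question, the pipeline, the scheduler, and the freshness guarantees can wait until the need is proven.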
On the other hand, teams often under-invest in felt experience improvements. After all, the big task of a decision-support data team (at least on paper) is to deliver sound insights that guide product development (aka "getting decision-makers to think differently"). No one mentioned UX when we took this job! But the truth is, those small things – those felt-experience improvements – make all the difference. If your stakeholders can't use what you've made, you won't get them to think differently about the product they're building. Here are some things that make the project feel more usable:
- Clear, simple business language in writing & copy. Your stakeholders are looking for insights and want to get to them as effortlessly as possible. They will appreciate a writing style that brings them along for the ride.
- Accessible data lineage. Good data tools build trust in numerous ways. One important one is to make the data lifecycle of the tool transparent and accessible. Make it easy to access the data behind the charts, and make it easy to figure out how the dataset behind the chart came to be. "Show query" and "export data" buttons are always helpful.
- Documentation, Documentation, Documentation. Good documentation is the cheapest, most efficient way to produce the context needed for others to derive value from the project.
- A simple, easy-to-use interface. What you build has to have an obvious, easy-to-use UI. The thing you're making doesn't have to be glamorous; it just has to be obvious how to get value out of it.
- Simple graphs & data visualizations with tooltips and readily-accessible explanations. I’ve written some thoughts about how these "small things" can transform a data visualization into something great.
- A nice URL that is easy to remember and easy to paste into Slack.
- A way to export the data or get to the next mile of the analysis more easily.
- An emergent feeling that the solution is simple, obvious, and dare I say "boring" even when it took a lot of work to get there.
Thankfully, the product prototyping iteration loop very naturally exposes issues around usability, as does talking to your stakeholders (a lot). Whether or not you pay attention to those issues, however, is up to you. But everyone will thank you if you prioritize the felt experience of using the project output to make decisions.
¶ (5) thinking big-picture
A data team with good product habits is invested in learning from the project as it is being executed and when it’s done. These learnings help them and their team do better on the next project. The project itself is, after all, just another iteration of its own project-completion feedback loop.
Here are three great loop-closing tactics I have encountered:
- Keep an adoption & impact scoreboard (were we successful?) – much like any consumer product, it’s invaluable to measure if your decision support project is creating value over the long-term through basic analytics. Simple tracking with something like Google Analytics can suffice. In this case, it’s useful to have some expectations of what success looks like. Ten users a week may be transformative for the company. Another success metric is keeping track of the number of times the data project is used in company meetings or other important moments of communication.
- Schedule a post-mortem & share-out (what did we learn?) – inevitably, some parts of the project go well & some don’t. It’s valuable for a team to have a chance to openly discuss the learnings so that future projects benefit from the work done on this one. Blameless post-mortems are probably the most common way to do this. It’s worth noting that post-mortems and basic usage tracking are not strictly product things. Engineers and others do these things all the time. That said, I have found that a team that recognizes these product habits tends to get more out of the post-mortem than one that doesn't.
- Ideate on how to scale up the impact (where else can we apply this approach?) – one of the great joys of delivering a successful product is discovering unmet needs other stakeholders have when they see the solution. "Wow, can we get this data model and dashboard for mobile?" Good decision-support data products serve your stakeholder's needs; truly great ones change how everyone thinks.
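The adoption scoreboard in the first tactic above can be embarrassingly simple. Here's a hedged sketch – the log table, its fields, and the week labels are made up, and in practice the rows might come from web-server logs or a tool like Google Analytics:

```python
import sqlite3

# Hypothetical usage log: one row per dashboard view. In practice these
# rows would come from your analytics tool or server logs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pageviews (week TEXT, user TEXT)")
conn.executemany(
    "INSERT INTO pageviews VALUES (?, ?)",
    [
        ("2024-W01", "ana"), ("2024-W01", "ben"), ("2024-W01", "ana"),
        ("2024-W02", "ana"), ("2024-W02", "cam"), ("2024-W02", "ben"),
    ],
)

# Weekly unique users: the simplest possible adoption scoreboard.
scoreboard = conn.execute(
    "SELECT week, COUNT(DISTINCT user) FROM pageviews"
    " GROUP BY week ORDER BY week"
).fetchall()
for week, users in scoreboard:
    print(week, users)
```

A query this small, checked against your expectations ("ten users a week may be transformative"), is often all the long-term measurement a decision support tool needs.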
These tactics – and indeed all of these skills – serve a bigger purpose. To think big-picture is to consider the transformation you're engineering with your stakeholders. Building decision support data tools, at the macro level, is about working with people to improve their capacity to make impactful decisions and better products. And at its best, the tooling comes to represent the change itself. It's more than just building a dashboard.
As is often said on Twitter, there are no hard problems – only slow feedback loops.
The secret here: all of these habits come from experience & insights easily gleaned from entry-level product management & design literature, applied to common internal projects. None of them are new or novel; but applying them to data decision support teams doesn’t seem particularly obvious to everyone.
I call these “essential product habits” but they’re probably just as accurately called “basic startup skills”. When you’re on a product team of some kind, especially a scrappy one, you realize how expensive building the wrong thing is. Some of the sharpest data scientists I’ve met usually have some startup experience. They also tend to practice these product habits. I don’t think this is a coincidence.
But why all this focus on & framing around lessons from product management? I think we’re in a golden age in the PM literature. Best practices are being shared, discussed, and written about. Tech products are better designed today than they’ve (arguably) ever been.
I’m not advocating that data scientists should attempt to become product managers, especially if there is no clear career track for it at their company; I am saying that it doesn’t hurt to learn how other disciplines solve problems. And given the wealth of insights available from the field of product management, it’d be a shame if we didn’t steal what we can.
Thanks for reading! If you have any questions, comments, or disagreements, please drop me a line at hamilton dot ulmer at gmail. I love getting feedback and new insights from readers.