Technology
179 - Foundational UX principles for data and AI product managers
In this episode of Experiencing Data, Brian O'Neill delves into foundational UX principles tailored for data and AI product managers. He emphasizes the importance of user and stakeholder research...
Interactive Transcript
spk_0
You're now experiencing data with Brian O'Neill. Experiencing Data explores how product managers, analytics leaders, data scientists, and executives are looking at design and user experience as a way to make their custom enterprise data products and analytics applications more useful, usable, and valuable. And now, here's your host, the founder and principal of Designing for Analytics, Brian O'Neill.
Welcome back to Experiencing Data. This is Brian O'Neill. Today, in September 2025, I want to share some thoughts on foundations of user experience design as they relate to advanced analytics and AI work. Many of you listening are obviously building data products, and I include AI solutions in that definition. I've got a course that I've written about designing human-centered data products, and I'm launching a new cross-company group coaching service as well, so I've just been thinking a lot about education in general recently. And I realized I can't remember if I've ever done an episode that was really about foundational user experience principles for data and AI product managers. So that's literally the title of this episode today: foundational UX principles for data and AI product managers.
Many of these principles would apply to formally trained designers and user experience professionals as well, but I really tried to focus on what I think the most important things are for non-designers in particular. With the advent of AI helping us with product development and delivery work, we're going to have more generalist solution creators: product teams with wide skill sets that are doing design research, maybe some low-fi coding or vibe coding, or even some production-level coding (hopefully not too much of that for many of you, unless you're at a startup). Even then, the point is that the skill sets are blending even more. I've always been a designer who thinks of product management and product design, or user experience design, as heavily overlapping circles. Some people would disagree and say that design is a completely separate thing, that product people should stay in their lane and vice versa, but I just don't tend to see it that way.
So let's jump in. This is for those of you who want to ramp up your user experience design skill set because ultimately you are the one doing that work. Perhaps you have some fairly young talent on your team that's right out of school, or maybe you have no design talent or resources to work with at all, whether contractors or employees. You might be doing all of this work yourself, because maybe you're the only one who even sees that human adoption, delight, usability, simplicity, and utility are the qualities that so often hold our analytics and AI solutions back from creating the value and impact everyone is hoping for. Sometimes management doesn't see that, and design sounds like some kind of decoration skill you bring in to make things look pretty. I really hope most of you listening are well beyond that perspective. Maybe you're new to the show, though, and you haven't been listening to Experiencing Data for the last nine years, or however long we've been going here; this one's for you. So let's jump in. I've got about seven key ideas I want to get across here in September of 2025 as I think about this: foundational UX principles for data and AI product managers.
The first one is that user and stakeholder research is a must. It must be routine, but it's okay to do it lightweight. The professional researchers out there will probably cringe at this, and that's okay. I would rather see you get to the gym, start doing some kind of workout, and build the habit. Just get in there and start doing some work, and don't worry about doing the perfect workout, because just getting in there and moving creates some value for you, builds the habit, and helps you begin to learn what you don't know about how important this work is. So, to restate number one: user and stakeholder research, lightweight is okay, but it needs to be routine.
Why does this matter? Analytics and AI solutions often fail not because the math is wrong, but because the workflows don't match how people actually do their jobs. As we design these solutions, we're focused on governance, data quality, pipelines, and all this technical stuff, but we rarely spend the time to see how the solution is going to fit into the natural workflow, or the jobs to be done, of the people who are supposed to use it. Without this exposure, without our ears turned on and our eyeballs observing, it's really hard to get this right on a regular basis. You might guess your way there correctly, but guessing is not a reliable strategy.
So, some tactical things here. Try to commit to even one 30-minute qualitative research session. By that I mean you're not doing a survey; you're having an open-ended conversation or an observation session, watching one of your users, a prospect, or, if you're doing internal work, an employee from a team you're about to serve. Watch them do their job. If they're in finance and they're running the books, ask if you can observe them for a half hour or an hour once a week, one person, and just see the tools they're using and how they make decisions. Ask them questions: ask what slows them down, ask what would save them time, ask them to tell you a story about the last time they got really frustrated with one of the data products or data-related solutions they're using.
I can't underline enough the importance of having technical team members participate in this work. If you're in product management, you're probably in control of, or have a lot to say about, the engineering, data science, and data engineering work that needs to get done to build the product. These people need to be part of this work. You need to get their eyeballs on these end users so they can start to develop some empathy and know that there are real human beings at the end of the solution who are actually going to use this model, dashboard, or whatever application you're building. That empathy will help them start to think differently. So you're going to want to rotate these people in: maybe every week you pull a different person in and do a two-on-one session, where you ask the questions and they come along and observe.
I like recordings, but most people aren't going to spend the time to watch them, and it's kind of like watching a recording of a concert: it's not the same thing as seeing the Grateful Dead live. The tapes are great, if you like that kind of music at least (I'm actually not a Deadhead), but they don't replace going to the live show, even if the recording is really high quality. So try to get them involved on a regular basis.
Capture your insights in a really lightweight artifact. You can record sessions; we have all this AI technology that can summarize, keep findings for us, make them searchable, and so on. I do still recommend taking some notes. One common tactic is sticky notes: one idea per sticky note. Give everyone who's coming with you a stack of those and a pen; it helps them keep the ideas and observations really short. In short, treat this UX research as an operating cost, an ongoing cost of doing the work that you do, and think of it as insurance for your team. It is insurance that the data science and analytics team, or the data product team, is not just out there sinking more and more cost into technology without actually delivering any value. This is your insurance against low-use or no-use solutions. Okay.
Number two: outcomes in projects and products must be defined, measurable, and visible. Revise them as you build, and know that the user experience outcomes are often a stage gate to business outcomes. So why does this matter? Well, an AI-powered product might optimize KPIs but frustrate your end users. One that delights users but doesn't move business metrics won't get funded. And one that does nothing to make users want to use it, or even enable them to use it, isn't going to create any value for the business anyway; all you're doing is running the clock down and spending time and money building a solution that nobody wants. So both the business outcomes and the user experience outcomes need to be defined up front, and then revised as you build, because, as you'll hear in a second, we're actually designing to clarify the problem, especially early on. Design isn't just about solutioning; it's also about problem definition.
Let me get a little clearer here, because I'm still thinking about the best way to explain some of this. A churn prediction model, for example, is only successful if your sales reps actually change how they prioritize doing outreach. In this example, we don't get any of the business value of the model until the sales reps are willing to change how they do their work, as influenced by the model. Only when they change how they work might we actually reduce customer churn. Embedded in all of that is a behavior change by the sales reps. So we need to understand what it means to deliver an outcome: a change we can observe in the lives of the sales reps. That is a stage gate to getting the business value, which is the churn reduction. These outcomes should be written in plain English, or whatever language you speak at work, and they need to be kept visible during your product work.
Outcomes need to have a "from" and a "to" component, so there is a delta. There is typically, not always, but typically, going to be an observable change that we should be able to measure. And if you can observe it, you can measure it. I don't want to get too far into this, but I have a whole episode on how to measure anything; it's one of my favorite episodes. If you can observe it, you can measure it; it just comes down to how accurately you can measure it. You can comb through my old episodes for it; I don't have the episode number right now, but I highly recommend taking a look if you're getting stuck on how to measure some of these things.
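As a rough sketch of what a "from" and a "to" component can look like in practice (all names and numbers here are hypothetical illustrations, not from the episode), an outcome record can be as simple as this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Outcome:
    """A plain-language outcome with an observable 'from' -> 'to' delta."""
    description: str                  # the change we expect to observe
    metric: str                       # how we will measure it
    baseline: float                   # the "from"
    target: float                     # the "to"
    measured: Optional[float] = None  # filled in after release

    def delta_achieved(self) -> Optional[float]:
        """Fraction of the intended from->to change actually observed."""
        if self.measured is None:
            return None
        return (self.measured - self.baseline) / (self.target - self.baseline)

# Hypothetical example, riffing on the churn-model discussion above.
outcome = Outcome(
    description="Sales reps reprioritize outreach using the churn score",
    metric="share of weekly outreach driven by model-flagged accounts",
    baseline=0.0,
    target=0.5,
    measured=0.3,
)
print(outcome.delta_achieved())  # 0.6 of the intended change observed
```

Even a spreadsheet with those five columns would do; the point is that every outcome carries a baseline, a target, and a place to record what actually happened.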
Also on this topic of outcomes is the idea of user adoption. User adoption is somewhat of a rookie UX objective: counting things like sessions, logins, and how many users did something with the application or tool you're building. But adoption is not actually an outcome; an outcome is downstream of user adoption. The reason it's downstream is that adoption is tracked through product analytics, or your solution's own analytics, such as logins and time spent using the tool, and that kind of adoption might actually signal that your solution is a tax and not a benefit. This is why we can't just look at product usage metrics to tell us whether we're designing a useful solution that's actually creating a benefit or outcome for our users. We have to have direct exposure to the humans in the loop who are using these solutions. A log of these wins over time is really powerful evidence that your team is having impact. So: define these outcomes, make them measurable and visible, then actually go back, look at the results, and spend some time evaluating whether you created the impact. Keep a log of these things. It's good for the team, it's good for your career, and it shows that you are being outcome-minded simply in the fact that you're tracking some of this stuff.
So that's my spiel on outcomes. Let me restate this one: outcomes must be defined, measurable, and visible. Revise them as you build, and know that the user experience outcome is often a stage gate to the business outcome you seek, because no use or low use may be the gate between the business and the value it hopes to get. Most of the time, business stakeholders and customers are not thinking about user experience as a requirement at all; they're just thinking about the downstream outcome they want, not the work that goes into it. Your job is to intercept that and understand that the humans in the loop have a lot to do with the organization getting what it wants.
Okay, number three: design to refine the problem, not just the solution; well-defined problem spaces unlock value faster. So why does this matter? A lot of times, analytics and AI projects start with a request for data, or for the ability to interact with data. That is really a solution masked as a problem. So reframe it by asking: What decisions do you need to make faster? Where do you lack confidence, and how do you know when you have enough confidence to make a better decision? At what scale do you need to make these kinds of decisions? How would you know whether it was easier to make this decision with a new solution versus the way you do it now? These all get into feelings and changes that we can observe, which we talked about earlier. You want to translate vague requests such as "build an AI chatbot" into problems such as: we want to reduce tier-one support tickets by 40%, and we want to validate that the job frustration of our CSRs (customer service representatives) has gone down within four weeks of releasing this data product, feature, or sprint. We might do that with qualitative interviews with the CSRs to see how they're generally feeling now that an AI bot is intercepting some of these low-level support tickets. In turn, this might help the business realize the benefit of retained staff; part of what they want may be to reduce that job frustration because they don't want to have to hire and retrain new staff.
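To make the quantitative half of that reframed chatbot problem concrete, the check can be arithmetic this simple (the ticket counts below are hypothetical, just to illustrate):

```python
def tier1_ticket_reduction(before: int, after: int) -> float:
    """Relative reduction in tier-one support tickets after release."""
    return (before - after) / before

# Hypothetical: monthly tier-one ticket volume before vs. after the bot ships.
reduction = tier1_ticket_reduction(before=1200, after=700)
print(f"{reduction:.0%} reduction (goal: 40%)")  # prints "42% reduction (goal: 40%)"
```

The qualitative half, the CSRs' frustration, still needs the interviews described above; this number alone can't tell you whether the bot made the job better or worse.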
Here's another example. Leadership might say, hey, let's use AI to predict which customers will buy more. You might think, okay, let's figure out what made people buy in the past, build a model around that, and we'll be off to the races with the data science project. But if you actually run a design-led discovery session here, what I sometimes call a design jam, with the sales reps in the room, the people who would actually use this predictive score, you might find out that they don't really need a generic score or index. What they might really want is specific reasons why a flagged customer might expand their purchase: things like "their usage has been higher than their license limits for three months and they're paying extra fees," or "accounts like this one have adopted the gold-tier package instead of the silver-tier package, so you may want to call this customer because similar accounts have adopted the new integration or the new feature," whatever it may be. It's the reasoning, the explanation behind the flag, that makes it powerful enough to get the sales reps to actually change their behavior.
When we talk about designing to refine the problem and not just the solution, part of why we do this is that we accept that requirements are often friendly lies. Refining the problem as you design might get you and your team to the point where you have an AI model that prioritizes explainability and action-taking over mathematical accuracy. You might end up with a solution built inside existing tools that users already like and live in: in the sales example, instead of a new dashboard or a new standalone application, you integrate this right into the CRM they're living and breathing in all day long. We don't create friction by introducing yet another solution they need to go log into; we embed it right into the place where the work is already happening, asking them to make the least amount of change possible. As we design to refine the problem, we might also end up designing a product that's smaller. We might find a smaller set of technical requirements, because in the original requirements the customer was speccing out the solution: "I need all these columns, and I need to be able to do x, y, and z with them," et cetera. That's because they were imagining a solution in their head, but customers aren't designers. And while you may say, "well, I'm not a designer either," if you're listening to this podcast and trying to actually take action on these insights, then you're on your way there, and you're beginning to do the work of design. What matters is designing with intention instead of doing byproduct design: whatever the engineers and the data people made, whatever comes out at the end of the project, that's the design. I call that byproduct design, and it's design without any intention. So the fact that you're bringing any intention at all, just by listening to this, means you're on the right track. Okay, I'll come back from my little soapbox there; trust me, I just stepped off, and I did sprain my ankle on the way down, but it's getting better already.
The last thing I want to say on number three is: retest your problem framing with both your business sponsors and your users. Misalignment breaks trust with stakeholders; nobody wants projects and products that go off the rails and end up one or two years behind. So keep checking in on the problem space, and make sure you're constantly talking about it with the product team and up to the business and/or stakeholders. Also remember that as we design to refine the problem and not just the solution, the problem space is probably going to change. You may find that while you set out to solve problem X, what you're really realizing is that this data product needs to solve problem Y. That might not have been stated up front, and it may be that Y is greater than X: a more important problem, perhaps stated in a framing that no one had ever really thought about before. That's normal. Accept that as we do our design work, we are refining the problem, not just making the solution go from rough to fine and from okay to great. You're really living in this problem space early on, before you get into solution refinement. So: misalignment kills projects later, and the problem space does change as we build.
Finally, remember that new solutions will create new problems. I think this is especially true with AI, particularly with language-model implementations. If you start automating things, or using agents to do the quote-unquote busy work that humans used to do, you may find this introduces new types of problems, new work that never used to exist because a human being used to do all of it. Now they're only going to do 10% of it. But it's not really just 10%; it's 10% of the old job plus an extra 15%, which is managing the agent that's doing the automated work. That's a very agentic-AI-specific example, but I use it because if you imagine throwing an agentic data product into the workflows of your end users, you can imagine how this thing isn't really fully automated with no oversight, living in code only, such that if it goes off the rails, no one can do anything about it except deploy new code. Chances are that's not how the agent works. It's going to live within a deterministic application, and there are going to need to be control points. (I have a whole episode about my MIRRR UX framework for agentic applications; it's a few episodes back.) Because of all this, you are introducing new oversight of machines doing work for people, so a new problem space has been created that needs to be designed for. That's normal, and it should be expected. The fact that you're discovering these things is a good thing, because you're getting more awareness of the world around you, particularly the world around the customers and the people you're trying to serve.
Okay. Number four: build trust as a first-class feature, especially with AI solutions. Why does this matter? AI adoption really hinges on trust, not just accuracy, so users need transparency, control, reassurance, et cetera. As of this time in 2025, your default framing as the user experience person on a data product team should probably be that there's going to be some excitement, but also skepticism. The excitement may come more from the business, imagining all these amazing returns on value; from end users, you should probably assume some skepticism, possibly even fear. Part of the work of doing good design is to address those fears and that skepticism so that people aren't reluctant to use these solutions, particularly if you're, quote, taking work away from somebody, as with the automation we just talked about. Taking work away from someone doesn't necessarily mean you're doing them a favor if they feel like their entire job and career is attached to that work.
So you need to think of trust as a first-class feature, something that needs deliberate attention. It's not something we write with code, but it is a quality attribute of the system and the data product we're creating. How do we do this? Again, my MIRRR UX framework, which I mentioned a second ago, has some ideas, particularly for agentic AI applications; it's not a broad framework, but it is relevant in the AI space. Remember that explainability often trumps accuracy when it comes to increasing trust. Give users a way out of solutions, particularly when AI is involved: make them feel like they're in control, that they can override automations, flag problems, and audit certain information. They want to feel like they're in the command seat, and they should be.
My broad perspective here is that you should resist automation unless it's work that everyone is genuinely excited to automate, and "everyone" includes the end users whose work you're actually taking away, and unless intentional product management and design effort has gone into managing this automation as a product. Again, this gets back to the idea that even if you deploy agents or an automated system, there's still probably going to be a management interface or application governing these solutions. There has to be some transparency into what these agents are doing, and deliberately designing that system around the people who manage the now partially or fully automated solutions takes a lot of work. So resist automation, or use it sparingly, and double-check that you're actually delivering the benefits and outcomes those end users seek, and that the business is getting the downstream value it wants. You can do damage here, not just in the form of sunk cost because people don't use it; you can also create resistance if you're automating the wrong things and people, particularly internal customers, feel threatened. This trust factor is something we want to be measuring over time through interviews and observation. I really think qualitative research work is important here; I would rely less on surveys and self-reported data, where you're not there to hear people's feelings and attitudes and the way they talk about things. So I wouldn't use surveys for this. Qualitative is probably the better way to go, at least until you know what the right signals are such that you could measure trust at scale.
Just remember that, particularly in analytics and AI, a trusted good-enough model probably beats an untrustworthy perfect model. If you're only playing the technical quality game, pouring more and more work into making models more precise, you're not necessarily building trust by simply increasing the quality of the technology underpinning these solutions. Trust is a feeling, and building it involves direct interaction with human beings. Sorry if you're one of the people who don't like to do that work; you're probably listening to the wrong show. Or, if you are listening, I'm hoping you're starting to realize that people don't really care about the tech; they just care about the benefits of the tech you're building. That's really a cornerstone of this whole show. I hope a little light bulb goes on for you and you start to realize that they don't care about your technical chops. Only your peers probably care about that, and frankly, they probably don't care either. Everyone just cares about what's in it for them, and if you're playing that game, you're probably going to get ahead.
Anyhow, number five: for AI in particular, think about a portfolio of rough products, and have a prototyping mindset with users in the loop from creation all the way to actual use. I'm going to say that again: for AI in particular, think about having a portfolio of rough products and keeping a prototyping mindset with users in the loop from creation to use. Why does this matter? Enterprise work in particular, but really analytics and AI workflows in general, are complex things, and if you wait until you're already doing all the data science and engineering work, you've already sunk major cost. User and stakeholder research, design work, and implementation are all activities we need to do with users, not for them. We're doing this work with them while we're creating it, and we're involving them in the evaluation, sometimes called acceptance testing or UAT (I don't really like that framing), but the point is that users and customers are involved from start to finish. There's no big giant reveal at the very end. If you want to do that kind of work, you're taking a lot of risk. Companies like Apple do very secretive design work, but the difference between you and them is that for most of you listening to the show, your company doesn't value design work at all. You are a thousand times different from a company like Apple, which puts a premium on design; that is most likely not the organization you are working in. So you need to design a lot more on fact and a lot less on the skills, opinions, and feelings of experts, because frankly, you don't have those; very few companies do.
But let's get back to this idea of a portfolio of rough AI products and a prototyping mindset. Sketch out your workflows on paper or whiteboards before getting into algorithms, and have users involved in these sessions. I like workflows in particular because they get us thinking about experiences, and experiences happen over time. User interfaces are more like an instant in time, and while UI design does matter, the UX professional is really thinking about experiences that occur over time. Workflows are tasks where one thing follows another thing, which follows another thing. Getting into the headspace of your users' tasks, jobs, and work is a really easy way to unlock value and to make sure the solution you're building creates the value you intended.
spk_0
Host short design jams, get your engineers, your technical people and the users in that same room.
spk_0
You don't need fancy software to do this kind of work.
spk_0
But we're also by involving our technical implementation people there, particularly the senior ones, your architects and the principal solutions engineers.
spk_0
Whoever the really senior people are, it's really valuable to get them to have a non technical working session to really think about user experience and to think about customers, tasks and jobs and ask them to keep their implementation hat outside the door.
spk_0
Keep it outside, you know, like on the spy shows, what do they call those special rooms where you have to check your phone in outside the room?
spk_0
You're not allowed to take your cell phone in. Well, yeah, put the cell phone in the little cubby hole as well, because that doesn't belong in a design jam.
spk_0
Take off your implementation hat too, ask everyone else to take theirs off and check those at the door, and emphasize validation of these workflows.
spk_0
Hey, dear user, does this actually fit how you work today? Or when we build this new AI solution, is it going to change the way you all like to do your work right now?
spk_0
Like what kinds of problems might we create if we give you this solution that we know that you asked for, but we're curious what might be different here?
spk_0
How is this fitting into your work and your job today?
spk_0
We want to iterate quickly. We want to fail in shorter periods. And if we have a prototyping mindset and a portfolio of products, we accept that, first of all, not all the products are going to create value and win.
spk_0
And that's okay. If only a portion of them, even a fraction of them create some value, that can be a really good thing. And it could be very significant value, even if the cost of other things was pretty great.
spk_0
The prototyping mindset also helps us iterate quickly, get things out the door before they're perfect, and focus on learning and not just value creation, because part of what we're doing here when we're designing these solutions is learning.
spk_0
We're trying to figure out what's working and not. And when we learn why something is not working, particularly if we're focused on a set of customers or end users that are consistent and don't change over time.
spk_0
Like let's say that you work inside a large enterprise, you're regularly serving the marketing team or sales team or finance team.
spk_0
Over time, you're building a knowledge about those audiences that you serve. And when we iterate quickly here, we're increasing the speed of these feedback loops.
spk_0
And the more of this knowledge that we can have about others, the easier it is for us to actually start to anticipate needs and to move from simply providing satisfactory experience to actually delighting people.
spk_0
And when we start to delight people, this is where larger value unlocks can actually happen in terms of business goals and business value metrics, financial metrics, things like this.
spk_0
So that's the last of that one. Number six, and I've got two more after this: behavior change is hard.
spk_0
I want you to consider this, like trust, to be a product requirement, and I want you to actively design for it.
spk_0
Why does this matter? Well, if you've listened to this show, you know data products can easily become technically right and effectively wrong if we don't think about behavior change.
spk_0
And particularly if we're doing enterprise data product work and you're serving internal customers, what is the status quo?
spk_0
What is the culture around here? How do things get done? How do people do their job? Why are things the way they are?
spk_0
We need to understand all of that in order to design for behavior change. So it's a requirement, make it a requirement and make sure the technical team understands.
spk_0
It's a requirement too; the requirements are not just the SLAs, not just the model requirements, not just the streaming data has to come in this fast and the error rates need to be like this, and so on.
spk_0
Now, the scary thing about behavior change is that your technical team is not going to know how to directly address it, and that's okay.
spk_0
But as a product leader, you really need to hold up the fact that this might be the stage gate to getting to the business value.
spk_0
So it needs to be considered as a requirement.
spk_0
You can ask: if this data product that we're building works, what will we observe that is different inside the user or the organization, and by when?
spk_0
Like, what would we be able to see? And this gets back to those outcomes that we were talking about.
spk_0
We should be able to observe this behavior change if it's happening and if we can observe it, we can measure it.
spk_0
Quantitative measures stand out; I know when we talk about behavior change and metrics and measurability, we tend to think about numbers and all that kind of stuff.
spk_0
But qualitative results and stories are super memorable, and they can be really powerful at persuading people, like our executive stakeholders, to understand what kind of impact the data team has had.
spk_0
And you know, delighting just a few key users early on might be enough to justify increased investment in your product vision, or letting your team run with the direction it wants to go, even if it's not what was, quote, ordered by the end customer or, quote, ordered by the business.
spk_0
If they can see that benefits are starting to occur, observable desirable benefits and outcomes are emerging, you're buying some trust there.
spk_0
And the stories that these end users, or even your stakeholders, tell you, these stories about their feelings and how much easier their job is to do: I used to hate doing this,
spk_0
and now it's really easy, or, I feel so empowered to make better decisions, and on and on. Don't underestimate the power of these stories, particularly as it relates to communicating up to your management and to your senior stakeholders.
spk_0
So there might be 50 salespeople, but if you can get the three directors to become your cheering squad, and they're excited and doing their own water cooler chats at the upper management meetings and things like that, do not underestimate the value of that, even if the actual sales metrics are hard to pin down.
spk_0
Maybe it's hard to say churn went down by 10%, which equals $50 million a year, and that it's directly attributable to this analytics work and all of that.
spk_0
Sometimes that's going to be really hard. But the fact that you have three excited sales directors might give the business leadership all the confidence it needs to know that you're creating desirable benefits with, as they might put it, whatever that AI and analytics stuff is that we don't want to do.
spk_0
We don't know how all that stuff works or where all the math people sit. Go ahead, keep doing that stuff. We don't know exactly what it is, but we can see that you're creating value.
spk_0
And that then in turn lets you have more autonomy, and it lets you begin to anticipate the right kinds of projects and data product work to do, and on and on: it snowballs.
spk_0
Number seven. Actually, there are two more; there are eight in total. So here are two more. When you're tackling UX design yourself, I want the minimum goal of your work to be: don't create harm, and do increase usability and utility.
spk_0
However, as I hinted earlier, the big value unlocks come when we move from satisfaction to delight. I'm going to say this one again: when you're tackling the UX design work yourself as a data and AI product manager, your minimum goal is no harm and increasing usability and utility.
spk_0
However, the big value unlocks, the big financial impacts, the stuff the business is really going to like, that's more likely to come when you're moving from basic satisfaction, usability, and utility up to delight.
spk_0
And yes, this is hard work to do. It's even hard for trained designers, but that's where the big unlocks tend to be. User experience is a measurable thing: we can measure how frustrated or satisfied or delighted end users are. This can be measured, and it does need to be measured so that you can track how you're doing.
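As one concrete illustration of that measurability (my own sketch, not something discussed in the episode), a common approach is a CSAT-style satisfaction score: survey users on a 1-to-5 scale after a release and track the share of "satisfied" responses (4s and 5s) over time:

```python
# Hypothetical sketch: a CSAT-style satisfaction score from 1-5 survey ratings.
# CSAT is commonly reported as the percentage of respondents rating 4 or 5.

def csat_score(ratings):
    """Return the percent of ratings that are 4 or 5 on a 1-to-5 scale."""
    if not ratings:
        raise ValueError("need at least one rating")
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100.0 * satisfied / len(ratings)

# Example: ten users rate the data product after a sprint.
ratings = [5, 4, 3, 5, 2, 4, 4, 5, 3, 4]
print(f"CSAT: {csat_score(ratings):.0f}%")  # 7 of 10 rated 4 or 5 -> 70%
```

Tracking a simple number like this sprint over sprint is one way to see whether you're actually moving users from frustration toward satisfaction and delight.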
spk_0
Why does this matter? Well, complexity feels impressive, especially when we're doing AI work, but it can bury the value. Simplicity is what accelerates adoption in analytics and AI: when people feel like these data products are actually serving them.
spk_0
So it's not about you and all the amazing technical work that went in under the covers. The end users are not going to celebrate that work, and frankly they're not going to care about it; they just care about what's in it for them.
spk_0
And if it's not simple, then there's not a lot in it for them. Simple is hard, but simple unlocks adoption and adoption is the start of the behavior change and the value unlock that we're trying to get.
spk_0
As you mature your design prowess, you're going to know when a data product really needs to focus on delight and not just satisfaction, and there are going to be other times where simply getting to basic usability and utility might be enough and it's time to move on, because not every project, particularly for internal data products,
spk_0
is going to have a lot of time and budget to do all of that, and the ROI on moving from satisfaction to delight might not be there.
spk_0
It might be good enough to just get rid of the blockers and make something really simple and inviting to use.
spk_0
I did want to call out here this idea of no harm and this is particularly relevant with the AI thing.
spk_0
So, you know, we can create harm by breaching privacy, whether we promised not to share certain kinds of information and that information got out anyway, or users might have had an expectation of a certain type of privacy.
spk_0
And that privacy was breached even though there wasn't a commitment necessarily from the product or the organization to keep that information private.
spk_0
Both of those things are real, whether we like it or not. And you can say, well, that's what our policy was for, or you can say, well, we didn't commit to that.
spk_0
All I'm saying is the harm can still happen, so you have to decide whether or not that type of privacy harm is something the business can afford.
spk_0
So, harm can come from poor automation decisions and lack of oversight.
spk_0
Harm can come from a lack of empathy as to how our data products might introduce negative externalities inside of our org or even with our customers.
spk_0
So, you know, the classic example here is something like getting a loan approval for a mortgage.
spk_0
And if, you know, the data was based on biased, racist decision-making in the past, well, if we don't ethically correct for that, the model is going to be biased in the results that it gives.
spk_0
And so there are lots of different ways to create harm particularly with AI.
spk_0
So, you know, data product teams really need to also be asking, should we be building this?
spk_0
Should we be doing this work? What or who gives us the right to do this kind of work?
spk_0
Yes, there will be times where you don't need to ask those big questions, and some of these things may feel really giant.
spk_0
But think about things like healthcare: should we be doing this? What gives us the right here?
spk_0
There are some really difficult incentives in certain types of industries.
spk_0
Again, US healthcare is a great example: for-profit companies that are also saying, well, we're for patient outcomes as well.
spk_0
But when our billing system is based on the number of procedures that we do, we're charging for effort and not for results. We're charging for procedures and tests and activities performed, and not necessarily health outcomes. So there are going to be times where these questions do need to be asked, because the interests are multiple, and the interests are sometimes at odds with each other.
spk_0
And I think data is a big part of this going forward.
spk_0
And so, I'm getting really heavy. I'm going to come back down to, I don't know, 100-foot altitude, and get out of the 10,000-foot space.
spk_0
Number eight: design is a team sport rooted in serving end users, stakeholders, and ultimately paying customers.
spk_0
I'm stating that because I know a lot of you are working on internal data products where you may be a fair number of hops away from actual paying customers.
spk_0
Why does this matter? Well, I can tell you about your executives, your most senior business people, the ones that have P&Ls, the ones that are responsible for results and not just building out tech and saying, hey, we shipped it on time; whether it worked is not our problem.
spk_0
Someone's responsible for that when you go high enough up and maybe that's you.
spk_0
I hope and I think most executives will value teams that are ultimately focused on downstream customers experiences and the benefits that we're providing to those paying customers.
spk_0
So, when you can start designing for and thinking about how your data product work is affecting those possibly downstream paying customers,
spk_0
because, let's say, well, I'm really serving the sales team right now, but ultimately the sales team is going to talk to the paying customer.
spk_0
How does my work make that paying customer's life better? That's the ninja-level stuff.
spk_0
I want you to be thinking about that, and at least, for now, be aware that your work is ultimately going to affect those people.
spk_0
How? I don't know, but you should be mindful about how it will.
spk_0
The second thing on this: your career will shine when you realize that much of this game of design, product design, product management, I just think of all of this as product work,
spk_0
is rooted in serving others and enabling outcomes for others.
spk_0
And your career shines when you're helping others get what they need.
spk_0
So, it's really not about you. The path to your shine comes through others, through making improvements in the lives of others.
spk_0
So, when we do all of this work right, here's the kind of tough thing.
spk_0
When you do all this design stuff right, and we get a lot of it right, we can't see it.
spk_0
The solutions, the data products we actually build, the final technology that might ship, it might feel logical, obvious, really simple, like kind of not a big deal.
spk_0
That might even be particularly true if you've been designing all along as a team sport with customers, not for customers.
spk_0
And I mean end users here when I say the word customers, when they've been involved all along, there isn't a giant reveal at the end.
spk_0
And it's kind of like, oh yeah, this thing just went out the door and we've been working on it, they know we've been working on it, and they finally have access to it.
spk_0
And then we move on to the next thing or the next sprint or the next data product.
spk_0
And it just almost feels like it's not that significant.
spk_0
And that's part of the thing about design: a lot of times, the good stuff is invisible.
spk_0
It will be, feel logical, obvious, and simple.
spk_0
But it's really hard to do simple well.
spk_0
And so it doesn't mean you didn't do a good job if you don't have a gleaming pretty UI at the end of the day.
spk_0
That's not the kind of design that I'm talking to all of you about today.
spk_0
I want you to get to that place of near invisibility, where the solutions you created create obvious value, measurable benefits, outcomes that we can see and observe.
spk_0
The solutions feel obvious, usable, useful, and hopefully delightful.
spk_0
Anyhow, that was a whole lot of stuff.
spk_0
I hope this was useful to you.
spk_0
I do do training on this for teams.
spk_0
I have a seminar called Designing Human-Centered Data Products.
spk_0
So if you and/or your team is looking for live training with me, you can head over to designingforanalytics.com/seminar.
spk_0
And by the time this rolls out, I may have publicly launched my cross company group coaching.
spk_0
But I'm okay just giving you the link to this now.
spk_0
I have soft launched this new service, as of, again, September 2025, to a very small audience, just the people that were involved in my own UX research.
spk_0
So maybe you feel like you've already got some of these skills down, and you're dealing with all the organizational friction, cutting through the red tape, and getting access to end users.
spk_0
You want to do this work, but there's different kinds of blockers in your org.
spk_0
My cross company group coaching is really a small group way of getting help from me on your own challenges and helping you figure out how to navigate those things.
spk_0
That's unlike the seminar, which is really about knowledge transfer: structured teaching over a defined amount of time, that kind of thing.
spk_0
So if you're interested in the cross-company group coaching, the link is designingforanalytics.com slash group coaching.
spk_0
And that again is brand new and it will be launching soon.
spk_0
And the first 10 people in that will get lifetime early bird pricing as well.
spk_0
So if you want to lock in lifetime early bird pricing with that, I hope you'll join me in the first cohort of that new service.
spk_0
That's enough until next time. Thanks for listening to experiencing data.
spk_0
We hope you enjoyed this episode of Experiencing Data with Brian O'Neill.
spk_0
If you did enjoy it, please consider sharing it with the hashtag Experiencing Data.
spk_0
To get future podcast updates, or to subscribe to Brian's mailing list where he shares his insights on designing valuable enterprise data products and applications, visit designingforanalytics.com/podcast.