Self-improvement
180- From Data Professional to Data Product Manager: Mindset Shifts To Make
In this episode of Experiencing Data, Brian O'Neill discusses the essential mindset shifts that data professionals must make when transitioning to data product management roles. He emphasizes the...
Interactive Transcript
spk_0
You're now experiencing data with Brian O'Neill.
Experiencing Data explores how product managers, analytics leaders, data scientists, and executives are looking at design and user experience as a way to make their custom enterprise data products and analytics applications more useful, usable, and valuable. And now, here's your host, the founder and principal of Designing for Analytics, Brian O'Neill.
Welcome back to Experiencing Data. This is Brian T. O'Neill. Today's episode: From Data Professional to Analytics and AI Product Management, Mindset Shifts to Make.

Why this, why now? I wanted to talk at a high level about this transformation, these ways of thinking, that I want to see data professionals make when they shift from traditional analytics and data-service-type work into product management, where you're managing data products that use analytics and AI, particularly for decision-support types of activities. Even more broadly than that: what are the mindset shifts you need to make to be successful in that role, and to actually embrace the spirit, and literally the title, of product management, the way I see it? That's what today's episode is about.
So if you've been thinking about stepping into this role, or you're feeling maybe a little bit of imposter syndrome about what exactly your role is, kind of winging it in this new data and AI product management role you're in, or maybe you're thinking about how to understand whether your team is living up to the expectations of what this type of role is about, this episode is for you.
So, a couple of things here. First, let's frame the challenge. Analytics work in particular, which is a lot of the work data people do, is about analyzing data, looking for answers, producing answers, and making answers easy to give or explain to someone else. We're often doing preparation work, investigation work, this type of analysis. It's literally in the word: analytics, analysis, right? Product management requires us to define and frame what the right questions are to ask in the first place, and not necessarily to focus so much on the delivery of analytics work, the solution-creation type of work.

I think that ultimately in product management, whether you're doing internal or commercial work, there's typically going to be some type of fiscal outcome: a return on investment, profit, reduced cost, whatever it is. There are going to be financial targets associated with this. Whether you actually have P&L responsibility or you're in an internal position, there's an assumption that you are going to help create outcomes, and not just deliver more technology outputs on schedule. This is not a project management role; it's a product management role.
Design is often a gate to getting to these outcomes, because design has a lot to do with making sure the things we make actually satisfy the needs of the people they are intended for. When we don't have use, we don't get business or organizational value, because the solutions just sit unused. These mindset shifts are really important, and a lot of this gets into how we're asking questions. We're going to jump into that.

Are you managing deliverable creation, or are you responsible for the team producing outcomes that users, stakeholders, and paying customers actually value? There are different tiers there, based on the kind of work you're doing: who is your target, quote, customer, audience, user, stakeholder? Even defining those words and what they mean, particularly if you're in an internal role, is really important.

I've got four key ideas I want to share with you today. Let's jump into these.
The first one, an idea we've sort of touched on already: analysts give answers, and product people ask questions. Again, I've got to credit this framing to Andy Sutton from the Endeavour Group down under in Australia, one of our original DPLC, Data Product Leadership Community, members. He runs a large data and software organization inside this hospitality company, the largest hospitality company in Australia. I forget when he said this, but I really latched onto it; it's such a smart observation. Analysts give answers, and product people ask questions.

So part of your job is to ask really good questions in order to inform whether our delivery teams, our engineers and our designers and our SMEs, are actually building the right stuff. Because the game is no longer "build the stuff, deliver the stuff on a schedule, then someone else is supposed to do something with it, and maybe some kind of value gets created down the line." No. Now it's about making sure we're building the right stuff.
Delivering the best possible answer quickly: that's more of an analyst mindset. I want you to be focused on uncovering the real problem that needs to be solved before deciding what to answer and/or build, as is often the case when we're talking about data and AI products. It's easy to get lost in precision and accuracy and all your knowledge about the ins and outs: well, what's a "customer" in the database? We have four different definitions here. There are all these minutiae that really get into accuracy specifics, and it's not that that doesn't matter, but it is possible to get a really highly accurate answer to a question that no one's asking. Most of the time, this is going to mean leveling up and making sure the big picture is being addressed.
Let's look at an example. A marketing VP might ask for something like, "I need a segmentation dashboard." Instead of just building it, a product mindset, a product designer or a product manager, might ask: What decisions would this dashboard help you make if you had it right now? What kinds of actions are you struggling with today? What do you currently use to answer these questions? Do you actually need a new segmentation dashboard, or do you have some solution you're using right now? If so, can you show it to me? When was the last time you used it? Tell me what happened; tell me a story about the last time you used this thing.

What we're looking for here is the signal behind the ask. This questioning, in this hypothetical, might reveal the need for, say, a campaign prioritization tool, not a segmentation dashboard.
That leads me to the idea that there are latent problems and active problems. This isn't just true for data teams; I think it's true for a lot of product teams and design teams as well. But a lot of teams, particularly in larger organizations, are doing answer-fetching, project work, data-drive-through type of work, assignment work. That's the first stage: "we followed orders, we did what we were assigned to do, and it's not our problem if somebody doesn't use it, because this is what we were told to make."
The second tier is actually starting to question the problems on the surface. Generally, we can pull back the first layer of the onion. You know how an onion has that dry skin on the outside; if you go down a layer or two, you can start to surface what's actually behind this need. Hopefully, what you end up with is more satisfaction from users: you gave them a better version of the thing they asked for, and the thing they asked for was perhaps more or less correct. I'm not saying that customers and users never know what they need and that we shouldn't listen at all. It's more the idea that requirements are friendly lies sometimes, and we need to know how to interpret that information.
Ultimately, tier-three thinking is solving the real latent problem. This is the problem users aren't actively expressing to you right now. It's the problem they may not know they have, or they've just never stated it, because it might seem impossible, or so obvious: "well, of course we would love that, but there's no way we could ever do that, so we're not going to ask for it." They've made a lot of assumptions about things.

These kinds of insights usually only emerge from deep, habitual user research, such that you begin to see patterns nobody else can see, or you're making observations and drawing connections between things. It's kind of like being inside the jar: it's really hard to read the label. When we go about our daily lives, there's no person looking down at us, watching how we live and all the decisions we make. But in this situation, we literally are trying to watch that. And we can't necessarily ask the person, the character in this little movie we're watching, to explain everything about what they're doing and why. Some of this we have to probe for verbally, or we need to simply observe through behavior. And this isn't a one-time kind of thing. When we can get to the underlying latent problems, this is where we might be able to create really outsized value, because we start to address the underlying need that has not yet been stated.
So what can you do with this key idea? Number one: before you start on product work, or even if you're in the middle of it, write down three to five problem questions, assumptions or hypotheses that you really need to validate. Then go spend time actually validating the problem space your solution is addressing. Go out and spend more time in this space, talking to users and observing users, to make sure you've actually uncovered it, and see if you can get to that level two, where the first layer of the onion is peeled back, or to level three. You'll probably know it when you hear it, because as the user is talking to you, they'll run into it and say something they've never really said out loud before, or something that seems totally obvious to them. You're probably going to know when you start to run into that space. It doesn't mean a guaranteed win, where you go build that thing and everyone's happy. Obviously, we'd be looking for patterns across multiple different users to revalidate that this latent problem actually exists more widely. But to do any of this, you need to start by having a sense of the questions you're going to ask.
That's what I'd recommend doing here. If you're listening to this show, I hope you're already operating at tier two, and if you're not, that's okay. But we really want to get out of this mindset that all requirements are correct, and that if I just build what was asked for, and I use Scrum or go through some of the rituals of product management, I'm therefore creating the value I'm supposed to. Especially for people in leadership positions here, more senior PMs and so on: this game is really about creating value, and it's not enough to simply do the delivery work to spec. That's the first one.
Number two: change management is designed into the solution, not bolted on afterwards. The old mindset is that delivery is the finish line. The product is in production. The model is in production. Change management is someone else's job. The mindset shift from that position is one where we think: if the right people don't adopt this product, and we see no meaningful change in our primary success criteria and benchmarks, then this solution has failed, even if delivery was on time, on budget, all that good stuff, and met spec. This is an outcomes-and-results-based approach to doing product work instead of a deliverables-based approach.
Let's talk through an example. Of course it's going to use AI, because that's all you're hearing about all the time; it doesn't really matter. Let's say the company you're operating in wants to use an AI chatbot to manage all tier-one customer support requests. Hidden in that is probably a goal to reduce operating costs around live human customer service people. Traditionally, a digital and/or data team might buy something, or race to go build a bot that could answer these things: we train it on all the documentation we have, all the recipes to resolve issues, figure out what the top tier-one issues are in the first place, etc.
Initially, the customer service reps themselves, the actual humans doing this work, what they're probably seeing is the beginning of the end of their career or their job, and not a solution that will allow them to do higher-value work, something that could prop them up, support them, let them really shine and do the things they're great at. Keep in mind, customer service people are often the ones talking to paying customers more than just about anybody else. Especially if you have a set-it-and-forget-it kind of product, there's really not a lot of need to contact CSRs, but when people do, they want help.

So with a product and user experience approach, we'd be wondering how we can make CSRs' lives better by offloading highly repetitive, simple tier-one customer support tickets. And I will say, specifically for customer support types of issues: sometimes, well, fix your product and service, and you may reduce the need for inbound customer service requests in the first place.
But I'm going to assume, for whatever reason, we're not doing that. This is where designers would really carry this forward. First-principles thinking here would step all the way back to: why are so many tickets coming in about resetting passwords? Maybe there's something wrong with the password-reset user experience, such that people feel they need to contact a human being to get help with it. You can see here that we're starting to look at this from the customer's perspective, and we're really trying to get to first principles. But I'm going to ignore that for a second; let's just assume you can't touch that, or whatever it may be. We're living in a space where we have these inbound tier-one support requests coming in, and they need to be serviced by somebody.
So how could we make the CSRs' lives better if we were to take that work away from them, offload that work from them? Are they excited to let go of that work? Are they afraid of the work going away, because they may not be needed as much? What would need to be true in the future state, at the very least, for these users? In this case, I'm talking about a CSR using an application that probably involves this bot doing some of their work. What would need to be true and better in the future for the org? What would need to be true and better for the paying customer, the one actually on the other end of the chat? You could carry this forward to the managers of the customer service departments. The point is, we're looking at all these humans in the loop, and we understand that change is difficult, and we need to think about change while we're making the thing that we're making.
We can simplify change management by involving these parties, especially end users of the actual application. In this case, that would probably be both paying customers and the CSRs. If they're the ones who have to manage the bots that do some of this work, we would involve them in the creation of the thing we're making. They become a first-class stakeholder, a design partner, as we sometimes say.

For the org, baseline values we can observe and test against could be things like ticket volume going down, answering questions faster, or answering questions 24 hours a day, seven days a week. We can obviously create estimates of fiscal value here by looking at cost reduction and all of that.
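To make that concrete, here's a minimal sketch of the kind of back-of-envelope cost-reduction estimate I'm describing. Every input number and parameter name here is a hypothetical assumption for illustration, not a figure from this episode; you'd plug in your own org's numbers.

```python
# Hypothetical back-of-envelope estimate of fiscal value from bot deflection.
# All inputs below are made-up assumptions; replace with your org's real data.

def estimated_annual_savings(
    tickets_per_year: int,
    tier_one_share: float,       # fraction of tickets that are tier one
    deflection_rate: float,      # fraction of tier-one tickets the bot resolves
    cost_per_human_ticket: float,
    cost_per_bot_ticket: float,
) -> float:
    """Savings = tickets the bot handles * cost difference per ticket."""
    deflected = tickets_per_year * tier_one_share * deflection_rate
    return deflected * (cost_per_human_ticket - cost_per_bot_ticket)

savings = estimated_annual_savings(
    tickets_per_year=120_000,
    tier_one_share=0.60,
    deflection_rate=0.50,
    cost_per_human_ticket=7.50,
    cost_per_bot_ticket=0.40,
)
print(round(savings))  # 36,000 deflected tickets at ~$7.10 saved each
```

A model this simple is only a starting baseline; the rest of this episode is about the side effects it ignores.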
For our internal end users of the application, the CSRs, we might explore the idea of customer service reps actually managing their own little workforce of AI agents doing this work: the ability to train this workforce to work the way they work, because they've known how to do this work for a long time; to override conversations when necessary; to establish incentives for keeping low-priority tasks handled by the AI and higher-importance ones human-controlled. What would those incentives be? Are the incentives in place now correct, given the change we want to make? Where is there CSR labor that these people hate doing right now? Could this new data product be refined to actually remove some of the stuff they actively dislike about their job? Even if some of that might be ancillary to, say, building the model for the actual bot and the number of questions it can answer, we would also want to be dialed into what CSRs find frustrating about doing their job efficiently, especially as it relates to technology.

Might we explore the possibility of gamifying something like this, making it fun? Whose team of AI bots can outperform the other team's? Maybe there's a way for the agents to get some specialized training themselves, to be given shortcuts that might not be in the official documentation, a way for these managers to help influence the way the agents answer questions. There might be a little bit of uniqueness between the different bot teams, and this could become a friendly game.
This is me just imagining solutions; I'm not saying you should go out and do this. What I am looking for is: how could we delight this CSR such that we don't have to force them to use a new solution? I want to figure out what's in it for them if we take away some of this work. The only way to do this is to get involved directly with these people. We need to understand what it's like to do the work of a CSR. We need to shadow them, talk to them, and mostly listen and watch what they're doing. Our goal is to design affordances into our user interface and our experience so that this group is also successful, not just the business saying, "hey, we offloaded all this human labor; look at the dollar savings." That's part of it. But what are the downsides if that's the only thing we pay attention to?
What about other stakeholders, like the paying customer? They likely have zero control here in the first place, except that they might be able to give us feedback on the solution. I'm making a judgment here that the decision to do this has already been made, regardless of what paying customers say. If there's a cost-cutting goal in play, unless the organization is product- and design-mature, they're probably going to do it and wait for the downside to come later, because they want to see a short-term financial benefit.
But could we use sentiment analysis, and shadow some of these conversations, to understand whether these support tickets are being resolved without the company brand being soiled? Could we continuously sample some of these conversations to see whether a ticket was actually closed in a positive way, within an acceptable time frame, and qualitatively look at some of them? Yes, it was closed in under two minutes, which is great, and it was closed at all, which is also great. But what did the conversations look like when we sampled them? Were people angry? Can we see frustration in the dialogues they're having with the bots?
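One way to picture that continuous sampling: a minimal sketch, assuming hypothetical ticket records with a close time and a sentiment score from whatever sentiment model you already run. All field names and thresholds here are made up for illustration.

```python
import random

# Hypothetical closed-ticket records; in practice these come from your
# ticketing system, and "sentiment" from whatever model you already use
# (range assumed here: -1.0 very negative to +1.0 very positive).
tickets = [
    {"id": 1, "minutes_to_close": 1.5, "sentiment": 0.8},
    {"id": 2, "minutes_to_close": 1.9, "sentiment": -0.6},  # fast, but angry
    {"id": 3, "minutes_to_close": 4.0, "sentiment": 0.4},
    {"id": 4, "minutes_to_close": 1.2, "sentiment": 0.9},
]

def sample_for_review(tickets, k=2, seed=42):
    """Randomly sample closed tickets for qualitative human review."""
    rng = random.Random(seed)
    return rng.sample(tickets, k)

def flag_fast_but_negative(tickets, max_minutes=2.0, min_sentiment=0.0):
    """'Closed in under two minutes' is not enough on its own: flag tickets
    that closed quickly but where the customer sounded frustrated."""
    return [
        t for t in tickets
        if t["minutes_to_close"] <= max_minutes and t["sentiment"] < min_sentiment
    ]

print([t["id"] for t in flag_fast_but_negative(tickets)])  # [2]
```

The point of the flagged list is exactly the qualitative step described above: a human reads those sampled dialogues to see what "resolved" actually felt like.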
Might we give our paying customers the option to use a human or the AI bot, perhaps explaining that the bot might be faster and worth a try, and the human might be slower because you might have to wait? We'd give them a choice, especially early on in the deployment of such a thing. And could we learn from marketing how we might talk about the benefits of using an AI assistant to answer your customer service issues versus using a human? So now we have marketing potentially involved. All of this goes back to the change management theme: putting the change management into the design of the solution.
As a product person, we'd also be thinking about placing bets and keeping our bets small: deploying this solution to a limited set of teams or people, incorporating fast, rapid learning cycles, getting feedback early, and looking at where all the unexpected things are. People are going to kick the tires. They're going to ask silly questions that have nothing to do with customer service. We want to start seeing what the fringes look like in a probabilistic solution like this, and we want to contain that risk. We want to move quick, learn, and roll that feedback back in, because we definitely don't want to do damage either. While we can say "cut human customer service equals direct cost savings," what that ignores is the byproducts and side effects that can happen, which may not look like costs right now but could become costs.
So what can you do about this right now, big number two? Spend time observing where users and stakeholders dislike their current solutions, processes, and applications. This can help us start to understand where delight might be brought in. If we're thinking about delight as something we're designing for, that's going to reduce our need to push a change management process through against people's will, because we'll be building desirability into the solution from the beginning. You could also spend time uncovering where organizational culture, politics, or compliance may create friction or roadblocks. This is something we do collaboratively with relevant stakeholders.
We'd be brave, and we'd surface some of the things we've found out and heard. We'd say: look, I know we want to do this. Overall, when we've talked to 15 people in this CSR job, this is what we've learned so far; this is how the company is perceived. This is a hypothesis, and maybe we need to go out and validate it, but we want to bring it up because we don't want to build this AI agent, try to roll it out against everyone's will, and then have it fall on its face and do damage that we haven't really calculated and that no one's really asking questions about. So let's talk about where friction is going to happen, such that addressing these types of issues becomes part of the deployment of the product.
If I were in product, I'd want to understand: where have rollouts like this not gone well in the past? I'd probably start with the CSRs and ask them: when was the last time you had to change your technology solutions, your tooling, or your processes in a large way? What was rough about that? What about your question scripts and the way you talk to customers? What could the company have done better? What could the technology have done better? What kinds of fears do they have? These are all more product-oriented ways of thinking.

So again, I'll restate key idea number two: change management is designed into the solution, not bolted on afterwards as a task that you or some other team will go do once the finished application or tool exists, where someone else tries to plug it in and get it used. I want you to let go of that thinking and see it as a high-risk way of doing things.
Number three, key idea number three: adoption of data products only matters if it is meaningful adoption. The old mindset here is counting usage metrics, and I see this quite frequently. How many unique users logged in? How many minutes did they spend on average? How many interactions did they have? We look at Google Analytics equivalents, whatever your internal metrics are; chances are, if you're deploying something through a tool, you have some way to count things.

So what's the shift? Meaningful adoption isn't about sessions and durations. It's about reducing frustration, increasing delight, and then seeing downstream fiscal value emerge as a result of facilitating meaningful adoption of the solution. This also assumes that if we're increasing satisfaction and making better things, we have a baseline from which to measure the change from status quo to satisfied, or to delighted, depending on how far up you want to take things. If we cannot measure, or we do not have a baseline from which to measure where we're starting, then we really cannot measure the change that has happened, and we're relying on hearsay and squeaky voices to tell us if we did a good job.
I do care about qualitative feedback from important stakeholders; I think that's okay here. But adoption actually needs to be defined: what does meaningful adoption mean? And then, what is unmeaningful adoption? What does that look like? The change is what we need to design for, and we need a definition of "meaningful," because that's not the same thing as adoption. Why? Because high usage metrics, as you've probably heard me say on the show before, can actually mean there's a problem with the solution: it's clunky; it takes too much time to get the thing you want out of it. "Wow, customers on average used our customer support bot for 42 minutes a week," or "whenever they had a ticket, an average of 42 minutes; that must be great." Except we know that in the customer service world, more time is not good. Generally, less time spent is what we're going for. The problem is, if you don't know how much time an activity should actually take, there's really no way to know if any of the signal you're getting is meaningful. Meaningful adoption is a qualitative state of adoption: there's meaningful and there's not meaningful, and there's a range. If nobody can define or describe what "meaningful" means, then it's all hearsay about whether we're actually doing work that matters here.

So, an example: a sales team dashboard that requires 12 clicks per update.
You're getting lots of heavy usage; you can see they're clicking all over these different screens, et cetera. But when you actually do a one-on-one with one of these people, you find out they're quite frustrated by the solution: how many clicks it takes; they have to export stuff, put it in their other tool, bring it back into the CRM, update some fields, recalculate some model score, and on and on. Counting analytics really tells us how much the product is being used, and when. But if we're actually measuring user experience and meaningful adoption, we need to be dialed into feelings. I know it's scary to talk about feelings on a data podcast, but feelings have a lot to do with whether adoption is going to be meaningful. They tell us how successful this person feels. Success may simply be "I don't want to spend any time here," like going to a public bathroom: the best experience may be the shortest and most invisible experience possible, where you feel like you were never even in that bathroom; you're just in and out. Maybe there's a way to make that even better, but oftentimes, particularly in business software, we're simply making usability and utility really high. That might be what we're looking for, but someone has to define what useful and usable actually mean, and what not useful and not usable mean, because the change is what we're after. The change is the score of the game we're playing now.
spk_0
So what can you do with this one, key idea number three, that adoption of data products only matters if it is meaningful adoption?
spk_0
Well, establish a baseline of what it means to have meaningful adoption and non-meaningful adoption.
spk_0
There's no right or wrong answers to these things. Sometimes, especially when you're doing zero-to-one product work, you might have to put a hypothesis out there.
spk_0
Get it out, test your hypothesis against reality, and then recheck your assumptions. Maybe only a tenth of the people have even touched this thing.
spk_0
Okay, but now we have some signal back. Maybe in round two, when we go back and revisit what we've created and whether we've created value, we can adjust our baseline accordingly.
spk_0
But for those of you working in large companies, a lot of times there's already existing processes and product technology solutions and all this kind of stuff.
spk_0
It shouldn't be that hard to establish a baseline of what good adoption means because chances are there's already bad adoption or bad user experience or usability problems, utility problems, trust problems, speed problems.
spk_0
There should be a way to define this. There must be a way, because if someone is literally paying for this product, it's probably because there were enough negative attributes to justify it.
spk_0
The issue is when we don't really get into defining these success criteria and metrics up front and instead we wait until the end.
spk_0
And then we just find out at the end what scorekeeping method the people in charge are actually using.
spk_0
And I will say this: you cannot necessarily ask users what a good user experience for them would be.
spk_0
They're going to start just thinking aloud creatively and they might give you a bunch of information that could be false signal here.
spk_0
I would rather you understand, first of all, that past behaviors are a better indicator for setting a baseline, because that's what's actually happening regardless of what people say is happening, and those aren't the same thing.
spk_0
But you should also know that stakeholders and/or users, if you're involving them in this process, may not know the answers to these questions.
spk_0
They won't be able to tell you what meaningful adoption means. But they're probably, especially our senior business partners,
spk_0
going to be glad that you're actually thinking about the quality here, that there is a definition of quality and a definition of low quality.
spk_0
And so it's not that they're hiding the information. It's that no one's asked the question.
spk_0
And so this early discovery work we do when we talk to stakeholders and users, this is an exercise in figuring out what it does and doesn't mean to have meaningful adoption.
spk_0
And the right answer is the one that you and customers and product owners and stakeholders all come up with together.
spk_0
And that, to me, is part of the work of product management: to put definitions around these things such that we can play a game that can actually be measured, so that we know the score of what we're doing.
spk_0
And we're not scoring project management metrics, such as did we deliver the tech to spec, on time.
spk_0
This is project management. It's not product management.
spk_0
Okay, number four last one for today. Thanks for hanging in there.
spk_0
Solving active problems is good, but the big unlock is when you can surface and solve for latent problems.
spk_0
Let me say it again, solving active problems is good, but the big unlock is when you can surface and solve for latent problems.
spk_0
So old mindset, my job is to respond to stakeholder requests and build and deliver this data driven solution or AI solution as requested.
spk_0
The shift here is that the most valuable work that we might create for our stakeholders and organization is going to be when we can solve for these unspoken hidden needs that stakeholders and or users never thought to ask for in the first place.
spk_0
I kind of mentioned this earlier, but I think this one deserves to be a key idea of the four in this particular episode. This is really important.
spk_0
So let's look at an example. An analyst may field weekly ad hoc requests from the marketing department, like, let's say, "I need a list of top customers by revenue."
spk_0
And so a PM mindset might see a latent problem that looks like this: well, it sounds like these marketing stakeholders lack a way to explore customer lifetime value.
spk_0
And they believe that looking at top customers by revenue (this is the technical delivery ask, the output or solution ask) is the way to explore this customer lifetime value.
spk_0
And this is a proxy for doing that. Furthermore, with a product or research mindset, when we dig into why they care about any of this, we find out more.
spk_0
They're actually trying to do narrow-band targeting in their campaigns, and they believe that customer lifetime value, or revenue, which is a proxy for them for customer lifetime value,
spk_0
is the way to do narrow-band targeting in their campaigns. And why do they care about that? Well, they care because they want to be able to show, hey, I spent all these marketing dollars, and look at my CPL, my cost per lead, or my CPA, my cost per acquisition, or whatever game they're playing in their little world.
spk_0
That's ultimately what they care about. So they've given you a solution that they think will help them go solve their problem and play their game better.
spk_0
And sometimes when we do this laddering exercise, asking these why questions and asking them to open up, we might realize that, okay, really what you're doing is trying to do better targeting.
spk_0
And you think looking at our top customers by revenue, the amount of revenue they generated for us, is the way to find other clients like them that we could double down on in our marketing effort, or whatever it may be.
spk_0
And this is where the data person might say, you know, we could actually create a model here that could look at more than just customer lifetime value.
spk_0
And in fact, we might know something about what actually drives people to buy. And how would you, dear marketing person, feel if the data suggested that there were other metrics, in machine learning language we're thinking about features now, other attributes of this customer that might actually be more related to a buying signal?
spk_0
And so now we're into a different kind of conversation, but this VP of marketing is now hearing that you are dialed into their need, which is: I need to show effective, ethical, and reasonable spend of our marketing dollars.
spk_0
I need to show an ROI when I go spend our money on marketing, and marketing is really hard. And they don't want to go spend a lot of money on campaigns that don't work, because they have to answer for that.
spk_0
But now they know that you're on their team and you understand that the underlying need here is really not to get a dashboard that has top customers by revenue.
spk_0
It's really to do narrow-band targeting in their, say, advertising campaigns.
spk_0
Now we're dialed in at a different level, and we understand what's in it for this VP and how we will make that VP successful in their job.
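As an aside, that "other attributes might be a better buying signal than revenue" idea can be sketched in code. This is purely a hypothetical illustration with made-up column names and synthetic data, not anything built in the episode; it just shows how a simple model can rank customer attributes by how predictive they are of a repeat purchase:

```python
# Hypothetical sketch: a propensity model that looks beyond revenue
# (the CLV proxy) to find other attributes related to a buying signal.
# All feature names and data here are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 500

# Synthetic customer attributes (assumed features, not from the episode)
revenue = rng.gamma(2.0, 500.0, n)            # past revenue, the CLV proxy
support_tickets = rng.poisson(3, n)           # engagement signal
days_since_login = rng.integers(0, 365, n)    # recency signal

# Construct the "bought again" label to depend on recency, not revenue,
# to illustrate that revenue may not be the strongest buying signal.
bought = (days_since_login < 60).astype(int)

X = np.column_stack([revenue, support_tickets, days_since_login])
feature_names = ["revenue", "support_tickets", "days_since_login"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, bought)

# Rank features by importance; recency dominates here by construction.
ranked = sorted(zip(feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked:
    print(f"{name}: {importance:.2f}")
```

The point of a sketch like this in the conversation with the VP is not the model itself; it's that feature rankings give you evidence to discuss which attributes actually relate to buying, instead of assuming revenue is the right proxy.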
spk_0
So again, let me restate the key idea: solving active problems is good, but the big unlock is when we surface latent problems and solve for those latent problems that have not necessarily been expressed yet.
spk_0
So what can you do with this one?
spk_0
Again, the answer is usually going and talking, observing, and mostly using your ears to learn what it's like to be this VP: engaging with them in discovery conversations to understand what's hard about doing campaign targeting now.
spk_0
Like, how do you know when you got it right and when you got it wrong? Are there numbers that we can attach to effective and non-effective?
spk_0
Where did they get the idea to use customer lifetime value in the first place? Is that best practice in the industry? Is it the boss's orders?
spk_0
So now we know there's some politics involved, and belief about what matters and what doesn't, which may or may not be factually true, but it's an important thing for us to understand. If that's actually how the business operates, then that may be all that matters right now at this stage.
spk_0
Are the incentives aligned with this VP of marketing following orders, quote unquote, to use the correct metrics and to deliver, or are they really empowered to deliver the best results on their campaigns, regardless of how they do it?
spk_0
We need to know what the incentives of this VP are, because you might be able to come up with a machine learning model that could do a better job of segmenting the users, et cetera.
spk_0
But if this person can't back it up and can't say, "I did use the CLV measurement, customer lifetime value, as my main way of running this campaign, and here are the results; maybe it didn't do super well, but I did follow instructions."
spk_0
I did do what we were asked to do.
spk_0
You need to be dialed into that, because you can be technically right but effectively wrong, as you know if you've listened to this show for a long time.
spk_0
Giving this VP of marketing a technically right, but effectively wrong solution that they're going to ignore or not use that is not a win.
spk_0
That might be an academic win for the paper you go write at a machine learning conference about the propensity model that you built, but you're actually not creating any value for the org, and since you're in product, your job is to create value.
spk_0
So that's it for today; I'm just going to restate these. Thanks for hanging in there. There were four key ideas for data professionals moving from traditional data work to analytics and AI product management.
spk_0
These are mindset shifts, and there are four big ones that I was thinking about today and shared with you.
spk_0
The first one: analysts give answers, and product people ask good questions.
spk_0
The second one, change management is something we design into solutions. It's not something that's bolted on afterwards and delivered by somebody else.
spk_0
I'm being super black and white with that. I am not saying there is never a need to have any type of change management process.
spk_0
That's not what I'm saying. What I'm saying is let's not rely on change management to be the determinant of whether or not this data product is successful or not.
spk_0
There is a whole lot that we can do to make better things that don't require as much forced change, encouraged change, training, threats, whatever it may be.
spk_0
That's not generally the best way to get people to change. We want to make it in their interest to change.
spk_0
Number three, adoption of data products only matters if it's meaningful adoption, which means someone has to define what meaningful and non-meaningful adoption actually mean.
spk_0
Big idea number four: solving active problems is good, but the big unlock is when we surface and address latent problems, problems that are not currently being expressed verbally or in writing.
spk_0
Okay, that's it for today. If you need help doing this type of work on your data product initiatives or with your team: if you got all the way to the end, I'm happy to say, as I have been hinting at, that my new cross-company group coaching service is in soft launch.
spk_0
I am building the first cohort out right now. You can hop on to that over at designing for analytics.com slash group coaching, whether you're focused on your career advancement as a data and AI product management professional or whether you're looking for some support.
spk_0
Maybe you've been doing this kind of work and you need a little bit more support dealing with day to day challenges that come up in your unique environment.
spk_0
That is really the promise of what my small-group, cross-company coaching is all about. You're going to get support from me personally, as well as from your peers; we'll be doing hot seats.
spk_0
There will only be 12 people in the first group; that will be the cap. So once that's capped out, people have to wait for a seat to open, and we will launch as soon as there are five people.
spk_0
So if you move early, not only will you get access to lifetime pricing, but you will also get the benefit of the smallest group size if you value a very small group size.
spk_0
So there's that. And if you feel like you have a good grip on what you're doing but you kind of feel like you're working alone, I do have a community that I started along with a large group of data product professionals.
spk_0
It's called the data product leadership community. And this is really a place for you to both contribute your knowledge, your experience and your learnings along this road with others. And then of course to learn from them and to get feedback on your ideas.
spk_0
Because you are not alone. There are other people that are applying product management and user experience design ways of working to data science and analytics work.
spk_0
So I encourage you to visit those two links: group coaching, again, is designing for analytics.com slash group coaching, and the community, the DPLC as we call it, the data product leadership community,
spk_0
is available over at designing for analytics.com slash community. Till then, talk to you soon.
spk_0
We hope you enjoyed this episode of Experiencing Data with Brian O'Neill. If you did enjoy it, please consider sharing it with the hashtag Experiencing Data. To get future podcast updates or to subscribe to Brian's mailing list, where he shares his insights on designing valuable enterprise data products and applications,
spk_0
visit designing for analytics.com slash podcast.
Topics Covered
data product management
analytics applications
AI product management
mindset shifts
user experience design
decision support
latent problems
active problems
product management role
data professionals
custom enterprise data products
stakeholder value
user research
questioning techniques
customer needs