The Research “Impact Problem”

In the rapidly evolving landscape of Big Tech, tensions between qualitative insights and business outcomes have come to a head. As budgets get tighter and layoffs continue, it's imperative that qualitative Researchers prove their value. This is the core of the Impact Problem: How do we impact products and services to show our value to Business People?

This article delves into the context and causes of the Impact Problem, exploring the nuanced differences in measuring Product Impact vs. Business Impact, and why the current system sets qualitative research up for an uphill battle.

True Confessions

When I started out as a UX Researcher back in the early 2000s, I didn't care much about the Business, or helping the Business make money.

My mission in life was to make products better for people. To ensure that they actually solved real problems humans have, and to stop us from shoving features people don't want into experiences just because it's "Good for the Business."

I used to think "Good for the Business" was synonymous with "Something nobody wants, but makes the Business money." Or, as my favorite research participant so eloquently stated:

"$#!% I care about versus $#!% [the company] wants me to care about." — Best Participant Ever

I used to think user needs and Business goals were AT ODDS with each other and that it was my job to "Stand up for the user":

Josh to the rescue! Picture the Powerpuff Girls swooping in to save the day. (That's me, Bubbles, on the left.)

Over time, it felt more and more like my job was to figure out how to position the Product so that people would want to use it. Or to adapt the feature so that people would do the thing that makes the business money. And yes, the business “needs” to make money… if it doesn’t, then we’re all out of a job.

I didn't realize there was a middle ground: connecting user needs to business goals. The two sides aren't really at odds with each other, they actually reinforce each other.

Of course there are important business drivers to consider, and as UX People, we need to do a better job of understanding them. We also need to be better about how we communicate Experience Needs to Business People so that, together, we can come up with better features that both solve human needs and make the Business money.

While Business People (like Leadership, Marketing, and PMs) advocate for Business Outcomes, like profitable products, UX People balance out the other side with Experience Outcomes. It's a push/pull. A yin/yang balancing of forces that integrate and come together rather than oppose each other.

When UX People work together with Business People, we can create successful products that solve human needs and make the Business money.

But this is really difficult when we aren't speaking the same language. So when it comes time to attribute credit for who impacted the Product (and the Business), we're sadly not considered for our value as much as we should be, because we're unable to "prove" it using the language of the Business. And that brings us to the Impact Problem.

Let's take a quick look at the events of the last few years that led us to this point.

What Just Happened*

The COVID-19 Pandemic was a Black Swan event: unexpected, unpredictable, and with far-reaching effects. We are still dealing with its aftereffects.

Many Big Tech companies over-hired during the COVID lockdowns, which led to much larger workforces. This was a great time for UX, and it seemed like we would all have remote jobs. Yay!

The larger workforce gave companies the opportunity to reprioritize employees to support the AI explosion and the cloud-services infrastructure underpinning it.

Then, in 2023, mass layoffs created record profits (see also: Observer Earnings Recap) and increased shareholder value in Big Tech. The tipping point of hiring had been reached, and something had to give.

Many day-to-day products were left stagnant because budgets were put "on hold" (which is also synonymous with "Shareholder value"). Researchers who survived the layoffs got fuller plates with fewer resources, and mental health plummeted under the constant threat of being laid off.

"According to the 2023 Mental Health in Tech Report, a growing number of individuals in managerial roles within the tech industry are experiencing heightened levels of depression and anxiety concerning their professional prospects." — Forbes

The COVID dream of remote work fizzled as employees were forced back to an office no one wants to return to. And despite best efforts to have everyone in person, having just one remote employee means that every workplace is now Hybrid, except for Elon Musk's X.

"Despite studies showing that hybrid working has a flat or even positive impact on productivity, leaders still lack confidence that their hybrid teams are effective." — Forbes

Businesses are being conservative with their budgets, so they want to ensure that every employee is helping them to succeed. It is within this context that the UX Research discipline has had to come to terms with showing our value to the Business.

*Note: "What Just Happened" is the title of a wonderful book by James Gleick that I read in the early 2000s when it came out. I use this heading in homage to James.

The "Impact Problem"

In 2023, Judd Antin called for a "UX Research Reckoning." Antin identifies Middle-Range Research, which focuses on "user understanding and product development," as the problem, and the priority of the Business over the User as the cause.

"When companies lay off workers, they’re making a statement about business value. When a discipline gets the disproportionate axe, as UX Research has, the meaning is pretty clear." — Judd Antin

Business People don't understand the value of UX People. How do we show them?

How do Qualitative Researchers prove our value to the Business to ensure our jobs don't get cut because some Business Person doesn't understand what we do?

Basically, our impact on the Product and Business has come into question.

Regardless of the methods we employ, I believe all well-executed and rigorous UX Research can be used to inform Impact, as long as we can logically connect it to the Metrics that Business People care about.

The rest of this article is about how to build this bridge. Let's start with the Metrics themselves.

Metrics and ROI

Part of the problem of communicating with Business People is the language and terminology we use: we simply use different terms, coming from different perspectives, to talk about the same things. Let's take a minute to ensure we're talking about the same thing.

I'm going to use the words "Metrics" and "KPIs" (or "Key Performance Indicators") synonymously. I am aware that there is a nuanced difference between Metrics, the umbrella term, and KPIs, which are more narrowly defined and tied to a success outcome.

Both terms refer to the variables you measure that you believe are key (or, positively influential) to your Product/Business Success—success being defined as happy customers or profitable businesses, accordingly.

And some Metrics fall in between, like Satisfaction:

  • If it’s CSAT and in a quant survey and Business People think it leads to Success (Or $$$ for the Business), then it’s a Business Metric

  • If it’s measured qualitatively in a Small n Usability study, it’s an Experience Metric

Metrics and KPIs are the data points we use to inform the Impact we're trying to create. They are measurable ways of evaluating how your Product is doing in your market and how your Business is doing financially.

ROI, or "Return on Investment," is typically a monetary Metric when Business People use the term. Business People use ROI to talk about what they get out of what they put in: I invest X dollars and I get Y dollars back. The return is the difference, and what comes back should be higher than what was put in, X < Y.

A positive ROI is the same thing as Business Impact: if your Business KPIs go up and to the right, your “Return” on whatever action you performed was worthwhile, since it made the Business more money.
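
To make the arithmetic concrete, here's a minimal sketch of the calculation in Python. The dollar figures are invented for illustration; Business People usually do this in a spreadsheet:

```python
def roi(invested: float, returned: float) -> float:
    """Return on Investment: the gain (or loss) relative to what was put in."""
    return (returned - invested) / invested

# Hypothetical: invest $100k building a feature, attribute $130k in revenue to it
print(f"ROI: {roi(100_000, 130_000):.0%}")  # ROI: 30% -> X < Y, positive Business Impact
```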

There are cases, however, where ROI is difficult, if not impossible, to measure.

Let me give you an example: if Research told you not to build Feature A because your customers hated it and said they would never use your product again if you built it… and you decided not to build Feature A, how much did you save? It’s not possible to know. You can only be glad that you avoided a potential pitfall. This kind of impact is intangible. You can't measure it and you can't understand the possible implications because you didn't build the feature.

Of course, if you do build Feature A and your company's stock tanks, you know that you probably should have listened to your UX Researcher… alas, it's too late, and the company went bankrupt or decided to "increase efficiency," which is really just a euphemism for layoffs. This impact is certainly tangible, in that it can easily be measured, but it's not desirable.

Types of Impact

This leads to additional nuances of understanding types of Impact.

  1. Human Impact — the research we conduct informs how to make the lives of humans better. The impact you make is directly "Good for humans" and Experience KPIs inform this impact. The impact is qualitative and intangible.

  2. Product Impact — the research we conduct informs how to make the Product better for the humans that use it. The impact you make is indirectly “Good for humans" and Experience KPIs inform this impact. The impact is tangible and measurable.

  3. Business Impact — the research we conduct informs how to make the Business more profitable. It's "Good for Business" and Business KPIs inform this impact. The impact is quantitative and measurable.

The Experience Metrics that inform Human and Product Impact are the same, because a human has to use a product or service in order to have an experience; the two are essentially intertwined.

So, while a UX Person might be talking about an Experience Metric positively impacting the experience for our customers, a Business Person will be thinking about a Business Metric and how the Business makes or saves money.

But the way to inform Business Impact is through quantitative data, Business Metrics/KPIs, that UX People don't have access to. This is key: we're asked to prove impact with data we didn't collect and can't access in the first place. Qualitative Researchers thus have an especially difficult road ahead because of all the steps that have to happen in order to prove their impact.

Negative Impact

We usually talk about Impact as positive Impact. However, there are also cases of negative Impact:

  • Negative Product Impact: Your customers dislike the changes you made and use your product less frequently… this leads to the next type of impact…

  • Negative Business Impact: the Business lost money, or made less money, based on changes that were made to the Product (X > Y)

You can typically spot Negative Business Impact when a VP suddenly takes a leave of absence to "spend time with family," ensuring they don't cause any future Negative Impact on the Business, a layoff in all but name.

The cost of negative Impact is very high (you can literally lose your job), which is why Business People are so concerned about it. And it's also why you should be concerned about it, too. The stakes are high, so you'd better be certain that you're right. Major changes to your product are less frequent because, usually, it's just better not to rock the boat for fear of causing negative Impact.

Informing Impact: The Typical Process

Let's take a look at what the typical process looks like for a Corporate Big Tech Qualitative UX Researcher…

Product Impact: Showing that Recommendations Benefit Users

UX People are trained to design, conduct, and moderate studies, then analyze and synthesize the qualitative data we collect into answers to research questions, measuring Product Impact using Experience KPIs. We focus on users and products, sometimes so much that we miss the broader business context.

  1. Researchers must make a Recommendation based on data they've collected to inform the problem

  2. Target Metrics/KPIs have to be identified that make sense to be impacted should the product Recommendation be accepted and implemented as intended

  3. Baseline data has to be in place already (or gathered separately) to compare to later on

  4. Said Recommendation has to be prioritized by a PM above the magic "cut line" (an arbitrary threshold that has more to do with the PM looking good than with the impact to the user experience, or the developer time, which it purports to measure)

  5. Designers inform the Recommendation as well, ideally making the experience consistent with the Design ecosystem

  6. Developers have to implement the Design in the way it was intended to work

  7. Data Scientists must work with Developers to ensure that the Recommendation can be measured in the code after it's been developed

  8. Once it's been coded, ideally there's time to perform a usability test on the new feature to ensure it's easy to use (or even possible to figure out at all; yes, I know I'm jaded)

  9. The Recommendation has to be launched in such a way that it aligns with the original Recommendation and can be tied back to the Researcher

  10. The metrics identified above, in Step 2, should move the way you hoped they would (Up and to the right)

  11. Success! You impacted the Product.

Theoretically, this leads to higher customer satisfaction and utility of your product. The experience of using the product got better. But this isn't what matters to Business People.

As UX People, we assume an implicit connection between the Experience Metrics and positive Business Metric growth, but because it’s not tangible, or measurable, it’s hard to make that implicit connection explicit. And this is why the Impact Problem is so frustrating.

Business Impact: Showing that Recommendations make the Business more Profitable

Business People measure Impact numerically and more specifically, monetarily. That is, $$$. Capital. Dollars. Euros. Yen. Whatever currency you work in. Impacting the Business means helping the Business to either make more money (preferred) or save money.

Ok, so what if I want to prove Business Impact? What's that like for a qualitative Researcher? Let's look at the daunting task of attempting to prove Business Impact. Since Impact can only be assessed over time, you have to start by waiting: gathering data to see if and how usage changes.

This already assumes that it is currently possible to measure the Business KPIs needed to prove Impact. I say possible because sometimes the instrumentation isn't in the code to measure the things you need. Very frequently, instrumentation is built as the feature is being built, so it's important to plan ahead for how you want to measure success.

  1. Implement the recommended change/new feature as recommended by the Researcher

  2. Wait 3 months to gather usage data on the new feature

  3. Go back to the KPIs defined above and compare your baseline data with new data from the past 3 months of new usage data

  4. If your KPIs are Experience KPIs, you might be lucky enough to have satisfaction and utility metrics defined as operational definitions of success which you might even have the ability to measure and track yourself… (However, you might not have a high enough N to convince Business People!)

  5. Talk to a Data Scientist to see if they have access to the Business Metrics and see if they can run a report for you — this is possible if you have a good relationship with your Data Scientist. If you don't, you're probably out of luck since they aren't often incentivized to help you prove why your discipline is valuable to the business (Wah-wah).

  6. Talk to a Business Analyst or VP (If you even have access to one) to see if you can get access to this data — Usually, the answer is "No," it's above the Researcher's pay grade to have access to "sensitive" Business Metrics (Remember, Experience KPIs are different from Business KPIs, and Business People don't want you to know how much the Business is actually profiting based on your work)

  7. Pray to the UX gods that you have a manager who supports you and knows someone with access to the elusive Business Metrics from the previous step. Maybe they can get ahold of the data to help you without taking credit for your work?

  8. Keep track of this and every other Recommendation you've made over the past year (yes, hundreds of them!) so you have a write-up of your "Impact" to prove your value to the Business, since it might take a while to get access or convince others to help.

  9. Again, pray to the UX gods, this time that the ever-nebulous "Shareholder value" doesn't take precedence over your job function and get you laid off

And somehow… this has to be done ALL THE TIME: while you're running other research projects, working with your stakeholder team, and advocating for your customers to build products that solve their problems instead of products that just try to make the Business money.

In terms of positive impact on your career growth, Business Impact is the more important of the two. That’s because Business People make promotion decisions based on their definition of impact… And that’s based on hard numbers, the one thing qualitative researchers don’t have access to, and other disciplines aren't incentivized to share.

(Meme: "This is why we can't have nice things," via Know Your Meme)

Causes of the "Impact Problem"

Hopefully, you've read some things that either resonated with you so strongly you want to cry (me too), or angered you because it's not your experience (calm down, I'm being provocative to illustrate the extreme scenario).

I've already mentioned the lack of clarity between Product vs. Business Impact and the nuanced differences in how Experience Metrics and Business Metrics inform Impact. Conflating these terms and using the same language to mean different things only exacerbates the communication problems between UX People and Business People, who already come from different interpretive universes.

Aside from the communication and terminology issues, three additional causes reinforce each other to perpetuate the Impact Problem:

  1. Competing Epistemological Systems

  2. Threats to Validity in Interpretation

  3. Systemic Incentivization and Gatekeeping

1. Competing Epistemological Systems

The philosophical branch of Epistemology asks two key questions, “How do we know what we know?” and “How can we be certain of what is True?”

Quantitative and Qualitative data come from two different systems of evaluating the nature of the evidence we use to say that a statement is True:

  • Qualitative data is non-numeric in nature; it can be observed, but not measured in the ways Business People think about measurement. Truth is determined through narrative, thematic analysis, and the coherence of explanations within the context of data collection as it pertains to the research questions.

  • Quantitative data is numeric in nature and can be measured and quantified. Truth is determined through statistical significance, reproducibility, predictive power, and correspondence to observable facts about the world.

Perhaps some of this sounds basic, but the differences between these two data types have deep implications for the criteria we use to evaluate the veracity of statements or beliefs, aka how we know what is True. And at the end of the day, agreeing on how we evaluate Truth is the key to unlocking the Impact Problem.

Cross-System Evaluation

Many of the situations qualitative Researchers face when attempting to prove their impact stem from the challenges that ensue when taking data from one system and applying the criteria of measuring Truth from the other.

Let's start with some examples to show what this looks like in a real corporate setting, and yes, all of these have happened to me:

  • This is why qualitative researchers tire of hearing other disciplines say, “You only talked to 10 people, that isn’t statistically significant, therefore, I don’t need to act on it.”

  • After sharing an incredibly painful video of a participant completely lost and frustrated with your product, another discipline will invalidate that user and their experience by saying it was, "Just one person."

  • It's also why we groan, sometimes audibly, when another discipline talks about WHY customers are doing X and Y while basing their statements on quantitative data (which can only tell us how many people did something, not why they did it).

  • Saying that 80% of customers have a problem, when it was only 8 out of 10 of a representative sample that we tested qualitatively.

Perhaps you, too, are testy about percentages being used outside the context of statistical significance, and with samples rather than populations. You're not alone.
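
To see how misleading that "80%" framing is, here's a minimal sketch of the uncertainty around 8 successes out of 10, using the Wilson score interval (one common choice for small-sample proportions; the method is my illustration, not something the original study would have reported):

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a proportion (Wilson score)."""
    p = successes / n
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    margin = (z / (1 + z**2 / n)) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

low, high = wilson_interval(8, 10)
print(f"8/10 -> anywhere from {low:.0%} to {high:.0%} of the population")  # ~49% to ~94%
```

In other words, with n = 10 the honest quantitative statement is "somewhere between roughly half and nearly all," which is exactly why small-n qualitative findings are better reported as patterns and narratives than as population percentages.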

You can’t evaluate what Truth is, from an epistemological standpoint, with data from one system and the criteria of measuring Truth from the other. The two competing systems are constructivism and positivism/logical empiricism.

  • Constructivism, associated with qualitative data, views Truth as subjective and context-dependent

  • Positivism/Logical Empiricism, associated with quantitative data, tends towards an objective notion of Truth

Is there truly an ultimate, objective Truth? Or does each human experience the world in their own subjective way?

These grand philosophic questions are beyond the scope of this article, however, I pose them in order to get to the root of the Impact Problem:

Qualitative Researchers are being tasked with proving Business Impact using the rubric of quantitative data, which exists in a different epistemological universe from the initial qualitative data that was collected to impact the Product in the first place.

Not only does this cross-system evaluation create cognitive dissonance, it doesn't make pragmatic sense, either.

What should you do when you encounter situations of cross-system evaluation? How do you keep them from happening in the future?

Recommendations & Discussion

  • Be honest about what types of conclusions we can and cannot draw from the type of data we’ve collected.

  • Educate others when they apply the criteria of Truth of one system to the data of another.

  • Triangulation is your friend. The most robust recommendations come from across multiple studies, data types, and sources.

  • Telling a compelling story across studies, data types, and disciplines will bring your stakeholder team together around the common goal of impacting the Product and Business.

A deeper implication is the underlying equality of quantitative and qualitative data. Business People treat quantitative data as king, but both data types and criteria of Truth evaluation systems are needed to come to a holistic picture of Truth.

2. Threats to Validity in Interpretation

Competing epistemological systems for evaluating Truth lead to blind spots, or "Threats to Validity," when interpreting data. Even if we interpret the data in a way appropriate to its epistemological system, problems can still arise in interpretation.

I want to further distinguish two additional types of "Impact," because the way they are evaluated is unique and, like the nuanced distinction between Product vs. Business Impact, they are often conflated:

  • Prospective Impact is based on regression analysis; it looks ahead into the future and assumes that what was true in the past will continue to be true. Business People, who deal in the currency of quantitative data that informs Business Metrics, think of Impact prospectively.

  • Retrospective Impact is understood by looking backwards to measure before/after effects of a change. UX People, who deal in the currency of qualitative ideas that inform Experience Metrics, think of Impact retrospectively.

Problems with both approaches must be acknowledged. (See also Prospective and Retrospective Studies).

Prospective Impact & Black Swans

Prospective Impact uses regression analysis and other predictive models to forecast future trends based on historical data. It assumes observed and measured patterns from the past will continue into the future.

After 20 years in the industry, I've finally figured out how Business Metrics, and thus ROI, are calculated (a minimal sketch of the recipe follows the steps below):

  1. Identify the variable(s) or KPIs you care about.

  2. Run a regression analysis and project into the future.

  3. If it goes up and to the right, you’re making the right decision: Success!
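
Here's that sketch in Python, with invented monthly KPI numbers and a simple linear fit standing in for whatever model a Data Science team actually uses:

```python
import numpy as np

# Step 1: the KPI you care about (hypothetical monthly active users, in thousands)
months = np.arange(12)
mau = np.array([100, 104, 107, 112, 115, 121, 124, 130, 133, 139, 142, 148])

# Step 2: fit a regression line and project it into the future
slope, intercept = np.polyfit(months, mau, deg=1)
forecast = slope * np.arange(12, 15) + intercept  # next three months

# Step 3: up and to the right? Success!
print(f"Projected MAU (k): {np.round(forecast, 1)}")
print("Up and to the right!" if slope > 0 else "Time to reorg.")
```

The projection is only as trustworthy as the assumption baked into Step 2: that the future will behave like the past.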

Advantages
Prospective Impact is based on quantitative Business Metrics and can be used to justify just about anything. The benefit of prospective impact lies in strategic planning for the future by identifying risks and growth areas. It can also help Businesses make decisions about resource allocation, product development, and market expansion.

Quantitative Researchers have a leg up here because their data more closely resembles the "rigorous" data that Business People are looking for as a way to show Impact through statistical significance.

Limitations
The biggest threat to validity is the assumption of continuity: that historical data trends will persist into the future. And the predictive ability of Prospective Impact is only as good as the data it's based on.

Changes in customer behavior, market conditions, global pandemics, and technological revolutions such as the explosion of AI can't be accounted for. And therein lies the problem, which Nassim Nicholas Taleb calls a "Black Swan."


A Black Swan, as defined by Nassim Nicholas Taleb in his book by the same name, is an unpredictable event, like the COVID-19 Pandemic, that seemingly comes out of nowhere and has significant and widespread consequences.

“A black swan is an event, positive or negative, that is deemed improbable yet causes massive consequences.” — Nassim Nicholas Taleb

Additional examples of Black Swan events include the 2008 Financial Crisis, Brexit in 2016, and the American Capitol Riot on January 6, 2021.

Taleb criticizes the use of regression analysis and other statistical models that assume a normal distribution of data for their reliance on the continuity of past trends into the future. This creates a vulnerability in financial forecasting and planning, leading to the underestimation of the likelihood of Black Swan events. We then use hindsight to show that we could have predicted the event, minimizing its unpredictability in the first place and overestimating our ability to predict future events.
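
To make the underestimation concrete, here's a minimal sketch comparing tail probabilities under a normal distribution versus a heavier-tailed Student-t. The cutoff and the specific distributions are illustrative assumptions on my part, not from Taleb's book:

```python
from scipy.stats import norm, t

cutoff = 5  # an extreme, "5-sigma"-style outcome
p_normal = norm.sf(cutoff)    # thin tails: the assumption behind most regressions
p_heavy = t.sf(cutoff, df=3)  # heavy tails: extreme events are far more common

print(f"Normal model:  ~1 in {1 / p_normal:,.0f}")  # ~1 in 3.5 million
print(f"Heavy-tailed:  ~1 in {1 / p_heavy:,.0f}")   # ~1 in 130
```

Under the thin-tailed model, the extreme event is essentially impossible; under the heavy-tailed one, it's merely uncommon. That gap, orders of magnitude wide, is where Black Swans hide.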

Business Metrics, and thus Business Impact, are based on a system that has massive problems when it comes to predicting the unpredictable. Since Black Swans cannot be predicted, we cannot be certain of our ability to predict the future. And yet, we continue to act as if we can.

Regression analysis and Business Metrics are not a foolproof way to predict Impact.

And, if you follow Taleb's line of reasoning, regression analysis should be done away with altogether. But I'll let you read his book on your own and come to your own conclusion; it's a compelling read.

Retrospective Impact & Confounding Factors

Ok, so if we can't project past data forward to show impact, what about a more conservative way of measuring impact: looking back at changes in the past?

Measuring Retrospective Impact involves analyzing historical data to understand the before/after effects of a specific change, such as usage metrics of your product before and after the launch of the new feature you recommended.
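
Here's a minimal sketch of what that before/after comparison typically boils down to (the metric and numbers are invented for illustration):

```python
# Hypothetical weekly task completions, four weeks before and after the feature launch
before = [1180, 1225, 1199, 1240]
after = [1310, 1345, 1298, 1372]

baseline = sum(before) / len(before)
post = sum(after) / len(after)
lift = (post - baseline) / baseline

print(f"Baseline: {baseline:.0f}/week, post-launch: {post:.0f}/week, lift: {lift:+.1%}")
# Note: this shows a difference, not causation. Seasonality, marketing pushes,
# and every other concurrent change are candidate explanations too.
```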

Advantages
Looking retrospectively allows for the creation of an evidence-based decision-making framework grounded in actual data and impact. You can use these learnings to offer insight into what worked and didn't work in your product, allowing you to inform future decisions and experimentation.

Limitations
Isolation of Variables: Product Metrics may be influenced by other factors, such as changing multiple things in your product at the same time… You haven’t done that, have you? ;)

With so many A/B tests (or "Split tests") running at the same time, it's hard to pinpoint exactly which feature (or set of features) truly impacted customer behavior (only qualitative research can answer this question, by the way).
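
And the more concurrent tests, the murkier it gets: even if none of the features actually changed behavior, the chance that at least one test shows a "significant" win by luck alone grows fast. Here's a minimal sketch of that arithmetic, assuming independent tests at the conventional 5% false-positive rate:

```python
alpha = 0.05  # per-test false-positive rate

for k in (1, 5, 10, 20):
    p_spurious = 1 - (1 - alpha) ** k  # chance of at least one false "win" across k tests
    print(f"{k:>2} concurrent A/B tests -> {p_spurious:.0%} chance of a spurious win")
# 1 -> 5%, 5 -> 23%, 10 -> 40%, 20 -> 64%
```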

An additional challenge is having the data and conditions to "prove" impact. I put "prove" in quotes because we're not using a true Experimental Design here: no randomized and controlled trials, and no ability to claim causation.

The Post Hoc Fallacy is the assumption that, simply because one thing follows another in sequence, the former caused the latter. At best, when we're trying to show impact, we are hoping to show a positive correlation between our recommendations and Product or Business KPIs. Actually proving causation takes a lot more data and work, and it's not the typical case in Big Tech.

Notice I'm not using "proof" anywhere else: we can't really prove impact in the first place; the notion is a fallacy on its own. I use the phrase "inform impact" elsewhere in this article for this reason.

And this leads us to our next challenge. But first some recommendations.

Recommendations & Discussion

  • Recognizing the limitations of both Prospective and Retrospective analyses is crucial to developing a nuanced understanding of Business and Product Impact.

  • Be honest about threats to the validity of our studies and critically evaluate what counts as "proof" and what doesn't. Do this with others, ideally those coming from a different epistemological background than your own.

  • Define how success will be measured ahead of time, including which metrics will be measured and which you hypothesize will be affected. You will be putting yourself in a good position to evaluate them later on down the road, and the team will be bought into tracking those metrics in advance (to go deeper, check out these articles on Logic Models: one and two).

  • Both Prospective and Retrospective approaches can be rigorous and lead to Truth. It's just that in the Tech Industry, we often lack the level of time, budget, and control to do them well. Acknowledging this is also important.

  • Combine both approaches for a more comprehensive view of your Product success Metrics; this is really an argument for a Mixed Methods approach to all key Business decisions.

As a discipline, qualitative Researchers need to have a good picture of what impact can and should look like, on our own terms, using data we've collected, and interpreting it using our criteria of evaluating Truth.

If Researchers are required to prove Impact using the mental model of Business Metrics, we may need to up-level our skill at translating between the systems, being careful not to cross-system evaluate.

3. Systemic Incentivization and Gatekeeping

In addition to being asked to prove impact using data they didn't collect or have access to, qualitative Researchers find that the data needed to inform impact is gatekept by people who aren't incentivized to support them.

Philosophic issues aside, the system also needs to change.

The owners of the quantitative data are often the same people in charge of making the changes, so they just grab the data they need to prove they are right and do what they want.

Obviously, there is a huge threat here of confirmation bias when you only seek out data that supports your perspective and ignore data that may be contrary to your view.

Further, data owners aren't incentivized to share their data or to spend time helping prove someone else is right. If you're in a competitive (or "Toxic") company culture, you have to do everything you can to get ahead, be seen, and shine above your peers. This competitiveness also leads to mental health issues for you and your colleagues.

Thus, the qualitative researcher is reliant on the gatekeepers who own the data in order to prove their impact.

By the time the data exists, many months have passed since the initial Recommendation. The Product may have also changed considerably over that time, and in the interim, everyone else has also tried to impact the Product.

It's challenging to inform impact one way or the other; findings end up inconclusive because of the threats to validity mentioned above.

And everyone is fighting to be the person who gets the credit, and thus the good review, and the promotion.

Credit for Impact and Promotions

Research deals in the currency of ideas, which are hard to visualize. Design has an output that qualitative Research doesn't, and it's tangible because it can be seen. Thus, Design has it easier when it comes to impact: they literally designed the feature, so credit for Impact is more readily attributed to them.

Impact = Credit for Positive Business Metric Movement

Time and again, someone else got the credit for my Recommendations (and they have the Patent cubes to prove it!). Does it matter to the customer who made the recommendation, as long as the product gets better? No, of course not.

But it matters to the researcher who is frequently relegated to the back of the room while the pissing match of American Corporate business politics rears its ugly head as the loudest person, with a heavy dose of toxic masculinity, takes the credit for the whole team of people who did the work.

I was at Microsoft when the yearly review changed from stack-ranking employees to asking a question about how you helped impact the success of others. This small change had a massive effect on the culture, empowering people to collaborate in healthy ways instead of competing with each other.

Recommendations & Discussion

  • Don't fall into the trap of confirmation bias, seeking only the data that proves you right. Instead, seek out data and perspectives that don't align with your own. A good study should be equally able to prove or disprove your hypothesis.

  • Seek common ground between disciplines and collaborate with your stakeholder team to ensure that you are measuring the most important Business KPIs and defining and measuring your own Experience KPIs.

  • Befriend your Data Scientist. I'm not joking. Take them out for drinks and understand how they are being incentivized and what they need for a promotion. Then give them the one thing they don't have access to: WHY your customers do what they do.

The system for evaluating what impact looks like for Qualitative Researchers needs to change to be more reflective of the ideas and narratives that qualitative research provides.

We need to incentivize collaboration over competition in yearly reviews. When disciplines are incentivized to help each other out, there is more opportunity for holistic discussions about what is True for Users and the Product and the opportunity for deeper discussions about what success looks like for the Experience and the Business.

Implications & Concluding Thoughts

We've covered a lot of ground here because I want to be exhaustive in my thinking and approach. Naturally, you can go deeper into every area I've covered, and I encourage you to reference the sources yourself in order to go down the rabbit hole of ideas.

There is no single approach to solve the Impact Problem because every Business, just like every Team Culture, functions differently. But my hope is that understanding the problem and how Business People think will help UX People to show their value to the Business in a more effective way.

A short summary…

The current 2024 climate in Big Tech is still responding to the AI explosion, the 2023 layoffs that followed over-hiring during the COVID-19 Pandemic, and a general tightening of the purse strings, despite the record profits that many Tech Companies still enjoy.

Qualitative Researchers feel we have been hit hard, and we have been coming to terms with a reckoning of our own: how do we demonstrate our value and inform our Impact to the Business?

However, we have a hard road ahead because of the challenges inherent to informing Impact in the first place:

  1. Competing Epistemological Systems and how cross-system evaluation leads to confusion and future problems

  2. Threats to Validity in Interpretation, specifically in problems with Black Swans and Confounding Factors

  3. Systemic Incentivization and Gatekeeping continue to deny qualitative Researchers a seat at the table and ensure that others, with their access to Business Metrics, are considered for credit, and thus promotions, first

Three Key Strategies

Reflecting on the challenges with informing Impact leads to three key strategies:

  1. Learn the Business Metrics that matter to Business People at your Company and how they are currently measuring success.

  2. Reposition your insights and Recommendations to explicitly show how they inform and impact the Product and the Business.

  3. Collaborate with other disciplines who have access to the Quantitative Data, work towards triangulation of insights across data types and studies, and prioritize Mixed Methods approaches when applicable.

The Impact Paradigm Shift

It's time that we engage first as a UX community, and next with Business People, to discuss what "Impact" can and should look like.

  1. Collaborate with other disciplines and grapple together with what is True and how the Product and Business really get impacted.

  2. Educate others on the realities and issues with cross-system evaluation to ensure that we don’t fall into the traps on either side.

  3. Incentivize this collaboration so that data isn't gatekept, but shared in a way that lets us all work together to inform and share Impact.

Quantitative Research validates and describes what happens at scale to the Business' target customers.

Qualitative Research adds context, narrative, and story to the numbers that Business People care about. We can help you understand WHY your customers do what they do.

And most importantly, UX People can tell you WHY your customers prefer one product over another. This single insight is the one that matters most to Business People.

Everything else flows out of the answer to this one question.

And we provide that insight.

Josh LaMar (He/Him) is the Co-Founder and CEO of Amplinate, an international agency focusing on cross-cultural Research & Design. He helps Entrepreneurs and SMBs uncover UX insights to secure market dominance.
