Research teams need a prioritization framework now more than ever
An Introduction to the Research Opportunities Assessment Matrix
Research is at a crossroads.
Layoffs have swept through tech and, from everything I can glean, Research has been hit harder than any other function. Entire teams have been laid off (e.g., Amazon Shopping’s UXR team, the UXR team at Google’s Area 120 incubator); others have been gutted, including my former team at Airtable, where 76% (16 of 21 employees) of the Research organization was let go.
Many of us, myself included, have argued that Research should be tackling higher-order strategy, with human-centered insights driving business model creation, visioning, strategic innovation, and risk identification. But we’ve clearly failed to demonstrate our essentiality at even the functional strategy level. I still firmly believe we can get there, but we’ll need to prove that we can tackle substantive questions, influence decision-making, and help shape strategy at the product level. To do that, we need to be more strategic in our own work.
We talk a lot in this field about how to do research, but not enough about what research should be done.
To grow our influence, we need to be rigorous in our focus on work that gets results. A prioritization framework is essential for determining the right opportunities and making that process explicit and collaborative. ROAM is what I developed for doing exactly that.
History
In 2020, based on conversations with researchers I was advising, I came up with a rubric for prioritizing research opportunities. I rolled it out as a pilot with some teams at Workday (my employer at the time). I surveyed those teams and they reported they were making better-informed decisions as a result of using the rubric. (Interestingly, this improvement was even more pronounced in managers than in ICs.) I gave talks about it at Advancing Research and Flex Your UX and some other places. I posted it as a resource on researchstrategy.info. The model has evolved since then, becoming more focused on the key factors for success.
But I never wrote it up. Until now.
Overview
The ROAM
The Research Opportunities Assessment Matrix (AKA the ROAM) is a prioritization rubric that serves three essential functions: to rigorously assess opportunities, facilitate discussion, and align the team around the work to be done. It’s an important tool for deciding what research should and should not be done, and it externalizes that process and makes it explicit. The goal of any framework should not be to answer a question but to provide structure for understanding. In this case, the ROAM provides a structured approach to surface information, clarify unknowns and assumptions, discuss trade-offs, and document decisions.
Other Frameworks
Prioritization frameworks certainly aren’t new. Even within research there are several existing rubrics, ranging from the elegant, clarifying simplicity of Jeanette Fuccella’s 2×2 matrix to the meticulous comprehensiveness of GitLab’s research prioritization calculator.
I don’t care if you use the Research Opportunities Assessment Matrix or not. But I do care that you use a framework. I care that you externalize and make explicit the process and the criteria, and that you have sufficient structure for your needs to facilitate a conversation about resource allocation and the team’s effectiveness.
For those I’ve worked with, the ROAM strikes a balance between being accessible and rigorous. It is not burdensome, nor is it reductive. It prioritizes opportunities by four criteria:
Ambiguity
Potential Upside
Potential Downside
Strategic Utility
Simply stated, the greater the ambiguity, the potential upside, the potential downside, and the strategic utility of a particular initiative, the more important it is that research informs decision-making. There’s much more to it than that, of course, so let’s dig in.
Using the ROAM
Timing
When should this matrix be used? Any time the question of prioritization and resource allocation arises. Typically this would be a part of quarterly planning, but it could be revisited more frequently as new opportunities arise.
Collaborators
Who should be involved? You should include as many people as might have unique and valuable input for that process. But keep that phrase “unique and valuable” in mind. The more people involved in any process, the more cumbersome and convoluted it becomes; the fewer you include, the greater the risk of misalignment. You might just include the researcher and the lead PM, or you might include team leads from Research, Product, Design, and Engineering. Which is right for you will depend on your organization’s culture, power structures, decision-making, and the research maturity of its constituencies.
The matrix at-a-glance
I will go through it column by column, but first I want to provide an overview of the whole thing.
The far left column (Project) is a list of each possible project.
Prioritization Criteria are four equally weighted qualities to be assessed by all interested parties. The ratings for these will determine the Value rating, which will be used to decide the projects' Rank.
Resourcing Considerations and Red Flags are not part of the Stakeholders Template. These are for the researcher(s) to discuss with their leadership in deciding whether and how to support these opportunities.
The Process
To complete the first phase of the ROAM, you’ll need to have a name for each opportunity, a solid understanding of its research objectives, and a rough sense of the methodological approach. You do not need to have methods identified or fully scoped projects. (I’m not addressing how to collect or clarify opportunities in this article, but the Questions Workshop is my recommended method for doing this.) Please note that your goal should not be to triage requests from others, but to evaluate opportunities. Most of those will likely come from stakeholders, but Research can and should also play a role in identifying potential projects. If your process doesn’t facilitate that, you do a disservice to Research’s influence. If that isn’t the way it’s been done previously, introducing a prioritization framework provides a perfect excuse to change that dynamic.
List the names down the left column of the Stakeholders Template and distribute it to everyone who needs to have input. (You can find step-by-step instructions for the Stakeholders Template here.) Each person involved in the exercise will go across and individually complete Prioritization Criteria for each row, answering the question below the column header with Low, Medium, or High. (Or, if the answer is not clear, select [See Notes] and add a brief explanation below.) Keep in mind that collaborators will have to defend their ratings later and High ratings should be especially scrutinized. Specific guidance is provided under Column Descriptions below.
Once you’ve completed the Prioritization Criteria, you will see an indication of the opportunity’s likely value in the Value column. You’ll be able to see at a glance which are High Value and which are not.
[Workshop] Collaborators share their completed spreadsheets and discuss any ratings where there are differences of opinion in order to arrive at a group consensus. This discussion is critical, as this is where the team will surface differences of opinion, assumptions, and unknowns. Once the collaborators have arrived at consensus ratings, they will collectively rank the opportunities from one to [however many there are]. It is important to set expectations with interested parties that this exercise provides important guidance and Research will meet separately to make a final determination.
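As a rough illustration of the disagreement-surfacing step, the sketch below flags any opportunity where collaborators’ ratings diverge so the workshop conversation can focus there. This is not part of the ROAM templates; the function name, collaborators, opportunities, and ratings are all invented for the example.

```python
# Hypothetical sketch: surface rating disagreements for workshop discussion.
# All names and ratings here are invented examples, not ROAM template fields.

def find_disagreements(ratings_by_person):
    """Return {opportunity: {person: rating}} for rows where ratings differ.

    ratings_by_person maps each collaborator to their
    {opportunity: "Low"/"Medium"/"High"} ratings.
    """
    opportunities = next(iter(ratings_by_person.values())).keys()
    disagreements = {}
    for opp in opportunities:
        votes = {person: r[opp] for person, r in ratings_by_person.items()}
        if len(set(votes.values())) > 1:  # not unanimous -> discuss it
            disagreements[opp] = votes
    return disagreements

ratings = {
    "Researcher": {"Onboarding redesign": "High", "Pricing page test": "Medium"},
    "Lead PM":    {"Onboarding redesign": "High", "Pricing page test": "High"},
}
print(find_disagreements(ratings))
# Only "Pricing page test" is flagged; "Onboarding redesign" is unanimous.
```

In practice this is just what the group does by eye when comparing spreadsheets; the point is that unanimity can be skipped over quickly, while splits deserve the room’s time.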
After the workshop, the researcher and their leadership will meet to complete the Researchers Template, which includes the sections titled Resourcing Considerations and Red Flags. This will help to flesh out the scope and to determine whether and how to engage.
Column Descriptions
Prioritization Criteria
Potential Upside
The primary functions of research are to minimize risk and identify opportunities. Potential Upside and Potential Downside address these.
Potential Upside looks at the likely value of the decisions the research is meant to address, in terms of improvements to user experience and business metrics. Consider the significance of the improvement, the volume of users affected, and the effect on the business’s bottom line. Is this a meaningful improvement for a small subset of users? That’s probably Medium, but if, for example, those users pay a premium or have an outsize influence on purchasing decisions, Potential Upside would be High. A minor improvement for all users? That’s probably Medium, but if, for example, that change saved the company a lot of money, Potential Upside would be High. A minor improvement for a subset of users that is expected to have little effect on business metrics would be Low.
Potential Downside
Research is conducted to aid decision-making. Here we ask: what is the risk if those decisions are wrong? Is this a sizable initiative or indispensable new functionality that absolutely has to work? If so, the Potential Downside is High. Or perhaps it’s a change with a more constrained risk profile, in which case the Potential Downside is Medium or Low. There are many aspects to risk. Things to consider: risk to the business (e.g., what if we spend all this money and it isn’t any better than what we have now?), risk to the product (e.g., what if we “break” core functionality with a new flow that confuses users and they abandon the process?), and risk to users and/or affected communities (what if we alienate or harm certain users or other groups?). Many stakeholders are not comfortable talking about what could go wrong, so you may need to prod them.
Strategic Utility
Whether your organization uses OKRs, KPIs, or something else, it (hopefully!) will have strategic objectives for the entity as a whole that cascade down into strategic objectives for its component parts. In this section, evaluate how helpful this project’s outcomes are to those objectives. Consider how directly the outcomes are aligned with objectives and how high in your company those objectives are articulated (i.e., are these company-wide or team-level objectives?). Consider also the research’s ability to drive strategy.
An initiative that seeks to understand a core assumption of a company-wide strategic objective would be High. A similar undertaking focused on a team-wide objective would be Medium. An initiative that doesn’t clearly ladder up to a stated objective would probably be Low, unless it is intended to identify strategic choices the organization hasn’t anticipated.
Ambiguity
Here we gauge how well we understand the problem space. Sometimes groups understand the problem space well, either through prior research or industry experience. In this case, Ambiguity is Low. In other cases, the group doesn’t have research or experience to fall back on and Ambiguity is High. Be careful with this one, as I’ve seen many groups who felt “we know what we need to do,” but they were operating off of assumptions, their confidence blinded them to changing dynamics, or their expertise created an echo chamber such that they lost sight of users’ comfort and familiarity with the domain. I’ve had to point out the assumptions in a product vision and ask stakeholders how confident they are in those assumptions. What your stakeholders initially identify as Low Ambiguity may actually be Medium or High.
Value
Value is just a calculation based on the previous columns’ entries for a given project.
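The article doesn’t prescribe the spreadsheet’s formula, so the following is only one plausible sketch of how such a calculation might work, assuming equal weights and a Low=1 / Medium=2 / High=3 scale averaged back into a Low/Medium/High Value rating. The thresholds are assumptions for illustration, not the ROAM’s actual formula.

```python
# Hypothetical sketch of a Value calculation. The ROAM spreadsheet computes
# this automatically; the scoring scale and thresholds here are assumptions.

SCORES = {"Low": 1, "Medium": 2, "High": 3}

def value_rating(ambiguity, upside, downside, strategic_utility):
    """Combine the four equally weighted Prioritization Criteria."""
    avg = sum(SCORES[r] for r in (ambiguity, upside, downside, strategic_utility)) / 4
    if avg >= 2.5:
        return "High"
    if avg >= 1.75:
        return "Medium"
    return "Low"

print(value_rating("High", "High", "Medium", "High"))  # -> High (avg 2.75)
print(value_rating("Low", "Medium", "Low", "Medium"))  # -> Low (avg 1.5)
```

Whatever the exact formula, the point is the same: the Value column is derived mechanically from the four criteria, so the team’s judgment goes into the ratings and the discussion, not into the arithmetic.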
Ranking
After the Prioritization Criteria have been completed for each of the opportunities and their value is displayed, collaborators will work together to complete a forced ranking of all the opportunities from 1 (most important) to [however many there are]. Questions should be asked about why the #1 opportunity is more important than #2, why #2 is more important than #3, and so on.
And that concludes the Prioritization Criteria, Value, and Ranking!
Resourcing Considerations
As described above, after the workshop the researcher and their leadership will meet to complete the Researchers Template (the Resourcing Considerations and Red Flags sections) to determine whether and how to engage. Collaborators will not see these sections.
Why don't collaborators have visibility or input with respect to Cost, Versatility, Buy-in, and Ethics? For four reasons. First, these columns require an understanding of research design that most non-researchers lack. Second, Research will need to decide how to resource these opportunities once they are ranked. Third, it keeps things straightforward: four criteria that result in a Value rating are easy to understand. Fourth, it avoids the political pressures that can come into play when discussing Ethics and Buy-in with stakeholders.
Cost
Cost looks at how much a given project will cost in terms of time and money. A Cost rating of High, Medium, or Low is not a bad thing or a good thing. It just helps Research to identify the resources they will need to allocate, and whether they have the capacity to undertake it. Is leadership willing to devote multiple researchers to one initiative that will take an entire quarter at the expense of conducting several smaller studies? They may be, if the large initiative is the #1 ranked opportunity, or they may decide it makes more sense to conduct several smaller studies. An international initiative with in-person interviews in multiple countries, a diary study with a large sample size and duration, or a MaxDiff survey that seeks to compare and contrast responses across multiple segments would all be considered High Cost. An unmoderated concept test, a small intercept survey, or secondary research would all be considered Low Cost.
Versatility
Versatility evaluates the leverage and longevity of a project’s findings. Will the findings be helpful to one team or multiple teams? Is the research concerned only with the next release or is it likely to produce insights that have durable, lasting value? I often hear researchers complaining that they’re doing too much evaluative research, not enough generative or exploratory research. Researchers concerned about this are fixating on the wrong thing: it’s not the type of research that matters, it’s the research’s effect. (A focus on Versatility should in fact shift the type of research being done, but that’s not the goal. The goal is research that is effective.)
Foundational research that informs how an organization understands its users or the problem space would have High Versatility. Research that looks only to answer a team’s questions about an update they’re planning for the next release would be ranked Low. Research that intends to aid a team’s decision-making over the next couple of quarters (e.g., the product roadmap for the next six months) would have Medium Versatility, but if that research can be used by multiple teams it would be High.
When deciding between multiple research opportunities, we should steer our efforts to those that can inform multiple initiatives and build our understanding for future efforts. This may mean telling stakeholders that certain high-priority projects should be conducted by non-researchers (i.e., democratized) or by vendors from outside the organization (i.e., agencies, consultants) so that Research can focus on those initiatives most likely to have broad and enduring results. (For more, see Resourcing below.)
Red Flags
These factors are generally not to be discussed with cross-functional partners but are critical determinants of whether research should be conducted. The researcher and their leadership should explore these thoroughly and then decide whether to bring others into the discussion.
If the ratings for Buy-in or Ethics are Low, that is a red flag.
If Buy-in is Low, it is unlikely the project will be as effective as another option where Buy-in is higher, even if that option has a lower priority ranking.
If the rating for Ethics is Low, we should not do the research.
If a project you’re considering has red flags, it may be time for some difficult conversations. The researcher and/or their leadership may need to discuss and negotiate those red flags with relevant stakeholders. Or they may simply need to explain to stakeholders they’re prioritizing other opportunities and hold firm.
Buy-in
By evaluating buy-in, researchers can ensure their partners are fully committed to the work’s success. This is something I don’t see discussed enough and yet it’s one of the key success factors for projects. Too often we assume there’s buy-in only to discover that our partners aren’t making time to provide input or there isn’t enough time or political will to implement substantive changes if any significant issues are uncovered.
If you have cross-functional partners who are fully invested in the research and there’s sufficient time and political will to adjust based on the results of that research, Buy-in would be High; Low Buy-in would be associated with partners who are endorsing the research to check a box or appease leadership but who have no intention of altering their plans in any meaningful way.
Ethics
Research can cause harm. Academic researchers have to go through an Institutional Review Board; applied researchers do not. Until we have better protections in place, it’s up to us to evaluate the ethical risks of our work. It’s not enough that our intentions are good, we need to systematically review every aspect of the research plan and its capacity for causing harm to participants along psychological, social, physical, economic, legal, and environmental dimensions. If we are not confident that we can conduct the project without causing harm, we should not do the research.
Alba Villamil has been a trailblazer in ethical research practices, and I highly recommend her talk on The Ethical Researcher’s Checklist, which she created, as well as use of the checklist itself.
Using Alba’s checklist, a project that has significant ethical risks and/or limited methods for mitigating risks in these areas would have Low ethical confidence. A project with very few ethical risks and robust mitigation strategies would have High ethical confidence.
Resourcing
Once the ROAM process is complete, you’ll have a ranked list of research opportunities and the information you need to determine whether and how to engage. Resourcing is so dependent on the resources at your disposal that it’s beyond the scope of this essay. A resourcing approach that works for a large research organization with deep pockets won’t work for an underfunded research-team-of-one.
A large research organization may have the flexibility to move researchers around, the tools and practices in place to democratize some projects, and the funds to spin off other projects to vendors. They may decide not to support opportunities with a Low Value, let stakeholders decide if they have the resources to democratize or farm out those of Medium Value, and focus Research on fully supporting all High Value opportunities. Conversely, a research-team-of-one may be the only resource available for all the opportunities identified in their organization. They may need to select the two or three highest value options and say no to the rest.
At a high level, I can say this: We should focus our efforts where Research can provide the most value. Opportunities where we provide less value should be democratized or left unsupported.
“The essence of strategy is choosing what not to do.” —Michael Porter
One Final Note
“Perpetual devotion to what a man calls his business is only to be sustained by perpetual neglect of many other things.” —Robert Louis Stevenson
While I have your attention and you’re thinking about priorities, let me say this: the point of prioritizing our research projects is to increase our effectiveness at work, but do not let work be your priority. What sustains us, what creates a life worth living, is our connection to others, to the natural world, to ourselves, to… something spiritual. Studies of longevity bear this out.* Studies of burnout bear this out.** Studies of happiness bear this out.*** Make connection your priority. The work will still be there tomorrow. Or it won’t. But your connections will.
💛🙏✊
-Chris
Photo by Zoltan Tasi on Unsplash
Shout out to those who provided input: Amanda Rosenberg, Andrew Warr, Carol Rossi, and the Research Strategy Community’s leadership team. Thank you!
* Yang, Y. C., Boen, C., Gerken, K., Li, T., Schorpp, K., & Harris, K. M. (2016). Social relationships and physiological determinants of longevity across the human life span. Proceedings of the National Academy of Sciences of the United States of America, 113(3), 578–583.
** Ruisoto, P., Ramírez, M. R., García, P. A., Paladines-Costa, B., Vaca, S. L., & Clemente-Suárez, V. J. (2021). Social Support Mediates the Effect of Burnout on Health in Health Care Professionals. Frontiers in psychology, 11, 623587.
*** Saphire-Bernstein, S., & Taylor, S. E. (2013). Close relationships and happiness. In I. Boniwell, S. A. David, & A. C. Ayers (Eds.), Oxford Handbook of Happiness. Oxford Academic.