RICE is a straightforward way to prioritize that accounts for an item’s reach, impact, confidence, and effort. I believe it was originally popularized by Intercom.
Using the formula and its results, you can compare items against one another to determine which ones you should tackle first and which ones may offer the most “bang for the buck”.
RICE = (Reach * Impact * Confidence) / Effort
To do this in practice, create a spreadsheet with 6 columns (name, reach, impact, confidence, effort, RICE). Then make a list of all of the things you want to score. These can be feature requests, ideas, bug fixes, etc.
Then score each of your items on reach, impact, confidence, and effort. In the last column, calculate the RICE score using the formula above. Finally, sort by RICE score.
It’ll look something like this…
Boom! Highest priorities identified.
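If you'd rather script the calculation than build a spreadsheet, the same ranking can be sketched in a few lines of Python. The item names and scores below are made up for illustration:

```python
# Minimal RICE calculator. The items and their scores are hypothetical examples.
# Columns: (name, reach, impact (1-3), confidence (0-1), effort)
items = [
    ("Dark mode",         400, 2,   0.8, 3),
    ("Fix login bug",     120, 3,   0.9, 1),
    ("Redesign settings", 250, 1.5, 0.5, 4),
]

def rice(reach, impact, confidence, effort):
    # RICE = (Reach * Impact * Confidence) / Effort
    return (reach * impact * confidence) / effort

# Sort highest RICE score first, just like sorting the spreadsheet column.
ranked = sorted(items, key=lambda item: rice(*item[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: {rice(*scores):.1f}")
```

Note that confidence is expressed as a fraction here (0.8 rather than 80%); either works as long as you're consistent across all items.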
Not sure how to score each item? Here’s a possible way to think of each score. Your mileage and opinions may vary.
P.S. There are also tools out there, like speckled, that will help you do this automatically.
P.P.S. Here's a link to a sample spreadsheet.
The Elements of the RICE Prioritization Formula
Reach
Measured as a number
Ex. 120 (users)
Reach is how many users will likely be impacted or touched by the item.
For new features, reach is typically how many people will actually use the feature. As PMs, we'd like to think that everyone is going to love our shiny new feature. However, setting our reach number to our whole user base count for every feature is going to throw off all our other estimates. Therefore it is best to estimate reach as realistically as possible. If your feature applies only to a certain segment of your users, then you may want to use the size of that segment instead of your whole user count.
Impact
Measured as high, medium, or low
When using this in a spreadsheet, assign values from 1 to 3. They don't have to be integers; if something is medium-high, for example, you could use 2.5.
- 3 = High
- 2 = Medium
- 1 = Low
Impact is how much of an effect this item is going to have on your users. This can be very subjective, and each item’s impact score may depend on other items it is being scored against.
Examples of low impact items:
- A bug fix that few people have reported
- A feature that has a use case not applicable to a majority of your users
- A bug fix or a new feature that will improve the application for a small number of users
- A feature that is unlikely to bring in new users or retain existing ones
- A bug fix that is preventing users from accessing your application and is easily worked around
- A UI/UX change that will make something a lot prettier but not more useful
- Code changes that will add unnecessary technical debt to the codebase
Examples of medium impact items:
- A feature that adds significant improvements, but will not likely be used by everyone
- A feature that may or may not bring in new users or retain existing ones
- A bug fix that is preventing users from accessing your application but could be worked around
- A feature that would affect around half of your user base
- A UI/UX change that will make something look better and make it more usable
- Clarifying copy
- Code changes that will clean up some existing technical debt
Examples of high impact items:
- Features that fundamentally change how your application works
- A feature that adds significant improvements for a majority of your user base
- A feature that will bring in new users or retain existing ones
- A critical bug fix that is preventing users from accessing your application or requiring unreasonable workarounds
- A UI/UX change that will delight and add to the usability of the application
- Clarifying copy that customers continually contact support to help them understand
- Code improvements that will improve the code base for the future
Effort
Expressed as a number
Ex. 3 (mythical man-months)
Effort is how long your team will need to pull this item off.
The unit of this number is up to you. For small teams, this could be measured in days. Medium teams may measure this in weeks, while large teams may use a month scale.
If you’re not great at estimating how long software will take to build (and hey, who is?), then it may be wise to get some input from the team on these scores.
Confidence
Expressed as a percentage
Confidence is an indicator of how sure you are of all your estimates above.
Know that a button color change will take only a few minutes, that its impact will be low, and that it may reach around 100 users? Your confidence score may be in the upper 90s.
Swapping out your entire payment processing system for a new one? It could take a couple of weeks or it could take months. Its impact on your internal systems may be high, but to users it may be low. Reach is tricky: it may touch none of your customers, or all of them if they end up needing to re-add their payment method. Confidence on this one may be much lower, perhaps in the 30-40% range.