Conversion Rate Optimization

What is CRO?

In internet marketing, conversion rate optimization (CRO), also called conversion optimization, is a system for increasing the percentage of website visitors who convert into customers or, more generally, take any desired action on a webpage.
Saleh, Khaled and Shukairy, Ayat (2011). Conversion Optimization: The Art and Science of Converting Prospects into Customers, p. 2. O'Reilly Media, Sebastopol. ISBN 978-1-449-37756-4.

If you’ve found this page looking to learn more about Conversion Rate Optimization (CRO), you’ve come to the right place. This is a resource for you to better understand CRO, what CRO looks like in action, and how to get started with CRO. By the end of this page, you should be ready to start implementing a CRO process of your own.


CRO refers to two distinct phases:

1) Conducting research on your existing website visitors and their experience. The goal of this research is to understand these users, how they behave, where they come from, and what makes them convert into customers — or not convert at all.

“Research is formalized curiosity. It is poking and prying with a purpose.” - Zora Neale Hurston

2) Mitigating the blockers you identified in the first phase. By this point, you've likely found several ways to improve the efficiency of your business.

Typically this starts with easy-to-understand changes like increasing your visitor-to-lead conversion rate, but it can go deeper as you get more advanced and start working to improve core business processes, customer lifecycle performance, and more.

CRO drives results because it's a repeatable process through which you can apply simple math to measure whether an optimization solved a user problem. For instance, a landing page with 2,000 monthly visitors that generates 60 leads per month has a 3% conversion rate. If that same landing page is optimized over time and the conversion rate rises to 7%, the number of generated leads jumps to 140 per month. It's really that simple! That's the power of optimization. Lifting a conversion rate from 3% to 7% results in a 133% increase in leads. You'll be hard-pressed to find a better way to get more leads without adding additional traffic to a site.
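The arithmetic above can be sketched in a few lines, using the same figures as the example (the function and variable names are just illustrative):

```python
def monthly_leads(visitors: int, conversion_rate: float) -> float:
    """Leads generated per month at a given visitor-to-lead conversion rate."""
    return visitors * conversion_rate

monthly_visitors = 2000
leads_before = monthly_leads(monthly_visitors, 0.03)  # 60 leads at 3%
leads_after = monthly_leads(monthly_visitors, 0.07)   # 140 leads at 7%

# Relative lift in leads: (140 - 60) / 60 ≈ 1.33, i.e. a 133% increase
lift = (leads_after - leads_before) / leads_before
```

The traffic never changed; only the conversion rate did, which is exactly why optimization compounds so well.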

While conducting research (phase 1), you will likely be tempted to jump into building solutions (phase 2) as soon as possible. Before you do, keep this quote attributed to Albert Einstein in mind:

“If I were given an hour to do a problem upon which my life depended, I would spend 40 minutes studying it, 15 minutes reviewing it, and 5 minutes solving it.”

The more informed your solutions are, the more significant your results will be.


To see what CRO looks like in action, let’s look at an example of what one team did to convert more of their website visitors into leads. Throughout the research phase, marketers at the company realized that the website was attracting a healthy amount of visitors, but they were having a hard time converting those visitors to leads.

In order to maximize the value of this website traffic, it would need to convert more of those visitors into leads, which could then be routed to sales and closed as new business. The team saw that visitors had no problem navigating to their primary landing page (a web page with a form in which they could submit their information), they just weren’t filling in the web form to access the content offer.

From this research, the team determined that they should focus their solution on that web form. Something needed to change to increase the likelihood of a website visitor filling it out and clicking "Submit."

When the team reviewed the form, the first and most obvious idea was to request less information from visitors. Friction is the amount of effort a website visitor has to expend to convert into a lead. By decreasing the friction at this conversion point, the team could increase its visitor-to-lead conversion rate.

The team hypothesized that they were asking for too much personal information.

“These people just learned about us. Are they really going to share all that information with us right at the beginning of their buyer's journey? Let’s trim this form down.”

To fill in the form, the user had to trust that the company would deliver the guide promised on the landing page. The information in that guide then exists to further build trust with the company, so when it’s time to start a conversation with a salesperson, the prospect truly believes in the salesperson's credibility and the company’s ability to provide value to them.

Let’s see what the team did in their first test of the hypothesis that a shorter form with fewer questions would increase conversion rates from website visitor to lead ...

There you have it: A nice, simple form asking only for the information a salesperson needs to get in touch: first name, last name, email, postal code, and a "comments" section. (People want to be able to explain themselves, right?)

The team ran an A/B test in which they showed half of the landing page visitors the new, shortened form and the other half the original, longer form. Over the course of a month, this split test would clarify which form version converted more website visitors into leads.
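A common way to implement a split like this is to hash a stable visitor ID (such as a cookie value) into a bucket, so each visitor consistently sees the same variant for the duration of the test. A minimal sketch of that idea, assuming a hash-based scheme rather than anything this particular team used:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into 'control' or 'variant'.

    Hashing a stable ID means the same visitor always lands in the same
    bucket, so nobody sees both forms during the test.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "control" if int(digest, 16) % 2 == 0 else "variant"

# The same visitor always gets the same form:
assert assign_variant("visitor-42") == assign_variant("visitor-42")
```

Because MD5 output is effectively uniform, the split approaches 50/50 as traffic grows, without any server-side state.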

But wait ... the shorter form converted 15% fewer website visitors into leads! What happened?

"That A/B test cost us leads!"

The team went back to the drawing board. Rather than following the generally recommended best practices, they ran some user interviews to see what was really working on the form. Remember, more research means better results.

The thumbs-up represent fields that people were comfortable with and more likely to complete, and the thumbs-down indicate fields that visitors were uncomfortable with, confused by, or less likely to complete. With that additional information, it’s pretty clear why the new form didn’t perform as well.


For their second test, the team developed a new form based on their research. This form included the fields that visitors were comfortable with and omitted those that they didn't prefer. The team also included the highest engagement fields at the top of the form to help build momentum.

With a new form ready, another A/B test was set up. Over the course of a month, half of the visitors would see the original form (the better-performing variant in the first A/B test) and the other half would see this new variation. A month's worth of visitor data later, they had a positive result: a 15% increase in conversions from the new form.

The team had made a significant improvement, and they did it through iterative testing and user research. They used thorough research to make an informed observation, they defined a solution and let the numbers validate or invalidate it, they recorded those learnings, and they moved on to the next test with that newly acquired information. This cycle is the basis of CRO — an objective approach to systematically improving conversion rates.

Now that you’ve seen the conversion rate optimization process in action, let’s dive deeper into the all-important research phase.


As you learned in the last section, user research is vital to creating informed solutions to user problems. You'll never know every variable impacting how or why a visitor chooses to convert. Maybe their internet went out at that moment, or they received a call from their mom that made them abandon the page. The goal of user research isn’t to know everything — it’s to know as much as you need to start building solutions.

You can divide your research work into two main categories: quantitative and qualitative.

Quantitative User Research

Quantitative research focuses on measurable data about user behavior. Gather this information with tools like Google Analytics. Quantitative data can speak volumes about the flows between pages of your website and how variables such as traffic source impact those flows.

Look at how long a visitor is on a page. Are they reading the content, or are they bouncing from the page immediately? In other words, are they seeing what they expected to see when they landed on that page? Do visitors who enter your website through an organic search end up converting to leads, or do referral visitors from a publication convert at a higher rate?

Go deeper: What indicators can you take from your data around what visitors or lead sources have the best lifetime value (LTV)? When you're looking at this data, put yourself in your visitors' shoes. Imagine them visiting your website for the first time. These are all insights to seek as you dig through your data and analytics.

Qualitative User Research

Qualitative research refers to research from channels like customer interviews, review websites, and focus groups. Whether you interview your customer support team to understand what questions and concerns new customers have, or conduct user research by watching new users navigate your website, this type of research is all about documenting behaviors and ideas and extrapolating broader observations from them.

Qualitative findings can be bucketed as the why of your user’s behavior. This sort of insight comes from more freeform feedback channels. This stage is where you want to learn the human questions and reasons around why your audience does what they do.


Through these two research methods, you should have a myriad of observations from which you can form new hypotheses. How do you know which solution to build first, though? There’s an easy framework for prioritizing your optimizations. In fact, it’s as easy as pie.

The PIE framework for prioritizing solutions is an acronym that stands for Potential, Importance, and Ease. By scoring each solution on Potential (the potential improvement from your solution), Importance (the value of the traffic to the page it affects), and Ease (the resources needed to build and test the solution), you can quantify and order your list of possible solutions and be confident that the next solution you build is the best choice.
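A common way to apply PIE is to score each candidate on a 1–10 scale per dimension and rank by the average. A quick sketch with made-up solutions and scores:

```python
# Each candidate solution scored 1-10 on Potential, Importance, and Ease.
# (These solutions and scores are illustrative, not from the example team.)
solutions = {
    "shorten form":     {"potential": 8, "importance": 9, "ease": 7},
    "rewrite headline": {"potential": 5, "importance": 9, "ease": 9},
    "redesign pricing": {"potential": 9, "importance": 6, "ease": 2},
}

def pie_score(scores: dict) -> float:
    """Classic PIE score: the average of the three dimensions."""
    return (scores["potential"] + scores["importance"] + scores["ease"]) / 3

# Highest PIE score first: this is your testing queue.
ranked = sorted(solutions, key=lambda name: pie_score(solutions[name]), reverse=True)
for name in ranked:
    print(f"{name}: {pie_score(solutions[name]):.1f}")
```

Note how "redesign pricing" sinks to the bottom despite its high Potential: a very low Ease score drags the average down, which is exactly the trade-off the framework is meant to surface.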

In order to score each of these areas, be sure to have clear success criteria. You should be able to state that “if this solution is successful, [some quantifiable metric] will change by [some measurable amount]." Your success criteria need to be consistent across solutions so you can prioritize them by the likelihood that they achieve those criteria.

Once you’ve prioritized your ideas, you can go ahead and build your user-driven solution.


Let’s use the following checklist to make sure we're ready to run our first experiment. Before you proceed, you should have:

1. ... performed quantitative research on the users you want to impact in order to determine the conversion point that will produce the most impactful results for your company. Is it conversion from website visitor to lead, or lead to opportunity, or customer to referral?

2. ... performed qualitative research on your users and the people who know your users. Do you have a sense of who these people are? Can you imagine a conversation with the average user, if there were such a person? Do you have a sense for the variability in your audience sample? Are they all very similar, or very different?

3. ... defined a list of possible solutions, and prioritized them based on Potential, Importance, and Ease. What observations have you made in steps one and two? Is there ‘low-hanging fruit’ that you feel strongly about? Does a high score in Ease outweigh a low score in Importance? Or perhaps the investment in a very resource-intensive (low Ease score) solution is well worth the high potential impact it may have?

4. ... designed and built your first solution, carefully documenting the control group, variant, and success criteria. Do you have a group of visitors that will act as a control group, demonstrating what would have happened if no variable was changed? Is the variable you are changing for your variant group a big and bold enough change to produce a significant result? Do you have a scheduled date at which time you expect to see a statistically significant result?

Once you’ve completed those steps, it’s time to run your experiment to validate your solution. Run the experiment (often an A/B test, but not always) for as long as you need to produce a statistically significant result. The more visitors flowing through an experiment, the more quickly your results will become significant.
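A standard way to check whether a difference in conversion rates is statistically significant is a two-proportion z-test. A minimal, standard-library-only sketch, with hypothetical conversion counts for illustration:

```python
from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a/n_a are conversions and visitors for the control,
    conv_b/n_b for the variant. Returns (z statistic, p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical month of traffic: control converts 60/1000, variant 85/1000
z, p = two_proportion_z(60, 1000, 85, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # conventionally significant if p < 0.05
```

The same lift observed on a tenth of the traffic would produce a much larger p-value, which is why low-traffic pages need to run experiments longer before you can trust the result.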

For this reason, some marketers recommend forcing higher data volumes by running ads that drive extra traffic to your page. Arguably, this traffic source introduces a variable into your experiment that was not accounted for in your research phase. Instead, pick a variable bold enough to demonstrate a significant change at the volume of data your experiment will naturally have.


As noted above, conversion rate optimization (CRO) is an iterative process. Perhaps the most important part of testing solutions is not the result you get, but the education you gather from that result. Whether your solution is validated or not, take the time to record what you’ve learned. More often than not, it will be the unexpected results that illuminate a new finding that can inform your next optimization!

Use this newly gained information to continue your user research. Did the outcome of your last experiment uncover an interesting behavior worth analyzing further? Can you contact a particular user that demonstrated some interesting behavior? Did findings from your first test change the way you would score future solutions in the PIE framework?

As you move through optimization cycles, you will begin to form a rhythm and become aware of your capacity to build and test new solutions. Be mindful of running concurrent experiments that may impact each other and take care to ‘keep your laboratory clean,’ so to speak, as you run more tests.

What you have learned by reading this resource is just an introduction to CRO. We hope you’ve enjoyed this overview and are excited to get started finding new solutions to your users’ challenges!
