If your conversion rate moves from 1.5% to 2.2%, you haven't just made 47% more sales. You've also improved the return on your Google Ads, your SEO, your email marketing, and everything else. Every dollar spent on acquisition now works harder.

That's why CRO is often the highest-impact marketing investment a business can make, especially once you've hit a decent traffic baseline. I help US and Canadian businesses, in English or French, run CRO programs rooted in evidence rather than opinions.

01

Research and audit: understand before you test

The biggest mistake in CRO is jumping straight to A/B tests. Without research, you're testing opinions. My first step is always to understand why visitors aren't converting.

What I look at:

  • Quantitative analysis. GA4 data, conversion funnels, exit rates by page, behavior by device. Where are visitors dropping off? Is it systemic or page-specific? See the sketch after this list.
  • Qualitative analysis. Heatmaps (Microsoft Clarity, Hotjar) and session recordings to see what visitors actually do. Surprises are common: buttons nobody sees, forms that silently fail, elements that pull attention away from the main CTA.
  • User research. On-site surveys (post-purchase micro-surveys, exit-intent surveys), customer interviews where possible. Visitors use their language, not yours. Understanding their hesitations is half the work.
  • UX and technical audit. Load speed, mobile UX, accessibility, JavaScript errors that break flows without anyone noticing.
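
To make the quantitative side concrete, here's a minimal sketch of that kind of funnel breakdown in Python. The file name, column names, and step names are hypothetical placeholders for whatever your GA4 funnel export actually contains.

```python
import pandas as pd

# Hypothetical GA4 funnel export: one row per funnel step per device,
# with the number of sessions that reached each step.
df = pd.read_csv("ga4_funnel_export.csv")  # columns: step, device, sessions

# Order the steps the way the funnel flows.
step_order = ["product_view", "add_to_cart", "begin_checkout", "purchase"]
df["step"] = pd.Categorical(df["step"], categories=step_order, ordered=True)
df = df.sort_values(["device", "step"])

# Share of sessions lost between each step and the one before it, per device.
df["drop_off_pct"] = (
    1 - df.groupby("device")["sessions"].transform(lambda s: s / s.shift(1))
).mul(100).round(1)

# A systemic problem shows up in every column; a page-specific one doesn't.
print(df.pivot(index="step", columns="device", values="drop_off_pct"))
```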

Client example: on a Montreal e-commerce site, a heatmap analysis revealed that 73% of mobile visitors never saw the product reviews, which were buried too far down the page. Moving reviews above the add-to-cart button produced a measurable lift in mobile conversion rate. No need for a 6-month test to figure out what was happening.

02

Prioritized hypotheses and A/B testing

Once we've spotted the frictions, we form hypotheses. Not random ideas. Structured ones: "Because our data shows X, if we change Y, then the conversion rate should go up because Z."

How I work:

  • ICE or PIE prioritization. Potential impact, confidence in the hypothesis, ease of implementation. We test what has the best effort-to-return ratio first. No button color tests while there are major bugs in checkout.
  • Statistical rigor. Sample size calculated before launching a test, targeting a 95% confidence level (see the sample-size sketch after this list). No tests called after 48 hours because "it looks like it's winning." That's how you end up with false positives that cost you money long-term.
  • Test what matters. The big levers first: checkout, product pages, value propositions, lead forms. Not micro-optimizations until the rest is clean.
  • Every test documented. Hypothesis, variants, results, learnings. Even losing tests are valuable: they tell you what doesn't work with your customers. That knowledge base compounds over time.
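
To show what "sample size calculated before launching" looks like in practice, here's a minimal sketch with statsmodels. The baseline rate, target lift, and traffic figures are made-up illustrations, not client numbers.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.015                 # current conversion rate: 1.5%
target = baseline * 1.15         # smallest lift worth detecting: +15% relative

# Cohen's h effect size for two proportions, then solve for visitors per
# variant at alpha = 0.05 (95% confidence) and 80% power.
effect = proportion_effectsize(target, baseline)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)

weekly_visitors = 20_000         # illustrative traffic, split 50/50
weeks = (2 * n_per_variant) / weekly_visitors
print(f"~{n_per_variant:,.0f} visitors per variant, ~{weeks:.1f} weeks to run")
```

If the math says the test needs six months, the honest answer is to test something bigger or fix the obvious issues first.
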
03

Optimization levers I work on

CRO wins come from several places. Here's where I step in most often:

  • Product pages (e-commerce). Information hierarchy, photo quality, customer reviews, availability, purchase options. The product page is your best salesperson. If it's doing a bad job, the rest of the site rarely compensates.
  • Cart and checkout. Reducing friction, clear fee breakdown, payment options, security signals, abandonment recovery. Checkout is often where 60 to 70% of cart starters drop off.
  • Forms (lead generation). Every field you remove increases completion rate. Every required field that isn't essential costs leads. I review the relevance of each question.
  • Trust signals and social proof. Customer reviews, certifications, guarantees, case studies. Placed at the right moments in the journey, not just in the footer.
  • Performance and mobile UX. A slow or poorly adapted mobile site loses up to 50% of potential conversions. Google's Core Web Vitals aren't just an SEO factor, they're a conversion factor. See the field-data sketch after this list.
  • Abandonment recovery. Folded into Google Ads remarketing and email sequences. A visitor who made it to cart is worth far more than a cold visitor.
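
As an illustration of checking Core Web Vitals as real-user field data rather than a one-off lab run, here's a sketch against Google's public PageSpeed Insights v5 API. The target URL is a placeholder, and the response keys reflect the API's current shape, which can evolve.

```python
import requests

# Google's public PageSpeed Insights v5 endpoint; an API key is optional
# for occasional use. The URL below is a placeholder.
resp = requests.get(
    "https://www.googleapis.com/pagespeedonline/v5/runPagespeed",
    params={"url": "https://example.com/product", "strategy": "mobile"},
    timeout=60,
)
resp.raise_for_status()

# "loadingExperience" holds real-user (CrUX) field data when available.
metrics = resp.json().get("loadingExperience", {}).get("metrics", {})
for key in ("LARGEST_CONTENTFUL_PAINT_MS",
            "INTERACTION_TO_NEXT_PAINT",
            "CUMULATIVE_LAYOUT_SHIFT_SCORE"):
    m = metrics.get(key, {})
    print(key, m.get("percentile"), m.get("category"))
```
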
04

Continuous loop with analytics

CRO isn't a one-shot project. It's a continuous process that depends on clean analytics. Without reliable data, you can't measure test results. Without tests, data just describes the past instead of informing the future.

My recommendation for most clients: start with an analytics audit (to make sure the numbers you're leaning on are actually correct), then move into CRO research and testing. The two services are built to work in tandem.

What to know before you test

A few truths you won't find in a blog post about "the 10x conversion hack":

  • Most tests don't win. In reality, about 1 in 5 tests produces a statistically significant lift. That's normal. Losing or flat tests are part of the process.
  • Big wins come from big changes. Testing a button color rarely produces a measurable effect. Testing a product page redesign does. The size of the change has to match the size of the hoped-for gain.
  • You need traffic. A site with 500 visitors per month can't run real A/B tests. For those cases, qualitative research and fixing obvious issues produce better results.
  • Fix bugs before testing. If your add-to-cart button doesn't work on iOS Safari, you don't need an A/B test. You need a fix. I always start by cleaning up what's visibly broken.
  • Averages lie. A test that "wins on average" can lose on mobile and win on desktop. Always segment results by device, traffic source, and customer segment, as in the sketch below.
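
Here's a minimal sketch of that segmentation step; the file and column names are hypothetical placeholders for however your testing tool exports raw results.

```python
import pandas as pd

# Hypothetical raw export: one row per visitor with the variant they saw,
# their device, and whether they converted (0/1).
df = pd.read_csv("test_results.csv")  # columns: variant, device, converted

# The overall average can hide a segment that lost.
print(df.groupby("variant")["converted"].mean())

# Break the same result down by device before calling a winner.
print(df.groupby(["device", "variant"])["converted"]
        .agg(visitors="count", conv_rate="mean"))
```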

Frequently asked questions about CRO

What's the difference between CRO and A/B testing?

A/B testing is a tool. CRO (Conversion Rate Optimization) is the full discipline. A solid CRO program starts with research (analytics, session recordings, customer interviews), forms evidence-based hypotheses, then uses A/B tests to validate or invalidate those hypotheses. Testing without research first is a waste of time and traffic.

I have low traffic. Does A/B testing still work?

Classic A/B testing gets hard at low volumes: you need thousands of conversions per variant to reach statistical significance. For those cases, I shift to a different approach: deeper qualitative research, optimizations based on proven heuristics (trust signals, speed, mobile UX), and sequential tests instead of simultaneous ones. You can still optimize, just not the same way; the before/after sketch below shows one simple check.
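
As one simple illustration of a sequential (before/after) check, here's a sketch using a two-proportion z-test from statsmodels. The counts are invented, and this is a sanity check rather than a substitute for a split test.

```python
from statsmodels.stats.proportion import proportions_ztest

# Conversions and visitors for the 4 weeks before and after a change.
# Before/after comparisons are confounded by seasonality and traffic-mix
# shifts, so treat the result as evidence, not proof.
conversions = [18, 31]    # before, after
visitors = [1050, 1120]

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```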

How long before we see results?

An A/B test usually runs 2 to 4 weeks to reach statistical significance. The initial research and audit phase takes another 2 to 4 weeks. So figure 4 to 8 weeks before the first validated wins. That said, fixing obvious bugs (broken form fields, mobile issues, broken buttons) can produce immediate gains without any test.

What tools do you use for A/B testing?

Depends on the setup. For Shopify stores, I often use Convert or VWO. For custom platforms, Optimizely or AB Tasty. For heatmaps and session recordings, Microsoft Clarity (free) or Hotjar. The tool is secondary; the methodology is what matters.

Do you work with Shopify, WooCommerce, or other platforms?

Yes. I've worked on Shopify, Shopify Plus, WooCommerce, Magento, and custom platforms. Each platform has its quirks when it comes to tracking and testing tool integration. I adapt to what's in place rather than forcing a platform change.

How much does a CRO program cost?

Depends on the depth of the program. A one-off CRO audit with prioritized recommendations takes 2 to 4 weeks. A continuous program with monthly tests is a longer commitment. I always give a fixed quote after an initial call, with clear milestones and a timeline.

Getting traffic, but it's not converting?

A CRO audit typically surfaces 5 to 10 measurable frictions costing you sales right now. First call is free, whether you're in the US or Canada.

Get in touch