Here’s how you can leverage two completely different sales channels with experimentation, but only if you know where to research and what to prioritize.

Butterfly Network sells ultrasound machines to two different user groups: individual consumers via self-service ecommerce and healthcare organizations via 1-1 sales.

The team at Butterfly Network knew that experimentation could unlock growth opportunities across both paths to conversion. But experimentation costs time and money, so the team wanted to align stakeholders from several departments to ensure they had the internal resources and buy-in.

Here’s how we helped them: Speero ran our full-service experimentation program alongside the Butterfly Network team. We used:
– Our test bandwidth calculator to show which areas of the website held significant opportunities.
– ResearchXL to gain insights into friction and pain points.
– PXL to prioritize hypotheses objectively (see the scoring sketch after this post).

Our ResearchXL work combined qualitative and quantitative research methods, such as user testing, copy testing, heuristic analysis, and UX design audits. This gave us a holistic view of the customer journey and let us build a roadmap supporting both channels.

Our framework helped align stakeholders across the company by providing a data-backed business case for supporting the program, and our structured approach let us increase both the quality and the number of tests.

Speed + impact = lots of revenue and growth

Speero shared learnings from every test we ran and prioritized those that would inform bigger business decisions (like pricing). Butterfly Network wanted to introduce new modular pricing called “Pro Custom” on their ecommerce store. We tested this new pricing structure and measured each stage of the signup flow. Seeing where the drop-off happened helped inform and refine their approach to specialty-driven packages.

The results?
– Increased the number and quality of tests via data-backed hypotheses and prioritization.
– Gained support and resources from stakeholders across the business by showing the value of experimentation.
– Informed significant business decisions around pricing and product.
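Below is a minimal sketch of how a PXL-style priority score can be computed. The criteria shown are a representative subset and the weights are illustrative assumptions, not the canonical PXL spreadsheet; the point is that mostly binary scoring makes prioritization objective and repeatable.

```python
# PXL-style prioritization sketch: score each hypothesis on mostly binary
# criteria, then rank by total score. Criteria and weights are illustrative.

CRITERIA = [
    "above_the_fold",      # change is visible without scrolling
    "noticeable_in_5s",    # users would notice the change quickly
    "adds_or_removes",     # adds or removes an element, not just a restyle
    "backed_by_qual",      # supported by qualitative research (e.g., ResearchXL)
    "backed_by_quant",     # supported by digital analytics
    "high_traffic_page",   # runs where traffic is high enough to test
]

def pxl_score(answers: dict, ease: int) -> int:
    """Sum of binary criteria plus ease of implementation (0-3, higher = easier)."""
    return sum(answers.get(c, 0) for c in CRITERIA) + ease

hypothesis = {"above_the_fold": 1, "noticeable_in_5s": 1,
              "backed_by_qual": 1, "high_traffic_page": 1}
print(pxl_score(hypothesis, ease=2))  # 6 -- higher scores get tested first
```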
Speero
Business Consulting and Services
Austin, Texas · 7,608 followers
Growth Experimentation & CRO for Marketing and Product Teams
About us
Speero is an experimentation agency. We help product and marketing teams make better, faster decisions using A/B testing and, more broadly, experimentation programs. We serve mid-sized and enterprise organizations looking to build and scale CRO and experimentation. Speero, formerly CXL Agency, was founded in 2011 by Peep Laja, the #1 most influential conversion rate optimization expert in the world. We generally serve lead-gen and ecommerce companies, with clients such as ADP.com, MongoDB, Codecademy, Serta Simmons, Native Deodorant, Miro.com, and others. Speero has offices in London, UK; Tallinn, Estonia; and Austin, Texas, USA (but we're more and more fully remote!).
- Website
- https://speero.com/
- Industry
- Business Consulting and Services
- Company size
- 11-50 employees
- Headquarters
- Austin, Texas
- Type
- Privately Held
- Founded
- 2011
- Specialties
- Conversion Optimization, Customer Experience, CX, CRO, Experimentation, User Research, Optimization, A/B Testing, and Analytics
Locations
-
Primary
901 S Mo Pac Expy
Suite 150
Austin, Texas 78746, US
-
Telliskivi 60
Tallinn, Harjumaa 10412, EE
-
Jászai Mari tér 5-6
Budapest, 1137, HU
Employees at Speero
-
Jonny Longden
Chief Growth Officer @ Speero | Growth Experimentation Systems & Engineering | Product & Digital Innovation Leader
-
Kristin Kelly Ravesloot
Agency Managing Director Americas | Chief Operating Officer, Key Accounts Director, Sales, Business Development, Partnerships + Digital Marketing…
-
Peep Laja
CEO @ Wynter. 3x Founder. Host of the How to Win podcast.
-
Jason Lively
Optimizing Growth with Data | PM Stuff @ Speero
Updates
-
Most teams say they’re data-driven. Few actually are.

The default playbook (pulling numbers from GA4, watching conversion rates fluctuate, and looking at dashboards) isn’t enough. That kind of data tells you what happened but not why. And optimizing without knowing why is just educated guessing in a lab coat.

Real optimization (the kind that moves metrics and changes orgs) requires more than surface-level numbers. It takes triangulation: pairing quantitative signals (like AOV, RPV, and bounce rate) with qualitative depth (polls, heatmaps, session replays, cancellation reasons, user friction, all of it). Not because one is better than the other, but because together they build context.

You need to see the full picture:
– How customers move through the funnel
– What’s breaking trust or momentum
– Where your experimentation program is leaking time, value, or insight

Don’t stop at measuring conversions. Understand behavior, analyze drop-offs, and track performance at the program level (not just the test level).

This isn’t just optimization hygiene; it’s strategic leverage. A team grounded in real data doesn’t just avoid mistakes, it finds compounding insights. And those insights compound into growth.

Gut feeling gets you started. But data is what scales.
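To make the quantitative half of that triangulation concrete, here is a minimal sketch of two of the signals mentioned above, AOV and RPV, assuming a simple export of revenue, order, and visitor totals; the numbers are made up.

```python
# Quantitative signals sketch: AOV vs. RPV. Same revenue, different stories.

def aov(total_revenue: float, orders: int) -> float:
    """Average order value: revenue per completed order."""
    return total_revenue / orders if orders else 0.0

def rpv(total_revenue: float, visitors: int) -> float:
    """Revenue per visitor: revenue spread across ALL visitors, not just buyers."""
    return total_revenue / visitors if visitors else 0.0

revenue, orders, visitors = 50_000.0, 500, 40_000
print(f"AOV: ${aov(revenue, orders):.2f}")    # $100.00 per order
print(f"RPV: ${rpv(revenue, visitors):.2f}")  # $1.25 per visitor
```

These numbers tell you what happened; the qualitative layer (polls, replays, cancellation reasons) is what tells you why.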
-
-
"With AI in Airtable, there are many new possibilities for experimentation programs. From simple AI-generated text to fully automated agents. You can now integrate AI directly in fields, automations, the AI chat, and Interfaces to streamline your workflow. Some of the things we’ve been building: 🔹 Management summary fields 🔹 Automated tagging of psychological principles 🔹 AI feedback coach (i.e., on hypothesis and learning quality) 🔹 Follow-up test ideas based on completed experiments 🔹 Test ideas generated from many completed experiments (i.e., per page) 🔹 Idea checker (i.e., has it been tested before, similar tests, feedback) 🔹 Meta learnings 🔹 Opportunity generator What AI solutions have you built or seen in Airtable?" Original post and image by Ruben de Boer In this week's Speero's best take of the week: where we celebrate the best takes and opinions from our fellow experimentation experts.
-
-
You can find lots of insights even after the customer buys.

For example, we had a cosmetics retail client with incredibly high customer loyalty, and we wanted to investigate what factors led to that. They had a visible founder running the business and a great company story, but there was more going on. So we got in touch with customer service and started asking questions.

We discovered that their delivery team was adding high-quality candy to the packages of cosmetics shipped out to customers. And people really loved that. It was a small thing, but it resulted in clear and measurable customer retention, referrals, and positive testimonials.

Knowing how to retain customers isn’t a single tactic; it’s something you can figure out using the same processes we use:
– Gathering and analyzing qualitative data
– Gathering and analyzing quantitative data
– Experimentation

When we looked at our most successful experimentation programs, those that went beyond the website and explored the broader customer experience created the most value.
-
-
We tested adding a 'See available offers' CTA for returning users and those who didn’t buy, and it increased transactions by 6.03%.

First, in exit polls and surveys, we found users saying they had seen offers they couldn’t find later, causing friction and frustration.

Our hypothesis: if we add a “See available offers” CTA for returning users and/or users who didn’t buy, they’ll find the missing offers when they come back, increasing transactions.

Outcome/takeaway: Winner, implement. +6.03% transactions (96% sig.), +10.41% revenue per user, +4.24 AOV.
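For readers who want to sanity-check a result like “+6.03% at 96% significance,” here is a minimal sketch of a two-proportion z-test, the standard way to test a difference in conversion rates. The counts are illustrative, not the actual data from this test.

```python
# Two-proportion z-test sketch for an A/B conversion-rate difference.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative counts: control 2,000/50,000 vs. variant 2,130/50,000 (+6.5% lift)
p = two_proportion_p_value(conv_a=2_000, n_a=50_000, conv_b=2_130, n_b=50_000)
print(f"p-value: {p:.3f}")  # ~0.039, i.e., significant at the 95% level
```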
-
-
For years, A/B testing was simple, yet shallow. Drop a JavaScript tag on the front end, point the tool at the thank-you page, and test basic UI tweaks: button colors, headline changes, maybe moving a widget. Simple, but not strategic.

That approach is now obsolete. The market has changed dramatically for two big, connected reasons:

1. You need deeper business answers. A quick "win" on the front end, like a 5% bump in sign-ups, doesn't tell the full story anymore. Your brand wants to know: Does this lift customer lifetime value (CLV)? Does it improve long-term retention? Does it impact our profit margins? To answer those questions, you need complete, end-to-end data. That means connecting your experiment results to your entire data view, often by piping variant data directly into your central data warehouse for reliable, full-funnel analysis (see the sketch after this post).

2. Experimentation went full-stack. Testing isn't just a marketing activity anymore. It's moving into the core product and business logic. Think about experimenting with:
– Pricing models
– Subscription logic
– Logistics algorithms
– Internal workflows
You can't run these high-leverage tests with client-side tools alone. They require robust, integrated full-stack experimentation capabilities.

The truth is, experimentation is now a data discipline. If your data is fragmented, delayed, or siloed, you’ll be stuck running low-impact UI tests forever. Strategic, high-leverage testing that moves the entire business requires decision-grade data at its core.

The key question you have to answer: how complete and reliable is your data foundation? That's your limiting factor for growth.
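As one hedged illustration of "piping variant data into your warehouse," here is a minimal sketch of an exposure event: one row per user per experiment, keyed so it can later be joined to orders, retention, or margin tables. The field names and the downstream query are illustrative assumptions, not any specific vendor's schema.

```python
# Exposure-logging sketch: record who saw which variant, warehouse-ready.
import json
import time
import uuid

def exposure_event(user_id: str, experiment: str, variant: str) -> dict:
    """One exposure row; user_id is the join key to downstream metric tables."""
    return {
        "event_id": str(uuid.uuid4()),
        "user_id": user_id,
        "experiment": experiment,
        "variant": variant,
        "exposed_at": int(time.time()),  # unix seconds
    }

event = exposure_event("u_123", "pricing_model_v2", "treatment")
print(json.dumps(event))  # ship this to your warehouse ingestion pipeline

# Downstream, full-funnel analysis becomes a join, conceptually:
#   SELECT variant, AVG(clv) FROM exposures JOIN customers USING (user_id)
#   GROUP BY variant
```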
-
-
Wanna join a free experimentation/UX/data event with no strings attached?

We’re running Experiment Live Meetups in Barcelona, Boston, Madrid, and Oslo in the next two months. What makes the Meetups different is that there are no pitches, only peer talks. The vibe is about sharing the messy realities of our work, not just the glossy highlight reels you see online. Scaling experimentation isn’t easy, and the best way to learn about it is through direct, in-person interactions with people who face the same real-world obstacles.

Some of what we plan to bring:
– 3 short talks from practitioners on growth engineering, AI in experimentation, and other current topics
– An AMA afterwards
– Mini-workshops for attendees
– A ticket lottery for Circus (our new event in London and beyond)

If you want to experience the "real conversations" and "good energy" yourself, keep an eye out for our next Experiment Live Meetups:
– November 6, Madrid
– November 11, Barcelona
– November 13, Boston
– December 3, Oslo
-
-
The buying experience continues after the purchase: delayed or missing items, having to log in to a separate app just to track an order. This is why you need to optimize beyond initial acquisition.

Most website optimization agencies only look at top-of-funnel (ToFu) conversions: How do we get the customer to buy? How do we get the customer to buy faster? How do we get the customer to buy more?

Answer these and you win. But only in the short term. People don’t have a lot of brand loyalty. You need to do more than convert: make customers stay.

If you want to understand why customers stay or churn, you need to look past the top of the funnel, at data in places like customer support logs and surveys. True, these resources aren’t directly connected to the sales piece of the puzzle. But they are connected to the customer experience. And the customer experience goes way beyond the point of sale.

The full customer journey includes post-purchase behaviors, support interactions, and long-term retention metrics like CLV. Understanding this end-to-end experience helps you test more than landing pages: it helps you test loyalty drivers, reduce churn, and improve operational efficiency. This broader lens lets experimentation become a strategic engine, not just a CRO tool.

When you connect qualitative and quantitative data, and tie experiments to every customer touchpoint, you build a compounding loop of insight, learning, and value creation.
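Since the post leans on CLV as a retention metric, here is a minimal sketch of a simple contribution-margin CLV estimate. The multiplicative model and all numbers are illustrative assumptions; real programs often use cohort-based or probabilistic models instead.

```python
# Simple CLV sketch: AOV x purchase frequency x retention x margin.

def simple_clv(aov: float, orders_per_year: float,
               years_retained: float, gross_margin: float) -> float:
    """Rough contribution-margin value of one customer over their lifetime."""
    return aov * orders_per_year * years_retained * gross_margin

# Illustrative: $60 AOV, 3 orders/year, retained 2.5 years, 40% margin
print(simple_clv(60.0, 3.0, 2.5, 0.4))  # 180.0
```

Even in this toy model, extending retention from 2.5 to 3 years is worth more than most single-conversion tweaks, which is why post-purchase levers matter.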
-
-
If you want to build an experimentation program, finding the right tool isn’t enough. You need to build culture: make experimentation part of the company’s DNA, not just push more tests.

Here’s the thing: you can’t go from zero to a full-blown testing machine overnight. You need a phased approach, and the right program metrics help you track your progress and know what to focus on next.

Gympass was in the same situation when they hired us. Their initial goal was to run a single experiment. It should be yours too: it proves the process is possible.

Once that’s done, your focus shifts: increase aggregate test velocity. While increasing velocity, you should also start measuring the adoption of resources and templates. Tracking this KPI lets you drive governance and shared understanding, and assess the level of experimentation adoption across the business (see the KPI sketch after this post). We helped Gympass start tracking this, and we established monthly meetings and internal forums. We also had Gympass track meeting attendance to ensure we engaged diverse stakeholders from product, marketing, and engineering.

Once velocity is up, you need to spread it across teams. This way, you establish experimentation across the whole company, not just marketing.

Focus on complexity. Don’t just count tests. Start segmenting your metrics by test type. Are teams running simple tweaks, or are they tackling more substantial, disruptive experiments? Seeing a shift toward more sophisticated tests shows the program is truly maturing.

Coach, don’t just "do." Initially, you might handhold teams. The ultimate goal is to move from constant support to coaching them through complex questions. The more independent teams become, the more bandwidth you have to focus on strategic insights and tougher problems.

The end result? We did all of this for Gympass, and they went from scratch to running complex experiments in just 6 months, the fastest we’ve seen a program move from zero to two. Gympass also achieved an 80% adoption rate for experimentation templates across the brand and developed rituals and artifacts for training and enablement.
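As referenced above, here is a minimal sketch of the two program KPIs the post describes, aggregate test velocity and template adoption rate, computed from a simple test log. The log structure and field names are illustrative assumptions.

```python
# Program-KPI sketch: test velocity and template adoption from a test log.
from datetime import date

tests = [  # one entry per launched experiment
    {"launched": date(2024, 1, 9),  "team": "marketing", "used_template": True},
    {"launched": date(2024, 1, 23), "team": "product",   "used_template": True},
    {"launched": date(2024, 2, 6),  "team": "product",   "used_template": False},
]

def velocity_per_month(tests: list, months: int) -> float:
    """Aggregate test velocity: experiments launched per month."""
    return len(tests) / months

def template_adoption(tests: list) -> float:
    """Share of experiments using shared templates (a governance signal)."""
    return sum(t["used_template"] for t in tests) / len(tests)

print(velocity_per_month(tests, months=2))  # 1.5 tests/month
print(f"{template_adoption(tests):.0%}")    # 67% adoption
```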
-
-
What’s the goal of a testing program? Are we doing experimentation, conversion rate optimization, or customer experience optimization?

There isn’t one answer to this. It depends on the questions you’re asking yourself and where and what you want to optimize.
– Experimentation is about improving decision-making: how your company operates, innovates, and looks at things.
– CRO is more tactical. It’s focused on a specific swim lane or metric group.
– CXO focuses on the users and their experience.

Their goals are different:
– XOS (Experimentation Operating System): learn to test. Ensure you know how to run and analyze reliable tests.
– CXO (Customer Experience Optimization) focuses on customer learnings and educating the team.
– CRO (Conversion Rate Optimization) focuses on getting customer and revenue wins.

Their metric strategies differ as well:
– XOS is focused on guardrail metrics and program strategy. Common KPIs include velocity, error rate, complexity level, program maturity level, and teams/users testing.
– CXO is focused on customer metrics and customer experience strategy. Common KPIs include number of users/teams, speed, engagement depth scores, UX quality scores, referral rates, and retention.
– CRO is focused on revenue metrics and growth strategy. Common KPIs include unit economics, number of transactions or subscriptions, AOV, LTV, leads, MQL/SQL, and pipeline metrics.

This blueprint or model isn’t perfect, but it does provide a starting point in the right direction.