Bell Statistics

Business Consulting and Services

Hire Top Advanced Analytics Specialists

About us

Hire Top Advanced Analytics Specialists. Bell offers outsourced advanced analytics services delivered by experienced statisticians, specializing in A/B Testing, Causal Inference, Media Mix Modeling, and Geo Tests.

Website
www.bellstatistics.com
Industry
Business Consulting and Services
Company size
2-10 employees
Headquarters
Tel Aviv
Type
Privately Held
Founded
2022
Specialties
A/B Testing, Statistics, Data, Marketing Mix Modeling, Geo Testing, Advanced Analytics, and Causal Inference

Updates

  • Bell Statistics reposted this

    View profile for Allon Korem

    CEO at Bell Statistics - Data-driven growth with advanced A/B testing for your business | Ex-googler | Statistician

    Think your ads are working? Cool. But can you prove it?

    Billboards. Local promos. Big campaigns. Everyone runs them. But when it comes to impact, most teams are still guessing. Top brands aren’t. They’re using Geo Testing to measure the real-world lift of their marketing. No fluff, no “brand halo,” just hard numbers.

    At Bell, we’ve been helping clients run Geo Tests for years. And now, with Statsig’s new guided workflow, it’s easier than ever to do it yourself.

    On Tuesday, September 30th, 19:00 (IL time), I’ll be teaming up with Michael Makris, Data Scientist at Statsig, for a live session: Geo Testing 101. We’ll break down:
    • What Geo Testing is - and when to use it
    • The methodology explained simply, with real examples
    • How to measure impact directly from your warehouse
    • How Statsig’s workflow makes Geo Testing easy to set up

    If you’re running marketing campaigns and want to stop guessing - this one’s for you. Save your spot: https://lnkd.in/dVJj5MaS

    #experimentation #abtesting #productmanagement #dataanalytics #growth #statsig #bellstatistics
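    For readers new to the method, here is a deliberately simplified sketch of the core geo-test calculation: a difference-in-differences lift estimate over test and control regions. It is not the workflow from the session; every column name and number below is a hypothetical illustration.

        # Illustrative geo-test lift estimate via difference-in-differences.
        # Assumes pre/post sales per geo and a split into test and control regions;
        # all numbers and column names are made up for the example.
        import pandas as pd

        geos = pd.DataFrame({
            "geo":        ["north", "south", "east", "west"],
            "group":      ["test", "control", "test", "control"],
            "pre_sales":  [100_000, 98_000, 120_000, 118_000],
            "post_sales": [112_000, 101_000, 133_000, 121_000],
        })
        # Change in sales per geo from the pre-period to the campaign period
        geos["delta"] = geos["post_sales"] - geos["pre_sales"]

        # Lift = average change in test geos minus average change in control geos
        lift = (geos.loc[geos["group"] == "test", "delta"].mean()
                - geos.loc[geos["group"] == "control", "delta"].mean())
        print(f"Estimated incremental sales per test geo: {lift:,.0f}")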

  • Bell Statistics reposted this

    View profile for Allon Korem

    CEO at Bell Statistics - Data-driven growth with advanced A/B testing for your business | Ex-googler | Statistician

    In his latest post on the Statsig blog, Israel Ben Baruch, our CRO expert, shared a practical framework for navigating tests when results are unclear, including:
    (1) The 4 “next steps” available to you after any test
    (2) When it’s okay to move forward with a non-significant uplift
    (3) When you should rerun, hold back, or dig deeper
    (4) How to stop treating every test as a verdict and start treating it as a tool

    Too often, teams default to a binary decision: “Did it win?” But that mindset leads to stuck roadmaps, endless retesting, and slow teams. Instead, we show how great teams ask: What did we learn from this test? And what should we do next?

    It’s not about lowering the bar. It’s about building momentum without fooling yourself.

    Looking to build test velocity? Read the full guide here: https://lnkd.in/dtuW5pmS

    #ABTesting #Experimentation #ProductAnalytics #GrowthMindset #DecisionMaking #BellStatistics

  • Bell Statistics reposted this

    View profile for Allon Korem

    CEO at Bell Statistics - Data-driven growth with advanced A/B testing for your business | Ex-googler | Statistician

    Still running A/B tests one at a time in 2025?

    In the fast-paced world of product development and data analytics, the traditional approach of running A/B tests sequentially is increasingly becoming a bottleneck. A recent blog post by Oryah Lancry-Dayan and me challenges this convention and advocates for the adoption of parallel testing in A/B experiments.

    The primary concern with running multiple A/B tests simultaneously is the potential for interference between experiments, which could confound results. However, we argue that with proper statistical methods, such interactions can be identified and managed. In many cases, the likelihood of significant interference is low, especially when individual changes have relatively small effects.

    And what are the advantages of parallel testing?
    (1) Accelerated experimentation timelines: Running tests one at a time often leads to delays, as new experiments must wait for the completion of ongoing ones. Parallel testing removes this constraint, allowing multiple experiments to run concurrently, thereby speeding up the overall testing process.
    (2) Enhanced statistical power: When tests are conducted sequentially, there’s pressure to shorten their duration to keep the testing pipeline moving, which can compromise statistical power. Parallel testing alleviates this pressure, enabling analysts to maintain high statistical power and reduce the risk of overlooking meaningful effects.
    (3) Deeper insights through complex experimentation: Running experiments simultaneously allows for the exploration of interactions between different variables. For instance, testing the impact of color and font size on revenue in parallel can reveal whether their combination has a synergistic effect, providing more comprehensive insights for decision-making.

    While parallel testing offers significant benefits, it’s essential to approach it thoughtfully. Not all tests are suitable to run simultaneously. Analysts must assess the potential for interactions between experiments and ensure that proper statistical controls are in place to account for these interactions. Additionally, clear documentation and communication across teams are crucial to manage the complexity that comes with running multiple tests at once.

    For a more in-depth exploration of parallel testing in A/B experiments, read the full blog post here: https://lnkd.in/dHCRmzV2

    #ABTesting #ParallelTesting #DataAnalytics #ProductDevelopment #Experimentation #BellStatistics #AdvancedAnalytics
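    As a companion to point (3) above, here is a minimal sketch of how an analyst might check for interference between two concurrently running experiments by fitting a regression with an interaction term. The data is simulated and the column names (variant_a, variant_b, revenue) are illustrative assumptions; this is not code from the blog post.

        # Simulated check for an interaction between two concurrent A/B tests.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n = 20_000
        df = pd.DataFrame({
            "variant_a": rng.integers(0, 2, n),  # assignment in experiment A (e.g. color)
            "variant_b": rng.integers(0, 2, n),  # assignment in experiment B (e.g. font size)
        })
        # Revenue with small independent main effects and no true interaction
        df["revenue"] = (5
                         + 0.10 * df["variant_a"]
                         + 0.08 * df["variant_b"]
                         + rng.exponential(5, n))

        # A significant variant_a:variant_b coefficient would flag interference
        # that needs to be managed before reading each test on its own.
        model = smf.ols("revenue ~ variant_a * variant_b", data=df).fit()
        print(model.summary().tables[1])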

  • Bell Statistics reposted this

    View profile for Israel Ben Baruch

    Founder at Optimizer | Google CRO expert | 2000+ A/B tests | Helping marketing and product executives boost conversion rates

    “How ugly is this thing... what a terrible interface 🤮 what an outdated font.”

    Sound familiar? You and the product designer stare at an experience and can’t understand how this awful thing even works - those fonts, that spacing, that ugly icon.

    If you too work at a company that won’t move without numerical validation, this is the place to talk about non-inferiority tests - or, in plain language, defensive tests 🛡️ Tests where the only thing we want to verify is that we’re not causing harm. They’re an inseparable part of our lives: when we aim for design excellence that doesn’t always move the needle, when we need to align with the brand, when we have to meet regulatory requirements - and sometimes when we simply want to stop being embarrassed by what we ship.

    How is it done? Unlike a regular (superiority) test, which aims for a significant improvement, a defensive test starts only after we’ve defined the margin up front - the maximum hit we’re willing to absorb (for example, a drop of up to 2% in the primary KPI) - and only then do we run the test.

    In short: not every change needs to lift the metrics. Sometimes it just needs to not break anything.

    Want to dive deeper? In the first comment is an excellent in-depth article by Allon Korem and Oryah Lancry-Dayan; I only added the CRO angle at the end.

    #ABtesting #CRO #NonInferiority

  • Bell Statistics reposted this

    View profile for Allon Korem

    CEO at Bell Statistics - Data-driven growth with advanced A/B testing for your business | Ex-googler | Statistician

    Running 5 experiments a year is easy. Running 5,000 without breaking things? That’s the real challenge.

    In 2 days, June 4th, 19:00 (IL time), I’m joining Yuzheng Sun from Statsig to break down how top product and data teams scale experimentation without losing trust in their results. We’ll cover:
    • What changes as you scale experiment velocity
    • How to evolve your infra, culture, and guardrails
    • Why things break, and how to avoid the most common traps

    If you care about faster learning and smarter decisions, don’t miss this.
    📅 Grab your spot: https://lnkd.in/dHQ4v9Fr

    #experimentation #abtesting #productmanagement #dataanalytics #bellstatistics #statsig

  • Bell Statistics reposted this

    View profile for Allon Korem

    CEO at Bell Statistics - Data-driven growth with advanced A/B testing for your business | Ex-googler | Statistician

    Is “good enough” ever truly good enough in A/B testing? TL;DR: Absolutely.

    In a recent blog post by Oryah Lancry-Dayan & me (with some insights from Israel Ben Baruch), we explored the concept of non-inferiority tests and how to use them.

    What is a non-inferiority test? Unlike traditional superiority tests, which aim to demonstrate that a new version outperforms the control, non-inferiority tests are designed to confirm that a new version is not unacceptably worse than the existing one. This approach is particularly valuable when:
    • Implementing necessary changes: Such as legal compliance updates or backend optimizations, where the change is required, but it’s crucial to ensure it doesn’t negatively impact key metrics.
    • Refreshing designs: When updating branding elements like logos or fonts, the goal may be to modernize the appearance without harming user engagement or conversion rates.
    • Optimizing resources: In scenarios like reducing vaccine dosages during the COVID-19 pandemic, the aim was to maintain efficacy while conserving resources.

    What are the key differences in hypotheses? In a superiority test, the null hypothesis (H₀) posits that the new version is equal to or worse than the control, and the alternative hypothesis (H₁) suggests it’s better. In a non-inferiority test, H₀ assumes the new version is worse than the control by more than a predefined margin (Δ), while H₁ asserts it’s not worse than the control by more than Δ. This shift in hypotheses allows teams to validate changes that are necessary or beneficial in ways not captured by performance metrics alone.

    How to design a non-inferiority test?
    (1) Define the non-inferiority margin (Δ): Determine the maximum acceptable decline in performance. For instance, a 2% drop in conversion rate might be tolerable for a significant backend improvement.
    (2) Ensure adequate statistical power: Design the test to detect differences within the margin with sufficient confidence.
    (3) Interpret results appropriately: A statistically significant result in a non-inferiority test indicates the new version’s performance is not worse than the control by more than Δ, supporting its implementation.

    For a deeper dive into non-inferiority testing and its applications, read the full blog post here: https://lnkd.in/dGD3pH-j

    #ABTesting #NonInferiority #Experimentation #ProductAnalytics #BellStatistics #AdvancedAnalytics
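    To make the hypotheses concrete, here is a minimal sketch of a one-sided non-inferiority z-test on conversion rates with an absolute margin Δ. It follows the H₀/H₁ framing above, but the function, counts, and margin are illustrative assumptions, not code or data from the blog post.

        # One-sided non-inferiority z-test for two conversion rates (illustrative).
        import numpy as np
        from scipy.stats import norm

        def noninferiority_test(conv_new, n_new, conv_ctrl, n_ctrl, delta, alpha=0.05):
            """H0: p_new - p_ctrl <= -delta  (new is worse by more than the margin)
               H1: p_new - p_ctrl >  -delta  (new is not unacceptably worse)"""
            p_new, p_ctrl = conv_new / n_new, conv_ctrl / n_ctrl
            se = np.sqrt(p_new * (1 - p_new) / n_new + p_ctrl * (1 - p_ctrl) / n_ctrl)
            z = (p_new - p_ctrl + delta) / se
            p_value = 1 - norm.cdf(z)  # one-sided
            return z, p_value, p_value < alpha

        # Example: tolerate up to a 2-point absolute drop on a ~20% conversion rate
        z, p, non_inferior = noninferiority_test(2030, 10_000, 2050, 10_000, delta=0.02)
        print(f"z = {z:.2f}, p = {p:.4f}, non-inferior: {non_inferior}")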

  • Bell Statistics reposted this

    View profile for Allon Korem

    CEO at Bell Statistics - Data-driven growth with advanced A/B testing for your business | Ex-googler | Statistician

    How many experiments did you run last quarter? 10? 50? What happens when you need to run 5,000 a year?

    Every major tech company - Amazon, Meta, Netflix, Airbnb - has one thing in common: they don’t just A/B test. They’ve built experimentation into their DNA. Fast shipping. Constant iteration. Rapid learning. Why? Because speed of learning = speed of growth.

    But scaling from 5 to 5,000 experiments isn’t just about running more tests. It’s a mindset shift. A system upgrade. A cultural transformation. And for analysts, (Growth) PMs, and anyone who deals with experimentation at fast-moving companies, this is the next big unlock. But it’s also where things get messy - false positives, fragile infra, conflicting priorities, decision paralysis.

    On June 4th, 19:00 (IL time), I’m teaming up with Yuzheng Sun, Principal Data Scientist at Statsig, to break it all down:
    • What changes as you scale experiment velocity?
    • How to evolve your infra, culture, and guardrails?
    • Why things break, and how to avoid the most common traps?

    If you’re building a product or data org that wants to move faster and smarter, this is for you. Save your spot: https://lnkd.in/dHQ4v9Fr

    #experimentation #abtesting #productmanagement #dataanalytics #growth #statsig #bellstatistics

  • Bell Statistics reposted this

    View profile for Allon Korem

    CEO at Bell Statistics - Data-driven growth with advanced A/B testing for your business | Ex-googler | Statistician

    Meet Bell’s advanced A/B Testing tool.

    When it comes to A/B testing infrastructure, companies usually face a tough choice: buy an out-of-the-box platform that’s fast to implement, OR build an internal toolset that’s tailored but time-consuming. The truth? There’s no one-size-fits-all. Buying a commercial tool is fast and polished, but comes with constraints: integration can be slow and fragile, customization is limited, and as you scale, licensing costs climb - just as switching becomes harder. Building gives you full control but requires time, deep expertise, and constant maintenance. For most teams, it’s a heavy lift.

    At Bell, we offer a third option. We help companies build internal A/B testing tools that feel custom - with robust methods, automation, and flexibility - but without starting from scratch or reinventing every component. Our Advanced A/B Testing Tool is a customizable, in-house dashboard system developed by our team of experts and tailored precisely to each client’s needs. It’s not a generic plug-and-play tool. It’s your tool, built with Bell’s best practices, robust methods, and clean visualizations, designed to scale with your team.

    So what’s inside? Our dashboards are equipped with features typically seen only in top-tier commercial tools (and some that aren’t offered at all):
    • Sequential Testing – for faster, more flexible decisions without inflating error rates
    • CUPED – for increasing statistical power
    • SRM detection – built-in guardrails to flag allocation or sampling issues
    • Automated sub-segment analysis – insights without endless slicing and dicing
    • Custom metrics, alerts, and visuals – aligned with how your team works and decides
    All integrated into a clean, intuitive interface your analysts and PMs will actually enjoy using.

    What’s the impact? Since rolling this out with select clients, we’ve seen:
    • 3x more tests run - without bottlenecks
    • 50% shorter runtime - thanks to smarter methods
    • 75% less analyst time per test - due to automation and cleaner workflows
    • More reliable, trustworthy decisions - fewer false positives, clearer insights

    And here’s a sneak peek into one of our live dashboards (sensitive data blurred). If you're considering building your own but don’t want to reinvent the wheel, let’s talk!

    #ABTesting #Experimentation #ProductAnalytics #DataScience #CausalInference #BuyVsBuild
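    As one illustration of the kind of guardrail listed above, here is a minimal sketch of an SRM (sample ratio mismatch) check: a chi-square goodness-of-fit test comparing observed assignment counts against the intended split. The counts and alert threshold are hypothetical, and this is not the dashboard’s actual implementation.

        # Illustrative SRM check: are observed bucket counts consistent with the planned split?
        from scipy.stats import chisquare

        observed = [50_421, 49_112]          # users actually bucketed into control / treatment
        intended_split = [0.5, 0.5]          # the allocation that was configured
        total = sum(observed)
        expected = [total * share for share in intended_split]

        stat, p_value = chisquare(observed, f_exp=expected)
        # A strict threshold is common for SRM alerts, since real mismatches are usually gross
        if p_value < 0.001:
            print(f"Possible SRM (p = {p_value:.2e}) - investigate before trusting results")
        else:
            print(f"No SRM detected (p = {p_value:.3f})")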

  • Bell Statistics reposted this

    View profile for Allon Korem

    CEO at Bell Statistics - Data-driven growth with advanced A/B testing for your business | Ex-googler | Statistician

    In A/B testing, KPIs often come with a catch: highly skewed distributions and extreme outliers. It’s tempting to treat those as “just part of the data,” but doing so can quietly erode the statistical power of your tests. At Bell, we recently published on the Statsig blog a deep dive into how outliers impact statistical power, and what to do about it.

    Outliers increase variance. Increased variance reduces power. Reduced power means your test is less likely to detect a real effect - leading to wasted time, inconclusive results, and missed opportunities. We illustrate how inflated variance due to a few extreme values can increase the likelihood of Type II errors - failing to detect a true difference, even when one exists.

    How to detect outliers? We cover both:
    • Visual techniques (box plots, histograms, scatter plots)
    • Statistical methods (Z-scores, IQR, percentile thresholds)

    What to do once you’ve found them? We’re not talking about bad data - these are legitimate, extreme values, like high-spending users in a gaming app. Do you keep them? Cut them? Transform the metric? We recommend winsorization: capping the top (and/or bottom) X% of values at a defined threshold (like the 99th percentile). It’s a simple yet powerful technique that keeps the data structure intact while significantly reducing the variance caused by outliers.

    Using real revenue data from a gaming company, we simulated different effect sizes and tested three levels of winsorization (none, 1%, 0.1%). We ran the analysis 1,000 times per scenario. The result? Winsorization substantially improved power - making it easier to detect true effects, especially when the treatment effect was subtle.

    Key takeaway: Outliers can both contain signal and create noise. Managing them well - especially through winsorization - can give your experiments the clarity and sensitivity they need to deliver real insights.

    Read the full blog post for methodology, visuals, and step-by-step winsorization guidance: https://lnkd.in/dFpBwKRs

    #ABTesting #Outliers #Winsorization #Experimentation #DataScience #ProductAnalytics #Statistics
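    For a sense of the mechanics, here is a minimal sketch of one-sided winsorization and its effect on a Welch t-test, using simulated heavy-tailed revenue rather than the real gaming data from the post; the ~3% lift and 1% cap are arbitrary assumptions for illustration.

        # Simulated illustration: capping the top 1% of a skewed metric before testing.
        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(42)
        control = rng.lognormal(mean=1.0, sigma=1.5, size=50_000)
        treatment = rng.lognormal(mean=1.0, sigma=1.5, size=50_000) * 1.03  # ~3% true lift

        def winsorize_top(x, upper_pct=0.01):
            cap = np.quantile(x, 1 - upper_pct)   # e.g. the 99th percentile
            return np.minimum(x, cap)             # cap extreme values, keep all rows

        raw_p = ttest_ind(treatment, control, equal_var=False).pvalue
        win_p = ttest_ind(winsorize_top(treatment), winsorize_top(control),
                          equal_var=False).pvalue
        print(f"p-value on raw data:       {raw_p:.4f}")
        print(f"p-value after winsorizing: {win_p:.4f}")  # typically lower: variance shrinks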
