
A/B Testing Captive Portal Designs for Higher Sign-Up Conversion

This technical reference guide provides a step-by-step methodology for running statistically valid A/B tests on captive portal designs. It covers sample size calculations, test duration planning, and result interpretation to increase guest WiFi sign-up conversion for venue operators and IT teams.



Transcript
Welcome to the Purple Intelligence Briefing. I'm your host, and today we're tackling a topic that sits right at the intersection of network operations and commercial performance: how to run statistically valid A/B tests on your captive portal designs to drive higher guest WiFi sign-up rates. Whether you're managing a hotel estate, a retail chain, a stadium, or a conference centre, your captive portal is the front door to your first-party data strategy. And yet, most organisations deploy a single portal design and leave it running indefinitely — never testing, never optimising. That's the equivalent of running a single version of your website homepage for five years without ever looking at the analytics. Today, we're going to change that.

Let me set the scene. The average unoptimised captive portal in a hospitality environment converts somewhere between 22 and 30 percent of connecting devices into registered profiles. After a structured A/B testing programme, that figure typically rises to between 40 and 52 percent. That's not a marginal improvement — that's a near-doubling of your first-party data acquisition rate, which has direct implications for your CRM pipeline, your marketing automation workflows, and ultimately your revenue per guest.

So, let's get into the technical methodology. The first thing to understand is what we're actually testing. A captive portal A/B test is a controlled experiment where you split incoming WiFi users into two or more groups — each group sees a different version of your splash page — and you measure which version produces a higher sign-up completion rate. The key word here is "controlled." This is not a sequential test where you run version A for a month, then version B for a month. That approach is fundamentally flawed because it confounds your results with seasonal variation, footfall changes, and event calendars. You need concurrent, randomised assignment. Most enterprise WiFi platforms — including Purple — support multi-variant portal configuration, which means you can serve different portal designs simultaneously from the same SSID. The platform handles the randomised assignment, typically using a hash of the device MAC address or a session token to ensure each user sees the same variant consistently across their session, while the overall split remains close to 50-50.

Now, let's talk about the single most important concept in any A/B test: statistical significance. This is where most organisations go wrong. They run a test for a week, see that variant B has a higher conversion rate, declare it the winner, and deploy it. But without sufficient sample size, that result is almost certainly noise. Here's the framework you need to apply. Before you start any test, you must define three parameters. First, your baseline conversion rate — that's your current portal's sign-up rate, which you should already have from your WiFi analytics dashboard. Second, your minimum detectable effect, or MDE — this is the smallest improvement you actually care about. If your baseline is 28 percent, you might decide that a 5 percentage point improvement is the minimum worth acting on. Third, your confidence level — the industry standard is 95 percent, meaning you accept a 5 percent probability of a false positive. With those three inputs, you can calculate your required sample size per variant using the standard formula: n equals Z-squared multiplied by p times one minus p, divided by MDE-squared. For a baseline of 28 percent, an MDE of 5 percentage points, and 95 percent confidence, you need approximately 2,800 unique visitors per variant. That means 5,600 total sessions before you can draw any conclusions.

Now translate that into calendar time. If your venue sees 800 unique device connections per day, you're looking at a minimum of 7 days. But here's the critical nuance: you should never run a test for fewer than two full business cycles, regardless of whether you've hit your sample size target. A "business cycle" in this context means the repeating pattern of your footfall — for a hotel, that's typically a full week to capture both leisure and business travellers. For a retail environment, it might be two weeks to capture both weekday and weekend shopping patterns. For a stadium, it means running the test across multiple comparable events. Why does this matter? Because day-of-week effects are real and significant. A portal test that runs only Monday to Friday in a business hotel will over-represent corporate travellers and under-represent leisure guests. Your winning variant might perform brilliantly for one segment and poorly for the other. Running across full cycles averages out these effects.

Let me give you a concrete example from the hospitality sector. A regional hotel group with 12 properties wanted to increase their guest WiFi registration rate to improve their direct booking programme. Their baseline portal had a 26 percent sign-up rate. They were using a three-field form — name, email, and room number — with a generic "Connect to WiFi" call to action. They structured an A/B test with two variants. Variant A was their existing design — the control. Variant B reduced the form to two fields — email and room number only — and changed the call to action to "Access Free High-Speed WiFi." They also added a single line of value proposition copy: "Stay connected and receive exclusive member offers." The test ran for 21 days across all 12 properties, accumulating 34,000 unique sessions. Variant B achieved a 41 percent sign-up rate against variant A's 26 percent — a 15 percentage point lift, well above their 5 percentage point MDE threshold, with a p-value of less than 0.001. The result was unambiguous.

What drove the improvement? Post-test analysis pointed to two factors. First, reducing form fields from three to two lowered the perceived friction significantly. Research in conversion rate optimisation consistently shows that each additional form field reduces completion rates by approximately 11 percent. Second, the revised call to action addressed the user's immediate motivation — fast, free connectivity — rather than the brand's motivation, which was data capture.

Now let's move to the retail environment. A shopping centre operator managing a 140-unit mall wanted to improve their WiFi sign-up rate to feed their footfall analytics and tenant marketing platform. Their baseline was 19 percent — lower than hospitality, which is typical for retail because the dwell time is shorter and the perceived need for WiFi is lower. They ran a three-variant test — what's sometimes called an A/B/C test. Variant A was their control: a standard email-and-name form with a "Sign In" button. Variant B replaced the form with a single-click social login — "Continue with Google" or "Continue with Apple." Variant C used a single email field with the copy "Get 10% off your next purchase at participating stores — enter your email to connect." After 28 days and 62,000 sessions, the results were striking. Variant B — social login — achieved 34 percent conversion, a 15 percentage point lift. Variant C — the discount incentive — achieved 31 percent. Variant A remained at 19 percent. The operator deployed Variant B as the primary portal but retained Variant C as a seasonal overlay during promotional periods. The key learning here is that in low-dwell environments, reducing authentication friction is more impactful than adding incentives. Social login removes the cognitive load of entering credentials on a mobile keyboard, which is the primary barrier in retail settings.

Now, let me address some common implementation pitfalls. The first is novelty effect bias. When you launch a new portal design, there's often a short-term spike in engagement simply because it looks different. This is why your warm-up period — the first three days of a test — should be excluded from your analysis. Only count data from day four onwards. The second pitfall is running too many variants simultaneously. It's tempting to test five or six design changes at once to accelerate learning. But each additional variant dilutes your traffic, extends the time needed to reach statistical significance, and makes it harder to attribute results to specific changes. Unless you have very high traffic volumes — above 5,000 daily sessions — stick to two variants per test. The third pitfall is ignoring GDPR compliance in your test design. Every variant you test must meet your data protection obligations. If you're testing a variant that requests additional personal data fields, you need to ensure that the consent mechanism is equally prominent in both variants. Running a test where variant A has a clearly visible privacy notice and variant B buries it in small print will produce a conversion lift that you cannot legally exploit. Your legal team should sign off on every portal variant before it goes live. The fourth pitfall is what I call "winner's curse" — deploying a winning variant without understanding why it won. Always conduct a post-test analysis that segments your results by device type, time of day, and visitor segment where possible. A variant that wins on mobile may underperform on desktop. A variant that wins during peak footfall may struggle during quiet periods. Understanding the mechanism of improvement makes your next test smarter.

Now, a rapid-fire round on the questions we get asked most frequently. "How long should my test run?" Minimum two full business cycles, never fewer than 14 days, and only after hitting your minimum sample size. If you haven't hit sample size after 30 days, your traffic is too low to run a valid test — consider pooling data across multiple sites. "What's the most impactful element to test first?" Call-to-action copy, consistently. It has the highest impact-to-effort ratio and takes less than an hour to implement. Start there before touching form fields or visual design. "Can I test on a single site?" Yes, but with caveats. Single-site tests are valid if you have sufficient traffic. If your site sees fewer than 300 unique daily connections, you'll need 30 or more days to reach significance — at which point seasonal drift becomes a real concern. Multi-site testing, where the same variants run across comparable venues simultaneously, is the more robust approach. "What about multi-variate testing?" MVT — multi-variate testing — allows you to test combinations of changes simultaneously. It's more efficient than sequential A/B tests but requires significantly more traffic. As a rule of thumb, you need at least 1,000 daily sessions per variant combination. For most venue operators, sequential A/B testing is the right starting point.

To summarise the key principles from today's briefing. One: always calculate your required sample size before launching a test — never declare a winner on gut feel. Two: run tests for at least two full business cycles, regardless of early results. Three: test one element at a time, starting with call-to-action copy. Four: exclude the first three days of data to eliminate novelty effect bias. Five: ensure every variant is GDPR-compliant before deployment. Six: segment your post-test results by device type and visitor cohort to understand the mechanism of improvement.

If you're operating on Purple's platform, the multi-variant portal capability gives you the infrastructure to implement everything we've discussed today without additional development overhead. The analytics layer provides the session data you need for sample size tracking, and the portal builder supports concurrent variant deployment from a single management console. Your next step is straightforward: pull your current portal's sign-up rate from your WiFi analytics dashboard, set a 5 percentage point MDE as your target, calculate your required sample size, and design your first variant with revised call-to-action copy. You can be running a statistically valid test within 48 hours. Thank you for joining the Purple Intelligence Briefing. If you found this useful, explore our guides on event-driven marketing automation and WiFi-triggered email workflows — links in the show notes. Until next time.


Executive Summary

For enterprise venue operators, the captive portal is the critical entry point for first-party guest data. Yet many organisations deploy a static splash page and leave it running indefinitely, ignoring the substantial conversion uplift available through structured experimentation. The average unoptimised captive portal in a hospitality or retail environment converts between 20% and 30% of connecting devices into registered profiles. Through rigorous A/B testing of design elements, authentication flows, and value propositions, organisations can reliably raise that baseline to 40%-50% or more.

This guide provides a complete methodology for structuring, running, and analysing A/B tests on captive portal designs. It goes beyond simple design tweaks to address the statistical rigour required for valid results: sample size calculations, test duration planning, and mitigation of common experimental errors such as novelty bias. By leveraging platforms that support multi-variant portals, such as Purple's Guest WiFi solution, IT and marketing teams can turn their guest network from a cost centre into a high-converting data acquisition engine.

Technical Deep Dive: The Mechanics of Captive Portal Testing

A captive portal A/B test is a controlled experiment in which incoming WiFi traffic is split randomly and evenly across two or more variants of a splash page. The goal is to identify which variant produces a higher rate of successful authentications (the conversion event).

Traffic Routing and Session Persistence

To preserve experimental validity, the testing infrastructure must guarantee session persistence. When a user connects to the SSID and is intercepted by the gateway, the RADIUS server or cloud controller assigns them a specific variant (for example, Variant A or Variant B). This assignment is typically handled via a hash of the device's MAC address. Crucially, if the user disconnects and reconnects during the test period, they must be served exactly the same variant they saw initially. Failing to maintain this persistence pollutes the data, because users exposed to multiple variants cannot be cleanly attributed to either one.
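A minimal sketch of how this deterministic assignment can work, assuming a hash of the MAC combined with a per-test salt; the function name and salt value are illustrative, not any specific vendor's API:

```python
import hashlib

def assign_variant(mac_address: str, variants=("A", "B"),
                   salt: str = "portal-test-001") -> str:
    """Deterministically map a device MAC address to a test variant.

    Hashing the MAC together with a per-test salt means the same device
    always receives the same variant across reconnections, while the
    overall split across devices stays close to uniform.
    """
    digest = hashlib.sha256(f"{salt}:{mac_address.lower()}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same device always lands in the same bucket:
print(assign_variant("AA:BB:CC:DD:EE:01"))  # stable output across calls
```

The salt matters: reusing the same hash across consecutive tests would place the same devices in the same bucket every time, so each new test should use a fresh salt.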

Statistical Significance and Minimum Detectable Effect (MDE)

The most common failure mode in A/B testing is ending the experiment prematurely. Observing a higher conversion rate for Variant B after three days does not guarantee a winning design; it may simply be statistical noise. To ensure reliable results, teams must calculate the required sample size before the test begins.

The calculation requires three inputs:

  1. Baseline Conversion Rate ($p$): Your existing portal's current sign-up rate, available from your WiFi Analytics dashboard.
  2. Minimum Detectable Effect (MDE): The smallest relative or absolute improvement that justifies the operational cost of deploying the new design. For captive portals, an absolute MDE of 5 percentage points is standard.
  3. Statistical Significance ($\alpha$): The probability of rejecting the null hypothesis when it is true (a false positive). The industry standard is 95% confidence ($\alpha = 0.05$).

[Infographic: sample size calculator]

Using the standard formula for comparing two proportions, a venue with a 25% baseline conversion rate seeking an absolute improvement of 5 percentage points at 95% confidence requires roughly 3,000 unique visitors per variant.
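The exact figure also depends on the statistical power you assume, which the prose above leaves implicit; planning numbers like the 3,000 quoted here are typically rounded up for safety. The following sketch implements the classic two-proportion formula so you can reproduce the calculation under explicit assumptions (the function name and defaults are illustrative):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant for a two-sided, two-proportion z-test."""
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# 25% baseline, 5-point absolute MDE, 95% confidence:
print(sample_size_per_variant(0.25, 0.05))              # ~1,250 at 80% power
print(sample_size_per_variant(0.25, 0.05, power=0.95))  # ~2,070 at 95% power
```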

Standards and Compliance Considerations

When modifying authentication flows, tests must adhere to the underlying network standards and regulatory frameworks.

  • IEEE 802.1X / EAP: If testing seamless authentication methods (such as Passpoint/Hotspot 2.0) against traditional open SSIDs with a captive portal, ensure the RADIUS accounting logs correctly attribute each session to its variant.
  • GDPR / CCPA Compliance: Any variant that changes the data collection fields (for example, adding a phone number field) must retain compliant consent mechanisms. A variant cannot "win" simply by obscuring the privacy policy.
  • PCI DSS: If testing paid WiFi tiers, ensure payment gateway integrations remain isolated from the main corporate network.

Implementation Guide: Structuring Your First Test

Running a statistically valid test requires a disciplined, vendor-agnostic approach. Follow this step-by-step implementation framework.

Phase 1: Hypothesis Generation and Variant Design

Do not test random changes. Every test should stem from a clear hypothesis. For example: "Reducing the authentication form from three fields (Name, Email, Postcode) to a single field (Email only) will reduce friction and increase conversion by at least 5 percentage points."

When designing variants, focus on high-impact elements first. As shown in the conversion impact chart below, changes to Call to Action (CTA) copy and form fields deliver significantly higher returns than minor colour adjustments.

[Chart: conversion impact by portal element]

Phase 2: Configuration and QA

Configure the variants within your captive portal management platform. Ensure that:

  • The split is configured at 50/50 for a standard A/B test.
  • Analytics tracking is correctly implemented on the success page (the post-authentication redirect) so conversions are counted accurately.
  • Both variants are tested across multiple device types (iOS, Android, Windows, macOS) and browsers (Safari, Chrome, and the native captive portal mini-browsers) before launch.

Phase 3: Test Execution and Duration

Launch the test, but do not monitor the results daily. Constantly checking results leads to "peeking bias", inflating the probability of falsely declaring a winner.

Run the test for a minimum of two full business cycles (typically 14 days) to account for day-of-week variations in footfall. For example, a hospitality property sees different demographic profiles on a Tuesday (business travellers) than on a Saturday (leisure guests). Even if you hit the required sample size on day five, let the test run its full course to ensure the winning variant performs well across all audience segments.
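Once the run completes, significance for the primary metric reduces to a two-proportion z-test. A self-contained sketch follows; the counts are illustrative, echoing the 26% vs 41% hotel example from the transcript:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two sign-up rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 780/3,000 sign-ups (26%) vs 1,230/3,000 (41%): p-value far below 0.001
print(two_proportion_p_value(780, 3000, 1230, 3000))
```

If the p-value falls below your pre-registered alpha (0.05 for 95% confidence) and the sample size target was met, the result can be acted on.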

Best Practices for High-Converting Portals

Based on aggregate data from enterprise deployments, the following principles consistently drive higher sign-up rates:

  1. Minimise Input Friction: Every additional form field reduces conversion. If you only need an email address to trigger Event-Driven Marketing Automation Triggered by WiFi Presence, do not ask for a date of birth.
  2. Leverage Social Authentication: In high-traffic environments such as Transport hubs or Retail centres, one-click authentication via Google, Apple, or Facebook significantly outperforms manual data entry, especially on mobile devices.
  3. Value-Led Copywriting: Replace generic CTAs such as "Connect to WiFi" with value-oriented copy such as "Get High-Speed Access" or "Sign Up for 10% Off Today".
  4. Optimise for the Mini-Browser: The captive portal often loads in a restricted mini-browser (the Captive Network Assistant, or CNA) rather than a full browser. Avoid complex JavaScript, heavy background video, or external web fonts that may fail to load or time out over a pre-authentication connection; a quick payload check is sketched after this list.
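As a rough pre-launch check on point 4, the sketch below fetches a portal page and compares the base HTML payload against the 500KB budget referenced in the troubleshooting table below. It is a partial check only, since a full audit must also sum linked images, scripts, and fonts; the URL is hypothetical:

```python
import urllib.request

PORTAL_URL = "http://192.0.2.10/splash"  # hypothetical splash page address
MAX_BYTES = 500 * 1024                   # 500KB page-weight budget

with urllib.request.urlopen(PORTAL_URL, timeout=5) as resp:
    html = resp.read()

print(f"Base HTML payload: {len(html) / 1024:.0f} KB")
if len(html) > MAX_BYTES:
    print("Over budget: trim assets before launching the variant.")
```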

Troubleshooting and Risk Mitigation

When tests fail to produce usable results or degrade the user experience, the cause is usually one of these common failure modes:

| Failure Mode | Root Cause | Mitigation Strategy |
| --- | --- | --- |
| Novelty Effect | Returning users engage with a new design simply because it is different, causing an initial spike that regresses to the mean. | Discard the first 3-4 days of test data (the "warm-up" period) before calculating significance. |
| CNA Timeout | Variant B includes heavy assets (images/scripts) that take too long to load over the walled-garden connection, causing the OS to close the captive portal. | Keep total page weight under 500KB. Use system fonts and compress all images. |
| Polluted Attribution | Users roaming between access points trigger multiple captive portal impressions, skewing visitor counts. | Ensure the analytics platform deduplicates sessions by MAC address within a 24-hour window. |
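A minimal sketch of the deduplication rule from the last row, assuming session events arrive as (MAC, timestamp) pairs sorted by time; the 24-hour window matches the table's recommendation:

```python
from datetime import datetime, timedelta

DEDUP_WINDOW = timedelta(hours=24)

def dedupe_sessions(events):
    """Count each MAC at most once per 24-hour window.

    `events` is an iterable of (mac, timestamp) pairs sorted by timestamp;
    returns only the events that should count as unique visitors.
    """
    last_counted = {}
    unique = []
    for mac, ts in events:
        if mac not in last_counted or ts - last_counted[mac] >= DEDUP_WINDOW:
            unique.append((mac, ts))
            last_counted[mac] = ts
    return unique

t0 = datetime(2024, 3, 1, 9, 0)
events = [("aa:bb:cc:dd:ee:01", t0),
          ("aa:bb:cc:dd:ee:01", t0 + timedelta(hours=2)),   # AP roam, ignored
          ("aa:bb:cc:dd:ee:01", t0 + timedelta(hours=26))]  # next day, counted
print(len(dedupe_sessions(events)))  # 2
```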

ROI and Business Impact

The business case for captive portal A/B testing is simple and highly measurable. Consider a Healthcare organisation or a large commercial property that logs 50,000 unique device connections per month.

If the baseline conversion rate is 20%, the venue captures 10,000 profiles per month. By implementing a testing programme that lifts conversion to 35%, it captures 17,500 profiles: an additional 90,000 profiles per year with no increase in footfall or marketing spend.
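The arithmetic behind that figure, as a one-screen sanity check:

```python
monthly_connections = 50_000
baseline_rate, optimised_rate = 0.20, 0.35

baseline_profiles = monthly_connections * baseline_rate    # 10,000 per month
optimised_profiles = monthly_connections * optimised_rate  # 17,500 per month
annual_uplift = (optimised_profiles - baseline_profiles) * 12

print(f"Additional profiles per year: {annual_uplift:,.0f}")  # 90,000
```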

These additional profiles feed directly into downstream systems. Properly integrated, for example using Mailchimp Plus Purple: Automated Email Marketing from WiFi Sign-Ups, this expanded audience translates directly into higher engagement rates, more loyalty programme enrolments, and a measurable revenue uplift.

Key Terms and Definitions

Captive Portal

A web page that a user of a public access network is obliged to view and interact with before access is granted.

The primary ingestion point for guest data in enterprise WiFi deployments.

Minimum Detectable Effect (MDE)

The smallest improvement in conversion rate that you care to measure and that justifies the cost of implementing the change.

Used before a test begins to calculate the required sample size. Setting an MDE too low requires impractically large sample sizes.

Statistical Significance

The mathematical likelihood that the difference in conversion rates between Variant A and Variant B is not due to random chance.

IT teams use a 95% confidence level to ensure they don't deploy a 'winning' design that was actually just a statistical fluke.

Walled Garden

A restricted environment that controls the user's access to web content and services prior to full authentication.

Crucial when testing social logins; the OAuth domains (e.g., accounts.google.com) must be whitelisted in the walled garden.

Captive Network Assistant (CNA)

The pseudo-browser that operating systems (like iOS or Android) automatically open when they detect a captive portal.

CNAs have limited functionality (no tabs, limited cookie support, aggressive timeouts). Portal designs must be tested specifically within CNAs, not just standard desktop browsers.

Session Persistence

The mechanism by which a user is consistently served the same variant of a portal if they disconnect and reconnect during the test period.

Essential for data integrity. Usually achieved by hashing the device MAC address to assign the variant.

Novelty Effect

A temporary spike in user engagement caused simply by a design being new or different, rather than inherently better.

Mitigated by discarding the first few days of test data to allow returning users to normalise their behaviour.

A/B/n Testing

An experimental framework where more than two variants (A, B, C, etc.) are tested simultaneously against a control.

Requires significantly higher footfall/traffic than standard A/B testing to reach statistical significance in a reasonable timeframe.

Case Studies

A 400-room business hotel currently uses a captive portal requiring Name, Email, and Room Number, achieving a 22% conversion rate. The marketing director wants to increase this to 30% to grow their loyalty database. They propose testing a new variant that adds a 'Company Name' field but offers a free coffee voucher upon sign-up. How should the IT manager structure this test?

The IT manager should structure a 14-day A/B test. Variant A (Control) remains the 3-field form. Variant B (Challenger) becomes the 4-field form with the coffee voucher offer. To detect an 8 percentage point lift (from 22% to 30%) at 95% confidence, they need approximately 1,100 unique visitors per variant. Given the hotel's occupancy, this will take about 10 days, but the test must run for 14 days to capture two full business cycles (weekday corporate vs. weekend leisure).

Implementation notes: This scenario tests the balance between friction (adding a field) and incentive (the voucher). The IT manager correctly identifies the need for a full two-week cycle. Often, adding fields depresses conversion, but a strong enough incentive can overcome this friction. The test will definitively prove which force is stronger.

A large stadium with 60,000 capacity experiences severe network congestion during the 15-minute half-time interval. The current captive portal requires email verification via a magic link. Conversion is only 12%. The network architect wants to test a one-click 'Sign in with Apple/Google' variant. What are the specific technical constraints for this test?

The architect must configure the walled garden (pre-authentication whitelist) to allow traffic to Apple and Google's OAuth servers. Without this, the social login buttons will fail to load or authenticate. The test should be run across three consecutive match days to ensure sufficient sample size and to account for different fan demographics. The primary metric is not just conversion rate, but 'time-to-authenticate' to ensure the new method reduces DHCP lease holding times during the half-time rush.

Implementation notes: In high-density environments like stadiums, captive portal design is as much about network throughput as it is about marketing. The architect correctly identifies that social login requires specific walled garden configurations. Measuring time-to-authenticate is a critical secondary metric for venue operations.

Scenario Analysis

Q1. A retail chain runs a portal test for 5 days. Variant B shows a 45% conversion rate compared to Variant A's 30%. The marketing team wants to deploy Variant B immediately across all 50 stores. As the IT manager, what is your recommendation?

💡 Hint: Consider the 'Two-Cycle' rule and the concept of business cycles in retail.

Recommended approach:

Do not deploy yet. Five days is insufficient because it does not cover a full business cycle (a full week including both weekdays and weekends). Retail footfall demographics change significantly between Tuesday morning and Saturday afternoon. The test must run for at least 14 days to ensure Variant B performs consistently across all shopper profiles, even if statistical significance appears to have been reached early.

Q2. You are testing a new portal design that includes a large, high-resolution background video to showcase a new hotel property. During the test, Variant B (the video version) shows a significantly lower conversion rate than the plain text Control, but network logs show high drop-off before the page even fully renders. What is the likely technical issue?

💡 Hint: Consider the environment where captive portals load on mobile devices.

Recommended approach:

The high-resolution video is causing Captive Network Assistant (CNA) timeouts. CNAs on iOS and Android have aggressive timeout thresholds and limited resources. If the page weight is too heavy (e.g., a large video file) over the pre-authenticated walled garden connection, the OS will assume the network is broken and close the CNA window before the user can authenticate. The mitigation is to remove the video, keep page weight under 500KB, and re-test.

Q3. A venue wants to test changing the portal CTA from 'Sign In' to 'Join WiFi & Get Offers'. They also want to change the button colour from grey to Purple, and remove the 'Last Name' field. They propose launching this as Variant B. Why is this experimental design flawed?

💡 Hint: Review the 'Test One, Learn One' memory hook.

Recommended approach:

This design violates the principle of isolating variables. By changing the copy, the colour, and the form length simultaneously in a single variant, the team will not know which specific change caused the outcome. If conversion increases, was it the shorter form or the better copy? The test should be restructured to isolate one variable (e.g., test the copy change first), or structured as a multi-variate test (MVT) if traffic volumes permit.