Preparing for interview: A/B Tests

Introduction

A strong understanding of A/B testing is crucial for frontend engineers because it empowers them to make data-driven design decisions that directly impact user experience and conversion rates. By testing variations of UI elements, layouts, or copy, frontend engineers can identify which options resonate most effectively with users. This knowledge allows them to create interfaces that are not only visually appealing but also optimized for key business metrics. In interviews, showcasing your knowledge of A/B testing demonstrates a commitment to user-centric development and the ability to translate data into actionable improvements, making you a more valuable asset to potential employers.

Below are some common interview questions, along with a description of what interviewers typically expect to hear in the answers.

Table of contents

  1. Can you explain in simple words what A/B tests are and how you used them in your job?
  2. What is statistical significance?
  3. What are some of the common problems in running A/B tests?
  4. What are some of the things you learned during your previous job while doing A/B tests?
  5. What are some skills that you think are important when writing good A/B tests?
  6. You run a test that seems successful, but it negatively impacts another metric. How do you handle this?
  7. Give us a rundown of a hypothetical feature and how you would go about designing the A/B test.
  8. Conclusion

Can you explain in simple words what A/B tests are and how you used them in your job?

A/B testing gives you the power to fine-tune your product with confidence, knowing that your changes are actually making a positive impact. Think of it like a science experiment for your website or app!

In my previous role at an ecommerce company, we were obsessed with boosting conversions (i.e., getting more sales!). We'd A/B test everything: different checkout button colors and sizes, personalizing those buttons based on user data... you name it!

The process is straightforward:

  • Choose your goal: What do you want to improve (clicks, signups, sales)?
  • Form a hypothesis: What change do you think will make a difference?
  • Roll out the experiment: Split your audience so one group sees the original version (the control group) and the other sees your new version.
  • Analyze the results: Did your change move the needle?

While the idea is simple, understanding statistical significance is important. You'll need to crunch some numbers to make sure your results aren't just random chance.
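In interviews it can help to show, in code, what that split-and-measure flow looks like. Here is a minimal, illustrative TypeScript sketch; the experiment name, the trackEvent helper, and the simple 50/50 split are hypothetical placeholders rather than any particular tool's API.

```ts
// Minimal sketch of the flow described above: assign a variant, log the
// exposure, and later log the goal metric. All names here are made up.
type Variant = "control" | "variant";

function assignVariant(): Variant {
  // 50/50 random split; real setups persist this so a user keeps the same variant.
  return Math.random() < 0.5 ? "control" : "variant";
}

function trackEvent(name: string, payload: Record<string, string>): void {
  // Stand-in for whatever analytics pipeline you use.
  console.log(name, payload);
}

const variant = assignVariant();
trackEvent("experiment_exposure", { experiment: "checkout_button_color", variant });

// Later, when the goal metric happens:
trackEvent("checkout_completed", { experiment: "checkout_button_color", variant });
```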

What is statistical significance?

When we observe a promising result in an A/B test, we need to make sure it's not a fluke before rolling it out to everyone. To do this, we need two things:

A large enough sample size: We want to ensure that enough users have participated in the test to represent the diversity of our user base.

Adequate test duration: The experiment needs to run long enough to overcome any initial randomness and to account for possible behavioral changes throughout the week or month.

User behavior can drastically shift between weekdays and weekends. Shopping for work-related items might peak during the week, while leisure products might get more attention on weekends. Running a test only on the weekend skews your data towards a specific type of behavior. To make truly informed decisions, you need a test that reflects the full spectrum of user behaviors throughout a typical cycle. Limiting it to a weekend doesn't provide the complete picture.

Statistical significance is also a mathematical concept built on ideas like the mean and standard deviation, but the math itself is not really my expertise. I often relied on spreadsheets with the formulas already set up.
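If you want to show you understand what those spreadsheet formulas compute, here is a rough TypeScript sketch of a two-proportion z-test comparing the conversion rates of control and experiment. The traffic numbers are invented purely for illustration.

```ts
// Abramowitz & Stegun approximation of the error function.
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-ax * ax));
}

// Standard normal cumulative distribution function.
const normalCdf = (z: number): number => 0.5 * (1 + erf(z / Math.SQRT2));

// Two-sided p-value for the difference between two conversion rates.
function twoProportionPValue(convA: number, totalA: number, convB: number, totalB: number): number {
  const pA = convA / totalA;
  const pB = convB / totalB;
  const pooled = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  const z = (pB - pA) / se;
  return 2 * (1 - normalCdf(Math.abs(z)));
}

// Example: control converts 500/10,000, experiment converts 570/10,000.
const p = twoProportionPValue(500, 10_000, 570, 10_000);
console.log(p < 0.05 ? "statistically significant at 95%" : "not significant yet");
```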

What are some of the common problems in running A/B tests?

As a frontend developer, I think the flicker effect was one of the biggest problems we faced. We usually modified the page with DOM manipulation, which meant the control version loaded first and was then rewritten into the experiment version. If not done properly, this leads to a flicker where the user briefly sees both the control and the experiment. That is no longer a clean A/B test, and we cannot rely on its results.

One way to solve this problem is to "hide" the element to be modified first, perform the DOM manipulations, and only then show it. The user might see a blank spot for a moment, but never both versions.
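Here is a minimal sketch of that hide-then-swap approach, assuming a hypothetical #checkout-button element and an applyVariant function that performs the experiment's DOM changes.

```ts
// Hide the element before any variant logic runs, mutate it, then reveal
// only the final version so the user never sees both states.
const el = document.querySelector<HTMLElement>("#checkout-button");

function applyVariant(target: HTMLElement): void {
  target.textContent = "Buy now"; // placeholder change for illustration
}

if (el) {
  el.style.visibility = "hidden";   // hide the control version immediately
  try {
    applyVariant(el);               // mutate text, color, layout, etc.
  } finally {
    el.style.visibility = "visible"; // reveal the finished variant
  }
}
```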

Another common issue with A/B tests is making sure users are classified consistently into buckets. A user who has been assigned the experiment version should always see the experiment version, never a mix of both.
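A common way to get that consistency is to hash a stable user ID so the same user always lands in the same bucket. The sketch below uses a simple FNV-1a hash and a 50% split purely for illustration; it is not any specific experimentation platform's API.

```ts
// Deterministic bucketing: the same (experiment, userId) pair always maps
// to the same bucket, across sessions and devices that share the ID.
function hashToUnitInterval(input: string): number {
  // 32-bit FNV-1a hash, mapped onto [0, 1).
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0) / 0x100000000;
}

function bucketFor(userId: string, experiment: string): "control" | "experiment" {
  // Salting with the experiment name keeps buckets independent across tests.
  return hashToUnitInterval(`${experiment}:${userId}`) < 0.5 ? "control" : "experiment";
}

console.log(bucketFor("user-1234", "checkout_button_color")); // same output every call
```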

A third problem is that running too many tests at the same time can create interactions between them, where one experiment's change skews the results of another.

What are some of the things you learned during your previous job while doing A/B tests?

Here are some of my learnings:

  1. Accessibility is important. We noticed a significant increase in metrics when we improved color contrast, supported both light and dark modes, and used more readable fonts. This makes our designs more inclusive and also makes more money.

  2. For shopping sites, we noticed that when people shop for a specific event such as Valentine's Day, highlighting that the product will be delivered before the event helps them decide to buy more quickly.

  3. While data is a great way to make decisions, always question the data and do not accept it until you are satisfied that it is indeed correct. Our minds are biased toward seeing certain things, and when data matches our expectations we tend to ignore alternative explanations.

What are some skills that you think are important when writing good A/B tests?

  1. DOM manipulation while keeping performance in mind.

  2. Ability to write concise code.

  3. Understanding how feature flags work and how feature rollouts happen (a minimal sketch of a flag-gated experiment follows this list).

  4. Remembering to clean up feature flags properly once a test has concluded.

  5. Ability to properly analyze the results of a test. In general, if you see a surprisingly large change, analyze it in depth. For example, one time we noticed a big jump in conversions, and after analysis we found that the control group was actually broken in the Edge browser; the experiment happened to fix it, hence the jump. We had to fix the Edge bug first and then rerun the test.
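As referenced in item 3, here is a rough sketch of what a flag-gated experiment branch can look like, including a reminder to clean the flag up afterwards. The featureFlags object and the flag name are made up for illustration; a real project would use its flag service's SDK.

```ts
// A hypothetical flag lookup; in practice this would come from a remote
// config or feature-flag service rather than a hard-coded object.
const featureFlags = {
  isEnabled(flag: string): boolean {
    return flag === "checkout_button_v2";
  },
};

function renderCheckoutButton(): string {
  // TODO(cleanup): remove this flag and the losing branch once the test concludes.
  if (featureFlags.isEnabled("checkout_button_v2")) {
    return '<button class="cta cta--large">Buy now</button>'; // experiment
  }
  return '<button class="cta">Add to cart</button>';          // control
}

console.log(renderCheckoutButton());
```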

You run a test that seems successful, but it negatively impacts another metric. How do you handle this?

This is a difficult question to answer. At any given time we track multiple metrics, and some have a bigger impact on the business than others. We need to take a holistic look at the whole business and decide what makes sense for it overall.

For example, overly aggressive nudging might get more users to buy products, but buyer's remorse might then take over and they might return them. Such returns often result in losses. So it is not enough to track sales alone; we should also watch returns and make sure that metric is not impacted too negatively.

This often involves working closely with business analysts and product managers to understand the goals.

Give us a rundown of a hypothetical feature and how you would go about designing the A/B test.

Let us call this Feature X. This is how we typically do it.

  1. Define and scope the feature. Sometimes this is just a minor UI change, so we get UX to give us both the before and after designs.

  2. Work with PMs and Analysts to figure out which metrics would be impacted and what impact we expect.

  3. Determine which users should see this feature and whether there are any targeting criteria. Sometimes we run tests for a narrow set of users based on, say, gender or location; sometimes the audience is wider.

  4. Then create a "roll-out plan" describing what percentage of users will see the experiment (a sketch of such a plan follows this list).

  5. Implement the test, make sure you can track the numbers and then roll out the test.

  6. Once the test is getting a sufficient amount of traffic, wait a few days to make sure it has reached statistical significance.

  7. Monitor the test results daily to make sure you have not broken anything.

  8. At the end discuss the impact on metrics with your analysts and then decide whether you want to roll this feature to everyone or not.

  9. Document the results for future engineers to see and understand why a particular change was made and what impact it had.
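To make the roll-out plan in step 4 concrete, here is one way the plan for the hypothetical Feature X could be expressed as data. The audience, metrics, dates, and ramp percentages are all invented for illustration.

```ts
// A hypothetical roll-out plan for Feature X, captured as plain data so it
// can be reviewed with PMs and analysts before implementation.
interface RolloutStage {
  startDate: string;      // ISO date when this stage begins
  trafficPercent: number; // share of eligible users exposed to the experiment
}

interface ExperimentPlan {
  name: string;
  audience: { countries: string[]; newUsersOnly: boolean };
  primaryMetric: string;
  guardrailMetrics: string[];
  stages: RolloutStage[];
}

const featureXPlan: ExperimentPlan = {
  name: "feature_x",
  audience: { countries: ["US", "CA"], newUsersOnly: false },
  primaryMetric: "checkout_conversion_rate",
  guardrailMetrics: ["return_rate", "page_load_time"],
  stages: [
    { startDate: "2024-05-01", trafficPercent: 5 },  // smoke test
    { startDate: "2024-05-08", trafficPercent: 25 }, // widen if metrics look sane
    { startDate: "2024-05-15", trafficPercent: 50 }, // full A/B split
  ],
};

console.log(featureXPlan.stages.map((s) => `${s.startDate}: ${s.trafficPercent}%`).join("\n"));
```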

Conclusion

A/B testing is a powerful tool that helps frontend engineers build better, more user-focused experiences. By understanding the principles of A/B testing, frontend developers can:

Make informed design decisions: Instead of relying on guesswork, A/B tests provide data-backed evidence of what works for your users.

Iterate and improve: A/B testing fosters a culture of continuous optimization, allowing you to refine your product over time.

Boost key metrics: Successful A/B tests can directly enhance conversions, engagement, and other business-critical goals.

Mastering A/B testing concepts demonstrates to potential employers that you're not just a coder, but a strategic problem-solver who can drive tangible results. The questions outlined in this article will help you prepare for a successful interview and highlight your expertise in this valuable skill.