A few days ago, we launched ABBY.io, a cloud service to evaluate and document all your A/B tests.
If you’ve ever struggled with concepts like p-values and statistical power: we’ve got you covered.
This blog post explains why proper evaluation matters and why documentation is, as so often, crucial.
Evaluating A/B tests can be hard and error-prone. We want to make it easy for everyone to evaluate A/B tests safely, without
the need to study statistics or buy expensive statistics software.
We developed a robust and well-tested evaluation framework so that you don’t
have to worry about the nasty details anymore.
Upload your raw data, enter a few test details, and ABBY.io does the rest for you.
You can also upload existing results; either via the web view or through our dead simple API.
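As an illustration of what an API upload might look like, here is a minimal sketch that builds a result payload. The field names and structure below are purely hypothetical assumptions for illustration, not documented ABBY.io API details:

```python
import json

# Hypothetical result payload; field names are illustrative assumptions,
# not the actual ABBY.io API schema.
payload = {
    "name": "Checkout button color",
    "visitors_control": 4000,
    "conversions_control": 200,
    "visitors_test": 4000,
    "conversions_test": 260,
}

# Serialize the payload; a real script would POST this body to the API.
body = json.dumps(payload)
print(body)
```

In practice you would send this body in an authenticated HTTP request to the API endpoint given in your account.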
ABBY.io provides you with easy-to-understand results that include important measures like statistical significance, power,
the uplift you achieved with your test group, and an interval that contains the true uplift with 95% confidence (the confidence interval, CI).
The results are color-highlighted, so you can see at a glance whether any measures are significant.
Each test in the overview list also shows its current status, so you can spot relevant results at a glance.
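To make these measures concrete, here is a minimal sketch of the kind of evaluation that runs behind the scenes: a standard two-proportion z-test computing a p-value, the relative uplift, and a 95% confidence interval for the difference in conversion rates. The numbers are made-up example data, and this is a simplified illustration, not ABBY.io's actual evaluation framework:

```python
from math import sqrt, erf

def evaluate_ab_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion counts of control (A) and test (B) groups."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled standard error under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se_pool = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se_pool
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    # Relative uplift of the test group over the control group
    uplift = (p_b - p_a) / p_a
    # 95% CI for the absolute difference (unpooled standard error)
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    ci = (p_b - p_a - 1.96 * se, p_b - p_a + 1.96 * se)
    return {"p_value": p_value, "uplift": uplift, "ci": ci}

# Example: 5.0% vs. 6.5% conversion rate, 4000 visitors per group
result = evaluate_ab_test(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
print(f"p-value: {result['p_value']:.4f}")
print(f"uplift:  {result['uplift']:.1%}")
print(f"95% CI for difference: ({result['ci'][0]:.4f}, {result['ci'][1]:.4f})")
```

If the confidence interval for the difference excludes zero, the result is significant at the 95% level; this is exactly the kind of detail the color highlighting saves you from checking by hand.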
We found that documentation is an often neglected aspect of A/B testing.
But in our experience it is crucial to know what was tested, when, and with which users.
ABBY.io therefore makes it easy to attach details such as links, the test's running period, and screenshots to your A/B tests.
Good documentation makes it easy to show results to co-workers and clients.
If a test is well documented and has a good description,
it is often enough to just send them a link, and they will have no trouble understanding what is going on.
If a metric is going down, you can easily check whether your test might be the cause,
or rule it out as a source if your test only affects other segments.