Growth on autopilot with the help of Machine Learning

We use 500+ data points to deploy 500+ new automated experiments every 4 hours

Frequently Asked Questions

You will be able to decide. Each experiment will have three types of traffic groups:

The control group – the traffic sample that will be tracked against the live variation to determine the difference.

The learning group – the traffic sample the system will use to continuously learn, adjusting and improving the algorithm.

The tested/optimized group – the traffic sample that will actually be affected by the experiments.
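The split above can be sketched as deterministic bucketing: hash the visitor and experiment IDs so the same visitor always lands in the same group. This is an illustrative sketch, not Adapt's actual code; the group weights are hypothetical and would be configurable per experiment.

```python
import hashlib

# Hypothetical group weights (percentages); configurable per experiment.
GROUPS = [("control", 10), ("learning", 10), ("optimized", 80)]

def assign_group(visitor_id: str, experiment_id: str) -> str:
    """Hash visitor + experiment so the same visitor always gets the same group."""
    digest = hashlib.sha256(f"{visitor_id}:{experiment_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    cumulative = 0
    for name, weight in GROUPS:
        cumulative += weight
        if bucket < cumulative:
            return name
    return GROUPS[-1][0]

print(assign_group("visitor-42", "exp-7"))  # same inputs always give the same group
```

Hashing instead of random assignment keeps a visitor's experience consistent across page views without storing any per-visitor state.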

The more, the better. However, we will start with companies that have at least 200k visits each month, in order to validate the impact.

We use separate apps for creating, configuring and monitoring the experiments, and for applying the experiments and collecting data on the website. These are called “Dashboard” and “Tracking”. Both are written in PHP. The Dashboard is built on Symfony, while Tracking runs on a proprietary, minimal-footprint framework, since performance is critical for that system.

For storing data, we use both relational (Percona) and NoSQL (MongoDB) systems, depending on the type of data being stored and its read/write ratio.

We also use caching systems such as Redis and a caching strategy resembling the fractal index concept.

For async processing we use queues and workers, with RabbitMQ at the core of the system.
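The queue/worker pattern can be sketched with an in-memory queue standing in for RabbitMQ (this is an illustration, not the production PHP code). With RabbitMQ, the producer would publish to an exchange and the workers would consume from a queue, but the shape of the code is the same.

```python
import queue
import threading

tasks: queue.Queue = queue.Queue()
results = []

def worker() -> None:
    """Consume jobs until a None sentinel arrives."""
    while True:
        task = tasks.get()
        if task is None:          # sentinel: shut this worker down
            tasks.task_done()
            break
        # In production this step would train/apply an experiment job.
        results.append({"experiment": task["experiment"], "status": "processed"})
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

for i in range(5):                # producer side: enqueue experiment jobs
    tasks.put({"experiment": f"exp-{i}"})
for _ in threads:                 # one sentinel per worker
    tasks.put(None)

tasks.join()
for t in threads:
    t.join()
print(len(results))  # -> 5
```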

Finally, for the Machine Learning module (both training and prediction), we use PredictionIO with custom-written algorithms.

For DevOps needs such as monitoring/alerting, logging and deployment, we use industry-standard tools such as Grafana, Kibana, Elasticsearch and Jenkins.

We will assist you in creating templates that have the look & feel of your website & brand identity.

The implementation involves placing a script on your website and pushing various data points from your back-end. After the tech implementation there will be an initial experiment setup in terms of creatives, traffic allocation and capping, done together with our team – a customer success manager, a conversion expert, a data analyst and a tech engineer will assist you in this process from our end.
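Pushing a back-end data point amounts to an authenticated HTTP POST. The sketch below is purely hypothetical: the URL, field names and payload shape are placeholders, and the real integration details come from our team during onboarding.

```python
import json
import urllib.request

# Placeholder endpoint -- the real one is provided during onboarding.
TRACKING_URL = "https://tracking.example.com/v1/datapoints"

def build_request(visitor_id: str, event: str, value: float) -> urllib.request.Request:
    """Build (but do not send) a POST request carrying one data point."""
    payload = json.dumps({
        "visitor_id": visitor_id,
        "event": event,   # e.g. "purchase", "add_to_cart" (illustrative names)
        "value": value,   # e.g. order total
    }).encode()
    return urllib.request.Request(
        TRACKING_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("visitor-42", "purchase", 59.90)
print(req.get_method(), req.full_url)
```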

We will decide together which experiments to run and we will set the experiments according to the experimentation plan. You will come up with the look & feel and we will do the heavy-lifting in terms of tech implementation.

Yes, we will tag users by sending events to GA, so you will be able to easily identify users from the control group and the tested group.
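One way to picture the tagging is a Universal Analytics Measurement Protocol event hit. In practice the event is sent by the Adapt script on the page; the property ID, category and label values below are placeholders, not Adapt's actual event schema.

```python
from urllib.parse import urlencode

def ga_event_payload(property_id: str, client_id: str, group: str) -> str:
    """Build a Measurement Protocol event payload tagging a user's group."""
    return urlencode({
        "v": "1",                  # Measurement Protocol version
        "tid": property_id,        # GA property, e.g. "UA-XXXXX-Y"
        "cid": client_id,          # anonymous client ID
        "t": "event",              # hit type
        "ec": "adapt",             # event category (illustrative)
        "ea": "experiment_group",  # event action (illustrative)
        "el": group,               # event label: "control" or "tested"
    })

print(ga_event_payload("UA-XXXXX-Y", "555", "control"))
```

With the group stored as an event label, GA segments can then split any report into control vs. tested traffic.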

The project uses the right tools for each job: PHP for web-related apps (we use Symfony for the app in which you configure the overlays and see the reports) and Scala for the Machine Learning tasks.

No, since the problem we solve does not require it. Deep learning is most useful when progressively extracting features from raw input, which is not the case here.

We use a clustering algorithm for determining similarities between customers and predicting the probability of success for each overlay.

We don’t yet have a pricing structure. The system will be free to use until the end of Q4 2019.

However, our aim is to provide value and to track the revenue uplift attributed to Adapt. After that, we will define a revenue-sharing structure, aligned with each retailer’s margin & profitability.

We are facing a clustering problem: we are trying to find customers who resemble one another, without prior knowledge of what they look like and without any predefined groups.

The system holds models generated for each individual experiment, since different characteristics may influence experiments in different ways. For each display opportunity, if the capping rules pass, the system assigns the user to a cluster, predicts a probability of purchase for each experiment that may be applied given the user’s context, and selects the one most likely to lead to a purchase.
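The selection step can be sketched as: given the user's cluster, consult each eligible experiment's model and pick the highest predicted purchase probability. The lookup-table "models" and experiment names below are made up for illustration; the real system serves models trained in PredictionIO.

```python
# Hypothetical per-experiment models: cluster -> P(purchase).
MODELS = {
    "free-shipping-overlay": {0: 0.021, 1: 0.034},
    "exit-intent-overlay":   {0: 0.028, 1: 0.019},
}

def select_experiment(cluster, eligible):
    """Return the eligible experiment most likely to lead to a purchase."""
    return max(eligible, key=lambda exp: MODELS[exp].get(cluster, 0.0))

print(select_experiment(0, list(MODELS)))  # -> "exit-intent-overlay"
print(select_experiment(1, list(MODELS)))  # -> "free-shipping-overlay"
```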

You will be able to choose between Bayesian and frequentist inference, the two most commonly used methods for determining the success of experiments.
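The two approaches can be illustrated on the same (made-up) A/B numbers: a frequentist two-proportion z-test versus a Bayesian probability that the tested group beats control, using Beta(1,1) posteriors and Monte Carlo sampling. This is a textbook sketch, not Adapt's reporting code.

```python
import math
import random

control = (1000, 30)    # (visitors, conversions) -- illustrative numbers
tested  = (1000, 45)

def z_test(a, b):
    """Frequentist two-proportion z-statistic (pooled standard error)."""
    (na, ca), (nb, cb) = a, b
    p_pool = (ca + cb) / (na + nb)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / na + 1 / nb))
    return (cb / nb - ca / na) / se

def prob_b_beats_a(a, b, draws=20000, seed=1):
    """Bayesian P(tested > control) via Beta(1,1) posteriors, Monte Carlo."""
    rng = random.Random(seed)
    (na, ca), (nb, cb) = a, b
    wins = sum(
        rng.betavariate(1 + cb, 1 + nb - cb) > rng.betavariate(1 + ca, 1 + na - ca)
        for _ in range(draws)
    )
    return wins / draws

print(round(z_test(control, tested), 2))      # z-statistic for the uplift
print(prob_b_beats_a(control, tested))        # posterior probability B > A
```

The frequentist view asks "how surprising is this difference under no effect?", while the Bayesian view directly reports the probability that the tested variation is better.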

You will decide on which devices to run the experiments. Our platform fully supports mobile devices and responsive design.

Adapt is platform-agnostic. It will be possible to run it on any platform, such as Magento, Shopify, BigCommerce or Demandware, and it can also be integrated with custom-developed platforms.

Adapt uses a clustering algorithm, with DBSCAN at its core.
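To make the DBSCAN idea concrete, here is a minimal pure-Python version on 2-D points (the production system obviously operates on many more dimensions and far more data). Points within `eps` of each other chain into clusters; isolated points are labeled noise (-1), with no need to choose the number of clusters in advance.

```python
def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN: label each 2-D point with a cluster ID or -1 for noise."""
    n = len(points)
    labels = [None] * n  # None = unvisited, -1 = noise

    def neighbors(i):
        return [j for j in range(n)
                if (points[i][0] - points[j][0]) ** 2
                 + (points[i][1] - points[j][1]) ** 2 <= eps ** 2]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1       # noise (may later be claimed as a border point)
            continue
        cluster += 1
        labels[i] = cluster
        frontier = list(seeds)
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster          # border point: claim, don't expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:         # core point: keep expanding
                frontier.extend(more)
    return labels

pts = [(0, 0), (0.5, 0), (0, 0.5), (5, 5), (5.2, 5), (5, 5.2), (20, 20)]
print(dbscan(pts))  # -> [0, 0, 0, 1, 1, 1, -1]: two dense clusters plus noise
```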

No. The script is included asynchronously, which limits the impact on your website if anything should go wrong on our end.

Worst-case scenario, the experiments will be applied late or not at all, but your website will not be affected by any such scenarios.

Performance and availability are at the heart of everything we build, and we really enjoy the challenge of designing and running scalable, highly available systems in an environment oriented toward operational excellence.

Ready to see Adapt in action?

Become an Early Adopter
