There is no need to spend a lot of time setting up content experiments. You will,
however, need to have at least one or two different pages ready before you can begin. When
you have a few things set up and ready to go, head over to Google Analytics and start there.
Let's see how it works in practice.
Let's start. In the Behavior section of Google Analytics, which sits between
Acquisition and Conversions in the left-hand menu, there's an item labeled
"Experiments." The name isn't very descriptive, but it sounds harmless enough. A blank
screen will appear when you click it. To start a new experiment, click the "Create
Experiment" button at the top left of your window.
The next step is to pick the test you want to run. This is where the fun starts. It doesn't
matter what you call it. Below the name field, choose the objective. This is where you set a
specific goal to track results against and figure out which variation performs best.
Select a goal from the list (opt-ins, purchases, and so on) or measure how people
use your site instead (for example, bounce rate).
If you don't already have a goal or objective set up, create one now; you'll need it to
run a conversion-based experiment. It all comes down to why you're running this test in
the first place. Most people are surprised to find that their old blog posts get the most
attention. The problem is that old, out-of-date pages are often the ones with the highest
number of comments.
To see this, go to Behavior, add Google/Organic as a secondary dimension, sort by top
pageviews, and look at the bounce rate column.
For now, let's choose "Bounce Rate" as our goal. This way, we can make changes
to how the pages look or add more high-quality images to keep people on the site longer.
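For reference, bounce rate is simply the share of sessions that viewed only one page. A minimal sketch of the arithmetic in Python, with made-up numbers:

# Bounce rate = single-page sessions / total sessions (illustrative numbers only)
single_page_sessions = 420
total_sessions = 1000
bounce_rate = single_page_sessions / total_sessions
print(f"Bounce rate: {bounce_rate:.1%}")  # -> 42.0%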
Click on "Advanced Options" after you've chosen your goal. This will give you more
detailed settings for this test. By default, these advanced options are not turned on.
According to how well your site changes, Google will change the amount of traffic that
comes to your site "dynamically."
However, if you turn on the option, your experiment will run for two weeks and
try to reach a ninety-nine percent statistical confidence level. If this page gets a lot of
traffic, then you can start with shorter tests. If there's only a small amount of
traffic, you might need to extend the tests for longer, like for a month or two, or even
longer if there's a lot of traffic. So far, things are going well!
Step 3 is where you construct your experiment. Add the URLs for all of the pages
you want to test; just copy and paste them, and that's it! Give them names that are easy to
remember, or skip that and Google Analytics will simply number them for you.
Step 4 is where you include the script code on your page. All right, now it's
time for the fun part! Under this section, there's a button that lets you send all of this
code to your favorite tech person.
The first thing you should do is make sure that your default Google Analytics
tracking code is on all of the pages you plan to test. If you use a CMS, it should already
be there; it's usually added to the whole site when it's first set up.
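If you want to double-check that the tracking code really is on every page you plan to test, a small script can do it for you. This is only a hypothetical sketch using the Python requests library; the property ID and URLs below are placeholders:

import requests

TRACKING_ID = "UA-XXXXXXX-X"  # placeholder: replace with your own property ID
pages = [
    "https://example.com/original-page",
    "https://example.com/variation-1",
]

for url in pages:
    html = requests.get(url, timeout=10).text
    status = "found" if TRACKING_ID in html else "MISSING"
    print(f"{url}: tracking code {status}")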
Next, highlight and copy the experiment code you're given. It belongs right after the
opening head tag of the original page, which you'll find near the top of your HTML
document; the easiest way to locate it is simply to search the source for it.
Then click Next Step in Google Analytics, and it will check whether everything looks good.
Not sure if you did it right? Don't worry, it will tell you.
For example, the first time I added the code for this demo, I pasted it right next
to the regular Google Analytics tracking code (which the check so helpfully and clearly
pointed out).
And now you're ready to go!
It is important to mention that a single indicator is not always enough to assess the
effect of a change. For example, after redesigning your online store, the
average order value may decrease while overall revenue grows, because a larger share of
visitors convert into buyers. That is why it is important to monitor several key
indicators at once.
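A quick, made-up example in Python of how a single number can mislead:

# Illustrative numbers only: average order value drops, yet revenue grows
# because more visitors convert into buyers.
visitors = 10000

before = {"conversion": 0.02, "avg_order_value": 50.0}
after = {"conversion": 0.03, "avg_order_value": 45.0}

for label, s in (("before", before), ("after", after)):
    revenue = visitors * s["conversion"] * s["avg_order_value"]
    print(f'{label}: AOV = {s["avg_order_value"]:.2f}, revenue = {revenue:.0f}')
# before: revenue = 10000 (200 buyers x 50)
# after:  revenue = 13500 (300 buyers x 45), higher despite the lower average order value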
So the key indicators are defined, the test is running, and the first data is coming in.
When early results look close to what we expected, there is a temptation to draw
conclusions straight away.
We should not hurry. The values of our key indicators change from day to day,
which means we are dealing with random variables. To compare random variables, we
estimate their average values, and it takes time to accumulate enough history for those
averages to be reliable.
The effect of making a change is defined as the difference between the
averages of the key indicators in the segments. This raises the next question: how
confident are we in the validity of the result obtained? If we run the test again, what
is the probability that we will be able to repeat the result?
If you're going to run A/B tests on your own site, your project probably already has
key metrics that need to be improved. If you don't have them yet, it's time to
define them.
The difference in the mean values is not enough to consider the result reliable.
We must also estimate the area of overlap between the distributions.
The smaller the overlap, the more confidently we can say that the effect is
real. In statistics, this "certainty" is called the "significance of the
result."
Typically, in order to decide that a change is effective, the significance level is
chosen to be ninety percent, ninety-five percent, or ninety-nine percent; the allowed
overlap of the distributions is ten percent, five percent, or one percent,
respectively. With a low significance level, there is a danger of drawing erroneous
conclusions about the effect of the change.
Despite how important this is, in reports on A/B tests, people often forget to include
the significance level at which the result was found.
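In code, the decision rule is nothing more than comparing the p-value from your test against the threshold implied by the chosen significance level. A minimal Python sketch (the p-value itself would come from whatever statistical test you run, such as the t-test shown further below):

def is_significant(p_value, significance=0.95):
    # A significance level of 0.90 / 0.95 / 0.99 tolerates an overlap of 10% / 5% / 1%.
    return p_value <= 1 - significance

print(is_significant(0.03, significance=0.95))  # True: significant at the 95% level
print(is_significant(0.03, significance=0.99))  # False: not significant at the 99% level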
By the way, in practice, about eight out of ten A/B tests turn out not to be statistically
significant. To compare random variables, mathematicians developed a whole branch of
statistics called "statistical hypothesis testing." Two competing hypotheses are
formulated: one is called the "null hypothesis," and the other the "alternative
hypothesis." The null hypothesis says that there is no significant difference between the
average values of the indicator in the two segments. The alternative hypothesis says that
there is a significant difference between those averages.
A number of statistical tests can be used to check these hypotheses, and which one is
appropriate depends on what exactly you are comparing. For example, Student's t-test can
be used when comparing the day-to-day values of an indicator in two segments. It works
well with small amounts of data because it takes the sample size into account when
assessing significance.
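As a hypothetical sketch of how this looks in practice, SciPy's ttest_ind compares the means of two small samples and returns the test statistic together with the p-value (the daily values below are invented):

from scipy import stats

# Invented daily values of a key indicator in the two segments (small samples).
segment_a = [12.1, 11.8, 12.6, 12.0, 11.9, 12.3, 12.2]
segment_b = [12.9, 13.1, 12.7, 13.4, 12.8, 13.0, 13.2]

t_stat, p_value = stats.ttest_ind(segment_a, segment_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below (1 - significance level) lets us reject the null hypothesis
# that the two averages are equal.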