SEO Book.com
Experiment Driven Web Publishing
Posted: 08 Apr 2013 10:22 PM PDT

Do users find big headlines more relevant? Does long text lead to more, or less, visitor engagement? Is that latest change to the shopping cart going to make things worse? Are your links just the right shade of blue? If you want to put an end to tiresome subjective arguments about page length, or the merits of your client's latest idea, which is to turn their website pink, then adopting an experimental process for web publishing can be a good option. If you don't currently use an experiment-driven publishing approach, this article is for you. We'll look at ways to bake experiments into your web site, the many opportunities testing creates, how it can help your SEO, and ways to mitigate cultural problems.

Controlled Experiments

The merits of any change should be judged by the results of that change under a controlled test. This process is common in PPC, but many SEOs will no doubt wonder how such an approach will affect their SEO. Well, Google encourages it.
Post-Panda, being relevant to visitors, not just machines, is important, and user engagement is more important still. If you don't closely align your site with user expectations and optimize for engagement, it will likely suffer.
Experiments can help us achieve greater relevance.

If It Ain't Broke, Fix It

One reason for resisting experiment-driven decisions is not wanting to mess with success. However, I'm sure we all suspect most pages and processes could be made better. If we implement data-driven experiments, we're more likely to spot the winners and losers in the first place. Which pages lead to the most sales? Why? Which keywords lead to the best outcomes? We identify these pages, and we nurture them. Perhaps you already experiment in some areas of your site, but what would happen if you treated most aspects of your site as controlled experiments?

We also need to cut losers. If pages aren't getting much engagement, we need to identify them, improve them, or cut them. The Panda update was about levels of engagement, and too many poorly performing pages will drag your site down. Run with the winners, cut the losers, and have a methodology in place that enables you to spot them, optimize them, and cut them if they aren't performing.

Testing Methodology For Marketers

Tests are based on the same principles used to conduct scientific experiments. The process involves gathering data, designing experiments, running them, analyzing the results, and making changes.

1. Set A Goal

A goal should be simple, i.e. "increase the signup rate of the newsletter". We could fail in this goal (decreased signups), succeed (increased signups), or stay the same. The goal should also deliver genuine business value. There can often be multiple goals, for example "increase email signups AND Facebook likes OR ensure signups don't decrease by more than 5%". However, if you can get it down to one goal, you'll make life easier, especially when starting out. You can always break multiple goals into separate experiments.

2. Create A Hypothesis

What do you suspect will happen as a result of your test? i.e. "if we strip all other distractions from the email sign-up page, then sign-ups will increase". The hypothesis can be stated as an improvement, as preventing a negative, or as finding something that is wrong. Mostly, we're concerned with improving things - extracting more performance out of the same page, or set of pages. "Will the new video on the email sign-up page result in more email signups?" Only one way to find out. And once you have found out, you can run with it or replace it, safe in the knowledge it's not just someone's opinion. The question moves from "just how cool is this video!" (subjective) to "does this video result in more email sign-ups?" (measurable). A strategy based on experiments eliminates most subjective questions, or shifts them to areas that don't really affect the business case.
When crafting a hypothesis, keep business value clearly in mind. If the hypothesis suggests a change that doesn't add real value, then testing it is likely a waste of time and money, and it creates an opportunity cost for other tests that do matter. When selecting areas to test, start with the areas that matter most to the business and to the majority of users. For example, an e-commerce site would likely focus on product search, product descriptions, and the shopping cart. The About page - not so much. Order the areas to test in terms of importance and go for the low-hanging fruit first. If you can demonstrate significant gains early on, it will boost your confidence and validate your approach. As experimental testing becomes part of your process, you can move on to more granular testing. Ideally, you want to end up with a culture whereby most site changes have some sort of test associated with them, even if it's just to compare performance against the previous version.

Look through your stats to find pages or paths with high abandonment or bounce rates. If these pages are important in terms of business value, prioritize them for testing. It's important to order these pages by business value, because a high abandonment or bounce rate on a page that doesn't deliver value isn't a significant issue. It's probably more a case of "should this page exist at all?"

3. Run An A/B Or Multivariate Test

Two of the most common testing methodologies in direct response marketing are A/B testing and multivariate testing.

A/B testing, otherwise known as split testing, is when you compare one version of a page against another and collect data on how each page performs relative to the other. Version A is typically the current, or favored, version of a page, whilst page B differs slightly and is used as a test against page A. Any aspect of the page can be tested, from headline, to copy, to images, to color, all with the aim of improving a desired outcome. The performance data for each page is compared, the winner is adopted, and the loser rejected.

Multivariate testing is more complicated. It tests more than one element at a time - like performing multiple A/B tests on the same page simultaneously - and can measure the effectiveness of many different combinations of elements. For example, three headlines combined with two button colors gives six page variations to compare.

Which method should you use? In most cases, in my experience, A/B testing is sufficient, but it depends. In the interest of time, value and sanity, it's more important and productive to select the right things to test, i.e. the changes that lead to the most business value. As your test culture develops, you can go more and more granular. That slightly different shade of blue might be important to Google, but it's probably not that important to sites with less traffic. But keep in mind, assumptions should be tested ;) Your mileage may vary.

There are various tools available to help you run these tests. I have no association with any of these, but here are a few to check out:

4. Ensure Statistical Significance

Tests need to show statistical significance. What does statistically significant mean? For those who are comfortable with statistics:
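One common way to make that judgment concrete is a two-proportion z-test, which compares the conversion rates of version A and version B. The sketch below is my own illustration rather than something from the original article or any particular testing tool, and the visitor and conversion counts are made up for the example.

```python
import math

def ab_test_significance(conv_a, visitors_a, conv_b, visitors_b):
    """Two-proportion z-test for an A/B test.

    Returns the z-score and a two-sided p-value. A small p-value
    (commonly < 0.05) suggests the difference in conversion rates
    is unlikely to be due to chance alone.
    """
    p_a = conv_a / visitors_a          # conversion rate of version A
    p_b = conv_b / visitors_b          # conversion rate of version B
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)  # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 5,000 visitors saw each version
z, p = ab_test_significance(conv_a=200, visitors_a=5000,
                            conv_b=245, visitors_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 would suggest B's lift is real
```

In practice an off-the-shelf testing tool will run this calculation for you; the point is simply that "the winner" is a statistical judgment, not an eyeballed one.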
For those of you who, like me, prefer a more straightforward explanation, there's also a good explanation in relation to PPC, and a video explaining statistical significance in reference to A/B tests. In short, you need enough visitors taking an action to decide that the result is not likely to have occurred randomly, but is most likely attributable to a specific cause, i.e. the change you made.

5. Run With The Winners

Run with the winners, cut the losers, rinse and repeat. Keep in mind that you may need to retest at different times, as the audience, or their motivations, can change depending on underlying shifts in your industry. Testing, like great SEO, is best seen as an ongoing process. Make the most of every visitor who arrives on your site, because they're only ever going to get more expensive. Here's an interesting seminar where the results of hundreds of experiments were reduced down to three fundamental lessons:
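On the mechanics side of step 3, here's a minimal sketch of my own (not from the article, and not tied to any particular tool) showing how visitors might be split consistently between two page versions, so each visitor keeps seeing the same variant for the duration of the test.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "signup-page-test") -> str:
    """Deterministically assign a visitor to variant 'A' or 'B'.

    Hashing the visitor ID together with the experiment name means the
    same visitor always lands in the same bucket, without storing state.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # 0-99
    return "A" if bucket < 50 else "B"      # 50/50 split

# Example: decide which version of the sign-up page to render
visitor_cookie = "c81e728d-visitor-42"      # hypothetical visitor identifier
variant = assign_variant(visitor_cookie)
print(f"Show sign-up page variant {variant}")
```

Whatever tool you use, the property that matters is the same: assignment should be effectively random across visitors but stable for each individual visitor, otherwise the conversion numbers for A and B get muddied.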
Tests Fail

Often, tests will fail. Changing content can sometimes make little, if any, difference. Other times, the difference will be significant. But even when a test fails to show a difference, it still gives you information you can use. These might be areas in which designers, and other vested interests, can stretch their wings, knowing it won't necessarily affect business value in terms of conversion. Sometimes the test itself wasn't designed well. It might not have been given enough time to run. It might not have been linked to a business case. Tests tend to get better as we gain more experience, but having a process in place is the important thing. You might also find that your existing page works just fine and doesn't need changing. Again, that's good to know. You can then try replicating these successes in areas where the site isn't performing so well.

Enjoy Failing

"Fail fast, fail early and fail often." Failure and mistakes are inevitable. Knowing this, we put mechanisms in place to spot failures and mistakes early rather than late. Structured failure is a badge of honor!
Silicon Valley has even come up with euphemisms, like "pivot", that weave failure into the fabric of success.
Experimentation, and measuring results, will highlight failure. This can be a hard thing to take, especially when our beloved pet theories turn out to be more myth than reality. In this respect, testing can seem harsh and unkind. But failure should be seen for what it is - one step in a process leading towards success. It's about trying things out in the knowledge that some of them won't work and some of them will, but we can't be expected to know which until we try. In The Lean Startup, Eric Ries talks about the benefits of using lean methodologies to take a product from not-so-good to great, using systematic testing:
Given testing can be incremental, we don't have to fail big. Swapping one graphic position for another could barely be considered a failure, and that's the point of a testing process. It's incremental and iterative, and one failure or success doesn't matter much, so long as it's all heading in the direction of a business goal. It's about turning the dogs into winners, and making the winners even bigger winners.

Feel Vs Experimentation

Web publishing decisions are often based on intuition, historical precedence - "we've always done it this way" - or on copying the competition. Graphic designers know about color psychology, typography and layout. There is plenty of room for conflict. Douglas Bowman, a graphic designer at Google, left the company because he felt it relied too much on data-driven decisions, and not enough on the opinions of designers:
That probably doesn't come as a surprise to any Google watchers. Google is driven by engineers. In Google's defense, they have such a massive user base that minor changes can have significant impact, so their approach is understandable.

Integrate Design

Putting emotion, and habit, aside is not easy. However, experimentation doesn't need to exclude visual designers. Visual design is valuable. It helps visitors identify and remember brands. It can convey professionalism and status. It helps people make positive associations. But being relevant is also design. Adopting an experimentation methodology means designers can work on a number of different designs and get to see how the public really reacts to their work. Design X converted better than design Y, layout Q works best for form design, buttons A, B and C work better than buttons J, K and L, and so on. It's a further opportunity to validate creative ideas.

Cultural Shift

Part of getting experimentation right has to do with an organization's culture. Obviously, it's much easier if everyone is working towards a common goal, i.e. "all work, and all decisions made, should serve a business goal, as opposed to serving personal ego". All aspects of web publishing can be tested, although asking the right questions about what to test is important. Some aspects may not make a measurable difference in terms of conversion - a logo, for example. A visual designer could focus on that page element, whilst the conversion process might rely heavily on the layout of the form. Both the conversion expert and the design expert get to win, yet not stamp on each other's toes. One of the great aspects of data-driven decision making is that common, long-held assumptions get challenged, often with surprising results. How long does it take to film a fight scene? The movie industry says 30 days. Mark Wahlberg challenged that assumption and did it in three:
How many aspects of your site are based on assumption? Could those assumptions be masking opportunities or failures?

Winning Experiments

Some experiments, if poorly designed, don't lead to more business success. If an experiment isn't focused on improving a business case, then it's probably just wasted time - time that could have been better spent devising and running better experiments. In Agile software development methodologies, the question is always asked: "how does this change or feature provide value to the customer?" The underlying motive is "how does this change or feature provide value to the business?" This is a good way to prioritize test cases. Those that potentially provide the most value, such as landing page optimization on PPC campaigns, are likely to have a higher priority than, say, features available to forum users.

Further Reading

I hope this article has given you some food for thought and that you'll consider adding some experiment-based processes to your mix. Here are some of the sources used in this article, and further reading: