Page views, Conversions & ROI from the perspective of THE MARKETER!?
If you've ever wondered what types of paid search tests you could be running with clients offering a bountiful budget, check out this study by Jim Novo from the Yahoo Analytics Group discussion. I've posted the entire thread for you to follow. It's a great study, and it addresses a lot of the concerns we have as search marketers, highlighting the suspicions we all share when trying to tie the measurement of our paid campaigns to the overall success of the campaign as a whole. Thanks, Jim!
> I guess what I'm saying is that the only results from a PPC campaign
> that matter are the *incremental* clicks that paid search provides
> above and beyond organic search. And if those incremental results
> aren't significantly higher, it's a lot harder for me to justify the
> spend... 'ego spending' of course aside.
Exactly, and if you are optimizing *profits* as opposed to sales or
"exposure", the question is: where is breakeven on the PPC? This of course
depends on the margin of the business and the cost of the click, but there
are several other dynamics in play, including the following (a quick
breakeven sketch follows the list):
1. The tendency of a #1 ranking PPC listing to deliver sub-optimal ROI due to
"sport-clicking" by casual / newbie "surfers"
2. The tendency of lower-ranking PPC to deliver higher ROI due to bid gaps
and (often) dramatically lower costs
3. The tendency of "deep searchers" - people who click on lower ranking
organic and PPC links - to be further into the research cycle / more likely
to convert to final objective
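To make the breakeven point concrete, here's a minimal sketch. The $72 order size comes from the test described below; the margin and conversion rate are illustrative assumptions, not figures from the thread.

```python
# Breakeven CPC sketch: the most you can pay per click before the
# campaign loses money on margin. Margin and conversion rate below
# are assumed for illustration; only the $72 AOV appears in the post.
avg_order_value = 72.00   # average order size cited later in the thread
gross_margin = 0.35       # assumed gross margin
conversion_rate = 0.02    # assumed clicks-to-orders rate

breakeven_cpc = avg_order_value * gross_margin * conversion_rate
print(f"Breakeven cost per click: ${breakeven_cpc:.2f}")  # -> $0.50
```

Any bid above that breakeven CPC is buying sales at a loss, before you even get to the cannibalization issue below.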
So, for example, in one 2003 test, a page ranked organically as #2 for a
certain high-volume phrase. This same page content was used to create a
landing page for a PPC campaign using the same search phrase. The test was
conducted in March, neutral seasonality for this business.
With a #1 PPC ranking, the PPC campaign generated 11% in *incremental*
sales with a 60-day latency tail but had a negative 12% ROI on margin minus
overhead (note specific use of the word incremental, which I will explain
further below). When this same listing was dropped down to "deep shoppers"
at PPC rank #4 (in this case, 1st ad at bottom of Yahoo page), it generated
4% incremental sales with a 1786% ROI on margin minus overhead for the same
60-day latency tail. That's almost an 18:1 payout. Without the tail
(first conversion only), it was 623% (almost 7:1).
In addition, the "deep shopper" segment on average had a 70% repeat
purchase rate as opposed to 58% for the #1 PPC position. So even the "tail
of the tail" was better at position 4. This was on the higest volume
search phrase for the site, so it made a huge impact on overall
profitability.
Remember, the landing pages for both the high ranking Organic link and the
#1 ranking PPC link were exactly the same - layout, copy, all of it.
Now, the reason I specifically used the word incremental is that we had a
control, which demonstrates a real dark side of PPC when paired with a Top 3
ranking Organic listing.
When the #1 ranking PPC ran with the #2 Organic link, the sales *volume*
coming from the PPC link ran about 43% versus 57% for the organic link.
This ties pretty closely with some recent studies on click behavior (on
average, 60% click organic, 40% click paid).
But dig what this really means: if incremental sales are 11% versus control
(no PPC), and 43% of sales volume comes through PPC when it runs, then
nearly 77% of PPC sales were **stolen from the organic side** - they would
have happened anyway without the PPC link.
Factor this "media cannibalization" into ROI, and now we're down around
negative 48% ROI for the test #1 ranking PPC with organic at #2. For every
$1 we spend we lose 48 cents - 12 cents in tangible ROI, and 36 cents in
"Opportunity ROI" - ROI we won't get because we wasted the click budget by
not using it to buy "real" incremental clicks.
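For anyone who wants to retrace that arithmetic, here's a minimal sketch of the cannibalization calculation using only the figures Jim gives above, with control sales indexed to 100 for convenience:

```python
# Cannibalization arithmetic from the figures above.
# Index control (organic-only) sales to 100 for convenience.
control_sales = 100.0
lift = 0.11                 # total sales ran +11% vs. control with PPC on
ppc_share = 0.43            # 43% of sales volume tracked to the PPC link

total_sales = control_sales * (1 + lift)        # 111.0
ppc_sales = total_sales * ppc_share             # ~47.7
incremental = control_sales * lift              # 11.0 truly new sales
cannibalized = ppc_sales - incremental          # ~36.7 taken from organic

print(f"PPC sales cannibalized from organic: {cannibalized / ppc_sales:.0%}")
# -> 77%
```

The negative 48% figure then stacks the tangible -12% ROI on top of the "Opportunity ROI" lost by spending the click budget on clicks that weren't actually incremental.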
Hey, just increase the budget, we'll make it up on volume!
This is precisely the situation I was alluding to in the first post on this
thread.
The visitor clicked PPC, landed, went Back, found Organic, converted. With
all this occurring in the same session, it's highly likely that the paid click
is a pure subsidy cost - conversion would have happened anyway. To peg
this click as "source" gives credit where credit is (probably) not due, and
leads to eroding margins through increased PPC subsidy costs.
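If you have click-level session data, a crude way to flag these subsidy clicks is to look for sessions where a paid click was followed by an organic click before the conversion. This is a hypothetical sketch; the session structure and field names are invented for illustration, not any particular analytics package's API.

```python
# Hypothetical sketch: flag conversions where a PPC click was followed
# by an organic click in the same session, i.e. a likely subsidy cost.
# Session/click field names here are assumptions, not a real schema.
def subsidy_conversions(sessions):
    flagged = []
    for s in sessions:
        if not s["converted"]:
            continue
        # Click sources in time order, e.g. ["ppc", "organic"]
        sources = [c["source"] for c in sorted(s["clicks"],
                                               key=lambda c: c["time"])]
        if "ppc" in sources and "organic" in sources[sources.index("ppc") + 1:]:
            flagged.append(s["id"])  # credit probably not due to PPC
    return flagged

# Example: clicked paid, went Back, clicked organic, converted.
sessions = [{"id": 1, "converted": True,
             "clicks": [{"source": "ppc", "time": 1},
                        {"source": "organic", "time": 2}]}]
print(subsidy_conversions(sessions))  # -> [1]
```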
> So back to the original question, can you think of a solid way this
> incrementality could be tested?
"Solid" way? Well, I guess that would depend on the technology that was
available and the sales volume you are talking about. Not sure you can
truly "A / B / C" it without some significant bid management / search
engine API technology. I'm not sure the engines would give up that kind of
control - though I bet THEY have tested it at some level.
The above test was a 3-week manual "alternating days" test on Overture for
a store with $500K - $1 million annual sales and an average order size of
about $72, so there wasn't a lot of room for high-tech tools.
If you run A on Monday, B on Tuesday, control / C on Wednesday, then start
over with A on Thursday and continue this rotation, the following Monday B
will run and on Tuesday C will run etc., so by the end of 3 weeks you will
have A, B, and Control data normalized by day - both campaigns and control
ran on every day of the week. Not a statistically pure methodology, but
not horrible, for sure - and cheap! As Robin said, "I'd rather do it
unscientifically (e.g. pull the ads for a time period that is longer than
the average latency and measure) and swallow the error rather than just
"attribute" the conversion."
If the result spread is significant enough, as it was on this test, I'll
give up points in accuracy to get closer to the "directional truth". Each
of the top 30 search phrases *where there was a top 3 organic ranking for
the phrase* was optimized in this way with the results very directionally
consistent across all phrases. It was almost always more profitable to
have a lower than #1 paid ranking when a top 3 organic ranking was present.
Below the Top 30 phrases, some of the lower volume phrases produced
inconsistent results, probably due to test methodology error / lack of
frequency.
While it may not be "practical" for large scale retailers to test like
this, you would think for certain high volume phrases it would be worth
poking around a bit given the potential for cost savings. Definitely
not worth thinking about if sales volume is the focus, because (probably
unprofitable) sales will be lost without the #1 PPC listing.
Are there any analytics providers with a direct interface to the search
engines capable of generating / controlling / measuring this kind of
testing on a large scale? Any analytics providers that interface with
complex bid management systems like Kevin Lee's didit.com? I don't know
the answer.
Of course, changes in the way paid listings are displayed (often related to
how many bidders there are) can change the outcome of this test. The
results on Google were also directionally consistent, though less dramatic.
I assume this is because of Google's different PPC display approach versus
Yahoo, but perhaps also due to some of the "alchemy" surrounding Google PPC
rankings, which are more difficult to control.
And for sure, there are reasons people buy PPC other than to drive profits,
so being #1 may be worth it, but the true costs should be quantified.
Jim
jim@jimnovo.com
http://www.jimnovo.com