How Removing Silos Between Your Marketing And Technical Teams Can Help You Succeed In The SEM Space
Search engines are becoming smarter. Google, for instance, has recently shifted its model from simply connecting users to high-quality results to also becoming a knowledge base in and of itself. The introduction of big data, predictive analytics, answer boxes, and knowledge graphs into Google Search has produced customizable results that aim to be more relevant to each user. While these improvements create significant value for users, they also pose a real threat to companies and advertisers seeking to rank in Google results or advertise through AdWords. Corporations and advertisers have no choice but to adapt, strengthening and continuously improving their digital assets to preserve their relevancy in the search and pay-per-click space.
Most parties looking to rank in or conquer the SEO space aim to “beat the system” – in other words, to increase their monetary efforts and push out a message in the hope that it resonates with an audience, or with Google. While that may be a fair strategy for some players, the key to success in this space does not come solely from outreach, but rather from aligning your organization’s marketing and technical teams and maximizing your existing digital assets.
For instance, one of the main problems in the SEO space is digital dilution, which occurs when a site releases a high volume of uncategorized, unrelated or non-compliant content. This content can actually impact your organization’s website negatively, especially when the underlying code for the content is non-compliant with current content best practices or trends.
If releasing content keeps your company’s site relevant, how can new content actually hurt it? The problem for most organizations is that they treat their technical and marketing teams as completely separate entities; the goals for each are independent and sometimes conflicting. For instance, let’s say your organization is planning to launch a new website, so it gathers the marketing and development teams. Since the goal is to create a new, high-impact website, do you think their priorities will be the same?
Typically, the answer is no. Even as a developer and a search engine marketer myself, I often struggle to align my technical and marketing priorities in a scenario like this. The problem lies in a lack of communication and joint goals. That lack of communication does not fall on either team’s plate, however, but rather on a much broader underlying management practice toward IT workers that should not be surprising – some managers simply do not know how to manage a technical team. In late 2015, TinyPulse, an employee engagement platform, published surprising results: only 19% of technical workers in the United States are satisfied with their jobs (versus a 22% national average).
The survey concluded that among the top reasons for their dissatisfaction, the one that stood out most was a lack of alignment within the company – meaning these workers are unable to see where their roles fit into the organization’s values and goals. The second most common reason was a poor connection with teammates – about 47% of surveyed IT employees claimed to have strong relationships with their coworkers, while in other industries that number jumped to 56%. Thus, your organization may comprise a very talented team of marketers and technical employees who are not reaching their full potential because of a lack of inclusion, aligned goals, and strong bonds. It is up to your organization to build the bridges for mutual collaboration; without them, each team will continue to work within its silo rather than toward a mutually defined goal.
Without alignment and inclusion, your organization will not be able to attain win-win outcomes that benefit not only these teams but the entire organization. An effective digital marketing campaign relies equally on the messaging and on how the message is served, and the results are measured in increased leads and sales. It is just that simple. In the search engine marketing space, it is not your strategies alone that will allow you to conquer this space, but rather the strength of communication and collaboration between your teams.
I am a huge believer in professional ambidexterity. Every opportunity that I have had to expand my technical or marketing knowledge has only made me a better professional and equipped me with better contributions and insights for our clients. At Synapse SEM, we practice the same philosophy as a key component of our culture, and we continue to succeed where others fail because we understand both the technical and marketing needs of the industry. As Google continues to update its ranking metrics, I can assure you that by building a united digital marketing front, your organization will be able to succeed no matter how complicated or competitive the space becomes.
Attribution Intro With Visual IQ CMO Bill Muller
Cross-channel attribution is a hot topic these days. We’ve been asked by many clients recently what they need to know about attribution and how it could be used to help improve their marketing results. To get answers, we went to industry leader (and current client) Visual IQ and sat down with their CMO, Bill Muller. Bill’s responses to the key questions related to attribution can be found below. This is a must-read for anyone new to attribution or for anyone considering investing in a cross-channel attribution platform.
Q: Can you explain for folks new to attribution, how does cross-channel attribution work? What are the main benefits of using a cross-channel attribution platform?
A: Cross-channel attribution, much like any discipline, can vary widely depending on the degree of sophistication and complexity of the platform that you use. It’s like asking, “How much does a car cost?” Well, it depends on whether it’s a Prius or a Ferrari.
The way we perform cross-channel attribution is a methodology called “algorithmic” or model-based attribution, which differs dramatically from rules-based methodologies that tend to be flawed and subjective. Algorithmic attribution works as a platform that ingests marketing performance data from both digital and non-digital sources. In the case of digital or “tagable” sources, we often use the ad server tracking that’s already being used by a client. We also use our own pixel to stitch together the various touchpoints that are involved in a user’s journey to a conversion.
That data is then fed into an attribution engine, which is a series of algorithms and machine-learning technologies that chew through the data and fractionally attribute credit for a conversion across the various touchpoints experienced by a user. Rather than simply looking at the order in which those touchpoints took place, the engine measures all of the individual components that make up those touchpoints; for example, channel, ad size, creative, keyword, or placement.
By doing this across an entire universe of users who are exposed to your marketing efforts, the software can calculate success metrics across all channels to show exactly how much credit each touchpoint and each channel deserves. Almost always, when that calculation gets performed, you get a very different picture of which channels, campaigns, and granular-level tactics are contributing to your overall success.
The main benefits are better decision-making and better allocation of budget. Ultimately, what people do with the output of the attribution is reallocate budget to the channels, campaigns, and tactics that they previously undervalued. They fund those winners by taking budget away from the channels that they’ve historically overvalued – the losers.
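To make the fractional-credit idea concrete, here is a deliberately simplified sketch in Python. Real algorithmic attribution derives credit with machine learning from the full universe of user journeys; the fixed U-shaped (position-based) rule and the journey below are purely illustrative stand-ins:

```python
def u_shaped_credit(touchpoints):
    """Split one conversion's credit across an ordered list of
    touchpoints using a simple U-shaped (position-based) rule:
    40% to the first touch, 40% to the last, and 20% spread
    evenly across the middle touches. A stand-in for the
    machine-learned fractions an algorithmic engine computes."""
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: 1.0}
    if n == 2:
        return {touchpoints[0]: 0.5, touchpoints[1]: 0.5}
    credit = {tp: 0.0 for tp in touchpoints}
    credit[touchpoints[0]] += 0.4
    credit[touchpoints[-1]] += 0.4
    middle_share = 0.2 / (n - 2)
    for tp in touchpoints[1:-1]:
        credit[tp] += middle_share
    return credit

# Hypothetical user journey ending in one conversion
journey = ["display_impression", "organic_search",
           "email_click", "paid_search_click"]
print(u_shaped_credit(journey))
```

Under this rule the display impression and the final paid search click each receive 0.4 conversions, while the two middle touches split the remaining 0.2 – a very different picture than last-click, which would hand the paid search click the full 1.0.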
Q: Does the platform tend to work better for certain industries?
A: To determine fit, we tend to look at business models more than industries. Until recently, attribution had been a direct-response endeavor, meaning that companies using digital channels – alone or combined with offline – to produce hard-and-fast conversions, such as an e-commerce transaction, a lead, or a quote, benefit most from the software. Many industries align with this type of business model.
The business models that historically have been left out in the cold belong to companies that do not have those types of transactions in place. For these companies, marketing has primarily been about generating brand engagement, because they do not have a direct line to their conversion event.
Think about, for example, pharmaceutical companies. You are not buying a drug on their website or buying drugs as a result of seeing their TV advertisement, but there are marketing activities that are causing you to experience some brand engagement. Ultimately, you may be prescribed the drug and purchase it, but there is no linkage between their marketing and your purchase. There are no conclusions to draw.
This business model, as a result, has been difficult for attribution to conquer in the past because there hasn’t been a tie between media stimulation and the eventual consumption of an end product. Until recently.
Q: What kinds of recommendations will an attribution platform make? Are they typically budget related or otherwise? Are they typically real-time, on-going, or one-time recommendations?
A: The recommendations are typically budget-related, as we are talking about spending money on individual tactics: moving budget off less successful ones and onto more successful ones. They are typically not real-time but daily, because we can only make recommendations at the pace at which our attribution engine is fed performance data.
The recommendations do, however, absolutely need to be ongoing. Much like a search campaign, it’s not ‘set it and forget it.’ The environment in which you operate is not a static one; it constantly shifts based on the marketplace, on what competitors are doing, on econometric factors, on global events, and so on. Your media mix needs to be adjusted continually to match the dynamic nature of the marketplace, so these are ongoing rather than one-time recommendations.
Q: How drastic will the recommended changes be?
A: The recommendations can be as granular as the characteristics of the data that is provided. When a lot of people think of attribution, they think only about the chronology of the touchpoints that have taken place in relation to the number of conversions. They think, ‘This happened first, this happened second, this happened third, and I really can’t control those things.’
What they often don’t realize is that these touchpoints are made up of various characteristics. If it was a display ad, there is size, placement, offer, and publisher to consider. If it’s the search channel, one can consider if it was paid or organic, keywords, impressions, or clicks. So the recommendations that come out of our application are often things like, “Stop spending $500 a month on this ad, of this size, with this creative, on this publisher, on these days, per week. Now take that money and put into this keyword, on this search engine, with this creative, and this offer, on these days of the week.” We include every characteristic of every touchpoint in the model to find out which has the most impact on a client’s overall success.
The recommendations can also be as dramatic as, “Stop spending on certain placements altogether,” or the opposite. We had a client recently that was going to eliminate spending on one display publisher altogether. When they looked at their attribution results, they recognized that instead of it being their worst publisher, it was the publisher that most contributed to their success. They then tripled the amount of spend on the publisher that they were originally going to eliminate from their marketing mix.
Q: Are there channels (Paid Search, SEO, Offline, etc.) that repeatedly prove to drive more or less value than previously believed?
A: Yes – many clients are highly invested in paid search, but we’ve found that paid search is one of the channels that tends to be universally overvalued in a last-click methodology.
In other words, most of the world is using a last-click methodology to assign conversion credit. If an individual has touched your marketing four different times prior to a conversion, odds are you don’t have a methodology in place that can link those four touchpoints together. You don’t know that the user touched four times – all you know is that a person converted as a result of a search and a click on a paid search term.
Attribution allows you to tie together the otherwise unknown factors. If somebody was exposed to impressions of a display ad five times prior to their click on a paid search ad, and it ultimately led to a conversion, we can see that.
Q: How does the attribution model handle view-through conversions?
A: Our methodology not only ingests touchpoints that resulted in clicks, but it ingests touchpoints where there was only an impression. For example, you do not have to click to be cookied. When a touchpoint is analyzed, we look at all the constituent parts of it—its size, its publisher, its placement.
Using that data, our solution then calculates how much value a “mere” impression had in the grand scheme of things: What was the difference in performance between those people that were not exposed to the ad and eventually converted, compared to those that were exposed to the ad?
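The exposed-versus-unexposed comparison described above amounts to a lift calculation. The sketch below is an illustrative simplification – the counts are hypothetical, and the vendor’s actual methodology for valuing view-through impressions is more sophisticated than a single ratio:

```python
def impression_lift(exposed_conversions, exposed_users,
                    control_conversions, control_users):
    """Relative lift in conversion rate attributable to ad
    exposure: compares users who saw the impression against a
    comparable unexposed (control) group. Returned as a
    fraction, e.g. 0.5 means a 50% higher conversion rate."""
    exposed_rate = exposed_conversions / exposed_users
    control_rate = control_conversions / control_users
    return (exposed_rate - control_rate) / control_rate

# Hypothetical: 300 of 10,000 exposed users converted (3.0%)
# vs. 200 of 10,000 unexposed users (2.0%)
print(f"{impression_lift(300, 10_000, 200, 10_000):.0%}")  # 50%
```

A positive lift suggests the “mere” impression carried real value and deserves fractional credit; a lift near zero suggests the impression had little incremental effect.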
Q: Where do you see attribution technology evolving over the next five years? What will we be able to measure and/or optimize better by 2020?
A: As I mentioned previously, until now attribution has very much been a direct-response technology. Recently, however, Visual IQ released a methodology that allows us to extend our solutions well beyond direct-response business models. Instead of ingesting direct-response conversions, it uses brand engagement touches – first visits to a website, video views, or media asset downloads, for example – to compute a common brand engagement score. The attribution product then optimizes, or makes recommendations on how to maximize, that brand engagement score.
Not only does this allow us to focus on companies that are pure brand engagement, but it also allows us to help the side of the house that has not been able to benefit from attribution in the past. And frankly, at some companies brand spending far outweighs direct response spending.
Q: What makes Visual IQ different from the other cross-channel attribution vendors in the space?
A: Part of it is our legacy, in that we were one of the first attribution vendors in the space, and that we were the first attribution vendor to offer algorithmic attribution.
From the very beginning, we tackled granularity. We let the machine-learning and the mathematical science do the calculations so that the data we receive tells the story. Because we’ve done this since the beginning, we’ve been able to improve the level of sophistication of our product.
Visual IQ’s products are smarter products. We’ve continued to innovate things like attribution branding and offline media attribution. We have a television attribution product. We are consistently offering features, benefits, and values to our clients before our competitors.
We’ve also been working with enterprise-sized clients since the inception of our organization. The largest, most successful brands in the marketplace – and some of the most demanding marketers in the world – are using our products. We’ve developed our products over the past decade based on their needs and demands.
If we can bring in 17 different channels from one of the world’s largest credit card companies, across multiple countries and business units, and provide actionable business recommendations that generate millions of dollars’ worth of media efficiency, then we certainly have the ability to handle 99 percent of the potential businesses out there. Without our legacy of innovation, our longevity, and our continual product improvement, we wouldn’t have that capability today.
Q: For those who are interested in learning more about your platform, what’s the best way for them to get in touch with you?
A: If you have any questions surrounding cross-channel attribution, or want to learn whether Visual IQ attribution software is right for your business, please email me at Bill.muller@visualiq.com.
For folks who are trying to better understand us in the attribution space, we have been at the top of the last three Wave Reports done on our marketplace. By talking to Visual IQ, you can rest assured that you are truly talking to the industry leader.
4 Ways the Removal of Right-Hand Rail Ads Impacts PPC
In late February of this year, Google confirmed that it will no longer serve PPC ads in the right-hand rail of the search results. While this came as a shock to many, it is something Google has been testing since 2010 and only recently decided to roll out permanently. The online giant has a long-standing history of discreetly testing new updates to search engine results, and this one was no different: an anonymous Google employee leaked the permanent change to the media on February 19.
So what exactly does this change mean for paid search advertisers? What shift in results can digital marketers and advertisers expect to see over the next several months as this change in the search engine landscape rolls out? Below are 4 potential shifts to look out for with this recent update in the Google search results.
1) CPCs Might Increase
Over the next several months, as more marketers and clients alike begin to notice the change in Google search results, the competition for the top 3-4 PPC search results is going to gain momentum. It is common knowledge in search that users tend not to spend much time scrolling to look at results below the fold, so marketers are going to increase bids to battle it out for the top paid search slots. There are a couple of different scenarios to consider here. CPCs have the potential to increase as marketers compete to own those top spots. Alternatively, it is possible that Google may change the minimum Ad Rank requirements so that ads show more often and rotate in more evenly. Some of our clients have seen around a 5% increase in CPCs since the update rolled out over the last couple of months. We will be interested to observe how CPCs shift over the next few months, after advertisers have had more time to settle in with this particular update.
2) Impression Share Could Be Harder to Maintain and QS May Carry More Weight
As more advertisers notice the change in SERP results, they will begin competing for the top 4 paid search spots which may make it more difficult for advertisers to maintain stronger impression share on their core terms. How will Google determine which ads to rotate in to those top 4 spots? How will that impact impression share? Will it be tougher to maintain strong impression share for your top terms or will Google loosen up the criteria for Ad Rank and rotate competitors in more evenly? One certainty here is that it will be critical to re-evaluate Quality Score on your most important terms to set yourself up for success with all the unknowns of Google’s next steps.
3) More Advertisers Will Likely Be Shifting into PPC
With this new change rolling out, the amount of paid ad space available on the SERP has decreased from up to 11 slots down to 7. There is, however, one additional spot available at the top of the page, for a total of 4 paid search slots as opposed to 3 in the past. What does this mean for SEO results? They will be pushed further down the page, bringing a higher number of SEO results below the fold. Because of this shift in SEO positioning (and the accompanying drop in traffic), more advertisers will likely look into setting up their own paid search campaigns to compete for the top-of-page spots. This may add another layer of competition to the paid search space, which could impact both CPCs and impression share.
4) Ecommerce Advertisers Will Likely Invest More Heavily in PLAs, and Non-Ecommerce Advertisers Will Be Awaiting Their Solution
While right-hand rail paid search ads are disappearing completely, Google has confirmed that this change will not impact the Knowledge Panel or the Product Listing Ads on the side rail of the SERP. The strong positioning of PLAs is optimal for ecommerce companies and retailers, who are likely already investing heavily in PLA advertising. This is great news for ecommerce businesses, but there is no alternative solution for B2B or B2C companies that do not have specific products for sale on their site.
There is currently a lot of speculation circling around the paid search world about how this major shift in search engine results is going to impact marketers and advertisers. Ultimately, the impact will depend upon how advertisers react to this change in landscape. Will they get more aggressive with bids right away, driving up CPCs? Will they take a step back to revise their keyword set and max out impression share on their most efficient terms? Whichever direction the reaction trends, marketers should take a step back to re-evaluate strategy and results to make sure no major dips in performance have occurred.
Some different types of analysis that may be helpful include segmenting traffic and leads by ‘top of page’ results versus ‘other’ both before and after the update to see if there is cause for worry. Advertisers will also want to look into improving Quality Score since it may end up carrying even more weight. To improve QS, advertisers can try segmenting keywords out into more granular ad groups and looking into ad copy and landing page content that is more relevant to the keywords within those given ad groups. To improve expected CTR, try testing queries on high volume terms to see how competitors are positioning themselves and adjust your copy to be more in line with the competition. Is there room to broaden your customer base? Are there unnecessary qualifiers currently in place within your ad copy? Improving overall QS should help minimize the impact of potential CPC increases, and hopefully lead to better overall positioning with negligible impact on CPCs.
Synapse SEM Forms Strategic Partnership with CohnReznick
GLASTONBURY, CT – March 18, 2016 – Synapse SEM™, a full-service search engine marketing firm with offices in Connecticut and Massachusetts, today announced it has formed an alliance with CohnReznick LLP, one of the leading accounting, tax, and advisory firms in the United States.
Synapse SEM will provide a full scope of search engine marketing services to the renowned financial advisory firm and collaborate with CohnReznick to expand its Technology and Digital Advisory Practice.
“This is undoubtedly a valuable partnership for us,” comments Synapse SEM Co-Founder Mark Casali. “We are linking with distinguished consulting leaders at CohnReznick to help digitally transform the way their clients do business.”
“Aligning with Synapse SEM enables CohnReznick to extend its technology offerings, particularly enhancing our ability to help clients connect with customers to build profitable and loyal relationships,” said Dean Nelson, Principal and National Director of the Technology and Digital Advisory Practice at CohnReznick. “Synapse SEM meets the demands of our clients by providing deeper insights, maximizing return on investment, and driving profitable growth by minimizing promotional waste.”
Through strategic paid search advertising, search engine optimization, social media marketing, and mobile advertising campaigns, Synapse SEM will help CohnReznick unlock new digital marketing opportunities and optimize competitive strategies across their client base.
About Synapse SEM, LLC
Synapse SEM is a full service online marketing firm that leverages robust data analysis and statistics to provide its clients with deeper insights and uncover otherwise overlooked opportunities. With core competencies in paid search advertising, search engine optimization, social media marketing, mobile advertising and conversion optimization, the company develops, implements, and executes online marketing strategies focused on maximizing its clients’ ROI. Leveraging proprietary data analysis techniques and experienced subject matter experts, the agency is committed to achieving unparalleled results and providing the highest quality of service to its clients. For more information on Synapse SEM, LLC, call 781-591-0752 or visit www.synapsesem.com.
About CohnReznick LLP
CohnReznick LLP is one of the top accounting, tax, and advisory firms in the United States. CohnReznick combines the resources and technical expertise of a national firm with the hands-on, entrepreneurial approach that today’s dynamic business environment demands. CohnReznick serves a number of industries and offers specialized services for Fortune 1000 companies, owner-managed firms, international enterprises, government agencies, not-for-profit organizations, and other key market sectors. Headquartered in New York, NY, CohnReznick serves its clients with more than 300 partners, and 2,700 employees in 30 cities. The firm is a member of Nexia International, a global network of independent accountancy, tax, and business advisors.
2016’s Top Search Engine Marketing Trends
2016 will be a revolutionary year for the digital marketing industry. After a historic 2015, a year in which we saw mobile searches overtake desktop searches, industry analysts are projecting that digital media spend will overtake traditional channels like TV for the first time. Apart from these macro changes, there are more technical developments that will also affect digital marketers in the new year. We share details on 5 critical trends that should be on your radar for 2016:
- Google Penguin Update – Google Penguin is not a new name to search engine optimization professionals. For those less familiar, Google Penguin is a layer of Google’s organic algorithm that specifically evaluates link quality. Penguin is designed to discount or even penalize disreputable and manually engineered external followed links. Penguin was originally released in April 2012 and has been refreshed around a half dozen times since. The forthcoming release of Penguin, slated to launch in early 2016, marks a major change for this algorithmic layer: instead of receiving periodic updates, the new Penguin will run in real time. This is both good and bad for advertisers. For websites suffering from historically spam-rich linking profiles, the benefits of any link-cleansing work and disavowals will be felt more quickly. Conversely, for websites that aggressively push the envelope with their link-building strategies, penalties and ranking drops will also be assessed and felt faster. The updates coming to Penguin underscore what has already been a link acquisition best practice for several years: instead of building links, marketers should focus on cultivating links – organically generating linkbacks by promoting unique, engaging content.
- Real-Time Personalization – Real-time personalization is a growing technology that allows content management systems and advertising platforms to dynamically serve customized content to different cohorts of users. The technology, which is offered through marketing automation solutions like Marketo and CMS platforms like Sitefinity, works by integrating with an organization’s CRM system. A website visitor is cookied, and the marketer can then define different personas or user groups. One persona, say a C-level executive, can be served a different website experience (different messaging, calls-to-action, etc.) than, say, a specialist-level user. The same type of personalization can be embedded into pay-per-click ad copy and landing pages. This is invaluable technology that can lead to significant improvements in conversion rate and online revenue. If you’re a retailer, you can customize Branded paid search ads to focus on the previous purchases of repeat customers. If you’re a B2B organization, you can tailor your website experience to the role of your visitor: a researcher may be prompted to download white papers and industry reports, while a decision-maker like an executive could be served deep-funnel calls to action like a demo request.
- RLSA – In mid-2015 Google AdWords expanded their RLSA or “Remarketing Lists for Search Ads” technology so that campaigns can leverage Google Analytics remarketing lists. RLSA allows marketers to integrate retargeting lists with their Search Network pay-per-click ads. Marketers can specifically target (or exclude) past website visitors, based on the pages they visited, or their on-site behavior. Past website visitors are typically more qualified users, so marketers can take a broader approach with their keyword set, and a more aggressive approach with their bidding strategy. As an example, if you’re an online retailer that sells luxury watches, a keyword like “gifts for my husband” would likely yield highly irrelevant/unconvertible traffic. However, with an RLSA campaign, we can aggressively bid on a keyword like “gifts for my husband” because we know the user has already expressed interest in our website/product. Similarly, RLSA can be used to improve traffic quality on traditional Search Network campaigns. For example, B2B SaaS websites often field significant traffic from existing customers who log in to the product through the website. As a marketer you might be running a Branded search campaign aimed at demand generation. Unfortunately, major amounts of your ad spend are likely being wasted on these existing customers who are simply trying to login to their account. With RLSA, we can create a remarketing list of all users who have reached a website’s login page. We can then exclude that list from seeing our Branded ads in our AdWords campaign.
- Mobile Conversion Optimization – Mobile users overtook desktop users for the first time in 2015. With mobile traffic becoming a bigger percentage of total traffic each year, it’s critical that marketers implement a mobile-specific conversion strategy on their websites. In addition to ensuring that your website is fully responsive, marketers can use device detection scripts to serve customized content. For example, a marketer could set up a page to display consolidated messaging, shorter forms (fewer fields), or different navigation links when a user browses from a device with a smaller screen size, like a smartphone or tablet. These types of adjustments can be implemented at the page level, and they can have a profound impact on your mobile conversion rates.
- Dark Traffic – As mobile traffic continues to grow, so too does untrackable (not set) or “dark” traffic in our analytics platforms. Dark traffic typically originates from mobile social media and messaging apps. Many of these mobile apps trigger new windows when referral links are clicked. From Google Analytics’ perspective, the user is navigating directly to your website; in reality, the user is arriving via a referral source. For some B2C retail websites, dark traffic is becoming incredibly problematic; in some extreme cases, it comprises over 50% of total website traffic. Many firms try to get around this issue by running landing page reports and making educated guesses about the original traffic source of the user. That approach is imprecise at best. Our firm has developed a sleeker solution. The apps engendering dark traffic kill the original traffic source of the user by opening new browser windows, and the majority of these apps also prevent marketers from manually tagging website links with UTM parameters. There is, however, a workaround that can be employed with an interstitial redirect. For example, let’s say a company includes a link to their website in their Instagram profile. The marketer can link to a unique URL that has a delayed interstitial redirect pointing to a URL (e.g., the homepage) tagged with UTM parameters that communicate the user’s original traffic source. In this example the redirect could point to www.acme.com/?utm_medium=referral&utm_source=Instagram&utm_campaign=InstagramProfileLinkClick. This tells Google Analytics that the user came from the Referral medium, from the Instagram app/site, and from an Instagram profile link click.
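The UTM-tagging half of this workaround can be sketched in a few lines. The helper below is illustrative (the domain and campaign names come from the Instagram example above); the interstitial page itself would simply redirect the visitor to the URL this function produces:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def tag_with_utm(base_url, medium, source, campaign):
    """Append Google Analytics UTM parameters to a destination
    URL, preserving any existing query string. Used as the target
    of the interstitial redirect so the session is attributed to
    its true origin instead of '(direct)'."""
    parts = urlsplit(base_url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_medium": medium,
        "utm_source": source,
        "utm_campaign": campaign,
    })
    return urlunsplit(parts._replace(query=urlencode(query)))

# The Instagram profile-link example from the text:
print(tag_with_utm("https://www.acme.com/", "referral",
                   "Instagram", "InstagramProfileLinkClick"))
```

Running this prints the same tagged homepage URL shown above, which Google Analytics then records under the Referral medium and Instagram source.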
How Google Determines Actual CPCs Will Surprise You
OK, so you’ve been investing in PPC advertising for years. You know how your KPIs are performing, and how much you’re spending each month. Your in-house team or agency reports back to you on overall performance and dazzles you with insightful and actionable analyses each week. You feel very comfortable with their PPC knowledge and then one day you ask one of the most basic PPC questions: how are CPCs in AdWords calculated? Their response is questionable at best, and you start to think their so-called “expertise” is a sham. They should be able to easily answer this question, right?
Well, before you judge your team too harshly, let us walk you through how CPCs are actually determined and why it’s not a question so easily answered.
How Are CPCs Actually Calculated?
Before we get into the specific calculations, we need to first talk about the AdWords auction and what influences your CPC, position and impression share, since these three metrics are all related. All three metrics are determined by your Ad Rank, a metric that includes your Quality Score, maximum CPC and expected impact of ad formats. The advertiser that shows in position 1 (“Advertiser 1”) is the advertiser whose combination of Quality Score, expected impact of ad formats and maximum CPC is highest. Google first determines the position for each advertiser, and then calculates the actual CPC for each advertiser based on that position.
To really see how this plays out, let’s look at an example:
In the chart above, Advertiser 1 will show in position 1 because they have the highest Ad Rank. Once position is determined, the AdWords system then determines the actual CPC that advertiser will pay. Keep in mind that the idea that each advertiser only pays $0.01 more than the next advertiser no longer applies (unless all advertisers have the exact same Quality Score and Format Impact). In fact, it is entirely possible for an advertiser in position 2 to pay a higher CPC than the advertiser in position 1. Advertiser 1’s actual CPC is the lowest amount they can pay while still achieving an Ad Rank higher than Advertiser 2. The CPCs for the other advertisers are calculated using the same logic. Since Advertiser 4 has by far the lowest Quality Score and max CPC, they are likely to be ineligible to show or have extremely limited impression share.
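Since the chart itself isn’t reproduced here, the sketch below uses hypothetical Quality Scores and max CPCs. It implements the commonly cited simplified model (Ad Rank = Quality Score × max CPC; actual CPC = the Ad Rank of the advertiser ranked below you, divided by your Quality Score, plus $0.01), which deliberately omits the expected impact of ad formats:

```python
# Simplified AdWords auction model (illustrative only): Ad Rank is
# treated as Quality Score x max CPC, and each advertiser pays just
# enough to beat the Ad Rank of the advertiser ranked below them.
def run_auction(advertisers):
    """advertisers: list of (name, quality_score, max_cpc) tuples."""
    ranked = sorted(
        advertisers,
        key=lambda a: a[1] * a[2],  # Ad Rank = QS x max CPC
        reverse=True,
    )
    results = []
    for pos, (name, qs, max_cpc) in enumerate(ranked, start=1):
        if pos < len(ranked):
            next_rank = ranked[pos][1] * ranked[pos][2]  # Ad Rank just below
            actual_cpc = round(next_rank / qs + 0.01, 2)
        else:
            actual_cpc = 0.26  # assumed reserve/minimum bid for the last slot
        results.append((pos, name, actual_cpc))
    return results

auction = run_auction([
    ("Advertiser 1", 10, 2.00),  # Ad Rank 20
    ("Advertiser 2", 6, 3.00),   # Ad Rank 18
    ("Advertiser 3", 4, 4.00),   # Ad Rank 16
    ("Advertiser 4", 2, 1.00),   # Ad Rank 2
])
for pos, name, cpc in auction:
    print(pos, name, cpc)
```

Note how Advertiser 2 ends up paying $2.68 per click while Advertiser 1, in the top position, pays only $1.81, exactly the counterintuitive scenario described above: a higher Quality Score lets you rank higher and pay less.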
How Can I Use CPC Calculations to My Benefit?
Now that we know how CPCs are determined, how can this help you improve your performance? First of all, keep in mind that ad extensions do play a role in determining your Quality Score, and Google is introducing new ad extensions all the time (they just recently announced structured snippets, for example). You should be using as many ad extensions as reasonably possible, and optimizing your ad extensions at least as often as you’re updating your main ad copy. This will help improve Quality Score, which can help reduce your CPCs and/or improve your position.
Also, this may be obvious, but you should be making regular ad copy testing a top priority. With expected click-through rate and ad relevance accounting for a majority of your Quality Score, it’s critical that you’re using relevant headlines and descriptions that are truly differentiated from your competition and highly enticing to your audience.
Lastly, keep in mind that it’s extremely difficult to run PPC ads profitably with low Quality Scores. Constantly inflating your max CPCs to drive impression share and high positions is not a sustainable strategy. Other advertisers are typically setting their bids to meet profitability, and if they’re showing more often and more prominently, it likely means they have higher Quality Scores. If you end up paying significantly more per click, you should have a strong business case for doing so (e.g. significantly higher conversion rates, better lead close rates, higher customer lifetime value, etc.). You should also continually focus on ad improvements and ensure a relevant landing page experience. The steady, consistent path of testing and analysis (in place of, or in addition to, aggressive bid increases) will help you maintain efficiency as you expand and as competition increases.
If you’re interested in learning more about how CPCs are calculated, see a great video by Google’s chief economist, Hal Varian, or check out these two articles that cover CPCs for the Search network and CPCs for the Display network.
If you’d like to learn more about Synapse SEM, please complete our contact form or call us at 781-591-0752.
Advanced PPC Series: Your Test Results Can’t Be Trusted
Your Ad Copy Test Results Can’t Be Trusted: A Need-to-Read Article for Search Engine Marketers
If you are like us, you’re constantly running A/B ad copy tests in your AdWords campaigns. It’s possible that over the last several years you’ve been making the wrong decisions based on very misleading data.
Many of us use metrics such as conversion rate, average order value (AOV) and revenue per impression to choose a winner in an A/B ad copy test. Based on the statistically significant ad copy test results below, which ad would you choose to run?
| Ad Iteration | AOV | Conversion Rate | ROI | Revenue/Impression |
| --- | --- | --- | --- | --- |
| Ad A (control) | $225 | 3.15% | $7.79 | $0.42 |
| Ad B (test) | $200 | 2.65% | $6.79 | $0.37 |
The answer couldn’t be clearer. You should run ad copy A, right? After all, it does have a higher AOV, a higher conversion rate, a higher ROI and it produces more revenue per impression than Ad B. What on earth could possibly convince you otherwise? The metrics above tell a very clear story. But are these the right metrics to look at?
Measuring A/B Tests: What Metrics Should You Consider?
Conventional wisdom tells us that if we’re running a true A/B test, then impressions will be split 50/50 between the two ad iterations. If this assumption holds true, then the metric we really should be focused on is revenue per impression. This metric tells us how much revenue we’ll generate for every impression served, which accounts for differences in CTR, AOV and conversion rate. If your business is focused on maximizing growth, then this may be the only metric to consider. If you also are focused on efficiency, then you will consider ROI and choose the ad that you believe provides the optimal combination of revenue and efficiency. While this approach is common, it is also fatally flawed. Here’s why…
Why Google Can’t Guarantee True A/B Ad Copy Tests
Earlier, we made the assumption that impressions are split 50/50 in an A/B test. However, when running our own A/B tests we noticed that certain ads were receiving well over 50% of the impressions, and in some cases, upwards of 70-90% of the impressions. We experienced these results when selecting the ‘rotate indefinitely’ ad setting, as well as in AdWords Campaign Experiments (ACE) tests. So why were we seeing an uneven impression split? Did we do something wrong? Well, yes: we made the mistake of assuming that impressions would be split 50/50.
How Google Serves Ads – And Why Quality Score Is Not a Keyword-Exclusive Metric
When you set up an A/B ad copy test in AdWords, Google will split eligible impressions 50/50, but served impressions are not guaranteed to be split 50/50, or even close to 50/50. Eligible impressions will differ from served impressions when one ad produces a higher CTR than the other. Since CTR is the primary determinant of Quality Score (and thus, Ad Rank), the AdWords system may actually serve a higher CTR ad more often than a lower CTR ad. This happens because your keywords’ Quality Scores will change for each impression depending on which ad is eligible to show for that impression. In other words, each time the lower CTR ad is eligible to show, the keyword that triggered the ad will have a lower Quality Score for that impression, and thus, a lower Ad Rank (because the expected CTR is lower with that ad), so the lower CTR ad will win the auction less often than the higher CTR ad. Naturally, this results in more impressions for the higher CTR ad, even though the two ads each receive roughly 50% of eligible impressions. If you use revenue per impression, one of the metrics we suggested earlier, then you will have failed to account for the discrepancy in impressions caused by varying CTRs. So, does this mean that your A/B ad copy test results are now meaningless? Not so fast.
Evaluating Test Results Is Easier Than You Think – Just Look at Revenue (or Revenue per Eligible Impression)
Let’s assume that your goal is to maximize revenue. The simplest metric to look at in an A/B ad copy test is revenue, but you can also look at revenue per eligible impression. Both metrics allow you to account for the variations in impressions due to different CTRs. To calculate revenue per eligible impression for each ad, divide the revenue from that ad by the impressions from whichever ad produced the higher number of impressions. Here’s an example: let’s assume Ad A generated a CTR of 6% and received 50,000 impressions and Ad B generated a 4.5% CTR and received 30,000 impressions. Between the two ads, Ad A received more impressions, so we can conclude that there were 100,000 total eligible impressions (twice the number of impressions generated by Ad A). Ad B was not served for 20,000 of the eligible 50,000 impressions due to a lower CTR (which impacted the keywords’ Quality Scores and Ad Rank for those impressions). If the revenue per impression metric is confusing, just focus on revenue: it will give you the same outcome. Let’s revisit the test results we showed earlier, which now include additional data.
| Ad Iteration | Impressions | CTR | Revenue | Transactions | AOV | Conv. Rate | ROI | Revenue/Impression | Revenue/Eligible Impression |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ad A | 114,048 | 5.95% | $48,095 | 214 | $225 | 3.15% | $7.79 | $0.42 | $0.36 |
| Ad B | 135,000 | 7.00% | $50,085 | 250 | $200 | 2.65% | $6.79 | $0.37 | $0.37 |
While Ad A outperformed Ad B based on its revenue per impression, it actually generated less revenue and less revenue per eligible impression than Ad B. Ad A did generate a higher ROI, however, so the tradeoff between efficiency and revenue should also be taken into account.
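The revenue-per-eligible-impression calculation can be sketched in a few lines of Python, using the figures from the table:

```python
# Revenue per eligible impression, per the approach described above: each
# ad's revenue is divided by the impression count of whichever ad served
# most often (a proxy for each ad's share of eligible impressions).
def revenue_per_eligible_impression(ads):
    """ads: dict of ad name -> (impressions, revenue)."""
    eligible = max(imps for imps, _ in ads.values())
    return {name: round(rev / eligible, 2) for name, (imps, rev) in ads.items()}

results = revenue_per_eligible_impression({
    "Ad A": (114_048, 48_095),
    "Ad B": (135_000, 50_085),
})
print(results)
```

Normalizing both ads to the same eligible-impression base flips the verdict: Ad B edges out Ad A at $0.37 versus $0.36.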
Interestingly, Ad A’s 19% higher conversion rate and 13% higher AOV still couldn’t make up for Ad B’s 18% higher CTR. This is because Ad A also received 16% fewer impressions than Ad B. Remember, a lower CTR will lead to fewer clicks AND fewer impressions – the double whammy.
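A quick back-of-the-envelope check of the double whammy, using the same figures from the table:

```python
# The "double whammy": a lower CTR costs you clicks twice over,
# once directly and once through reduced impression share.
ad_a_imps, ad_a_ctr = 114_048, 0.0595
ad_b_imps, ad_b_ctr = 135_000, 0.0700

ad_a_clicks = ad_a_imps * ad_a_ctr  # ~6,786 clicks
ad_b_clicks = ad_b_imps * ad_b_ctr  # 9,450 clicks
click_lift = ad_b_clicks / ad_a_clicks - 1
print(f"Ad B received {click_lift:.0%} more clicks than Ad A")
```

Ad B’s 18% CTR advantage compounds with its 18% impression advantage into roughly 39% more clicks, which is why the conversion rate and AOV edges couldn’t save Ad A.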
The Conclusion – Focus Less on Revenue/Impression and More on CTR
Historically we have treated CTR as a secondary metric when evaluating ad copy performance. It’s easy to manipulate CTR with Keyword Insertion or misleading offers, but it’s quite difficult to generate more revenue and/or improve efficiency with new ad messaging. However, with a renewed understanding of how CTR can impact impression share, we are now focused on CTR when testing new ads. As we saw in the example above, if your new ad produces a significantly lower CTR than the existing ad, it will take massive increases in AOV and/or conversion rate to make up for the lost revenue due to fewer impressions and clicks. Therefore, when writing new ads we recommend that you focus on improving CTR (assuming the ads still attract the right audience). This will produce three distinct benefits:
- Greater click volume due to increased CTR
- Higher Quality Score due to increased CTR, which produces lower CPCs and/or higher ad position
- Increased click volume due to higher impression share
We are all familiar with the first two benefits, but the third benefit represents the most value and is the one most often overlooked.
Next time you run an A/B ad copy test be sure to consider the impact CTR and impression share have on your test results. Avoid focusing on revenue/impression, AOV and conversion rate to determine a winner and instead focus on revenue/eligible impression or total revenue. This will ensure that differences in impression share are accounted for, and, ultimately, that the higher revenue producing ad is correctly identified. If efficiency is a key consideration, keep ROI in mind as well.
Oscar Predictions 2015 Recap
Last night’s Oscars proved to be quite the spectacle, with Neil Patrick Harris walking around in his underwear and everyone finding out what John Legend’s and Common’s real names are. The results were interesting as well, with only a handful of upsets. Having taken it all in, I noticed a few things about the selections made by some of the sites I polled, and about the effectiveness of various winner-choosing methods.
First, how did I perform? Well, I ended up winning 20 of the 24 categories (83%). I fared pretty well against the sites I polled: only one of the nine sites beat me, I tied with one other, and the rest all won fewer categories than I did, with Hollywood Reporter performing the worst at only 13 wins.
When I took a closer look at which sites performed well and which ones did not, one thing became immediately clear: the individuals and sites that used statistics vastly outperformed those that did not. For example, Ben Zauzmer won 21 categories (beating me by one) and GoldDerby’s predictions led to 20 category wins (tying me). The other sites I polled averaged roughly 16.5 wins, which is about 18% worse than the stats-based sites.
As I mentioned in my original article, the film that wins Best Film also wins Best Director about 72% of the time. Interestingly, 4 of the sites I polled actually chose two different films for Best Film and Best Director, which strongly indicates they were making decisions with their gut rather than with calculated probabilities.
I made the mistake of going with my gut when I chose “Joanna” for Best Documentary Short Subject, even though “Crisis Hotline” was a decisive favorite. I chose “Joanna” because I saw both films and simply felt it was a better film than “Crisis Hotline.” Unfortunately, there is no correlation between who I feel will win and who actually wins, so it was a poor decision on my part. Ironically, at the Oscar party I attended I ended up tying for first instead of winning first because of the one pick where I strayed from the probabilistic approach. I’ve learned my lesson as it pertains to selecting Oscar winners, and as a search engine marketer I was reminded that we cannot ignore what the data is telling us. A probabilistic approach can provide huge advantages when making key optimization decisions within your digital marketing campaigns.
One last thing I’ll mention is that Ben Zauzmer, whom I mentioned earlier, made a very astute observation regarding the Best Original Screenplay category. He noticed that the WGA correctly forecasts the Oscar winner for this category 70% of the time, which would have meant that “The Grand Budapest Hotel” was the favorite to win. However, “Birdman” was ruled ineligible by the WGA so it didn’t have an opportunity to win this award. Instead of blindly believing the numbers, he adjusted the model to account for the likelihood that “Birdman” would have won if it had been eligible, which resulted in predicting that “Birdman” would win Best Original Screenplay, which it did. As marketers, we are required to constantly synthesize, and sometimes question, the data to ensure we’re making decisions based on signals rather than noise (shout out to Nate Silver). This approach has shaped our campaign management strategies, and I’m hoping it will also help you make better marketing decisions moving forward.
Oscar Predictions 2015: Beat My Ballot and Win a Free PPC Audit
The Oscars are right around the corner. If you’re like me, you’re jittery with anticipation. After all, how many other nights of the year provide such an amazing opportunity to put probabilistic theory to work?
Now I know what you’re asking yourself: why on Earth is a search engine marketer writing about the Oscars? Well, for one, choosing Oscar winners is a lot like choosing which landing page to use, or which ad copy to run; informed decisions require statistical insights and we use stats-based Bayesian models to help us make better marketing decisions for our clients on a regular basis. We’re applying those same principles to help us choose Oscar winners. Second (and the real motivator), I filled out an Oscar ballot last year for the first time and have been fascinated with the selection process ever since.
So before I reveal this year’s winners (or at least those that are favored to win), let’s lay the foundation for the logic behind choosing the winners.
First, I considered the popular opinion of top critics (GoldDerby does a great job of consolidating this information). The greater the consensus was among these critics, the more confident I felt in my decision. Second, I looked at previous winners to see if there are any trends or consistencies that could be applied to this year’s nominees. For example, of the 86 films that have been awarded Best Picture, 62 (or 72%) have also been awarded Best Director. So, if you think a film is going to win Best Picture, you should almost always pick the director of that film to win Best Director. Also, the number and types of awards a film has already won are the strongest indicators of its success at the Oscars. For example, most critics have chosen Birdman as the favorite to win Best Picture because it won the Directors Guild Award (among other awards), which is the strongest predictor of Oscar success for this category (over the last 15 years, the Best Picture winner in the Oscars also won the Directors Guild Award 80% of the time).
With these insights, I have carefully chosen this year’s Oscar winners for each category. If you send me your predictions ahead of time and win more categories than I do, Synapse will provide you with a free PPC audit. In the unlikely event that more than one ballot beats mine, the PPC audit will go to the lucky one who won the most categories. So, without further ado, here are my predictions:
- Best Picture: “Birdman” (it’s a slight favorite over “Boyhood”, but statistically it’s very close)
- Best Director: “Birdman,” Alejandro Gonzalez (going with the 72% stat here, plus the fact that Gonzalez has already won the Directors Guild Award, which is the strongest predictor of who will win best director at the Oscars)
- Best Lead Actor: Eddie Redmayne in “The Theory of Everything” (he’s nearly a 3:1 favorite over Michael Keaton, since he’s already won the SAGs, the BAFTAs and the Golden Globes)
- Best Supporting Actor: J.K. Simmons in “Whiplash” (he’s over a 90% favorite to win)
- Best Lead Actress: Julianne Moore in “Still Alice” (she swept the Golden Globes, the SAGs and the BAFTAs)
- Best Supporting Actress: Patricia Arquette in “Boyhood” (is anyone voting for anyone else?)
- Best Animated Feature: “How to Train Your Dragon 2” (it’s a sizable favorite, although some critics believe Big Hero 6 will win)
- Best Documentary Feature: “Citizenfour” (all nine sites I polled chose Citizenfour)
- Best Foreign-Language Film: “Ida,” Poland (“Ida” is a huge favorite)
- Best Adapted Screenplay: “The Imitation Game” (its WGA victory puts it slightly ahead of the pack)
- Best Original Screenplay: “Birdman” (this one is quite tricky because “Birdman” was ruled ineligible for WGA, but the WGA winner is typically a 70% favorite to win this category. Without this insight, the numbers would say “The Grand Budapest Hotel” should win, but Golden Globe and Critics Choice wins make “Birdman” the slight favorite)
- Best Cinematography: “Birdman” (this is a slight favorite)
- Best Costume Design: “The Grand Budapest Hotel” (heavy favorite over “Into The Woods”)
- Best Film Editing: “Boyhood” (its win at the American Cinema Editors guild makes it the favorite)
- Best Makeup and Hairstyling: “The Grand Budapest Hotel” (this is a heavy favorite based on its BAFTA and guild wins)
- Best Original Score: “Theory of Everything,” Johann Johannsson (about a 2:1 favorite over “The Grand Budapest Hotel”)
- Best Original Song: “Glory” (it won the Golden Globe and Critics Choice)
- Best Production Design: “The Grand Budapest Hotel” (it won the BAFTA and the Art Directors Guild award, which make it a heavy favorite)
- Best Sound Editing: “American Sniper” (one of the tightest races of this year’s Oscars, but “American Sniper” is a slight favorite)
- Best Sound Mixing: “Whiplash” (this one is tight, but its BAFTA victory puts “Whiplash” slightly ahead of “American Sniper”)
- Best Visual Effects: “Interstellar” (“Dawn of the Planet of the Apes” could upset, but “Interstellar” is the only film with more than two nominations in this category)
- Best Animated Short Film: “Feast” (last year the favorite lost, so watch out for “The Dam Keeper”)
- Best Live-Action Short Film: “The Phone Call” (heavily favored, although both Entertainment Weekly and IndieWire have predicted that “Boogaloo and Graham” will win)
- Best Documentary Short Subject: “Joanna” (I’m going against the stats on this one because I saw this film and it was amazing, and in my opinion, better than “Crisis Hotline”)
So I’m going with a completely probabilistic approach, with the exception of the last category. Based on the probabilities for each category, I am expected to win roughly 17-19 categories. Think you can beat me? Reply to this post or email me your selections at paul@synapsesem.com. Let the best probabilistic mind win!
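For the statistically inclined: treating the categories as (roughly) independent, the expected number of correct picks is simply the sum of the per-category win probabilities. The probabilities below are hypothetical stand-ins for illustration, not my actual model outputs:

```python
# Expected number of correct Oscar picks: by linearity of expectation,
# it is the sum of each pick's estimated win probability.
# These 24 probabilities are illustrative placeholders only.
pick_probabilities = [
    0.60, 0.72, 0.75, 0.92, 0.90, 0.95,
    0.75, 0.90, 0.85, 0.65, 0.60, 0.60,
    0.85, 0.75, 0.80, 0.70, 0.75, 0.80,
    0.60, 0.60, 0.65, 0.75, 0.65, 0.35,  # last pick goes against the stats
]
expected_wins = sum(pick_probabilities)
print(f"Expected wins across {len(pick_probabilities)} categories: {expected_wins:.1f}")
```

With probabilities in this range, the expectation lands in the 17–19 window cited above, even though no single pick is a sure thing.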
Sources
http://www.ropeofsilicon.com/oscar-contenders/oscar-predictions/
http://www.theguardian.com/film/series/oscar-predictions-2015
http://www.goldderby.com/odds/experts/200/
http://www.ew.com/article/2015/02/13/oscars-predictions-2015-who-will-win
http://www.indiewire.com/article/2015-oscar-predictions
http://www.hollywoodreporter.com/awards/predictions/oscars/2015/oscars-2112015
http://www.awardscircuit.com/oscar-predictions/
Are All Impressions Created Equally?
Cracking the Mystery Behind Budget-Limited Impression Distribution
Over the course of my career I’ve learned that the vast majority of paid search accounts will at some point be affected by budget limitations. There are numerous reasons why budgets may become capped. You may need to temporarily pull back spend to adhere to internal budgets. Maybe you add a new campaign to your account, which leaves existing campaigns fighting for fixed resources. Or an aggressive bid escalation strategy could make historical daily budgets insufficient. Whatever the reason, limited daily budgets affect most accounts at some point or another.
For most marketers, campaign-level budget adjustments have become the go-to budget management strategy. You need to drop your spend by 25%? Sure, I’ll knock down your daily budget by 25%. Problem solved! It’s easy to understand why we’re so quick to change campaign budgets. It’s a simple, reliable method of controlling spend, and in an industry where time is at a premium, it’s a change that takes just a couple of seconds to implement. But do we really understand how we’re influencing performance when we change campaign budgets?
Interestingly, Google does not have much to say about the matter. They explain that when budgets are capped, “ads in the campaign can still appear, but [they] might not appear as often as they could.” Talk about an exhaustive scientific conclusion! Unfortunately, we too often assume that this impression rotation will occur on a pro rata basis. In the past, I have aggressively dropped budgets with the expectation that I would see a linear drop-off in performance. For example, if I drop my budget by 50%, I would expect my impressions, clicks and conversions to all drop by the same 50%. Click-through rate and conversion rate shouldn’t be affected if Google is reducing my impression share at random and on a pro rata basis. Although this logic makes perfect sense, the data we’ve collected after limiting budgets has told a different story.
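The pro rata assumption described above can be made concrete with a short sketch (the traffic figures are hypothetical):

```python
# The naive pro rata assumption: cutting budget by X% cuts impressions,
# clicks and conversions by X%, so CTR and conversion rate hold steady.
def pro_rata_projection(metrics, budget_cut):
    """metrics: dict with impressions, clicks, conversions; budget_cut: e.g. 0.50."""
    scaled = {k: v * (1 - budget_cut) for k, v in metrics.items()}
    scaled["ctr"] = scaled["clicks"] / scaled["impressions"]        # unchanged ratio
    scaled["conv_rate"] = scaled["conversions"] / scaled["clicks"]  # unchanged ratio
    return scaled

before = {"impressions": 200_000, "clicks": 8_000, "conversions": 400}
after = pro_rata_projection(before, budget_cut=0.50)
print(after)  # volumes halve; CTR stays 4%, conversion rate stays 5%
```

This is the linear world the assumption predicts; the real-world data in the examples below behaves quite differently.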
When we cap campaign budgets, the more important question we should be asking is “which impressions am I limiting?” Let’s go back to our hypothetical budget cut. If we drop campaign budgets by 50%, we said we would expect impressions to drop by 50%. Stop right there. While it’s true that a 50% reduction in budget will likely lead to a 50% drop in campaign-level impressions, we need to consider which impressions are going to be affected. The obvious first consideration is whether impressions will decline equally across the entire keyword set. But there are also less tangible and less measurable dynamics to consider. How will budget limitations impact the time of day my ads are served, the geographies they are served in, and the devices on which they show? Google essentially has free rein to decide where and how they are going to rotate our impressions.
Our experience has shown us that budget limitations (and even minor <15% limitations) can significantly impact the quality of our campaigns’ impressions. Our hypothesis is that when excessive impressions are available, Google will serve your ads across segments (times of day, geographies, devices, etc.) that offer the lowest competition. In other words, Google is not rotating impressions at random. It’s in Google’s best financial interest to take this approach as they’re now creating a bigger market (with higher bids) when and/or where there was previously less activity. Unfortunately, the lower competition segments are often less active because they relate to lower quality impressions that produce lower conversion rates.
So where’s the proof of this impression variability? Let’s look at three real-life data sets from our client base:
- Example 1: One of our non-profit clients ran state-level campaigns across the nation. They did not have the budget to support this effort, but they wanted some visibility in every state. Most campaigns had impression shares (due to budget) limited by 25-40%. After explaining the risks of this strategy we were able to get the client to agree to experimentally increase our monthly budget so that we could run the campaigns with completely uncapped budgets. Very few changes (optimizations, testing, etc.) were made to the campaigns during this time, and the client is not significantly impacted by seasonality. Before and after results are shared below:
- Example 2: One of our B2B software clients was reluctant to significantly invest in a Branded advertising campaign. After a full year of running a campaign with a lost IS of 31%, the client decided to maximize branded spend. It is worth noting that 95% of the traffic and impressions from this campaign originated from the exact match iteration of its brand name. Again, very few changes were made to the campaigns during this time, and the client is not significantly impacted by seasonality. Before and after results are shared below:
- Example 3: Another one of our B2B software clients introduced several new campaigns related to an expanding product line. This required us to limit one of our existing campaign’s impressions by 15%. Once again, very few changes were made to the campaigns during this time, and the client is not significantly impacted by seasonality. Before and after results are shared below:
This data clearly suggests that there is an inverse relationship between lost impression share and impression quality and efficiency. It is critical that we consider the ramifications that campaign-level budget changes may have on performance. Before dropping budgets, dive deeper into your accounts and ‘cut-the-fat’ at the most granular level possible. Even better, when building out new accounts, consider how your campaign structure might influence budget management. Are there keywords that you will never want capped because they are exceptional performers? If there are, they should be broken out at the campaign level. Even if you didn’t structure your account like this when you launched, it’s not too late. Scan your account for your top performing keywords and consider moving those terms to a new “high priority” campaign that you can ensure receives sufficient daily budget.
All impressions are not created equal. Google has far too much autonomy in their impression rotation methodology to simply assume you can linearly scale performance up or down with budget. When facing budget limitations, take the time to cut unproductive spend by optimizing targeting settings and/or by pruning keywords. Break out top performers into separate uncapped campaigns. Campaign level budget reductions should be reserved as a last-resort optimization.