Identity, Affinity, and Personalization: What Marketers Can Learn From Tinder

Posted by bridget.randolph

Everyone has an opinion about Tinder. Whether you’re happily single, actively seeking a partner, or in a committed relationship, something about the concept of “swiping” yes or no on strangers’ pictures seems to guarantee strong opinions. There are endless articles about what Tinder (and similar apps) say about modern dating, love in the 21st century, and, more broadly, millennial shallowness. And, as someone who can’t resist twisting a good dinner party topic into a marketing blog post, I started thinking about how what we know about Tinder and the way people use it can give us insight into how people shop. After all, some of my friends refer to Tinder usage as “shopping for boys.”

[image credit: http://ift.tt/1PC1nNQ]

So what does the modern singleton’s approach to online dating tell us about their shopping behavior? And what should we be doing about it? The answer can be found in a look at social and technological history and the concept of an individual with a sense of personal identity.

As a marketer attempting to connect with the “Tinder Generation,” your goal is to tap into your customers’ values at a very personal level, connect with them through their personal network or “tribe,” and help them to avoid choice paralysis while nonetheless providing them with a sense of having plenty of personalized options.

The rise of the individual and the concept of personal identity

Historically, in Western society, the family could be considered the basic unit of society. Marriage as a concept was heavily tied to economic factors, along with a diplomatic aspect at the higher levels of social status, and proximity at the lower end of that scale. The local community was a fairly static unit, with individuals being born, marrying, raising a family, and being buried all in the same village. Marrying for love is an age-old theme found in literature, but it was not the typical experience for the majority of people until the 20th century.

In the wake of the Industrial Revolution, there was mass migration to cities. Over time, as cities were increasingly unable to accommodate all their residents, the concept of “living in the suburbs” became more common, but still as a family unit. There’s a strong sense of the gendered roles of men and women in this period, who together make up a family unit (particularly with the birth of children).

The gendered division of labor is reflected in the dating behavior from this period. The stereotype of “boy meets girl, boy buys girl a milkshake, boy marries girl” is a product of this emphasis on the family as the basic unit, where the man is the provider and the woman is the homemaker. This is a society in which a man asks a girl’s father for her hand in marriage, and typically you marry the boy or girl “next door” (a callback to the traditional economic and proximity factors).

From a marketing perspective, this is the society which produced those charmingly disturbing retro ads like this one:

[image credit: http://ift.tt/1h2pnu2]

Following the Sexual Revolution of the 1960s, and the zeitgeist which produced feminist works like Betty Friedan’s The Feminine Mystique, this focus on gendered division of labor, and viewing of the individual only as he or she contributes to the family unit, began to shift. The individual becomes the basic social unit rather than the family. There is also far less emphasis on marriage and starting a family as the primary markers of having attained adulthood and respectability.

This leads to a much greater emphasis within modern society on personal identity and authenticity (“be true to who you are”).

Within this model, the approach to dating is about “me”: my personal identity, what my choice of partner says about me, and what I want from a relationship at this point in time. There are more options than ever, and we want to be seen as unique and autonomous beings.

Despite this, humans are social creatures. We like to connect. We like to share an identity with a group, to feel like part of a tribe. This is why we borrow aspects of different social groups to explain that unique personal identity.

This also explains why, as people become more detached from their original location- and family-based communities, they nevertheless find (and create) new tribes and communities which are not based on traditional structures. What used to be a relationship based on kinship by birth is now based instead on personal choice and finding other people “like us” in terms of identity rather than genetics. Consider, for instance, the concept of an “urban family,” or the close-knit ties represented in popular TV shows like Friends and Buffy the Vampire Slayer.

[image credit: http://ift.tt/1zDAfYN]

And this is why marketers have consistently seen the power of social proof — which is all about reinforcing that tribal identity (“1000 other people like you have bought this product!”).

Technological innovations and the rise of personalization

In the meantime, technology has been developing (in parallel to these societal shifts) which supports individual freedom and endless choices. We’ve moved from the more family- and community-oriented devices of the past (radio, tv, even household PCs) to individual devices (smartphones, tablets, smartwatches) which contain all aspects of our lives and our individual identities.

The popularity of these hyper-personal devices, combined with the power of the Internet to connect people globally, has enabled big data collection and analysis. This in turn leads to granular personalization and machine learning on a mindblowingly large scale. And this explosion of personalized, high-speed technology has contributed to the expectations that we as consumers have from businesses and their products or services:

  • We expect lots of options that “work for me”
  • We expect convenience and ease of use
  • We expect to have everything in one place
  • We expect instant gratification and will do almost anything to avoid boredom
  • We expect to stay connected to other people digitally

When we combine all of this with the social phenomenon of the individual’s personal identity being the most important thing, we get the rise of the blogger, the YouTube celebrity, the Twitter activist – all of these people who want to express their own unique voice and share it with the world. And for the rest of us, we get social media in general, which is all about presenting a particular, curated identity and staying connected to family, friends, and fans digitally and in real-time.

The rise of social media leads to the concept of “viral” content: a piece of content which a lot of people share, often because of what it allows them to say about themselves. Buzzfeed are the masters of creating this type of content, because they understand the value of tapping into those personal loyalties and other elements which go into creating a sense of one’s own identity while remaining connected to others.

[image credit: http://ift.tt/213YiMx]

But what does this have to do with Tinder? Or marketing?

This is where Tinder comes in. Tinder represents the intersection of these two historical trends: the sociological and technological. Modern dating, and particularly online dating, has always been about curating an “authentic” but attractive version of one’s identity and selling that identity to one’s target audience, namely a prospective partner. Tinder takes these elements, combines them with the desire for choices, convenience, and the rise of the smartphone, and turns it all into a fun game to play when you’re bored. And it provides all of these benefits in one simple action: the swipe.

[image credit: http://ift.tt/1oidWGv]

Tinder users are often accused of being shallow and judging people based solely on externals. But in reality, Tinder is the perfect example of this phenomenon of tapping into social cues and semiotics in order to tell a story about the person whose profile you are looking at. It’s a classic example of a phenomenon written about in books such as Blink, Thinking Fast and Slow, and Predictably Irrational. For a more in-depth explanation of this as it applies to Tinder, check out this Buzzfeed article (meta, no?).

In essence, Tinder reflects the “acquisition behavior” of a generation who have grown up in the age of the Internet, social media, and the rise of the smartphone. Tinder allows users to curate and announce a personal identity as well as reflect tribal affinities (I’m a traveller, I’m a hipster, I’m a frat boy, I’m an artist … or, I’m some combination of all of these). It then allows these users to browse through countless “match” options who reflect these same affinities and values to a greater or lesser degree, and provides the illusion of infinite choice. And it alleviates boredom by providing an entertainment option for when you’re stuck in line at the store or bored on your commute. The interface deliberately plays into this “gamification” by rewarding you with an “It’s A Match” screen with two options: “Send a Message,” or, significantly, “Keep Playing.”

[image credit: http://ift.tt/1G58CGZ]

If you want someone to “convert” from your profile to a real world date, you face a similar challenge as that which the majority of brands are facing: the paradox of choice. With so many potentially better options available, how do you create a profile which will not only earn you a swipe right but also continue to engage your target customer throughout the user journey from match screen to conversation to first in-person date?

The principles remain the same as those of any good marketing strategy in today’s world:

  • Reflect your target audience’s values (for example, if you want to meet someone who values intelligence and education, you might use a university photo as one of your pictures);
  • Connect with them on the basis of shared friends or interests (this is the social proof aspect of Tinder’s interface); and
  • Personalize the experience in order to guide them to the conversion point (for instance, don’t start a conversation with “heyyy” or “what’s up” unless you want to be ignored).

So how do marketers reach the Tinder Generation?

In terms of applying these insights to our marketing strategies, we can break them down into three key areas:

  • Personal values
  • Tribal affinities
  • Personalization

For each of these areas, there are tactics which can allow you to tap into these sociological and psychological factors and optimize for your target audience. I’ve included some examples below, but this is by no means an exhaustive list.

Personal values

  • Find a way in which your product or service enables the customer to “say something about myself” by using it. A great example of this is luxury brands like Gucci, whose customers literally self-identify by wearing clothing and other objects with the logo of the brand visible.
  • Recognize that the top of the funnel is becoming smaller as people self-qualify in or out of the funnel before they even enter the customer journey. This is particularly true with the changes to search engine algorithms and interfaces, which allow the search engine to do the lead qualification on your behalf. A great tip for this is to think in terms of what “actions” the user can take on your landing pages, and optimize for the action you want them to take. Craig Bradford and I have talked about this over here as part of the Distilled Searchscape project.
  • Entertain your customers and provide distraction when they’re bored. This won’t work for every brand, but a great example of a product brand successfully doing this is Red Bull’s content branch. But even if you can’t build out an entire publication arm of your business, this might be a good way to approach your social media strategy. How much does your social media presence encourage users to visit your page specifically? Sephora’s Pinterest strategy does just that.

Tribal affinities

  • Understand your audience as a social group: who influences them? What do they value as a group? Can you tap into this in your content? Your social strategy? Through an influencer outreach campaign? Remember that it’s not always the influencer with the most followers who is the most influential in terms of a particular segment of their audience.
  • Make use of social signals (language, references, influencers) to indicate your affinity with your target audience, and social proof related to the specific tribe/community which your target audience is a part of (“8 out of 10 moms say…”).
  • Segment your audience and target your campaigns at the most specific level possible.
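To illustrate that last bullet, segmentation at its simplest is just grouping your audience by the attributes you actually know about them, at the most specific level those attributes allow. Here’s a minimal sketch; the records, field names, and values are all invented for the example:

```python
from collections import defaultdict

# Hypothetical audience records; in practice these would come from your
# CRM or analytics platform.
audience = [
    {"name": "Ana",  "interest": "travel",  "channel": "instagram"},
    {"name": "Ben",  "interest": "fitness", "channel": "email"},
    {"name": "Cara", "interest": "travel",  "channel": "email"},
]

def segment(records, *keys):
    """Group records by a tuple of attribute values, most specific first."""
    segments = defaultdict(list)
    for person in records:
        segments[tuple(person[k] for k in keys)].append(person["name"])
    return dict(segments)

# Target at the most specific level available: interest AND channel.
print(segment(audience, "interest", "channel"))
# {('travel', 'instagram'): ['Ana'], ('fitness', 'email'): ['Ben'],
#  ('travel', 'email'): ['Cara']}
```

The same function with fewer keys gives you broader segments (e.g. by interest alone), so you can always fall back to a coarser grouping when a segment gets too small to target economically.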

Personalization

These are just a few of the ways in which marketers can adopt some of the same strategies that work in the dating world and apply them to business. But even if the specific tactics mentioned here don’t directly apply to your business, you can’t go wrong by paying attention to your audience and their behavior. Consider what they do when they’re in a non-buying context, and see if you can interact with them on that level (if not in that context!). And with a bit of practice, and some well-targeted campaigns, your customers should discover that you’re a match made in heaven!


Now it’s your turn! Are there any tactics you’ve noticed in these areas which have worked particularly well for you and your target customers? Do you agree with my theories about dating? Let me know your thoughts in the comments.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

How to Optimize for Competitors’ Branded Keywords

Posted by randfish

It’s probably crossed your mind before. Should you optimize for your competitors’ branded keywords? How would you even go about it effectively? Well, in today’s Whiteboard Friday, Rand explains some carefully strategic and smart ways to optimize for the keywords of a competitor — from determining their worthiness, to properly targeting your funnel, to using third-party hosted content for maximum amplification.

http://ift.tt/1owtlmp

http://ift.tt/1GaxkYO

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about optimizing for your competitors’ branded terms and phrases, the keywords that are your competitors’ product names or service names. This gets into a little bit of a dicey area. I think it’s challenging for a lot of SEO folks to do this and do it well, and so I’m going to take you through an approach that I’ve seen a lot of folks use with some success.

A strategic approach

So to start off with, let’s go to the strategy level. Is it actually the case — and sometimes it’s not, sometimes it is not the case — that branded keywords are driving high enough volume to actually be worth targeting? This is tough and frustrating, but basically one of the best things that I can recommend in this case is to say, “Hey, if we are…”

I’m going to pretend for the purposes of this Whiteboard Friday that we’re all working together on the SEO campaigns for Wunderlist, which is a to-do app in the Google Play and iPhone app stores, bought by Microsoft I think a little while ago. Beautiful app, it looks really nice. One of their big competitors obviously is Evernote, certainly an indirect competitor but still.

Are branded keywords driving high enough volumes to be worthwhile?

Essentially what you might want to do here is actually go ahead and use AdWords to bid on some of these keywords and get a sense for how much traffic is really being driven. Can you draw any of that traffic away? Are people willing to consider alternatives? If there’s almost no willingness to consider alternatives (you can’t draw clicks, you’re not getting any conversions) and the volume is relatively low, it’s probably not worth it. But that’s not the case here: tons of people are searching for Evernote, so I’d probably tell Wunderlist to go ahead. Evernote is actually bidding on Wunderlist’s terms, so turnabout is fair play. Bidding on AdWords can answer both of these questions, and that can help get us to the next question.
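The economics of that AdWords test reduce to simple arithmetic you can run before committing a budget. A minimal sketch, where every number is a hypothetical placeholder (not a real Wunderlist or Evernote figure) that you’d replace with measurements from a small test campaign:

```python
def branded_keyword_value(monthly_searches, ctr, conversion_rate,
                          value_per_conversion, cpc):
    """Estimate monthly profit from bidding on a competitor's branded term.

    All inputs are assumptions you'd measure with a small test:
    - monthly_searches: search volume for the branded term
    - ctr: share of searchers who click your ad
    - conversion_rate: share of those clicks that convert
    - value_per_conversion: what a conversion is worth to you
    - cpc: average cost per click
    """
    clicks = monthly_searches * ctr
    revenue = clicks * conversion_rate * value_per_conversion
    cost = clicks * cpc
    return revenue - cost

# Hypothetical numbers: 50,000 searches, 2% CTR, 5% conversion rate,
# $40 per conversion, $1.50 per click.
profit = branded_keyword_value(50_000, 0.02, 0.05, 40.0, 1.50)
print(f"${profit:,.2f}")  # a positive number suggests the term is worth pursuing
```

If the result is consistently negative, or the click volume is negligible, that’s the “not worth targeting” signal described above; a healthy positive number is a hint the organic opportunity is real too.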

What do you need to solve?

All right, now what is it that we need to solve? What are potential customers doing to compare our products or our services against these folks, and what are they interested in when they’re searching for these branded names? What makes them choose one versus another product?

Related searches can help us here, so too can normal forms of keyword research. So related searches is one form, but certainly I’d urge you to use search suggest, I’d urge you to check out Google’s AdWords Keyword Tool, if you like keywordtool.io or if you like Huballin or whatever it is that you think is a great keyword tool, check those out, go through those sources for your competitor’s keywords, see what’s coming up there, see what actually has some real volume. Obviously, your AdWords campaign where you bid on their branded terms can help tell you that too.

Then from there I’d go through the search results, and I’d see: What are people saying? What are the editorial reviews? For example, CNET did this Wunderlist review. What does their breakdown look like? What are people saying in forums? What are they saying on social media? What are they saying when they talk about this?

Ask the same questions of your competition

So if I’m seeing what “Wunderlist versus Evernote” looks like, great. Now let me plug in Evernote and see what everyone’s saying about them. By the way, you don’t just have to use online research. You can go primary source on this stuff, too. Ask your customers or your audience directly through surveys. Here at Moz, we’ve used Google Custom Audience Surveys and SurveyMonkey’s Audience product. We like both of those.

Once you’ve got this down and you say, “Hey, you know what? We’ve got a strategic approach. We know what we need to talk about in terms of content. We know the keywords we’re targeting.” Great. Now you get to choose between two big options here — self-hosting some content that’s targeting these terms, or using third-party hosting.

Self-hosted content

With self-hosted content we’re going to try and go after those right terms and phrases. This is where I’ve seen some people get lost. They essentially go too high or too low in the funnel, not targeting that sweet spot right in the middle.

1. Target the right terms & phrases

So essentially, if someone’s searching for “Evernote review,” the intent there is that they’re trying to evaluate whether Evernote is good. Yeah, you know what? That’s right in the middle. That’s right in the sweet spot, I would say that is a good choice for you targeting your competitors’ keywords, anything around reviews.

“Evernote download,” however, that’s really at the bottom of the funnel. They’re trying to install at that point. I don’t think I’d tell you to go after those keywords. I don’t think I’d bid on them, and I don’t think I’d create content based on that. An Evernote download, that’s a very transactional, direct kind of search. I’d cross that one off my list. “How to use Evernote,” well, okay that’s post-installation probably, or maybe it’s pre-installation. But it’s really about learning. It’s about retaining and keeping people. I’d probably put that in the no bucket as well most of the time. “Evernote alternative,” obviously I’m targeting “Evernote alternative.” That is a great search phrase. That’s essentially asking me for my product. “What is Evernote,” well okay, that’s very top-of-funnel. Maybe I’d think about targeting some content like, “What do apps like Evernote, Todoist and Wunderlist do?” Okay. Yeah, maybe I’m capturing all three of those in there. So I’d put this as a maybe. Maybe I’d go after that.

Just be careful because if you go after the wrong keywords here, a lot of your efforts can fail just because you’re doing poor keyword targeting.

2. Craft content that delivers a superior user experience

Second is you need to craft that content that’s going to deliver a superior user experience. You’re essentially trying to pull someone away from the other search results and say, “Yeah, it was worth it to scroll down.

It was worth it to click and to do the research and to check out the review or check out the alternative.” Therefore, you need something that has a lot of editorial integrity. You can’t just say, “Everything about them is bad. Everything about us is great. Check out why we kick their butt six ways from Sunday.” It’s just not going to be well received.

You need to be credible to that audience. To do that, I think what’s smart is to make your approach the way you would approach it as if you were a third-party reviewer. In fact, it can even pay in some cases to get an external party to do the comparison review and write the content for you. Then you’re just doing the formatting. That way it becomes very fair. Like, “Hey, we at Wunderlist thought our product compared very well to Evernote’s. So we hired an outside expert in this space, who’s worked with a bunch of these programs, to review it and here’s his review. Here are his thoughts on the subject.”

Awesome. Now you’ve created some additional credibility in there. You’re hosting it on your site. It’s clearly promoting you, but it has some of that integrity.

I would do things like I’d think about key differentiators. I’d think about user and editorial review comparisons. So if you can go to the app stores and then collect all the user reviews or collect a bunch of user reviews and synchronize those for folks to compare, check out the editorial reviews — CNET has reviewed both of these. The Verge has reviewed both of these. A bunch of other sites have reviewed both of them. Awesome. Let’s do a comparison of the editorial reviews and the ratings that these products got.

“Choose X if you need…” This is where you essentially say, “If you’re doing this, well guess what? We don’t do it very well. We’d suggest you use Evernote instead. But if you’re doing this, you know what? Wunderlist is generally perceived to be better and here’s why.” That’s a great way to do it. Then you might want to have that full-feature comparison breakdown. Remember that with Google’s keyword targeting and with their algorithms today they’re looking for a lot of that deep content, and you can often rank better if you include a lot more of those terms and phrases about what’s inside the products.

3. Choose a hosted location that doesn’t compromise your existing funnel

This is rarely done, but sometimes folks will put it on their main homepage of their website or in their navigation. That’s probably not ideal. You probably want to keep it one step away from the primary navigation flow around your site.

You could conceivably host it in your blog. You could make it something where you say, “Hey, do you want to see comparisons? Or do you want to see product reviews?” Then we’re going to link to it from that page. But I wouldn’t put it in the primary funnel.

3rd-party hosted content

Third-party hosted content is another option, and I’ve seen some folks do this particularly well recently. Guest content is one way to do that. You could do that. You could pay someone else, that professional reviewer and say, “Hey, we want to pitch this professional reviewer comparing our product against someone else’s to these other outlets.”

Sometimes there are external reviewers who if you just ask them, if you just say, “Hey we have a new product or we have a competing product. We think it compares favorably. Would you do a review?” A lot of the time if you’re in the right kind of space, people will just say, “Yeah, you know what? I’ll put that on my schedule because I think that can send me some good traffic, and then we’ll let you know.” You kind of knock on wood and hope you get a favorable review there. You could contribute it to a discussion forum. Just be open and honest and transparent about who you are and what you’re doing there.

Native ads

Today you can do sponsored content or what’s called native ad content, where essentially you’re paying another site to host it. Usually, there’s a bunch of disclosure requirements around that, but it can work and sometimes that content can even rank well and earn links and all that kind of stuff.

Promotion & amplification

For promotion and amplification of this content, it’s a little trickier than it is with your average content because it’s so adversarial in nature. The first people I would always talk to are your rabid loyal fans. So if you know you’ve got a community of people who are absolutely super-passionate about this, you can say, “Hey, guess what? We released our comparison, or we released this extra review comparison of our product versus our competitor’s today. You can check it out here.”

You can pitch that to influencers and pundits in your space, definitely letting them know, “Hey, here’s this comparison. Tell us if you think we were honest. Tell us if you think this is accurate. Tell us if this reflects your experience.” Do the same thing with industry press. Your social audiences are certainly folks that you could talk to.

Give them a reason to come back

One of the key ones that I think gets too often ignored is if you have users who you know have gone through your signup flow or have used your product but then left, this is a great chance to try and earn their business back, to say, “Hey, we know that in the past you gave Wunderlist a try. You left for one reason or another. We want you to see how favorably we compare to our next biggest competitor in the space.” That can be a great way to bring those people back to the site.

Consult your legal team

Last thing, very important. Make sure, when you’re creating this type of content, that you talk to your legal professional. It is the case that sometimes using terms and phrases, trademarked words, branded words, has some legal implications. I am not a legal professional. You can’t ask me that question, but you can definitely ask your lawyer or your legal team, and they can advise you what you can and cannot do.

All right, everyone. Hope you’ve enjoyed this edition of Whiteboard Friday, and we will see you again next week. Take care.

Video transcription by Speechpad.com


The 2015 Moz Annual Report: All the Facts and Then Some

Posted by SarahBird

Longstanding insomnia sufferers, rejoice! My Moz 2015 Annual Report is here. Check out 2012, 2013, and 2014 if you’re a glutton for punishment.

So much happens in a year — fantastic and terrible things — that distilling it into one blog post is my annual albatross. Alright. Enough wallowing in self-pity. Here we go!

[image: Doctor Who “Allons-y!” gif]

Here’s how I’m organizing this post so you can jump around to whatever strikes your fancy:

Part 1: tl;dr 2015 was a strengthening year!

Part 2: Two 2015 strategic shifts

Part 3: Two invisible achievements

Part 4: The tough stuff

Part 5: Inside Moz HQ

Part 6: Performance (metrics vomit)

Part 7: The Series C and looking ahead

[Part 1]

tl;dr: 2015 was a strengthening year!

2015 was a strengthening year. We grew customers, revenue, and product offerings. We also began some major tech investments that will continue to pay off in the years ahead.

With all the product launches comes increased opportunity in 2016, and also increased complexity. In the year ahead, you’ll see Moz delivering much more personalized onboarding, re-working the brand to accommodate our product families, changing up our customer acquisition flow, and investing in technologies and practices to speed up product development.

[Part 2]

Two 2015 major strategic shifts

First, we’ve shifted from a one-size-fits-all product to crafted customer experiences.

The most visible strategic change is the move away from cramming every feature into one product and toward crafted experiences. Our community and customers are diverse. The solutions we offer should be too.

We started 2015 with Moz Pro, Moz Local and our API business. We’re ending the year with two new products under our belt, Moz Content and Followerwonk. Pro will evolve in 2016 to focus on professional SEOs. Moz Local just launched a major upgrade to its offering, making it the most useful way to manage your local SEO. Content marketers will love Moz Content. And Twitter fanatics can enjoy analyzing their followers with Followerwonk.

Why did we back away from all-in-one? Well, we discovered that adding more features into a product isn’t always better; sometimes it’s just more. We heard from customers that they valued certain parts of the product that solved their problem, but weren’t interested in the others.

More simply, we built one product that many different kinds of customers could get a little benefit from. Instead, we want to build many products that customers get a lot out of. Even more simply: instead of giving you a plate of food with lots of small bites, only 30% of which you enjoy, we’re giving everyone a big plate of their favorite food. Yum.

Second, people sometimes really want to talk to people. And that’s good.

We’ve also relaxed our religious fervor about keeping humans out of the sales and onboarding process. We prided ourselves for years on dogmatically proclaiming that only bad products need human intervention. “The product should sell itself and be obvious to use,” we insisted.

We [I] clung to this belief in the face of overwhelming feedback from our customers that they would love to have more interaction with Moz.

I’m finally ready to let go of my belief that wanting to speak with a human is a failure in the system. We should give our customers what they want. Guess what? They sometimes want salespeople, personal onboarding, and training.

We will not resort to barfy tactics like high-pressure sales, harassment, or limits on self-service. But maybe, just maybe, the world isn’t so black and white as humans=bad, computers=good.

Expect more opportunities to engage with real live, bona-fide Mozzers as part of your product experience, should you want or need us.

[image: “Should you need us” fan art]

[Part 3]

Two invisible accomplishments

Not all of our big 2015 investments are visible to customers or the community. They are just as important nonetheless.

The fancy-pantsiest new engineering platform

We knew that to out-innovate our competitors and make marketing easier for our customers in this dynamic environment, we needed a step-function improvement in our ability to experiment and innovate.

We were inspired by compelling new development platforms built and tested at places like Google, Hubspot, and Twitter. They simplified the software development process without compromising security or performance.

RogerOS is our new engineering platform. It’s based on the Mesos kernel with a Marathon wrapper. Moz Content was built 100% on it, so the two innovations incubated and launched together last year. More Moz services are starting to move to it.

In the spirit of generosity, we open sourced a big chunk of our work and look forward to contributing more in the future. We’ve still got a lot of work to do to make the platform more robust and we’ll continue these efforts in 2016.

The platform is poised to deliver that step-function increase in innovation, because a bigger, more complex Moz shouldn’t mean a slower one.

Kissing bad architecture goodbye

Technical debt is the worst. Ugh. It’s demotivating for the team and siphons cycles away from innovation. It’s hard on customers because feature delivery stalls while the team keeps a fragile system from imploding.

Our Moz Pro product was hobbled by some serious tech debt. The team spent months just trying to keep it running. Customers were disappointed and the team was tired. We needed a plan to fix it that didn’t involve a highly risky 18-month rebuild.

Luckily, one of my engineers had an epiphany, and a bunch of other engineers worked very hard to turn that epiphany into a workable plan that delivered feature improvements (not just parity!) while retiring painful tech debt in seven months. That’s way, way better than the dreaded 18 months.

We have massively transformed the backend architecture for Moz Analytics. This frees up cycles for innovation and unlocks a bunch of latent potential in the data. It feels like we were running a race in a cast and crutches, and now our leg is finally free! We’re throwing those crutches to the sideline and sprinting. Here we come!

[Part 4]

The tough stuff

Have you noticed how many year-in-review posts skip the tough stuff? I don’t want to do that. After all, a lot of this year’s tough stuff becomes next year’s strategic initiatives.

The marketing software space is getting crowded.

The spigot of investor cash has been flowing fast and free into marketing tech for the last couple of years. We’re definitely seeing more competition in the market. It’s no secret that companies need to transform their marketing to match the new ways people discover, engage, and buy.

To our competitors: We Salute You!

You keep good pressure on us to innovate and deliver a great experience for good value.

Moz is ahead in some areas and lagging in others. We’ve struggled to keep our link data reliable, and we have to play catch-up on the size and quality of our index. We’ve been very weak on keyword research, and will be remedying that in 2016. Our customer acquisition flow and brand are also way more complicated than they were a mere two months ago. We’ll be investing heavily in optimizing and improving this experience so it’s easier to find what you’re looking for.

These challenges are non-trivial, and yet invigorating. We’ve got the best people on the planet at Moz and we’ve been making forward-thinking tech investments. It’s game on in 2016.

[Part 5]

Inside Moz HQ

Amidst all of the shifts and changes, some things remain constant.

TAGFEE remains our aspiration and our compass. As an organization and as people, we often act with great integrity to our values. We also have moments of failure.

But what makes Moz special is not the absence of flaws, or the TAGFEE page on the website; it’s the genuine commitment to those values. The pursuit is relentless.

I don’t know anyone who is perfect. The people I admire most are those who strive for excellence and, when they fail, pick themselves up and keep trying. They never give up the commitment to their values. Mozzers are like that. We’ve got 192 Mozzers now, up from 149 last year.

This year, we’ve done a lot of good work on teaching Mozzers about productive conflict resolution, feedback, and inclusion. We’re not done, but we’ve made an earnest start.

Our gender diversity numbers are still terrible, but at least we’re headed in the right direction. Overall, we’re 40% women, up from 37% last year. We’re up to 27% in engineering. 54% of non-engineering roles are women.

A lot of the work we’re doing to make the tech industry more inclusive doesn’t even benefit Moz directly. For example, we partner with lots of programs to bring middle and high school girls on tours of Moz HQ and encourage them to consider careers in tech — maybe even start their own business someday. Several Moz engineers volunteer at coding schools, like ADA Academy, mentoring and welcoming underrepresented people to tech careers. We’re also partnering with Year Up to give underserved young adults meaningful careers.

Our charity match program continues to be one of the parts of Moz I’m most proud of. Last year we donated over $110k to charities that Mozzers are passionate about. We match every Mozzer donation 150%.

Our paid, PAID vacation program continues to be a high point for all Mozzers.

Last year, Moz spent over $400k on airfare, hotels, tours, food, boats, and life-changing, memory-making experiences for Mozzers.

That’s money well spent on lives well lived.

Lastly, we reached a milestone so wonderful, I’m having a hard time expressing how it makes me feel. Two Mozzers, who didn’t know each other when they started working here, fell in love and are getting married. We made a whole family!!!

[Part 6]

Performance

2015 was a strong improvement over 2014’s revenue growth rate. We finished the year at approximately $38 million in revenue. That’s a growth rate of 21.6%, compared to 5.7% the year prior.
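For readers who want to check the math, the growth rate follows directly from the two annual revenue figures. A minimal sketch; note that the ~$31.25M 2014 figure is back-calculated from the numbers above, not a reported number:

```python
def growth_rate(current, prior):
    """Year-over-year growth as a percentage, rounded to one decimal place."""
    return round(100 * (current - prior) / prior, 1)

# $38M in 2015 against a back-calculated ~$31.25M in 2014 (illustrative only)
growth_rate(38.0, 31.25)  # 21.6
```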

Moz Pro still drives the majority of revenue, and Moz Local has demonstrated impressive growth.

Product gross profit margin fared well this year at 76%. That’s basically holding steady from last year. If you throw non-product in there, overall gross profit margin is 73%.

Total Cost of Revenue (COR) went up a little from last year. Most of the increase was driven by the amounts we pay to our data aggregator partners for Moz Local. We expect this to grow even more in 2016 as Local becomes a bigger share of revenue.

Total operating expenses came to $36.4 million in 2015 (excluding COR). The basic shape of that spend has remained pretty constant. The vast, vast majority of our company spend is people. There were no major shifts in spending trends from 2014 to 2015 other than increased third-party data costs.

As planned, our EBITDA increased from last year to -$3.1 million.

Cash burn was slightly above our 10% of revenue plan, but we were pretty darn close at 11%.

Adam shared a detailed reflection of changes and upgrades to Moz Pro in 2015. I encourage you to check it out. Those changes are attracting a slightly different customer. The number of new Moz Pro customers we’re acquiring is much lower than in previous years, but our average revenue per user is increasing. We’re also keeping customers longer. Obviously, we’d love to add tons of new Pro customers *and* increase Average Revenue Per User (ARPU). We’ll be putting energy into that in 2016.

Moz Local locations more than doubled in 2015. And we’re very excited to see how customers are enjoying the big Moz Local Insights update we released this week. It’s only been 24 hours, but the initial response is very good.

Community KPIs

[Part 7]

The Series C and looking ahead

I wrote last week about closing our Series C. (BTW, did you notice the public markets for SaaS companies nose-dived soon after? phew! If you’re reading this, we love you Foundry!)

We made big investments and placed some big bets in 2015. It’s so exciting to see them start to bear fruit. In the next 12 months, you should see (1) more feature releases, (2) more personal interaction with the Moz team when buying and using our products, and (3) increased clarity on our brand and customer acquisition flows.

Thanks for sharing your feedback, sticking with us, and rooting for us. We’ll keep trying to make great stuff that helps you do your job better, and bring a smile to your day!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Moz Pro: The Rear View and the Road Ahead

Posted by adamf

2015 was very much a rebuilding year for Moz Pro. We entered last year with some core infrastructure problems, and so worked heavily on less visible projects to make our SEO software faster, more reliable, and more polished. Still, on top of everything, we were able to add a host of new features and make some major design improvements.

The great news for 2016? A lot of that core infrastructure work is done or near completion. With this foundation in place, we’re going to seriously level up key sections of Moz Pro, like rank tracking, keyword research, site audits, and crawls. Expect to see some of these improvements as soon as next week!

If you’re a Moz Pro customer, or just interested in where we’ve been and where we’re going with our SEO product (hint, hint, we offer a free trial if you’re curious), give this post a read. I’ll cover the following:

  • Key updates from 2015
  • Moz Pro’s renewed focus on SEO
  • Some improvements in store for 2016

Key updates from 2015

Link data and analysis

Spam Score

This new metric helps SEOs identify spammy links for the purposes of assessing risky link profiles, performing link cleanup, and evaluating link targets. To learn all about Spam Score and how to apply it, check out Rand’s excellent Whiteboard Friday on the topic.

Spam Score is available in Open Site Explorer, the MozBar, and through our Mozscape API.

Spam Score in OSE

Spam Score in the MozBar

Link building opportunities

Finding high-value link targets is challenging work, so we added some powerful new features to Open Site Explorer early in 2015 to surface those hard-to-find opportunities that are most relevant for your site. You’ll find three views in the Link Opportunities section of OSE.

  • Reclaim Links: Find pages with link equity that are broken or blocked
  • Unlinked Mentions: Find fresh content that mentions your site or brand, but doesn’t link back to you
  • Link Intersect: Find links that related or competitive sites have, but that you don’t

We now surface unlinked mentions in your campaigns, too!

The Mozscape index

I won’t sugarcoat it: it was a rough year for our link index. We ran into some infrastructure issues that led to delays, outages, and inconsistencies. The good news? We’ve added reinforcements to the team and the infrastructure to keep our core index running smoothly. We are dedicated to improving our index quality, stability, and consistency in 2016.


Keyword rankings

From mobile rankings to search visibility to a complete UX refresh, we made some significant updates to campaign rankings data in 2015.

Mobile rankings

Last year, Google made it no secret they would take mobile seriously. They added mobile friendliness to their ranking factors, so we added it to our rank tracking. You can now track mobile rankings for Google, compare them to desktop rankings, and see which pages Google considers mobile-friendly.

We also added an extra engine to all campaigns, allowing you to collect mobile rankings for every keyword you already track! Effectively, we added 25% more rankings collections to your account for free!

Search Visibility

Along with mobile, we added a new way to understand your rankings—our new Search Visibility Score. You can easily see how visible your ranking pages are across all of the keywords you track. Tying this together with mobile rankings lets you see if your site’s mobile device compatibility may be affecting how you rank.

Local rankings

Last, but certainly not least, we completed much of the work to support local rankings in January 2015. This robust addition lets you not only track your rankings nationally, but also see how Google rankings appear in specific areas within a country. If location matters for your business, this feature can really help you understand and measure your local SEO visibility.


Page optimization

In 2015 we completely revamped our on-page optimization section of Moz Analytics, offering more accurate scores, updated advice, real-world usage examples, and a more elegant and intuitive design.

Precise scores & better advice

We eliminated letter grades from Page Optimization reports in favor of numerical scores. Scores of 0–100 are more precise than letter grades, and are more universally understood. We also updated relative weighting of page optimization criteria and incorporated updated advice from top SEOs to provide clear, relevant, practical optimization suggestions.

Improved workflow

We also made some big improvements to the on-page optimization workflow, adding a brand new page for you to track, monitor, and report on just the pages that you are actively optimizing. We’ve also improved our optimization suggestions, and put them into a separate Discover tab.


Other notable improvements

Multi-user support

We released our first version of Multiseat this past summer, which allows you to create extra logins and share access to your Moz Pro account with your team or clients. This was our most requested feature ever, and a feature we were keen to build for a long time. Multiseat turned out to be a surprisingly complex project, and required a coordinated effort across a bunch of teams to build out the infrastructure and make this feature a reality.

Improvements to campaign insights

Campaign insights highlight meaningful changes, help you quickly identify issues, and uncover opportunities to improve a site you’re actively optimizing. On top of significant performance improvements, we added new insights, a cleaner, more readable style, and even the ability to export insights to Trello.

New Pro homepage

This simplified page makes it easier to find and access the tools and services included with your Pro subscription.

Lots and lots of other updates and fixes

If you are interested in all of the details, we added a What’s New page with a more detailed chronology of updates, both big and small.


Moz Pro focus for 2016

2016 is the year that Moz Pro refocuses completely on SEO. We’ve diverted our focus in the past, adding peripherally relevant features to Moz Pro, only to find that customers didn’t value them and that we’d spread ourselves too thin.

As a company, Moz has honed its strategy, breaking into smaller teams that can each maniacally focus on the primary needs of their customers. This means that our very driven and talented Moz Pro product and engineering teams will get to focus their time, energy, and ingenuity in these areas:

  • Rank Tracking
  • Keyword research
  • Site audits and optimization
  • Link analysis and acquisition
  • Great workflow to tie these together

As always, we will strive to provide the best data and metrics possible to help you evaluate, understand, and improve your search engine presence.


A sneak peek at some upcoming releases and improvements

I’m excited to share some of what we have in store for 2016! Our talented engineers, product managers, and SEO experts, along with some exceptionally helpful customers, have collaborated to dream up some big things for the coming year. Expect powerful new data sets, more intuitive workflows, and big improvements to core parts of the Pro subscription. Here’s a preview of some of the big things coming your way in the next few months.

Keyword Explorer

This audacious effort has been some time in the making, and a significant passion project for Rand. We’re really looking to make this keyword research tool stand out in the market, so while we already have a working version, we’re still vetting it with our beta testers and adding the final touches so that it can be as powerful and easy to use as possible.

Rand shared a sneak peek at the tool on Twitter a short while back.

Rankings history, advanced filtering, and snappier data

On its face, unlimited rankings history doesn’t sound that groundbreaking. That’s because it isn’t. It’s a feature we’ve wanted to offer for some time, but couldn’t due to the limitations of our application’s architecture. Those limitations are history. We’ve invested in a completely redesigned, highly scalable infrastructure that allows us to unleash the entire history of your data, and make it viewable, manipulable, filterable, exportable, and much faster to load.

This will also allow us to build in some more powerful features in the near future, making rankings much more usable if you track a lot of keywords. We are officially launching this update next week, but the engineering team was a little impatient — and so we quietly launched these improvements today. If you’re already a Moz Pro customer, go to your campaign rankings page to see these updates right now!

Related Topics

Along with keyword research, topical analysis and optimization has become an important focus for SEOs. Moz’s Data Science team has built out a great service to analyze and extract topics from any page on the Web. We will soon offer this service in Moz Analytics campaigns to help you discover topically-related keywords based on competitors in the SERP. Adding these keywords to your pages can help search engines identify your pages’ topic and intent, and help you rank for a broader set of queries.

Site crawl and auditing

This update is still in the very early stages, but expect to see some big improvements in site crawl performance and features in the first half of this year.

Bigger, more frequent, and more reliable link index updates

This is a significant priority for this year. We’ve already made some important progress, but there’s still much to do.

And much more

We plan to really beef things up this year. Some features are still in the planning phase, and some are just raw ideas at this point. We will be sharing updates frequently.


In conclusion: Thank you!

Moz is nothing without all of you, our amazing customers and community. Thank you for continuing to engage, be critical, send praise, divulge your best tactics, share lessons from defeats, help strangers, and make new friends. Thank you for being transparent, authentic, generous, fun, empathetic, and exceptional. We will always strive to do the same.

PS: Please keep letting us know what you need

One more thing before I sign off — please, continue to share your feature requests and frustrations with us so we can improve Moz Pro and build the things you need most.


What Can Twitter Teach You about the Top 6 US Presidential Candidates?

Posted by annboyles

As many of you — particularly our friends in Iowa and New Hampshire — are keenly aware, the height of the US presidential election season is upon us. In recent years, social media has had an increasingly profound impact on campaigns and election conversations, with candidates and their supporters (and detractors) taking to Twitter in droves. Here at Moz, we’re seizing the opportunity to examine the Twitter accounts of candidates seeking the highest office in the US to see what insights we can discover by sifting through the data.

Over the next nine months, we’ll be putting our Twitter analytics tool, Followerwonk, to work on the presidential candidates: analyzing followers, tracking changes in followership around key moments in time, and sharing any other interesting tidbits we run into along the way.

Join us on the Moz Blog between now and the general election in November as we reveal insights on followership trends of the top-performing presidential candidates. You can also follow the data we’re tracking for the current six top-performing candidates* in real time by visiting their individual Followerwonk analysis report pages:

*Top-performing candidates as measured by their finishing positions in the 2016 Iowa Caucus and New Hampshire primary. This list will evolve with the election cycle.

Top states getting their political tweets on

Let’s dive in!

We wanted to identify the top 10 states where users were tweeting about each candidate during 24-hour periods surrounding the Iowa Caucus and the New Hampshire primary.

So, what’s the big deal with Iowa and New Hampshire?

We’re going to take a quick step back here for those of you who may not be familiar with these events to explain why they matter. The Iowa Caucus and New Hampshire primary are the first two electoral contests during the US presidential election cycle. It’s the first time real voters cast real votes to narrow the field of candidates hoping to become the next president. These events play a significant role in shaping the public conversation about which candidates will ultimately be best positioned to capture their party’s nomination.

We won’t dig into the debate on this forum about whether either the Iowa Caucus or New Hampshire primary should be as influential as they are — plenty of other blogs and media outlets have that covered. But, there’s no denying they play an important role in setting the tone for the electoral contests still to come.

Which states were a-twitter on Twitter?

Back to our original question: Which states tweeted the most about the candidates during the first two electoral contests? To capture this, we examined all tweets mentioning the candidates in the Twitter Sample Stream and used Followerwonk’s location resolution algorithm to determine which US states were most represented in users’ Twitter bios. We then normalized results based on state population size. It’s worth noting that we used the locations from Twitter bios, which cannot always be resolved accurately. That said, we think it paints an interesting picture.
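The population-normalization step can be sketched in a few lines. The tweet counts below are hypothetical stand-ins (the post doesn’t publish the raw numbers), and the populations are only rough figures:

```python
def tweets_per_100k(tweet_counts, populations):
    """Normalize raw per-state tweet counts to tweets per 100,000 residents."""
    return {
        state: round(count / populations[state] * 100_000, 2)
        for state, count in tweet_counts.items()
    }

# Hypothetical counts; populations are rough mid-decade figures
tweet_counts = {"IA": 4200, "NY": 19000}
populations = {"IA": 3_100_000, "NY": 19_500_000}

tweets_per_100k(tweet_counts, populations)
# {'IA': 135.48, 'NY': 97.44}: per capita, smaller Iowa out-tweets New York
```

Without this normalization, populous states would dominate every top-10 list regardless of how engaged their users actually are.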

Top states tweeting during the Iowa caucus

Top states tweeting during the New Hampshire primary

Not surprisingly, you’ll see that the US capital, Washington, DC, is a hotbed of political activity on Twitter, ranking as the #1 location for tweets mentioning candidates during both the Iowa Caucus and New Hampshire primary. Other states with highly active political tweeters include Nevada, Iowa (which appeared on every candidate’s top 10 list for the Iowa Caucus), New Hampshire, and New York, which appeared on every candidate’s top 10 list for the New Hampshire primary.

Do political party patterns reflect voting patterns?

When reviewing the most active states broken down by candidates, we wanted to see if the political party patterns reflected how citizens of those states voted in the most recent 2012 presidential election. In other words: in the early 2016 contests, did GOP candidates generally see the most activity from “red” states (traditionally Republican-voting) and Democratic candidates from “blue” states (traditionally Democratic-voting)? Sometimes yes and sometimes no.

For Democratic candidates, the answer is generally yes: only one of Sanders’s top 10 states surrounding the Iowa Caucus (Montana) and the New Hampshire primary (Indiana) was a red state in 2012, and only two of Clinton’s top 10 states for the Iowa Caucus (Alaska and Indiana) and New Hampshire primary (Indiana and Tennessee) were red states in 2012.

But for Republicans it was not as clear cut. Only two states (Alaska and South Carolina) on Rubio’s Iowa Caucus top 10 list, and only four states on both Trump’s list (Alaska, Idaho, North Dakota, South Carolina) and Kasich’s list (Nebraska, Louisiana, Arizona, and Oklahoma) were decidedly red states in 2012. In fact, Ted Cruz was the only candidate to net a majority of decidedly 2012 red states in his 2016 Iowa Caucus top 10 list. Results from the New Hampshire primary were fairly similar, although this time around both Trump and Cruz netted 50 percent decidedly red states on their top 10 lists, while Rubio and Kasich had closer to 30 percent decidedly red states on their lists.

So why the skew in data? We suspect it’s likely due to the fact that Twitter’s overall user base tends to have more liberal leanings; data suggests more Twitter users identify as Democrats than Republicans.

Is a Twitter bio worth a thousand words?

If a picture is worth a thousand words, we wanted to create pictures of who follows each candidate. So we generated word clouds based on the most frequently used one-word and two-word phrases in the Twitter bios of each candidate’s followers.
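Under the hood, a word cloud like these boils down to phrase counting. Here's a rough sketch of the n-gram tally using made-up bios; this is not Followerwonk’s actual implementation:

```python
from collections import Counter
import re

def top_phrases(bios, n=2, k=3):
    """Return the k most common n-word phrases across a list of bios."""
    counts = Counter()
    for bio in bios:
        words = re.findall(r"[a-z']+", bio.lower())
        counts.update(" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return counts.most_common(k)

# Hypothetical bios for illustration
bios = [
    "Real estate investor. Love my family.",
    "Love real estate and business",
    "Grad student who loves video games",
]
top_phrases(bios, n=2)  # [('real estate', 2), ...]
```

Real bios would need more careful tokenization (handles, emoji, stop words), but the core is just a frequency count.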

For the GOP candidates, some commonalities and divergences exist. For example, every candidate has “love” as the #1 word in their followers’ Twitter bios, but Trump is the only one without “conservative” as a close second. In fact, “conservative” does not appear anywhere in Trump’s one-word follower word cloud. And while “business” appears in every GOP candidate’s one-word follower word cloud, only for Trump does it rank in the top five.

Trump one-word bio word cloud

Cruz one-word bio word cloud

Rubio one-word bio word cloud

Kasich one-word bio word cloud

Then there’s the prominence of religious words in the bios of people following GOP candidates. For Cruz and Rubio, the word “God” comes in at #5 and #7, and “Christian” at #7 and #9, respectively. According to Iowa Caucus entrance polls, 64 percent of Republican caucus-goers were evangelical Christians, which may help explain Cruz’s first-place finish and Rubio’s better-than-expected third-place finish in the nation’s first presidential electoral contest of 2016.

Out of all of the top six presidential candidates, only Kasich’s one-word bio word cloud features the name of a state (Ohio, the state of which he is governor). This suggests that, prior to his New Hampshire primary second-place finish, many of his followers may have been from Ohio. Now that he’s made more of a splash on the national stage, this may change as he gains more followers from around the country.

When we looked at two-word bio clouds, we found both the expected and unexpected. For instance, the words “real estate” ranked in the top two phrases for (not surprisingly) Trump, but also for Hillary Clinton and John Kasich, which we didn’t expect to be featured so prominently.

Trump two-word bio word cloud

Clinton two-word bio word cloud

Kasich two-word bio word cloud

Bernie Sanders’s two-word bio cloud boasts a number of phrases one might associate with a more youthful follower base, including “video games,” “pop culture,” “grad student,” “state university,” and “college student.” Voting data suggests Sanders’s Twitter followers reflect those supporting him electorally: according to both Iowa Caucus and New Hampshire primary exit polls, a whopping 84 percent of Democrats under the age of 30 voted for Sanders. It’s also worth noting that Sanders’s two-word bio cloud was the only one to feature the candidate’s own name, suggesting that his followers are sufficiently interested in him to place his name in their own Twitter bios.

Sanders two-word bio word cloud

How candidates tweet: Retweets vs. original content

For most candidates (Clinton, Kasich, Rubio, and Sanders), around 30 percent of their total tweets are actually retweets. Ted Cruz, however, is an enthusiastic retweeter: at 65.5 percent, the majority of his tweets are retweets. Donald Trump is on the other end of the spectrum, preferring to generate original content: his retweet percentage is only 5.5 percent.
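Retweet percentages like these are easy to compute from a timeline. A minimal sketch, assuming tweet objects shaped like the Twitter REST API’s, where a retweet carries a `retweeted_status` field; the sample data is made up:

```python
def retweet_share(tweets):
    """Percentage of a timeline's tweets that are retweets.

    Assumes Twitter-API-style tweet dicts: a retweet has a
    non-empty 'retweeted_status' entry, an original tweet does not.
    """
    retweets = sum(1 for t in tweets if t.get("retweeted_status"))
    return round(100 * retweets / len(tweets), 1)

# Hypothetical sample: 1 retweet out of 4 tweets
sample = [{"retweeted_status": {"id": 1}}, {}, {}, {}]
retweet_share(sample)  # 25.0
```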

Battle of the sexes

What insights can we glean from the gender breakdown of each candidate’s followers? Turns out, nothing too revolutionary. Followerwonk’s gender ratio analysis produced results falling roughly in line with what one would expect from the demographic breakdown of the larger Republican and Democratic electorates. The Pew Research Center has found that, in the American electorate as a whole, women lean Democratic by 52 percent vs. 36 percent Republican, while men are roughly evenly divided at 44 percent Democratic, 43 percent Republican.

In Followerwonk’s gender analysis, we uncovered similar findings: the followers of the Republican candidates tended to skew more male (anywhere from 62–68 percent of gender-determined followers), while the followers of the Democratic candidates were more evenly divided between men and women (women made up 48–52 percent of gender-determined followers). As Twitter’s overall user base has a greater percentage of men, this aligns closely with the Pew results.

It should be noted that a significant proportion of each candidate’s follower base falls under “undetermined” — that is, Followerwonk is unable to determine their gender — but the results are still illuminating. For instance, Clinton is the only candidate with a larger percentage of female followers than male.

Gender breakdown of Trump followers

Gender breakdown of Rubio followers

Gender breakdown of Cruz followers

Gender breakdown of Kasich followers

Gender breakdown of Sanders followers

Gender breakdown of Clinton followers

It’s all in the timing

Most candidates received the bumps in followership you would expect from the greater visibility that comes with the debates and the Iowa Caucus: Democratic candidates saw a spike in new followers around January 17 and February 4 (the most recent Democratic debates), whereas Republican candidates saw similar spikes around January 28 and February 6 (the most recent Republican debates). Additionally, all candidates increased their follower bases around February 1, 2016, the date of the Iowa Caucus (and, generally to a lesser extent, around the New Hampshire primary on February 9). Sometimes candidates saw spikes when the opposite party debated; Hillary Clinton, for example, experienced a modest uptick (14,050 new followers) around the Republican debate on January 28.

Clinton follower change chart

While Sanders hasn’t, on the whole, been gaining as many new followers as Clinton on a daily basis, his spikes on the dates of the Democratic debates, the Iowa Caucus, and the New Hampshire primary were much higher: 23,647 to Clinton’s 16,979 for the January 17 Democratic debate, 25,544 to Clinton’s 9,341 for the February 4 Democratic debate, 30,592 to Clinton’s 16,613 on the Iowa Caucus, and 17,529 to Clinton’s 11,396 for the New Hampshire primary. This may be because, while most people are already familiar with Clinton, more are finding out about Sanders during these major events.

Sanders follower change chart

If you thought people on Twitter would be turned off by a candidate eschewing a debate, you’d be mistaken. In fact, Trump saw a significant jump in followers (41,948) immediately following his announcement that he would not participate in the January 28, 2016 GOP presidential debate hosted by FOX News. Indeed, while Cruz and Rubio (the two largest competitors to Trump during the Iowa Caucus) saw their largest increases in new followers after that contest, Trump’s largest increase was around the date of the Republican debate he did not attend. Even his smallest recent bump in followers for the February 6 Republican debate (17,768 new followers), however, was still larger than either Cruz’s or Rubio’s largest spikes for the Iowa Caucus (11,599 and 17,342 respectively).

Prior to the New Hampshire primary, Kasich’s largest bump in new followers occurred after the February 6 debate. His biggest spike ever, however, came following his second-place finish in the New Hampshire primary: across the day before, the day of, and the day after the primary, he saw a nearly 4.5 percent net increase in followers. Compare that to Cruz and Rubio (finishing 3rd and 5th respectively), who both experienced a less than 1 percent increase in followership during the same period. Trump, as the first-place finisher, saw his biggest follower increase since the January 28 Republican debate, gaining more than 35,000 new followers.

[Image: Trump follower change chart]

[Image: Rubio follower change chart]

[Image: Cruz follower change chart]

[Image: Kasich follower change chart]

What’s next?

If you enjoy nerding out on this data as much as we do, check back with the Moz Blog between now and the general election in November, where we’ll regularly report on more of our analysis and findings. You can also follow Followerwonk on Twitter, where we’ll share interesting nuggets and stats we uncover along the way.

In the meantime, let us know any interesting trends you’re seeing with political candidates and issues on Twitter. Maybe you’re using Followerwonk or other social media analysis tools to track candidates for local office in your hometown or to keep a pulse on hot political elections and referendums internationally. Feel free to share your insights in the comments — we’d love to hear about them!

Special shout out to Marc Mims, whose mad developer skillz brought us all this juicy data, and to Angela Cherry, whose obsession with politics meant she couldn’t resist co-authoring this post.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

We’re Thrilled to Announce… Moz Local Insights Released from Beta!

Posted by dudleycarr

Today we’re excited to remove the Beta label on Moz Local Insights! This feature has been in beta since November when David first announced it. This release of Insights continues the push to provide our customers with a holistic understanding of local search presence.

First, a tremendous thank you to all of our customers who gave us feedback about what works and what doesn’t work.

In the two months that Insights has been active for all 60K+ listings in Moz Local, we’ve collected 2.5 billion individual metrics! Insights is tracking 220K search keywords, traffic for 13K locations in Google Analytics, and 1.1M reviews.

Now, with the addition of Google My Business data and Google/Facebook reviews, Moz Local Insights is ready to have organizations large and small depend on it to deliver value for customers investing in local search.

This release includes a large number of small improvements and tweaks. However, the following are the big changes since we first announced Insights:

First up: Google My Business data

The release of the Google My Business API has been a welcome change for everyone doing local listing management. We have big plans in the near future to make extensive use of the API to help with distribution by eliminating the need for downloading and uploading CSVs every time listing information changes.

Moz Local Insights helps clients with many US/UK locations see what their performance looks like in aggregate across all of their Google My Business locations. We’re using the API to collect Google My Business data that is currently locked away in the dashboard and accessible for only a single location at a time.

Here’s what Google My Business data looks like within Moz Local for a medium-sized client:

[Image: aggregated Google My Business data in Moz Local]

The screenshot shows 35 listings aggregated in a graph with the same data that’s available in GMB. Insights will also break out locations by best performing, worst performing, and top gains and losses. The same data is available for the click data.

Google My Business makes up to 90 days of data available. With Insights, we’ll track this data over time, starting with the initial 90 days of data available in GMB at the time we’re given authorization.

Second: PDF reports

From the beginning, Moz Local Insights has aimed to focus on the data and the visualizations that are the most meaningful to you and your client. The missing link has been making that data available via PDFs so that they can easily be shared with others.

We provide a single PDF organized into the following Insights sections:

  • Distribution Insights (Accuracy, Listing Score, and Reach)
  • Performance (Google My Business and Google Analytics)
  • Visibility (Local pack and organic)
  • Reputation

Our focus for this feature was to make the PDFs easy to set up, compelling, and beautiful. Here’s a peek at what the PDF report looks like for Moz.com:

[Image: sample Moz Local Insights PDF report]

You can check out the full sample PDF report here.

Starting today

With Moz Local Insights coming out of Beta, we’re starting our normal 2-week trial for listings. During the 2-week trial, you’re free to try out all of the functionality — including the newly announced features. You can purchase the listings at any point during the trial. Pricing is only $120 per year for self-serve locations and $99 per year for enterprise locations.

What’s next?

We’re happy to get all of the improvements out, but there’s more goodness along the way — and shortly! Here’s the list of things we’ll tackle next for Search Insights:

  • Exporting via CSV
  • Omniture support
  • Keyword suggestions
  • On-demand PDF exports
  • More review sites including Instagram

Even though the beta period is over, we’re just as eager to hear from you about what works or could be better with Moz Local Insights. Please feel free to reach out to me or send feedback here.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

What Really Earns Loyalty in the Local Business World?

Posted by MiriamEllis

St. Valentine’s Day is on the way, and I’ve been thinking about love and loyalty as they apply in the local business world. It’s been estimated that it costs 7x more to acquire a customer than to retain one; in my city, most of the major chains offer some type of traditional customer loyalty program. Most rely on a points-based system or an initial sign-up investment to receive benefits, but I wondered about Main Street.

I picked 15 locally owned businesses at random to see if they had created loyalty programs, and then I checked Google and Yelp to see if any of these programs had been inspiring enough to generate mentions in reviews (the most obvious online signs of devotion or dismay) in the past year. Here’s what I found:

| Business Model | Loyalty Program | Mentions in Reviews |
| --- | --- | --- |
| Toy store | $10 coupon for every $200 spent; $5 birthday card gift; teacher discount | 1 mention |
| Grocery store | Grocery purchasing card that donates to local schools | 0 mentions |
| Video store | Rent 12 videos, get 1 free | 0 mentions |
| Craft store | Senior Tuesday 10% discount; birthday discount of 20% | 0 mentions |
| Hardware store | No program | N/A |
| Bookstore | Purchase $25 rewards card and get 10% off of purchases for 1 year | 1 mention |
| Restaurant | Complimentary birthday or anniversary dessert | 0 mentions |
| Deli | No program | N/A |
| Café | Get 10 stamps for beverage purchases and get a free drink | 2 mentions |
| Clothing boutique | No program | N/A |
| Kitchen store | No program | N/A |
| Bike shop | Spend $6,000 and receive free flat repair, swag, free event entry, and more | 0 mentions |
| Hair salon | Get 7 cuts and receive ½ off on merchandise | 0 mentions |
| Bakery | No program | N/A |
| Pet supply | Buy $5 card, get 5% off of merchandise for the year | 2 mentions |

At a glance, 2/3 of the independently owned businesses in this city have created loyalty programs, and in the last year, there were 6 total mentions of these benefits in all of the reviews earned by the 10 businesses offering these programs. Of course, this doesn’t mean that more customers aren’t participating in these programs, but it does seem to indicate that the majority of customers feel positive about a business for reasons other than official loyalty programs, at least in my small study.

So, what does foster loyalty? In the reviews I looked at, nearly all happy customers referenced either a specific great experience or an ongoing positive aspect of the business. These memories, if impressive enough, are what drive good reviews and help customers to remember to return for further good experiences. Then there’s the flip side — experiences so negative that they can drive a customer away forever.

Given the high cost of acquiring new customers vs. retaining existing ones, I’m going to document here 5 personal experiences with local businesses that made me vow never to return, and then I’ll follow that up with 5 excellent experiences that not only merited a great review from me, but have also led to multiple transactions over the years. It’s my hope that these personal mini case studies will give local business owners and local SEOs a glimpse into the mind of one unbiased consumer, and that the findings will be widely applicable to most business models.

Bad business

For each bad experience below, I’ve noted what could have made it better.
Lack of empathy

Worms in the rice bin of the bulk section of the local grocery store! Yuck! Reply from the store clerk? A very bored “Oh.” No apology, no offer to get a manager. Not even an, “Eww!” of shared feeling.

I’ve never bought bulk from them again.

Show me you care

The wormy rice grocery store clerk could have mirrored my dismay, gotten a manager over immediately to explain how the merchandise had gotten bugs in it, and let me see them remove the bin before I left the store.

Staff not only need to be treated empathetically by employers, but need to be trained to share that culture of empathy when confronted with customer complaints.

Lack of training

Shopping for an exercise bike at the local sporting goods store, I was pleased to find floor models you could try out. Unfortunately, none of the staff knew how to turn the bikes on. They all stood there scratching their heads and saying, “I dunno. Maybe there’s a key or something.”

Needless to say, a transaction never happened.

Show me you’re trained

Staff could have phoned the owner to ask how to operate their bikes, or at least have taken my name and number to have the owner invite me back for a personal demo.

Owner could have assured me he was scheduling a staff-wide training session to ensure I’d have a better experience next time.

Lack of management

In the midst of a family emergency in a rural area, I needed lodging pronto. What I found was a room filled with dead bugs, inch-thick dust, and a fridge festooned with green mold. Owner response? His housekeeper was having “emotional problems” and he guessed he ought to check up on the place from time to time. Ya think?

Had to scramble for another place to stay in the next town, which was the last thing I needed to be doing that day.

Show me you’re on top of things

This couldn’t be fixed on the spot because the owner had let things slip for too long. He might have offered to help me find another place to stay, given me some local coupons, or done something to express his regret.

Any business owner who isn’t overseeing his own business lacks the necessary commitment to succeed.

Lack of quality

My community has a hate-hate relationship with the only local fabric store franchise, attested by a volume of negative reviews. The place is an absolute mess and basic, high-quality fabrics are almost always lacking. The inventory is cheap and disorganized.

I’ll drive for hours to shop elsewhere, or shop online. This chain is my very last resort of desperation, because I know I’ll be disappointed and feel unhappy if I go there.

Show me you’re responsive

Read the bad reviews and then poll the customers to find out what local sewing enthusiasts would love to see stocked in the inventory. And keep the store clean at all times!

Playing the monopoly card because you’re the only game in town is not going to win loyalty. Should a more responsive competitor open its doors, the existing chain could see its customers leave in droves.

Lack of accountability

When the electronics franchise in my area sold me an external hard drive that blew out my computer, I expected… something. Maybe an apology? Maybe a free fix-it service? I got neither.

Instead, I got a condescending speech from a manager explaining that he wasn’t responsible for the products he sold. If I wanted to pay his tech team for diagnosis, they’d get back to me in a week to tell me how much more it would cost to fix my computer. I haven’t trusted the company since.

Show me you’re responsible

Instead of rudeness, the manager could have mirrored the horror I was feeling about my computer, offered free overnight diagnosis, and demonstrated that corporate policy stood both behind the products sold and behind me — the customer!

Any business policy that fails to recognize that customers are the lifeblood of existence is exposing a glaring weakness, and a competitor with a genuine plan to win customer loyalty can make that weakness work for them.

Good business

Now, for the good stuff! These experiences were impressive enough to make it into my permanent memory bank, and moreover, have been the foundation of repeat transactions. Here’s a chance to consider whether your customers are having similar positive experiences when doing business with your company.

For each great experience below, I’ve noted why it works.
Superior selection

Twice a month, I take a 3-hour trip to shop at an independent market that offers a selection of produce and groceries with which the local natural food chain can’t even compete. The food has a clear emphasis on local sourcing, is clearly labeled with its farm or origin, is fresher, and — a major biggie for me — is 100% organic.

Markets nearer to me simply don’t have this superior quality, aren’t 100% organic, and often carelessly mislabel products.

Proven quality

You’ll notice I didn’t say I shop there because it’s cheap. Quality matters more to me than anything when it comes to the food I purchase, so I’ll go a country mile and to some expense to get the best I can afford. This can be applied to any product lineup when the customer base is looking for the best.

You can go the extra mile, as well, to explain why your products/services are superior to other offerings. Educate customers and then let them experience the difference.

Superior staff

My favorite plant nursery is owned by a family that knows absolutely everything there is to know about gardening. They’ve got an amazing library of horticultural books, too, and often look up unusual plants for me, sharing their knowledge and their delight in all things green.

I value their expertise, and make my major annual purchase of vegetable starts from this nursery each spring, knowing every question I have will receive a helpful answer.

Proven training

Everyone who works at this business either knows the answers to my questions or knows how to get those answers for me. You may not need a staff of wizards, but the infrastructure needs to be there so that every employee knows who to ask when they don’t know the answer to a product or service question.

Your investment in employee training — in educating the people who represent your business — is priceless.

Superior convenience

My family may be in the minority, but we only own one car. And when that car gets worked on, we’ve had oodles of fun sitting for 4 hours on a hard bench in a dirty parking lot in 101-degree weather, waiting to get back on the road.

But one local automotive chain has started offering courtesy cars — you can believe we’re going for that!

Proven support

It’s the sensitive business that implements policies that make life a little easier for customers at times of inconvenience. Maybe that means offering water in a lobby, shortening check-out lines, or narrowing service window timeframes to limit long waits.

Put yourself in the customer’s shoes in a not-fun situation and ask if there’s anything that would make it a bit easier. Offer that support.

Superior atmosphere

Are there places you hate to shop? That dark cave, or hulking warehouse, or total zoo! You feel lousy and tired being there. You’d rather be anywhere else.

Remember the fabric store, mentioned above? In contrast, there’s a small quilt shop in town that I can go to for some of the things I need, and the soft lighting, soft carpets, and beautiful organization of the merchandise make shopping there a treat and a pleasure. I shop there whenever I possibly can.

Proven welcome

Cleanliness, organization, a user-friendly floor plan, and visual appeal are conducive not just to one-time purchases but to return visits to enjoy the welcoming vibes of a place.

Volumes have been written about trapping customers in “mazes” to make them purchase more. Sadly, it works, but do you feel you’ll win more loyalty and better reviews from customers who feel trapped or customers who feel welcomed?

Superior individuality

Big brands have their place, but it’s at the locally-owned business that customers are likely to have the most unique shopping experiences. From the first time I visited one of the many farm stands in the area in which I live, I was delighted with their rustic tin shed, befriended by their down-to-earth staff, and touched that they often threw something extra into my shopping bag — an apple, a bunch of thyme, a variety of melon I’d never tried.

The upshot: I shop there once a week, every week of the year.

Proven creativity

Big box stores may be here to stay, but Main Street is still fighting. The big box is not going to give you a free lettuce, or lend you an umbrella when it rains, or tell you to pay them next time when their power goes down. It’s not in their corporate policy to do those things.

Your locally-owned business gets to react to spur-of-the-moment customer needs, creatively customize shopping experiences, and put a genuine human face on transactions. With a unique approach, you can become a cherished local institution.

Making a local business policy

For independently-owned businesses, official loyalty programs can offer an extra reason for customers to return to you, but the findings of my little research project indicate that they are not the core catalyst of great reviews or repeat business.

As a local business owner, you have the freedom to make a particular culture, rather than a program, your official policy. So much of this comes down to basic acts of thoughtfulness: matching product/service quality to customer needs, running a well-cared-for ship that puts customers in the mood to buy, and training staff not just to answer questions but to use their own talents to provide creative solutions on the spur of the moment. Sometimes, it’s the smallest thing that can make a memory and gain consumer loyalty — something as small as offering genuine thanks for doing business, or genuine empathy when a customer is disappointed. While looking at reviews, I couldn’t help noticing the repeat use of the word “love.”

“I love their selection!”

“I love how helpful they are!”

“I love their bagels!”

Can you think of any other word with a more promising ring of loyalty?

Humans are generally loyal to family and friends because of the ties that bind, stitched with countless memories of important shared experiences. With business, it’s different. I’m not intrinsically bound to any company — not until they’ve created enough of a good impression to make it into my permanent memory bank, reminding me to “please, come again.” And a bad enough experience stays imprinted on my mind for a very long time, too. Like the elephant, I never forget.

What will your local business be doing in 2016 to go above and beyond? To go from just doing business to doing it memorably well? Please, share your plans to inspire our community!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

A Checklist for Native Advertising: How to Comply with the FTC’s New Rules

Posted by willcritchlow

The FTC recently published their updated rules (and more accessible “guidance”) on what constitutes a “misleading” native advert [PDF]. I’ve read them. I only fell asleep twice. Then, near the end, a couple of bombshells.

But first, the background.

Native ads and the FTC

For those who haven’t been following the trends closely, native advertising is a form of digital advertising whereby adverts are included “in-stream,” interspersed with regular editorial content.

On social platforms, this takes the form of “promoted” posts — including stories or videos in your Facebook stream or tweets in your Twitter stream from brands you didn’t explicitly follow or “Like.” See, for example, this Coursera ad in my Facebook stream:

[Image: Coursera promoted post in a Facebook news feed]

Native ads are particularly interesting on mobile, where the smaller screens and personal nature tend to make anything that isn’t in-stream intrusive or irrelevant.

For publishers, native advertising looks more like brand content presented as whole pages rather than as banners or advertising around the edges of the regular content. It can take the form of creative content promoting the brand, brand-funded “advertorial,” or anything in between. See, for example, this well-labelled advertorial on Autocar:

[Image: labelled advertorial page on Autocar]

You might notice that this is actually a lot like offline magazine advertising — where “whole page takeovers” are common, and presented in the “stream” of pages that you turn as you browse. And in a similar way to the digital version, they can be glossy creative or advertorial that looks more like editorial content.

The big way that the digital world differs, however, is in the way that you find content. Most people turn pages of a magazine sequentially, whereas a lot of visitors to many web pages come from search engines, social media, email, etc. — essentially, anywhere but the “previous page” on the website (whatever that would even mean).

It’s this difference that has led the FTC to add additional regulations on top of the usual ones banning misleading advertising — the new rules are designed to prevent consumers from being misled if they fail to realize that a particular piece is sponsored.

For the most part, if you understood the spirit of the previous rules, the new rules will come as no surprise — and the newest parts mainly relate to ensuring that consumers are fully aware why they are seeing a particular advert (i.e. because the advertiser paid for its inclusion) and they are clear on the difference between the advert and the editorial / unpaid content on the publisher’s site.

At a high level, it seems very reasonable to me — the FTC wants to see clear disclosures, and will assess confusion and harm in the context of consumers’ expectations, the ways in which confusion would cause them to behave differently, and will take into account the rest of the publisher’s site:

The Commission will find an advertisement deceptive if the ad misleads reasonable consumers as to its nature or source, including that a party other than the sponsoring advertiser is its source. Misleading representations of this kind are likely to affect consumers’ decisions or conduct regarding the advertised product or the advertisement, including by causing consumers to give greater credence to advertising claims or to interact with advertising content with which they otherwise would not have interacted.

And, crucially:

The FTC considers misleadingly formatted ads to be deceptive regardless of whether the underlying product claims that are conveyed to consumers are truthful.

They summarize the position as:

From the FTC’s perspective, the watchword is transparency. An advertisement or promotional message shouldn’t suggest or imply to consumers that it’s anything other than an ad.

Subjectivity

I was interested to see the FTC say that:

Some native ads may be so clearly commercial in nature that they are unlikely to mislead consumers even without a specific disclosure.

While I think this would be risky to rely upon without specific precedents, it nicely shows more of the FTC’s intent, which seems very reasonable throughout this briefing.

Unfortunately, the subjectiveness cuts both ways, as another section says:

“…the format of [an] advertisement may so exactly duplicate a news or feature article as to render the caption ‘ADVERTISEMENT’ meaningless and incapable of curing the deception.”

It’s not easy to turn this into actionable advice, and I think it’s most useful as a warning that the whole thing is very subjective, and there is a lot of leeway to take action if the spirit of the regulations is breached.

The controversial and unexpected parts

It wasn’t until quite far through the document that I came to pieces that I found surprising. The warning bells started sounding for me when I saw them start drawing on the (very sensible) general principle that brands shouldn’t be able to open the door using misleading or deceptive practices (even if they subsequently come clean). Last year, the FTC took action against this offline advert under these rules:

[Image: the offline advert in question]

They ruled that the price advertised in big red font was a “deceptive door opener” because:

To get the advertised deal, buyers needed to qualify for a full house of separate rebate offers. In other words, they had to be active duty members of the military and had to be recent college grads and had to trade in a car.

Bringing it back to web advertising, the Commission says:

Under FTC law, advertisers cannot use “deceptive door openers” to induce consumers to view advertising content. Thus, advertisers are responsible for ensuring that native ads are identifiable as advertising before consumers arrive at the main advertising page. [Emphasis mine]

If you understand how the web works, and how people find content on publishers’ sites these days, this will probably be starting to seem at odds with the way a lot of native advertising works right now. And your instincts are absolutely right. The Commission is going exactly where you think they might be. They title this set of new rules “Disclosures should remain when native ads are republished by others.”

Social media

In the guidelines document, the Commission includes a whole bunch of examples of infringing and non-infringing behavior. Example 15 in their list is:

The … article published in Fitness Life, “Running Gear Up: Mistakes to Avoid,” … includes buttons so that readers can post a link to the article from their personal social media streams. When posted, the link appears in a format that closely resembles the format of links for regular Fitness Life articles posted to social media. In this situation, the ad’s format would likely mislead consumers to believe the ad is a regular article published in Fitness Life. Advertisers should ensure that the format of any link for posting in social media does not mislead consumers about its commercial nature. [Emphasis mine]

Now, it’s obviously really hard to ensure anything about how people post your content to social media. In the extreme case, where a user uses a URL shortener, doesn’t use your title or your Open Graph information, and writes their own caption, there could be literally nothing in the social media post that is in the control of the advertiser or the publisher. Reading this within the context of the reasonableness of the rest of the FTC advice, however, I believe that this will boil down to flagging commercial content in the main places that show up in social posts.

Organic search

Controlling the people who share your content on social media is one challenge, but the FTC also comments on the need to control the robots that display your content in organic search results, saying:

The advertiser should ensure that any link or other visual elements, for example, webpage snippets, images, or graphics, intended to appear in non-paid search results effectively disclose its commercial nature.

…and they also clarify that this includes in the URL:

URL links … should include a disclosure at the beginning of the native ad’s URL.

Very sensibly, it’s not just advertisers who need to ensure that they abide by the rules, but I find it very interesting that the one party noticeably absent from the FTC’s list is the publishers:

In appropriate circumstances, the FTC has taken action against other parties who helped create deceptive advertising content — for example, ad agencies and operators of affiliate advertising networks.

Historically, the FTC has maintained that it has the authority to regulate media companies over issues relating to misleading advertising, but has generally focused on the advertisers; for example, when talking about taking special measures to target a glut of misleading weight loss adverts:

“…the FTC said it does not plan to pursue media outlets directly, ‘but instead wants to continue to work with them to identify and reject ads with obviously bogus claims’ using voluntary guidelines.”

In the case of native advertising, I am very surprised not to see more of the rules and guidelines targeted at publishers. Many of the new rules refer to platform and technical considerations, and elements of the publishers’ CMS systems which are likely to be system-wide and largely outside the control of the individual advertisers. Looking at well-implemented native ads released prior to these new guidelines (like this one from the Telegraph which is clearly and prominently disclosed), we see that major publishers have not been routinely including disclosures in the URL up to now.

In addition, individual native ads could remain live and ranking in organic search for a long time, yet the publisher could undergo redesigns / platform changes that change things like URL structures. I doubt we’d see this pinned on individual advertisers, but it is an interesting wrinkle.

Checklist for compliant native advertising

Clearly I’m not a lawyer, and this isn’t legal advice, but from my reading of the new rules, advertisers should already have been expecting to:

  • Ensure you comply with all the normal rules to ensure that your advert is not misleading in content including being sure to avoid “deceptive door openers”
  • “Clearly and prominently disclose the paid nature” of native adverts — it is safest to use language like “Advertisement” or “Paid Advertisement” — and this guide has detailed guidance.

In addition, following the release of these new rules, advertisers should also work through this checklist:

  • The URL includes a disclosure near the beginning (e.g. http://ift.tt/1Q4FOd5)
  • The title includes a disclosure near the beginning
  • The meta description includes a disclosure
  • All structured data on the page intended to appear in social sharing contains disclosures:
    • Open Graph data
    • Twitter cards
    • Social sharing buttons’ pre-filled sharing messages
    • Given the constraints on space in tweets especially, I would suggest this could be shorter than in other places — a simple [ad] probably suffices
  • Links to the native advert from elsewhere on the publisher’s site include disclosures in both links and images
  • Embedded media have disclosures included within them — for example, via video and image overlays
  • Search engines can crawl the native advert (if it’s blocked in robots.txt, the title tag and meta description disclosures wouldn’t show in organic search)
  • Outbound links are nofollow (this is a Google requirement rather than an FTC one, but it seemed sensible to include it here)
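Several of the on-page items in this checklist can be spot-checked automatically. The sketch below is my own illustration, not an FTC-endorsed tool; the disclosure tokens, example URL, and field names are all assumptions:

```python
import re

# Tokens that would plausibly count as a disclosure (an assumption, not FTC-specified wording)
DISCLOSURE_TOKENS = ("advertisement", "sponsored", "[ad]", "paid")

def has_disclosure(text):
    """True if any disclosure token appears in the text (case-insensitive)."""
    lowered = text.lower()
    return any(token in lowered for token in DISCLOSURE_TOKENS)

def check_native_ad(url, title, meta_description, og_title):
    """Return a dict of checklist item -> pass/fail for a native ad page."""
    path = re.sub(r"^https?://[^/]+", "", url)  # strip scheme and host, keep the path
    return {
        "url_discloses_early": has_disclosure(path[:30]),
        "title_discloses_early": has_disclosure(title[:30]),
        "meta_description_discloses": has_disclosure(meta_description),
        "open_graph_discloses": has_disclosure(og_title),
    }

# Hypothetical page modeled on the FTC's "Running Gear Up" example:
results = check_native_ad(
    url="http://example.com/advertisement/running-gear-up",
    title="Advertisement: Running Gear Up - Mistakes to Avoid",
    meta_description="Sponsored content from a running-gear brand.",
    og_title="[ad] Running Gear Up",
)
print(results)  # all four checks pass for this example
```

A real audit would also need to fetch rendered pages, inspect robots.txt, and verify nofollow on outbound links, but the same pattern of per-item checks applies.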

And it seems sensible to me that advertisers would require publishers to commit to an ongoing obligation to ensure that their CMS / sharing buttons / site structure maintains compliance with the FTC regulations for as long as the native advertising remains live.

Conclusion

There are elements of the new requirements that seem onerous, particularly the disclosure early in the URL which could be a technically complicated change depending on the publisher’s platform. I have looked around at a bunch of major publisher platforms, and I haven’t found a high-profile example that does disclose all advertorials in their URLs.

It also seems likely that full compliance with all these requirements will reduce the effectiveness of native advertising by limiting its distribution in search and social.

On balance, however, the FTC’s approach is based in sound principles of avoiding misleading situations. Knowing how little people actually read, and how blind we all are to banners, I’m inclined to agree that this kind of approach where the commercial relationship is disclosed at every turn probably is the only way to avoid wide-scale misunderstandings by users.

I’d be very interested to hear others’ thoughts. In particular, I’d love to hear from:

  • Brands: Whether this makes you less likely to invest in native advertising
  • Publishers: Whether these technical changes are likely to be onerous and if it feels like a threat to selling native ads

I look forward to hearing your thoughts in the comments.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Can SEOs Stop Worrying About Keywords and Just Focus on Topics? – Whiteboard Friday

Posted by randfish

Should you ditch keyword targeting entirely? There’s been a lot of discussion around the idea of focusing on broad topics and concepts to satisfy searcher intent, but it’s a big step to take and could potentially hurt your rankings. In today’s Whiteboard Friday, Rand discusses old-school keyword targeting and new-school concept targeting, outlining a plan of action you can follow to get the best of both worlds.

http://ift.tt/1obPAh3

http://ift.tt/1GaxkYO

Can We Abandon Keyword Research & On-Page Targeting in Favor of a Broader Topic/Concept Focus in Our SEO Efforts?

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week, we’re going to talk about a topic that I’ve been seeing coming up in the SEO world for probably a good 6 to 12 months now. I think ever since Hummingbird came out, there has been a little bit of discussion. Then, over the last year, it’s really picked up around this idea that, “Hey, maybe we shouldn’t be optimizing for researching and targeting keywords or keyword phrases anymore. Maybe we should be going more towards topics and ideas and broad concept.”

I think there’s some merit to the idea, and then there are folks who are taking it way too far, moving away from keywords and actually losing and costing themselves so much search opportunity and search engine traffic. So I’m going to try and describe these two approaches today, kind of the old-school world and this very new-school world of concept and topic-based targeting, and then describe maybe a third way to combine them and improve on both models.

Classic keyword research & on-page targeting

In our classic keyword research, on-page targeting model, we sort of have our SEO going, “Yeah. Which one of these should I target?”

She’s thinking about, like, best times to fly. She’s writing a travel website, “Best Times to Fly,” and there’s a bunch of keywords. She’s checking the volume and maybe some other metrics around “best flight times,” “best days to fly,” “cheapest days to fly,” “least crowded flights,” “optimal flight dates,” “busiest days to fly.” Okay, a bunch of different keywords.

So, maybe our SEO friend here is thinking, “All right. She’s going to maybe go make a page for each of these keywords.” Maybe not all of them at first. But she’s going to decide, “Hey, you know what? I’m going after ‘optimal flight dates,’ ‘lowest airport traffic days,’ and ‘cheapest days to fly.’ I’m going to make three different pages. Yeah, the content is really similar. It’s serving a very similar purpose. But that doesn’t matter. I want to have the best possible keyword targeting that I can for each of these individual ones.”

“So maybe I can’t invest as much effort in the content and the research into it, because I have to make these three different pages. But you know what? I’ll knock out these three. I’ll do the rest of them, and then I’ll iterate and add some more keywords.”

That’s pretty old-school SEO, very, very classic model.

New school topic- & concept-based targeting

Newer school, a little bit of this concept and topic targeting, we get into this world where folks go, “You know what? I’m going to think bigger than keywords.”

“I’m going to kind of ignore keywords. I don’t need to worry about them. I don’t need to think about them. Whatever the volumes are, they are. If I do a good job of targeting searchers’ intent and concepts, Google will do a good job recognizing my content and figuring out the keywords that it maps to. I don’t have to stress about that. So instead, I’m going to think about I want to help people who need to choose the right days to buy flights.”

“So I’m thinking about days of the week, and maybe I’ll do some brainstorming and a bunch of user research. Maybe I’ll use some topic association tools to try and broaden my perspective on what those intents could be. So days of the week, the right months, the airline differences, maybe airport by airport differences, best weeks. Maybe I want to think about it by different country, price versus flexibility, when can people use miles, free miles to fly versus when can’t they.”

“All right. Now, I’ve come up with this, the ultimate guide to smart flight planning. I’ve got great content on there. I have this graph where you can actually select a different country or different airline and see the dates or the weeks of the year, or the days of the week when you can get cheapest flights. This is just an awesome, awesome piece of content, and it serves a lot of these needs really nicely.” It’s not going to rank for crap.

I don’t mean to be rude. It’s not the case that Google can never map this to these types of keywords. But if a lot of people are searching for “best days of the week to fly” and you have “The Ultimate Guide to Smart Flight Planning,” you might do a phenomenal job of helping people with that search intent. Google is not going to do a great job of ranking you for that phrase, and it’s not Google’s fault entirely. A lot of this has to do with how the Web talks about content.

A great piece of content like this comes out. Maybe lots of blogs pick it up. News sites pick it up. You write about it. People are linking to it. How are they describing it? Well, they’re describing it as a guide to smart flight planning. So those are the terms and phrases people associate with it, which are not the same terms and phrases that someone would associate with an equally good guide that leveraged the keywords intelligently.

A smarter hybrid

So my recommendation is to combine these two things. In a smart combination of these techniques, we can get great results on both sides of the aisle. Great concept and topic modeling that can serve a bunch of different searcher needs and target many different keywords in a given searcher intent model, and we can do it in a way that targets keywords intelligently in our titles, in our headlines, our sub-headlines, the content on the page so that we can actually get the searcher volume and rank for the keywords that send us traffic on an ongoing basis.

So I take my keyword research ideas and my tool results from all the exercises I did over here. I take my topic and concept brainstorm, maybe some of my topic tool results, my user research results. I take these and put them together in a list of concepts and needs that our content is going to answer grouped by combinable keyword targets — I’ll show you what I mean — with the right metrics.

So I might say my keyword groups are there’s one intent around “best days of the week,” and then there’s another intent around “best times of the year.” Yes, there’s overlap between them. There might be people who are looking for kind of both at the same time. But they actually are pretty separate in their intent. “Best days of the week,” that’s really someone who knows that they’re going to fly at some point and they want to know, “Should I be booking on a Tuesday, Wednesday, Thursday, or a Monday, or a Sunday?”

Then, there’s “best times of the year,” someone who’s a little more flexible with their travel planning, and they’re trying to think maybe a year ahead, “Should I buy in the spring, the fall, the summer? What’s the time to go here?”

So you know what? We’re going to take all the keyword phrases that we discovered over here. We’re going to group them by these concept intents. Like “best days of the week” could include the keywords “best days of the week to fly,” “optimal day of week to fly,” “weekday versus weekend best for flights,” “cheapest day of the week to fly.”

“Best times of the year,” that keyword group could include words and phrases like “best weeks of the year to fly,” “cheapest travel weeks,” “lowest cost months to fly,” “off-season flight dates,” “optimal dates to book flights.”

These aren’t just keyword matches. They’re concept and topic matches, but taken to the keyword level so that we actually know things like the volume, the difficulty, the click-through rate opportunity for these, the importance that they may have or the conversion rate that we think they’re going to have.

Then, we can group these together and decide, “Hey, you know what? The volume for all of these is higher. But these ones are more important to us. They have lower difficulty. Maybe they have higher click-through rate opportunity. So we’re going to target ‘best times of the year.’ That’s going to be the content we create. Now, I’m going to wrap my keywords together into ‘the best weeks and months to book flights in 2016.'”
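That grouping-and-scoring decision can be sketched in a few lines of Python. Every keyword, volume, difficulty, and click-through-rate figure below is invented, and the scoring formula is just one plausible way to trade volume off against difficulty and CTR opportunity (the numbers are chosen so that "best times of the year" wins, as in the example above):

```python
# Hypothetical metrics per keyword: (keyword, monthly volume, difficulty 0-100, CTR opportunity 0-1)
keyword_groups = {
    "best days of the week": [
        ("best days of the week to fly", 1900, 72, 0.62),
        ("cheapest day of the week to fly", 2400, 68, 0.58),
        ("optimal day of week to fly", 150, 65, 0.71),
    ],
    "best times of the year": [
        ("best weeks of the year to fly", 880, 33, 0.66),
        ("cheapest travel weeks", 720, 27, 0.69),
        ("lowest cost months to fly", 590, 25, 0.73),
    ],
}

def group_score(group):
    """Total volume, discounted by difficulty and weighted by CTR opportunity."""
    return sum(volume * (1 - difficulty / 100) * ctr
               for _keyword, volume, difficulty, ctr in group)

# Pick the intent group with the best overall opportunity.
target_intent = max(keyword_groups, key=lambda g: group_score(keyword_groups[g]))
```

A real version would pull the metrics from your keyword tool of choice and could also fold in the importance and conversion-rate estimates mentioned in the transcript.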

That’s just as compelling a title as “The Ultimate Guide to Smart Flight Planning,” but maybe a tiny bit less. You could quibble. But I’m sure you could come up with one, and it uses our keywords intelligently. Now I’ve got sub-headings that are “sort by the cheapest,” “the least crowded,” “the most flexible,” “by airline,” “by location.” Great. I’ve hit all my topic areas and all my keyword areas at the same time, all in one piece of content.

This kind of model, where we combine the best of these two worlds, I think is the way of the future. I don’t think it pays to stick to your old-school keyword targeting methodology, nor do I think it pays to ignore keyword targeting and keyword research entirely. I think we’ve got to merge these practices and come up with something smart.

All right everyone. I look forward to your comments, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


The Machine Learning Revolution: How it Works and its Impact on SEO

Posted by EricEnge

Machine learning is already a very big deal. It’s here, and it’s in use in far more businesses than you might suspect. A few months back, I decided to take a deep dive into this topic to learn more about it. In today’s post, I’ll dive into a certain amount of technical detail about how it works, but I also plan to discuss its practical impact on SEO and digital marketing.

For reference, check out Rand Fishkin’s presentation about how we’ve entered into a two-algorithm world. Rand addresses the impact of machine learning on search and SEO in detail in that presentation, and how it influences SEO. I’ll talk more about that again later.

For fun, I’ll also include a tool that allows you to predict your chances of getting a retweet based on a number of things: your Followerwonk Social Authority, whether you include images, hashtags, and several other similar factors. I call this tool the Twitter Engagement Predictor (TEP). To build the TEP, I created and trained a neural network. The tool will accept input from you, and then use the neural network to predict your chances of getting an RT.

The TEP leverages the data from a study I published in December 2014 on Twitter engagement, where we reviewed information from 1.9M original tweets (as opposed to RTs and favorites) to see what factors most improved the chances of getting a retweet.

My machine learning journey

I got my first meaningful glimpse of machine learning back in 2011 when I interviewed Google’s Peter Norvig, and he told me how Google had used it to teach Google Translate.

Basically, they looked at all the language translations they could find across the web and learned from them. This is a very intense and complicated example of machine learning, and Google had deployed it by 2011. Suffice it to say that all the major market players — such as Google, Apple, Microsoft, and Facebook — already leverage machine learning in many interesting ways.

Back in November, when I decided I wanted to learn more about the topic, I started doing a variety of searches of articles to read online. It wasn’t long before I stumbled upon this great course on machine learning on Coursera. It’s taught by Andrew Ng of Stanford University, and it provides an awesome, in-depth look at the basics of machine learning.

Warning: This course is long (19 total sections with an average of more than one hour of video each). It also requires an understanding of calculus to get through the math. In the course, you’ll be immersed in math from start to finish. But the point is this: If you have the math background, and the determination, you can take a free online course to get started with this stuff.

In addition, Ng walks you through many programming examples using a language called Octave. You can then take what you’ve learned and create your own machine learning programs. This is exactly what I have done in the example program included below.

Basic concepts of machine learning

First of all, let me be clear: this process didn’t make me a leading expert on this topic. However, I’ve learned enough to provide you with a serviceable intro to some key concepts. You can break machine learning into two classes: supervised and unsupervised. First, I’ll take a look at supervised machine learning.

Supervised machine learning

At its most basic level, you can think of supervised machine learning as creating a series of equations to fit a known set of data. Let’s say you want an algorithm to predict housing prices (an example that Ng uses frequently in the Coursera classes). You might get some data that looks like this (note that the data is totally made up):

In this example, we have (fictitious) historical data that indicates the price of a house based on its size. As you can see, the price tends to go up as house size goes up, but the data does not fit into a straight line. However, you can calculate a straight line that fits the data pretty well, and that line might look like this:

This line can then be used to predict the pricing for new houses. We treat the size of the house as the “input” to the algorithm and the predicted price as the “output.” For example, if you have a house that is 2,600 square feet, you can read its approximate predicted price off the line.
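Fitting that straight line is ordinary least-squares regression, which has a closed-form solution when there is a single input. Here's a self-contained Python sketch; the square-footage and price figures are invented, just like the chart's:

```python
# Toy supervised learning: fit price = slope * sqft + intercept to made-up data.
sizes = [1000, 1500, 2000, 2500, 3000]                   # square feet (inputs)
prices = [200_000, 260_000, 310_000, 380_000, 440_000]   # dollars (outputs)

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n

# Standard least-squares formulas for a single input variable.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices))
         / sum((x - mean_x) ** 2 for x in sizes))
intercept = mean_y - slope * mean_x

def predict(sqft):
    """Read the predicted price off the fitted line."""
    return slope * sqft + intercept
```

With these invented numbers the fit works out to $120 per square foot plus a $78,000 base, so `predict(2600)` returns $390,000.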

However, this model turns out to be a bit simplistic. There are other factors that can play into housing prices, such as the total number of rooms, the number of bedrooms, the number of bathrooms, and the lot size. Based on this, you could build a slightly more complicated model, with a table of data similar to this one:

Already you can see that a simple straight line will not do, as you’ll have to assign weights to each factor to come up with a housing price prediction. Perhaps the biggest factors are house size and lot size, but rooms, bedrooms, and bathrooms all deserve some weight as well (all of these would be considered new “inputs”).

Even now, we’re still being quite simplistic. Another huge factor in housing prices is location. Pricing in Seattle, WA is different than it is in Galveston, TX. Once you attempt to build this algorithm on a national scale, using location as an additional input, you can see that it starts to become a very complex problem.

You can use machine learning techniques to solve any of these three versions of the problem. In each of these examples, you’d assemble a large data set of examples, which can be called training examples, and run a set of programs to design an algorithm to fit the data. This allows you to submit new inputs and use the algorithm to predict the output (the price, in this case). Using training examples like this is what’s referred to as “supervised machine learning.”

Classification problems

This is a special class of problems where the goal is to predict specific outcomes. For example, imagine we want to predict the chances that a newborn baby will grow to be at least 6 feet tall. You could imagine that inputs might be as follows:

The output of this algorithm might be a 0 if the person was going to be shorter than 6 feet tall, or a 1 if they were going to be 6 feet or taller. What makes it a classification problem is that you are putting the input items into one specific class or another. For the height prediction problem as I described it, we are not trying to guess the precise height, but a simple over/under 6 feet prediction.

Some examples of more complex classification problems are handwriting recognition (recognizing characters) and identifying spam email.
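A minimal way to implement this kind of classifier is logistic regression trained by gradient descent. In the Python sketch below, the (father's height, mother's height) inputs and the over/under-6-feet labels are entirely invented, and the features are only two of the many inputs a real model would use:

```python
import math

# Invented training data: (father's height, mother's height) in inches,
# labeled 1 if the child reached 6 feet, else 0.
X = [(70, 64), (75, 68), (66, 62), (73, 66), (68, 63), (74, 69)]
y = [0, 1, 0, 1, 0, 1]

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Logistic regression trained by stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
learning_rate = 0.01
for _ in range(5000):
    for (father, mother), label in zip(X, y):
        # Center the inputs so the weights stay well-behaved.
        x1, x2 = father - 70, mother - 65
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        error = p - label
        w[0] -= learning_rate * error * x1
        w[1] -= learning_rate * error * x2
        b -= learning_rate * error

def classify(father, mother):
    """Return 1 for a predicted height of 6 feet or more, else 0."""
    p = sigmoid(w[0] * (father - 70) + w[1] * (mother - 65) + b)
    return 1 if p >= 0.5 else 0
```

The model outputs a probability internally, and the `classify` function turns it into the one-class-or-the-other decision that defines a classification problem.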

Unsupervised machine learning

Unsupervised machine learning is used in situations where you don’t have training examples. Basically, you want to try and determine how to recognize groups of objects with similar properties. For example, you may have data that looks like this:

The algorithm will then attempt to analyze this data and find out how to group them together based on common characteristics. Perhaps in this example, all of the red “x” points in the following chart share similar attributes:

However, the algorithm may have trouble recognizing outlier points, and may group the data more like this:

What the algorithm has done is find natural groupings within the data, but unlike supervised learning, it had to determine the features that define each group. One industry example of unsupervised learning is Google News. For example, look at the following screen shot:

You can see that the main news story is about Iran holding 10 US sailors, but there are also related news stories shown from Reuters and Bloomberg (circled in red). The grouping of these related stories is an unsupervised machine learning problem, where the algorithm learns to group these items together.
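The textbook algorithm for this sort of grouping is k-means clustering. Here's a bare-bones Python version run on made-up 2-D points; a real system like Google News would cluster on text-derived features rather than coordinates, but the assign-then-update loop is the same idea:

```python
import random

# Made-up 2-D points with two natural groupings and no labels.
points = [(1, 1), (1.5, 2), (2, 1.2),
          (8, 8), (8.5, 9), (9, 8.2)]

def kmeans(points, k, iterations=20, seed=0):
    """Plain k-means: alternate assigning points to centroids and re-centering."""
    random.seed(seed)
    centroids = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centroids[i][0]) ** 2
                                      + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                     if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

clusters = kmeans(points, k=2)
```

Nobody told the algorithm which points belong together; it recovered the two groupings from the data alone, which is exactly the unsupervised setting described above.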

Other industry examples of applied machine learning

A great example of a machine learning algo is the Author Extraction algorithm that Moz has built into their Moz Content tool. You can read more about that algorithm here. The referenced article outlines in detail the unique challenges that Moz faced in solving that problem, as well as how they went about solving it.

As for Stone Temple Consulting’s Twitter Engagement Predictor, this is built on a neural network. A sample screen for this program can be seen here:

The program makes a binary prediction as to whether you’ll get a retweet or not, and then provides you with a percentage probability for that prediction being true.

For those who are interested in the gory details, the neural network configuration I used was six input units, fifteen hidden units, and two output units. The algorithm used one million training examples and two hundred training iterations. The training process required just under 45 billion calculations.
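To make that configuration concrete, here's a sketch of a single forward pass through a 6-15-2 network in plain Python. The trained weights aren't published, so random values stand in for them, and the six input features are my guesses at the factors mentioned above (Social Authority, images, URLs, @mentions, hashtags, length):

```python
import math
import random

random.seed(42)

# Shape described above: 6 input units, 15 hidden units, 2 output units.
N_IN, N_HIDDEN, N_OUT = 6, 15, 2

# Random stand-ins for the trained weights, which are not published.
W1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HIDDEN)]
b1 = [0.0] * N_HIDDEN
W2 = [[random.uniform(-1, 1) for _ in range(N_HIDDEN)] for _ in range(N_OUT)]
b2 = [0.0] * N_OUT

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def forward(features):
    """Features -> sigmoid hidden layer -> 2 output units -> softmax probabilities."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, features)) + bias)
              for row, bias in zip(W1, b1)]
    logits = [sum(w * h for w, h in zip(row, hidden)) + bias
              for row, bias in zip(W2, b2)]
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]  # [P(no retweet), P(retweet)]

# Guessed feature vector: social authority, image?, URL?, @mentions, hashtags, length
probs = forward([0.3, 0, 0, 0, 2, 0.2])
```

The two output units give both the binary call (whichever probability is larger) and the percentage probability the TEP reports; training those weights against a million examples is where the 45 billion calculations come in.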

One thing that made this exercise interesting is that there are many conflicting data points in the raw data. Here’s an example of what I mean:

What this shows is the data for people with Followerwonk Social Authority between 0 and 9, and a tweet with no images, no URLs, no @mentions of other users, two hashtags, and between zero and 40 characters. We had 1156 examples of such tweets that did not get a retweet, and 17 that did.

The most desirable outcome for the resulting algorithm is to predict that these tweets will not get a retweet, which would make it wrong 1.4% of the time (17 times out of 1,173). Note that the resulting neural network assesses the probability of getting a retweet at 2.1%.

I did a calculation to tabulate how many of these cases existed. I found that we had 102,045 individual training examples where it was desirable to make the wrong prediction, or just slightly over 10% of all our training data. What this means is that the best the neural network will be able to do is make the right prediction just under 90% of the time.
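The arithmetic behind those two figures checks out:

```python
# The conflicting-data bucket described above.
no_retweet, retweet = 1156, 17
bucket_total = no_retweet + retweet           # 1,173 tweets in the bucket
best_error_rate = retweet / bucket_total      # ~1.4% wrong if we always say "no retweet"

# The accuracy ceiling across the whole training set.
conflicting_examples = 102_045
training_examples = 1_000_000
accuracy_ceiling = 1 - conflicting_examples / training_examples  # just under 90%
```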

I also ran two other sets of data (470K and 473K samples in size) through the trained network to see the accuracy level of the TEP. I found that it was 81% accurate in its absolute (yes/no) prediction of the chance of getting a retweet. Bearing in mind that those also had approximately 10% of the samples where making the wrong prediction is the right thing to do, that’s not bad! And, of course, that’s why I show the percentage probability of a retweet, rather than a simple yes/no response.

Try the predictor yourself and let me know what you think! (You can discover your Social Authority by heading to Followerwonk and following these quick steps.) Mind you, this was simply an exercise for me to learn how to build out a neural network, so I recognize the limited utility of what the tool does — no need to give me that feedback ;->.

Examples of algorithms Google might have or create

So now that we know a bit more about what machine learning is about, let’s dive into things that Google may be using machine learning for already:

Penguin

One approach to implementing Penguin would be to identify a set of link characteristics that could potentially be an indicator of a bad link, such as these:

  1. External link sitting in a footer
  2. External link in a right side bar
  3. Proximity to text such as “Sponsored” (and/or related phrases)
  4. Proximity to an image with the word “Sponsored” (and/or related phrases) in it
  5. Grouped with other links with low relevance to each other
  6. Rich anchor text not relevant to page content
  7. External link in navigation
  8. Implemented with no user-visible indication that it’s a link (i.e., no underline)
  9. From a bad class of sites (from an article directory, from a country where you don’t do business, etc.)
  10. …and many other factors

Note that any one of these things isn’t necessarily inherently bad for an individual link, but the algorithm might start to flag sites if a significant portion of all of the links pointing to a given site have some combination of these attributes.

What I outlined above would be a supervised machine learning approach where you train the algorithm with known bad and good links (or sites) that have been identified over the years. Once the algo is trained, you would then run other link examples through it to calculate the probability that each one is a bad link. Based on the percentage of links (and/or total PageRank) coming from bad links, you could then make a decision to lower the site’s rankings, or not.
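To illustrate the shape of such a system (this is in no way Google's actual algorithm), here's a Python sketch in which every feature name, weight, and threshold is made up. A trained logistic model would learn the weights from the known good and bad links; here they're hard-coded:

```python
import math

# Hypothetical binary link signals, mirroring the list above.
FEATURES = ["in_footer", "in_sidebar", "near_sponsored_text",
            "rich_irrelevant_anchor", "bad_site_class"]

# Made-up weights a trained model might assign to each signal.
WEIGHTS = {"in_footer": 0.8, "in_sidebar": 0.6, "near_sponsored_text": 1.5,
           "rich_irrelevant_anchor": 1.2, "bad_site_class": 2.0}
BIAS = -2.5

def p_bad(link):
    """Probability that a single link is manipulative, given its signals."""
    z = BIAS + sum(WEIGHTS[f] for f in FEATURES if link.get(f))
    return 1 / (1 + math.exp(-z))

def site_flagged(links, link_threshold=0.5, bad_fraction=0.4):
    """Flag a site when a large share of its inbound links look bad."""
    bad = sum(1 for link in links if p_bad(link) > link_threshold)
    return bad / len(links) >= bad_fraction
```

The second approach the article describes, letting the algorithm discover the features itself, would replace the hand-picked `FEATURES` list with learned ones.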

Another approach to this same problem would be to start with a database of known good links and bad links, and then have the algorithm automatically determine the characteristics (or features) of those links. These features would probably include factors that humans may not have considered on their own.

Panda

Now that you’ve seen the Penguin example, this one should be a bit easier to think about. Here are some things that might be features of sites with poor-quality content:

  1. Small number of words on the page compared to competing pages
  2. Low use of synonyms
  3. Overuse of main keyword of the page (from the title tag)
  4. Large blocks of text isolated at the bottom of the page
  5. Lots of links to unrelated pages
  6. Pages with content scraped from other sites
  7. …and many other factors

Once again, you could start with a known set of good sites and bad sites (from a content perspective) and design an algorithm to determine the common characteristics of those sites.

As with the Penguin discussion above, I’m in no way representing that these are all parts of Panda — they’re just meant to illustrate the overall concept of how it might work.

How machine learning impacts SEO

The key to understanding the impact of machine learning on SEO is understanding what Google (and other search engines) want to use it for. A key insight is that there’s a strong correlation between Google providing high-quality search results and the revenue they get from their ads.

Back in 2009, Bing and Google performed some tests that showed how even introducing small delays into their search results significantly impacted user satisfaction. In addition, those results showed that with lower satisfaction came fewer clicks and lower revenues:

The reason behind this is simple. Google has other sources of competition, and this goes well beyond Bing. Texting friends for their input is one form of competition. So are Facebook, Apple/Siri, and Amazon. Alternative sources of information and answers exist for users, and they are working to improve the quality of what they offer every day. So must Google.

I’ve already suggested that machine learning may be a part of Panda and Penguin, and it may well be a part of the “Search Quality” algorithm. And there are likely many more of these types of algorithms to come.

So what does this mean?

Given that higher user satisfaction is of critical importance to Google, it means that content quality and user satisfaction with the content of your pages must now be treated by you as an SEO ranking factor. You’re going to need to measure it, and steadily improve it over time. Some questions to ask yourself include:

  1. Does your page meet the intent of a large percentage of visitors to it? If a user is interested in a product on the page, do they need help in selecting it? Learning how to use it?
  2. What about related intents? If someone comes to your site looking for a specific product, what other related products could they be looking for?
  3. What gaps exist in the content on the page?
  4. Is your page a higher-quality experience than that of your competitors?
  5. What’s your strategy for measuring page performance and improving it over time?

There are many ways that Google can measure how good your page is, and use that to impact rankings. Here are some of them:

  1. When users arrive on your page after clicking on a SERP listing, how long do they stay? How does that compare to competing pages?
  2. What is the relative rate of CTR on your SERP listing vs. competition?
  3. What volume of brand searches does your business get?
  4. If you have a page for a given product, do you offer thinner or richer content than competing pages?
  5. When users click back to the search results after visiting your page, do they behave like their task was fulfilled? Or do they click on other results or enter followup searches?

For more on how content quality and user satisfaction have become a core SEO factor, please check out the following:

  1. Rand’s presentation on a two-algorithm world
  2. My article on Term Frequency Analysis
  3. My article on Inverse Document Frequency
  4. My article on Content Effectiveness Optimization

Summary

Machine learning is becoming highly prevalent. The barrier to learning basic algorithms is largely gone. All the major players in the tech industry are leveraging it in some manner. Here’s a little bit on what Facebook is doing, and machine learning hiring at Apple. Others are offering platforms to make implementing machine learning easier, such as Microsoft and Amazon.

For people involved in SEO and digital marketing, you can expect that these major players are going to get better and better at leveraging these algorithms to help them meet their goals. That’s why it will be of critical importance to tune your strategies to align with the goals of those organizations.

In the case of SEO, machine learning will steadily increase the importance of content quality and user experience over time. For you, that makes it time to get on board and make these factors a key part of your overall SEO strategy.
