Survivorship Bias in RevOps Data
- Has anything you learned in law enforcement applied to what you do today? [1:08]
- What is survivorship bias? [2:06]
- Can you give an example of survivorship bias? [3:04]
- Do people overly focus on pre-existing metrics? What are those metrics? [4:27]
- How do you approach creating metrics that focus on the data you actually want? [7:21]
- What are some outbound metrics that you're focused on? Things you see often in your customers? [13:13]
- What does your process for understanding the depth of a company look like? [17:35]
- How do territories come into play when it comes to top of funnel? [20:47]
- Can you elaborate on “territories for the sake of territories?” [22:32]
- Survivorship bias, top of funnel, and territories. How do you tie the three together? [28:03]
- Final Thoughts [29:56]
Hello, everybody. Welcome to another episode of The Fullcast Fireside Chat. I'm your host, Tyler Simons, and I am the head of Customer Success at Fullcast, which is a go-to-market planning and execution platform. Today our guest is Taft Love. We're going to be talking about "Survivorship Bias in RevOps Data." Taft, why don't you give us a brief introduction?
Hi, everybody. My name is Taft Love. I'm the founder of Iceberg RevOps. We're a small consulting firm that helps startups bridge the gap in operations, from founder selling to having a qualified in-house Ops team. That's a place where companies tend to struggle. Before that, I built internal sales and SalesOps teams for several high-growth startups, including PandaDoc, SmartRecruiters, and others. And before that, I was in law enforcement for a decade. I lived a totally different life, and was a street cop, a detective, and a federal agent.
Has anything you learned in law enforcement applied to what you do today? [1:08]
This is completely unrelated to our topic, but I wonder if anything that you learned in law enforcement has been applied to what you're doing today.
I think one thing that I learned in law enforcement, especially as a detective, is the power of confirmation bias. We talk about metrics all the time in Ops, and that's a place where I see confirmation bias pop up. People tend to make the numbers fit their idea. It was something that I fought against as a detective: deciding who did it and then proving it, versus looking at the information and making a fair and balanced case. So I think that's a really strong parallel.
What is survivorship bias? [2:06]
Interesting. Today we're going to be talking about survivorship bias, which is kind of tied to that, or maybe the opposite. I don't know; we'll talk about it. What's interesting is that I had a question internally about what survivorship bias even is. I was going to save that question for later, but we might as well define survivorship bias for everybody now, so we know the path we're going to take today.
So survivorship bias, applied in this context, is just a fancy way of saying making what's already measurable important, instead of figuring out what's important and making it measurable. Credit here to Erel Toker, who helped me put a name on something that I think has existed for a while, but that I didn't know about until recently, when he started talking about it a lot. It's something his company, Truly, is focused on eliminating. So that's really what it means in Ops.
Can you give an example of survivorship bias? [3:04]
Can you just give me an example of maybe like what that would be?
Yeah. So an example I always point to, because I started out in SalesDev as a sales development rep, then a leader, and then in SalesOps focused somewhat on sales development: every time you go to a sales development conference, the Outreach conference, even today, long after they should have moved on, everybody is talking about "What's your email open rate? How many dials do you make? How many people do you approach?" Everything is person-centric or activity-centric, and nobody ever asks the question: how many companies do you need to approach to get a meeting?
And the reason for that is because Outreach spits out the first three. It just gives you that. Those metrics are easy to measure because they're baked into your systems, and Salesforce makes them pretty easy to measure. It is not so easy to aggregate that data in a way that helps you understand "How many companies did we start prospecting this week? How many do we need to prospect, on average, to get two meetings?" Things like that.
Thinking company-centric is really hard to measure, but it's super important. And very few companies actually ask those questions, because it's easy to measure the stuff that Outreach spits out for you. So we decided those are the important metrics, to the exclusion of the ones that, in my opinion, are actually more important.
Do people overly focus on pre-existing metrics? What are those metrics? [4:27]
Now, is that just something that everyone tends to do? They focus on these pre-existing metrics because of the tools that they buy or a blog that they read?
I think really the question is: what are the problems that arise from focusing on the pre-existing metrics? Maybe we'll also get into how we could then create some metrics, so I'll ask that question later. We'll separate my questions up; I don't want to ask everything all at once here.
So what have you seen? Have you seen people and teams that are just overly focused on pre-existing metrics? And maybe, what are some of those metrics that everyone's focused on?
Yeah. There are two reasons I think people choose the metrics they focus on, in the context of survivorship bias: the metrics that make what you can measure important, versus the opposite. Some examples here are in sales development, like I said: focusing on the activity and person metrics versus the company funnel, which is hard to measure.
Off the cuff, I think there are some other examples. You have the metrics your tools can measure for you, which makes them easy. The other kind is the ones you already know how to measure, because if you're a VP of sales, you know how to tell your SalesOps team to get you win rate. You know what win rate is. You tell them the formula to plug into a report, and you have your win rate.
But sometimes there are other questions that we should be asking, and we don't, because we don't actually know how we would go about measuring them. Or we ask them and then set them aside when it's clear we can't measure them yet, or that it would take a lot of effort.
Some examples of that: "When we lose, to which competitor do we lose most often?" We may say, "Well, we know how many deals we lost to each competitor, but I don't know how we would figure out the actual monetary value and weight it by dollars lost. So we'll just pay attention to how many deals we lost." Where it would be really important to know that we only lost two deals to this one competitor, but they were 60% of our pipeline. We'll never know that, because we don't know how to figure it out in terms of dollars; we only know the count of deals lost. You tend to just set aside the questions that aren't easy to answer.
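The dollar-weighted view Taft describes can be sketched in a few lines of Python. This is a minimal illustration with hypothetical competitor names and deal values, just to show how the "easy" metric (deal count) and the important one (dollars lost) can point at different competitors:

```python
from collections import defaultdict

# Hypothetical lost-deal records: (competitor, deal value in dollars).
lost_deals = [
    ("CompetitorA", 10_000), ("CompetitorA", 15_000), ("CompetitorA", 12_000),
    ("CompetitorB", 400_000), ("CompetitorB", 350_000),
]

by_count = defaultdict(int)    # deals lost per competitor
by_dollars = defaultdict(int)  # dollars lost per competitor
for competitor, value in lost_deals:
    by_count[competitor] += 1
    by_dollars[competitor] += value

# Counting deals names one competitor; weighting by dollars names another.
top_by_count = max(by_count, key=by_count.get)
top_by_dollars = max(by_dollars, key=by_dollars.get)
```

Here `top_by_count` flags CompetitorA (three small losses), while `top_by_dollars` flags CompetitorB, which cost far more pipeline across only two deals.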
How do you approach creating metrics that focus on the data you actually want? [7:21]
What's interesting here is that it almost sounds like each business is kind of unique in the fact that they may have different problems that they need to solve, and different questions they should be asking, and therefore, maybe some different metrics. So maybe you could talk to me a bit about like at Iceberg Ops, how do you approach creating metrics that focus on the data you actually want?
It's a really good question. The first thing we do is press the pause button, call a time out, and not start with metrics. If you don't take the time as an Ops person, or in our case an Ops agency, to actually understand the business, you're going to get it wrong. If you take boilerplate metrics, present them like a menu, and say, "Which of these do you want to know?" (which is how a lot of agencies and Ops people and contractors do business), you're probably going to end up missing the answers to some important questions.
So the first things we ask are: "How do you run your business? What are the things in your business that matter? What are the questions your board asks you over and over? What are the things that actually move the needle for you?" It can be different for every business. If you're a funded startup, growth at all costs is probably what you're thinking about. If you're like Iceberg, a bootstrapped semi-tech company, growth at all costs isn't what you're thinking about. So there are totally different metrics for fairly similar companies.
So start with the questions, in plain English. I literally have the founders, or the VP of sales if one hires us on, write down the questions that they want answered. Whether or not they can answer them today is irrelevant, and there's a chance we won't be able to answer all of them either. But it helps me understand how they think about running their business.
Then we want to know: "Where do you want to get? Where are you today? Where do you want to be in a year, or five years," or whatever time frame we agree to think about. So start with plain-English questions and long-term goals; then you can easily work backwards to metrics.
"So what are the KPIs against which we're going to judge that success? What are the questions we're going to ask to check that we're on the way there?" Translating that into metrics is how you should get to your set of metrics, not what almost the whole world does, which is, "Well, back at Salesforce, we used to measure win rate and number of deals and number of calls between stage two and close, so go build that for me." That's often the instruction we get. We start a new engagement, and they say, "I've got an Ops background. Here's what you need to do." And they give us a list, and it's always "Time out. Write it in plain English, please. And what are the long-term goals?" So that's how we think about it.
We've talked about this on previous Fireside Chats in terms of focusing on the outcomes and working backwards. We actually have another guest coming up with whom we're going to talk about starting with outcomes and process improvement, then working backwards to find those components. This is a very similar approach: you've got to tell me, "What's your end goal, what's your long-term goal," whatever that is. Then we can think about the components, or the levers, that plug into that. That's what you theoretically want to set up and measure, so that you can ask, "Okay, if we focus on increasing this, what's going to happen to the outcome?" Or decreasing something, or whatever it might be.
So I think that's a really important framework, because a lot of people don't really know where to start. They might not even be asking the right questions: "I have an idea of what my business is and what I think I want to know, but I might not be asking the right questions." Instead, going back to your time out, thinking, "Well, I know that I need to hit a certain dollar amount in ARR," or whatever the end goal is, and then working backwards from there, will be a good place to start.
Another thing we see, and then I'll round this one out: with metrics, keep it simple, a small number of them. And for every single metric that you're actually investing Ops resources into measuring and tracking, you'd better be able to answer the question, "How will I behave if this changes?" If you can't, if no change in this metric leads to some different behavior, then I would question why you're even measuring it. I think that's a good thing to keep top of mind too. Otherwise you'll boil the ocean and have a thousand metrics that are meaningless.
What are some outbound metrics that you're focused on? Things you see often in your customers? [13:13]
I'm laughing because I've gone through that. Actually, a great example of this is COVID tests. This might be controversial, and maybe now it's different, but testing positive for COVID isn't going to change your behavior in terms of going to the hospital or not. What's going to change that behavior is that you can't breathe and you're really sick and you need to go. Just because you test positive doesn't mean you think, "I need to go to the hospital and do all these things."
So if you're not going to change your behavior when you get a positive or negative result, then what's the point? You're just going to do the same thing you always did. Anyway, I think that's super important to note. So what about some outbound metrics that you're focused on, and things that you see repeat on a regular basis with some of your customers?
Let me set the stage here, because I think it's important to give a clear example of why the metrics I'm talking about matter; they feel a bit obscure, and it might be hard to understand why I'm saying this without setting the stage. So imagine you have two SDRs. They both have the exact same metrics in Outreach.
You pull up your Outreach dashboard for last week. They both made 500 calls, they both approached 200 people, and they both sent 1,000 emails. They both got two meetings. So, on paper, they had the same week and the same business impact.
But what if I told you SDR A reached out to 200 companies? One person per company, and lots and lots of companies. That's a Yelp rep. That's somebody who's calling every gas station in Minnesota all week long and talking to the owner, because there's nobody else to talk to, and they set two meetings.
Now, let's say the other person is doing an account based marketing and sales development plan against Coke, Pepsi and a few other giant drink conglomerates and they reached out to four companies last week and got two meetings. Well, now the picture starts to look very different, even though what everybody measures says they had the same week.
So the difference is that in the latter case you have fewer companies and larger-value opportunities. The activity and person funnel looks pretty similar: maybe they both reached out to 200 people. But one is 200 companies; the other is four companies.
So one person is going very shallow across a huge pool of companies. The other is going very deep, reaching out to 50 people per company across four companies. One is not better than the other, but they're very different. The actual value they're driving to the business is likely very different, all else being equal, assuming the level of qualification of those meetings is the same.
But it's really hard for most companies to answer the question, "How many companies did my sales development team start approaching last week?" I've talked to a lot of VPs of sales about this who say, "No, we know that." I don't think you do. I ask people, I challenge people: "No, go pull the report, show me." And they'll go look at last activity date, or these fields that already exist in Salesforce, and they'll say, "Look, it's 40."
"There were 100 accounts created last year."
Yeah, exactly. But the truth is, if you actually poke at these numbers, none of them answer the question I just asked. And that's just the very top-of-the-funnel question. If you can't answer top of funnel, you've got no funnel.
So how many of those actually talked to us? Again, if you don't know how many companies you started talking to, you have no denominator. So you don't know ratios like, "How many companies do I need to reach out to, to get someone within the company to talk to me? How many conversations do I need to have, regardless of sentiment, to get to a meeting? And how many meetings do we need to have to create an opportunity?"
You have no denominator, you have no funnel. But because it's so easy to look at how many contacts we reached out to, how many calls we made, and how many emails we sent, that's what everyone focuses on, to the exclusion of what's actually important. You sell to companies, not people. Unless you're at Yelp and only ever talking to one person per company, that company-level view matters.
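The company-level funnel Taft describes, with companies rather than contacts as the unit, can be sketched like this. The company names, stages, and counts are hypothetical; the point is that each company is tracked by the deepest stage it reached, so the "companies approached" denominator exists:

```python
# Hypothetical week of prospecting, tracked per company (not per contact).
# Each company maps to the deepest funnel stage it reached.
STAGES = ["approached", "conversation", "meeting"]

weekly_log = {
    "Acme": "meeting",
    "Stark": "meeting",
    "Globex": "conversation",
    "Hooli": "conversation",
    "Initech": "approached",
    "Umbrella": "approached",
    "Wayne": "approached",
    "Tyrell": "approached",
}

def funnel_counts(log):
    """Count how many companies reached at least each stage."""
    rank = {stage: i for i, stage in enumerate(STAGES)}
    return {
        stage: sum(1 for deepest in log.values() if rank[deepest] >= i)
        for i, stage in enumerate(STAGES)
    }

counts = funnel_counts(weekly_log)
# The denominator most teams are missing: companies approached per meeting.
companies_per_meeting = counts["approached"] / counts["meeting"]
```

With this toy data, eight companies approached and two meetings gives a ratio of four companies per meeting, exactly the kind of number the Outreach-style activity dashboards never surface.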
What does your process for understanding the depth of a company look like? [17:35]
That's interesting. Taking the Yelp example versus a larger, enterprise-company example: is there some time that needs to be spent on understanding, "Well, what is the depth that we need to go into a company?" Maybe there are companies out there where reaching out to two or three people at a company is all you need, versus, "I'm going to sell to Coca-Cola and it needs to be 50 people." Do you go through a process to figure that out? And what does that look like?
Yeah. So you and I have personal experience building these mechanisms in Salesforce. I leaned pretty heavily on you to think through this, and then a year or two later ended up building a Salesforce app to measure exactly this stuff. When you helped us build the mechanism in Salesforce to answer these questions, the first thing I did was let it run. I didn't try to give any guidance. I just looked at what the numbers told me, and a couple of things became pretty clear.
One is that a sales development rep's talent and personal strategy are so important that there's no one clear answer. You'll have outliers on a big sales development team who just like to break the mold. Take, for example, a guy named Rob who worked for me. On paper, he was not working that hard, but somehow he got more meetings than everyone else, so he was doing something right. Setting aside those outliers, when you look at the data in aggregate, you'll get some averages. You'll see some patterns emerge.
Over time, what we figured out was that if you take all of our opportunities combined, isolate the outbound opportunities created, and look back up the funnel, on average we had reached out to four people at the company that took the opportunity.
We also, on average, talked to one person before an opportunity was created, and then it's up to the account executive in our sales motion to go network in the org and get other people into meetings. So it's going to be different for everyone.
I caution against giving too much weight to benchmarks and what other people see work. Once you can measure this stuff, pause. Let it run. See what the numbers tell you, and then start coaching to that. What do successful people look like in this funnel? What do unsuccessful people look like? The patterns will emerge and help you drive the team to focus on the metrics that actually do move the needle, based on your historical data.
How do territories come into play when it comes to top of funnel? [20:47]
This is kind of an interesting segue into territories, because I feel like this stuff would be different based on segment, verticals, etc. Think about it: "I'm an enterprise rep. It's going to be much different than an SMB rep in terms of what my metrics should look like, how many people I should be reaching out to per company, emails per day, etc."
I'm curious to hear when we start to think about this top of funnel, in your experience, how have you seen territories come into play as it relates to all this top of funnel stuff? Both outbound and inbound. Whatever else you might want to enlighten us with.
At Iceberg, we do tons of territory work, though not usually helping design territories. That's something I think Fullcast does a lot more of: designing and instrumenting territories. What we do a lot of is building the resulting infrastructure that's needed, like lead routing to honor the new territory setup, and things like that.
So a couple of thoughts here, top of mind. One of the problems I see, and I'll just call this out first, is territories for the sake of having territories. It's something I see pretty often, and I think it's a big mistake.
The second, and these often overlap, is territories that are not properly instrumented. Somebody just buys an app that does territories, slaps it in Salesforce, and next thing you know you've bought yourself years of operational debt without realizing it, an angry sales force, and a hot mess of lead routing and outbound targeting.
Can you elaborate on “territories for the sake of territories?” [22:32]
I want to go back to territories for the sake of territories. What would be a good example of that? I'm assuming it's probably a smaller company that really doesn't need territories, because it only has five reps and needs a different model. But I'm curious if you could elaborate on that for us.
Yeah. So we work with a lot of small to medium-sized companies, so we often see people who have rolled out territories. I think if you don't have a cogent argument for why you need territories, then you don't need territories; it's not the other way around.
Often I'll see companies that have set up territories when they have three reps who are all selling the same thing. There's no verticalization, no real specialization, and they're all in the same office, so you're not even setting up geos for time zone's sake. Why the hell do you have territories? What do territories give you that Round Robin doesn't?
With LeanData, you can get pretty granular and say, "Hey, if somebody else is working a lead from the same domain, skip the Round Robin and route this incoming lead to that person." So Round Robin can have some nuance to it with today's tooling.
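The domain-match override described here can be sketched roughly as follows. This is an illustrative model of the behavior, not any vendor's actual API, and the rep names and emails are hypothetical:

```python
from itertools import cycle

class LeadRouter:
    """Round Robin with a domain-match override: if a rep already owns a
    lead from the same email domain, the new lead skips the rotation and
    goes to that rep. (An illustrative sketch, not a real routing tool.)"""

    def __init__(self, reps):
        self._rotation = cycle(reps)     # endless round-robin over reps
        self._owner_by_domain = {}       # domain -> rep who owns it

    def route(self, lead_email):
        domain = lead_email.split("@")[-1].lower()
        if domain not in self._owner_by_domain:
            # New domain: assign the next rep in the rotation.
            self._owner_by_domain[domain] = next(self._rotation)
        return self._owner_by_domain[domain]

router = LeadRouter(["ana", "ben", "cai"])
first = router.route("cfo@acme.com")    # next rep in rotation
second = router.route("vp@globex.com")  # next rep in rotation
third = router.route("ceo@acme.com")    # domain match: same rep as first
```

The third lead skips the rotation entirely because someone is already working acme.com, which is the nuance that keeps two reps from prospecting the same company.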
So that's where I see territories for the sake of territories. It's almost always somebody who came from a big company, had no idea how the sausage was made, but just knew, "We had territories and we were successful. So now we have territories; let's set up geos." And six months later, everything hurts, nothing works, and nobody can explain why we needed them in the first place. That's the case I'm talking about.
And I certainly caution a lot of our customers against going straight to geos, especially on the SMB side of things, because it's such a high-velocity segment. The typical thing is, "Okay, we have SMB. It exists." Let's say you're now at 30 reps, which is a decent size in SMB. Even at that size, you could break up the United States (this is AMER only, so United States only).
Actually, 30 SMB reps is huge; let's take it down to 15. So say 15 reps in AMER. The way to think about this is that maybe you break it down into big regions, by time zone like you were talking about. You don't need to go crazy, like "We're going to break San Francisco up into postal codes." Instead, you tie that back into routing. So you have five reps per time zone or region. I know there are more than three time zones in the US, everybody.
Then, as inbound leads come in, or reps want to work additional accounts, or whatever it might be, those things get Round Robined to the five reps within the region, and you keep this fairness and balance across everything. I think that's kind of what you're hinting at.
Usually for small teams, it can just be a Round Robin of the inbound accounts or leads that come in. As you grow, maybe you start to segment, but don't go all the way down into crazy-tight geos, or carve out a vertical that you've really got nobody to work, and so on and so forth.
This is a bit of a commercial for Fullcast, but it's something I believe is really important: you need someone who has done this before and knows how the sausage is made. If you're figuring it out on the fly, you're probably going to get it wrong, because there are so many little nuances to territories that people don't think about. "What is our source of truth? How do we handle conflicting data across multiple fields about where this account is located? What is 'location,' in our definition?"
Something I explain to customers a lot is that it's just like with duplicates, which is another thing we work on a whole lot with customers. You can't set up territories, or auto-merge duplicates, until you define what the location for a company is. Is it their billing address? Is it their parent company's billing address? Is it something else altogether?
And just like with duplicates: if you can't define a duplicate, we can't help you. That's the starting point. So another thing with territories is: get help with it. Don't figure it out on the fly. Ask me how I know that's a mistake.
Survivorship bias, top of funnel, and territories. How do you tie the three together? [28:03]
How do you know? I'm kidding. We're running short on time, and I want to bring this back to survivorship bias, top of funnel, and territories, and tie the three together. What I see from our customers is that we build these territories out, we're trying to carve and balance stuff, but everyone tends to ignore anything top of funnel as it relates to building territories. They focus on everything else: open pipe, closed-won, number of accounts, account scores, and other metrics to balance on.
So I'm curious, and maybe you haven't seen this on your side, but I'm curious if you've seen anything with the customers you've worked with where you've helped guide them: "Okay, let's look at the inbound that's coming in, and maybe we structure things in such a way that we build this fairness in for AEs."
I completely agree with you, and you're right. Pipeline is where so many people look, because it's easy to measure pipeline by territory once you've shaped your territories: "These states are territory A; these are territory B." It's pretty easy to run a report and see how much pipeline you have in each. It's a little harder to actually model it out in such a way that you have a clear understanding of what's coming in at the top of the funnel, like you said.
The higher up the funnel you get, and the more thoughtful assumptions you can build in, the more you can exorcise a different kind of bias: the bias that comes from a good rep having worked one territory while a not-so-good rep worked another.
In the pursuit of fairness, if you look at the wrong metrics, like how much pipe each rep built, you're going to build a totally unfair map, because the best rep is going to get the small territory, since she's better at building pipe, while the bad rep gets more territory because he's bad at it.
The more you can abstract territory fairness decisions from rep performance, eliminating that bias by getting higher up the funnel, the better your territories are probably going to be. You always have to think about potential, not what exists today, when you're building territories. Otherwise, you run into the biggest problem with territories: they are a mechanism for picking winners and losers.
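One way to see why balancing on pipeline built differs from balancing on potential is a quick comparison with hypothetical territory numbers (both the metric names and the figures here are invented for illustration):

```python
# Hypothetical territories: one worked by a strong rep, one by a weaker rep.
territories = {
    "East": {"pipeline_built": 900_000, "addressable_accounts": 300},
    "West": {"pipeline_built": 300_000, "addressable_accounts": 310},
}

def imbalance(metric):
    """Ratio of the largest territory to the smallest on a given metric."""
    values = [t[metric] for t in territories.values()]
    return max(values) / min(values)

# Judged on pipeline built, East looks 3x "bigger", but that may only
# reflect who worked it. Judged on potential, the map is nearly balanced.
pipeline_imbalance = imbalance("pipeline_built")
potential_imbalance = imbalance("addressable_accounts")
```

On the pipeline metric the map looks wildly unfair, while on a potential metric like addressable accounts it is nearly even, which is the trap of carving territories from the results a particular rep happened to produce.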
Final Thoughts [29:56]
Any final thoughts before we adjourn for the morning?
Oh, man, this is such a big topic. For final thoughts, I'm going to bring it back to basics. Whether it's territories, duplicate management, the metrics you're choosing, any of these Ops topics, the first thing to do is slow down to speed up: press pause, take a step back, and ask the question, "What is the goal this maps to?"
If you don't have a cogent, thoughtful answer to that, then you need to ask yourself, "Why are we doing this exercise?" Because Ops is always spread thin and you're just going to spread them too thin if you're throwing them at every project that comes to mind without thinking about what it maps to long term, strategically.
Wonderful. Taft, it's been a real pleasure. Good to see you. It's been a long time since we worked together, so it was really nice to be able to do this. I have this weird sign-off that I always use: until next time. I've noticed these things over time. Anyway, until next time.
Great to see you.
See you, Taft.