
The Journey: #59 Familiar Isn't Fit


Early in my Customer Success career, we implemented Gainsight. At the time, they were the only real player in the space and, given the complexity of our program, the only solution that could truly keep up. We were successful with it: team adoption was strong, our program ran, and the business saw value.


But if I’m being honest, a lot of that adoption came from leadership pressure more than natural pull. There was a little more stick than carrot.


As I moved through my career and found myself in a position to evaluate Customer Success platforms again, I would go through the motions of a proper selection process. I’d build out requirements, meet with vendors, compare functionality. On paper, it looked like diligence. But in reality, I was walking into those evaluations with about an 85 percent chance of choosing what I already knew.


I told myself it was about risk mitigation, about speed to productivity, about not putting my team through a steep learning curve.


There was some truth to all of that. But there was also comfort and familiarity. And an existing vendor relationship that made the path of least resistance feel like the right one.


So I often landed on a good solution. Just not always the right one.


After my third company, I made a conscious effort to run a more thorough process. I spent more time looking at alternatives and asking harder questions about fit. And yes, after all of that… I still chose Gainsight.


But that last evaluation planted a seed. I started to notice how quickly other platforms were catching up: they were becoming more robust, more intuitive, and in some cases stronger in specific workflows than the platform I knew so well. The gap was closing, and in certain areas, it had already closed.


Then I joined ClientSuccess and spent the next five years going deep into that platform. The product, the ICP, and the approach were very different from what I had used before, but it made sense for that business. It worked because it fit the context.


Still, during those five years, I was largely living inside one ecosystem. I saw the innovation happening across the Customer Success Platform landscape, but I wasn’t actively evaluating it. I was operating, building, and leading.


Now that I’m consulting, my responsibility is different, and I can’t default to what I know or knew. I have to understand what my clients are solving for, what their teams look like, how they’re resourced, how their customers engage, and where they’re trying to go. The recommendation has to fit their reality, not my comfort zone.


That means I’ve been spending a lot of time getting reacquainted with the broader CSP market. Sitting in demos and asking more pointed questions. Looking closely at where each platform excels and where it doesn’t. Not just from a feature perspective, but from an operational one.


I decided to start with the platform I’d heard the most buzz about but knew the least: Planhat. I’ll share more on that in the coming weeks as I spend more time understanding how it approaches data, workflows, and program design.


For now, I want to leave you with a few thoughts if you’re in a position where you’re evaluating technology for your team.


First, familiarity can quietly shape your decisions more than you realize. It’s easy to tell yourself you’re optimizing for speed or minimizing disruption when, in reality, you’re optimizing for comfort. There’s nothing wrong with that instinct, but it’s worth naming.


Second, the “best” platform isn’t universal. It’s contextual. The right solution for a high-touch enterprise motion may not be the right solution for a scaled or product-led environment. The right solution for a 20-person CS team may not be right for a team of five. Fit matters more than brand recognition.


Third, define your criteria before you talk to vendors. This is the one I see teams get wrong all the time. If you let vendors run the demo and shape the conversation, they will naturally steer you toward their strengths. Suddenly your evaluation criteria start to mirror their feature set instead of your needs. Before you take a single call, get clear on what you’re solving for. What workflows matter most? What data do you need visibility into? What does success look like six and twelve months from now? Build your criteria around your program and your customers, then use that to guide the evaluation. Not the other way around.


Fourth, evaluate for where you’re going, not just where you are. Your current needs matter, but so does your trajectory. The platform you select will shape how your team works, what you measure, and how you scale. It becomes part of your operating model, not just your tech stack.


Finally, bias doesn’t disappear just because you run an evaluation process. It shows up in subtle ways. In the questions you ask. In the weight you give certain features. In how you interpret what you’re seeing. Bringing in multiple perspectives and giving your team real hands-on time can help counterbalance that.


The Customer Success technology ecosystem is evolving quickly. There’s more innovation, more specialization, and more choice than there was when many of us first entered this field. That’s a good thing, but it also means the decision requires more intentionality.
