Episode Transcript

Old Playbooks Meet AI Reality

Speaker 0:00

The old playbooks are dead. You know, how do I actually operate in this world? It's because you have to remember, LLMs commoditize expertise, right? The playbooks, all these things are easy to pull. Are you asking the right questions? It's the experience that lets you connect the dots, which becomes valuable. This is where I think all of the executives on the go-to-market side have to think about how they align back to, I mean, I hate to say it because it sounds so trite, you know, what's that outcome? Because at this point in time, AI is really converging all these different roles together around those outcomes. You really do have burnout, because humans are in the middle and they have to context switch constantly. And this is a real toll. And this is where organizations are trying to rethink how they rebuild cultures you can manage in. I mean, this will be the last set of managers who manage only humans.

Rajiv Parikh 0:52

Hello and welcome to a special Spark of Ages podcast. We are here in Park City, Utah, as part of our growth marketing summit. We have a returning guest, Mike Ni. Mike, I think you've been with us, what, three times? It's your third time. Third time, third time. And this one, you're alone. So, to let you know who Mike is: Mike Ni is the VP and principal analyst at Constellation Research, having previously served in multiple C-suite roles, CMO of Openprise, CGO of Coveo, CPO of Avangate, so we've got marketing, growth, product. You've been an entrepreneur before too, and you have over 25 years of executive and entrepreneurial experience. Mike specializes in driving growth for SaaS companies and brings deep expertise in product strategy, go-to-market execution, and RevOps-enabled journey orchestration. In Mike's current role, he covers the fast-evolving data decision automation landscape. Mike also holds an impressive trifecta of degrees. He's the kind of guy every mom wants her kid to marry. BS in Mechanical Engineering from MIT, MS in Systems Engineering from Stanford, and an MBA from our favorite school, Harvard Business School. So, Mike, welcome back to the Spark.

Speaker 2:07

I guess my only tagline would be over-educated, underutilized, or something like that. I think that's how it goes. Sounds like you're heavily utilized. I couldn't quite figure out which one anyway, which way it goes. It's amazing.

Rajiv Parikh 2:21

Well, you paid for them. All the schools cost about the same anyways. Yeah, well, back then, for sure. Yeah. So, okay, so Mike, we're getting you right as, like, before, you were working as a CMO, right? And you were in the whole RevOps space, looking at journey orchestration, that kind of thing. And now you've made this shift to being an analyst. Yes. How's that feel?

Speaker 2:43

You know, it's actually really interesting. Like you said, I've been through the arc, the CPO, CMO, CGO, and every day was, you know, just full of execution. And what I started seeing was, with the ramp of technology, really AI is going to change how we work. But it was really hard to think about that without taking a step back. And hence the role of an analyst is one that was just calling to me, which was, hey, take a step back. You know, I'm working across executives, from large companies to startups, who are trying to use this technology. I'm working with the executives at all these vendors who are starting to look at how they can change the universe, impact the universe, right? And it's just great to be able to take a step back and connect the dots.

Rajiv Parikh 3:27

So, at Constellation Research, Ray Wang's amazing research firm, right? He's a polymath. So there, you're covering the buyers of the research, right? You're contracted by the buyers of the research. You also work with companies that you analyze and write papers about and give guidance to. And you also work, I think, with the larger system integrators.

Speaker 3:50

That's right.

Rajiv Parikh 3:50

So you get a perspective, especially in this AI world, there's a lot of analytics, there's a lot of AI, and there's a lot of just execution, organizational execution at very large scale.

Experience Matters More Than Expertise

Speaker 4:00

Well, I mean, I think first and foremost, when you think about, you know, just my own personal journey, there's something to learn for other CMOs, CPOs, and CROs out there. It is that, by connecting these dots, you start to see that, one, expertise is actually a commodity. It's really experience which counts. And so this is where, like, the playbooks, this is where I talk to CMOs and they're like, well, the old playbooks are dead. You know, how do I actually operate in this world? It's because you have to remember, LLMs commoditize expertise, right? The playbooks, all these things are easy to pull. Are you asking the right questions? It's the experience that lets you connect the dots, which becomes valuable, right? And I think, if you think about it from a role perspective and take a step back, this is where all of the executives on the go-to-market side have to think about how they align back to, I mean, I hate to say it because it sounds so trite, you know, what's that outcome? Because at this point in time, AI is really converging all these different roles together around those outcomes.

Rajiv Parikh 4:59

And so the roles are changing.

Speaker 5:00

I mean, absolutely.

Rajiv Parikh 5:02

One of the things that we're talking about is that organizations are designed based on the limits of a human.

Speaker 5:08

Right.

Rajiv Parikh 5:08

Right? The human mind can handle a certain quantum of work. And so companies and organizations are built on that, and then the software we're building, mostly user-based pricing, we're building software for that, valuing it based on that. And now, with AI, all that is blowing up.

Speaker 5:30

Well, you know, it's interesting. Humans have become the bottleneck. We just had a CEO conference, you know, we had nearly 100 CEOs and board members. The Futures Forum. The Futures Forum, absolutely. And, you know, it's funny because that point actually came up. When they think about what executives need to think about, organization and governance are clearly among the top-of-mind pieces. And one thing that they absolutely see is humans are tired. You know, humans are the bottleneck now because they are in the middle of so many automated flows.

Rajiv Parikh 5:59

What makes them the bottleneck?

Speaker 6:00

Well, I mean, let's take my son, for example. Right now, he's a software developer, and he's got more projects than historically a software developer would be handling, because he has six, seven virtual assistants building code, and he's in the middle, you know, reviewing them. And if he goes for lunch, he's like, oh shoot, I forgot to keep my Claude working. I've already lost an hour. I better keep it running before I go to lunch. Well, then you have the burnout. You really do have burnout, because humans are in the middle and they have to context switch constantly. And this is a real toll. And this is where organizations are trying to rethink how they rebuild cultures you can manage in. I mean, this will be the last set of managers who manage only humans. And we haven't rethought how that changes.

Rajiv Parikh 6:40

Is that real? I mean, I've heard, you know, executives at top SaaS firms, like Workday, um, when Dave Summers was there, he's like, oh, my company's built 6,000 agents, multi-thousand models, and now you've got to manage your agent organization and your people organization. Is that really happening in larger firms?

Speaker 7:01

It really is. It really is. In fact, we had Aneel as one of the speakers, um, and of course he's come back, and we see a lot of that, CEOs coming back to take over their companies, taking over Workday. Right. And what you are seeing is a lot of experimentation, and things are now starting to roll out to production. And, you know, we can get into a little bit of some of the trends, but certainly you see organizations being reshaped, both top-down and bottom-up right now.

Humans As Bottlenecks And Burnout

Rajiv Parikh 7:26

Okay, that's really interesting. We'll have to get to that. Okay, so you noted that AI projects don't fail at reasoning, they fail at the inputs, because context breaks at execution time. What does execution context, which is inputs that are fresh, constrained, and permission-aware, look like in practice for an enterprise trying to deploy agents?

Speaker 7:44

Okay.

Rajiv Parikh 7:44

So first, okay, to start with what is context?

Speaker 7:49

Okay.

Rajiv Parikh 7:49

In layman's terms. In layman's terms.

Speaker 7:52

Well, it's basically, um, given the same inputs, can I expect the machine to have the same outputs, right? And that's just at the very highest level. Can I trust it to do something I know?

Rajiv Parikh 8:02

Yeah.

Speaker 8:03

And what we don't realize is that we as humans take a lot of things for granted. You know, there's a lot of what's called tribal knowledge, right? You know, how we calculate lifetime value and how I want to follow up with a customer in my division is different than in another division, right? And so, what does it mean, is one part of that. You know, how does it relate to something else? And so what you have is three really large buckets of things that go into context. Let's describe what it is and we'll talk about a use case on that. So there's a fundamental set of data. You know, what does it mean, often called semantics. You know, how does this sit in a hierarchy? I have a division, there are departments within divisions, so there are hierarchies. And within that, there's how things relate to each other. You know, an airplane sits at a gate, and therefore they know how to relate to each other. Yeah. That just provides basic information for an AI to do what humans do naturally and be able to understand what's going on. You also have to understand where I am in a process and have memory. I spoke to you last week and I gave you a solution. You called back and said it didn't work. I better remember that, right? And how do I use that for an agent to give you a response? Yep. And of course, something that is becoming incredibly important and should get everyone leaning forward. I'm not sure if you saw the Foundation Capital article, which was that context graphs are the next trillion-dollar opportunity. Yep. And this is because the ability to trace and observe how things are working is actually part of context as well. And that actually gives you a sense of, so maybe do it this way.
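The three buckets just described, semantics and relationships, process state and memory, and traceability, can be made concrete with a small sketch. This is purely illustrative Python; the class and field names are hypothetical, not from any vendor's product.

```python
from dataclasses import dataclass, field

@dataclass
class ExecutionContext:
    # Bucket 1 -- semantics: what terms mean and how entities relate
    definitions: dict = field(default_factory=dict)   # e.g. this division's LTV formula
    hierarchy: dict = field(default_factory=dict)     # division -> departments, gate -> plane

    # Bucket 2 -- state and memory: where we are in the process, what happened before
    process_step: str = "start"
    memory: list = field(default_factory=list)

    # Bucket 3 -- traceability: an observable record of how decisions were made
    trace: list = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.memory.append(event)

    def record_decision(self, decision: str, reason: str) -> None:
        self.trace.append({"decision": decision, "reason": reason})

ctx = ExecutionContext()
ctx.definitions["LTV"] = "gross margin x expected tenure"  # division-specific semantics
ctx.remember("last week: proposed fix A; customer said it did not work")
ctx.record_decision("propose fix B", "fix A failed last week")
print(len(ctx.memory), len(ctx.trace))  # 1 1
```

An agent call would then receive something like `ctx` alongside the user's request, which is what lets it behave like the human who remembers last week's failed fix.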

Rajiv Parikh 9:29

So you're saying we're talking about enterprise-grade context. That's right. So we're saying context means a lot for AI, enterprise AI, not what you're using in ChatGPT. Right. Stateless. Okay, that was another big word. But stateless means it has no memory, meaning every time you go back to it, right, it acts as if it doesn't know anything. Right. So stateful means that it retains memory about what happened and learns from it.
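That stateless/stateful distinction fits in a few lines. A hypothetical sketch: `llm_reply` is a stand-in for any real model call, and the point is that the "memory" lives in the application, which replays the accumulated history on every call.

```python
def llm_reply(messages):
    # Stand-in for a real model call; it can only "know" what it is sent.
    return f"I can see {len(messages)} message(s)."

# Stateless: each call starts from scratch -- groundhog day.
print(llm_reply([{"role": "user", "content": "hi again"}]))  # I can see 1 message(s).

# Stateful: the app accumulates history and replays it on every call,
# so the model appears to remember earlier turns.
history = []
for turn in ["hi", "that fix didn't work", "what next?"]:
    history.append({"role": "user", "content": turn})
    reply = llm_reply(history)
print(reply)  # I can see 3 message(s).
```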

Speaker 9:54

That's right.

Rajiv Parikh 9:54

And so it's like groundhog day. So the biggest thing with context is, it's a big word, it's a complex-sounding word, but really it means I come back to you and I know you, I retain knowledge about you. It's like two friends who meet. That's right. Two people who meet at a bar, and when they meet each other again, they're not clueless about each other. And marketers have been looking at this for a while, right?

Speaker 10:14

Rather than groundhog day, and that's what an LLM out of the box really is.

Rajiv Parikh 10:18

Frankly, it's how most software works.

Speaker 10:20

Right. And so what happens is, it's like going back to, you know, the age-old example. I go to the butcher and he knows the cut of meat and says, oh, I've got something for your mom. I know she's been looking to try this thing. That's context, right? I know you're related to your mom, I know what you ate last week, and therefore here's something I could recommend you'd want to try, because I know your taste profile. That's context. And in marketing, we've been talking about personalization. This is all about, am I giving it.

Rajiv Parikh 10:46

But personalization got kind of bastardized, right? It did. And personalization became junk. Like you get something that says, you know, based on your HBS background, hi, Mr. Crimson, right? Like some nonsense like that.

Speaker 10:58

Or you know, no one wants to start in groundhog day.

Rajiv Parikh 11:00

They start talking about the tree, right? Like it's fake personalization, a fake relationship. Right. So personalization, unfortunately, that term has lost a lot of its flavor, meaning, and impact. So maybe context, or inference, inference from context.

Speaker 11:13

Right. So this is where, why is it a trillion-dollar opportunity? Yeah. Right? We kind of know what it is. Hey, it's memory, and it's memorizing the right things, and knowing and learning things for the company, not just our relationship. That lets me answer your question and act. But it's also about adaptability. You know, right now a lot of processes are hard-coded, like any type of go-to-market journey. If I have AI now, and without getting too detailed, if I have better context, I can allow it to be more adaptable. Rather than having hard-coded if-thens and letting it go down a path, I give it a broader swath of, in this case, you know, here are some opportunities of where I can take this journey next, and let the LLM, let the AI just look at what's going on, take the context and the guardrails you gave it, and choose what's the next step. And again, that's adaptability.
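The shift from hard-coded if-thens to bounded choice can also be sketched. A hypothetical example: the signal names, step names, and the `choose_step` stub are all invented; in a real system the stub would be an actual model call, and the membership check is the guardrail.

```python
# Guardrails: for each journey signal, the only next steps the AI may choose.
ALLOWED_NEXT = {
    "browsed_pricing": ["send_case_study", "offer_demo", "wait"],
    "demo_no_show":    ["reschedule_email", "wait"],
}

def choose_step(context: dict, options: list) -> str:
    # Stand-in for an LLM call: a real system would pass the context and
    # the allowed options to the model and parse its choice.
    return options[0] if context.get("engagement") == "high" else "wait"

signal = "browsed_pricing"
step = choose_step({"engagement": "high"}, ALLOWED_NEXT[signal])
assert step in ALLOWED_NEXT[signal]  # enforce the guardrail before acting
print(step)  # send_case_study
```

The design point is that the journey is no longer a fixed path: the bounded option set replaces the if-then tree, and the model adapts within it.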

Rajiv Parikh 12:04

So now you're seeing that occur, at early stages, in enterprises?

Enterprise Context Explained Simply

Speaker 12:10

Oh, for sure. Actually, we should take a step back and put it in context. Do you mind if I give a little bit of what's going on in the market? You know, right now the pressure is insane. Like, if I look at what came out of the CEO conversations, the Futures Forum, they saw a couple of big megatrends. First of all, this idea of exponential efficiency. People are playing at the wrong scale. Right now, people have been playing with 10%, 20% improvements. Really, if you look at AI-first companies, you're talking about a million, 10 million ARR per employee. Right?

Rajiv Parikh 12:38

You're looking at, that's an annual recurring revenue scale.

Speaker 12:42

I mean, just look at space launch. Elon changed the space launch game, one-tenth the cost of NASA. Right. You know who's one-tenth the cost of Elon? I don't know. Who? The Indian Space Agency. Right? Now that's frugal innovation. You're talking about 10 times cheaper, 10 times faster, 10 times more accurate.

Rajiv Parikh 12:59

Right.

Speaker 12:59

We're no longer at like 20%. We're talking about 10x. So what are the tracks?

Rajiv Parikh 13:02

What are the practical applications you're seeing from companies that are moving at that lift?

Speaker 13:06

For sure. I mean, I think, surprisingly, the first set of companies have been those in regulated industries, right? Mission-critical. This is where you see financial services, or even, like, military. Yeah. Like, why has Palantir had so much success operationalizing AI? Right. And this is partly because they've been able to leverage the AI and have very bounded rules. Again, in this context, I know what I can do, I know the type of process I'm following, and that actually gives AI a lot of context to be able to make very smart decisions quickly. So they can actually automate whole sections of work as well as make better decisions.

Rajiv Parikh 13:44

So where are the humans? Are the humans in that loop, making judgments? Oh, for sure. Yeah.

Speaker 13:48

But let's take a step back and say, okay, where does this sit? I think there are a couple of big megatrends that you have to sit in, right? One, especially for marketers or go-to-market: dashboards die, decision loops live. And what that means is it's not about the data and having humans try to analyze it. It's how do I create learning loops that get better over time? Right. Number two, this means that AI gets a job. It's no longer just a productivity tool like a chatbot. It actually has a process that it's automating. And that has measurability, it has an owner, it has something boards love, which is accountability, right? And this means organization charts melt, because you're now actually starting to talk about processes that span organizations. What's an SDR? Is it really sitting between marketing and sales? Why can't they run a campaign? You know, we're not defending login keys to Marketo anymore. Right. With AI, we allow it broadly across the perspective. What's the outcome? What am I trying to achieve? Because now I can run a campaign just as well, and I'm bounded by brand guardrails. And so what you're seeing is a fairly substantial shift to processes, not only in terms of how fast I can run a campaign, but how fast I follow up. I mean, you're in the middle of all these discussions. Yeah, living this. That's right. And what happens is there's a tremendous amount of learning. And this is where, I think, we'll get into next: learning is continuous and it's cumulative. And this is where the board members are very, very focused. Because when you talk about a learning curve, if companies don't come up that learning curve of how they use their data better and how to start leveraging AI to radically change their go-to-market and their business processes, they fall behind. And learning curves are exponential.
And if you don't get up the learning curve fast enough, you'll always be behind your competitors.

Rajiv Parikh 15:36

So, in essence, context is about ensuring you're building your company's signal sensing and outputs so that they're being built into these learning loops. You've talked about them as reason codes, right? That's right. The notion of decision-tracing systems, right?

Speaker 15:56

Well, I mean, let's talk about marketing, for instance. This is where best practices get encapsulated into artifacts. And a best practice is just part of the context. You start learning, hey, we tried this campaign, and now I've actually captured the trace. AI can actually capture all the surrounding data around that: who was the segment, what was the weather, what were the competing campaigns at that point in time. And now it's giving you a closed loop that humans were having a hard time trying to close.
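A decision trace like the one described can be as simple as an append-only log. An illustrative sketch only; the context fields (segment, weather, competing campaigns) come straight from the example above, while the function name and the numbers are invented.

```python
import json

def log_decision_trace(log: list, campaign: str, context: dict, outcome: dict) -> None:
    # Each entry captures the decision plus its surrounding context,
    # so the loop can be closed later without manual archaeology.
    log.append({"campaign": campaign, "context": context, "outcome": outcome})

traces = []
log_decision_trace(
    traces,
    campaign="spring_promo_v2",
    context={"segment": "SMB retail", "weather": "heatwave",
             "competing_campaigns": ["rival_sale"]},
    outcome={"ctr": 0.041, "converted": True},
)

# The learning loop is then just a query over accumulated traces:
wins = [t for t in traces if t["outcome"]["converted"]]
print(json.dumps(wins[0]["campaign"]))
```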

Rajiv Parikh 16:27

It would take a tremendous amount of work to put in.

Speaker 16:29

Absolutely. And that's a decision trace that becomes a learning loop. And this applies not just to campaigns; I think of it as small decisions.

Rajiv Parikh 16:35

I think of it as inferential marketing.

Speaker 16:37

Inferential marketing. Absolutely right. And what happens is, this is where you and I were talking about, like, service-as-software. All these playbooks, all this interest and all this learning now get into software artifacts that become part of a context, that becomes part of how organizations, how services like yours, start becoming increasingly more performant. Because you bring all that context with you.

Decision Loops Replace Dashboards

Rajiv Parikh 17:02

Yeah. No, I think it's totally changing the way we think about it, the learning loops that we establish. So you recently contrasted Meta's immediate AI ROI in a closed loop with Microsoft's enterprise AI.

Speaker 17:14

Oh man, we're going there. Okay.

Rajiv Parikh 17:16

No, let's go there. That was one of your videos, where payback is gated by capacity and buying cycles. So how can B2B SaaS companies adopt closed-loop learning to prove ROI to their enterprise customers faster?

Speaker 17:29

Well, let's first explain this tale of two cities, which was Microsoft and Meta.

Rajiv Parikh 17:33

Yep.

Speaker 17:34

Um, Meta, you know, they're similarly making a massive investment. I think one was 120 billion, one was 70 billion. Anyway, it was a lot of money. And what you're seeing is Meta got rewarded in the market, versus Microsoft, whose stock tanked 10%. And so, a tale of two cities. But what you really saw was Meta applying it to their own learning. To their own business. To their own business.

Rajiv Parikh 17:57

And what you saw was, I already see it with advertising from Meta, where folks that were just trying to drive results had flocked to Meta, and they put up ads that were not connecting to the actual products they were selling. That's right. They were using deceptive ads. And so with AI, Meta's gotten a lot better at weeding that out, and that leads to a better experience for their users. And so people are more likely to believe the more legitimate ads.

Speaker 18:23

Well, that, and you're seeing leverage on all points, whether it was auto-generation. Well, yeah, it's learning from your context, you know, your type of customers, the type of products you're offering. How do I generate the right type of ads? Because now we're starting to use machine learning to generate different types of ads.

Rajiv Parikh 18:40

Auto generate the ads.

Speaker 18:41

Auto-generate. And now I'm playing at different day parts. I'm learning, you know, the type of campaigns I like to run. And the human creativity still comes in the loop. You still have ideas, you're putting your point of view in there, but there's no hiding from actually closing the loop on all the data.

Rajiv Parikh 18:54

But why would Microsoft take it the other way? Because Microsoft, they are software, right? They have Azure, they have the cloud infrastructure, so they could sell that to many players. Um, they're in various direct businesses, right? So why wouldn't Microsoft get a big lift from all this?

Speaker 19:10

Well, that's like you said.

Rajiv Parikh 19:11

I mean, at one point they owned 40% of OpenAI. Not anymore.

Speaker 19:14

Well, they were the first with a copilot story. Right. I mean, they were really early out there to try to tell a story.

Rajiv Parikh 19:19

Well, I mean, I think, was Copilot too much like Clippy? Is that what it means?

Meta Versus Microsoft On AI ROI

Speaker 19:22

Well, there are three things. You know, you do have a Clippy overhang. Let's just be clear, that's not to be underestimated. But what they didn't do was measure how it was affecting their own margins, and they weren't breaking out the uptake by their customers. And so the market was just seeing a massive investment with no measurement on the back end of what it meant for the business. And so this is where all companies need to think about how to start closing that loop. You know, like you said, this is where AI needs to get a job. And this is where you saw, for Meta, a closed loop, how do I really start changing processes and having a measurable outcome, versus Microsoft, which was investing in capabilities and wanted to embed them in their products, but hadn't actually broken out how much additional sales it drove. So part of it was what I explained; number two, they haven't necessarily gotten the uptake they wanted; and there was maybe a little bit of Clippy overhang. Not to underplay Clippy, that's the thing.

Rajiv Parikh 20:21

Okay, so let's talk about one of our favorite subjects. Okay, the SaaSpocalypse. So you've argued that the SaaSpocalypse is overplayed. Not just because they're paying you guys to research them, but because you're intellectually honest and you're looking into them. And from what you see, you acknowledge that AI is collapsing workflow value and breaking traditional headcount-led pricing. So as value shifts from seats to decisions, yeah, workflows to execution, how must SaaS companies restructure their pricing models to survive? So I'm asking you in two ways: the SaaSpocalypse, right, and SaaS pricing.

Speaker 21:01

So the SaaSpocalypse, absolutely overplayed.

unknown 21:03

Right.

Speaker 21:04

Overplayed. So, three things. Like, I'm Salesforce, everyone sells up. I can usually count on a 20% uplift. I've just got negative churn, you know, net retention. And so I'm constantly growing just because of my install base. That is now broken. You don't have that same growth because seat counts are down. You don't have the number of seats because AI is taking roles, or people are starting to downsize. There's a direct correlation, you can argue. Number two, the add-ons are being eaten by faster AI-first players.

Rajiv Parikh 21:49

So do you mean other players or people building things in cloud?

Speaker 21:52

Both.

Rajiv Parikh 21:53

Or OpenClaw.

Speaker 21:54

Both. Because what happens is these add-ons, and normally you would have added, Memo Claw, which we can get to, um, is a whole different thing. But the margin compression is happening partly because the add-on natural growth, the promise of SaaS, which was negative churn, is just not there anymore. So the economics have fundamentally changed. The market's repricing for that. Number two, they're really having to invest forward. They're not going to get the returns yet. And that's what happened to Microsoft. They're investing forward of seeing the back-side return of their customers adding on additional sales. Right. And number three, the market uncertainty overall. I think everyone got wiped out in the SaaSpocalypse. Now people are going to start repricing. Is it a buying opportunity yet? Not quite yet, because you're still looking at the business.

Rajiv Parikh 22:34

So when you buy like that and you see an opportunity, that's where you're taking a risk. That's right. So are you allowed to say who your favored ones are, the ones you believe will come out of this?

Speaker 22:46

Well, you know, this is where I think a lot of folks will say the same thing, which is that Salesforce is just oversold. At the end of the day, they're just a good prototypical example. They're a system of record for processing. Well, remember, there are three things going on with Salesforce. Distribution matters, by the way. Let's just be clear. Number two, they actually have a huge amount of context. And they're actually investing in it. If you want to see how companies are starting to think, as an application vendor, coming back and saying, hey, we own everything around the customer, and the context around how we actually sell, how we actually market, how we actually engage and grow a customer and a relationship with them. They have a tremendous amount of, you know, what does lifetime value mean? How do I invest? What is a campaign? And so for a lot of that, you don't have to retrain a model. They actually have that context for you, and the execution that feeds that model, and the way to pipe all this data back. Right. So they're actually building some of that loop out of the box. But of course, the specifics for your vertical, these are things that you have to build on top yourself. But oversold: they have a lot of the tools and a lot of the data that's going to keep them relevant. Now it gets back to your other question, which is pricing. And where is that going to be? Right.

Rajiv Parikh 23:59

And for services, pricing by workers was easy.

Speaker 24:01

And now people want to price by outcome, which is still a work in progress. When they tried to go to outcome-based pricing, I mean, the amount of pushback was tremendous. And they immediately had to retreat from their usage-based pricing. Yeah. Right. Because it was just unexpected pricing for a lot of folks. And so there's still an open question about where pricing is going to land. Platform pricing, those who are selling data context, those players are coming up as well. There are new players, as Foundation Capital would attest. They can price as a platform play because people are used to paying for that. Seat-based pricing switching over to outcomes is something that people are still working through.

Rajiv Parikh 24:36

It's a tough one.

Speaker 24:37

It's still an open question about how they price attractively.

Rajiv Parikh 24:40

So you've stated that in the AI era, the integration tax becomes an AI tax. How do system integrators need to pivot their service offerings to help enterprises collapse layers and build unified systems of context, rather than stitching together disparate apps?

Speaker 24:55

Okay. So a couple things are going on.

Rajiv Parikh 24:57

Wow, this is new. Are we gonna call it a fabric now?

Speaker 25:00

Well, let's stay away from words that are overloaded. There are a lot of ways that fabric can go. But I think what you're seeing, interestingly enough, is verticalization absolutely happening, right? If you look, you know, that was actually from a conversation about Zoho, which is really interesting. They understand this from the Amazon model. Zoho does. Zoho is an overall application software suite. Very SMB.

Rajiv Parikh 25:23

Very SMB.

Speaker 25:24

Yeah, and they know their market very well. But they're a great example on this integration tax, because they know the workloads, they know what the LLM demand is going to be. Because a lot of people are very surprised by how costly these AI tools can be as tokenomics really take hold. And they know the workloads going all the way down. So they actually have their own chips, all the way down to their own data centers, because that verticalization allows them to serve their market. They know exactly what the SMB needs. They're a very low-cost provider for a very price-sensitive market. And you're seeing verticalization happening across the board as well. Google is going to be absolutely one of the low-cost providers. They have the chips, they have the data centers, the network, all the way through to how they actually serve as a data and AI platform. And you'll see others really focusing in on how they actually deliver on this verticalization. Now, the other question you asked was, you know, how do SIs begin to deliver on helping, whether you're Walmart or working with the large system integrators, right? I mean, how are they helping you make choices on the different parts of the technology stack that let you actually better optimize? Right? This is where you start to think about context: where do I want to apply traditional machine learning? Because that's going to be fundamentally much cheaper than using generative AI. I'll give you an example. Say someone wants to do enrichment on a million rows. I mean, a standard marketing thing. Hey, I've got a million records.

Rajiv Parikh 26:59

Enrichment on a million rows. What does that mean?

Speaker 27:01

Well, I'm trying to enrich a list of prospects. I want to add email addresses, add industry, and extend it with which campaign I want to go after them with, with high probability.

Rajiv Parikh 27:11

If you go after them on Facebook, you'll need their social profiles, and you'll need a personal email connected to their company email.

Speaker 27:19

That's right. You'll need their name, where they live, and maybe some evaluation, like which campaign they'll be most likely to respond to. Right, right. So there are multiple steps that go in there. And I've seen customers actually run this with an LLM and get surprised by the massive token cost it took to enrich these records row by row. Machine learning works just fine, right? This is where traditional rules and machine learning do a great job on data quality, especially at large scale, where you can deal with a bit of discrepancy. And then maybe for the last mile, you do some enrichment or auto-generation of emails that are specific to each one. So knowing which campaigns work in which context is actually very important. And this is where not only are you trying to bring in some of the expertise of the SIs, but you're also looking at how you deliver as a whole system. And I'll give you an example from a field where this has been really well proven, which is software development.
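The hybrid approach Mike describes can be sketched in a few lines. In this illustration, a cheap deterministic lookup (standing in for traditional rules or machine learning) handles the bulk of the list, and only the rows it can't resolve are deferred to the expensive LLM path. The record fields, the lookup table, and the per-row cost figures are all illustrative assumptions, not any vendor's actual pricing:

```python
# Route each record down the cheap path when rules can resolve it,
# and queue only the remainder for the costly LLM "last mile".
RULES = {  # domain -> industry; a stand-in for a cheap rules/ML lookup
    "acme.com": "Manufacturing",
    "globex.io": "SaaS",
}

def enrich(records, llm_cost_per_row=0.01, rule_cost_per_row=0.00001):
    enriched, llm_queue = [], []
    for rec in records:
        domain = rec["email"].split("@")[-1]
        industry = RULES.get(domain)
        if industry:                      # cheap deterministic path
            enriched.append({**rec, "industry": industry})
        else:                             # defer to the expensive LLM path
            llm_queue.append(rec)
    # Back-of-envelope cost: most rows never touch the LLM at all.
    cost = len(enriched) * rule_cost_per_row + len(llm_queue) * llm_cost_per_row
    return enriched, llm_queue, cost

records = [
    {"email": "a@acme.com"},
    {"email": "b@globex.io"},
    {"email": "c@unknown.example"},
]
enriched, llm_queue, cost = enrich(records)
print(len(enriched), len(llm_queue), round(cost, 5))
```

At a million rows, the gap between the two paths is exactly the "surprise" Mike mentions: a per-row LLM call at even a cent per record is four orders of magnitude more than a rules pass, so pushing 95%+ of rows through the cheap path is what makes the economics work.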

Rajiv Parikh 28:11

Okay.

Speaker 28:12

It's very similar to marketing or running a campaign. In software development, when an individual tool helps me build software or write emails faster, I absorb that productivity myself. It usually falls just to me, and it doesn't translate to more code or more campaigns. You have to change the entire marketing or sales process: QA, documentation, change control, and how I get this into deployment. That whole process needs to change, and that's where you see a massive simplification. This is why you have large SIs helping people redesign their whole process end to end, bringing the right context so the guidance to your AI at each step actually matters for your process. That's what actually delivers.

Rajiv Parikh 28:55

That's where there's a massive opportunity.

Speaker 28:56

And there's a massive opportunity because each company is different, especially when you start getting into things like campaigns. Software is a little more repeatable, hence it's done a bit more horizontally. But when you start thinking about companies, industries, local markets, these are all very specific. How do you find what works for you? This is where SIs are increasingly becoming embedded, because it's not just a program, it's an iterative process to help you experiment and keep running.

The Integration Tax Becomes AI Tax

Rajiv Parikh 29:21

It's not like before, where you spec out a project, go back and forth getting the spec down, then write everything, implement, and go through iterations, right? That's right. Multiple releases. Now you're literally building an initial spec, and your effort goes into continuously honing and tuning it. That's why I like the term that's come out of a lot of our discussions: contextual and inferential marketing. Right. I can supply you a knowledge base of rules and things I've seen over time; that's explicit knowledge. That's right. But the tacit knowledge my people have developed working with clients, they can't explain that well. AI can help uncover some of it. That's right. But a lot of it is just watching them do things. So then we need to build the learning loops for the system to capture that and feed it back in. Absolutely.

Speaker 30:11

Absolutely, that's fair. Let me give you an example. We were just dealing with a large SI, and there's a huge difference between AI-native and AI-laggard SIs. Both are doing AI, but here's the difference. There was a large implementation where the bid was for $2 million and 160 people on the ground to help a company build out a program. An AI-first SI came in and said, look, I'll do it in six months rather than two years, and I'll do it with 10 people. What they actually did was deliver it in three months at one-tenth the cost, and they left them with a solution that continues to learn. Working with them is night and day, right?

Rajiv Parikh 30:56

It's night and day. We just demonstrated yesterday how, with a client, we built a whole partner academy to arm their strategic partners. That's right. The company's strategic partners, you know the company. Yes. And we built it inside of a month, for a lot less than it would have cost them to build it themselves. But now we're going to tweak it and hone it, that's right, take it from inside to outside, and make it a marketing tool, an education tool, and a way to certify people. So it's a whole system, but it can literally be done in less than a month. And the actual startup period was only two days.

Three Takeaways And Closing

Speaker 31:32

Well, this is why partnership is so important. Number one, whether an SI is AI-native or not. Number two, projects become iterations with embedded people. Because, to your point, the first cut was just getting you started and getting the infrastructure in place, and then working with you. You need someone who understands both the process and the technology, someone who can say, okay, if we just added this data or captured this as part of your campaign, we can bring that back into context and the system will learn better, and we can automate or bring additional AI to that stage. Awesome.

Rajiv Parikh 32:06

Well, Mike, this was amazing. So let's wrap up. What would be three takeaways that a marketer or go-to-market leader should think about with regard to AI, how they should approach it in their organizations and implement it for their teams? The reason I'm asking you is because you've seen it as the one on the ground building it, right? You were offering those capabilities to companies, and now you're seeing it researched at scale, talked about at scale, by multiple vendors, by the companies buying from them, and by the folks implementing it. Right. So what are three things?

Speaker 32:40

Three things. One, you just have to get started on a program. Take your most innovative builder, someone who just wants to get going, and start them on a program. Number two, get them help, right? You need to bring in some external expertise, because you're caught in the day-to-day. You need someone who can very quickly bring technology, data, and an understanding of best practices to bear, to help you build out some of the context and data that your AI is going to be building on top of.

Rajiv Parikh 33:08

Right, find your innovator, get them started.

Speaker 33:09

Get them started. And number three, you have to co-lead. This is what's hard for marketers, because oftentimes you just want to execute. You don't think about the fact that you're really building out a platform here. You have to work with the folks who own your data. You have to think about how you work with your CRO, because it's very easy for lines to blur. Yeah. Who owns a campaign, right? Who owns demand gen versus customer generation?

Rajiv Parikh 33:36

They're definitely blurring.

Speaker 33:37

And marketers have the right skill sets, by the way; they just don't have the right style. So the question is, how do you make sure you're collaborating with the full go-to-market team? And that needs to be done early.

Rajiv Parikh 33:47

That's fantastic. Mike, thank you so much. My pleasure. This was great. It's wonderful to have you come off of so many big events and chat with us about all the insights and everything you've learned.

Speaker 33:59

It's been great.

Rajiv Parikh 34:00

Yeah.

Speaker 34:00

Until the next time.

Rajiv Parikh 34:02

All right, next time. All right, thanks for listening. If you enjoyed the pod, please take a moment to rate it and comment. You can find us on Apple, Spotify, YouTube, and everywhere podcasts can be found. This show is produced by Anand Shah, with production assistance by Tarant Talley. We have this amazing Utah-based production crew that helped us, along with some of the F Funny folks. Edited by Laura Ballant. I'm your host, Rajiv Parikh, from Position Squared. We're an AI-native growth marketing company based in Silicon Valley. Come visit us at position2.com. This has been an F Funny production, and we'll catch you next time. And remember, friends, be ever curious.