Episode Transcript

Cold Open On AI Disruption

SPEAKER_04 0:00

So I actually think that for some of the really advanced high-IQ jobs, they're going to get hit a lot faster, because the arbitrage between the cost of skilled labor and what a model can do is going to be so visible.

SPEAKER_05 0:11

Is it coming for our tasks or jobs? It's certainly coming for our tasks. Jobs will be a little bit later, right? As you look at the whole ecosystem, everything that you provide as part of what you do in your work, the different stakeholders you work with, there are certain things that are going to make life much easier.

SPEAKER_04 0:29

I realize that back, I guess, in the olden days, and maybe to some level now, you're relying on a few things. It's goals, high-level goals, whatever they are; guardrails, if they're even established; and ultimately good people. You have to pick the best people with the best judgment and put them in those situations where they can operate independently. There is no other way.

Welcome From Park City

Rajiv Parikh 1:04

And what I deeply appreciate about them is their deep thinking about technology, business, history, and societal trends. And what's unique about them is that they don't just read and notice things and infer things. They're actually hands-on practitioners. So for all the folks who say they know a lot about AI and say they're up to speed on this, that, or whatever, these guys are actually in it, playing with it, using it, and they have very different perspectives from a lot of what you read. We discuss humanity, empathy, and technology from a tinkerer's point of view. So you're really gonna enjoy it. Hello and welcome to the Spark of Ages podcast. We have a special one today from Park City, Utah, during our growth marketing summit, and we're doing a special roundtable discussion on the future of AI and its larger impact on society, enterprises, and business leaders. Two amazing guests, folks I love and have known over the years. First, Neil Shepard. Neil is the founder of the American Dream Index, a seasoned growth executive with 25 years of experience in Silicon Valley. He most recently served as VP of Growth at Cohere, an enterprise LLM company, where he led marketing and digital strategy, and he has led marketing and digital strategy at organizations like BCG, Scale AI, PayPal, and McKinsey. Neil is an expert in product-led growth, data science, and leveraging generative AI to help companies scale their revenue and user acquisition, combining his strategic expertise with his lifelong passion for hacking consumer devices and building models. Then I have Amit. Amit Malhotra is a private equity operating advisor and technology builder with over two decades of experience at the intersection of AI, digital transformation, and business growth. He has led massive turnarounds and rebuilt technology stacks from scratch for major brands like Buybuy Baby and 1-800 Contacts.
Amit is an expert in enterprise architecture, machine learning, and turning complex technology into a growth multiplier, bringing a hands-on practitioner's perspective to the AI landscape. So, Neil and Amit, welcome to the Spark of Ages.

SPEAKER_01 3:20

Thank you. Thank you.

Rajiv Parikh 3:22

Great to have you here. This is our fourth event this year. Neil, this is your third, and Amit, this is your first.

SPEAKER_05 3:31

Yes. Yes.

Tasks First Jobs Later

Rajiv Parikh 3:32

So we've spent so much time chatting together at South by Southwest, I thought you two would be just great to talk about this. Let's first talk about technology and leadership, and this is for both of you. Every past technological revolution, the printing press, electricity, the internet, created more jobs than it destroyed. But they also permanently changed what work even meant. Both of you have spent decades at the bleeding edge of technology. We've heard for years that AI is coming for our jobs, and it feels that way just about every day. So is AI actually coming for our tasks? And if so, what does that leave us humans to do? Neil, do you want to start?

SPEAKER_04 4:11

Yes. So I'm gonna say something here: AI right now is absolutely coming for our tasks, because in many cases it's better, it's certainly faster, and in many cases it's also cheaper. And we haven't seen anything yet. The capability of AI models has a long way to go, and we can expect to see exponential growth in the next couple of years. So yes, it is. But long term, I think the same thing is going to apply as in every other industrial or technological revolution. Over time, humanity will adjust; we will find new jobs to fill the gap.

SPEAKER_05 4:42

Right. The way you framed it: is it coming for our tasks or our jobs? It's certainly coming for our tasks. Jobs will be a little bit later, right? As you look at the whole ecosystem, everything that you provide as part of what you do in your work, the different stakeholders you work with, there are certain things it's gonna make much easier. Yes, if your job is literally copying and pasting from one spreadsheet to another spreadsheet, and that's your entire job. I heard you talking about people doing that. There are people doing that. And they're not few; there are a lot of people whose livelihoods depend on it. But you know, I don't think anyone should be spending their life cutting and pasting from one spreadsheet to the other. There's probably something better.

Rajiv Parikh 5:25

But Neil, you're getting at the kind of initial shock value as to why it may come for your jobs. Is it because of the suddenness, the rapidity?

High IQ Work Gets Hit

SPEAKER_04 5:33

I think there are two things happening here. One is that this is happening way faster than any previous technological revolution. Even the advent of the internet took a long time to really sink into everybody's psyche and be used by a large majority of the population. This one is clearly happening a lot faster, and the capabilities are clearly a lot more impressive, and in some cases scary, depending on which side of the fence you're sitting on.

Rajiv Parikh 5:55

Yeah, and we're talking to two folks who actually play with these things. Okay, so Neil, you've described a stark workforce bifurcation between entrepreneurs who leverage new tech and handle-turners who process routine work and are getting left behind. However, AI agents are now moving beyond routine tasks to tackle high-entropy, complex workflows. As AI rapidly expands its capability to measure and automate cognitive labor, what is the societal breaking point if 80% of the workforce falls into the handle-turner category?

SPEAKER_04 6:29

Well, we're going to find out in the next three to five years. It's not going to take very long before it's very obvious. In fact, I think we're going to see a lot of this play out this year. For people who have handle-turning jobs, just as Amit mentioned, like moving stuff between spreadsheets, that should be automated fairly quickly. The reality is it'll take a while, because enterprises don't necessarily respond quickly to technological innovation. For cognitive work, it's going to happen probably even faster, because the people who are able to take advantage of these new models that supplement cognitive load are going to be the ones using them. They're going to move faster; they're going to see the benefits. So I actually think that for some of the really advanced high-IQ jobs, they're going to get hit a lot faster, because the arbitrage between the cost of skilled labor and what a model can do is going to be so visible.

SPEAKER_00 7:18

What's an example of that?

SPEAKER_04 7:20

I would say a good example is data science. Data scientists tend to be well paid. It's a very difficult job. They have a lot of tools, and there's a lot of intrinsic knowledge they need to make use of in order to do their jobs. I'm now seeing that some of the most capable models can do data science at, I would call it, a third-year college level, and that's only going to get better. I've had models running for hours processing data and actually finding errors in it, and they do a really good job these days. That's something that surprised me.

Rajiv Parikh 7:50

Wow, that's amazing. Amit, having seen multiple technology hype cycles, and we've had many conversations about this, how does the current wave of agentic AI differ fundamentally from early expert systems? And what lessons are modern enterprises completely ignoring today?

SPEAKER_05 8:06

It's definitely caught the imagination of the public. People are projecting onto AGI; they're talking about how everything is going to change, and all those things. When we had expert systems and we were like, this is a dog, this is a cow, it's a mammal, blah, blah, blah, it was a little bit more like: here's a solution to a problem, here's a better way to do things. It felt a little force-fitted. And we didn't have the data at the time. Now we have this incredible way of processing large amounts of data. I see the possibility. I also see a big chasm between the possibilities and what actually lands and sticks, right? Like, yes, everybody's built their little Replit demos and apps and all those things, but it'll be interesting to see the first set of companies that truly unlock that value. So I see it more as: if you're a 10x engineer, a 10x marketer, whatever, you're gonna be supercharged. It's gonna be amazing. I mean, it's been amazing.

Rajiv Parikh 9:03

So what are enterprises missing, though? What are they missing out on?

SPEAKER_05 9:06

Enterprises have codified a lot of jobs into very discrete tasks. Enterprises have tried to really narrow down what every person does into little boxes, as if it's a factory. And if you think of an enterprise as a collection of people who are aligned toward a common goal, then you may have a more fluid view of what would happen to these people if they have incredible tools to collaborate, to work together.

Rajiv Parikh 9:34

So it's like the quantum of work is based around the limitations of the human today.

SPEAKER_05 9:39

Yeah, yeah. And that's sort of boxed in too. They've boxed in processes that have been carried over from past paradigms. But there are companies, I guess. I mean, even without AI, there have been companies that are radically different in how they operate, everything from GE aircraft to Tesla, which are very collaborative. And if these people are given the tools to just run as fast as they can, and they don't have to monkey with expense reports or something, then it's going to be incredible. So I'm excited about this.

Rajiv Parikh 10:05

So, companies that are freer, that give freedom to play.

SPEAKER_05 10:10

They have playtime.

unknown 10:24

Right.

SPEAKER_01 10:24

This kind of gives credence to that Google thing about the work week: four days of work, one day off for your own projects.

SPEAKER_05 10:29

Yeah, I don't know if Google is doing that anymore. Google is not doing a lot of things anymore. But you'd know better; you know the Google folks too.

SPEAKER_04 10:35

I think the 20% thing went away in reality some years ago, unfortunately. Yeah.

Rajiv Parikh 10:42

So now they're just working 90 hours a week putting the latest agentic thing in. Yeah.

SPEAKER_04 10:47

Well, I would say the ML teams are probably doing that, yes.

Rajiv Parikh 10:51

Not everyone, maybe. Okay. Uh, Neil, as the marginal cost of producing digital content and executing tasks trends towards zero, economic value is predicted to migrate towards cryptographic provenance, status, and human consensus. Will B2B marketing have to abandon the traditional inbound content playbook entirely and instead rely solely on proof of personhood and unsimulatable human relationships to sell products?

SPEAKER_04 11:17

What I think you're asking is: how is this going to change B2B marketing, and what are people going to have to do radically differently in this environment? We're seeing the cost of content go toward zero, depending on what kind of content. We're seeing the cost of even applications tend toward zero at this point. But what is absolutely costly is human attention. Human attention is already being bombarded by social media and various other sources, and we don't have enough of it. That's a fact of the digital world we live in. So to get around that, you have to think differently. You have to come up with very unique content that people really value, that has some thought leadership behind it, so that people will gravitate toward it. You need to stand out a lot more; churning out the same old content you can find anywhere else, forget it. I think that's dead already. The second thing that is going to come up a lot is brand and trust. People trust companies and they trust brands, because it cuts through a lot of the noise. And it means they can create a short list of which companies they will buy from much more quickly. They heard it from a friend; they've seen some thought leadership.

Rajiv Parikh 12:30

That's brand, it's uh experiences.

SPEAKER_04 12:33

Yes. All of those. And all of those are essentially shortcuts in the mind, to get to the point where I've heard enough that I'm gonna put these people on the short list and do some research from there. You must be on that short list, you must have some attention, you must have a brand going forward.

Rajiv Parikh 12:49

Is there gonna have to be a differentiation between a human and a non-human recipient?

SPEAKER_04 12:54

Well, I'm not sure. Eventually we're all gonna start looking the same, frankly, Rajiv.

SPEAKER_01 12:59

You think so?

SPEAKER_04 13:00

I mean, who knows? Maybe within 10 years it'll be hard to tell the difference, and the Turing test will be behind us at that point. But I will say this: the attention thing is real, and I think at some point it's even going to apply to machines. You need that brand in order to cut through.

unknown 13:21

Yes.

SPEAKER_04 13:23

We've only got so many hours in the day. We have only so many decisions to make in a day. It's a lot of cognitive load to make a decision, depending on how much data is involved.

Rajiv Parikh 13:30

Even if you have infinite room to make decisions, you're not gonna make that many. You can only make as many as you can handle.

SPEAKER_04 13:37

Yes. There is a certain amount of cognitive load people are willing to put up with during their day. And right now we are essentially having our attention strip-mined by social media and various other things. So we find shortcuts. That's what happens. Humans adapt.

Rajiv Parikh 13:50

And that's what branding does.

SPEAKER_04 13:52

Yeah.

Guardrails And Human Verification

Rajiv Parikh 13:52

Yeah. It helps you with that shortcut. So Amit, a major risk of agentic AI is the Trojan horse externality, where agents perfectly optimize for measurable KPIs but silently accumulate technical debt or misalignment that doesn't reveal itself until a crisis hits. So how do you architect human verification into your deployments when the speed of agent execution vastly outpaces human auditing bandwidth?

SPEAKER_05 14:18

At this point, it's by focusing the agents on very small step functions, small components, right? And then always being in charge. And I personally feel it. Sometimes during a long session, if you will, I'm getting tired, and I find myself giving in to the suggestions of Claude. It's like, no, maybe you should do it this way. And I say, yeah, yeah, maybe. I just give up. I find myself losing, right? And then I have to slap myself and be like, no, no, no. I am the boss here, right? You have to be in control. You have to understand all the possibilities. You can talk to the agent and find out what could possibly go wrong, how it could screw up, right? And it's actually surprisingly honest. I actually installed OpenClaw and I asked, how would someone hack you? And it's like, well, someone could ask me your password and I would give it to them. And that was the first thing. So you can actually get good data from there. I think the dream of an agent going out into the wild and doing interesting things is a little ways away. At this point, you put it on tight use cases, and you check it. I think I shared with you my example of calling Triple A.

SPEAKER_03 15:24

That's right.

SPEAKER_05 15:25

Triple A, last week, while I was stuck in the middle of nowhere in South Texas. Right.

Rajiv Parikh 15:29

Did it do the whole interaction, or most of it?

SPEAKER_05 15:31

Most of the interaction. So, the good news first. I've been calling a few legacy companies in the last two weeks for some reason. I called Spectrum, the cable company. I called Verizon, I called T-Mobile, and I called AAA, very legacy companies. I got human operators within 30 seconds, so it was great. All US-based, fantastic conversations, got great deals and rebates. It was amazing. The selling side? No, no, no, this was customer support. Right, yeah. They just cut my bills, blah, blah, blah. Now, Triple A, because this was in the middle of nowhere, they said it's eight hours to get a truck out to you. So I'm sitting there calling to ask, hey, what's going on? Where's the truck? And this time it routed me to an AI. Kind of makes sense: we've had your first interaction, we've set up a workflow for you, we're going to route you through. And it said, yep, a truck has been dispatched for you. And just so you know, here's the name of the trucking company: six terabytes, so many gigabytes, Lakeshore Towing. So its agent told you that the name of the company is six terabytes? Six terabytes, something gigabytes, Lakeshore Towing. So they're using AI, and they're not routing it to the humans. Of course, I knew what a terabyte is in the context of being in the middle of nowhere in South Texas, but I don't know how many people would understand it. So I think we are in the early stages. There's a lot of promise, and it's super exciting, but I've worked with teams where we started with a larger scope, and by the time we ended up putting so many guardrails on what it needed to do, some of the developers said, well, I should have just written this in code to start with, right?

Rajiv Parikh 17:19

In a deterministic way. Yeah, exactly. And I've had friends tell me that. Sometimes, instead of letting the model keep going, because LLMs by their nature are probabilistic.

SPEAKER_05 17:29

Yeah. Yeah.

Rajiv Parikh 17:30

At some point, maybe you just turn it into something deterministic.

SPEAKER_05 17:32

What is it? LLMs are like a box of chocolates. You never know what you're gonna get, right?

American Dream Index And Inequality

Rajiv Parikh 17:37

You don't want your payroll to be probabilistic. So Neil, I know you play with these things all the time too. I mean, you're writing your own code for this. First, say what the American Dream Index is about.

SPEAKER_04 17:49

Sure.

Rajiv Parikh 17:49

Because I think you're talking about this widening gap between salaries and a good life.

SPEAKER_04 17:54

Yes.

Rajiv Parikh 17:55

And AI is potentially an accelerant to inequality. So talk about what that is and why you did it, and then what you're trying to do as you're building it.

SPEAKER_04 18:07

Essentially, I'm writing about a trend, and it's not a great trend. It's about affordability and the ability of Americans to afford the American dream. So think of this as Credit Karma for the middle class that might be working and living in high cost-of-living locations. There are a lot of people who are living hand to mouth or not saving enough for retirement, and they kind of know they've got a problem, but they've never really quantified it. So I'm creating this index to make it really obvious where people can afford to live based on where they live, what they do for a living, and whether they have children or plan to. All of that data is publicly available, and I'm putting it in one place and providing an index, plus a bunch of advice on what people can actually do to improve things. Wow. So that's the goal. I'm extensively using AI models to code this up, and I've had an excellent experience. They do hallucinate, they do make mistakes, but it's becoming easier and easier to correct those mistakes as you go on.

Rajiv Parikh 19:06

So are you in the same situation as Amit, where you have multiple agents checking each other, pushing back on each other?

SPEAKER_04 19:14

That's advanced level stuff. I have enough pushing back in my life already.

Rajiv Parikh 19:18

But I'm going to ask: are you fighting with Claude? Are you exhausted by Claude at some point?

SPEAKER_04 19:24

You know what? I think there's a crawl-walk-run thing going on here, and I know what run looks like. Crawl is basically: you put a bunch of prompts into an LLM and you get it to do some very basic small task that it can't screw up too badly on.

SPEAKER_03 19:35

Yeah.

SPEAKER_04 19:35

Then you move on and you're like, okay, I want to give it bigger tasks. And it comes back, and it might screw up or not, but you're pushing the trust boundary. And then you're like, you know what, here's the problem: I have to wait sometimes five or six minutes for these tasks to come back, and I can't context-switch myself, so I waste that six minutes waiting for something.

SPEAKER_03 19:54

Yeah.

SPEAKER_04 19:54

So you realize you want to be more effective. Now I want to run two models in parallel doing different things, now three, so I can be productive and have all these models essentially being my little workers. And nirvana, which we're not at yet, at least I'm not there yet, Amit, you may be, is to have the models fight each other to do all this work so I don't have to. That would be the thing.

Rajiv Parikh 20:13

Dueling with each other, collaborating with each other, coming out with better ideas.

SPEAKER_04 20:17

You missed something. You should have done this better. I found a UX error, or whatever it is. And then I just come back after my coffee, and it's just done. That'd be lovely.

Rajiv Parikh 20:26

That'd be amazing. But not yet.

SPEAKER_05 20:28

Not yet. Yeah, similar thing. Sometimes they make incredible progress so quickly and are done with so many things. Like, I'll start a project thinking, maybe over the next few days or a month or so, I was working on a paper, I'll have this done. But then sometimes it's done in 30 or 40 minutes, or maybe a couple of hours. And I haven't thought that far ahead about what I want to do with this information. I sometimes have to go for a walk, right? Like, I literally go for walks. How do I respond to this? Because I don't want to give a bad response, because that's gonna be in its memory.

Rajiv Parikh 21:01

Letting your own subconscious do its processing.

Imperial History For Delegation

SPEAKER_05 21:03

But I can't do it that quickly on that problem, right? And I'm very thoughtful about what I'm telling it, because it's going in its memory, and it's gonna go down the paths it's gonna go down. And I've tried forking my sessions, but then I realized I ended up with three or four forks and had no idea where I was, right? So there's your tech debt. Yeah, yeah.

Rajiv Parikh 21:25

Yeah. We're gonna shift gears a little bit and talk about history. So Amit, you've pondered how the British Empire managed vast territories like India despite a multi-month communications lag. So when we deploy highly autonomous AI agents that execute decisions faster than human oversight can track, what lessons in decentralized governance can modern enterprises learn from imperial history?

SPEAKER_05 21:50

Oh, that's such a good question. Because the way I was thinking about it: a lot of people today always want real-time data, real-time insights, accurate information to make decisions. And I was thinking, wait a second, people in the past made big decisions. They accomplished a lot, the British or the Mongols or whoever, with very little information. So what kind of directives were they giving? I think it was the boundary conditions, the win conditions, how they could win, what they could do. Picking the right person was a big thing. And maybe, you know, we look back and it seems like they accomplished their goal, but maybe that wasn't their goal. Remember, the British were in India for maybe two or three hundred years before they even decided they wanted to monkey around with what was happening inside India. And it was Cornwallis, who came back from the American Revolution and said, well, I'm not going to do anything I did in the American colonies, we're going to do the opposite, who gave them their first break. So maybe it wasn't planned, right? So I guess, to answer your question about how you could use that insight: I do think that some of the British expansion was just how it turned out. If you read the notes from the parliamentary proceedings of the British government, they were not happy with what the East India Company was doing. It was interesting to see what their stock price was doing at the time. You can actually read the transcripts.

Rajiv Parikh 23:05

So they were not happy. In what way?

SPEAKER_05 23:07

They did not want the East India Company to take over Bengal.

Rajiv Parikh 23:11

But they were funded. They were chartered to be literally a government.

SPEAKER_05 23:14

They were chartered, but this was the first corporation, or the second; I think the Dutch would disagree. I don't think people knew what corporations were, or the powers of corporations, at the time.

Rajiv Parikh 23:24

Yeah.

SPEAKER_05 23:24

So they were, I mean, this is one of those things you throw out there. Yeah, they were like, wait a second, these guys went out there, like they IPO'd a whole country, right?

Rajiv Parikh 23:32

So I think what you're getting at there, though, is that you were putting a human in charge across that distance. There wasn't the ability to go back and forth and get approval.

SPEAKER_05 23:47

Yeah.

Rajiv Parikh 23:47

Now you have the instant ability to get approval. And then with agents, you have agents that may make decisions faster.

SPEAKER_05 23:53

So they had a lot of autonomy, right? They started amassing arms because they were fighting the French colonies and the Portuguese colonies; that was their main thing. That's why they started arming themselves. And then they said, well, we are not getting our goods, because the local king is kind of goofing off. So then they said, well, we need to get our supply chain sorted, so we need to, you know, take over Bengal, right? That's how it started. And then one thing led to another. So I think with agents, we do want them to have a goal. I don't think at the time the British actually said, we are going to go and conquer this. They kind of fell into it. They were like, well, turns out now we have this country. What do we do?

Rajiv Parikh 24:29

Neil, do you have a point of view on this?

SPEAKER_04 24:31

Well, as a Brit, I'm expected to.

Rajiv Parikh 24:36

Notice I didn't throw that at you.

SPEAKER_04 24:38

I'm just happy to have a simple question for once. But yes. The one thing that comes to mind is, you're right, a lot of this is what I would call scope creep. Like, there are people there on the ground, they have a few things going for them, and they take initiative, and hopefully they've got a good moral compass, not always, as we know. But when you were talking about the fact that there's that huge latency, people sent over there who can't communicate for months, I realized that back, I guess, in the olden days, and maybe to some level now, you're relying on a few things. It's goals, high-level goals, whatever they are; guardrails, if they're even established; and ultimately good people. You have to pick the best people with the best judgment and put them in those situations where they can operate independently. There is no other way.

SPEAKER_01 25:25

That's amazing.

Pareto Oversight And Redesigning Work

Rajiv Parikh 25:26

That's a great setup for later on. Okay, a final question for both of you in this segment. The future of work can be viewed as a collision of two cost curves: the exponential decay in the cost to automate, driven by compute, versus the biologically bottlenecked cost to verify, driven by limited human time and expertise. So, should enterprises and society at large invest in human cognitive augmentation? Or is human oversight fundamentally unscalable? Either of you can take that.

SPEAKER_05 25:59

I've thought about it. This is what led me to the other agents. Early on, before I started doing more agents, I was just cutting and pasting between all of them: you know, this guy said this, what do you think? What do you think? What do you think? And then I don't have an answer. I think you start with the lower-level tasks and, as you build trust, you go to the higher-level tasks. And so it is nearly impossible at this time for humans to verify if you're doing natively agentic projects, right? If I'm doing a check-in for some code and I change some functions, of course I can see what the agent did, and that's what I did early on. But right now there are some projects that are black boxes for me, because it's so far in there. Sometimes I get curious. I say, document this, explain it back to me, that kind of stuff. Have another agent with another LLM explain it back to me. That's how I try to process it. But no, you just try to diminish the risk. Yeah, yeah.

SPEAKER_03 26:55

So one of your talk.

SPEAKER_04 26:56

Yeah, so my opinion is: given the vast cost savings and the fact that agents can operate much faster than humans, we're gonna be hitting Pareto at some point. The goal is gonna be Pareto: give as much work as possible to the agents, and where the humans can add the most value, put them in. And that's typically gonna be around Pareto as a limit. You're talking about what I would call the economic limit. There's a certain amount of risk versus financial reward in everything you're doing, and the more you give to an agent, the more risk you have. There's still some risk with humans, but a lot less. If you were to look at where agents and humans are best suited: humans, if you can find the skill, are best suited to evaluating the agents and the work, if another agent can't do it; making decisions; and applying judgment with tacit information that a model may not have. Now, those are very different skill sets than most humans have, but if you can find the right mix, it's going to look something like that. And you will have business processes handled very quickly, with great results, and still sufficient human control over the outcome.

unknown 28:03

Right.

Rajiv Parikh 28:03

So changing the boxes that Amit was referring to earlier. Yes.

SPEAKER_04 28:06

Yes. We've been sort of working our way upwards with a little task here, a little task there, and you sort of merge them at some point. But I think the bold vision here is: give the agents as much of business processes as possible, and have the humans at the right places to minimize the risk and make sure that we are actually operating within the goals and guardrails.

Rajiv Parikh 28:29

So human expertise, organizational redesign.

Controversial Takes On 2026 Leaders

SPEAKER_04 28:31

Yes.

Rajiv Parikh 28:32

All right. So the future of work is no longer a PowerPoint slide. It is a live, high-stakes restructuring of the human career. As AI agents move from assisting to operating, the skills that made a leader successful over the last 20 years are being aggressively devalued. To see how our guests are future-proofing their own paths, we've gathered some controversial takes on the 2026 leadership landscape. We're moving beyond "will AI take my job" to ask a tougher question: if an agent can manage the team, audit the data, and write the strategy, what's left for the leader to actually do? So here's the first statement. The common advice is to focus on empathy because AI can't. That's a trap. AI agents in 2026 are already better at conflict resolution and unbiased performance feedback than most humans. Leaders who lean solely on soft skills are retreating into a shrinking corner of value.

SPEAKER_05 29:27

I disagree, because we don't tell agents everything. An agent is working off a set of data that may be reflected in performance and all those kinds of things. But if you have a human connection with your team and you know what people are working on, what is actually happening in the outside world, in the real world, you will have more information that may not have been fed into the agent. So the agent will give you a point of view and say, okay, from a data perspective, this is what's actually going on. So I think it's very helpful, because now you don't have to go and chase silly KPIs and metrics and check on work and all those kinds of things. You will be able to adapt with your human team in a much more meaningful manner. Neil and I were talking about it. But I think with your agent team, you will be very autocratic. And that has been the big change for me personally. You don't enable your agents. You have to be a dictator, a tyrant to them, in my view. But with a human team, you can be nicer.

SPEAKER_04 30:24

But my point of view? My point of view is this empathy thing is a complete red herring. There's not a lot of empathy out there right now. Look at what's happening with tech companies. They're laying off people and blaming AI right now.

Rajiv Parikh 30:35

While giving their top executives a raise.

SPEAKER_04 30:38

Exactly. So I don't believe this empathy thing at all. The evidence is not there for it. Now, having said that, once we wake up a little bit and realize what it's going to take to be successful, the skills are a little bit different. You're absolutely right: no empathy for the agents. They need to be treated like interns, otherwise they will go off the rails. That will change over time. Maybe they'll start getting offended and they'll put that in their context memory, but not yet. But for humans, the skills are very different. And the skills a leader needs to bring are very different. I can cover these now, or maybe you have another question in a second.

SPEAKER_00 31:10

That's great. That's great.

SPEAKER_04 31:11

So I'm going to give an example of this. There was a study done recently, I don't remember exactly when, it happened in the last couple of months. And basically, the skills that make people a good leader of people also apply to agents. And it's the ability to make decisions, have a lot of context that is not necessarily captured by the agent, and other things. And I'm going to give you an example from military history, actually, or rather the military. I was very good friends with a Marine in the UK. And I remember many years ago, he described during his training how giving orders as an officer was very difficult, because you often had to do it in times of high stress, with a lot of ambiguity, etc. And the characteristics of good orders apply to agents, interestingly. So, the characteristics: clear and concise, no reasonable doubt left about what the goals are. Focus on intent: what do people want? You've got to answer the what and the why so that people understand what to do if they have to make a decision in the field quickly. Same applies to agents. Lawful and authorized: hopefully there'll be a little more of that in 2026, we'll see, but you've got to give it a guardrail about what it can and cannot do. And last, it must be feasible: give it something that's within its capabilities. And that applies as models increase in capability. Right now, we have to give them restricted commands because we can't trust them so much. As that changes, that universe will expand as well.
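The four characteristics of good orders listed here (clear and concise, focused on intent, lawful and authorized, feasible) map naturally onto fields of an agent task spec. The sketch below is my own framing of that checklist, not an API from the conversation; the field names are assumptions.

```python
# A hypothetical agent "order" with the four characteristics as fields,
# plus a checklist that reports which characteristics are missing.
from dataclasses import dataclass, field

@dataclass
class AgentOrder:
    goal: str                                        # the what, clear and concise
    intent: str                                      # the why, for decisions in the field
    guardrails: list = field(default_factory=list)   # lawful/authorized: what it may not do
    needed_capabilities: list = field(default_factory=list)  # feasibility requirements

def check_order(order: AgentOrder, agent_capabilities: set) -> list[str]:
    """Return a list of problems; an empty list means the order passes."""
    problems = []
    if not order.goal.strip():
        problems.append("no clear goal")
    if not order.intent.strip():
        problems.append("intent (the why) is missing")
    if not order.guardrails:
        problems.append("no guardrails on what the agent may do")
    missing = [c for c in order.needed_capabilities
               if c not in agent_capabilities]
    if missing:
        problems.append(f"infeasible for this agent: needs {missing}")
    return problems
```

The feasibility check is the part that "expands the universe" as models improve: as an agent's capability set grows, orders that previously failed the checklist start passing unchanged.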

Rajiv Parikh 32:42

Okay.

SPEAKER_01 32:42

Great answer.

Rajiv Parikh 32:43

So here's the next one. While AI allows the average worker to do more in less time, the most successful leaders in 2026 will use that reclaimed time to double their output, not take Fridays off. AI hasn't shortened the work week, it has just raised the ceiling for what one person can achieve. Neil?

SPEAKER_04 33:02

Yes. It's bifurcating right now. I'm seeing this right now. There is a small minority of people, of any age, who have natural curiosity and want to get ahead of the cycle, and they are diving in. I have many friends, and I can kind of predict who they are, who are spending hundreds of dollars on tokens as they learn and get up the learning curve. On the other hand, there are a lot of folks who don't necessarily have that curiosity or the technical skill, who are not engaging in this. They maybe never intended to. And unfortunately, I think a lot of them are going to find themselves left behind in the next few years.

SPEAKER_00 33:36

Nice.

SPEAKER_05 33:37

An interesting pattern I noticed there. I talked to quite a few people. This was, I guess, one of the big four consulting conferences. I was talking about this whole AI-native companies thing. I surveyed the room, and it was interesting. These are old school companies, and almost everybody was using AI, right? I mean the ChatGPTs and all that. But they were very sophisticated in using it in their personal lives, right? Planning vacations, all those kinds of things. But they had this weird mental block about using it for work. So some people, to your point, treat work as their mission. This is what they're doing. They're always bettering themselves. And for some people, this is a nine-to-five, right? If you haven't trained me to use AI, I will not work on AI. This is why I was always amused by people needing training in Excel and Word back in the day. I was always wondering, like, who does that? I think for those people it's going to be really tough, right? They're all still just Grammarly-checking their emails, and that's about it.

Rajiv Parikh 34:41

It's the unlock of the curious.

SPEAKER_05 34:43

Yeah.

Rajiv Parikh 34:43

And it doesn't necessarily change your time in the job. So here's the next one. Most middle management roles exist to translate data into decks for executives. AI does this instantly. Leaders should prepare for a barbell organizational structure: a few highly strategic executives at the top and specialized doers at the bottom, with no one in between.

SPEAKER_04 35:04

I disagree, but I think that the shape of the pyramid will change. So not barbell, but tighter, because more work is going to be done by agents rather than a large, wide pyramid. And the skill sets are going to be different across the board at different levels. At the executive level, there's the requirement to be a visionary and move faster; given the entire operation can move faster, they're going to have to have a higher clock speed themselves. As you move down into the middle layer, they're going to have to understand how to work with teams of people and also with teams of agents, and know how and where to spread out the work. And when you get to the individual contributor level, they're not even going to be individual contributors anymore. They're going to be individual contributor plus-pluses. They're going to have agents working for them, helping them, augmenting their work, and also doing entire functions. So I don't see a world in which any one of those levels of the organization isn't going to change if you're going to move fast and be successful many years out.

SPEAKER_01 36:04

Awesome. Great point of view.

SPEAKER_05 36:06

Yeah, no, I completely agree.

SPEAKER_01 36:07

Barbell no barbell.

SPEAKER_05 36:08

No barbell. No barbell. I agree with Neil. Most executives don't actually go and task their people. They're focused on vision, they're focused on what needs to be set up, and then they're making bets across the different teams. Allocators. As allocators, they're kind of like: well, this worked out, this didn't work out. Should I fund this? Should I not fund this? Those individual teams, which today are layers of VPs, directors, SVPs, and all those things, those layers will tighten up into teams just getting shit done, right? And they will be able to produce an output. A lot of the reporting and KPIs will be automated, so there won't be many people doing that kind of stuff. But the executive function will still remain the same. And the executive will need teams of people. And those teams will be organized into people who are doing different kinds of tasks, all of them capable in their own way, with different perspectives. That's interesting.

Rajiv Parikh 37:04

I might have argued more barbell, because what I'm seeing is that I can now afford, in my marketing organization, to get senior marketers to be the interface for the client, who can do broad strategy, and then have their teams do all the prep and execution. So now the folks at the top can be more the strategy, developing the plans, communicating at a high level, and then their teams just get a ton done.

SPEAKER_05 37:38

Yeah, but that is the middle translation layer. I guess we need to reconceive what the translation layer is. For me, the translation layer is the armies of senior directors and VPs. So I'm looking at the executive. But I think what you are describing is similar to the model we talked about, which is: you are the executive. You're not, hopefully, talking to individual contributors and tasking them all the time. You are talking to them just to see what's going on. But then you are looking at a grouping of functions and initiative leaders, right? And there have been companies like that. I've talked to a lot of people at Tesla-style companies, so let's just call it that. They are very much individual bands of people getting stuff done. I know a while ago GE Aerospace worked like that, which was very, very interesting to learn: they're making aircraft engines in a collection of teams. And I think Zappos had that too, but I don't know, that sort of went nowhere. Yeah.

SPEAKER_04 38:37

I'm gonna give an example of where I think the data is already showing the barbell not shaping up. If you look at what's happening to employment for recent college grads coming out looking for an entry-level job in a white-collar function, their employment is going generally down, unless they happen to be in a sort of AI-forward thing. And that indicates to me that one end of the barbell is already starting to shrink. We already have the data for that. And I think part of the reason is that when you hire somebody out of college, the traditional route is that you're gonna have to invest a fair amount in their training until they have the context to do their job. The problem is a lot of agents already have that context, and so the economics of hiring somebody young versus using an agent and leveraging who you have are now out of whack. I can't quite predict how that's gonna shape up in the future, but I imagine it's gonna start at the bottom and slowly creep its way up, and the pyramid will shrink.

Rajiv Parikh 39:34

The bottom level shrinking, and we've seen that. Yes. I've seen that amongst my more educated kids that have graduate degrees.

SPEAKER_05 39:43

Right.

Rajiv Parikh 39:43

That it's harder to find a job. They're not getting the tons of offers that they were getting. Some of it just changed within a year or two.

SPEAKER_04 39:52

Yep. Well, again, they have to come in more ready to go. That's right. I mean, if I think about who I would hire in most functions, whether it's for a big company or a small company, it almost doesn't matter, as long as it's a white-collar function, I'm certainly looking for junior people that have AI-forward skills. It's almost a prerequisite. I don't see any good reason to hire somebody that doesn't have those if I want to move fast, unfortunately. Awesome.

Rajiv Parikh 40:20

Next statement everything in a business that can be optimized will be handled by AI. This means a leader's entire 40-hour week will consist only of high conflict, high emotion, and unsolvable human problems. The easy parts of management are gone. If you aren't a specialist in extreme psychological mediation, you aren't a leader.

SPEAKER_04 40:41

So I would say that's only partly true. There's a lot of human context that will be required no matter what. But I do think that you're right. As I said earlier, everything that can be optimized and run by a model will be. The question is, where do the humans come in, and where do the models do the work because they're better at it, faster, cheaper, et cetera.

SPEAKER_00 41:01

You agree.

SPEAKER_05 41:01

Yep. So I was reading up on, I think, Andrej Karpathy, the father of vibe coding. He's been running this crazy experiment. I don't fully understand it, to be honest.

Rajiv Parikh 41:16

We got some work to do.

SPEAKER_05 41:17

Well, I mean, he is really out there, but he's the guy who sets the stage. He's been running this loop, this optimization, right? Of this model that keeps optimizing itself. And what's interesting is that after a while, it started hitting diminishing returns, right? So I think as your organization gets optimized, your KPIs are there and all those kinds of things. Look, if you're a static organization, you're gonna be dead anyway, right? There's so much more to do in an organization. And I think one of the things about everybody talking about this is it's a very old school factory mindset, which is: oh, I have 100 engineers, they all make these bricks, and now AI is gonna do it, so I can do it with 50. I think this is your great opportunity, with the big software business unlock, to push. Because if you're not gonna push, somebody else is gonna push and you're gonna be dead anyway. So if you're looking backwards and penny-counting your organization and these large teams, trying to figure this out on a spreadsheet, putting in multipliers because you saw some tweet from somewhere, it's not gonna work out very well, right? At least in the mid to long term; maybe in the short term.

Rajiv Parikh 42:31

I actually think that the people who are doing the work are invaluable, if they are curious, in helping to build the applications. Like, of the 200 people I have, I now can go from 20 developers to 180 developers, because those folks can translate their work and their inference into action. So if I get rid of them, I'm losing their ability to establish the subtle nuance that needs to get into the better strategy and execution.

SPEAKER_05 43:10

I would look at it completely differently. We had this conversation last night when you had that video for Lenovo. Yeah. Right? You've just unlocked a whole new market. Why wouldn't you completely press your advantage there and not worry about how many people you have?

Rajiv Parikh 43:25

I would say that, well, this is how you're configured. If you feel like you're way overconfigured, then yes. But I look at it as: these folks are the ones doing the research. Like, there's a whole grid of different technologies that the team had looked at and put together. A gigantic grid.

SPEAKER_03 43:42

Right.

Rajiv Parikh 43:43

And if you don't have those people who are curious looking at those grids and putting together the work, we wouldn't know how to put together the products to do those amazing videos. So yes, the video of Lenovo's auto-twist notebook that normally would be live-shot for a few hundred thousand dollars can be done for almost an order of magnitude less. But I needed that team, the ones that see lighting, scripts, looking at it from that lens, to collaborate using the agents.

SPEAKER_01 44:14

Right.

Rajiv Parikh 44:14

That's a lot of expertise I don't want to let go of.

SPEAKER_05 44:19

But what I'm saying is, why even think about that? Why not just say that I can now do a hundred of these videos and open a whole new market? Then you never think about whether you have 200 or 400 people, because you're making money hand over fist. We're both on the same page. Yeah. Absolutely.

unknown 44:32

Absolutely.

SPEAKER_04 44:33

I'd like to come back to something I find interesting: people blaming AI for cutting their workforces, particularly engineers. Engineering teams, by and large, in Silicon Valley and also elsewhere, are usually capacity constrained. And so I find it very strange that the reaction to AI is in fact to cut people, as opposed to figuring out: well, we can probably move twice as fast now, why wouldn't we do that and get a competitive advantage? And maybe that just tells me there's not enough thought about what the strategy could be.

Rajiv Parikh 45:06

Okay, welcome to the Spark Tank. Today we're joined by two leaders who don't just observe digital transformation, they define their careers by hacking the system. Today we have Neil Shepard, the founder of the American Dream Index. Next, we have Amit Malhotra, a private equity operating advisor and a legendary technology builder. But today we're stepping away from your cloud deployments and growth models to look at the original founders of the American industrial and scientific spirit. We are putting your analytical minds to the ultimate test with our version of Two Truths and a Lie. We're gonna see if you can spot the one glitch in their historical records and see if your instincts for disruptive innovation are as sharp as your AI agents. So, you ready? What I do is I count down three, two, one. You will just pick the number that's the lie, and then we'll see who wins. We have three questions. We're first gonna start with Nikola Tesla. All right, number one. Nikola Tesla became so obsessed with a single injured white pigeon in New York that he let pigeons fly freely into his hotel suite, telling friends he loved that bird as a man loves a woman. Number two, Tesla patented a practical, fully working household wireless power table that could charge multiple lamps and appliances at once, and Macy's briefly sold it as a luxury item. Or number three, at his Colorado Springs lab, Tesla once generated lightning so powerful that it reportedly knocked out power in the entire city, blowing the local power station's generator. Okay, ready? Which one's the lie? Three, two, one, show your fingers. Two. Both two. Sorry. We'll get better. We'll get better. Neil, why are you so sure?

SPEAKER_04 47:02

Because I don't think Nikola Tesla had the wherewithal to figure out how to sell stuff through Macy's. Yes.

Rajiv Parikh 47:09

Is that the same? You knew when the pigeon one came out, like, that's true. How did you know that was true?

SPEAKER_05 47:17

I just know a lot about it. And you knew about number three too? I even listened to a song about Nikola Tesla. Yeah. It's by the Handsome Family. There's a song about him; it tells his whole story. It's amazing. It's like a rap. No, it's like a country song. It doesn't go together.

Rajiv Parikh 47:40

Actually, you know the words?

SPEAKER_04 47:42

Unfortunately, I do. Wow. How many times have you listened to it?

Rajiv Parikh 47:46

No, I mean, with a rap you wouldn't need to listen to it 30 times, but a country song... Oh, that's great. You both nailed it. Number two's the lie. So the white pigeon: Tesla spent hours feeding and rescuing pigeons in New York. He was a germophobe. And even though he was a germophobe, he fell in love with this white pigeon. He adored it so much that when she died, he felt his life's work was finished.

unknown 48:13

Okay.

Rajiv Parikh 48:13

How strange is that? Number three, that's true as well. There was an overload during high-voltage experiments in Colorado Springs in 1899. Tesla's massive lightning-like discharges reportedly overloaded equipment and blew out the local power station's generator, causing a temporary blackout. Classic tinkerer-goes-too-far energy. And why number two is a lie: Tesla was the king of the prototype, but he was never the king of the product. And while he dreamed of a wireless world, he never quite made it to the showroom floor at Macy's. If he had, the history of the 20th century might have been very different. Okay, round two. Thomas Jefferson designed and used a rotating wheel cipher device so his diplomatic messages could be encoded and decoded more securely. Number two, Jefferson rigged a self-recording weather vane at his home in Monticello that transmitted wind direction into his study so he could log data without going outside. And number three, he patented a mechanical copying press and collected royalties from its sale throughout his lifetime, using the income to help finance Monticello. So which one is the lie? Three, two, one. One is the lie. You guys were emphatic about it. Why is one the lie?

SPEAKER_04 49:31

I don't think he was messing with cryptographic stuff. I think it was all pen and paper in those days. And I don't recall a machine ever having been used at that stage of history.

SPEAKER_05 49:42

Okay. Yeah, same.

Rajiv Parikh 49:44

Same thing.

SPEAKER_05 49:44

Well, for me it was more elimination. I knew the second two stories.

Rajiv Parikh 49:48

Do you think the second two stories are completely correct?

SPEAKER_05 49:51

I know the weather vane is true. I'm kind of iffy on how he got it to record. So I'm a little bit iffy on that.

Rajiv Parikh 50:01

You're confident about that. Yeah. Okay. Guess what? You're both wrong. So the rotating wheel cipher is true. Jefferson devised a cipher device made of rotating alphabet discs, often called the wheel cipher or Jefferson discs. It allowed letters to be scrambled mechanically so that only someone with the identical wheel setup could decode the message. The concept was so solid that a similar design was later used by the US military in the early 20th century.

SPEAKER_04 50:32

That's earlier than I thought.

Rajiv Parikh 50:34

Yeah.

SPEAKER_04 50:34

So the cryptographic word can come back.

Rajiv Parikh 50:36

I know we brought cryptographs just for you. Just for you, dude.

SPEAKER_04 50:39

Yeah.

Rajiv Parikh 50:40

And number two, the self-recording weather vane, that is true. At Monticello, Jefferson's tinkering extended to home instrumentation, kind of like Neil's. He arranged a system where a weather vane on the roof was mechanically linked down into the house, displaying wind direction inside so he could log observations without climbing up and looking outside. Very on-brand for the data-obsessed Enlightenment tinkerer. And why number three is a lie, and we may get into an argument about this one: Jefferson was the open source founder. Though he essentially ran the U.S. Patent Office, he refused to patent a single one of his own designs. He wanted his code to be free for everyone, which is a noble sentiment. But as his bank account eventually showed, it wasn't the best business model for maintaining a mansion like Monticello. If you remember his history, he was constantly fixing the damn place up.

SPEAKER_04 51:32

So you're implying that back in those days, people had a view of government officials as not enriching themselves.

SPEAKER_01 51:38

That's right.

SPEAKER_04 51:39

That's really pleasant. Wow. That was the good old days.

Rajiv Parikh 51:41

What is that like? Yeah.

SPEAKER_04 51:42

They wrote it.

Rajiv Parikh 51:44

There was a concept. There was still the British East India Company at the time.

SPEAKER_04 51:48

That's true.

Rajiv Parikh 51:48

And so there were other more idealistic thinkers. Okay. Number three is Mark Twain. Number one, Mark Twain lost a significant chunk of his fortune investing in an automatic typesetting machine that constantly broke down, forcing him to go on an exhausting world lecture tour just to pay off his debt. Number two, Twain patented a specialized writing desk on wheels that allowed him to move his workspace from room to room as the sun shifted, ensuring he always had the perfect natural light for his manuscripts. Or number three, he patented a game to help his daughters learn geography, using a US map and pins on strings so they could fish for the right states and capitals. Which one is the lie? Three, two, one. Three. Oh, we got some disagreement. So someone's actually gonna be a winner. Why do you say three?

SPEAKER_05 52:45

I kind of vaguely recall the first two. I kind of don't see Mark Twain doing geography, but I'm not sure. It's been a long time since I read about Mark Twain. So going by elimination.

SPEAKER_00 52:57

That's why you have three. And Neil?

SPEAKER_04 53:00

It's a bit of a guess, but moving a desk from place to place just seems a little beneath Mark Twain, so why would he patent it? And maybe he did do that, because people were patenting anything back in the 19th century. There's some really wacky stuff out there that had no practical purpose, but it was possible to patent it. Continuous patenting. I think we're all doing that.

Rajiv Parikh 53:26

Maybe he just moved to different parts of his house.

SPEAKER_04 53:28

It's true. Look, two is entirely possible. It's just that's the best guess I've got.

Rajiv Parikh 53:33

Right. Well, guess what? Neil, you win. Yes. All right.

SPEAKER_04 53:38

Winner, winner, chicken dinner. Love it. Oh, God, it's good.

Rajiv Parikh 53:43

Prize is bragging rights. So, number one, the disastrous typesetting machine, that is true. Twain invested heavily in the Paige typesetting machine, a highly complex automatic typesetter that was constantly plagued by mechanical problems and delays. The project consumed years and much of his money. When it failed commercially, Twain was left in serious financial trouble and embarked on a grueling round-the-world lecture tour to pay off his debts. Number three was true, the geography game. Twain held several patents, and one of them was for an educational game to teach geography. It used map cards, labels, pins, and cords so children could match places and names in a more hands-on way. He invented it with his own children in mind, blending play with learning, very much a tinkerer-dad move. And why number two is a lie: Twain's real patent wasn't for a rolling desk but for an adjustable clothing strap, a failed attempt to kill off the suspender. Twain's real spark wasn't in the gadgets that failed, but in the fact that he used his voice to pay off his debts when his machines let him down.

SPEAKER_04 54:49

So nothing he invented really made any money. No. But he was a good writer, right?

Rajiv Parikh 54:53

Incredible writer.

SPEAKER_04 54:54

Yeah, yeah.

Rajiv Parikh 54:55

And a thinker. All right, we're gonna go to personal closers, just quick answers to these kind of off-the-wall questions. All right, we'll start with you, Amit. If you could be guaranteed to be really good at one thing that you're currently terrible at, what would you choose?

SPEAKER_05 55:09

Maybe a little bit more arts.

Rajiv Parikh 55:11

Yeah.

SPEAKER_05 55:11

Yeah. What kind of art? Just generally, just trying to understand how it works. I can understand it, I can see it, but I can't quite... Do you want to be a maker of art, or more appreciative of art? I think more like art as a concept rather than actually doing something. I guess I've been trying to go to conferences like South by Southwest and looking for non-AI things, right? That's why we're into release cars. Yeah, yeah, yeah. So I like good design, branding, that kind of stuff. Like how things sustain over a period of time, why certain things work.

Rajiv Parikh 55:52

So, Neil, if you could give everyone in the world one piece of information or one realization, what would it be?

SPEAKER_04 55:58

There's a lot to think about there. I just think the one realization is at the end of the day, everything's about humanity. Everything that drives us is about the connections between human beings and how we treat each other. We're heading towards a world where everything's moving faster, capitalism is winning for now, and people are being cruel to each other needlessly. And that's not what makes people happy. What makes people happy is doing things for other people. And as this world speeds up and potentially looks like it's more cruel, you have to remember that if you can spend your time doing that, you'll be a happier person no matter where the journey takes you.

Rajiv Parikh 56:42

Amit, what's something you're grateful your younger self did or didn't do that's paying off now?

SPEAKER_05 56:48

I don't want to say I followed my instinct, but I did. In the environment I was in, I was part of a military family. There was nothing to do with computers and technology. I just loved encyclopedias and building things, took apart watches and that kind of stuff, which was very atypical at the time. And, you know, just started going down the rabbit hole of trying to buy a home computer in India when they didn't have the concept of one, having to fake a company to do that. Those kinds of things, and just following that path. So it's just been fun. Just taking things apart. Even today I still have 6502 assembly memorized. It's just still in my head, how the CPU used to be. So it's fun to have that continuity, and the fact that I fell into it, it's fun. Yeah.

Rajiv Parikh 57:32

Yeah. I you've given me great explanations of um like how Lotus builds its cars.

SPEAKER_05 57:37

Yes.

Rajiv Parikh 57:38

And how the world of system integration works when it comes to vehicles. We had amazing discussions about putting things together. Neil, what's a mistake you made that taught you more about yourself than any success ever did?

SPEAKER_04 57:49

Yeah, so this is an interesting one. Um I trained as an engineer, I was a hacker at school, I was really good at engineering, and in the UK at the time when I graduated, there were no good jobs for engineers that paid any money at all. So I did what a lot of people did. I chased the money. I became an investment banker, became an entrepreneur, became a VC, went to business school, came over here, and then went into marketing and a lot of other things. But at the end of the day, I keep getting pulled back to who I was back then, and I wish I had stayed that course.

SPEAKER_00 58:19

Really?

SPEAKER_04 58:20

Because I would maybe be in a completely different place, but I would be in my element. And as a lesson for all of us out of this: don't just chase what other people want you to do, or what peer pressure pushes you toward. Chase what's true to you and where you will differentiate yourself in this world. And that applies now as much as it ever did to me back then.

SPEAKER_01 58:42

It's a great answer.

SPEAKER_05 58:44

I think it ties into some of the things I was saying about having that initial journey, being a techie, taking things apart, but then becoming more of a manager. I was never a good long-term planner, which was good. Like, I didn't try to overplan and took things as they came, which is why I've done all these different things. Um, but yeah, I've been loving going back into coding in the last couple of years, back into what I used to do. And um, I just didn't have the time or patience to deal with build systems and makefiles and all that stuff, which AI takes care of now. So for me, I'm just able to go back into that world. In a way it's unlocked it, yeah.

Rajiv Parikh 59:19

Maybe that tinkering background, in today's world, unlocks it. It's amazing. Yeah.

SPEAKER_04 59:24

Yeah, it's reduced the barrier to re-entry, which has been fantastic.

Rajiv Parikh 59:28

Fantastic. Here's for both of you. If you could instantly know the truth about one conspiracy theory or unsolved mystery, which one would you choose?

SPEAKER_04 59:40

I am not a believer in conspiracy theories because there's no data. The people who believe in conspiracy theories don't impress me, because they don't consider the real data that's out there. The only conspiracy theory that I want to understand, and it's just because it's been around a while and I'm not even that convinced about it, is whether we've really been visited by aliens. Is there any evidence at all? I haven't seen it. I don't even think a government could hide it.

SPEAKER_00 1:00:07

Even the UFO data that they put out? What data? Okay.

SPEAKER_05 1:00:14

I think, well, definitely the alien one is up there. For me, I've always been curious: we had the Bronze Age, and then we had like nothing for a thousand years. Like, what happened, right? I really wonder where we would be if we didn't have that gap, right?

SPEAKER_00 1:00:32

Yeah.

SPEAKER_05 1:00:33

Um, so I don't know if it's a conspiracy, but nobody seems to know what happened in that period. Yeah, worldwide. Worldwide. Yeah, and why did we just forget everything, right?

Rajiv Parikh 1:00:46

For like a thousand years.

SPEAKER_05 1:00:47

Yeah.

Rajiv Parikh 1:00:48

All right. That's food for thought. Thank you both for joining me today and over this amazing four-day event, for the sharing of information and ideas and thoughts and provocative thinking, as well as a lot of play. You both are fantastic folks. I love your tinkering spirit applied across technology and business, and you're both great friends. So thank you for joining.

SPEAKER_04 1:01:12

Thank you also. Appreciate that.

Rajiv Parikh 1:01:19

All right, thanks for listening. If you enjoyed the pod, please take a moment to rate it and comment. You can find us on Apple, Spotify, YouTube, and everywhere podcasts can be found. The show is produced by Ann Shah, production assistance by Taryn Talley, and edited by Lauren Ballant. We have this amazing live crew here with us, so thank you for being here with me. I'm your host, Rajiv Parikh from Position Square. We're a leading growth marketing company based in Silicon Valley. Come visit us at position2.com. This has been an F Funny production, and we'll catch you next time. Remember, folks, be ever curious.
