SPEAKER_01: 0:00
We're just seeing the models now every month coming out with refreshes, competing. Today, you know, in the first half of 2026, it truly is a Google, OpenAI, and Anthropic game.
Rajiv Parikh: 0:13
Is there a favorite for you?
SPEAKER_01: 0:15
I do think Claude is winning the market. They're doing phenomenal. At the time of this recording, OpenAI is trying to catch up to Claude. It was actually predicted recently in The Information that by late 2026, Anthropic may have more revenue than OpenAI. I mean, that's how much the momentum has changed.
Rajiv Parikh: 0:39
We really have a great episode for you today. This is David Yakobovitch. My team found him a while ago when he was still a product lead at Google. We had him speak at one of our events, then subsequently invited him on the show and had him come to our growth marketing summit. His level of depth and knowledge is just incredible. It comes from him being an early math geek and a kid who loved to play with VCRs and TVs, breaking them apart at a young age. And if you listen to him talk about where AI is going, and how you're able to accomplish a ton today in a way that you couldn't before, it's really amazing. Before, we were talking more about chat and knowledge; this time we're going more into software building, application building, building businesses beyond the knowledge and discovery end of things, actually executing businesses with these tools. He gives his perspective on the rise of Google in the space, as well as the seeming dominance of OpenAI and Anthropic, but there's a whole ecosystem around it all that can benefit people, their investors, and the rest of the world. So it's a super hyper-competitive world that David talks about, but he's got this joy and innocence about it that is really interesting and intriguing. He also gets into American competitiveness and the immigrant economy, and why we're being fueled in a way that other countries don't have as an advantage. So it's really interesting to get his points of view on that. And then as a person, he's an endurance athlete, and he talks about how he keeps moving through that and learning from it. That perspective of a very curious person who loves to dig in, goes out and builds community, and actively invests in early and late stage companies, that kind of polymath individual that David is, really makes this a fantastic episode to listen to. Welcome to the Spark of Ages podcast. 
Today we're joined by David Yakobovitch, the general partner and managing director of Data Power Capital, a New York City-based venture capital firm investing across applied AI, inference infrastructure, and deep tech. With a portfolio of over 36 companies, David is an investor in some of the most defining frontier technology firms of our era, including OpenAI, Anthropic, xAI, Neuralink, Databricks, Groq, Crusoe, Anduril, and SpaceX. David is also a leading voice as the host of HumAIn, a podcast focused on applied and responsible AI, which makes him uniquely positioned to discuss inference infrastructure and how we build a future that is both high tech and high trust. Previously, David served as a global product lead at Google, where he built data products for Google Ads. He also served as an AI policy ambassador for Google's global affairs team. Some of the key takeaways you can expect from this episode: what scaling in today's data infrastructure looks like, from chips to centers to microagents; how to invest and manage in the 21st century economy, from chatbots to agents to digital subordinates; and finally, what American dynamism is, developing the space, defense, and machine economy. David, welcome to the Spark of Ages.
SPEAKER_01: 4:02
Rajiv, thanks for having me back. My pleasure.
Rajiv Parikh: 4:04
Well, great to have you back. I think we had you on about two years ago, and we just went back and forth about a wide range of topics about AI, entrepreneurship, and we're going to get back into it again and update everyone on what we learned. So this is going to be super fun.
SPEAKER_01: 4:18
Yeah, it's going to be great. I mean, just two years ago, I had just left Google a few months prior. So it's great to see what two years has done.
Rajiv Parikh: 4:25
That's right. And that is true. We asked a lot of questions about your experience at Google, so your new perspective will be interesting. In our last chat, we were excited about the App Store moment for AI. Now we're seeing the rise of vibe coding, where AI writes the bulk of software. My friends are talking about it all the time. But there's also a backlash regarding the quality and security of code. Some of my friends say that when someone ships their vibe-coded software, it breaks other parts of the system and causes the whole thing to go down. So now that AI-generated code is proliferating, do you and Data Power view AI-written code bases as a liability, due to potential tech debt, or an asset, due to speed of iteration, when you evaluate a seed-stage technical team?
SPEAKER_01: 5:06
We're in a special moment. Two years ago, when we were talking about the rise of large language models to both generate content and generate code, we were thinking about, you know, late 2023, early 2024. They were getting really good at content. They were not that good at code. And we're just seeing the models now every month coming out with refreshes, competing. Today, in the first half of 2026, it truly is a Google, OpenAI, and Anthropic game, right? The three of them, it's amazing. They're neck and neck. At the time of this recording, Anthropic had just raised a $30 billion round and released Claude Opus 4.6 for coding, which is also the model that competes with Cursor, and then Sonnet for content, right? The 4.5, 4.6. And today the coding models are not producing AI slop as much as they were two years ago. But I will say, I do agree that while we're in a world where anyone can code because anyone can vibe code, it doesn't mean that you just push things to production. You have to come up with plans, you have to review the code. Whether that means an engineer reviews it or a swarm of agents reviews it, doing a lot of testing, both agentically and human-led, is what creates good results. Of course, as the repo or your code base grows more massive, anytime you make a change, these large language models like Opus 4.6 have to use more tokens to review all the code and propose the new options. So you can definitely introduce a lot of tech debt. And that's why my thesis is that we're looking for technical founders who are very studious about what they're doing. They're very meticulous about what they're building. An interesting case study for us: one of our investors is a former executive at a leading data research lab, and their view is that what used to be possible with a team of 30 to 50 people can now be done with six.
Rajiv Parikh: 7:25
That's a game-changing moment, right? You would think the key was to just hire. Well, actually, going back a little bit, when we last talked, we were talking mostly about chat and information displayed through chat. We were talking about prompt engineering. Now, as you say, folks are saying: I'm writing code, I'm writing systems using these agents, whether it's Claude Code or Google or, what was the third one you mentioned that was one of the three?
SPEAKER_01: 7:48
Yeah, OpenAI has Codex, and Anthropic has Claude Code. You obviously have the Google Gemini offering, and then Cursor, right? But Cursor is kind of a wrapper around Claude.
Rajiv Parikh: 7:58
Right, and Replit, right? There's a whole bunch of these. Is there a favorite for you?
SPEAKER_01: 8:03
I do think Claude is winning the market. They're doing phenomenal. At the time of this recording, OpenAI is trying to catch up to Claude. It was actually predicted recently in The Information that by late 2026, Anthropic may have more revenue than OpenAI. I mean, that's how much the momentum has changed. I think when we recorded a couple years ago, Anthropic was at about 1 billion in revenue, and OpenAI was like five to 10. But Anthropic's really picked up steam.
Rajiv Parikh: 8:32
I think OpenAI did 20 last year. I remember one of my venture friends said they were predicted at 15 and they actually did 20. So it was within months that they blew out their numbers. And Anthropic, I know, is growing really fast. It's more enterprises that are using them, but for enterprise applications, you're using a lot more tokens to write the code and check the code, generate PRDs. So that's a thing my friends talk about, and we're doing it as well. You come up with an idea, you generate a PRD with Claude, then you hand the PRD to another agent that writes the code, and then you have another system that checks it and runs tests against it. So it's a whole system being built.
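The idea-to-PRD-to-code-to-tests chain described here can be sketched as a small orchestration loop. This is a minimal sketch, with hypothetical stub functions standing in for the real model calls (a real version would call Claude, Codex, or similar through their APIs):

```python
# Sketch of the PRD -> code -> review pipeline described above.
# The three "agent" functions are hypothetical stand-ins for LLM calls;
# swap in a real client (Claude, Codex, Gemini, ...) as needed.

def prd_agent(idea: str) -> str:
    """Turn a raw idea into a product requirements document."""
    return f"PRD: goals, user stories, and acceptance criteria for '{idea}'"

def coding_agent(prd: str) -> str:
    """Generate code from the PRD."""
    return f"code implementing [{prd}]"

def review_agent(code: str) -> dict:
    """Review the generated code and run checks, reporting a verdict."""
    return {"passed": "PRD" in code, "notes": "checked against acceptance criteria"}

def build(idea: str) -> dict:
    """Run the full idea -> PRD -> code -> review chain."""
    prd = prd_agent(idea)
    code = coding_agent(prd)
    verdict = review_agent(code)
    return {"prd": prd, "code": code, "verdict": verdict}

result = build("invoice-reminder tool")
print(result["verdict"])
```

The point is the shape, not the stubs: each stage's output feeds the next, and the final reviewer gates what gets shipped.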
SPEAKER_01: 9:07
Yeah. And to your earlier comment about how prompt engineering was a big topic we talked about two years ago: as a career, prompt engineering's gone away, right? No one hires prompt engineers, but prompts could never be more important. Today, people create prompt libraries. So, like you said, if someone's a product manager, they might have 20 or 30 best-in-class prompts that help generate and refine the PRDs. Before, you may have written the prompt manually by hand, kind of guessing with hypotheses, and whatever worked, you'd customize in your Google Doc. But now you can go back to Claude or whichever model and ask: can you review my prompt? Can you customize my prompt? Optimize the prompt for the best results. So you don't have to be as skilled on that, but you still have to be very strategic and think critically.
Rajiv Parikh: 9:58
That's the interesting part. So, PRD, for everyone who's not in product development, is a product requirements document. It's what you used to write to give to your engineering teams to go and build something, and it was the basis for your conversation with them. You'd never cover everything in it; you try to do as much as you can. Then the engineering team would work back and forth with you on it and on what they build. But now, as you're talking about, David, you write it, and you can have a system helping you build it, essentially through prompting or conversation back and forth. Then you can feed it to a system to build the product. And if you actually understand code, you can modify it together. So that's just an incredible capability. So these are your three favorite companies. Any startups that you think could blow them up, or any of the players that are exploding, some of them hitting, what, $100 million in six months, more or less? Any thoughts there?
SPEAKER_01: 10:48
Yeah. So before we get to the startups, I did want to share one more thing on what you said about PRDs. That's so interesting. Andrew Ng, who runs an AI fund and is the founder of DeepLearning.AI, was recently at a conference, just in the last couple months. And what he said is that product is becoming the great equalizer. While in the past you may have had one product manager for every four to ten engineers, helping, you know, with all the PRD design, the business requirement docs, et cetera, because vibe coding and coding with agents is speeding up production, maybe by an order of magnitude, some would say from 2x to 100x, now you actually need more product managers. You need product leads who are generating these PRDs, who are using agentic tools like Dovetail and others. And so now we're actually seeing where you might need one product lead, Andrew goes so far as to say, for every one engineer.
Rajiv Parikh: 11:43
So it may not be there yet, but that's the direction right because maybe the one engineer is controlling a fleet of agents who are in parallel or sequentially writing code on behalf of the product manager, essentially.
SPEAKER_01: 11:56
Right. And production is so much faster, right? What used to be a startup doing their sales kickoff once a year and their semi-annual or AGM release of new product, now you see startups every week, every few weeks, releasing new products, competing at breakneck speed. It's that much quicker to get to production.
Rajiv Parikh: 12:14
One of my friends is at a Fortune 500 company that makes infrastructure software, workflow software, the systems you run your business on. They had six months between product releases, and now they've gone to monthly releases because of AI. What you're talking about is happening even in very large companies.
SPEAKER_01: 12:31
Yeah. And I think that, you know, with the large companies, with the incumbents, one of the big discussions we have on the street is: is AI killing enterprise SaaS? Right? We're seeing, obviously, the stock market's volatile. It's going through the SaaS apocalypse.
Rajiv Parikh: 12:46
I can't even say the words together the way people are saying it, but: SaaS apocalypse.
SPEAKER_01: 12:51
To your point, Rajiv, it's like, well, you know, look, you have these massive startups that are accelerating what Anthropic and OpenAI do. You have Lovable, right? The new gold standard today in vibe coding. And you have Vercel, right, which helps you deploy these vibe-coded apps. You have databases like Supabase, which are, you know, taking over from Databricks and Snowflake to just run these AI-powered apps. And a lot of incumbents are not sitting by the wayside. A great example is Wix, you know, one of the leading website builders today. Recently they acquired Base44, which is a competitor of Lovable and Vercel and all this. And why would they do this? Wix paid, you know, like 100 million bucks. And for them, it's like, well, we're not an AI-powered company; how do we become an AI-powered company? The best way to do that isn't necessarily a hardcore pivot, it's to acquire, acquire talent, and bring all those models into the system. Time will tell how Wix evolves. But a lot of companies are doing that. And we could talk about other incumbents who are doing well in the space.
Rajiv Parikh: 13:54
All right. Well, that's really cool. I think it would be cool at some point to talk about your favorites, but you've already named a bunch. That's great. So now let's shift to go-to-market, right? Which is one of my favorite areas. There's a prediction that AI agents will soon become the primary buyers of software. They can act as the zeroth member of a buying committee. So if we look forward, how does a startup pitch its product when the initial prospect is likely an AI agent filtering vendors? Does marketing to algorithms change your advice on go-to-market strategy for your portfolio?
SPEAKER_01: 14:23
I think today what founders need to expect is that everyone is using large language models. Everyone's doing a lot of work in OpenAI, Gemini, Claude all the time. And if you're sending materials to an investor, they're using these AI-powered apps as well. So you should expect that it's going into models and into agents to help with research and deep thinking. And when we get startup pitches, I also expect startups to be using AI-powered tools and software. To my earlier comment about this founder at one of the leading data labs having six engineers: not only that, when they hire engineers, he said, I expect them to be doing everything in Claude or Cursor. I mean, look, it's great if they understand the foundations of infrastructure and data systems, and obviously they can dive deep into the code if they have to, but the productivity won't be there if you just code from scratch. On Hacker News recently, there was an engineer who's been coding for like 40 years, right? A very high-level team lead. He said this is the most exciting time in his life to be building products, because it's so fun. What used to be mundane and routine, like unit testing. Have you ever heard an engineer say, I love unit testing? I can't wait to write more unit tests?
Rajiv Parikh: 15:42
So much fun. I want to write more test cases too. I think that's really fun.
SPEAKER_01: 15:47
Yeah. I mean, look, the beauty of vibe coding and these tools is they can discover the edge cases, they can solve them quicker. So we expect that from startups. I even spoke to a founder the other day who's just finishing up his round. Right now it's him and, you know, a couple of people doing what used to be a 10-person team. So that's on the startup part. A few other comments on what you were sharing about marketing to algorithms. The World Wide Web is no longer just a human-led ecosystem. We've seen so many articles about sites crashing from being scraped constantly by these models, regardless of their robots.txt files. There's so much scraping and data being mined by models that some providers, including Cloudflare, have created specific rules you can put in place that ban models and agents from going on your website.
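For context, that blocking usually starts in robots.txt itself, targeting the AI crawlers' published user-agent names (GPTBot for OpenAI, ClaudeBot for Anthropic, Google-Extended for Google's AI training crawler). Compliance with robots.txt is voluntary, which is why network-level enforcement from providers like Cloudflare exists as a backstop. A minimal example:

```
# robots.txt -- ask AI crawlers to stay out (honored only by well-behaved bots)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Everyone else, including regular search crawlers, stays allowed
User-agent: *
Allow: /
```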
Rajiv Parikh: 16:39
You're just getting nailed. I mean, we have to deal with this as the ones who manage companies' websites: making sure the bots aren't taking down the site, making sure the bots aren't filling out the lead forms. Although maybe we do kind of want the bots to fill out the lead forms if they're, you know, buyer agents. So there's a bit of both in this, right? You want the help, but you also don't want it. And so there's a whole way of thinking about it. I think about it as: I want to set up my website for four different things. I want to make sure it's good for a human who's visiting, good for a search engine, good for an AI engine. And I also need to think about making it good for a buyer agent. Those are very different considerations when you're developing content.
SPEAKER_01: 17:27
Yeah, I think you're absolutely right. And a lot of these trends we see, for example with OpenAI, it was almost a year ago that they launched their agent store and agents for shopping. And a lot of people at the time were like, oh, this is crazy. That's what, you know, different business leaders were saying. And here we are a year later, everyone's building AI agents for shopping. It's still early days in 2026, but even Amazon's building it, and others. So there's going to come a time and place, right? Where you just talk into, you know, Wispr Flow and submit it to Claude or whatever, and it's just going to go and buy my groceries for the week, deliver them, and use the sensors in the fridge to see what I have. So I think we're not that far off. Of course, we do need to put the right guardrails in place. You wouldn't want Claude Code running in the background, and then tomorrow you're like, oh, it just spent $50,000 fixing the bug on the website. What do you mean that cost $50,000? Or you ordered a hundred packs of Bounty paper towels. Well, that was like the original problem with Alexa.
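A spend guardrail like the one being described can be as simple as a hard budget cap that makes the agent stop and wait for human approval. This is a minimal sketch with made-up dollar figures, not any vendor's actual API; a real agent would meter actual token and tool spend:

```python
# Sketch of a spend guardrail for a background agent. Cost figures are
# illustrative; a real implementation would track real API/token spend.

class BudgetExceeded(Exception):
    """Raised when an action would push the agent past its spending cap."""

class GuardedAgent:
    def __init__(self, budget_usd: float):
        self.budget = budget_usd
        self.spent = 0.0

    def act(self, task: str, est_cost_usd: float) -> str:
        # Refuse (and leave spend untouched) if the cap would be breached.
        if self.spent + est_cost_usd > self.budget:
            raise BudgetExceeded(
                f"'{task}' would push spend to "
                f"${self.spent + est_cost_usd:,.2f}, over the "
                f"${self.budget:,.2f} cap; pausing for human approval")
        self.spent += est_cost_usd
        return f"done: {task}"

agent = GuardedAgent(budget_usd=100.0)
agent.act("triage bug report", est_cost_usd=5.0)      # fine
try:
    agent.act("rewrite entire site", est_cost_usd=50_000.0)
except BudgetExceeded as e:
    print(e)                                          # blocked, human in the loop
```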
Rajiv Parikh: 18:27
Yeah, you don't want that to happen. So you have to have good guardrails. So previously, when you were on the show, you predicted that 2024 would be the year of applied AI truly in practice, moving beyond simple chatbots, and we definitely saw how model capabilities have grown. Yet recently, that MIT Technology Review article from October 2025, which seems eons ago, suggested a massive pilot-to-production chasm where enterprises are stuck. So, as an investor, do you see this failure rate as a flaw in the technology's reasoning capabilities, or is it a failure of data infrastructures that haven't cleaned it all up yet? How does it affect your view of how you invest going forward?
SPEAKER_01: 19:06
Yeah, so a few takeaways there. First, I think a lot of investors and founders and developers in 2024 were not thrilled about the chat interface. Some people talked about the death of chat, like the UI would change. And here in 2026, the UI is two places: it's the terminal or it's chat, right? Everyone's gone all in on chat. And it's probably just because it's so intuitive, right? You're chatting, you're learning, you're getting the feedback. So, you know, for whatever reason, that was an incorrect bet that myself and a lot of others made. But, you know, hey, it's great. It's great to work in that interface. It's a lot of fun. To the MIT article you called out: honestly, having worked in data science since the mid-2010s, it's always been really hard to get from development to production. Historically, between 15 to 25% of data science projects, or big data projects, or what we now call agentic AI projects, get to production. It's the same thing here. It's always been that way. And why is that the case? Well, there's so much testing required. You can't just press a button, push to Vercel, and expect it to be perfect, right? You need forward-deployed engineers, these FDEs, which is the big title everyone's hiring for now, starting with OpenAI. And OpenAI actually said in early 2026 that they're hiring hundreds of FDEs, software engineers with machine learning capabilities, to take the products beyond what a sales engineer would typically do (test it, demo it, get the sale to close) and actually get them into production so they're successful, so the client has success. So I think that's what we need. We need a lot more human and AI hand-holding throughout the process to ensure the products don't fail at a certain stage.
Rajiv Parikh: 20:51
That's right. So you're saying that basically, look, failure is normal. Don't expect everything to be perfect. But and that's where you have to keep iterating. And I see this in my own firm as we're building more growth marketing AI agents, that the initial take looks beautiful. Then once you start using it, there's that point of disappointment. It's not accurate. And then we have to decide do we want to keep grinding with it or use something else? And that's where the grinding is actually where the value happens, is getting it to something that we can really deploy for our team to use and for our clients to use. And you have to have that practice.
SPEAKER_01: 21:23
A constant practice, yeah, 100%, on development and also on expectation management. Again, two years ago, these models were only good at 10 to 20% of code. Today, depending on what metric you look at, some of these models are hitting 80 to 98% success. So will it be perfect? Not always. But again, I guarantee you, or not guarantee, but I highly suspect, that by the end of 2026 or 2027, these models are going to be close to 100% across so many benchmarks, which is going to beg the question: do we need humans for certain things? Or how do we make the humans and machines work well together?
Rajiv Parikh: 22:00
It's a transformation. So with 90% certainty, you can tell me that. Let's look at some forward-looking predictions. Drawing on your time as a global product lead at Google, do you believe the moat for startups you invest in has officially shifted away from the model itself? If models become commodities, what is the venture-backable asset a company possesses that is investable in our current and future economy? There were some companies that were, you know, really into building their own models or building off of the current models, but now the models have become so powerful that it's almost superseding all that. So how do you think about it?
SPEAKER_01: 22:34
Yeah, a good case is actually one of our portfolio companies, Harvey AI. Harvey AI today is the leader in legal tech LLMs, helping lawyers look through case briefings much faster than doing it manually. Originally they wanted to build custom LLMs for the legal space. I think they put tens of millions of dollars into their first-mover advantage, and then at one point realized they were not going to be able to maintain that long term. So they made an adjustment, right? They're really building this copilot. And now they're doing very well. They just raised, you know, a several-hundred-million-dollar Series E led by Sequoia. So I think it is tough to build and expect to win with the models, especially when, again, OpenAI, Google, and Anthropic are, at least in the United States, the three core leaders, unless you're using open source like Mistral or something from mainland China. I do think the business domain knowledge is the expertise, right? If you're in a niche industry or vertical, that's your competitive edge building a startup. And I think the data moat, right? You get that data that's offline, and you don't throw it into the incumbents; you keep it protected for your startup. That is a competitive advantage. And I think this is what's helping startups do well. You have to remember, at the end of the day, when companies like Anthropic, OpenAI, and Google are these juggernauts worth, you know, hundreds of billions or trillions, they're not going after small problems. When I used to work at Google, we would say that if you got a 0.1% measurement improvement, that could generate, you know, like $100 million for the company. And whenever Google would launch a new product or a bet, it's like: can this be at least a $10 billion a year revenue business? Otherwise, they would kill the project. It's crazy to think about. 
But at Google today, now being at hundreds of billions of revenue, being perhaps at the time of this recording, the most valuable company in the world, other than NVIDIA and Apple, it's tough to put too many bets out there. So sometimes we have to sharpen the focus.
Rajiv Parikh: 24:33
Great point. And then you mentioned China, right? So these are the top three players: OpenAI, Anthropic, Google. They're US-based and international, but they're precluded from going into China. China's innovated in its own ways, right? With DeepSeek breaking the model, breaking the traditional notions of what you needed in chipsets and that kind of thing, but offering tremendous capabilities, a tremendous number of models they're building. So do you have a perspective on what's happening with China, and how maybe, with their open source capabilities, they're building for their market as well as outside it?
SPEAKER_01: 25:05
Well, first, a comment on open source. Open source is always important. It's always going to coexist and grow. We can see how in Europe, for example, Mistral AI maybe had great benchmarks early on but was slow on adoption. But as open source, Mistral's really gained steam in the last year. And now in Europe, you know, obviously the proponents of GDPR, they want all the data protected, and so Mistral's become really good. On China specifically and the broader US market: around early 2025, there was the DeepSeek moment, right? And I think it caused a stock market panic, where in one day the stocks fell like 10%. And everyone said, oh, everyone's going to move toward DeepSeek. And here we are over a year later. What's happened is OpenAI's market share went down from 85% into like the 40s. Anthropic went up from like 4% into the 20s or more. And Google went up from like 2%, because they were very late to the party, to almost 25% market share. And DeepSeek, when they launched, they were at like a couple percent. They went up to four or five percent. They're still sitting there. The market share never went up. So I think that shows you that at least outside of the mainland, those models are not, you know, being used. And it could be for a variety of reasons: protecting data, privacy, competitive trade secrets, and so forth. But I will say the models are really quite special. Alibaba's Qwen, ByteDance's Doubao. You have Zhipu's GLM-5 that just came out, Kuaishou's Kling, Moonshot's Kimi. Some of these are now showing videos with even better content, in this multimodal approach, than what Sora from OpenAI or Nano Banana from Gemini can do. So I do think most US companies are probably at most only six months ahead of China. China's really narrowed the gap. But I would say their products typically are being used within the four walls of the mainland.
Rajiv Parikh: 27:00
That's interesting. Yeah, I think one way of looking at it is: if you're building a specific product for your enterprise, you may just take the open source version of their model, run it on a particular AI platform provider, and go from there, right? Because then you don't have to pay the IP cost back to the player. So I wonder if that's one way to look at it, or maybe it doesn't matter, because they're all going to fight it out anyway, and you just go for the best model, and it wins.
SPEAKER_01: 27:23
You know, it depends on the product, right? We're seeing a trend in Europe where they don't want to pay for everything. I think France recently said, we don't want to use Microsoft Office anymore, right? So they're getting, like, an open source version of all those products. And the same thing happens with models. People want data sovereignty. They want to own their own data; they don't want it going to the incumbents. And so we have seen the rise of what are called small language models, or SLMs, or models on the edge. And actually, one of the incumbents is all for that: Apple. Apple has made the least CapEx investment of any of the Mag 7. It's fascinating. And their belief, and we'll see whether it turns out true over the next five, ten years, is: where do we think all the data is going to be trained and inferred? Is it going to be massive Oracle-led and OpenAI-led data centers in Texas and Dubai? Or will some of that be on device and on chips? And I think that depends on the size and scale of the models. For a lot of the tasks we do for content generation, you don't need Opus 4.6.
Rajiv Parikh: 28:26
You could just do something really straightforward and simple. We even talked about it on your iPhone, right? It could be that straightforward.
SPEAKER_01: 28:32
Yeah. And so I think that's the trade-off. One of our portcos, our portfolio companies, actually specializes in deploying SLMs in air-gapped containers. They're called Polygraph AI. They recently did their seed round. And, you know, they work with governments and say: it's your data, it's your chips. In fact, they've developed their SLMs to run only on CPUs, in case GPUs are your constraint. So we are seeing a lot of different chip players, both domestic and foreign, looking at alternatives to the NVIDIA ecosystem, even in the cloud.
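The right-sized-model idea here, small local models for simple tasks and frontier models only when needed, is often implemented as a router in front of the models. A minimal sketch, where the complexity threshold and the model labels are illustrative assumptions, not any real product's behavior:

```python
# Sketch of a model router: cheap local SLM for easy tasks, frontier
# LLM for hard ones. Threshold and labels are illustrative assumptions.

SLM_THRESHOLD = 0.5  # assumed cutoff; tune per workload and cost budget

def route(task: str, complexity: float) -> str:
    """Pick a model tier for a task based on an estimated complexity score (0..1)."""
    if complexity < SLM_THRESHOLD:
        return f"local-slm handles: {task}"      # runs on CPU / on device
    return f"frontier-llm handles: {task}"       # escalate to a big cloud model

print(route("summarize a short email", 0.1))
print(route("refactor a legacy codebase", 0.9))
```

In practice the complexity score itself often comes from a cheap classifier, so the expensive model is only paid for when the router decides it is needed.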
Rajiv Parikh: 29:02
Very interesting. Okay, so two years ago, you told us that you stopped searching and started chatting. So now, what does your personal AI workflow look like today? Do you have a favorite? Because I remember before, you were trying multiple tools and you had different ones for different things. So does the chat interface itself disappear in favor of invisible background agents? Or do they all sort of work together? What do you see?
SPEAKER_01: 29:26
Yeah, I mean, I'd say for someone like myself, everything is between the terminal and chat, right? It depends on what I'm developing and which projects I'm working on. These days, I pretty much split my time between the big three, as I mentioned. I think Gemini 3.1 even just came out today. They have incredible products, so I'm using their studio, their Nano Banana solutions. I use Claude a ton. I also like OpenAI Codex. So I think sometimes it's nice to use multiple offerings. And when you're building a product, you can even have agents on different models helping do different things together, collaborating.
Rajiv Parikh: 29:59
Having that adversarial setup, right? Where you have one model evaluating the other, or one's output being evaluated by the other, to get the best result in the end. Yeah.
SPEAKER_01: 30:07
It's almost like you have your own OpenClaw or Moltbook, right? Where these agents are collaborating. It's incredible to see the evolution of agents, especially now that, you know, OpenClaw just got somewhat acquired by OpenAI in early 2026, which is exciting.
Rajiv Parikh: 30:22
That's amazing. So then, if everyone keeps doing what you're doing (and my team does the same thing, they're using so many different ones), does everyone win? A lot of these valuations are based on a single player winning. So does that continue, or does one start to dominate?
SPEAKER_01: 30:37
Yeah. So if we think back to open source, a great analogy is the database market, right? Today, Databricks, Snowflake, and others are making billions and billions of dollars with their own cloud data lakes and databases and data stores. But you know, Postgres is free. And you could run a Postgres database behind your app and not pay the big players. And in fact, 90 percent-plus of the world uses Postgres. Our phones and most apps run on free databases. But still, there's something special about having enterprise-grade security, having your data trusted, no viruses on the systems, making sure there are no prompt-injection attacks, having the security and the guardrails. So I think it's a trade-off. Open source is definitely going to continue to gain market share, but the big ones are not gonna go away. These token factories are continuing to grow and scale because more and more people now trust the models. Two years ago, you and I might have typed a prompt into a lot of these, and half the time or more it would hallucinate an answer. Today, you'd be quite surprised. You put in some real data, with spreadsheets and things, and it's not hallucinating nine out of ten times. So the models have gotten really good.
Rajiv Parikh: 31:48
It's doing much better. With growth marketing, it hallucinates a little more, and we've had to learn how to constrain it. But with my healthcare information, it's pretty amazing. I have a cardiologist right at the ready. So you've been focusing on American dynamism, and that includes defense tech and other deep tech areas. Driven by innovations from this incredibly painful war in Ukraine, we've seen industry shifts highlighted by some of the investors who have been on the show, like Matt Bigge from Crosslink Capital, right, where they're moving away from high-cost, exquisite platforms toward low-cost, expendable systems like autonomous drones. So, in your defense tech thesis, are you betting on the hardware side of reindustrializing the American base to build these systems? Or is the venture opportunity specifically in the agentic software that allows swarms of these low-cost systems to coordinate without human pilots?
SPEAKER_01: 32:40
It's a combination of both. Early on, when we founded Data Power Capital, we were thinking about bringing chips and AI together. And the American dynamism thesis is about, if you're creating hypersonic missiles, having the software to also track them. Or if you're looking at sovereign defense in space, right? What is the way to autonomously manage that for the greater good? So I think it is a combination of both. And so when we're looking at these companies, it has to be: how are they not just building very novel, frontier-edge physical tech, but how are they integrating the AI into it? And I think for us, that's quite special. We've invested in several companies in the space that do that. Castelion is one of my favorites, right? They're truly American dynamism. They build hypersonic missiles here in the deserts of the US. And it's truly to help America maintain its competitive edge against foreign threats. And I think we're gonna see that with other companies, like you said, like autonomous drone agents. It's incredible, right? If you got the chance to watch the 2026 Olympics in Milan Cortina this year, they're using drones for all the footage. There's so much autonomy there.
Rajiv Parikh: 33:49
Right. And you hear that whirring in the background, but it's following a skier down the slopes or it's following a speed skater. It's mind blowing with these things.
SPEAKER_01: 33:56
Yeah, and the scale could be bigger, right? Think about the Lunar New Year that occurs every year in Southeast Asia. Now they're using swarms of drones, hundreds or thousands of them, creating beautiful images in the sky, and not having a thousand humans managing it, right? It's all autonomous. So I think it is incredible. And the challenge is that all of this is software that has to get made. And whether you're talking about American dynamism or globally, the challenge is having a renewed mindset. Earlier on, you mentioned the backlash against vibe coding. The challenge with a lot of software is that people try something once and then write it off: ah, it's not good, it's not production-ready.
Rajiv Parikh: 34:36
Yeah, they just walk away and they jump to something else.
SPEAKER_01: 34:39
Yeah, but six months, twelve months later, it's improved by an order of magnitude. And so even though there are engineers pushing up AI slop and AI hit pieces, or agents attacking people for rejecting PRs, and all this futurism we're seeing today, I think we're gonna move into an exciting space where you and I can build our own applications, our own websites. And that's why companies like Wix.com bought Base44: because they think the future is not you and me dragging and dropping something. It's gonna be writing the prompts, and then you blink your eyes, and there's the improvement.
Rajiv Parikh: 35:15
I have CEO friends who are literally just playing with Claude Code themselves and writing parts of the products they envision. So it's really amazing. Let's jump to the Spark Tank. So today we're joined by David Yakubovich, general partner and managing director of Data Power Capital, someone who's essentially investing in the engines of the next century. But when you aren't backing the most defining firms of our era, you are putting your own biological engine to the test. As an endurance athlete who tackles ultramarathons, you've discovered that high-stakes fund management and 100-mile races share a common requirement: the ability to keep moving forward. Today we're putting that relentless stamina and your data-driven mind to the ultimate test with the Frontier Endurance Challenge. We're going deep into the history of human grit, from the human machines of the early Olympics to the mind-bending psychological hurdles of today's toughest ultras. David, are you ready to prove your mental inference is as sharp on the history of human endurance as it is on the future of deep tech?
SPEAKER_01: 36:16
Not at all, but I'm ready to have a lot of fun.
Rajiv Parikh: 36:20
We will. You know, when I get these questions and answers, I usually get most of them wrong anyway. So I'm always blown away when my guest gets something right. So here we go. One of the most legendary endurance feats is the Man versus Horse Marathon in Wales. While it took 25 years for a human to finally win, what is the specific biological advantage that allows elite human runners to outlast horses over extreme distances? A, humans have a higher concentration of mitochondria in the quadriceps. B, humans possess a unique elastic recoil in the Achilles tendon not found in equines. C, humans utilize specialized sweat glands and a lack of fur for superior thermoregulation. Or D, humans have a more efficient glycogen-to-torque conversion rate at the anaerobic threshold. This is humans against horses. Humans eventually won, and I can go through the different ones, but basically: mitochondria, elastic recoil in the Achilles, specialized sweat glands, or glycogen-to-torque.
SPEAKER_01: 37:25
Well, they all sound good to me, but I'll go with the mitochondria.
Rajiv Parikh: 37:28
Why do you say that?
SPEAKER_01: 37:29
You know, from what I know about horses in wars, right, they can go dozens of miles; think about ancient Rome and carrying messages. But at some point they do get tired, right? And I think probably humans have ways to replenish their energy, which might also point to the glycogen one you mentioned. So that would maybe be my second guess.
Rajiv Parikh: 37:53
So this is the really hard one. It's actually C. And I think you're on the right track, though. Horses can go for a long time, but they do much better in cooler weather. Humans evolved in hotter weather, and we don't have fur in the way. So humans are the ultimate endurance hunters, because we can shed heat by sweating while moving. Horses have to stop or slow down significantly to cool off, even though they do have sweat glands; they're not like dogs, where it's just the tongue. So that's why the human runner can eventually close the gap in high temperatures. Who knew? Okay, here's number two. The 1904 Olympic marathon in St. Louis is considered the most ultramarathon-like disaster in history. The winner, Thomas Hicks, finished only after his trainers gave him a near-lethal performance enhancer mid-race. What was it? A, an early anabolic steroid, methyltestosterone, the first synthetic oral derivative of testosterone. B, doses of strychnine, a rat poison, mixed with brandy. C, pure oxygen from a compressed tank and salted beef broth. Or D, a crude form of adrenaline extracted from sheep glands.
SPEAKER_01: 39:02
Okay, we're thinking about 1904, running the ultra, the runners collapsing. What does he need? And that time frame.
Rajiv Parikh: 39:10
It was still an Olympic marathon, and the biggest disaster in history. And he still finished, even though they gave him a near-lethal performance enhancer. So: testosterone; strychnine; oxygen from a tank and salted beef broth; or a crude form of adrenaline extracted from sheep glands.
SPEAKER_01: 39:27
I think these are really tricky. If it's during a race, I would say like it's live during a race. I feel that like the oxygen and the soup is most likely. Like I would receive that if I was in the mountains. But if it's not during a race, I might go with a different answer.
Rajiv Parikh: 39:42
With which one?
SPEAKER_01: 39:43
If it wasn't during the race, I'll probably go for the testosterone.
Rajiv Parikh: 39:46
Testosterone. But remember, it was near lethal.
SPEAKER_01: 39:49
I know you're saying maybe it's the rat poison and the brandy.
Rajiv Parikh: 39:52
Guess what? It's the rat poison and the brandy. It was a pseudo-trick question. And your answers make a lot of sense, by the way, because frankly, none of these make sense. But it's actually true: it was strychnine, which in small doses was thought to be a stimulant at the time. Hicks collapsed after the finish line and nearly died. It remains the most bizarre medically assisted win in Olympic history. All right, here we go. Number three. This one I think may be closer to home, so maybe you can get this one. There's a race in Queens, New York, which is your backyard, called the Sri Chinmoy Self-Transcendence 3100 Mile Race. What is the mind-bending logistical requirement of this ultramarathon? A, runners must complete the entire distance on a standard treadmill set to a 2% incline. B, runners must carry a 15-pound pack symbolizing the weight of the world. C, runners are only allowed to sleep for 90 minutes for every 100 miles completed. Or D, runners must complete 5,649 laps around a single city block in an industrial neighborhood. So, the Sri Chinmoy Self-Transcendence 3100 Mile Race. That's a long-ass race.
SPEAKER_01: 41:03
You know, we're going with mind-numbing, right? And self-transcendence feels like you have to get used to monotony. So I feel like the city block, the 5,600-odd laps, would be something you'd have to meditate through. So I'm going with D.
Rajiv Parikh: 41:17
Guess what, David? You are right. The correct answer is D, and for the right reasons. It's the world's longest certified footrace. Runners circle one block in Jamaica, Queens, from 6 a.m. to midnight for up to 52 days. It's the ultimate test of the keep-moving-forward-at-2-a.m. mindset. So, love it. Good thinking. Okay. The fourth and final question. In 1983, a 61-year-old potato farmer named Cliff Young entered the inaugural Westfield Sydney to Melbourne Ultramarathon, which is 544 miles. He beat the world's elite runners by two days. What was his non-consensus secret to winning? A, he wore heavy work boots that provided superior ankle support on uneven roads. B, he consumed a diet consisting entirely of raw milk and pumpkin seeds. Or C, he ran straight through the night while the professional athletes slept. So how did our man Cliff win?
SPEAKER_01: 42:14
Yeah, so my guess was going to be so off, but I think it's something with the ankle boots. Perhaps this race was on the beach in the sand, and maybe the support gave him a leg up. We'll see.
Rajiv Parikh: 42:25
All right. He was 61 years old. So there's work boots on uneven roads. He's against the world's elite runners, and he beat them by two days. So: work boots; raw milk and pumpkin seeds; or he ran straight through the night while the others slept.
SPEAKER_01: 42:43
Well, in that case, you're leaning me towards running straight through the night.
Rajiv Parikh: 42:47
Guess what? You're right. And the crucial part is knowing that he was that old, right? You'd think that at his age, and maybe slightly out of shape, he'd have no chance. But most pros believed they needed six hours of sleep a night to survive. Cliff didn't know that. He kept running while the others were in bed, proving that in endurance, and perhaps in venture capital investing, sometimes the best strategy is simply not stopping when everyone else does. Which is actually something you did for a while. I think in the pandemic, a lot of folks were not putting investments together. And you did, and you kept persisting.
SPEAKER_01: 43:21
Yep. And you know, here we are, five and a half years in and continuing to build. It's quite special. I mean, I've been through a lot of war rooms with our founders, when something has to get done: okay, let's fly out, let's meet up, let's work on this for a day, let's figure out the problem. And I think that is the persistence. And that's a special secret sauce, honestly. I may not do ultramarathons as long as the ones you just shared, but there's something special when you can still put hours and hours to pavement. It shows that commitment, right? And that discipline.
Rajiv Parikh: 43:53
Yeah. I mean, it's a lot of self-belief, right? You're believing in stuff when folks are going the other way. So it's really, really amazing. Okay, so let's talk about what's your spark. So you grew up in Florida taking apart circuit boards. I remember you talking about how you worked fixing VCRs in your father's electronics repair shop. So that gave you that early curiosity about how things work. Do you find that this tactile, hardware-centric upbringing makes you more skeptical of pure software AI companies that lack a physical infrastructure component? Or has it made you yearn to return to robotics, which is something you mentioned was a path you wish you had gone all in on?
SPEAKER_01: 44:26
Yeah, I think two years later, we can see that robotics is becoming a new trend, right? Jensen at NVIDIA said that physical AI will be the next big wave. And we've made bets in Physical Intelligence and some other robotics companies like Apptronik. So this might be my way to reclaim that heritage. You know, I look at a lot of great founders who inspire me, right? Dario at Anthropic, Ali at Databricks, Mati at ElevenLabs. They're all software founders. None of them are building on hardware. So traditionally I love seeing how software changes the world, but we're definitely at that inflection point, right? I'm looking at what Brett's doing at Figure AI and some incredible companies out here today. But I tell you, I just love something about tactile experiences, the hardware, putting it together. So when I get to meet a founder who's building in hardware, we can go a lot deeper in conversation, right? You can really see that endurance they're also committing to.
Rajiv Parikh: 45:26
It's an incredible challenge, right, when you're dealing with hardware. You have things you don't have to deal with in software. You have actual physical inventory. It's real stuff: it has physical limits, it has environmental limits, it has manufacturability. And I think a lot of companies didn't do well because maybe VCs weren't set up for that. And now that we've moved further and become more mature, and maybe with some of what Elon did with Tesla, we've decided we can do more with hardware and more with robotics.
SPEAKER_01: 45:51
Yeah. And I mean, I think we're seeing even a lot of incumbents going into that space. You know, Meta saw a lot of great success with its partnership with EssilorLuxottica on the Meta Ray-Ban glasses. And now they're planning to launch, perhaps later this year, a Meta watch. OpenAI, of course, acquired Jony Ive's startup. Jony Ive, the famous designer from Apple. Now OpenAI is gonna be launching, supposedly end of this year or next year, a pendant necklace, right? So we're seeing a lot more physical products, or phygital products, moving into the real world.
Rajiv Parikh: 46:25
Phygital. I like that. We're gonna have to use that one. I love that. You are a certified 500-hour Baptiste yoga teacher. In an AI industry defined by breathless acceleration, effective accelerationism, and constant FOMO, how do you practically apply the discipline of yoga to maintain investor discipline? Do you have a specific practice for clearing the noise before signing a term sheet?
SPEAKER_01: 46:50
Well, this is a fun question. And I would say I don't practice as regularly as I used to when I was all in on the yoga scene. That might have been when it was truly the zeitgeist in America; there was a period when everyone, your mom and your grandpa, was doing yoga. But I think there's still something special about doing hard things. And for me at that time, going through teacher trainings showed me what was possible in my body: balance and headstands, commitment to a pose, refinement, meditation. I think that's kind of what led me down this endurance athlete path, right? It's still very meditative. It's a consistent pose of running and striking your forefoot or the ball of your foot, depending on where you're at. So yeah, honestly, I am a huge fan of athletes who are founders, right? I have mad respect, whether it's athletes or investors.
Rajiv Parikh: 47:43
You have to focus on the detail and you have to really look at things over and over. You can't just jump from thing to thing, right? You have to really focus to perfect your craft.
SPEAKER_01: 47:52
Yeah. And look, you don't have to always win a gold medal at the Olympics, but if you were up there, it kind of shows that commitment. You know, back in high school and in college, I did math competitions and the Putnam. And today these are all solvable by Claude and OpenAI. But every once in a while I'll meet a founder or an investor who also did the competitions, and you just build such common ground really quickly. Like, wow, you also spent thousands of hours solving these math problems by hand. Unfortunately, we don't need to do that anymore, but mad respect, right?
Rajiv Parikh: 48:23
That's great respect for someone willing to go through that. So we always ask guests to name a historical event or person or movement that inspires you. And you answered the immigrant founder. What in particular about that journey lights you up or sparks you?
SPEAKER_01: 48:36
Look, my dad's an immigrant. I'm the first generation to go to college in the States. It's something really special. I think most American startups and large companies today were launched and built by immigrants. It truly has built this country. I personally believe that the reason America is not declining in population, why its economy is still thriving, why its GDP is still growing and not dying from the national debt, is immigration. I truly think we're an immigrant-first country. And when we see founders like Dario of Anthropic building between the US and London, Ali of Databricks, Mati of ElevenLabs from Poland, these founders who are choosing to headquarter their companies in the US, something about the American dream, I think it's still alive. And I think what will help America stay one of the leading countries in the coming decades is being pro-immigration and really having policies that support people, whether it's through technology, H-1B, O-1, any of these programs, to make a better life. And when they come here, I think 9.9 or 10 out of 10 immigrants are always very patriotic, very supportive of the country.
Rajiv Parikh: 49:50
So like you said, they're willing to make the choice to come here. They want this ideal when they come here, right? And of course, they thrive because they're here; they're already the risk-takers. We've had Manan Mehta on our show; his fund Unshackled is all about the pre-seed immigrant entrepreneur. So I'm like you, I'm a huge fan of it. Just like you, my father also came from India in 1964 to get a better education. Totally with you on that. So let's go through some even more personal questions into work life. What's something you've gotten significantly better at in the last year that has nothing to do with work?
SPEAKER_01: 50:26
Saying no to things. And that could be both personal and work, right? You get invited to events with friends or for work; take this meeting, take this call; and sometimes it's being like, hey, let's punt that a few weeks. But I don't think that directly answers your question. Well, something really interesting I'm training for this year is my first-ever triathlon. So even though I'm super big into running, as we know by now, I'm not that great at biking and swimming. So those are actually two things I'm really training and growing a lot this year. I think swimming, much like running, is very meditative, right? You have to have that consistent form going back and forth, the breath work. So those are things I think I've been improving quite a bit this year.
Rajiv Parikh: 51:12
Which level are you gonna do? How are you gonna start?
SPEAKER_01: 51:14
Oh, you know, having done so much running, but not races like these Ironmans, I would say just start with the Olympic. The sprint I think is about a couple hours; the Olympic is usually four or five.
Rajiv Parikh: 51:25
It's a great way to start, right? Step one. And like you, I love running, but I'm not as good of a swimmer, and I've been teaching myself to swim more as part of the rotation.
SPEAKER_01: 51:35
Well, I will tell you, if you are inspired, maybe we'll do a tri together. The sprint tri, why it's so fun and accessible, and why I'm training for it: it's less than half a mile of swimming, so not too long; a twelve-and-a-half-mile bike ride, very feasible, very doable; and just a 5K, a three-mile run. So really, it's the swim, half a mile in open water, that's gonna be the challenge.
Rajiv Parikh: 51:58
So, you know what? That would be a great way to get over my fear of not having a wall to touch. So I might join you for that. That'd be a good one. What's the most interesting thing you've learned recently from a random internet rabbit hole? Or maybe it's a random chat rabbit hole.
SPEAKER_01: 52:16
An AI agent chat or something, right? I think Moltbook, right? This whole OpenClaw movement has been so fascinating. For those who don't know, Peter Steinberger, a famous startup founder, created a project called OpenClaw, where AI agents can talk to AI agents. And in early 2026, it became super viral. It got turned into this Reddit-type website called Moltbook. And it's just been so fun going down these threads and rabbit holes: what would the agents say to each other? And look, for all intents and purposes, they're not human; there's no God complex to these agents, they're just text responding to text. But it's so fascinating to see the flow of conversation and how they upvote. I just find it really fun, and I probably spend a few hours just going down those rabbit holes. Yeah.
Rajiv Parikh: 53:06
That's pretty cool. If you could sit in on any meeting happening anywhere in the world right now, just to observe how it's run, what meeting would you choose?
SPEAKER_01: 53:14
I would love to sit in on meetings with a head of state, whoever that head of state is, right? Whether it's the US administration, the Chinese administration, the European Union.
Rajiv Parikh: 53:26
Okay, so is there one that pops to mind, a leader you really want to dig into and learn more about and see how they run things?
SPEAKER_01: 53:35
I would love to sit in on meetings with Trump, because I am so fascinated by the art of negotiation and the art of closing a deal. And I don't want to sit in on all the meetings; I want to sit in on the meeting where he strikes peace between Putin and Zelensky. Like, how did that happen? Or how did he have this incredible meeting with the Nobel Prize winner from Venezuela, right? I want to see the magic come to life. I think it would just be so fascinating.
Rajiv Parikh: 54:06
All right. I love that. That's a great answer. That would be a very fascinating meeting. Having lost a Super Bowl bet and watching Melania, I don't know if I'd want to be in that particular meeting, but we'll see. But it's a great answer. Uh, what's something you wish you could experience again for the first time?
SPEAKER_01: 54:23
I talk to my dad a lot about tech and the changes in tech. And I think going through this AI power movement is the biggest generational shift we've seen since cloud, since mobile, since the World Wide Web. And my dad always says, oh my gosh, how great would it have been if I was born 30, 40 years later. I'm not saying I want to be born later, but I can't imagine. If you're listening to this podcast today and you're a middle school, high school, or college kid, or just graduating college, the world is your oyster. It truly is. It's so incredible what you can do with vibe coding. I mean, I have spent, as many of us have, hundreds of hours on all types of manual-labor, data-entry-type projects that you could do in seconds today. It's just incredible. And I think most people are not taking advantage of it. So it'd be kind of fun to rewind 20 years, go back to my high school days, and it's no longer doing thousands of math competition problems; it's, how could I start building modern agentic systems in high school? So I'm really excited to see where this new tech wave takes us.
Rajiv Parikh: 55:34
That'd be a great way to go, right? Maybe that's the meeting to zoom into: looking at something through a kid's eyes, in the future or even today. What's the difference between the leader you are and the leader you're trying to become?
SPEAKER_01: 55:49
You know, I think I still lead a lot with my technical knowledge, this technical view from having managed technical products and technical teams in a data and AI ecosystem. Though sometimes, and this is to our benefit as a firm, we get into the weeds. We really get into the technical details with founders to support them and understand what they're building. And that creates great common ground. But sometimes we're so in the weeds that it takes a little bit more to understand the business, right? So I think the leader I'm becoming is one who truly sees how you take such incredible breakthroughs in reasoning and in token rates, in building the next physical AI solution, and apply that research to a business. And that's something we've been doing a lot over the last couple of years, which I think has led us to make investments in exciting companies like ElevenLabs and Harvey AI and some other notable companies.
Rajiv Parikh: 56:45
Here's an extra question. With your fund, you're investing in companies that are much later stage, but you're also investing in seed stage companies. So how are you structuring these?
SPEAKER_01: 56:55
Yeah, I'd say, you know, early on, we were exclusively focused on early stage. And just by way of companies graduating into the A's and B's, we got into growth. And today we're pretty much half and half, right? So we're looking a lot at these earlies, making bets on a lot of stealth companies. I definitely consider early-stage companies big bets, right? You're asking: could you be the next Google? Could you be the next Lovable? Could you be the next Anthropic? With the growth-stage companies, it's: hey, you're already at 100 million, 200 million in revenue; do we believe you can add firepower and become a billion in revenue, right? And become a generational company. So I think those are the core thematics, but we're very excited on both sides.
Rajiv Parikh: 57:37
That's great. Well, David, thank you so much for coming on the show today and sharing your insights two years afterwards. There's such an incredible progression from where we just were to where we are today. And I really appreciate having you here. You bring such an amazing perspective from all the different areas that you work in. So thank you so much.
SPEAKER_01: 57:55
Thank you. It's always my pleasure, and great to be back on Spark of Ages.
Rajiv Parikh: 58:04
All right, thanks for listening. If you enjoyed the pod, please take a moment to rate it and comment. You can find us on Apple, Spotify, YouTube, and everywhere podcasts can be found. This show is produced by Anand Shah and edited by Laura Balland, with production assistance by Taran Talley. I'm your host, Rajiv Parikh, from Position Squared. We are a leading AI-based growth marketing company based in Silicon Valley. Come visit us at position2.com. This has been an Effin Funny production, and we'll catch you next time. And remember, folks, be ever curious.