A conversation on modern AI coding with Khe Hy
There's a particular moment that happens when someone with zero coding background starts building software with AI. It usually involves equal parts wonder and frustration and a lot of "wait, that actually worked?" (and probably at least one existential question about what programming even means anymore).
I wanted to have this conversation with Khe Hy because his path to AI is one that I’m seeing with a lot of business leaders. He's got a CS degree from Yale, spent 15 years analyzing quant funds, became Managing Director at BlackRock by 31, and then walked away from Wall Street to focus on his family. I’ve been having conversations with Khe here and there and I was surprised to hear that despite his CS background, he hasn’t been in the weeds of actual coding for decades. The current moment is changing that though, and now he’s back to building software in a world where the tools have fundamentally changed.
I'm coming at this from the other direction. I spend my days in Cursor, running SQL queries, shipping prototypes, and thinking about AI tools as someone who lives inside them. We're both looking at the same technology but from very different angles.
What follows is an unfiltered Q&A about what it actually feels like to build software in 2025, whether you're a technical PM or a business person picking up code for the first time.
On AI coding tools
When you first opened Cursor (or Replit, or whatever AI coding tool you started with), what did it actually feel like? Was there a moment where something clicked, or was it mostly confusion?
Jake: I had been familiar with IDEs in the past, learning how to write a few odd scripts here and there pre-coding agents. Cursor felt, of course, very similar to VS Code with a few distinctions, chief among them being the chat panel. It didn’t take long for me to realize the power of that panel, especially its early Agent mode. I had previously been copy/pasting code snippets from ChatGPT, so watching Cursor’s Agent mode pretty much entirely automate processing that was taking up a considerable chunk of my time was a big “a-ha” moment for me. There was no turning back at that point.
Khe: I’m embarrassed, but I had to take an online course on Cursor/Vibe Coding. I had to Google what IDE stood for and to this day I have never used VS Code. The course was aimed at non-coders and, honestly, just understanding Cursor’s three panels felt like freedom. And don’t get me started on using the terminal.
On the skill curve
There's this debate about whether AI coding tools flatten the skill curve or just hide it. A seasoned developer knows why the code works, not just that it works. Khe, do you feel like you're actually learning, or are you just getting things done without understanding them? And Jake, from your side, do you think that distinction even matters anymore?
Khe: As someone who likes to be in control, I feel deeply uncomfortable knowing that some code is running, yet I have no idea what is happening. This has led me to a new style of learning. First, I spend a lot of time describing the problem I’m trying to solve with code and asking an LLM (typically ChatGPT) to suggest frameworks, unit tests, back-ends. I also try to understand these decisions. Then I have the LLM actually describe the code in pseudo code and see if I can track down the logic. The actual writing of the code is fairly straightforward. Then I pray that there are no bugs, because I don’t really know how to fix them without just saying “try harder.” That being said, I’ve started to incorporate unit tests (which I learned from ChatGPT) to test my code along the way.
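To make Khe's testing habit concrete, here's a minimal sketch of the kind of unit test an LLM might suggest alongside generated code, using Python's built-in unittest. The loan-payment function is a hypothetical stand-in, not anything from Khe's actual projects:

```python
import unittest

def monthly_payment(principal, annual_rate, months):
    """Fixed-rate loan payment (hypothetical helper a vibe coder might build)."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

class MonthlyPaymentTest(unittest.TestCase):
    def test_zero_interest_splits_evenly(self):
        # 1200 over 12 months at 0% should be exactly 100 per month.
        self.assertAlmostEqual(monthly_payment(1200, 0.0, 12), 100.0)

    def test_payment_covers_monthly_interest(self):
        # With interest, each payment must at least cover one month's interest,
        # or the loan would never be paid down.
        self.assertGreater(monthly_payment(10_000, 0.06, 36), 10_000 * 0.06 / 12)

# Run the suite explicitly rather than via unittest.main(),
# so this snippet also works when pasted into a larger script.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(MonthlyPaymentTest)
unittest.TextTestRunner(verbosity=2).run(suite)
```

The point isn't the specific math: it's that tests like these let a non-expert verify behavior without reading every line the agent wrote.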
Jake: I like Khe’s approach, but in general I think the distinction is mattering less and less. Eventually I think most people won’t know the specifics of how their code works, just like most people don’t know the specifics of how their car’s engine works. Once the value of an output is commoditized to a point where anyone can take advantage of it, the underlying understanding slowly fades away. That can and will happen with code (though there will always be code junkies who understand it all, just like motorheads).
On the risks of AI coding
What's the worst thing you've broken while building something with AI assistance? And more importantly, how did you figure out how to fix it?
Jake: It’s less common nowadays with models that have been optimized to make fewer mistakes, but I used early ChatGPT to debug a lot of my SQL queries. At the time it was a bit risky, and I can recall at least a few instances where the model confidently gave me an incorrect query (that ran without errors!) that I used to generate data reports, no questions asked.
Eventually, after I realized the reports had some inaccuracies, the cleanup wasn’t dissimilar from when I’d mess up SQL queries on my own. Lots of apologies and lots of tweaks to make sure the output is correct.
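A cheap defense against the "worked but wrong" queries Jake describes is a sanity check: compare the report's grand total against the raw table. Here's a sketch using Python's sqlite3 with a made-up orders table; the table, columns, and the buggy WHERE clause are all hypothetical:

```python
import sqlite3

# Hypothetical orders table standing in for a real reporting database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'east', 120.0), (2, 'west', 80.0),
        (3, 'east', 50.0),  (4, NULL,  30.0);
""")

# An AI-suggested report query: revenue per region.
# The WHERE filter quietly drops the row with a missing region.
report = conn.execute(
    "SELECT region, SUM(amount) FROM orders "
    "WHERE region IS NOT NULL GROUP BY region"
).fetchall()

# Sanity check: the report's grand total should match the raw table's total.
report_total = sum(amount for _, amount in report)
raw_total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
if report_total != raw_total:
    print(f"Report drops {raw_total - report_total:.2f} in revenue -- investigate!")
```

The query runs fine and the per-region numbers look plausible, which is exactly why this class of bug slips into reports unnoticed; reconciling against a known total catches it.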
Khe: I tend to struggle with what I call the “glue tasks.” These are the services that sit between your code and the end-user and in this case it was Resend.com being unable to send Supabase’s auth emails. There was an issue between the domains, SMTP servers and database access. With a challenge like this, I zoom out to fully understand the chain of steps that leads to the error (or outcome). I guess the antidote is a lot of patience!
On ambition inflation
AI coding tools seem to make people more ambitious about what they try to build. Before, you might have hired someone or used a no-code tool. Now you're trying to build actual software. Is this ambition inflation a good thing, or are people setting themselves up for projects they can't maintain?
Jake: I think it’s a double-edged sword. For technical product folks like myself, someone who has historically patched together mockups in Google Slides or Balsamiq, using AI coding tools to create interactive web apps is a net benefit. I can go from ideation to workable prototype without bugging my devs. Everyone’s happy.
For less technical product folks (or perhaps entry-level analysts whose bosses insist they can vibe code an entire feature set), there are problems and slowdowns. There’s a whole messy underbelly of preview deployments, necessary database structures, etc., that these folks are going to get caught up in. A lot of this infrastructure needs the careful guidance of a senior developer to ensure that the prototype is scalable and secure. As models get smarter and smarter, people will be more and more convinced that they don’t need to worry about that stuff. And that’s just not true.
Khe: I think it’s a good thing, and vibe coders like myself get a quick dose of reality when we need to think about marketing our apps or having them pass crucial security tests. But the challenge for me has now actually become time. Despite all the AI, these projects are extremely time consuming, which leads me to ask myself: Why am I actually doing this? For now, it’s usually to scratch an itch of a product I’ve wanted to use myself. Other times it’s to learn something new (like the Claude Agent SDK) or to build solutions for my clients. Ultimately, it’s become a pretty vanilla resource allocation question.
On the moment when AI stops being helpful
Let's talk about the moment when AI stops being helpful. That point where the project gets too complex, or the context gets too messy, and suddenly you're spending more time managing the AI than building. Khe, have you hit that wall yet? Jake, how do you navigate it?
Khe: I have, but usually it’s when I try to one-shot a pretty expansive PRD. I then realize that not only is one-shotting all about feeding my ego, but it usually overcomplicates what I’m trying to build – and more importantly, what I’m able to test. So now I still write the expansive PRDs, but build and test in incremental units.
Jake: Historically, yes. I’ve got some old AI-generated repos that I have no desire to go back into and clean up.
Lately, no. The more recent Codex models (5.1 and 5.2 on Extra High reasoning, specifically) are able to chew through an insane amount of context. Tasks in the past that required extensive repository knowledge are handled easily, sometimes taking 10-20 minutes. Their autonomy is fascinating to watch; oftentimes the model will explore websites/documentation unprompted to find stable solutions. Makes me feel like a proud parent.
On product-market fit
There's a difference between building something that works and building something people actually want to use. Khe, you come from a world of analyzing investments, which is fundamentally about understanding what makes something valuable. How does that mindset translate when you're the one building? Jake, how do you think about product-market fit when you can ship so much faster?
Khe: It’s interesting: while I do have a finance background, I think I’m better served by my decade of Digital Marketing experience in building out the RadReads platform. So first I ask myself, what channels/platforms could I use to get early users? How would I message, position and name it? I haven’t done much building in the enterprise category for investment firms yet – if (or when) I get to that point, I’d be much more focused on the TAM, unit economics, distribution strategy, etc.
Jake: Product-market fit for me is an upfront task. Once that’s settled, and you’re iterating/shipping quickly, the product-market fit becomes the feedback you’re getting from active users. If you can cross that first barrier and get a solid atomic network, the speed at which AI lets you ship becomes a tool multiplier in your iterative development process.
On getting your hands dirty
Both of you could easily hire developers or agencies to build things. What made you decide to get your hands dirty instead? Is this about cost, control, learning, or something else?
Jake: Honestly it’s just who I am. I started out my career by parking myself next to an entrepreneur at a local business center and telling him that I’d work for him until he could pay me, and then I’d work for him more. In high school I spent an inordinate amount of time cataloging my digital music library.
I’ve got control issues and a gnarly work ethic. It gets me into trouble sometimes. But for vibe coding and rapidly generating prototypes it’s pretty much the perfect combination.
Khe: The fact that I never hired a developer should be telling here. Prior to agentic coding, most of my coding projects were half-baked ideas that I thought “would be cool.” So not only could I not justify the cost, I didn’t even know how to “speak developer.” I wouldn’t have known which questions to ask. When I tried learning how to code via a programming language, it just felt like too steep of a curve – and too long of a climb.
On knowing when to trust the output
AI makes confident mistakes. It'll write code that looks perfect but fails silently, or suggest solutions to problems that don't exist. How do you develop the judgment to know when to trust the output and when to be skeptical? Khe, without deep technical background, how do you even know when something's wrong?
Jake: Those types of issues are happening less and less, but I do still think it's pertinent (and will remain so) to know what code your agent is outputting. You’ve gotta be the master planner and ensure that, even if you don’t know the color of the cogs, you know where they’re being placed and what they’re connecting to.
Also: ask your agent about the code, often, and test locally (or in preview branches) whenever possible. Use the time saved with vibe coding to test your software more intently. It’s worth it.
Khe: Ummm, I don’t! That’s why I focus heavily on testing and understanding edge cases before I start building. I’m currently using ChatGPT to help me understand the principles of testing. I do wonder if the LLMs will get good enough at identifying mistakes, which would then mean that the “meta skill” I need is how to do this alongside AI.
On building software yourself
Khe, you're building LaTour AI to help buy-side firms save time with AI. What have you learned from building software yourself that changes how you think about what you're selling to finance professionals?
Khe: I’ve focused primarily on small and mid-sized investment firms conducting financial research. These firms tend to be quite ingrained in their existing practices and subject to tight compliance constraints (particularly around emerging tech). Right now, I’ve worked with individual clients to build specific tools that they might use themselves for creating dashboards, scraping web data and assessing trends.
Jake, flip side: how does understanding business context (from your PM work) change how you approach building?
Jake: The critical role in my product management work is always in reconciling the business side with the development side. Engineers don’t (and shouldn’t have to) speak the language of sales, and vice versa. My job is to gather a holistic picture of the prospective user, the market they reside in, and the technical requirements needed to serve them.
Agents benefit from this same kind of information. I try to synthesize it all down to the critical bits, feed it that information, and then pair program alongside it. That’s where I see the best results.
On building and maintaining
Everyone talks about building. Nobody talks about maintaining. Jake, what percentage of your AI-assisted coding time is building new things versus fixing, updating, or managing things you've already built? Khe, are you thinking about this at all, or is it a problem for future-you?
Jake: I’m primarily a prototyper, so post-build management is something I don’t deal with too often. I’ve got some legacy websites I’ve gone back and updated, and manage going forward, but the repos are small enough for modern agent models to handle that pretty handily.
Khe: Since my apps only have a few users, there isn’t a ton of maintenance (yet). But I’m trying to stay ahead of the curve by trying to understand which logs I should be monitoring and how a tool (like Sentry) could help facilitate the process.
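For readers in the same spot as Khe, the first step toward monitorable logs doesn't require any external tool. Here's a minimal sketch using Python's built-in logging module; the app name and signup handler are hypothetical. Error-level records like these are exactly what a service such as Sentry would aggregate and alert on:

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("myapp")  # hypothetical app name

def handle_signup(email: str) -> bool:
    """Hypothetical signup handler; logs the failures worth monitoring."""
    try:
        if "@" not in email:
            raise ValueError(f"invalid email: {email!r}")
        log.info("signup ok for %s", email)
        return True
    except ValueError:
        # log.exception records the full traceback at ERROR level --
        # the kind of record worth watching for in production.
        log.exception("signup failed")
        return False

handle_signup("khe@example.com")
handle_signup("not-an-email")
```

Once failures are logged consistently like this, wiring them into a monitoring tool later is mostly configuration rather than a rewrite.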
On advice for the future
If you could go back to when you started using AI coding tools seriously, what advice would you give yourself?
Jake: Go fast and break things (responsibly). Try a million different things and stick with what works. Be hard on your agent.
I’d also probably spend a bit more time learning to manually code, for my own edification. It feels like it's too late to bother with that now.
Khe: Follow the aliveness. For me, this isn’t core to my job so I can treat it more like a hobby. I want to lean into the thing that I can’t stop thinking about and am always trying to fix. Then I use that as a launching pad to better understand the underlying architecture.
On the future of coding
Final question: In five years, what does "knowing how to code" even mean? Is it going to be more about prompt engineering and system design than syntax and algorithms? Are we both going to look back at this conversation and laugh at how primitive these tools were?
Jake: I always balk at a future of “prompt engineering”. If we get to a point where the only thing needed is managing how we talk to AI agents, then the AI agents themselves can probably manage that too.
I think a non-insignificant portion of the software engineering career field shifts into agent management. We’re seeing a lot of that now, and we’re still in the pretty early days of this tech. If the AI bubble were to pop today, AI code generation is by far the most likely segment to remain.
I think the concept of coding continues to shift more into the abstract. Those that know the inner depths will likely see a future of fixing shoddy AI code and managing infrastructure scaling and security. Everyone else just needs to get friendly with the bots.
Khe: Primitive, yes! If we had this interview last January we’d be talking about how primitive coding “auto-complete” was – that for sure wouldn’t have gotten me interested.
There’s always going to be a need to write high quality and reliable code. But there’s going to be an entire group of folks untethered to a specific approach AND willing to play around the edges of what AI can do. This group will be large and very heterogeneous, but here’s one thing I’d bet on: they’re going to be very valuable to companies.
Want to follow along with both of our AI journeys? Check out Khe's newsletter on future-proofing your career with AI and Jake's work on all things AI-tech.