Summary & Insights
We’re not in the AI equivalent of a sleek Windows era; we’re still tinkering in the garage with the 64K IBM PC, trying to solve basic problems like memory constraints and display issues. This foundational analogy, offered by former Microsoft president Steven Sinofsky, frames a wide-ranging discussion on the current, embryonic state of the AI platform shift. The conversation explores Andrej Karpathy’s concept of “jagged intelligence”—where AI models excel unpredictably in some areas while failing utterly in others—and what it means for builders today. A key distinction emerges between “vibe coding,” which is often overhyped and constrained, and “vibe writing,” which is already here, transforming roles from writers to editors. The dialogue stresses that we are in an age of “partial autonomy,” where the human must remain firmly in the loop, acting as a pilot or editor rather than being replaced, especially for tasks involving judgment, uncertainty, or exception handling.
The path to full automation is fraught with economic and practical hurdles, particularly for the much-hyped “agents.” True agentic AI requires not just technical capability but a viable economic model for the services it would utilize. A headless, faceless agent that simply finds the cheapest mortgage or flight ignores the complex reality of consumer choice and the need for businesses to differentiate themselves. Automation will likely arrive first for high-friction, low-judgment tasks, while areas requiring nuanced decision-making, like tax preparation or medical diagnosis, will retain a necessary human component for the foreseeable future. This leads to a reassessment of which jobs are truly at risk, moving past the hype to a more grounded understanding of augmentation versus replacement.
Looking at industry dynamics, the panel examines Google’s flurry of announcements at I/O not as a guarantee of future dominance, but as a classic large-company “shock and awe” tactic during a platform transition. The real test for incumbents is whether they can change their core product-building and go-to-market motions, not just showcase new technology. Similarly, in creative fields, AI is poised to massively raise the floor—generating competent “slop” like marketing copy or basic art—while also raising the ceiling for artists who learn to wield it as a new tool. The ultimate impact may be less about creating masterpieces and more about democratizing access to “good enough” content and services for a much broader population.
Surprising Insights
- The immediate, transformative power of AI may lie in “vibe writing” (e.g., drafting emails, marketing copy, essays) rather than in the more discussed “vibe coding,” because it offers a more readily achievable form of partial autonomy today.
- The development of prompt-based AI interfaces is essentially the creation of a new programming language, repeating the historical cycle of innovation in programming paradigms rather than eliminating the need for them.
- Effective AI “agents” face a significant, often overlooked economic barrier: for an agent to complete a task like refinancing a loan, the underlying services need to exist in a commoditized, API-accessible form, which many industries resist because differentiation is core to their business.
- Much of the world’s essential writing and content is “slop”—competent but unexceptional material like business case studies or generic articles. AI is exceptionally good and efficient at generating this tier of content, which actually fulfills a vast market need.
- The timeline for sophisticated, reliable AI agents is likely a decade long, a stark contrast to the current hype cycle that suggests they are just around the corner.
Practical Takeaways
- Start with writing, not coding: If you want to integrate AI into your workflow today, begin by using it as a writing co-pilot to draft, edit, and refine text-based content, accepting that your role shifts from creator to editor.
- Apply the “high-friction, low-judgment” filter: When evaluating tasks to automate with AI, prioritize those that are tedious and process-driven but don’t require deep nuance or personal preference (e.g., compiling research, initial data sorting) over those requiring significant judgment.
- Maintain a healthy skepticism toward demos: Be wary of flashy “text-to-app” or agent demos on social media; they are often prototypes that don’t hold up in production. Assume a human was deeply involved in the “prompt programming” to make it work.
- Design for a human-in-the-loop: Whether building with AI or using it, structure processes to keep a human in a position of oversight and decision-making, especially for outputs that carry risk or require accuracy, using the “Iron Man control slider” model of partial autonomy.
- Look for augmentation, not just replacement: In considering AI’s impact on roles like product management or radiology, focus on how the tool can augment and elevate the work by handling routine parts, rather than framing it as a binary replacement.
Make sure to check out our new AI + a16z feed: https://link.chtbl.com/aiplusa16z
a16z General Partner Anjney Midha joins the podcast to discuss what’s happening with hardware for artificial intelligence. Nvidia might have cornered the market on training workloads for now, but he believes there’s a big opportunity at the inference layer — especially for wearable or similar devices that can become a natural part of our everyday interactions.
Here’s one small passage that speaks to his larger thesis on where we’re heading:
“I think why we’re seeing so many developers flock to Ollama is because there is a lot of demand from consumers to interact with language models in private ways. And that means that they’re going to have to figure out how to get the models to run locally, without the user’s context and data ever leaving the user’s device. And that’s going to result, I think, in a renaissance of new kinds of chips that are capable of handling massive workloads of inference on device.
“We are yet to see those unlocked, but the good news is that open source models are phenomenal at unlocking efficiency. The open source language model ecosystem is just so ravenous.”
More from Anjney:
The Quest for AGI: Q*, Self-Play, and Synthetic Data
Making the Most of Open Source AI
Safety in Numbers: Keeping AI Open
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Stay Updated:
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
