Summary & Insights
If you think of the internet as a global brain, APIs are the dendrites that connect its neurons, allowing different systems to “think” and act together. This foundational idea frames a conversation with Alex Rattray, founder of Stainless, about how the very nature of these connections is undergoing a seismic shift. The discussion centers on the evolution from APIs designed solely for human developers to interfaces that must also serve a new class of user: AI agents. Drawing from his experience building Stripe’s revered developer platform, Alex explains why a high-quality Software Development Kit (SDK) is often the true API for developers, providing the type safety, idiomatic design, and robustness that make integration possible. Yet, the rise of AI, particularly through protocols like the Model Context Protocol (MCP), introduces novel challenges and forces a reimagining of what an intuitive interface looks like when the user is a large language model.
The core of the challenge lies in the fact that AI agents interact with APIs fundamentally differently than humans do. While a human developer can navigate extensive documentation, an LLM can be overwhelmed by the sheer volume of potential endpoints and parameters, quickly exhausting its limited context window. This requires a new paradigm for API design—one that dynamically reveals functionality and filters responses to keep interactions lean and focused. The conversation explores practical solutions, such as allowing users to select only the API resources they need, creating dynamic tool discovery flows, and employing jq-style JSON filters to let the AI request only the specific data fields relevant to its task, thereby preserving precious context.
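The dynamic tool discovery flow described above can be sketched as a set of meta-tools. This is an illustrative sketch, not Stainless's or MCP's actual API: the function names (`list_endpoints`, `describe_endpoint`, `invoke_endpoint`) and the registry shape are assumptions made for the example. The idea is that an agent first sees only cheap endpoint names, and pulls full parameter details into its context only for the endpoint it actually needs.

```python
# Hypothetical sketch of dynamic tool discovery: instead of exposing every
# endpoint as its own tool (and dumping the full API spec into context),
# the agent gets three meta-tools and drills down step by step.

ENDPOINTS = {
    "customers.list": {
        "description": "List customers, newest first.",
        "params": {"limit": "int, max results (default 10)"},
    },
    "customers.retrieve": {
        "description": "Fetch one customer by ID.",
        "params": {"id": "str, customer ID"},
    },
}

def list_endpoints() -> list[str]:
    """Step 1: a names-only listing keeps initial context usage minimal."""
    return sorted(ENDPOINTS)

def describe_endpoint(name: str) -> dict:
    """Step 2: the agent requests details for just the endpoint it needs."""
    return ENDPOINTS[name]

def invoke_endpoint(name: str, **params) -> dict:
    """Step 3: execute. A real server would dispatch an HTTP call here."""
    return {"endpoint": name, "params": params, "status": "ok"}
```

Splitting list, describe, and execute into separate calls means the agent spends tokens on one endpoint's schema at a time rather than the whole spec up front.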
Ultimately, this shift doesn’t diminish the importance of traditional developer experience; it amplifies it. As AI coding agents become more common, they exhibit a strong preference for using well-typed, polished SDKs over making raw HTTP requests, as the SDKs provide guardrails and reduce hallucination. This creates a virtuous cycle in which the need for AI integration accelerates demand for the very best developer tools. The future, as Alex envisions it, involves developers focusing on high-level business logic and API design, while robust platforms and frameworks—like the one Stainless is building—handle the intricate, low-level infrastructure of SDK generation, documentation, and versioning, catering seamlessly to both human and machine users.
Surprising Insights
- The SDK is the API: For many developers interacting with a service daily, the SDK is their primary interface, not the underlying REST API. Its quality—being idiomatic, robust, and well-documented—directly defines the developer experience.
- AI agents prefer SDKs: Contrary to the assumption that LLMs would work best with simple HTTP endpoints, coding agents actively seek out and prefer to use typed SDKs because they provide structure, reduce errors, and allow for faster iteration within a local environment instead of making risky “testing in production” API calls.
- Context window is a major design constraint: The most significant hurdle in designing APIs for LLMs isn’t functionality, but the limited context window. Presenting a full API spec with all parameters is often impossible, forcing new strategies for dynamic, just-in-time tool and parameter discovery.
- AI unlocks new tooling for efficiency: Simple additions, like equipping an AI with a jq-style JSON filtering capability, allow it to request only the specific data it needs from a large API response, dramatically improving efficiency and keeping context usage light.
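The jq-filtering insight above can be made concrete with a toy example. Real servers would likely shell out to the `jq` binary or use a jq binding; this sketch supports only dot-separated paths (e.g. `customer.email`), and the response payload and helper name are invented for illustration.

```python
# Illustrative mini-filter in the spirit of jq's ".field.subfield" paths:
# the agent names only the fields it needs from a large API response, so
# only a few tokens enter its context window instead of the full payload.

def jq_path(data: dict, path: str):
    """Resolve a dot-separated path like 'customer.email' against nested dicts."""
    for key in path.split("."):
        data = data[key]
    return data

response = {
    "customer": {
        "id": "cus_123",
        "email": "ada@example.com",
        "metadata": {"plan": "pro", "seats": 12},
    },
    "pagination": {"cursor": "abc", "has_more": False},
}

# Instead of the whole response, the agent requests two specific fields:
slim = {p: jq_path(response, p) for p in ["customer.email", "customer.metadata.plan"]}
```

Here `slim` contains just the email and plan, while the ID, seat count, and pagination details never consume context.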
Practical Takeaways
- Treat your SDK as a first-class product, investing in its design, type safety, and documentation, as it is the main touchpoint for both human developers and AI coding agents.
- When building for AI agents via protocols like MCP, design for dynamic discovery. Instead of dumping your entire API spec into the context, create systems that allow the agent to list, describe, and then execute endpoints in separate steps to conserve tokens.
- Implement filtering mechanisms like JQ in your API responses to enable AI agents to request only the specific data fields they need, preventing large payloads from breaking the interaction.
- Prioritize making your API’s functionality selectively exposable, allowing users (or the AI itself) to enable only the specific resources (e.g., only customers, only read operations) they need for a given task to reduce initial complexity.
- Ensure your documentation is machine-readable and version-accurate so that AI coding agents can access the correct reference material, reducing hallucinations and integration errors based on outdated SDK versions.
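The selective-exposure takeaway above can be sketched as a simple filter over an endpoint registry. The registry shape and the `expose` helper are assumptions made for this example, not an existing library API; the point is that the agent's tool list is narrowed before it ever reaches the model.

```python
# Hypothetical sketch of selective exposure: pare a full endpoint registry
# down to just the resources and operations a given task needs, then hand
# only that subset to the agent.

ALL_ENDPOINTS = [
    {"name": "customers.list",   "resource": "customers", "op": "read"},
    {"name": "customers.create", "resource": "customers", "op": "write"},
    {"name": "invoices.list",    "resource": "invoices",  "op": "read"},
    {"name": "invoices.void",    "resource": "invoices",  "op": "write"},
]

def expose(endpoints, resources=None, ops=None):
    """Return only the endpoints matching the allowed resources/operations."""
    return [
        e for e in endpoints
        if (resources is None or e["resource"] in resources)
        and (ops is None or e["op"] in ops)
    ]

# "Only customers, only read operations" for a read-only support task:
tools = expose(ALL_ENDPOINTS, resources={"customers"}, ops={"read"})
```

A four-endpoint registry is trivial, but the same filter applied to an API with hundreds of endpoints is the difference between an agent that fits in context and one that doesn't.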
As more digital natives have entered the workplace, they have brought with them the expectation that their software should both be a joy to use and allow them to be power users—that is, users who configure and control it to better serve their needs. And often, these digital natives aren’t just aspiring power users; they are also prosumers who can and will pay for a premium experience. But first-generation SaaS products have often struggled to deliver the experience these users crave.
For today’s founders and builders, how do you get the user experience right when a product has to delight your power users, while being something a less savvy user can pick up and learn?
In this episode, a16z general partner David Ulevitch and Superhuman founder Rahul Vohra discuss how to build products that can turn any user into a power user. The conversation touches on themes from David’s recent talk on products that adopt developer-tool patterns, like the command palette and keyboard shortcuts, to improve usability, and Rahul’s talk on applying game design principles to product design. They cover how to onboard users to drive virality, when to expand to a second product, and how to use pricing to position a premium product.