Agent feedback is the new User feedback
In this article, lchoquel examines the crucial differences between agent (LLM) and human user feedback in software development. The author argues that agent feedback, being brutally honest and unemotional, offers a unique perspective that can rapidly identify weaknesses in products and documentation. By integrating agent-driven testing and co-design early in the development cycle, developers can iterate faster and address fundamental issues before engaging with human users. The piece introduces the concept of the Minimum Agent Prototype (MAP), which allows for cheap, rapid prototyping with agents, ultimately aligning products more closely with user needs.
Agent Feedback: The Cold, Hard Truth Your Software Needs
lchoquel dives into a new paradigm in software development: leveraging feedback from agents, especially Large Language Models (LLMs), to refine products. Traditionally, developers build a Minimum Viable Product (MVP) to gather user input, establishing the feedback loop often described as ‘Build-Measure-Learn’ in Eric Ries’ Lean Startup.
Agent vs. Human Feedback
The author highlights a key difference: while human user feedback can be colored by emotion, prior experience, and misunderstanding, agent feedback is unemotional and direct. If an agent can’t use the product, it is usually because of a real product flaw or poor documentation; there is no excuse-making and no ambiguity.
Rethinking Documentation and Design
Initially, the instinct may be to refine the documentation, adding instructions or guidelines in an attempt to guide the agent. However, when an agent repeatedly tries to use a product in a way other than the one intended, it can signal an underlying usability issue. Perhaps the agent’s approach aligns more closely with what actual users want. Recognizing this, lchoquel suggests treating agent feedback as part of the design process: instead of fighting it, incorporate it.
The MAP Approach
The article introduces the Minimum Agent Prototype (MAP), a concept that mirrors the MVP but is tailored for agents. A MAP can be created quickly and cheaply, sometimes consisting of nothing more than specs, docs, or a pitch deck, just to observe how an agent handles the intended logic and flow before any major investment.
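The article does not prescribe any particular tooling, but a MAP session can be as simple as handing an agent the draft spec plus a realistic user goal and recording where it gets stuck. The sketch below is one possible shape for that: the call_llm() helper, the billing spec, and the user goal are all illustrative assumptions, not details from the article.

```python
# Minimal Agent Prototype (MAP) sketch: probe a draft spec with an agent
# before writing any implementation code. `call_llm` is a hypothetical
# placeholder for whatever LLM client you actually use.

DRAFT_SPEC = """
Billing API (draft)
POST /invoices       create an invoice from a list of line items
GET  /invoices/{id}  fetch an invoice by id
"""

USER_GOAL = "Create an invoice for 3 hours of consulting and email it to the client."


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call (OpenAI, Anthropic, local model, ...)."""
    raise NotImplementedError("wire up your LLM client here")


def run_map_session(spec: str, goal: str) -> str:
    # Give the agent only what a real integrator would have: the spec and a task.
    prompt = (
        "You are an autonomous agent integrating with the product described below.\n"
        f"Product spec:\n{spec}\n"
        f"Your task: {goal}\n"
        "Walk through the exact calls you would make, then list every point where "
        "the spec is ambiguous, missing, or forces a workaround."
    )
    return call_llm(prompt)


if __name__ == "__main__":
    feedback = run_map_session(DRAFT_SPEC, USER_GOAL)
    print(feedback)  # a raw, unemotional friction report on the draft spec
```

In this toy spec the emailing step has no endpoint at all, which is exactly the kind of gap an agent will flag bluntly before any human tester ever sees the product.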
Practical Advantages
Testing with agents can reveal obvious failure modes early, allowing for rapid iteration. This is especially crucial as more human users employ their own agents or automation tools—if software isn’t agent-friendly, it risks being sidelined or misunderstood.
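One way to act on this, sketched below with the same hypothetical call_llm() placeholder and an assumed docs path, is to promote a representative agent task to a repeatable smoke test: each iteration either keeps the agent unblocked or fails loudly in CI.

```python
# Agent smoke test sketch (pytest style): fail the build when an agent can no
# longer complete a representative task using only the public docs.

from pathlib import Path


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call, same idea as in the MAP sketch above."""
    raise NotImplementedError("wire up your LLM client here")


# Hypothetical docs path and task; adjust both to your own project.
DOCS = Path("docs/quickstart.md").read_text()
TASK = "Using only the quickstart, write the curl command that creates an API key."


def test_agent_completes_quickstart_task():
    answer = call_llm(
        f"Docs:\n{DOCS}\n\nTask: {TASK}\n"
        "If anything you need is missing from the docs, reply with the single word BLOCKED."
    )
    # The agent's verdict doubles as the test result: a BLOCKED reply means the
    # docs or API surface regressed in a way that stops an automated client cold.
    assert "BLOCKED" not in answer
```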
Conclusion
lchoquel encourages fellow developers to integrate agent feedback into their development pipelines, not as a replacement for user feedback, but as an early, invaluable tool for surfacing and resolving issues before they reach human testers. The article ends with a call for shared experiences from others on this emerging approach.
This post appeared first on Reddit AI Agents.