Build or Buy: AI decision-making app for products and solutions

As AI transforms the way we approach everyday decisions, I set out to prototype a tool powered by a custom GPT I built and configured on OpenAI’s platform to understand and respond to product questions. This self-initiated project explores how to help anyone make smarter, more confident purchasing decisions by surfacing time-, effort-, and cost-saving alternatives in a single experience.
My goal was to create an app that leverages generative AI to deliver actionable options with clarity, while expanding my skills in rapid prototyping and LLM-driven product design.
The process of deciding whether to buy something new, attempt a DIY solution, or repurpose an existing item is often scattered and time-consuming. Users bounce between e-commerce sites, product reviews, and DIY tutorials, rarely seeing actionable comparisons in one place.
Most digital tools and LLM chat interfaces are not optimized for nuanced, multi-step product decisions, resulting in fragmented information and missed opportunities for savings or reuse.
There is a clear opportunity for an AI-powered platform to unify the decision-making journey by aggregating product info, DIY guides, pricing, and reviews, and guiding users toward the best choice for their needs, skills, and resources.
Outcome
I developed and documented a working product concept, from custom GPT setup and prompt design to mobile wireframes and clickable prototypes, showing an end-to-end approach to AI-driven product design.
By reimagining the journey, the Build or Buy app reduced the path to an informed decision from 18+ scattered steps to as few as 7 structured ones. This streamlined experience saves time and cognitive load, and makes creative, cost-saving alternatives far more accessible.
I drafted a clear Product Requirements Document (PRD) outlining core goals, user problems, and essential features.
The PRD became my reference point for feature development and prototyping. Most importantly, it shaped and documented my approach to developing and refining GPT prompts, ensuring the AI stayed aligned with real user scenarios and product objectives.
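To make this concrete, here is a minimal TypeScript sketch of how PRD goals like surfacing buy, DIY, and repurpose options with cost, time, and effort trade-offs can be distilled into a custom GPT's instructions. The wording is hypothetical and illustrative, not the exact configuration from the project.

```ts
// Hypothetical system instructions distilled from the PRD's core goals.
// The actual wording used to configure the custom GPT may differ.
const BUILD_OR_BUY_SYSTEM_PROMPT = `
You are Build or Buy, an assistant that helps users decide whether to
buy a product, build a DIY alternative, or repurpose something they own.

For every product question:
1. Ask for budget, skill level, and available tools if not provided.
2. Present up to three options: Buy, DIY, and Repurpose.
3. For each option, estimate cost, time, and effort, and note trade-offs.
4. Recommend one option and explain the reasoning in plain language.

Keep answers concise and structured so they can render as cards in a
mobile UI.
`;

export default BUILD_OR_BUY_SYSTEM_PROMPT;
```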
I created low-fidelity wireframes to quickly map user flows and visualize all key screens needed for the app. This step made it easy to test layouts, identify gaps in the journey, and clarify each step for users.
Wireframing provided a blueprint for prototyping and kept user needs front and center throughout the design and AI prompt process.
Before moving into prototyping, I used AI tools to generate mood boards, color palettes, and UI concepts, establishing the brand, onboarding flows, and visual direction for the app.
AI also helped me experiment with prompt wording and input styles, which guided both the look and logic of the product. This step accelerated ideation and ensured a cohesive experience from branding to prompt structure.
I built and tested interactive flows in Claude, v0, and Lovable, connecting each platform to my custom GPT or leveraging built-in LLMs for real-time output. This no-code approach allowed me to design and iterate quickly, while also surfacing assets and components I hadn’t initially considered. As I moved between tools, I observed how each platform handled user inputs and showcased AI-powered features within product prototypes.
With v0, I used its visual builder and straightforward API integration to test GPT-driven flows with minimal setup. Lovable made it easy to build the initial MVP and connect screens efficiently, though it was less flexible and constrained by the platform’s available options. Claude became my environment for prompt development and validation. While not a traditional no-code builder, it allowed me to compare LLM performance against ChatGPT and experiment with alternative conversational outputs.
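As an example of the kind of wiring v0’s API integration enabled, here is a minimal TypeScript sketch. It assumes the custom GPT’s instructions are replicated through the standard OpenAI Chat Completions API (GPTs built in the GPT builder aren’t directly callable by API), and the function and file names here are illustrative, not the project’s actual code.

```ts
// Minimal sketch of how a prototype screen could call the model.
// Assumes the custom GPT's instructions are replicated via the
// OpenAI Chat Completions API; the prompt wiring is illustrative.
import OpenAI from "openai";
import BUILD_OR_BUY_SYSTEM_PROMPT from "./systemPrompt";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function getDecisionOptions(userQuestion: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // any chat-capable model works here
    messages: [
      { role: "system", content: BUILD_OR_BUY_SYSTEM_PROMPT },
      { role: "user", content: userQuestion },
    ],
  });
  // Return the model's structured answer for the UI to render as cards.
  return response.choices[0].message.content ?? "";
}
```

In a setup like this, keeping the system prompt in a single shared module makes it easier to keep instructions consistent across tools and prototypes.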
Comparing outputs from each platform resulted in practical, clickable prototypes and directly improved my app’s structure, usability, and flexibility. This approach provided a clear path to validating the user experience and optimizing the overall click path.
What I Learned
Prompt Engineering & UX Must Evolve Together: Crafting prompts and designing flows in tandem was essential. High-quality, context-rich input led to more useful LLM outputs; a quick illustration follows this list.
No-Code Tools Accelerate Validation: Rapid prototyping exposed both platform strengths and technical constraints, helping me scope a realistic MVP.
Custom GPT Configuration Deepened AI Product Understanding: Building and refining a custom GPT required technical problem-solving and tight alignment with user needs.
AI-Integrated Design Is a Mindset Shift: Merging AI capabilities with UX design changed how I approached input, interface, and feedback.
Visual Communication Matters: Using AI for visuals sped up design and helped define a cohesive brand direction.
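As a quick, invented illustration of that first point, compare a bare question with the kind of context-rich input that tended to produce actionable output:

```ts
// Hypothetical example inputs; the product and details are invented.
const bareInput = "Should I buy a standing desk?";

// Context-rich version: budget, skills, and constraints give the model
// enough grounding to compare buy, DIY, and repurpose options.
const contextRichInput = `
Question: Should I buy a standing desk?
Budget: under $200
Skills: comfortable with basic woodworking
Already own: an old dining table and adjustable monitor arms
Constraints: small apartment; needed within two weeks
`;
```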
Next Steps
Building on my initial MVP and platform comparisons, my next priorities include:
Deepening the integration between my custom GPT model and the mobile app by exploring more advanced API features or multi-model workflows within v0 or Lovable.
Conducting targeted user testing to gather insights on onboarding, output clarity, and feature adoption, with iterations based on this feedback.
Exploring additional no-code or low-code solutions that could further streamline the build process or support real-time analytics and user management.
Documenting and sharing my learnings with the design and AI community and evaluating partnership or beta launch opportunities for a production-ready release.