Build or Buy

As AI transforms the way we approach everyday decisions, I set out to prototype a tool powered by a custom GPT model that I built and configured on OpenAI’s platform to understand and respond to product questions.


This self-initiated project explores how to help anyone make smarter, more confident purchasing decisions by surfacing time, effort, and cost-saving alternatives in a single experience.


My goal was to create an app that leverages generative AI to deliver clear, actionable options, while expanding my skills in rapid prototyping and LLM-driven product design.

Challenge & Opportunity

The process of deciding whether to buy something new, attempt a DIY solution, or repurpose an existing item is often scattered and time-consuming. Users bounce between e-commerce sites, product reviews, and DIY tutorials, rarely seeing actionable comparisons in one place.


Most digital tools and LLM chat interfaces are not optimized for nuanced, multi-step product decisions, resulting in fragmented information and missed opportunities for savings or reuse.


There is a clear opportunity for an AI-powered platform to unify the decision-making journey by aggregating product info, DIY guides, pricing, and reviews. It can guide users toward the best choice for their needs, skills, and resources.

Build or Buy: AI decision-making app for products and solutions

Outcome

I developed and documented a working product concept, from custom GPT setup and prompt design to mobile wireframes and clickable prototypes, showing an end-to-end approach to AI-driven product design.


By reimagining the journey, the Build or Buy app reduced the path to an informed decision from 18+ steps down to as few as 7 structured ones. This streamlined experience saves time, reduces cognitive load, and makes creative, cost-saving alternatives far more accessible.

Moving from Strategy to Execution

With goals in place and a shared workspace established, I transitioned into the next phase of design, where early structure and direction began to take shape.


AI tools helped me accelerate research synthesis, visualize potential user flows, and quickly generate layout directions. This allowed me to move faster through exploration and decision-making.


Custom GPT Model Development

I built and configured a custom GPT model using OpenAI’s tools, specifically for “build vs. buy” scenarios.


Through prompt logic and model tuning, I ensured actionable, context-aware results for a wide range of product questions.
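The actual configuration lived in OpenAI’s custom GPT builder, but the instruction logic translates directly to the API. Below is a minimal sketch of how that logic could be expressed with OpenAI’s Node SDK; the system prompt wording, the `gpt-4o` model choice, and the `askBuildOrBuy` helper are illustrative assumptions, not the production setup.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Illustrative system instructions mirroring the "build vs. buy" logic.
const SYSTEM_PROMPT = `You are Build or Buy, a purchasing-decision assistant.
For every product question, return exactly three options:
1. Buy: a ready-made product with a typical price range.
2. DIY: a build-it-yourself route with required skills and materials.
3. Repurpose: a way to adapt something the user may already own.
If budget, skill level, or timeline is unknown, ask one clarifying question first.`;

// Hypothetical helper: sends a user's product question to the model.
async function askBuildOrBuy(question: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o", // assumed stand-in for the custom GPT
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: question },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

// Example: await askBuildOrBuy("I need a standing desk for a small apartment.");
```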


Product Requirements

I drafted a clear Product Requirements Document (PRD) outlining core goals, user problems, and essential features.


The PRD became my reference point for feature development and prototyping. Most importantly, it directly influenced and documented my approach to developing and refining GPT prompts, ensuring that the AI always aligned with real user scenarios and product objectives.
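To make the PRD-to-prompt link concrete, here is a hedged sketch of the kind of output contract a requirements document like this could pin down; every field name below is hypothetical rather than quoted from the actual PRD.

```typescript
// Hypothetical output contract derived from the PRD: each GPT answer must
// parse into this shape before the UI renders it.
type RecommendationPath = "buy" | "diy" | "repurpose";

interface Recommendation {
  path: RecommendationPath;
  summary: string;        // one-sentence description of the option
  estimatedCost: string;  // a range like "$40–$120", never a point value
  effortLevel: "low" | "medium" | "high";
  timeRequired: string;   // e.g. "1–2 hours"
  sources: string[];      // product links or tutorial references
}

// Assumed requirement: every answer covers all three paths, so prompts
// would be written and tested against this invariant.
type BuildOrBuyAnswer = [Recommendation, Recommendation, Recommendation];
```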

User Flow Mapping and Wireframing

I created low-fidelity wireframes to quickly map user flows and visualize all key screens needed for the app. This step made it easy to test layouts, identify gaps in the journey, and clarify each step for users.


Wireframing provided a blueprint for prototyping and kept user needs front and center throughout the design and AI prompt process.

Wireframing

Using AI for Visual Inspiration

Before moving into prototyping, I used AI tools to generate mood boards, color palettes, and UI concepts, establishing the brand, onboarding flows, and visual direction for the app.


AI also helped me experiment with prompt wording and input styles, which guided both the look and logic of the product. This step accelerated ideation and ensured a cohesive experience from branding to prompt structure.
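The specific image tools aren’t named here, but to illustrate the workflow: if OpenAI’s image API were one of them, a mood-board exploration might look like the sketch below, where the model choice, prompt wording, and `generateMoodBoard` helper are all assumptions.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Hypothetical helper: generates one mood-board image for a visual theme.
async function generateMoodBoard(theme: string): Promise<string | undefined> {
  const result = await client.images.generate({
    model: "dall-e-3", // assumed model choice
    prompt: `UI mood board for a mobile shopping-decision app: ${theme}.
Clean layout, friendly color palette, card-based comparison screens.`,
    size: "1024x1024",
  });
  return result.data?.[0]?.url; // hosted image URL to pull into FigJam
}

// Example: await generateMoodBoard("warm, practical, workshop-inspired");
```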

No-Code Prototyping and Prompt Testing

I built and tested interactive flows in Claude, v0, and Lovable, connecting each platform to my custom GPT model or leveraging built-in LLMs for real-time output. This no-code approach allowed me to design and iterate quickly, while also helping me discover assets and components I hadn’t initially considered. As I moved between tools, I observed how each platform handled user inputs and showcased AI-powered features within product prototypes.


With v0, I used its visual builder and straightforward API integration to test GPT-driven flows with minimal setup. Lovable made it easy to build the initial MVP and connect screens efficiently, though it was less flexible and constrained by the platform’s available options. Claude became my environment for prompt development and validation. While not a traditional no-code builder, it allowed me to compare LLM performance against ChatGPT and experiment with alternative conversational outputs.


Comparing outputs from each platform resulted in practical, clickable prototypes and directly improved my app’s structure, usability, and flexibility. This approach provided a clear path to validating the user experience and optimizing the overall click path.
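As one concrete illustration of the v0 integration pattern described above, a generated Next.js prototype can call a thin server route like the sketch below; the file path, model name, and prompt are placeholders, not the project’s actual wiring.

```typescript
// app/api/recommend/route.ts — minimal route handler a v0-generated
// prototype screen could POST to; all names here are illustrative.
import OpenAI from "openai";
import { NextResponse } from "next/server";

const client = new OpenAI();

export async function POST(request: Request) {
  const { question } = await request.json();

  const completion = await client.chat.completions.create({
    model: "gpt-4o", // stand-in for the custom GPT configuration
    messages: [
      { role: "system", content: "Return buy, DIY, and repurpose options." },
      { role: "user", content: question },
    ],
  });

  return NextResponse.json({ answer: completion.choices[0].message.content });
}
```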

Claude

v0

Lovable

What I Learned

  • Prompt Engineering & UX Must Evolve Together: Crafting prompts and designing flows in tandem was essential. High-quality, context-rich input led to more useful LLM outputs, as the sketch after this list illustrates.

  • No-Code Tools Accelerate Validation: Rapid prototyping exposed both platform strengths and technical constraints, helping me scope a realistic MVP.

  • Custom GPT Configuration Deepened AI Product Understanding: Building and refining a GPT model from scratch required technical problem-solving and tight alignment with user needs.

  • AI-Integrated Design Is a Mindset Shift: Merging AI capabilities with UX design changed how I approached input, interface, and feedback.

  • Visual Communication Matters: Using AI for visuals sped up design and helped define a cohesive brand direction.
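
To ground that first lesson, here is the flavor of difference between a bare prompt and a context-rich one; the wording is invented for illustration, not taken from the project’s prompt library.

```typescript
// A bare prompt leaves the model guessing about constraints.
const barePrompt = "Should I buy a bookshelf?";

// A context-rich prompt encodes budget, skills, and constraints up front,
// which in testing produced far more actionable three-path answers.
const contextRichPrompt = `I need storage for about 60 paperbacks in a rented
studio apartment. Budget under $80, no power tools, and I can't drill into
the walls. Compare buying, building, and repurposing options.`;
```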

Next Steps

Building on my initial MVP and platform comparisons, my next priorities include:

  • Deepening the integration between my custom GPT model and the mobile app by exploring more advanced API features or multi-model workflows within v0 or Lovable.

  • Conducting targeted user testing to gather insights on onboarding, output clarity, and feature adoption, with iterations based on this feedback.

  • Exploring additional no-code or low-code solutions that could further streamline the build process or support real-time analytics and user management.

  • Documenting and sharing my learnings with the design and AI community and evaluating partnership or beta launch opportunities for a production-ready release.