
TL;DR
AI becomes truly valuable when it moves from experimentation to structured workflow. By using GPTs intentionally—across research, ideation, and documentation—teams can turn scattered “AI jams” into repeatable, scalable product processes that enhance (not replace) human thinking.
~~~
Earlier this month I attended the OpenAI Academy Small Business AI Jam in Bellevue. The event brought together small business owners exploring practical ways AI might fit into everyday workflows.
What I appreciated most was that the discussion stayed grounded in real work. For small teams, the question is rarely “How can we use AI?” It is usually “Where could this actually make something we already do easier?”

Following along…
One idea from the workshop stuck with me, and we decided to test it right away in our own workflow.
Instead of treating prompts as open-ended questions, the session framed GPTs as small systems designed to do one specific job.
The framework was simple:
• What job are you trying to solve?
• What inputs and context should the AI use?
• What should the output look like?
• Do you have examples of good outputs?
Those four questions turn a vague prompt into something much more structured.
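As an illustration (not part of the workshop materials), the answers to those four questions can be written down as a small structured spec before any prompt exists. The names below are hypothetical; the point is simply that each question gets an explicit answer:

```python
from dataclasses import dataclass, field

@dataclass
class GPTJobSpec:
    """Hypothetical container for the four framing questions."""
    job: str                       # What job are you trying to solve?
    inputs_and_context: list[str]  # What inputs and context should the AI use?
    output_format: str             # What should the output look like?
    good_examples: list[str] = field(default_factory=list)  # Examples of good outputs

    def to_prompt(self) -> str:
        """Render the spec as a structured instruction block."""
        parts = [
            f"Job: {self.job}",
            "Inputs and context:",
            *[f"- {item}" for item in self.inputs_and_context],
            f"Output format: {self.output_format}",
        ]
        if self.good_examples:
            parts.append("Examples of good outputs:")
            parts.extend(f"- {ex}" for ex in self.good_examples)
        return "\n".join(parts)
```

Writing the answers down first, in any format, is what matters; the code just makes the structure explicit.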
Applying the idea in our studio
After the event, our Head of Engineering, Michael English, experimented with applying that framework to one of our internal workflows.
In product development, there is often a gap between an approved estimate and the engineering tasks that follow. Estimates describe features at a high level. Engineering teams need structured tickets with clear acceptance criteria.
Turning one into the other takes time.
The GPT Michael built helps generate a first draft of development tickets from an approved estimate. Our team then reviews and refines those tickets before they move into our project management system.
The goal is not to automate product management. It is to make the transition from planning to execution a little smoother.
Structuring the GPT
The system itself is relatively simple and organized into two parts.
Instructions
The instructions define the role of the GPT and the job it is responsible for performing. In this case the system acts like a product manager translating estimates into engineering tasks. The instructions also specify the format the resulting tickets should follow.
Context
The context includes supporting information that the GPT should reference when producing those tickets.
This includes:
• guidelines for writing user stories
• examples of acceptance criteria
• ticket naming conventions
• product requirements and technical constraints
Separating instructions from context helps the system produce more consistent results.
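A minimal sketch of that separation, assuming a chat-style API where the instructions become the system message, the context documents are appended as reference material, and the estimate arrives as user input. The prompt text and function name here are illustrative, not the actual configuration described above:

```python
def build_messages(instructions: str, context_docs: list[str], estimate: str) -> list[dict]:
    """Assemble chat messages with instructions and context kept separate:
    instructions define the role and output format; context supplies the
    guidelines, examples, and conventions the GPT should reference."""
    context_block = "\n\n".join(
        f"--- Reference {i + 1} ---\n{doc}" for i, doc in enumerate(context_docs)
    )
    return [
        {"role": "system", "content": instructions},
        {"role": "system", "content": f"Reference material:\n{context_block}"},
        {"role": "user", "content": f"Approved estimate:\n{estimate}"},
    ]

# Illustrative inputs; real instructions and context would be more detailed.
messages = build_messages(
    instructions=(
        "You act as a product manager. Translate the approved estimate into "
        "engineering tickets with titles, user stories, and acceptance criteria."
    ),
    context_docs=[
        "User stories follow: As a <role>, I want <goal>, so that <benefit>.",
        "Tickets are named <AREA>: <short description>.",
    ],
    estimate="Add passwordless login via email magic links.",
)
```

Because the context lives in its own message, guidelines and examples can be updated without touching the role definition, which is one reason the separation tends to produce more consistent results.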
Iteration is part of the process
The first version produced a ticket for almost every individual task described in the estimate.
Technically correct, but not how our team prefers to organize work.
We refined the instructions so the GPT groups related work into more meaningful development tasks. After a few iterations the output began to resemble what a product manager on our team would typically create.
Even now, the output is always reviewed by our team before it becomes part of the development plan.
A small workflow improvement
For small product teams, a surprising amount of time goes into structuring work rather than building it.
Tools like this can help provide a starting point, but they work best when paired with human judgment.
In our case the GPT helps organize information quickly, while our product and engineering team still validates the work, adjusts scope, and ensures the tickets reflect the right priorities.
The bigger takeaway
The most useful idea from the AI Jam was not about a specific tool.
It was about designing workflows thoughtfully.
When AI is treated as part of a system rather than a replacement for expertise, it can help small teams move faster without losing the human insight that good products require.
