
Status note (2026-05-08): This plan was authored before the project's legal posture was established. The recruitment + retailer/supplier interview sprint described below is deferred until post-employment per ../constraints-and-separation.md. The risk-ranked assumptions, falsification thresholds, and commit/pivot/kill decision rule remain valid framing. Customer zero is now named — see customer-zero-profile.md. The during-employment validation path runs solely through the GMI design-partner relationship.

Customer Discovery Plan

Status

  • Stage: Problem validation
  • Authored: 2026-05-06
  • Owner: TBD (must be named before kickoff)
  • Provisional. Revise as evidence comes in.

Purpose

Convert the assumptions and hypotheses log into a validation program with falsification criteria, recruitment targets, and a decision rule. Produce evidence sufficient to commit to, pivot, or kill the proposed direction.

What this plan does

  • Ranks assumptions by load-bearing risk.
  • Specifies the cheapest test for each.
  • Defines what would falsify each one.
  • Sets recruitment targets, sample sizes, and a timeline.
  • Defines a commit / pivot / kill decision rule at the end of the sprint.

What this plan does not do

  • It does not validate solution shape, taxonomy depth, ingestion model, API surface, or pricing tiers. Those are post-validation artifacts.
  • It does not produce a roadmap.
  • It does not commit to MVP scope.

Risk-ranked assumptions

Ranked by load-bearing risk: if this is false, how much of the product thesis collapses?

A1. Retailers experience catalog pain that is severe, frequent, and currently funded

  • Severity: Critical. If false, no business.
  • Source: assumption log A1.
  • What "true" looks like: 8 of 12 interviewed retailers can name a person, hours/week, or dollar cost they spend on supplier-data wrangling, and at least 4 have actively tried to fix it (built spreadsheets, hired a junior staffer, bought a tool, paid an agency).
  • What would falsify it: Most retailers describe it as annoying but not costly enough to fund. No prior workarounds. No willingness to dedicate even a discovery hour.
  • Falsification threshold: Fewer than 4 of 12 show active funded behavior (time, money, or workaround already in place).

A2. The pain is consistent enough across retailers that one normalized layer is genuinely shared infrastructure

  • Severity: Critical. If false, this becomes a bespoke services business, not a platform.
  • Source: derived. Not in current assumptions log — should be added.
  • What "true" looks like: Across 12 retailers, the top 5–8 supplier sources overlap heavily (>60% supplier overlap across the segment), and the fields they care most about overlap heavily.
  • What would falsify it: Each retailer has a long-tail supplier mix with little overlap, or each has idiosyncratic field needs that defeat normalization.
  • Falsification threshold: Less than 50% supplier overlap across the target segment, or no consistent top 5 fields.
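The A2 overlap thresholds can be made mechanical at synthesis time. A minimal sketch, assuming one plausible metric (mean pairwise overlap, where each pair's overlap is the share of the smaller supplier list the two retailers have in common — the plan does not pin down the metric, so treat this as one reasonable choice). Retailer and supplier names are hypothetical:

```python
from itertools import combinations

def pairwise_overlap(a: set, b: set) -> float:
    """Share of the smaller supplier list that two retailers have in common."""
    return len(a & b) / min(len(a), len(b))

def segment_overlap(supplier_lists: dict[str, set]) -> float:
    """Mean pairwise overlap across every retailer pair in the segment."""
    pairs = list(combinations(supplier_lists.values(), 2))
    return sum(pairwise_overlap(a, b) for a, b in pairs) / len(pairs)

# Hypothetical interview data: retailer -> suppliers in their catalog.
data = {
    "retailer_a": {"Yamaha", "Roland", "Fender", "Shure"},
    "retailer_b": {"Yamaha", "Roland", "Fender", "Korg"},
    "retailer_c": {"Yamaha", "Fender", "Shure", "Korg"},
}

score = segment_overlap(data)
if score > 0.60:
    status = "supported"       # A2 threshold met
elif score < 0.50:
    status = "falsified"       # below the falsification threshold
else:
    status = "undetermined"    # gray zone between 50% and 60%: gather more data
```

Note the deliberate gray zone: a score between the 50% falsification line and the 60% support line resolves to "undetermined" rather than forcing a call.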

A3. There is a willing customer zero who will pay (or pre-commit) before the product exists

  • Severity: Critical. If false, MVP scoping is theoretical.
  • Source: derived from open-question #1.
  • What "true" looks like: At least 1 named retailer signs a design-partner agreement (paid pilot, LOI with target spend, or prepayment) within the discovery sprint.
  • What would falsify it: Strong verbal interest from many, written commitment from none.
  • Falsification threshold: Zero design-partner agreements after 12 interviews and explicit asks.

A4. Suppliers will participate when retailer demand exists

  • Severity: High. If false, the data side breaks. But mitigable — retailer scraping/manual ingestion can bridge early.
  • Source: assumption log + legacy precedent (MusiPOS/APIC).
  • What "true" looks like: 4 of 6 interviewed suppliers say "yes, if a retailer asks us to send to your platform, we will," and at least 2 will commit in writing.
  • What would falsify it: Suppliers actively block, citing channel control, exclusivity, or lack of resources.
  • Falsification threshold: Fewer than 2 of 6 willing to commit even informally.

A5. The first integration use case is concrete enough to define a minimal API

  • Severity: High. If false, "minimal API" stays vague forever.
  • Source: derived. Open-question #8.
  • What "true" looks like: At least 2 retailers can name the exact downstream system (POS X, ecom platform Y) and the exact catalog operation (initial load, daily sync, new product onboarding) they want first.
  • What would falsify it: Retailers can describe pain but cannot name the integration shape they would actually use first.
  • Falsification threshold: No retailer can articulate a concrete first integration without prompting from us.

Deferred (not validated in this sprint)

  • API-as-premium pricing assumption: too early. Validate after a paid pilot exists.
  • "MVP can deliver value before deep attribute normalization": this is a scoping decision, not a discovery one. Defer to MVP scope memo.
  • Mid/small retailers adopt faster than large: useful segmentation hint, but not load-bearing. Treat as recruitment heuristic, not a tested hypothesis.

Research methods

Primary

  • Retailer interviews (semi-structured, 45–60 min, video). Mom Test discipline: ask about past behavior, not future intent.
  • Artifact review during interview: ask retailer to walk through their last supplier-data update in real time (screen-share if possible). This is the single highest-signal activity.

Secondary

  • Supplier interviews (30–45 min, video).
  • Existing-tool teardown: 1–2 hours per relevant adjacent tool (legacy MusiPOS/APIC behavior, generic PIM tools, retailer ERPs). Used to sharpen "why existing solutions are insufficient."

Explicitly out of scope

  • Surveys. Low signal for this question.
  • Landing-page smoke tests. Premature; we don't have positioning yet.
  • Building a prototype. Premature.

Target retailer profile (recruitment heuristic)

Hypothesised, not validated. Used to seed the recruitment list, not to constrain learning.

  • Australian music retail (instruments, audio, related accessories).
  • 1–10 stores; some ecommerce presence.
  • 20+ active suppliers in their catalog.
  • Owner/operator or small leadership team (decision speed).
  • Has felt pain at least once: a supplier price update, a new supplier onboarding, a POS-to-ecom drift incident.

Disqualifiers (do not put these at the top of the list):

  • Largest retailers with mature in-house data ops.
  • Single-supplier or single-brand retailers (pain is too narrow).
  • Pure dropshippers (different problem shape).

Recruitment plan

  • Target list size: 25–30 retailers identified, 12 interviewed.
  • Channels: industry associations, supplier referrals, owner-operator networks, LinkedIn outreach, warm intros from anyone in your network.
  • Incentive: founder access, influence over taxonomy and roadmap, optional gift card. Avoid cash for interviews — biases responses.
  • Recruitment is usually the bottleneck. Start day one. If after week 2 you have fewer than 6 booked, the plan is failing and recruitment becomes the priority, not the interviews.

Supplier targets

  • 6 suppliers, mixed by size and current data sophistication.
  • At least 2 with prior MusiPOS/APIC participation.
  • At least 2 with no prior central-data participation.

Sample sizes and timeline

Stream                         Target   Notes
Retailer interviews            12       Saturation usually around 8–12 in a single segment
Supplier interviews            6        Secondary stream, lighter cadence
Design-partner conversations   3–5      Subset of retailers showing strongest signal

Sprint length: 6 weeks.

Week   Activity
1      Recruitment, finalise discussion guides, first 2 retailer interviews
2      4 retailer interviews, 1 supplier interview, weekly synthesis
3      4 retailer interviews, 2 supplier interviews, midpoint review
4      2 retailer interviews, 3 supplier interviews, design-partner conversations begin
5      Follow-ups, design-partner offers, gap-fill interviews
6      Synthesis, decision memo, commit/pivot/kill

Cap: do not extend past 8 weeks. If the answer isn't visible by then, the answer is "not yet."


Retailer discussion guide (semi-structured)

Open with rapport, not the product. Never describe the product unless explicitly asked at the end.

1. Their world

  • Walk me through your business: store count, supplier count, channels.
  • Who handles your product data day to day?

2. Concrete recent event

  • The last time you onboarded a new supplier or did a major catalog update — walk me through what actually happened.
  • (Screen-share request) Can you show me the last spreadsheet, email, or file you got from a supplier?
  • How long did it take? Who did it? What broke?

3. Pain frequency and cost

  • How often does this happen?
  • What does it cost you in time or money in a typical month?
  • What's the worst version of this you've had in the last year?

4. Current alternatives

  • What have you tried to fix this? Spreadsheets, staff, agencies, tools?
  • What did you pay for?
  • Why did it stop working, or why is it still in place?

5. Workarounds and adjacent spend

  • Are there things you'd love to do that you currently can't, because of data quality?
  • Have you ever paid for a tool that touched this problem?

6. Integrations

  • What systems would the data need to land in for it to be useful — POS, ecom, ERP, accounting?
  • Which one matters first?

7. Closing

  • If a service existed that gave you clean, normalized supplier data already mapped to your categories, who at your company would care most, and how would you decide whether to use it?
  • Can we come back to you in a few weeks with a concrete proposal?

Do-not-ask

  • "Would you pay for X?" — meaningless.
  • "Would this be valuable?" — meaningless.
  • Leading questions about features.

Supplier discussion guide (semi-structured)

1. Their world

  • How do you currently distribute product data to retailers?
  • How many retailers do you serve in AU? Which channels?

2. Concrete recent event

  • Last time a retailer asked for a catalog update — walk me through what happened.
  • What format did you send? Why that format?

3. Pain and cost

  • How much time/effort goes into retailer-facing data?
  • Where does it break?

4. Centralization willingness

  • If a retailer asked you to send data once to a shared platform that retailers pulled from, would you do it? Why or why not?
  • (For those with MusiPOS/APIC history) What worked, what didn't, what would you not repeat?

5. Constraints

  • Channel control concerns?
  • Pricing visibility concerns?
  • Resourcing constraints on your end?

6. Closing

  • If a retailer you already supply asked you to participate, would you?
  • Could we come back with a specific ask in a few weeks?

Synthesis cadence

  • Weekly 30-min synthesis: review tagged interview notes, update assumption status (supported / weakened / falsified / undetermined), update open-questions log.
  • Midpoint (end of week 3) review: are we on track to hit thresholds? If not, recruit harder or narrow segment.
  • Final synthesis: decision memo using templates/decision-memo-template.md.

Note-taking standard

  • Verbatim quotes preserved.
  • Behavior > opinion. Tag each insight as behavior (something they did) or opinion (something they said they would do). Behavior outweighs opinion in synthesis.
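The behavior-over-opinion rule can be enforced directly in the synthesis step. A minimal sketch, with hypothetical tagged insights; the key property is that opinion-only evidence can never mark an assumption supported:

```python
# Hypothetical tagged insights: (assumption_id, kind, note).
insights = [
    ("A1", "behavior", "Hired a junior staffer to clean supplier spreadsheets"),
    ("A1", "opinion",  "Says they would pay for a fix"),
    ("A4", "opinion",  "Supplier says they'd participate if a retailer asked"),
]

def status(assumption_id: str, insights: list) -> str:
    """Behavior outweighs opinion: opinion alone leaves an assumption undetermined."""
    kinds = {kind for aid, kind, _ in insights if aid == assumption_id}
    if "behavior" in kinds:
        return "supported"
    if "opinion" in kinds:
        return "undetermined"   # something they said, not something they did
    return "no evidence"
```

This is a floor, not the whole synthesis: a falsifying behavior should still be able to override a supporting one, which the sketch omits.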

Decision rule (commit / pivot / kill)

Evaluate at end of week 6.

Commit (proceed to MVP scope memo)

All of:

  • A1, A2, and A3 meet their thresholds.
  • At least one named design partner with written commitment.
  • At least one concrete first-integration use case named by 2+ retailers.

Pivot (re-scope, not re-validate from scratch)

Any of:

  • A1 holds but A2 fails → reconsider as bespoke services or per-retailer onboarding, not platform.
  • A1 and A2 hold but A3 fails → segment was wrong; redo recruitment for a different retailer profile, max 4 weeks.
  • A4 fails → consider retailer-side ingestion path; suppliers become a later concern.

Kill

  • A1 fails. The pain is not funded. Without funded pain, none of this matters.
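The decision rule above is simple enough to state as a predicate, which also exposes one ambiguity in the plan: A4 failure appears under pivot but not under commit. A minimal sketch with hypothetical field names; resolving the ambiguity by letting an A4 failure block commit is a judgment call, not something the plan states:

```python
from dataclasses import dataclass

@dataclass
class SprintEvidence:
    a1_funded_pain: bool            # A1: >= 4/12 retailers show funded behavior
    a2_shared_infra: bool           # A2: supplier/field overlap thresholds hold
    a3_design_partner: bool         # A3: written commitment from >= 1 retailer
    a4_supplier_willing: bool       # A4: >= 2/6 suppliers willing to commit
    integration_named_by_two: bool  # concrete first integration named by 2+ retailers

def decide(e: SprintEvidence) -> str:
    # Kill: A1 fails. The pain is not funded; nothing else matters.
    if not e.a1_funded_pain:
        return "kill"
    # Commit: A1-A3 hold, a design partner is signed, and a first
    # integration is named. A4 failure is a pivot trigger in the plan,
    # so it blocks commit here (assumed resolution of the ambiguity).
    if (e.a2_shared_infra and e.a3_design_partner
            and e.a4_supplier_willing and e.integration_named_by_two):
        return "commit"
    # Pivot: A1 holds but another load-bearing assumption failed.
    return "pivot"
```

Usage: evaluate once at the end of week 6 with the threshold results from the final synthesis; the assumption write-ups above remain the source of truth for each boolean.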

Outputs of this sprint

  1. Updated assumptions-and-hypotheses.md with status per assumption.
  2. New retailer-workflow-and-pain-map.md (current-state document).
  3. New customer-zero-target-profile.md (refined ICP based on evidence).
  4. Decision memo: commit / pivot / kill.
  5. Optional: design-partner agreement(s).

Open questions before kickoff

These must be resolved before week 1 begins:

  • Who owns this sprint end-to-end?
  • Who conducts the interviews? (Founder presence in B2B discovery is high-leverage; do not delegate to a researcher who can't make commitments.)
  • What is the recruitment-channel plan, concretely? Names, intros, lists.
  • Is there budget for a part-time recruiter or coordinator if recruitment lags?
  • What is the minimum commitment we are willing to offer a design partner, and what do we expect in return?


Suggested next step

Resolve the five open questions above this week. Begin recruitment in parallel — recruitment lead time is typically 1–2 weeks and is the rate-limiting step.