PromptJoy

Public prompt detail for browsing before sign-in.


Customer interview synthesis into sharp product themes

Clusters research notes into recurring pains, evidence-backed insights, and product opportunities.

Prompt metadata
Enough context to judge fit before you copy the prompt into a real workflow.

Workflow

Research

Model fit

Gemini 2.5 Pro

Author

PromptJoy Editorial

Author role

Editorial curation

Origin

PromptJoy curated launch library

Published

Apr 10, 2026

Copy, save, and vote
Copy is live on this curated prompt: the visible copy count updates on this device, and signed-in users can also save or vote here while the shared library fills in.

Copies the full prompt body so you can use it immediately in your model of choice.

Visible copy count on this device: 435

Current vote signal: +142

Sign in to save or vote on this curated prompt. Your saves and votes persist on this device for your signed-in account while the shared library fills in.

What happens here

Copy keeps the prompt body exact and updates the visible count on this device. Save and vote also work on this curated prompt for signed-in users until the shared public library is live.


Remix into your draft

Trust score

60

Vote signal

+142

Saves

119

Copies

435

Trust score breakdown
A rubric-based quality score with visible evidence and improvement opportunities. No LLM API is used to produce these numbers.
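A rubric like this can be computed without any model call: each dimension is a list of weighted checks, matched checks contribute their weight and an evidence line, and unmatched checks surface as improvement suggestions. The sketch below is a hypothetical illustration of that mechanism, not PromptJoy's actual implementation; the check names, weights, and predicates are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Check:
    """One rubric check: a weight awarded when the predicate matches."""
    weight: int
    evidence: str          # shown under "Evidence" when matched
    improvement: str       # shown under "Improve" when not matched
    matches: Callable[[str], bool]

def score_dimension(prompt: str, checks: List[Check]) -> Tuple[int, List[str], List[str]]:
    """Sum the weights of matched checks, capped at 100, and collect notes."""
    evidence, improve = [], []
    total = 0
    for check in checks:
        if check.matches(prompt):
            total += check.weight
            evidence.append(f"+{check.weight} {check.evidence}")
        else:
            improve.append(check.improvement)
    if not improve:
        improve.append("No obvious improvement flagged.")
    return min(total, 100), evidence, improve

# Hypothetical checks, loosely echoing the evidence lines on this page.
task_clarity = [
    Check(18, "names the role the model should play",
          "Name the role the model should play.",
          lambda p: p.lower().startswith("you are")),
    Check(24, "tells the user what inputs to provide",
          "Tell the user what inputs to provide.",
          lambda p: "i will paste" in p.lower()),
]

score, evidence, improve = score_dimension(
    "You are a senior product researcher. I will paste raw notes.", task_clarity)
```

Because every point traces back to a named check, the score stays explainable: the evidence list is exactly the set of checks that fired.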

Task clarity

Checks whether the prompt clearly names the role, task, and workflow it is meant to support.

78

Evidence

  • +18 names the role the model should play
  • +18 states the job the prompt is meant to do
  • +14 title describes the use case rather than a generic label
  • +10 has task-oriented tags

Improve

  • No obvious improvement flagged.

Input contract

Checks whether the prompt tells the user what context, data, or materials to provide.

46

Evidence

  • +24 tells the user what inputs to provide
  • +12 uses bullets to separate input fields or context

Improve

  • Ask for the goal, audience, or situation behind the request.

Prompt body
Written by PromptJoy Editorial for research work on Gemini 2.5 Pro, with attribution preserved on the detail page.
Tags: research, synthesis, interviews
You are a senior product researcher.

I will paste raw customer interview notes. Synthesize them into:
- recurring pains and jobs to be done
- exact customer language worth preserving
- tensions or contradictory signals
- opportunity areas ranked by evidence strength
- follow-up questions for the next interview round

Rules:
- quote customer phrasing when it is memorable
- separate strong evidence from weak inference
- avoid vague summaries like "users want simplicity" unless supported by specifics

Public detail stays open, but save and vote still require sign-in
This page stays public for browsing and copying. Signing in enables the save and vote controls on this curated prompt, stored on this device, without putting the public prompt library behind auth.

Output contract

Rewards prompts that define the shape, order, and priority of the answer.

26

Evidence

  • +14 specifies how the answer should be organized

Improve

  • Add a clear return or output instruction.
  • Define the desired output sections in order.

Guardrails

Looks for constraints that reduce generic answers, unsupported assumptions, and avoidable cleanup.

64

Evidence

  • +22 includes explicit constraints
  • +22 guards against unsupported assumptions
  • +12 sets quality constraints for the response

Improve

  • No obvious improvement flagged.

Safety

Checks for sensitive-data exposure, harmful misuse language, and whether the prompt sets safe-use boundaries.

100

Evidence

  • +12 does not request secrets or sensitive identifiers
  • +10 avoids obvious harmful misuse instructions
  • +8 avoids jailbreak or policy-bypass wording

Improve

  • Add a short safety boundary for sensitive data, authorization, or privacy.

Reusability

Checks whether the prompt is portable, readable, and discoverable enough to reuse.

44

Evidence

  • +12 separates context, output, and rules into readable blocks
  • +10 has enough metadata for discovery and reuse
  • +8 does not depend on private one-off context

Improve

  • Aim for a reusable prompt between roughly 80 and 260 words.

Community signal

Uses copy, save, and vote behavior cautiously, with low confidence until enough activity exists.

69

Evidence

  • +24 strong positive vote ratio
  • +14 saved 119 times
  • +17 copied 435 times
  • +14 based on 714 total interactions

Improve

  • No obvious improvement flagged.
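The cautious use of behavior data described above is often implemented by damping the raw signal until enough interactions have accumulated. The function below is a minimal sketch of that idea under invented weights and thresholds; it is not PromptJoy's actual formula.

```python
def community_signal(upvotes: int, downvotes: int, saves: int, copies: int,
                     min_interactions: int = 50) -> int:
    """Blend vote ratio with save/copy activity, damped at low volume.

    Confidence scales linearly from 0 to 1 as total interactions approach
    min_interactions, so a prompt with three early upvotes cannot outrank
    one with hundreds of mixed interactions. All weights here are
    illustrative assumptions.
    """
    total = upvotes + downvotes + saves + copies
    if total == 0:
        return 0
    vote_ratio = upvotes / max(upvotes + downvotes, 1)   # 0..1
    activity = min(saves + copies, 500) / 500            # 0..1, capped
    raw = 70 * vote_ratio + 30 * activity
    confidence = min(total / min_interactions, 1.0)
    return round(raw * confidence)
```

With this shape, a brand-new prompt with a perfect vote ratio still scores low until real activity exists, which matches the "low confidence until enough activity exists" framing.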