PromptJoy

Public prompt detail page, available for browsing before sign-in.

Engineering
Claude 3.7 Sonnet

Bug reproduction and likely root-cause trace

Turns scattered console output and symptom notes into a debugging plan with ranked hypotheses.

Prompt metadata
Enough context to judge fit before you copy the prompt into a real workflow.

Workflow

Engineering

Model fit

Claude 3.7 Sonnet

Author

PromptJoy Editorial

Author role

Editorial curation

Origin

PromptJoy curated launch library

Published

Apr 8, 2026

Copy, save, and vote
Copying works on this curated prompt now, and the visible copy count updates on this device. Signed-in users can also save or vote here while the shared library fills in.

Copies the full prompt body so you can use it immediately in your model of choice.

Visible copy count on this device: 539

Current vote signal: +165

Sign in to save or vote on this curated prompt. Your save and vote state is stored on this device for your signed-in account while the shared library fills in.

What happens here

Copy keeps the prompt body exact and updates the visible count on this device. Signed-in users can also save and vote on this curated prompt until the shared public library is live.

Live trust breakdown

  • Task clarity: 64 (+18 names the role the model should play)
  • Input contract: 60 (+24 tells the user what inputs to provide)
  • Output contract: 66 (+18 defines what the model should return)
  • Guardrails: 64 (+22 includes explicit constraints)
  • Safety: 100 (+12 does not request secrets or sensitive identifiers)
  • Reusability: 62 (+18 has enough detail without becoming bloated)
  • Community signal: 74 (+24 strong positive vote ratio)

Remix into your draft

  • Trust score: 70
  • Vote signal: +165
  • Saves: 146
  • Copies: 539

Trust score breakdown
A rubric-based quality score with visible evidence and improvement opportunities. No LLM API is used to produce these numbers.
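
The overall trust score of 70 shown on this page is consistent with an unweighted mean of the seven dimension scores listed in the breakdown. A minimal sketch, assuming equal weighting (the page does not publish the actual aggregation formula, so the equal-weight assumption is ours):

```python
# Dimension scores as displayed in the trust breakdown on this page.
dimension_scores = {
    "task_clarity": 64,
    "input_contract": 60,
    "output_contract": 66,
    "guardrails": 64,
    "safety": 100,
    "reusability": 62,
    "community_signal": 74,
}

# Assumption: the overall trust score is the plain mean of the dimensions.
trust_score = round(sum(dimension_scores.values()) / len(dimension_scores))
print(trust_score)  # 70, matching the trust score shown on this page
```

If the real rubric weights some dimensions more heavily, the displayed score would diverge from this mean; here the numbers happen to agree exactly (490 / 7 = 70).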

Task clarity

Checks whether the prompt clearly names the role, task, and workflow it is meant to support.

64

Evidence

  • +18 names the role the model should play
  • +18 states the job the prompt is meant to do
  • +10 has task-oriented tags

Improve

  • Make the title describe the task and context.

Input contract

Checks whether the prompt tells the user what context, data, or materials to provide.

60

Evidence

  • +24 tells the user what inputs to provide
  • +12 uses bullets to separate input fields or context
  • +14 asks for task-specific context, not just raw text

Improve

  • No obvious improvement flagged.
Prompt body
Written by PromptJoy Editorial for engineering work on Claude 3.7 Sonnet, with attribution preserved on the detail page.
Tags: debugging, production, root cause
You are a senior full-stack engineer helping debug a production issue.

I will give you the symptom, expected behavior, recent changes, console errors, and any relevant code or logs.

Return:
1. the most likely root causes ranked from most to least likely
2. the fastest way to prove or disprove each cause
3. the minimum safe patch to try first
4. the regression risks to watch after the fix

Rules:
- do not suggest broad rewrites
- prefer the smallest reversible debugging step first
- call out when the issue is probably config, auth, caching, or data related
Public detail stays open, but save and vote still require sign-in
This page stays public for browsing and copying. Once you sign in, the save and vote controls on this curated prompt work on this device; the public prompt library itself never moves behind auth.

Output contract

Rewards prompts that define the shape, order, and priority of the answer.

66

Evidence

  • +18 defines what the model should return
  • +22 uses a multi-part output format
  • +14 specifies how the answer should be organized

Improve

  • No obvious improvement flagged.

Guardrails

Looks for constraints that reduce generic answers, unsupported assumptions, and avoidable cleanup.

64

Evidence

  • +22 includes explicit constraints
  • +22 guards against unsupported assumptions
  • +12 sets quality constraints for the response

Improve

  • No obvious improvement flagged.

Safety

Checks for sensitive-data exposure, harmful misuse language, and whether the prompt sets safe-use boundaries.

100

Evidence

  • +12 does not request secrets or sensitive identifiers
  • +10 avoids obvious harmful misuse instructions
  • +8 avoids jailbreak or policy-bypass wording
  • +8 contains an explicit privacy or authorized-use boundary

Improve

  • No obvious improvement flagged.

Reusability

Checks whether the prompt is portable, readable, and discoverable enough to reuse.

62

Evidence

  • +18 has enough detail without becoming bloated
  • +12 separates context, output, and rules into readable blocks
  • +10 has enough metadata for discovery and reuse
  • +8 does not depend on private one-off context

Improve

  • No obvious improvement flagged.

Community signal

Weighs copy, save, and vote behavior cautiously, keeping confidence low until enough activity exists.

74

Evidence

  • +24 strong positive vote ratio
  • +18 saved 146 times
  • +18 copied 539 times
  • +14 based on 872 total interactions

Improve

  • No obvious improvement flagged.