Coding prompts
Debugging, code reviews, refactors, and tests. Pick one and open it in Promptea.
Debugging with reproducible steps
I’m hitting this bug:
Context:
- Stack: [Next.js / Node / Python / etc.]
- Environment: [local/prod]
- Expected:
- Actual:
Logs: [paste logs]
Minimal code: [paste snippet]
I need:
1) most likely hypothesis
2) repro steps
3) fix (diff/snippet)
4) edge cases
If missing info, ask 1–3 questions first.
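As a reference for the "Minimal code" and "Expected/Actual" fields, here is a hypothetical repro (the function `sumRange` and its bug are invented for illustration, not from Promptea):

```javascript
// Hypothetical minimal repro: sumRange(1, 3) should return 1 + 2 + 3.
// Expected: 6 — Actual: 3 (the loop stops one short).
function sumRange(start, end) {
  let total = 0;
  for (let i = start; i < end; i++) { // bug: should be i <= end
    total += i;
  }
  return total;
}

console.log(sumRange(1, 3)); // prints 3, expected 6
```

A snippet this small, plus the expected vs. actual values, is usually enough for the model to state a hypothesis and a fix.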
Opens on home with the prompt prefilled.
Open in Promptea
Code review with criteria
Review this code:
[paste code]
Evaluate:
- Correctness
- Security
- Performance
- Readability
- Missing tests
Return:
- prioritized issues
- concrete fixes (snippets)
- test checklist
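As a sketch of what a "prioritized issue + concrete fix" can look like, here is a hypothetical finding (the `topScores` functions are invented for illustration):

```javascript
// Hypothetical snippet a review might flag.
// Issue (correctness): sort() mutates the caller's array in place.
function topScores(scores) {
  return scores.sort((a, b) => b - a).slice(0, 3);
}

// Fix: copy before sorting so the input is left untouched.
function topScoresFixed(scores) {
  return [...scores].sort((a, b) => b - a).slice(0, 3);
}

const data = [1, 9, 5];
topScoresFixed(data);
console.log(data); // [1, 9, 5] — input preserved
```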
Open in Promptea
Feature design (step plan)
I want to implement: [feature]
Constraints: [time, stack, limits]
Users: [who uses it]
Propose:
- simple architecture
- endpoints/modules
- data model
- edge cases
- step plan
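A "data model" answer is often just a typed shape plus its invariants. Here is a hypothetical sketch for a saved-prompts feature (the `examplePrompt` shape and `validatePrompt` helper are invented, not a real Promptea API):

```javascript
// Hypothetical data model for a "saved prompts" feature.
const examplePrompt = {
  id: "p_1",
  title: "Debugging template",
  body: "I'm hitting this bug: ...",
  purpose: "code", // one of: code | writing | analysis
  createdAt: Date.now(),
};

// Invariants the model should hold, as a plain validation function.
function validatePrompt(p) {
  const purposes = ["code", "writing", "analysis"];
  return (
    typeof p.id === "string" &&
    typeof p.title === "string" &&
    typeof p.body === "string" && p.body.length > 0 &&
    purposes.includes(p.purpose)
  );
}

console.log(validatePrompt(examplePrompt)); // true
```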
Open in Promptea
Refactor with clear goals
Refactor this code with goals:
- reduce complexity
- improve naming
- separate responsibilities
- keep behavior (don’t break API)
Code: [paste code]
Return:
- brief explanation
- diff/snippets
- suggested tests
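To make the goals concrete, here is a hypothetical before/after (the `report` example is invented): responsibilities get separated and names improved, while behavior stays identical.

```javascript
// "Before": one function parses, filters, and formats.
function report(csv) {
  const rows = csv.split("\n").map((l) => l.split(","));
  const active = rows.filter((r) => r[1] === "active");
  return active.map((r) => r[0].toUpperCase()).join(", ");
}

// "After": one responsibility per function, clearer names, same behavior.
const parseRows = (csv) => csv.split("\n").map((line) => line.split(","));
const onlyActive = (rows) => rows.filter(([, status]) => status === "active");
const formatNames = (rows) => rows.map(([name]) => name.toUpperCase()).join(", ");

const reportRefactored = (csv) => formatNames(onlyActive(parseRows(csv)));

const csv = "ada,active\nbob,inactive\ncy,active";
console.log(report(csv) === reportRefactored(csv)); // true — behavior preserved
```

The equality check at the end is exactly the kind of "suggested test" the template asks for: it pins the old behavior before the old code is deleted.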
Open in Promptea
Write tests (unit + integration)
I need tests for this module:
Stack: [jest/vitest/etc.]
Code: [paste code]
Provide:
- cases to cover (list)
- unit tests (snippets)
- 1–2 integration tests
- how to run them
Open in Promptea
Performance: diagnose + optimize
I have a performance issue.
Context:
- app: [web/api]
- symptom: [latency/CPU/memory]
- metrics: [p95, RPS, etc.]
Code/logs: [paste]
Give me:
- hypotheses by impact
- how to measure (profiling)
- fixes and tradeoffs
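As an example of a "fix with tradeoffs" answer, here is a hypothetical memoization sketch (the `memoize` helper and `slowSquare` function are invented): it trades memory for repeated-call latency on a hot, pure function.

```javascript
// Cache results of a pure single-argument function.
// Tradeoff: memory grows with the number of distinct inputs.
function memoize(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}

let calls = 0;
const slowSquare = (n) => { calls++; return n * n; };
const fastSquare = memoize(slowSquare);

fastSquare(4);
fastSquare(4); // second call served from cache
console.log(calls); // 1
```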
Open in Promptea
FAQ
How do I use these templates?
Pick an example and click “Open in Promptea”. It pre-fills the prompt, purpose (Code), and a suggested model.
What should I paste (minimum)?
A minimal code snippet, the relevant logs, and expected vs. actual behavior. That improves both the prompt score and the quality of the output.
How do I avoid generic answers?
Request an output format (steps, diff, checklist) and add constraints (stack, versions, limits).
Can I change the model?
Yes. For strongly structured answers, try Claude or GPT; for quick bullets, Grok works well.