AI Prompts for Engineering
How to Get Better Results from These Prompts
Engineering prompts produce dramatically better output when you include the relevant code, schema, or architecture context. Do not describe your code; paste it. Do not summarize your schema; include it. AI that can read the actual implementation catches issues that a description-based prompt would miss.
For code review prompts, specify your team’s conventions: naming style, error handling patterns, test coverage expectations, and performance requirements. Without these, AI applies generic best practices that may not match your codebase standards.
When to Use AI Prompts vs IDE-Integrated Tools
Use prompts for complex, multi-file reasoning: architecture decisions, incident analysis, and documentation that spans multiple components. Use IDE-integrated tools (Cursor, Copilot) for inline code generation, completion, and single-file refactoring. Prompts handle the thinking. IDE tools handle the typing.
10 AI Prompts for Engineering
1. Review Code for Bugs and Performance
2. Generate Unit Test Cases
3. Write a Technical Design Document
4. Draft an Incident Post-Mortem
5. Write API Documentation
6. Create a Pull Request Description
7. Debug a Production Issue
8. Write an Architecture Decision Record
9. Generate Database Schema Documentation
10. Write Release Notes
Review Code for Bugs and Performance
You are a senior software engineer conducting a code review. Analyze the following code for issues.

Language: {LANGUAGE}
Context: {CONTEXT} (e.g., REST API endpoint for user authentication)
Team conventions: {CONVENTIONS}

```
{PASTE_CODE}
```

Check for:
1. Logic errors and off-by-one bugs
2. Security vulnerabilities (injection, auth bypass, data exposure)
3. Performance issues (N+1 queries, unnecessary allocations, missing indexes)
4. Error handling gaps
5. Edge cases not covered

For each issue: state the line number, severity (critical/medium/low), what is wrong, and the fix. Do not comment on style unless it causes a bug.
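To see the kind of issue the prompt's first check (logic errors and off-by-one bugs) should surface, here is a hypothetical snippet you might paste in: a pagination helper with a classic off-by-one slice, shown next to the fix. The function names and data are illustrative, not from any real codebase.

```python
# Hypothetical code to paste into the review prompt: a pagination
# helper with an off-by-one bug a good review should flag.
def paginate_buggy(items, page, page_size):
    start = page * page_size
    # BUG: slice end should be start + page_size; the "- 1" silently
    # drops the last item of every page.
    return items[start:start + page_size - 1]

def paginate_fixed(items, page, page_size):
    start = page * page_size
    return items[start:start + page_size]

items = list(range(10))
print(paginate_buggy(items, 0, 5))  # [0, 1, 2, 3] — one item short
print(paginate_fixed(items, 0, 5))  # [0, 1, 2, 3, 4]
```

A useful review response names the line, rates it critical (silent data loss), and gives exactly this one-character fix.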
Generate Unit Test Cases
You are a QA engineer. Generate comprehensive unit tests for the following function.

Language: {LANGUAGE}
Test framework: {FRAMEWORK} (e.g., Jest, pytest, JUnit)

```
{PASTE_FUNCTION}
```

Generate tests covering:
1. Happy path (expected inputs produce expected outputs)
2. Edge cases (empty inputs, boundary values, max/min)
3. Error cases (invalid inputs, null/undefined, type mismatches)
4. Async behavior (if applicable)

For each test: descriptive name, arrange/act/assert structure, and a comment explaining what it validates. Aim for {COVERAGE_TARGET}% branch coverage.
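As a sketch of the output shape this prompt asks for, here is an assumed `clamp` function with the first three test groups written as plain Python (the function and its behavior are illustrative; a real run would use whatever framework you name in {FRAMEWORK}).

```python
# Hypothetical function under test.
def clamp(value, low, high):
    """Clamp value into [low, high]; raise if the range is inverted."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

def test_happy_path_in_range_value_passes_through():
    # Arrange/Act/Assert: an in-range input is returned unchanged.
    assert clamp(5, 0, 10) == 5

def test_edge_case_boundaries_are_inclusive():
    # Validates that the min and max of the range are legal outputs.
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10

def test_error_case_inverted_range_raises():
    # Validates that a nonsensical range fails loudly, not silently.
    try:
        clamp(5, 10, 0)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass

for test in (test_happy_path_in_range_value_passes_through,
             test_edge_case_boundaries_are_inclusive,
             test_error_case_inverted_range_raises):
    test()
```

Note each test has a descriptive name, one clear assertion target, and a comment stating what it validates — exactly what the prompt demands.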
Write a Technical Design Document
You are a staff engineer. Write a technical design document for the following feature.

Feature: {FEATURE_DESCRIPTION}
Problem statement: {PROBLEM}
Existing architecture: {CURRENT_ARCHITECTURE}
Constraints: {CONSTRAINTS} (e.g., latency requirements, backward compatibility, budget)
Team size: {TEAM_SIZE}
Timeline: {TIMELINE}

Include sections for:
1. Overview and Goals
2. Non-Goals (what this does NOT solve)
3. Proposed Architecture (with component diagram description)
4. Data Model Changes
5. API Changes
6. Migration Plan
7. Rollback Strategy
8. Monitoring and Alerting
9. Open Questions

Be opinionated about the recommended approach. Present alternatives briefly but state which one you recommend and why.
Draft an Incident Post-Mortem
You are an SRE writing a post-mortem. Document the following incident.

Incident: {INCIDENT_TITLE}
Severity: {SEVERITY}
Duration: {DURATION}
Impact: {IMPACT} (e.g., 500 users could not log in for 45 minutes)
Timeline: {PASTE_TIMELINE}
Root cause: {ROOT_CAUSE}
Mitigation actions taken: {ACTIONS}
Detection method: {HOW_DETECTED}

Write the post-mortem with sections for:
1. Summary (3 sentences)
2. Impact (users affected, revenue impact, SLA breach)
3. Timeline (minute-by-minute)
4. Root Cause Analysis (5 Whys)
5. What Went Well
6. What Went Wrong
7. Action Items (owner, deadline, priority for each)

Blameless tone. Focus on systems and processes, not individuals.
Write API Documentation
You are a technical writer. Generate API documentation for the following endpoints.

Base URL: {BASE_URL}
Auth method: {AUTH_METHOD}
Endpoints: {PASTE_ENDPOINT_SPECS}

For each endpoint, document:
- HTTP method and path
- Description (what it does, when to use it)
- Request parameters (path, query, body) with types and required/optional
- Request example (curl and language-specific: {LANGUAGE})
- Response schema with field descriptions
- Response examples (success and error cases)
- Rate limits
- Error codes specific to this endpoint

Format: {FORMAT} (e.g., Markdown, OpenAPI YAML)
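To illustrate the "request example" item, here is what a generated language-specific example might look like for an assumed `GET /v1/users/{id}` endpoint, using only the Python standard library. The base URL, path, and header names are placeholders, not a real API.

```python
# Hypothetical generated request example for GET /v1/users/{id}.
# The request is constructed but not sent, so it runs offline.
import urllib.request

req = urllib.request.Request(
    "https://api.example.com/v1/users/42",
    headers={
        "Authorization": "Bearer <API_KEY>",  # per the doc's auth section
        "Accept": "application/json",
    },
)
print(req.get_method(), req.full_url)
# To actually send it: urllib.request.urlopen(req)
```

Good generated docs pair an example like this with the equivalent curl command and both a success and an error response body.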
Create a Pull Request Description
You are a developer writing a PR description. Generate a thorough PR description from the following diff or change summary.

Branch: {BRANCH_NAME}
Related ticket: {TICKET_ID}
Change summary: {CHANGE_SUMMARY}
Code changes: {PASTE_DIFF_OR_SUMMARY}

Generate a PR description with:
1. Summary (2 to 3 sentences: what changed and why)
2. Changes Made (bulleted list of specific changes)
3. Testing Done (what you tested and how)
4. Screenshots/recordings needed: yes/no
5. Migration steps (if applicable)
6. Rollback plan
7. Reviewer notes (what to pay attention to)

Keep it concise. Reviewers have 50 PRs in their queue.
Debug a Production Issue
You are a senior engineer debugging a production issue. Analyze the following symptoms and suggest causes.

Symptoms: {SYMPTOMS}
Error messages/logs: {PASTE_LOGS}
Environment: {ENVIRONMENT} (e.g., AWS, K8s, Heroku)
Recent changes: {RECENT_DEPLOYMENTS}
Affected services: {SERVICES}
Started: {START_TIME}
Pattern: {PATTERN} (e.g., intermittent, increasing, specific to region)

Provide:
1. Most likely root causes ranked by probability
2. For each cause: what evidence supports it, what evidence would confirm/deny it
3. Diagnostic commands to run
4. Quick mitigation options (buy time while investigating)
5. Questions to ask the team that might narrow it down

Start with the most actionable diagnosis, not the most interesting.
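Before filling in {PATTERN}, it often helps to check it empirically. Here is a minimal sketch of one way to do that: bucket error-log timestamps per minute to see whether failures are intermittent or increasing. The log format shown is an assumption; adjust the slicing to match your own.

```python
# Bucket ISO-8601-timestamped error lines by minute to reveal the
# failure pattern (sample log lines are invented for illustration).
from collections import Counter

logs = [
    "2024-05-01T10:01:12 ERROR timeout upstream",
    "2024-05-01T10:01:45 ERROR timeout upstream",
    "2024-05-01T10:03:02 ERROR timeout upstream",
]

# First 16 chars of "YYYY-MM-DDTHH:MM:SS..." is the minute bucket.
per_minute = Counter(line[:16] for line in logs if " ERROR " in line)
for minute, count in sorted(per_minute.items()):
    print(minute, count)
```

A steadily climbing count points at saturation or a leak; a flat, sparse count points at an intermittent dependency.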
Write an Architecture Decision Record
You are a principal engineer. Write an ADR for the following decision.

Decision: {DECISION} (e.g., Use PostgreSQL instead of MongoDB for the billing service)
Context: {CONTEXT}
Constraints: {CONSTRAINTS}
Options considered: {OPTION_1}, {OPTION_2}, {OPTION_3}

ADR format:
1. Title: ADR-{NUMBER}: {DECISION_TITLE}
2. Status: Proposed
3. Context: Why this decision is needed now
4. Decision: What we decided and why
5. Consequences: Positive, negative, and neutral outcomes
6. Alternatives Considered: For each alternative, why it was rejected
7. Related ADRs: {RELATED_DECISIONS}

Be direct about tradeoffs. Every architectural decision has downsides. Document them.
Generate Database Schema Documentation
You are a data engineer. Document the following database schema.

Database: {DATABASE_TYPE}
Schema: {PASTE_SCHEMA}

For each table, provide:
- Purpose (one sentence)
- Column descriptions with types and constraints
- Relationships (foreign keys, join patterns)
- Common query patterns
- Indexes and their purpose
- Gotchas (nullable columns that cause issues, timestamp timezone handling, etc.)

Also include:
- Entity relationship summary
- Data retention policy notes
- Performance considerations for large tables

Format: Markdown tables.
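If you want {PASTE_SCHEMA} to reflect the live database rather than a stale DDL file, you can pull column metadata directly. A minimal sketch, assuming SQLite (other databases expose the same information via `information_schema`); the `users` table here is invented for illustration:

```python
# Emit a Markdown column table from live metadata via PRAGMA table_info.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users ("
    "id INTEGER NOT NULL PRIMARY KEY, "
    "email TEXT NOT NULL, "
    "name TEXT)"  # nullable on purpose — a documented "gotcha"
)

# PRAGMA table_info rows: (cid, name, type, notnull, default, pk)
rows = conn.execute("PRAGMA table_info(users)").fetchall()
lines = ["| Column | Type | Nullable |", "| --- | --- | --- |"]
for _, name, col_type, notnull, _, _ in rows:
    lines.append(f"| {name} | {col_type} | {'no' if notnull else 'yes'} |")
print("\n".join(lines))
```

Pasting generated-from-metadata tables into the prompt keeps the AI's documentation anchored to the schema as it actually exists.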
Write Release Notes
You are a developer relations writer. Generate release notes from the following changelog.

Version: {VERSION}
Release date: {DATE}
Changelog: {PASTE_CHANGELOG}
Breaking changes: {BREAKING_CHANGES}
Migration required: {MIGRATION_STEPS}

Write release notes with:
1. Headline (the single most impactful change)
2. New Features (user-facing description, not implementation details)
3. Improvements (performance, UX, reliability)
4. Bug Fixes (what users experienced, not the code fix)
5. Breaking Changes (with migration guide)
6. Deprecation Notices
7. Known Issues

Audience: Developers using this product. Technical but not internal jargon.