Structured Project Documentation
Maintain a Project_milestones.md file referenced in .cursorrules
Helps maintain project scope and track progress systematically
Large-scale project organization
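A minimal sketch of how this could look; the file names mirror the tip, but the exact wording of the rule is illustrative:

```
# .cursorrules (illustrative fragment)
Before starting any task, read Project_milestones.md and identify which
milestone the task belongs to. Do not implement work that falls outside
the current milestone without asking first. After completing a task,
suggest an update to Project_milestones.md marking progress.
```

Keeping the milestones file under version control alongside the code means both human and AI contributors see the same scope boundaries.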
Incremental Development
Work in small increments rather than large feature drops
Maintains better control and reduces complexity of changes
Development workflow
Request Overview Before Code Generation
Always ask the AI to present an overview of its planned implementation before generating any code
Prevents wasted time on incorrectly implemented features and catches AI hallucinations early
When using AI coding assistants for feature implementation
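One way to phrase such a request; the wording is only an example, not a required template:

```
Before writing any code, give me a short overview of your planned
implementation: which files you will touch, what new functions or
classes you will add, and any assumptions you are making. Wait for my
confirmation before generating the code.
```

Reviewing the plan first makes it cheap to correct a misunderstanding, compared with reviewing a full diff.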
Write Tests First
Implement tests before writing the actual code, even when using AI assistance
Ensures code meets requirements and maintains quality control over AI-generated code
When developing with AI coding assistants
Test-First Development Approach
Write 1-2 integration tests before implementing features
Ensures new features don't break existing functionality while maintaining development speed
When working with AI-generated code in large projects
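The two tips above can be sketched as follows. In this hypothetical example, the tests for a `slugify` helper are written first; the implementation (whether hand-written or AI-generated) is then validated against them:

```python
def slugify(title: str) -> str:
    # Implementation written (or generated) after the tests below existed.
    # Lowercases the title and joins words with hyphens.
    return "-".join(title.lower().split())


def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_extra_whitespace():
    assert slugify("  Fast   AI  ") == "fast-ai"


# Run the tests directly; in a real project these would live in a test
# file picked up by pytest or a similar runner.
test_slugify_basic()
test_slugify_extra_whitespace()
```

If the AI later regenerates `slugify`, rerunning these tests immediately shows whether the new version still meets the original requirements.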
Task Size Optimization
Break down tasks into well-defined, manageable pieces
Prevents AI from going off track and maintains efficient development flow
When using Cursor for feature implementation
Use AI Tools as Assistants, Not Replacements
Treat AI coding tools as assistive technologies that enhance productivity rather than complete replacements for human judgment
AI tools are designed to augment human capabilities, not replace them. Understanding this leads to more effective tool usage and better code quality
General AI tool usage in development workflow
Maintain Critical Thinking While Using AI Tools
Always review and validate AI-generated suggestions rather than blindly accepting them
Human oversight ensures code quality and prevents potential errors that AI might introduce
When using Cursor or similar AI coding assistants
Automated Architecture Documentation
Maintain automated documentation of architecture decisions and rules
Prevents architecture drift and ensures consistent understanding across both human and AI contributors
When working with AI tools in large codebases
Model Selection Based on Use Case
Choose AI models based on specific project requirements rather than overall performance scores
Different models excel in different areas: speed, creativity, debugging, and so on
When selecting an AI model for application development

MVP-First Approach
Focus on building core features for MVP before expanding functionality
Enables faster time-to-market and validation of core concepts
Mobile app development lifecycle
Maintain Project Architecture Documentation
Create and maintain a live documentation of project architecture and technical decisions
Ensures consistency in development and helps tools understand project context
Large codebases with complex architecture
Semantic Versioning Implementation
Use three-number versioning system (X.Y.Z) with clear rules for major, minor, and patch versions
Provides clear structure for version management and helps users understand the impact of updates
Version number management and release process
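A small sketch of those rules, assuming the common semantic-versioning convention (major for breaking changes, minor for backwards-compatible features, patch for bug fixes); the function name is illustrative:

```python
def bump(version: str, part: str) -> str:
    """Bump one component of an X.Y.Z semantic version string."""
    major, minor, patch = (int(n) for n in version.split("."))
    if part == "major":   # breaking changes: reset minor and patch
        return f"{major + 1}.0.0"
    if part == "minor":   # backwards-compatible features: reset patch
        return f"{major}.{minor + 1}.0"
    if part == "patch":   # backwards-compatible bug fixes
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown version part: {part!r}")
```

For example, `bump("1.4.2", "minor")` yields `"1.5.0"`, signaling a new feature that does not break existing users.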
Use Debug Statements Strategically
Use debug statements liberally so the AI can see intermediate values while identifying issues
Prevents AI from getting stuck in loops and helps it identify problems more accurately
When troubleshooting code issues with AI assistance
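A minimal sketch of what this looks like in practice; the function is hypothetical, and the point is that the debug lines expose every intermediate value the AI (or a human) needs to localize a wrong result:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)


def apply_discount(price: float, rate: float) -> float:
    # Debug statements at each step: when the final result is wrong,
    # the log shows exactly which intermediate value went bad.
    log.debug("apply_discount called with price=%r rate=%r", price, rate)
    discount = price * rate
    log.debug("computed discount=%r", discount)
    result = price - discount
    log.debug("returning result=%r", result)
    return result
```

Pasting the resulting log output into the conversation gives the AI concrete evidence to reason from instead of guessing.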
Take Regular Breaks During Prompt Engineering
Step away from prompt engineering when facing difficulties and return with a fresh perspective
Mental fatigue can lead to deteriorating prompt quality, creating a negative feedback loop. Fresh perspective often leads to better solutions.
When experiencing perceived degradation in LLM output quality
Systematic Problem Analysis
Consider 5-7 different possible sources of the problem before implementing fixes
Prevents tunnel vision and helps identify non-obvious root causes
During initial debugging phase
Evidence-Based Validation
Add logs to validate assumptions before implementing code fixes
Ensures solutions are based on concrete evidence rather than speculation
Before making code changes
Implement Reasoning-Based Prompts
Structure prompts to encourage the AI to reason about the code rather than just generate it
Leads to sharper and more precise code suggestions, especially for complex customizations
When configuring AI code completion tools
Use Git for AI-Assisted Development
Implement Git version control when working with Cursor AI to maintain control over code changes
Provides a safety net for experimental changes and allows easy recovery from unsuccessful AI modifications
When using AI tools for code generation and modification
Enable Interactive Questioning in Cursor AI
Add a rule requesting Cursor to ask clarifying questions when instructions are unclear
Prevents code hallucination and improves output quality through better understanding of requirements
When using Cursor AI for code generation
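One possible wording for such a rule; treat this as a sketch rather than required syntax:

```
# .cursorrules (illustrative fragment)
If my instructions are ambiguous, incomplete, or conflict with the
existing code, ask me clarifying questions before generating any code.
Do not guess at missing requirements.
```

This trades a small amount of back-and-forth for a large reduction in plausible-looking but wrong output.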