The problem
A project management SaaS took 25-35 seconds to load the main dashboard. Users watched spinning loaders while the database executed 3,847 queries per page load. The PostgreSQL server CPU hit 100% with just 50 concurrent users. Customer complaints flooded in: "Is the site broken?", "This is unusable", "We're switching to competitors". The company was bleeding 200+ customers per week.
How AI created this issue
The developer had asked Claude to create a project dashboard showing tasks, comments, and team members. Claude generated classic N+1 query problems:
# Claude's implementation - N+1 query nightmare
def get_dashboard_data(user_id):
    # Get user's projects
    projects = Project.objects.filter(
        members__user_id=user_id
    ).order_by('-updated_at')[:20]

    dashboard_data = []
    for project in projects:
        # N+1: Separate query for each project's tasks
        tasks = Task.objects.filter(project_id=project.id)

        task_list = []
        for task in tasks:
            # N+1: Query for each task's assignee
            assignee = User.objects.get(id=task.assignee_id)

            # N+1: Query for each task's comments
            comments = Comment.objects.filter(task_id=task.id)

            comment_list = []
            for comment in comments:
                # N+1: Query for each comment's author
                author = User.objects.get(id=comment.author_id)
                comment_list.append({
                    'text': comment.text,
                    'author': author.name
                })

            task_list.append({
                'title': task.title,
                'assignee': assignee.name,
                'comments': comment_list
            })

        # More N+1s for project members
        members = []
        member_links = ProjectMember.objects.filter(project_id=project.id)
        for link in member_links:
            member = User.objects.get(id=link.user_id)
            members.append(member.name)

        dashboard_data.append({
            'project': project.name,
            'tasks': task_list,
            'members': members
        })

    return dashboard_data
Claude's code worked fine with 5 test projects. But with real data (20 projects × 50 tasks × 10 comments per task), it generated thousands of queries. The AI never mentioned query optimization, eager loading, or database indexes: it produced functionally correct code that destroyed performance.
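The multiplicative blow-up is easy to count by hand. A minimal sketch of the arithmetic (the member count per project is an assumed illustrative figure; the exact total depends on the data shape):

```python
# Query count for the N+1 version: one query for the project list, then
# per project: 1 tasks query; per task: 1 assignee lookup + 1 comments
# query + 1 author lookup per comment; then 1 member-links query plus
# 1 user lookup per member.
def n_plus_one_queries(projects, tasks_per_project,
                       comments_per_task, members_per_project):
    per_task = 2 + comments_per_task              # assignee + comments + authors
    per_project = (1                              # tasks query
                   + tasks_per_project * per_task
                   + 1                            # member-links query
                   + members_per_project)         # one user query per member
    return 1 + projects * per_project             # 1 = initial projects query

# The article's "real data" shape, with an assumed 10 members per project:
total = n_plus_one_queries(20, 50, 10, 10)
print(total)  # 12241 - well into the thousands, as the article describes
```

Even with tighter limits the count scales with projects × tasks × comments, which is why the dashboard fell over long before any single query became slow.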
The solution
- Eager loading with select_related and prefetch_related:
# Optimized version - 3 queries instead of 3,847
from django.db.models import Prefetch

def get_dashboard_data(user_id):
    # Single query with all related data
    # Note: slicing a Prefetch queryset requires Django 4.2+
    projects = Project.objects.filter(
        members__user_id=user_id
    ).select_related(
        'owner'
    ).prefetch_related(
        # Prefetch tasks with their assignees
        Prefetch(
            'tasks',
            queryset=Task.objects.select_related('assignee')
            .prefetch_related(
                # Prefetch comments with authors
                Prefetch(
                    'comments',
                    queryset=Comment.objects.select_related('author')
                    .order_by('-created_at')[:5]  # Limit comments
                )
            ).order_by('-priority', '-created_at')[:10]  # Limit tasks per project
        ),
        # Prefetch project members
        Prefetch(
            'members',
            queryset=ProjectMember.objects.select_related('user')
        )
    ).order_by('-updated_at')[:20]

    # Now building the response issues no additional queries
    dashboard_data = []
    for project in projects:
        task_list = []
        for task in project.tasks.all():
            comment_list = [
                {
                    'text': comment.text,
                    'author': comment.author.name,
                    'created': comment.created_at
                }
                for comment in task.comments.all()
            ]
            task_list.append({
                'id': task.id,
                'title': task.title,
                'assignee': task.assignee.name if task.assignee else None,
                'priority': task.priority,
                'comments_count': len(comment_list),
                'recent_comments': comment_list
            })
        dashboard_data.append({
            'id': project.id,
            'name': project.name,
            'owner': project.owner.name,
            'task_count': len(task_list),
            'tasks': task_list,
            'members': [m.user.name for m in project.members.all()]
        })
    return dashboard_data
- Database indexing strategy: Added composite indexes on frequently queried columns:
-- Critical indexes for dashboard queries
CREATE INDEX idx_project_members_user_project
    ON project_members(user_id, project_id);

CREATE INDEX idx_tasks_project_priority_created
    ON tasks(project_id, priority DESC, created_at DESC);

CREATE INDEX idx_comments_task_created
    ON comments(task_id, created_at DESC);

-- Covering index for member lookups
CREATE INDEX idx_users_covering
    ON users(id) INCLUDE (name, email, avatar_url);
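Whether the planner actually uses a composite index is worth verifying rather than assuming. A small sketch of that check using SQLite's EXPLAIN QUERY PLAN (PostgreSQL would use EXPLAIN ANALYZE instead, and SQLite has no INCLUDE clause, so only the tasks index is demonstrated):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tasks (
    id INTEGER PRIMARY KEY, project_id INTEGER,
    priority INTEGER, created_at TEXT)""")
conn.execute("""CREATE INDEX idx_tasks_project_priority_created
    ON tasks(project_id, priority DESC, created_at DESC)""")

# The planner should satisfy both the WHERE filter and the ORDER BY
# from the composite index, avoiding a full scan and a sort step.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tasks "
    "WHERE project_id = ? ORDER BY priority DESC, created_at DESC",
    (1,),
).fetchall()
print(plan[0])  # detail column should name the index in a SEARCH step
```

The same habit applies to the PostgreSQL indexes above: run the dashboard query through EXPLAIN after creating each index and confirm an index scan replaced the sequential scan.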
- Query result caching: Implemented Redis caching for dashboard data
- Pagination and limits: Limited tasks and comments per project
- Database connection pooling: Optimized connection reuse
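The caching bullet above follows the standard cache-aside pattern: try the cache first, fall back to the database, and write the result back with a TTL. A hypothetical sketch of that flow (a tiny in-process stand-in replaces the real `redis.Redis` client so the example is self-contained; the key scheme and TTL are illustrative):

```python
import json
import time

class TTLCache:
    """Tiny in-process stand-in for Redis GET/SETEX (illustration only)."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        value, expires = self._store.get(key, (None, 0.0))
        return value if time.monotonic() < expires else None
    def setex(self, key, ttl, value):
        self._store[key] = (value, time.monotonic() + ttl)

cache = TTLCache()
db_calls = {"count": 0}

def load_dashboard_from_db(user_id):
    # Stands in for the optimized dashboard query
    db_calls["count"] += 1
    return {"user": user_id, "projects": []}

def get_dashboard(user_id, ttl=60):
    key = f"dashboard:{user_id}"             # illustrative key scheme
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: no database work
    data = load_dashboard_from_db(user_id)
    cache.setex(key, ttl, json.dumps(data))  # write back with expiry
    return data

get_dashboard(42)
get_dashboard(42)  # second call is served from cache
```

The TTL matters: too long and users see stale task lists, too short and the database takes the load again. Invalidating the key on task or comment writes is the usual complement to the expiry.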
The results
- Page load time: 30s → 2.8s (91% improvement)
- Database queries: 3,847 → 3 per page load
- Concurrent users supported: 50 → 2,000 on same hardware
- Database CPU usage: 100% → 15% under load
- Customer churn reversed: Gained 150 customers/week
- Infrastructure costs reduced 75% by removing unnecessary servers
The team learned that AI-generated code often contains hidden performance bombs. Claude produced working code but ignored the multiplicative effect of nested loops with database queries. They now profile every feature and treat query count as a critical metric, not an afterthought.
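Treating query count as a metric can be enforced in tests rather than left to code review. Django ships assertNumQueries for exactly this; the same idea can be sketched framework-free with sqlite3 (the names here are illustrative, not from the article's codebase):

```python
import sqlite3
from contextlib import contextmanager

class CountingConnection(sqlite3.Connection):
    """Connection subclass that counts every execute() call."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.query_count = 0
    def execute(self, sql, *args, **kwargs):
        self.query_count += 1
        return super().execute(sql, *args, **kwargs)

@contextmanager
def query_budget(conn, limit):
    # Fail loudly when a code path exceeds its query budget,
    # the way Django's assertNumQueries does in tests.
    start = conn.query_count
    yield
    used = conn.query_count - start
    assert used <= limit, f"ran {used} queries, budget was {limit}"

conn = sqlite3.connect(":memory:", factory=CountingConnection)
conn.execute("CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO projects (name) VALUES ('demo')")

with query_budget(conn, limit=3):
    rows = conn.execute("SELECT id, name FROM projects").fetchall()
```

A budget like this in the dashboard's test suite would have failed the moment the N+1 version ran against more than a handful of fixtures, surfacing the problem before production did.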