You are an AI assistant tasked with generating a Single Page Application (SPA) for a code validation tool that ensures AI-generated code meets predefined acceptance criteria, with a particular focus on performance and correctness beyond mere compilation and passing basic tests. This tool addresses the common problem of LLMs generating 'plausible' but inefficient code.
PROJECT OVERVIEW:
The application, named 'Doğru Kod' (Correct Code), aims to bridge the gap between AI-generated code's apparent functionality and its actual real-world performance and correctness. Developers integrate LLMs into their workflow but face challenges with code that compiles and passes tests but is drastically inefficient (e.g., thousands of times slower than optimized counterparts). Doğru Kod provides a platform for users to define specific acceptance criteria (performance benchmarks, stylistic rules, security checks) for their AI-generated code. The system then automatically analyzes and validates the code against these criteria, providing detailed reports and highlighting critical inefficiencies. The core value proposition is to instill confidence in using LLM-generated code by ensuring it is not only functional but also performant, secure, and maintainable, saving developers significant time in manual debugging and optimization.
TECH STACK:
- Frontend Framework: React (using Vite for a fast development environment)
- Styling: Tailwind CSS for rapid UI development and a consistent design system.
- State Management: Zustand for efficient and straightforward global state management.
- Routing: React Router DOM for handling navigation within the SPA.
- API Interaction: Axios for making HTTP requests to a potential backend (though MVP is client-side focused with local processing simulation or simplified backend interaction).
- Testing Utilities: Vitest for frontend unit/integration tests.
- Icons: Heroicons for a clean and consistent icon set.
- Form Handling: React Hook Form (optional, for more complex forms if needed later).
- Syntax Highlighting (for code display): Prism.js or similar.
CORE FEATURES:
1. **Project Upload/Connection**:
* User Flow: Upon accessing the app, the user is presented with options to either upload a project folder (as a zip) or connect a Git repository (initially simulating this by asking for repo URL and branch, with backend fetching to be implemented later).
* Details: For the MVP, focus on uploading a zip file containing the codebase. The frontend will simulate analyzing this code. A more advanced version would involve Git integration to pull code directly.
* State Management: Store uploaded file details, repository info, and analysis status.
2. **Acceptance Criteria Definition**:
* User Flow: After project upload/connection, the user navigates to a 'Criteria' section. Here, they can define rules. Initially, this includes defining performance benchmarks for specific function types (e.g., 'database lookup', 'API call') and setting basic code quality/style checks.
* Details: Provide input fields for benchmark names (e.g., 'Primary Key Lookup'), expected maximum execution time (ms), and thresholds for complexity metrics. Offer predefined templates for common scenarios.
* UI: A form-based interface with clear labels, input fields, sliders for thresholds, and a 'Save Criteria' button.
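The 'Save Criteria' handler should validate inputs before committing them to the store. A minimal sketch of that validation, assuming illustrative field names (`name`, `type`, `maxTimeMs`) that match the sample criteria later in this document:

```javascript
// Validate a single criterion draft before saving.
// Returns an array of error messages; an empty array means the draft is valid.
function validateCriterionDraft(draft) {
  const errors = [];
  if (!draft.name || draft.name.trim() === "") {
    errors.push("Benchmark name is required.");
  }
  if (draft.type === "performance") {
    const t = Number(draft.maxTimeMs);
    if (!Number.isFinite(t) || t <= 0) {
      errors.push("Expected maximum execution time must be a positive number of milliseconds.");
    }
  }
  return errors;
}
```

`CriteriaForm.js` would call this on submit and only invoke `onSubmitCriteria` when the returned array is empty.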
3. **Code Analysis & Validation**:
* User Flow: Once criteria are defined, the user initiates the analysis. The app simulates running the uploaded code against the defined criteria.
* Details: This is the core simulation for the MVP. The frontend will present mock analysis results based on predefined scenarios (e.g., simulate a slow database lookup). For a real backend, this would involve sending code snippets or the entire project to a backend service for execution and analysis against actual benchmarks.
* Feedback: Display 'Running Analysis...' state with a progress indicator.
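The simulated run in `AnalysisRunner.js` can be driven by a simple stepper that reports progress after each criterion. A dependency-free sketch (names are illustrative; the real component would evaluate each criterion asynchronously and feed the value into `ProgressBar.js`):

```javascript
// Step through the criteria list, reporting 0-100 progress after each one.
// Synchronous here for clarity; a real runner would await each evaluation.
function runSimulatedAnalysis(criteria, onProgress) {
  criteria.forEach((criterion, i) => {
    // ...evaluate `criterion` against mock measurements here...
    onProgress(Math.round(((i + 1) / criteria.length) * 100));
  });
}
```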
4. **Reporting & Results**:
* User Flow: After analysis, the user sees a detailed report page.
* Details: The report should clearly show which criteria were met and which failed. For performance failures, it should display the measured time vs. the accepted time, and potentially a comparison factor (e.g., '20,000x slower than expected'). Highlight potential areas for optimization.
* UI: A dashboard-like view with summary statistics, pass/fail indicators, detailed breakdowns for each criterion, and visualizations (e.g., bar charts comparing actual vs. expected performance).
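The "20,000x slower" comparison factor in the report can be derived directly from the measured and expected times. A minimal sketch of that per-criterion evaluation (function name is illustrative):

```javascript
// Evaluate one performance measurement: pass/fail plus how many times
// slower the measured time is than the accepted maximum.
function evaluatePerformance(measuredTimeMs, expectedMaxTimeMs) {
  const comparisonFactor = measuredTimeMs / expectedMaxTimeMs;
  return {
    status: measuredTimeMs <= expectedMaxTimeMs ? "Pass" : "Fail",
    comparisonFactor: Math.round(comparisonFactor * 10) / 10, // one decimal place
  };
}
```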
UI/UX DESIGN:
- **Layout**: Single page application with a persistent sidebar navigation (e.g., 'My Projects', 'New Analysis', 'Settings') and a main content area. The main area will dynamically display forms, analysis progress, and reports.
- **Color Palette**: Primary: Deep Blue (#1E3A8A), Secondary: Teal (#14B8A6), Accent: Yellow (#FBBF24) for highlights and calls to action, Neutral: Grays (#F3F4F6, #6B7280, #1F2937) for background, text, and borders. Aim for a professional, tech-oriented, and trustworthy feel.
- **Typography**: Use a clean, modern sans-serif font like Inter or Inter Variable. Headings should be bold and well-spaced. Body text should be readable at various sizes.
- **Responsive Design**: Mobile-first approach. Ensure the layout adapts gracefully to different screen sizes. Sidebar might collapse into a hamburger menu on smaller screens. Tables and complex reports should be scrollable or presented differently on mobile.
- **Key Components**: Navigation Sidebar, Project Upload Form, Criteria Definition Form, Analysis Progress Indicator, Results Summary Card, Detailed Report Table, Code Snippet Viewer.
COMPONENT BREAKDOWN:
- `App.js`: Main entry point, sets up routing and global layout.
- `NavigationSidebar.js`: Handles sidebar navigation links. Receives `activeLink` prop.
- `ProjectUploader.js`: Component for uploading project zip files. Uses `useState` for file handling. Callback prop `onUploadSuccess`.
- `CriteriaForm.js`: Form for defining acceptance criteria. Manages form state using `useState` or `useReducer`. Props: `onSubmitCriteria`, `initialData`.
- `AnalysisRunner.js`: Simulates the analysis process. Displays progress. Receives `codebase`, `criteria`. Callback prop `onAnalysisComplete`.
- `ResultsReport.js`: Displays the analysis results. Receives `analysisResults`. Contains `ResultSummary` and `DetailedResultsTable`.
- `ResultSummary.js`: Card showing overall pass/fail status and key metrics.
- `DetailedResultsTable.js`: Table displaying detailed results for each criterion. Props: `results`, `criteria`.
- `CodeViewer.js`: Component to display code snippets with syntax highlighting. Props: `code`, `language`.
- `ProgressBar.js`: Reusable progress bar component.
DATA MODEL:
- **State Structure (Zustand Store)**:
```javascript
{
  project: { name: string | null, files: object | null, repoUrl: string | null },
  criteria: Array<{ id: string, name: string, type: 'performance' | 'style' | 'security', config: object }>, // e.g., { metric: 'db_lookup', maxTime: 10 } for performance
  analysis: { status: 'idle' | 'running' | 'completed' | 'error', results: object | null, error: string | null },
  userSettings: { ... }
}
```
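With Zustand this state shape would be passed to `create((set) => ({ ... }))`. A dependency-free stand-in that mirrors the same shape, useful for reasoning about updates without installing the library:

```javascript
// Minimal stand-in for the Zustand store described above.
// With Zustand, replace this with: create((set) => ({ ...initialState, actions }))
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    // Shallow-merge a partial update, then notify subscribers.
    setState: (partial) => {
      state = { ...state, ...partial };
      listeners.forEach((fn) => fn(state));
    },
    subscribe: (fn) => (listeners.add(fn), () => listeners.delete(fn)),
  };
}

const store = createStore({
  project: { name: null, files: null, repoUrl: null },
  criteria: [],
  analysis: { status: "idle", results: null, error: null },
});
```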
- **Mock Data Format (Analysis Results)**:
```json
{
  "overallStatus": "Fail", // 'Pass' | 'Fail' | 'Partial'
  "summary": {
    "passed": 2,
    "failed": 3,
    "total": 5,
    "performanceScore": 0.4 // 0.0 to 1.0
  },
  "details": [
    {
      "criterionId": "crit_db_lookup_1",
      "name": "Primary Key Lookup",
      "type": "performance",
      "status": "Fail",
      "measuredTimeMs": 1815.43,
      "expectedMaxTimeMs": 0.09,
      "comparisonFactor": 20338.1
    },
    {
      "criterionId": "crit_style_indent_1",
      "name": "Code Indentation",
      "type": "style",
      "status": "Pass",
      "message": "Indentation consistent with project standards."
    }
    // ... more results
  ]
}
```
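The `summary` block can be computed from the `details` array rather than stored separately. A sketch under the assumption that `performanceScore` is the pass ratio and that any failure yields an overall `"Fail"` (the `"Partial"` status would need an additional rule, e.g. only non-performance criteria failing):

```javascript
// Derive overallStatus and summary counts from per-criterion results.
function summarizeResults(details) {
  const passed = details.filter((d) => d.status === "Pass").length;
  const failed = details.length - passed;
  return {
    overallStatus: failed === 0 ? "Pass" : "Fail",
    summary: {
      passed,
      failed,
      total: details.length,
      performanceScore: details.length ? passed / details.length : 0,
    },
  };
}
```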
ANIMATIONS & INTERACTIONS:
- **Page Transitions**: Subtle fade-in/fade-out transitions between different sections using `Framer Motion` or CSS transitions.
- **Button Hovers**: Slight scale-up or background color change on interactive elements.
- **Loading States**: Use `ProgressBar.js` for the analysis simulation. Display skeleton loaders or spinners for data fetching (if backend implemented).
- **Micro-interactions**: Subtle animations on pass/fail indicators (e.g., green checkmark appearing, red cross animating). Input field focus states.
EDGE CASES:
- **Empty State**: When no project is uploaded or no criteria are defined, display informative messages and clear calls to action.
- **Error Handling**: Gracefully handle file upload errors, analysis failures (e.g., code execution errors in simulation/backend), and network issues. Display user-friendly error messages.
- **Validation**: Implement frontend validation for the criteria form (e.g., ensure time values are positive numbers). Handle invalid file uploads.
- **Accessibility (a11y)**: Use semantic HTML, ensure sufficient color contrast, provide ARIA attributes where necessary, ensure keyboard navigability.
SAMPLE DATA:
1. **Project Structure (Simulated Zip)**:
```
my_llm_project/
├── src/
│   ├── database.rs
│   ├── main.rs
│   └── utils.rs
├── Cargo.toml
└── README.md
```
2. **Sample Criteria Definition**:
```json
[
{ "id": "crit_perf_db_lookup", "name": "Database Primary Key Lookup", "type": "performance", "config": { "metric": "primary_key_lookup", "maxTimeMs": 1.0 } },
{ "id": "crit_perf_api_call", "name": "External API Call", "type": "performance", "config": { "metric": "api_call", "maxTimeMs": 500.0 } },
{ "id": "crit_style_indent", "name": "Code Indentation", "type": "style", "config": { "rule": "consistent_4_spaces" } },
{ "id": "crit_security_input", "name": "User Input Sanitization", "type": "security", "config": { "check": "xss_prevention" } }
]
```
3. **Mock Analysis Results (as described in Data Model)**:
* (See Data Model section for a detailed example structure)
4. **Specific Code Snippet Example (for display)**:
```rust
// Simulating a slow DB lookup function
fn get_user_data(user_id: i32) -> Result<User, DbError> {
    // Simulate 1.8 seconds of processing time
    std::thread::sleep(std::time::Duration::from_millis(1800));
    // ... actual database logic ...
    Ok(User { id: user_id, name: "...".to_string() })
}
```
DEPLOYMENT NOTES:
- **Build Tool**: Vite is recommended for its speed. Configure `vite build` for production.
- **Environment Variables**: Use `.env` files for managing API keys (if a backend is introduced) or other configuration settings. Prefix variable names with `VITE_` and access them via `import.meta.env` in React components.
- **Performance Optimizations**: Code splitting using React.lazy and Suspense. Memoization with `React.memo` and `useMemo`/`useCallback`. Optimize image assets. Ensure efficient state updates.
- **Static Hosting**: The SPA can be easily deployed to static hosting services like Netlify, Vercel, or GitHub Pages. Ensure routing is configured correctly for client-side routing (e.g., using a `_redirects` file or framework-specific settings).
- **Error Monitoring**: Integrate a service like Sentry for production error tracking.