Generate a fully functional, multi-page Next.js MVP application for 'ScreenSense'. This application transforms existing laptops into touchscreens using the built-in webcam and minimal hardware ($1 mirror). The core value proposition is to provide an interactive touch experience without the need for expensive new hardware.
PROJECT OVERVIEW:
ScreenSense aims to solve the problem of limited interactivity on standard laptop screens. By leveraging computer vision and a simple hardware setup (a small mirror placed in front of the webcam), the software analyzes finger proximity and movement on the screen surface. It then translates these movements into virtual mouse and touch events, allowing users to interact with their OS and applications as if they had a touchscreen. This provides a cost-effective solution for users seeking enhanced productivity and a more engaging computing experience.
TECH STACK:
- Frontend Framework: Next.js (App Router)
- Styling: Tailwind CSS
- UI Components: shadcn/ui (for accessible, reusable components like buttons, dialogs, input fields, and cards)
- State Management: React Context API / Zustand (for global state)
- Backend/API: Next.js API Routes (or a separate lightweight backend if complexity grows, but for MVP, API routes are sufficient)
- Database: PostgreSQL (via Drizzle ORM)
- ORM: Drizzle ORM (for type-safe database interactions)
- Authentication: NextAuth.js (or Clerk, for easy integration of email/password, OAuth providers)
- Computer Vision Library: OpenCV.js (or a Python backend with OpenCV if JS performance is insufficient for real-time processing - initial implementation will use JS, with a note to consider Python if needed)
- Real-time Communication (Optional for MVP, but good for future): WebSockets (e.g., Socket.IO) if client-side processing is too slow and a server-side solution is considered.
DATABASE SCHEMA (using Drizzle ORM for PostgreSQL):
1. `users` table:
   - `id` (UUID, Primary Key)
   - `name` (Text)
   - `email` (Text, Unique)
   - `emailVerified` (Timestamp)
   - `image` (Text)
   - `createdAt` (Timestamp, Default NOW())
   - `updatedAt` (Timestamp)
2. `settings` table (User-specific settings for ScreenSense):
   - `id` (UUID, Primary Key)
   - `userId` (UUID, Foreign Key to users.id, Unique)
   - `sensitivity` (Integer, e.g., 1-10)
   - `mirrorAngle` (Decimal, e.g., degrees)
   - `calibrationData` (JSONB, storing calibration points/transforms)
   - `preferredGestures` (JSONB, user-defined gestures)
   - `createdAt` (Timestamp, Default NOW())
   - `updatedAt` (Timestamp)
3. `sessions` table (from NextAuth.js)
4. `accounts` table (from NextAuth.js)
5. `verification_tokens` table (from NextAuth.js)
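The `users` and `settings` tables above might be declared with Drizzle roughly as follows. This is a sketch, not the definitive schema: the table and column names mirror the list above, but the snake_case column names, defaults, and `references` wiring are assumptions, and the NextAuth.js tables are omitted.

```typescript
import { pgTable, uuid, text, integer, decimal, jsonb, timestamp } from "drizzle-orm/pg-core";

export const users = pgTable("users", {
  id: uuid("id").primaryKey().defaultRandom(),
  name: text("name"),
  email: text("email").notNull().unique(),
  emailVerified: timestamp("email_verified"),
  image: text("image"),
  createdAt: timestamp("created_at").defaultNow(),
  updatedAt: timestamp("updated_at"),
});

export const settings = pgTable("settings", {
  id: uuid("id").primaryKey().defaultRandom(),
  // One settings row per user (Unique foreign key per the schema above).
  userId: uuid("user_id").notNull().unique().references(() => users.id),
  sensitivity: integer("sensitivity").default(5),
  mirrorAngle: decimal("mirror_angle"),
  calibrationData: jsonb("calibration_data"),
  preferredGestures: jsonb("preferred_gestures").default({}),
  createdAt: timestamp("created_at").defaultNow(),
  updatedAt: timestamp("updated_at"),
});
```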
CORE FEATURES & USER FLOW:
1. **Onboarding & Setup Guide:**
   * User signs up/logs in.
   * Welcome screen explaining the concept.
   * Step-by-step visual guide (using images/short videos generated by the app's UI) on how to position the mirror in front of the webcam.
   * Instructions on downloading the companion CV processing application (or enabling webcam access within the web app if using JS CV).
   * User Flow: Login/Signup -> Welcome -> Hardware Setup Guide -> Software Setup -> Calibration.
2. **Webcam Feed & Calibration:**
   * If using client-side JS CV: The app requests webcam permission.
   * Displays the live webcam feed.
   * Calibration phase: User follows prompts to touch specific screen points (e.g., corners, center).
   * The system captures these points and calculates the transformation matrix to map between screen coordinates and webcam view coordinates.
   * User Flow: Access Calibration Page -> Grant Webcam Permission -> Follow Calibration Prompts -> Save Calibration.
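The transformation step above could be fit as an affine map from webcam coordinates to screen coordinates (the direction needed at runtime). A minimal sketch, under assumptions not fixed by this spec: three of the captured point pairs suffice for an affine fit (a production version would least-squares over all points or fit a full homography), and the function names are illustrative.

```typescript
type Pt = { x: number; y: number };

// Solve a 3x3 linear system A·s = b with Cramer's rule.
function solve3(A: number[][], b: number[]): number[] {
  const det = (m: number[][]) =>
    m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
    m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
    m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
  const d = det(A);
  return [0, 1, 2].map((i) => {
    // Replace column i of A with b, per Cramer's rule.
    const Ai = A.map((row, r) => row.map((v, c) => (c === i ? b[r] : v)));
    return det(Ai) / d;
  });
}

// Fit x = a·u + b·v + c and y = d·u + e·v + f from three point pairs,
// then return a function mapping webcam coordinates to screen coordinates.
export function fitAffine(webcam: Pt[], screen: Pt[]) {
  const A = webcam.slice(0, 3).map((p) => [p.x, p.y, 1]);
  const [a, b, c] = solve3(A, screen.slice(0, 3).map((p) => p.x));
  const [d, e, f] = solve3(A, screen.slice(0, 3).map((p) => p.y));
  return (p: Pt): Pt => ({ x: a * p.x + b * p.y + c, y: d * p.x + e * p.y + f });
}
```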
3. **Touch Input Simulation:**
   * The core CV algorithm runs (either client-side JS or server-side via API calls to a Python backend).
   * Detects finger(s) in the calibrated webcam view.
   * Tracks finger movement.
   * Translates detected movements (hover, tap, drag) into virtual mouse events (e.g., `mousemove`, `mousedown`, `mouseup`, `click`). For tap, it detects a 'down' and subsequent 'up' at roughly the same position.
   * For drag, it tracks the movement while the finger is 'down'.
   * The system aims to simulate standard touch gestures like tap, long-press, swipe, and drag.
   * User Flow: After calibration, the system runs in the background or a dedicated 'Active' mode. User interacts with any application on their laptop.
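The tap-versus-drag rules above can be sketched as a small state machine over per-frame finger detections. This is an illustrative sketch, not the spec's algorithm: the `GestureClassifier` name and the 15px tap radius are assumptions, and `"move"` is assumed to mean "moving while down" (matching the finger-detection states in the sample data below).

```typescript
type FingerState = "hover" | "down" | "move";
type Finger = { x: number; y: number; state: FingerState };
type Gesture =
  | { type: "tap" | "drag-start" | "drag-move" | "drag-end"; x: number; y: number }
  | null;

const TAP_RADIUS = 15; // px of travel a touch may have and still count as a tap (assumed)

export class GestureClassifier {
  private downAt: { x: number; y: number } | null = null;
  private dragging = false;

  // Feed one detection frame; returns the gesture it completes, if any.
  update(f: Finger): Gesture {
    const isDown = f.state === "down" || f.state === "move";
    if (isDown && !this.downAt) {
      this.downAt = { x: f.x, y: f.y }; // finger just touched the surface
      return null;
    }
    if (isDown && this.downAt) {
      const moved = Math.hypot(f.x - this.downAt.x, f.y - this.downAt.y) > TAP_RADIUS;
      if (moved && !this.dragging) {
        this.dragging = true;
        return { type: "drag-start", x: f.x, y: f.y };
      }
      return this.dragging ? { type: "drag-move", x: f.x, y: f.y } : null;
    }
    if (!isDown && this.downAt) {
      // Finger lifted: 'down' then 'up' near the same spot is a tap, otherwise end the drag.
      const wasDrag = this.dragging;
      this.downAt = null;
      this.dragging = false;
      return { type: wasDrag ? "drag-end" : "tap", x: f.x, y: f.y };
    }
    return null;
  }
}
```

The consumer would then map these gestures onto virtual mouse events (`mousedown`/`mouseup`/`click` for tap, `mousemove` while dragging).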
4. **Settings Management:**
   * Users can access a 'Settings' page.
   * Adjust `sensitivity` (how close a finger needs to be to register as a 'touch' or 'hover').
   * Re-run `calibration` if needed.
   * Configure `preferredGestures` (map specific movements to custom actions - advanced feature).
   * User Flow: Navigate to Settings -> Adjust Slider -> Click Save -> Optionally trigger Recalibration.
5. **User Authentication:**
   * Standard email/password signup and login.
   * OAuth options (Google, GitHub).
   * Password reset functionality.
   * User Flow: Access Login/Signup Page -> Enter Credentials/Use OAuth -> Redirect to Dashboard/Onboarding.
API & DATA FETCHING:
- **`/api/auth/*`**: Handled by NextAuth.js for authentication flows.
- **`/api/settings` (GET, PUT):**
  * GET: Fetch current user's settings.
  * PUT: Update user's settings (sensitivity, calibration data, etc.).
  * Request Body (PUT): `{ sensitivity: number, calibrationData: object, ... }`
  * Response Body (GET/PUT): `{ settings: { id, userId, sensitivity, ... } }`
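Per the validation requirement under EDGE CASES, the PUT body could be checked server-side before hitting the database. A sketch with assumed bounds (sensitivity 1-10 from the schema; the 0-90° mirror-angle range and the function name are illustrative):

```typescript
type SettingsPatch = {
  sensitivity?: number;
  mirrorAngle?: number;
  calibrationData?: unknown;
  preferredGestures?: Record<string, string>;
};

// Validate an incoming PUT /api/settings body; returns field-keyed error messages.
export function validateSettingsPatch(body: unknown): { ok: boolean; errors: Record<string, string> } {
  const errors: Record<string, string> = {};
  if (typeof body !== "object" || body === null) {
    return { ok: false, errors: { body: "Expected a JSON object" } };
  }
  const b = body as SettingsPatch;
  if (b.sensitivity !== undefined &&
      !(Number.isInteger(b.sensitivity) && b.sensitivity >= 1 && b.sensitivity <= 10)) {
    errors.sensitivity = "Must be an integer between 1 and 10";
  }
  if (b.mirrorAngle !== undefined &&
      !(typeof b.mirrorAngle === "number" && b.mirrorAngle >= 0 && b.mirrorAngle <= 90)) {
    errors.mirrorAngle = "Must be a number between 0 and 90 degrees";
  }
  return { ok: Object.keys(errors).length === 0, errors };
}
```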
- **`/api/calibration` (POST):**
  * Receives calibration points from the client.
  * Calculates the transformation matrix.
  * Saves the `calibrationData` via the settings update mechanism.
  * Request Body: `{ screenPoints: Array<{x, y}>, webcamPoints: Array<{x, y}> }`
  * Response Body: `{ success: boolean, message: string, calibrationMatrix: object }` (Matrix might be stored in user settings)
- **CV Processing:**
  * **Client-Side (JS):** OpenCV.js runs in the browser and processes video frames directly. Detected coordinates update the UI or directly trigger simulated mouse events.
  * **Server-Side (Python, if needed):** Webcam frames are sent via WebSocket or API to a Python backend, where OpenCV processes them; detected coordinates are returned to the Next.js app via WebSocket/API. This offloads processing but adds complexity and latency.
- Data Fetching: Use Server Components for initial data loads where possible (e.g., user settings on the dashboard). Use client-side fetching (e.g., SWR, React Query, or plain `fetch`) within Client Components for dynamic data or actions.
COMPONENT BREAKDOWN (Next.js App Router Structure):
- **`app/`**
  * **`(auth)/`** (Route Group for Auth Pages)
    * `layout.tsx`: Auth layout (e.g., centered card)
    * `login/page.tsx`: Login form component.
    * `signup/page.tsx`: Signup form component.
    * `reset-password/page.tsx`: Password reset form.
  * **`(app)/`** (Route Group for Authenticated App)
    * `layout.tsx`: Main app layout (includes Navbar, Sidebar if applicable, Footer)
    * `dashboard/page.tsx`: Main dashboard. Shows setup status, quick links to features. Possibly a minimal preview. Uses Server Component for user data.
    * `setup/page.tsx`: Multi-step wizard for hardware setup guide.
    * `calibrate/page.tsx`: Webcam feed display and calibration interface. (Client Component)
    * `settings/page.tsx`: User settings form. (Client Component)
    * `activity/page.tsx`: (Optional MVP+) Visualizer showing touch events or logs.
  * **`layout.tsx`**: Root layout (html, body, global providers).
  * **`page.tsx`**: Landing Page (marketing content, features, CTA).
  * **`api/`**
    * `auth/[...nextauth]/route.ts`: NextAuth.js route handler.
    * `settings/route.ts`: API route for settings CRUD.
    * `calibration/route.ts`: API route for calibration processing.
- **`components/`**
  * **`ui/`** (shadcn/ui components or custom wrappers):
    * `Button.tsx`
    * `Input.tsx`
    * `Card.tsx`
    * `Dialog.tsx`
    * `Progress.tsx` (for setup wizard)
    * `Alert.tsx`
    * `Spinner.tsx`
    * `Tooltip.tsx`
  * **`auth/`**:
    * `LoginForm.tsx`
    * `SignupForm.tsx`
  * **`setup/`**:
    * `HardwareGuide.tsx` (Displays images/steps)
    * `SoftwareConfig.tsx` (Instructions for enabling webcam/downloading companion app)
  * **`calibration/`**:
    * `WebcamFeed.tsx` (Client Component using `navigator.mediaDevices.getUserMedia` and canvas for drawing guides)
    * `CalibrationPoints.tsx` (Renders target points on the screen)
    * `CalibrationProvider.tsx` (Context to manage calibration state)
  * **`settings/`**:
    * `SensitivitySlider.tsx`
    * `CalibrationResetButton.tsx`
  * **`layout/`**:
    * `Navbar.tsx`
    * `Footer.tsx`
    * `Sidebar.tsx` (Optional)
    * `Logo.tsx`
- **`lib/`**: Utility functions, database connection (Drizzle), API clients.
- **`styles/`**: Global CSS (`globals.css` for Tailwind directives).
- **`hooks/`**: Custom React hooks (e.g., `useWebcamFeed`, `useCalibration`).
UI/UX DESIGN & VISUAL IDENTITY:
- **Style:** Minimalist Clean with subtle futuristic elements.
- **Color Palette:**
  * Primary: `#007AFF` (Vibrant Blue)
  * Secondary: `#34C759` (Vibrant Green for success states)
  * Accent/Hover: `#5856D6` (Medium Purple)
  * Background: `#F2F2F7` (Light Gray)
  * Card/Surface: `#FFFFFF` (White)
  * Text (Primary): `#1C1C1E` (Near Black)
  * Text (Secondary): `#8E8E93` (Gray)
  * Alert/Error: `#FF3B30` (Vibrant Red)
- **Typography:** Inter for headings and body (or a similar sans-serif like Poppins), falling back to system fonts (San Francisco on macOS, Roboto on Android/ChromeOS) for a native feel. Configure via Tailwind's font-family settings.
- **Layout:** Use a clean, card-based layout for settings and information display. Full-width sections on the landing page. Centered layouts for auth forms.
- **Responsiveness:** Mobile-first approach. Ensure usability on various screen sizes. Use Tailwind's responsive prefixes (`sm:`, `md:`, `lg:`).
- **Visual Elements:** Subtle gradients on buttons or call-to-action sections. Clean icons (e.g., from `lucide-react`). Smooth transitions between states.
ANIMATIONS:
- **Page Transitions:** Animate route changes with a library like `Framer Motion` for smooth fade/slide effects between pages.
- **Button Hovers:** Slight scale-up or background color change (`transition-all duration-200 ease-in-out`).
- **Loading States:** Use the `Spinner` component from `components/ui` or subtle skeleton loaders while data is being fetched.
- **Calibration Feedback:** Visual feedback (e.g., pulsating circles) on calibration points as the user touches them.
- **Webcam Feed:** Smooth display, possibly a subtle border animation indicating it's active.
EDGE CASES:
- **No Webcam Access:** Gracefully handle cases where the user denies webcam permission. Display informative messages and guide them on how to enable it.
- **Unsupported Browser/Device:** Detect and inform the user if the browser doesn't support necessary WebRTC/Canvas APIs.
- **Empty States:** Design clear empty states for sections like activity logs or when no calibration data is saved yet.
- **Authentication:** Handle expired sessions, incorrect login credentials, and email verification flows.
- **CV Processing Failures:** If the CV algorithm fails to detect fingers consistently, provide feedback and suggest recalibration or checking the hardware setup.
- **Validation:** Implement robust form validation for all user inputs (signup, settings).
- **Performance:** For client-side CV, implement debouncing/throttling for processing to avoid freezing the UI. Consider Web Workers for intensive tasks. If performance is an issue, clearly communicate the need for a server-side component or suggest limitations.
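The throttling advice above can be sketched as a frame gate that caps how many webcam frames reach the CV pipeline per second. A sketch with assumed names; the current time is injected (e.g., `performance.now()`) so the gate stays testable.

```typescript
// Returns a gate that admits at most `maxFps` frames per second to CV processing.
export function makeFrameGate(maxFps: number) {
  const minIntervalMs = 1000 / maxFps;
  let lastProcessed = -Infinity;
  return function shouldProcess(nowMs: number): boolean {
    if (nowMs - lastProcessed >= minIntervalMs) {
      lastProcessed = nowMs; // admit this frame and restart the interval
      return true;
    }
    return false; // drop the frame; the UI thread stays responsive
  };
}
```

Inside a `requestAnimationFrame` loop, the caller would skip OpenCV.js processing whenever the gate returns `false`; the heavy processing itself could additionally be moved into a Web Worker.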
SAMPLE/MOCK DATA:
1. **User Settings (for a user with ID `uuid-123`):**
```json
{
  "id": "set-uuid-456",
  "userId": "uuid-123",
  "sensitivity": 7,
  "mirrorAngle": 45.5,
  "calibrationData": {
    "screenPoints": [{"x": 0, "y": 0}, {"x": 1920, "y": 0}, {"x": 0, "y": 1080}, {"x": 1920, "y": 1080}],
    "webcamPoints": [{"x": 100, "y": 50}, {"x": 500, "y": 60}, {"x": 110, "y": 400}, {"x": 510, "y": 410}],
    "transformMatrix": [[0.1, 0.0, 50.0], [0.0, 0.1, 20.0], [0.0, 0.0, 1.0]]
  },
  "preferredGestures": {"swipeLeft": "ALT+LEFT_ARROW"},
  "createdAt": "2023-10-27T10:00:00Z",
  "updatedAt": "2023-10-27T10:30:00Z"
}
```
2. **User (for login):**
```json
{
  "id": "uuid-123",
  "name": "Alice Smith",
  "email": "alice@example.com",
  "emailVerified": "2023-10-26T09:00:00Z",
  "image": "/images/avatars/alice.png"
}
```
3. **Calibration State (during calibration):**
```json
{
  "step": "touch_top_left",
  "points": [],
  "isCalibrating": true
}
```
4. **Initial Settings (new user):**
```json
{
  "sensitivity": 5,
  "mirrorAngle": 30.0,
  "calibrationData": null,
  "preferredGestures": {}
}
```
5. **Finger Detection Result (example output from CV):**
```json
{
  "fingers": [
    {"id": 1, "x": 350, "y": 200, "state": "hover"}
  ]
}
```
*(State could be 'hover', 'down', 'move')*
6. **API Response (Settings GET):**
```json
{
  "settings": {
    "id": "set-uuid-456",
    "userId": "uuid-123",
    "sensitivity": 7,
    "mirrorAngle": 45.5,
    "calibrationData": { ... },
    "preferredGestures": { ... },
    "createdAt": "2023-10-27T10:00:00Z",
    "updatedAt": "2023-10-27T10:30:00Z"
  }
}
```
7. **API Response (Calibration POST success):**
```json
{
  "success": true,
  "message": "Calibration successful! Your settings have been saved.",
  "calibrationMatrix": [[0.1, 0.0, 50.0], [0.0, 0.1, 20.0], [0.0, 0.0, 1.0]]
}
```
8. **API Response (Calibration POST failure):**
```json
{
  "success": false,
  "message": "Calibration failed. Please ensure all points were detected accurately.",
  "calibrationMatrix": null
}
```
9. **User Info (Navbar/Dashboard):**
```json
{
  "name": "Alice Smith",
  "email": "alice@example.com",
  "image": "/images/avatars/alice.png"
}
```
10. **Error Message Example:**
```json
{
  "error": "Unauthorized",
  "message": "You must be logged in to access this page."
}
```
This prompt is designed to guide an AI model to generate a comprehensive, functional MVP. It covers the technical stack, database design, user flows, API structure, UI/UX considerations, and essential fallback scenarios, ensuring a robust starting point for the ScreenSense application.