# CSV Import Engine

## Overview
The CSV Import Engine is a specialized utility within the Teirac Project Management platform designed to facilitate bulk data entry across various project modules. It bridges the gap between legacy spreadsheets and the Supabase-backed relational database, ensuring that project data—ranging from Task Lists to RACI Matrices—can be synchronized efficiently.
The engine utilizes a dynamic mapping system to route parsed CSV data into the appropriate database tables while maintaining referential integrity through project_id and user_id associations.
## Dynamic Table Mapping
The engine operates on a predefined TABLE_MAP configuration. This map defines the relationship between the user-facing module name, the target Supabase table, and the required column schema.
### Supported Modules and Schemas
| Module Name | Target Table | Required CSV Headers |
| :--- | :--- | :--- |
| Task List | project_tasks | wbs_code, task, owner, start_date, end_date, status |
| Budget | project_budget_items | category, budget_amount, spent_amount, note |
| Risk Tracking | project_risks | risk_level, description |
| Action Plan | project_action_plans | action, owner, due_date, priority, status |
| RACI Matrix | project_raci | activity, responsible, accountable, consulted, informed |
| Issues | project_issues | issue_id_label, description, priority, owner, raised_date, status |
Note: For a full list of supported modules, refer to the TABLE_MAP constant in src/components/ImportModal.tsx.
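The shape of that configuration can be sketched as follows. This is an abridged, illustrative reconstruction from the table above — the field names (`table`, `columns`) are assumptions, and the authoritative definition lives in `src/components/ImportModal.tsx`:

```typescript
// Illustrative sketch of the TABLE_MAP configuration (abridged).
// Field names here are assumptions; see src/components/ImportModal.tsx
// for the authoritative definition.
interface TableMapping {
  table: string;      // target Supabase table
  columns: string[];  // required CSV headers
}

const TABLE_MAP: Record<string, TableMapping> = {
  'Task List': {
    table: 'project_tasks',
    columns: ['wbs_code', 'task', 'owner', 'start_date', 'end_date', 'status'],
  },
  'Budget': {
    table: 'project_budget_items',
    columns: ['category', 'budget_amount', 'spent_amount', 'note'],
  },
  'Risk Tracking': {
    table: 'project_risks',
    columns: ['risk_level', 'description'],
  },
};
```

Keeping the module name, target table, and schema in one structure means adding a new importable module is a single-entry change rather than a code change in the parser.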
## Data Processing Logic
The import process follows a three-stage pipeline: Parsing, Normalization, and Ingestion.
### 1. Parsing
The engine uses a lightweight, client-side parser to convert raw CSV text into JavaScript objects. It handles basic formatting such as:
- Trimming whitespace from headers and values.
- Removing surrounding double quotes (`"`) from fields.
- Mapping header strings to object keys.
### 2. Normalization
Once parsed, the engine injects mandatory metadata required for database security and project context:
- `project_id`: Links the records to the current active project.
- `user_id`: Identifies the record creator for audit purposes.
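A minimal sketch of this step, assuming the metadata is spread onto each parsed row (the helper name is illustrative):

```typescript
// Sketch of the normalization step: inject project_id and user_id into
// every parsed row. `normalizeRows` is an illustrative name, not the
// engine's actual API.
function normalizeRows(
  rows: Record<string, string>[],
  projectId: string,
  userId: string
): Record<string, string>[] {
  return rows.map(row => ({
    ...row,
    project_id: projectId, // links the record to the active project
    user_id: userId,       // identifies the record creator for auditing
  }));
}
```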
### 3. Ingestion
Data is batch-inserted into Supabase via the supabaseClient. This ensures that even large imports are handled in a single network request where possible, reducing overhead.
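The batch insert might look like the following sketch. The `ingest` helper and its error handling are illustrative; `from(table).insert(rows)` is the standard supabase-js call shape, typed here behind a minimal interface:

```typescript
// Minimal sketch of the ingestion step. `client` is assumed to expose the
// supabase-js surface: from(table).insert(rows) resolving to { error }.
interface InsertResult { error: { message: string } | null }
interface DbClient {
  from(table: string): { insert(rows: object[]): Promise<InsertResult> };
}

// Insert all rows in one batch request; surface any database error to the caller.
async function ingest(client: DbClient, table: string, rows: object[]): Promise<void> {
  const { error } = await client.from(table).insert(rows);
  if (error) throw new Error(`Import failed: ${error.message}`);
}
```

Passing the whole array to a single `insert` call is what keeps large imports down to one network request where possible.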
## Usage Interface

The engine is primarily accessed through the ImportModal component. To initiate an import, the component must be supplied with the following props:
```typescript
interface ImportProps {
  projectId: string; // The UUID of the target project
  userId: string;    // The UUID of the authenticated user
}
```
## CSV Requirements
To ensure a successful import, CSV files must adhere to the following standards:
- First Row: Must contain headers that exactly match the keys defined in the `TABLE_MAP`.
- Encoding: UTF-8 encoded plain text.
- Delimiter: Comma (`,`).
- Date Format: ISO 8601 (`YYYY-MM-DD`) is recommended for date fields to ensure compatibility with Supabase `DATE` and `TIMESTAMPTZ` types.
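For example, a minimal Task List file meeting these requirements could look like this (the data rows are illustrative):

```csv
wbs_code,task,owner,start_date,end_date,status
1.1,"Kickoff meeting",PM,2024-01-08,2024-01-08,Done
1.2,"Requirements draft",BA,2024-01-09,2024-01-19,In Progress
```

Note that because the parser splits on raw commas, field values containing commas are not safe even when quoted.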
## Implementation Example
Inside the ImportModal, the parsing logic is triggered upon file selection:
```typescript
function parseCSV(text: string): Record<string, string>[] {
  // Split into lines, tolerating both LF and CRLF line endings.
  const lines = text.trim().split(/\r?\n/);
  // Need at least a header row and one data row.
  if (lines.length < 2) return [];
  // Header row: trim each cell and strip one surrounding pair of quotes.
  const headers = lines[0].split(',').map(h => h.trim().replace(/^"|"$/g, ''));
  // Each remaining line becomes an object keyed by header; missing cells default to ''.
  return lines.slice(1).map(line => {
    const vals = line.split(',').map(v => v.trim().replace(/^"|"$/g, ''));
    return Object.fromEntries(headers.map((h, i) => [h, vals[i] || '']));
  });
}
```
## Error Handling
The engine performs basic validation during the ingestion phase. If a required column is missing from the CSV header or if the database constraints (e.g., data types) are violated, the Supabase client will return an error, which should be caught and displayed via the system's Toast notification service.
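The error path can be sketched as follows; `showErrorToast` is a placeholder for the platform's Toast notification service, and the function name is an assumption:

```typescript
// Sketch of the ingestion error path. `showErrorToast` stands in for the
// system's Toast notification service (assumed name).
type ToastFn = (message: string) => void;

// Returns true on success; on failure, surfaces the Supabase error
// (missing column, type violation, etc.) to the user and returns false.
function reportImportError(
  error: { message: string } | null,
  showErrorToast: ToastFn
): boolean {
  if (!error) return true;
  showErrorToast(`Import failed: ${error.message}`);
  return false;
}
```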