Software Requirements Specification (SRS)
A comprehensive requirements specification for the EcoGuard sustainability platform
This document defines every layer of requirements, from high-level business goals down to interface contracts, following IEEE 830 standards and managed through OSRMT (Open-Source Requirements Management Tool).
Requirements Hierarchy
The diagram below illustrates how the five requirement types relate to each other, flowing from strategic business needs down to technical interface contracts.
Each level adds specificity. Business Requirements define why the system exists, User Requirements define what users need, Functional Requirements define how the system behaves, Non-Functional Requirements define how well it performs, and Interface Requirements define how it connects to external systems.
1. Business Requirements
Business requirements capture the high-level objectives that justify the project's existence. They are defined by executive stakeholders and drive every downstream decision.
BR-001: Sustainability Compliance
Description: EcoGuard shall enable organizations to measure, track, and report the carbon footprint of their CI/CD pipelines in compliance with emerging EU sustainability reporting directives.
Priority: Critical
Rationale: Regulatory bodies are increasingly requiring digital sustainability reporting. Failure to comply exposes organizations to legal and reputational risk.
BR-002: Cost Optimization
Description: The platform shall identify resource inefficiencies in pipeline execution and recommend optimizations that reduce both compute costs and energy consumption by at least 15%.
Priority: High
Rationale: Cloud compute costs are a major operational expense. Aligning cost reduction with sustainability creates a dual incentive for adoption.
BR-003: Transparent Reporting
Description: EcoGuard shall produce clear, auditable sustainability dashboards and reports suitable for both internal engineering teams and external stakeholders.
Priority: High
Rationale: Transparency builds trust with customers, investors, and regulatory bodies.
Business Requirements Traceability
2. User Requirements
User requirements describe the system from the perspective of the people who will interact with it. They define expected behaviors in natural language.
UR-001: View Emission Trends
Actor: DevOps Engineer
Description: As a DevOps engineer, I want to view CO₂ emission trends for my pipelines over the past 30 days so that I can identify which jobs are the biggest contributors.
Acceptance Criteria:
- Dashboard displays a line chart of daily emissions
- User can filter by project, branch, or job name
- Data refreshes within 5 minutes of pipeline completion
UR-002: Receive Optimization Alerts
Actor: Team Lead
Description: As a team lead, I want to receive alerts when a pipeline exceeds emission thresholds so that I can prioritize optimization before the next sprint.
Acceptance Criteria:
- Configurable threshold per project (kg CO₂ per build)
- Alerts delivered via GitLab notification and email
- Alert includes specific job and recommended action
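The threshold check behind these criteria is small enough to sketch directly. This is a minimal illustration, not the mandated design: the function and field names (`check_emission_threshold`, `co2_kg`) are assumptions, not part of the spec.

```python
def check_emission_threshold(jobs, threshold_kg):
    """Return an alert payload when a pipeline run's total emissions exceed
    the configured per-project threshold (kg CO2 per build), else None.

    `jobs` is a list of {"name": str, "co2_kg": float} dicts; both the
    function and field names are illustrative.
    """
    total = sum(j["co2_kg"] for j in jobs)
    if total <= threshold_kg:
        return None
    # Surface the single biggest contributor so the alert can include a
    # specific job and a recommended action, as the acceptance criteria require.
    worst = max(jobs, key=lambda j: j["co2_kg"])
    return {
        "total_kg": round(total, 3),
        "threshold_kg": threshold_kg,
        "worst_job": worst["name"],
        "recommended_action": f"Review job '{worst['name']}' for caching or right-sizing",
    }
```

The returned payload maps directly onto the alert body delivered via GitLab notification and email.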
UR-003: Generate Compliance Reports
Actor: Sustainability Officer
Description: As a sustainability officer, I want to generate monthly compliance reports with one click so that I can submit them to regulatory bodies without manual data aggregation.
Acceptance Criteria:
- Report includes total emissions, energy usage, and trend analysis
- Exportable as PDF and CSV
- Signed with generation timestamp for audit trail
User Journey Map
3. Functional Requirements
Functional requirements define the specific behaviors, features, and functions the system must perform.
FR-001: Pipeline Data Collection
Traces to: UR-001, BR-001
Description: The system shall automatically collect job-level metadata (duration, runner type, resource usage) from GitLab CI/CD pipelines via the GitLab REST API.
Input: GitLab project ID, API token
Output: Structured JSON containing job metrics per pipeline run
Processing:
- Query `/api/v4/projects/:id/pipelines` for recent pipelines
- For each pipeline, fetch individual job details
- Extract duration, runner tags, artifacts size, and status
- Store normalized data in `dashboards/data/`
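The normalization and storage steps above can be sketched as follows. The field names mirror the GitLab jobs API (`duration`, `status`, `tag_list`), but the helpers themselves (`normalize_jobs`, `save_metrics`) are illustrative assumptions, not the actual implementation.

```python
import json

def normalize_jobs(pipeline_id, raw_jobs):
    """Flatten raw GitLab job payloads into the structured metrics record
    FR-001 stores under dashboards/data/. Field names mirror the GitLab
    jobs API (duration, status, tag_list); verify against the API docs."""
    return {
        "pipeline_id": pipeline_id,
        "jobs": [
            {
                "name": j.get("name"),
                "duration_s": j.get("duration") or 0.0,
                "runner_tags": j.get("tag_list", []),
                "status": j.get("status"),
            }
            for j in raw_jobs
        ],
    }

def save_metrics(record, path):
    # One JSON file per pipeline run keeps the data set append-only and diffable.
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
```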
FR-002: Carbon Emission Calculation
Traces to: UR-001, BR-001
Description: The system shall calculate CO₂ emissions for each pipeline job using the formula:

CO₂ (kg) = Energy (kWh) × Carbon Intensity (gCO₂/kWh) / 1000

where Energy = Duration (hours) × Power Draw (kW), and Carbon Intensity is fetched from the Electricity Maps API for the runner's region.
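The calculation is a direct translation of the formula, with the gram-to-kilogram conversion spelled out; the function name is illustrative.

```python
def job_emissions_kg(duration_hours, power_draw_kw, carbon_intensity_g_per_kwh):
    """CO2 (kg) = Energy (kWh) x Carbon Intensity (gCO2/kWh) / 1000,
    where Energy = Duration (h) x Power Draw (kW)."""
    energy_kwh = duration_hours * power_draw_kw
    return energy_kwh * carbon_intensity_g_per_kwh / 1000.0
```

For example, a 2-hour job on a 0.5 kW runner at 400 gCO₂/kWh yields 0.4 kg CO₂.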
FR-003: Optimization Agent
Traces to: UR-002, BR-002
Description: The system shall analyze pipeline efficiency and generate actionable recommendations including:
- Parallelization opportunities for sequential jobs
- Cache optimization for repeated dependency installations
- Runner right-sizing based on actual CPU/memory utilization
- Scheduling non-urgent jobs during low carbon intensity windows
FR-004: Dashboard Visualization
Traces to: UR-001, UR-003, BR-003
Description: The system shall render interactive dashboards with the following views:
- Daily/weekly/monthly emission trend charts
- Per-project and per-job breakdowns
- Sustainability goal progress indicators
- Carbon intensity heatmap by time of day
FR-005: Eco-Friendly Deployment Scheduling
Traces to: BR-001, BR-002
Description: The system shall recommend optimal deployment windows based on forecasted grid carbon intensity. When carbon intensity exceeds a configurable threshold, the system shall suggest delaying non-critical deployments.
Functional Decomposition
4. Non-Functional Requirements (NFRs)
Non-functional requirements define the quality attributes and constraints that the system must satisfy.
Performance
| ID | Requirement |
|---|---|
| NFR-001 | Dashboard shall load within 3 seconds on a standard broadband connection |
| NFR-002 | Data collection for 100 pipelines shall complete within 60 seconds |
| NFR-003 | Emission calculations shall process within 500ms per job |
Security
| ID | Requirement |
|---|---|
| NFR-004 | GitLab API tokens shall be stored as environment variables, never in source code |
| NFR-005 | All external API calls shall use HTTPS/TLS 1.2+ |
| NFR-006 | Dashboard access shall respect GitLab project-level permissions |
Scalability
| ID | Requirement |
|---|---|
| NFR-007 | System shall handle data from up to 50 concurrent GitLab projects |
| NFR-008 | Historical data storage shall support at least 12 months of metrics |
Maintainability
| ID | Requirement |
|---|---|
| NFR-009 | Codebase shall maintain a minimum of 80% test coverage |
| NFR-010 | All Python modules shall follow PEP 8 style guidelines |
| NFR-011 | Documentation shall be updated alongside every feature change |
Usability
| ID | Requirement |
|---|---|
| NFR-012 | Dashboard shall be responsive and usable on screens from 375px to 2560px |
| NFR-013 | Color palette shall meet WCAG 2.1 AA contrast standards |
Reliability
| ID | Requirement |
|---|---|
| NFR-014 | System shall gracefully degrade if external APIs are unavailable |
| NFR-015 | Failed data collection jobs shall retry up to 3 times with exponential backoff |
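NFR-015's retry policy can be sketched as a small wrapper. The callable-injection style below (passing `fetch` and `sleep` in) is an illustrative choice made for testability, not the mandated design.

```python
import time

def collect_with_retry(fetch, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Run a data-collection callable, retrying up to 3 times with
    exponential backoff (1s, 2s, 4s) as NFR-015 requires. The `sleep`
    parameter is injectable so tests need not actually wait."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # exhausted all attempts: surface the failure
            sleep(base_delay * (2 ** attempt))
```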
NFR Quality Model
5. Interface Requirements
Interface requirements define how EcoGuard connects to external systems, APIs, and user-facing surfaces.
5.1 External API Interfaces
IR-001: GitLab REST API
Direction: EcoGuard → GitLab
Protocol: HTTPS REST (JSON)
Authentication: Personal Access Token (PAT) via GITLAB_TOKEN environment variable
Endpoints Used:
- `GET /api/v4/projects/:id/pipelines` – List pipeline runs
- `GET /api/v4/projects/:id/pipelines/:pipeline_id/jobs` – Job details
- `GET /api/v4/projects/:id/issues` – Compliance issue tracking
- `POST /api/v4/projects/:id/issues` – Create optimization recommendations
Rate Limits: Respects GitLab rate limit headers; implements retry with Retry-After header.
IR-002: Electricity Maps API
Direction: EcoGuard → Electricity Maps
Protocol: HTTPS REST (JSON)
Authentication: API key via ELECTRICITY_MAPS_API_KEY environment variable
Endpoints Used:
- `GET /v3/carbon-intensity/latest` – Current carbon intensity by zone
- `GET /v3/carbon-intensity/forecast` – 72-hour forecast for deployment scheduling
Fallback: If the API is unavailable, use a default carbon intensity of 475 gCO₂/kWh (global average).
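The fallback behaviour ties IR-002 to NFR-014 (graceful degradation) and is simple to express. The `fetch_latest` callable here is a hypothetical stand-in for the real Electricity Maps HTTP request.

```python
GLOBAL_AVERAGE_INTENSITY = 475  # gCO2/kWh, the IR-002 fallback value

def carbon_intensity(fetch_latest, zone):
    """Return the current carbon intensity for a zone, degrading to the
    global-average constant when the Electricity Maps call fails (NFR-014).
    `fetch_latest` is a hypothetical callable wrapping the real request."""
    try:
        return fetch_latest(zone)
    except Exception:
        return GLOBAL_AVERAGE_INTENSITY
```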
5.2 Internal Interfaces
IR-003: Flask API Server
Direction: Dashboard → Backend
Protocol: HTTP REST (JSON)
Endpoints:
- `GET /api/metrics/daily` – Daily metrics summary
- `GET /api/metrics/weekly` – Weekly metrics summary
- `GET /api/metrics/monthly` – Monthly metrics summary
- `GET /api/summary` – Overall project summary
- `GET /api/goals` – Sustainability goal progress
5.3 User Interface
IR-004: Web Dashboard
Technology: HTML5, CSS3, JavaScript with Chart.js / D3.js
Supported Browsers: Chrome 90+, Firefox 88+, Safari 14+, Edge 90+
Responsive Breakpoints:
- Mobile: 375px–768px
- Tablet: 769px–1024px
- Desktop: 1025px+
Interface Architecture
Requirements Management with OSRMT
OSRMT (Open-Source Requirements Management Tool) is used to gather, organize, trace, and validate all requirements throughout the project lifecycle.
Why OSRMT?
Structured Capture
OSRMT provides a tree-based hierarchy to organize requirements into categories (Business, User, Functional, NFR, Interface) with unique identifiers for traceability.
Traceability Matrix
Every requirement is linked to its parent (upstream traceability) and its implementation artifacts like test cases and code modules (downstream traceability).
Change Tracking
OSRMT logs every modification with timestamps, authors, and justifications, creating a complete audit trail for compliance and review.
Validation & Verification
Requirements are tagged with validation status (Draft → Reviewed → Approved → Implemented → Verified) to track progress through the lifecycle.
OSRMT Workflow for EcoGuard
Full Traceability Matrix
| Req ID | Type | Traces To | Status | Owner |
|---|---|---|---|---|
| BR-001 | Business | UR-001, UR-003 | Approved | Product Owner |
| BR-002 | Business | UR-002 | Approved | Product Owner |
| BR-003 | Business | UR-003 | Approved | Product Owner |
| UR-001 | User | FR-001, FR-002, FR-004 | Approved | DevOps Lead |
| UR-002 | User | FR-003 | Approved | DevOps Lead |
| UR-003 | User | FR-004, FR-007 | Approved | Sustainability Officer |
| FR-001 | Functional | IR-001 | Implemented | Backend Dev |
| FR-002 | Functional | IR-001, IR-002 | Implemented | Backend Dev |
| FR-003 | Functional | – | Implemented | Backend Dev |
| FR-004 | Functional | IR-003, IR-004 | Implemented | Frontend Dev |
| FR-005 | Functional | IR-002 | Implemented | Backend Dev |
| NFR-001 | Non-Functional | FR-004 | Verified | QA Lead |
| NFR-004 | Non-Functional | IR-001, IR-002 | Verified | Security Lead |
| IR-001 | Interface | FR-001, FR-002 | Verified | Backend Dev |
| IR-002 | Interface | FR-002, FR-005 | Verified | Backend Dev |
6. AI-Assisted Requirements (Comparison)
To enhance the system's capabilities beyond manual rule-based logic, the following AI-assisted requirements are introduced for comparison against the traditional manual features. These features aim to reduce the human bottleneck by automating complex pattern recognition and code modification.
AI-Assisted vs. Manual: Each AI requirement below directly supersedes or augments a manual requirement, offering greater adaptability, speed, and scale at the cost of added complexity, compute overhead, and governance concerns.
AI-FR-001: Predictive Emission Forecasting
Compared to: Manual trend review (UR-001)
Description: Instead of relying solely on past data visualization for manual review, the system shall utilize machine learning models (e.g., LSTM time-series or Prophet) to forecast future carbon emissions up to 7 days ahead. This includes predicting energy spikes during seasonal traffic increases, large code branch merges, or scheduled release cycles.
Input: Historical emission time-series data per project (minimum 30 days), runner metadata, calendar events
Output: Probabilistic emission forecast with confidence intervals, surfaced in the dashboard and triggering pre-emptive scheduling recommendations
Acceptance Criteria:
- Forecasts shall achieve a Mean Absolute Percentage Error (MAPE) ≤ 15% on a rolling 7-day horizon
- Predictions are refreshed automatically after each new pipeline run completes
- A confidence band (80% interval) must accompany every forecast shown in the UI
- If training data is insufficient (< 30 data points), the system shall display a manual trend chart and suppress AI forecasts
AI Technology: Facebook Prophet / LSTM via scikit-learn or TensorFlow Lite; model artifacts versioned in the repository
NFR References: NFR-016 (determinism), NFR-018 (benchmarking)
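The cold-start guard and confidence band from the acceptance criteria can be illustrated without the actual model. The flat mean forecast below is a deliberate stand-in for the Prophet/LSTM forecaster, shown only to make the fallback and band behaviour concrete.

```python
import statistics

MIN_TRAINING_POINTS = 30  # below this, AI-FR-001 falls back to the manual chart

def forecast_emissions(history, horizon_days=7):
    """Illustrative stand-in for the Prophet/LSTM forecaster: a flat mean
    forecast with an ~80% band derived from the sample spread. Returns None
    when history is too short, which the UI maps to the manual trend chart."""
    if len(history) < MIN_TRAINING_POINTS:
        return None
    mean = statistics.fmean(history)
    band = 1.28 * statistics.pstdev(history)  # ~80% interval under normality
    return [
        {"day": d + 1, "kg_co2": mean, "low": mean - band, "high": mean + band}
        for d in range(horizon_days)
    ]
```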
AI-FR-002: Intelligent Remediation Generation
Compared to: Static Optimization Agent (FR-003)
Description: While the manual agent highlights issues via static rules, the AI-assisted system shall employ an LLM-based agent (e.g., GitLab Duo / OpenAI GPT-4o) to contextualize pipeline failures and inefficiencies. It must generate automated merge requests with context-aware, code-level optimizations that rewrite `.gitlab-ci.yml` and Dockerfile configurations to directly reduce the carbon footprint without requiring a developer's initial draft.
Input: Raw `.gitlab-ci.yml` content, job duration logs, CPU/memory utilization metrics, identified inefficiency categories from FR-003
Output: A fully formed merge request containing patched CI/CD configuration files, inline comments explaining each change, and an estimated emission reduction percentage
Acceptance Criteria:
- Generated merge requests must pass automated CI syntax validation before being opened
- Each MR description must include a carbon saving estimate (kg CO₂) and a confidence score
- Remediation suggestions must not remove any job flagged as a required status check
- Human approval is mandatory before any AI-generated MR is merged (human-in-the-loop gate)
- The system shall achieve ≥ 70% MR acceptance rate measured over a rolling 30-day window
AI Technology: LLM API (GitLab Duo / OpenAI GPT-4o) with structured output / function calling; prompt templates version-controlled
NFR References: NFR-017 (sandboxed fallback), NFR-018 (benchmarking)
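The "must not remove a required status check" criterion is the easiest guardrail to make concrete. This is a sketch under the assumption that jobs can be compared by name; the function name is illustrative.

```python
def violates_required_jobs(original_jobs, patched_jobs, required):
    """Guardrail for AI-generated merge requests: return the set of jobs
    flagged as required status checks that the patched CI config dropped.
    An MR is only opened when this set is empty."""
    removed = set(original_jobs) - set(patched_jobs)
    return removed & set(required)
```

In the review flow, a non-empty result would block the MR before the human-in-the-loop gate is even reached.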
AI-FR-003: Dynamic Anomaly Detection
Compared to: Static Threshold Alerts (UR-002)
Description: Rather than relying on rigid, pre-configured high-emission thresholds, the system shall train an unsupervised anomaly detection model (e.g., Isolation Forest or DBSCAN) on the historical baseline behaviour of each specific CI/CD pipeline. Alerts are raised automatically for statistically anomalous deviations, adapting to evolving pipeline structures without manual threshold updates.
Input: Per-job emission time-series, pipeline structural metadata (job count, parallelism), runner utilization rates
Output: Anomaly score per pipeline run (0–1), binary alert flag, and a human-readable root-cause hypothesis surfaced to the team lead
Acceptance Criteria:
- Model retrains automatically every 7 days or after 500 new pipeline runs, whichever comes first
- False-positive rate shall remain below 10% measured against a manually labelled validation set
- Anomaly alerts must fire within 5 minutes of pipeline completion (same as UR-001 data freshness SLA)
- The system must surface the top-3 contributing jobs to each anomaly in the alert payload
- Baseline model bootstrapping requires a minimum of 50 pipeline runs; system falls back to static thresholds during the cold-start period
AI Technology: scikit-learn Isolation Forest; model serialized with joblib and stored in models/
NFR References: NFR-016 (determinism), NFR-017 (fallback), NFR-018 (benchmarking)
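The scoring contract (a 0–1 score plus a cold-start fallback) can be sketched without the actual model. The robust z-score below is a deliberate stand-in for the Isolation Forest scorer, chosen only to keep the sketch dependency-free; the 6-sigma scaling is an arbitrary illustrative choice.

```python
import statistics

MIN_BASELINE_RUNS = 50  # cold-start threshold from AI-FR-003

def anomaly_score(baseline, value):
    """Stand-in for the Isolation Forest scorer: a median/MAD robust
    z-score squashed into 0-1. Returns None during the cold-start period,
    signalling the caller to fall back to static thresholds (UR-002)."""
    if len(baseline) < MIN_BASELINE_RUNS:
        return None
    med = statistics.median(baseline)
    mad = statistics.median(abs(x - med) for x in baseline) or 1e-9
    z = abs(value - med) / (1.4826 * mad)  # 1.4826 scales MAD to sigma
    return min(z / 6.0, 1.0)  # ~6 robust sigmas maps to a score of 1.0
```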
AI-FR-004: Context-Aware Documentation & Compliance
Compared to: Manual Report Generation (UR-003)
Description: The system shall auto-generate detailed, narrative-driven compliance reports formatted specifically for varying regulatory bodies (EU CSRD, ISO 14064, GHG Protocol). An LLM layer translates raw emission metrics into structured audit narratives, shifting the manual burden of compiling different data sets for different audits entirely to AI.
Input: Aggregated monthly emission metrics, sustainability goals progress data, regulatory body selector (EU / ISO / GHG), organization profile
Output: A formatted PDF/DOCX report with executive summary, data tables, trend narrative, methodology disclosure, and digital audit signature
Acceptance Criteria:
- Report must be generated and available for download within 60 seconds of user request
- All numerical data in the narrative must be validated against the source JSON with a ±0.01 kg CO₂ tolerance; no AI hallucination of figures is permitted
- Every report shall include a machine-readable JSON-LD metadata block for automated regulatory ingestion
- The system shall support at minimum three output formats: PDF, DOCX, and CSV
- Narrative text quality shall be reviewed via automated Flesch-Kincaid readability scoring (target grade level ≤ 12)
AI Technology: LLM with retrieval-augmented generation (RAG) over the organization's emission data; output grounded and fact-checked before rendering
NFR References: NFR-016 (determinism), NFR-017 (sandboxed fallback)
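The anti-hallucination check on numeric figures is mechanical and worth sketching. This assumes the narrative quotes emissions as "X kg" strings; the regex and function name are illustrative, not the required implementation.

```python
import re

TOLERANCE_KG = 0.01  # from AI-FR-004's acceptance criteria

def narrative_figures_valid(narrative, source_values):
    """Check every 'X kg' figure quoted in generated narrative text against
    the source metrics, within the 0.01 kg tolerance. A single unmatched
    figure rejects the report before rendering."""
    quoted = [float(m) for m in re.findall(r"([0-9]+(?:\.[0-9]+)?)\s*kg", narrative)]
    return all(
        any(abs(q - s) <= TOLERANCE_KG for s in source_values) for q in quoted
    )
```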
AI Requirements Lifecycle Flow
7. Conclusion: Manual vs. AI-Assisted Execution
When evaluating the platform's execution, there are distinct trade-offs between manual (rule-based) approaches and AI-assisted workflows. A successful implementation requires balancing the precision of manual rules with the adaptability of AI. The conclusions drawn below are informed by the four paired requirement comparisons documented in Section 6.
Limitations of Manual Execution
- Scalability Lag: Manual review of pipeline emissions becomes unmanageable across hundreds of repositories. Team leads simply cannot review every job log.
- Cognitive Overload: Engineers are forced to interpret raw data and manually translate insights into code changes, a high-effort, low-leverage activity.
- Static Rules: Heuristics for optimization cannot adapt to unique pipeline structures without constant, labor-intensive human updates.
- Delayed Action: Reporting and remediation completely depend on human time availability, drastically delaying potential energy savings.
- Threshold Drift: Manually configured emission thresholds become stale as pipelines evolve, leading to alert fatigue from false positives or missed anomalies.
- Inconsistent Reporting Quality: Human-compiled compliance documents vary in structure, depth, and language across reporters, creating audit trail inconsistencies.
Limitations of AI-Assisted Execution
- Consistency lag: AI models exhibit non-deterministic behavior, proposing completely different code optimizations for the exact same pipeline data over time.
- Compute Overhead: Running LLMs for code optimization generates its own substantial carbon footprint, which can ironically outweigh the pipeline energy savings if not metered carefully.
- Hallucinations: The AI may confidently suggest invalid configuration changes that structurally break CI/CD pipelines or fabricate emission figures in reports.
- Data Privacy: Sending proprietary CI/CD logs and internal code to external LLM providers introduces significant data governance and IP risks.
- Cold-Start Problem: AI models for anomaly detection and forecasting require substantial historical data (30โ50+ pipeline runs) before producing reliable outputs.
- Model Drift: Without continuous retraining, AI models degrade in accuracy as pipeline structures and team workflows evolve, requiring ongoing MLOps investment.
Consistency Assurance Requirements
To mitigate the consistency and reliability gaps identified in AI-assisted execution, the following safeguards are implemented:
NFR-016: AI Output Consistency & Determinism
Description: The system shall enforce deterministic parameter settings (e.g., temperature = 0.0, fixed seed values) for all analytical LLM requests, yielding consistent optimization recommendations for identical inputs.
NFR-017: Fallback to Manual Heuristics (Sandboxing)
Description: The system shall securely evaluate all AI-generated code optimizations using an isolated sandboxed validation test. If the AI output fails automated syntax and logic validation, the system must instantly override the AI and transparently fall back to the manual rule-based logic (FR-003).
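The NFR-017 control flow is a small decision tree. In this sketch all four callables (`ai_optimize`, `validate`, `manual_rules`, and the pipeline object itself) are hypothetical stand-ins for the real components.

```python
def remediate(pipeline, ai_optimize, validate, manual_rules):
    """NFR-017 control flow: run the AI optimizer, validate its output in
    isolation, and transparently fall back to the manual rule-based agent
    (FR-003) when validation fails or the AI call itself errors."""
    try:
        candidate = ai_optimize(pipeline)
        if validate(candidate):
            return candidate, "ai"
    except Exception:
        pass  # AI failure is never fatal: the manual path always exists
    return manual_rules(pipeline), "manual"
```

Returning the source label alongside the result lets downstream auditing record which path produced each recommendation.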
NFR-018: Continuous LLM Benchmarking
Description: The system shall automatically log and test the acceptance rate and output variance of LLM responses over time, establishing an internal confidence score. If the score drops below 85% consistency, AI capabilities for that module shall be automatically disabled.
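The disable decision in NFR-018 reduces to a rolling ratio against the 85% floor. This sketch leaves window management (the rolling 30-day slice) to the caller; the function name is illustrative.

```python
CONSISTENCY_FLOOR = 0.85  # NFR-018: below this, disable the AI module

def module_enabled(outcomes):
    """Compute the confidence score for an AI module from logged outcomes
    (True = accepted/consistent response) and decide whether the module
    stays enabled. Rolling-window slicing is left to the caller."""
    if not outcomes:
        return False  # no evidence yet: stay on manual rules
    score = sum(outcomes) / len(outcomes)
    return score >= CONSISTENCY_FLOOR
```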
Strategic Recommendations
Based on the comparative analysis above, the following strategies are recommended to harness AI benefits while preserving the reliability of manual baselines:
Hybrid Execution Model
Deploy AI and manual systems in parallel rather than replacing one with the other. AI handles high-volume pattern recognition tasks; manual rules serve as the authoritative fallback and override mechanism. Neither system is exclusively trusted.
Human-in-the-Loop Gates
All AI-generated merge requests, anomaly alerts above severity level 2, and compliance reports must receive explicit human approval before being acted upon. This prevents automated changes from propagating through production pipelines unchecked.
On-Premise / Local LLM Priority
To mitigate data privacy concerns, the architecture shall prefer self-hosted or on-premise LLM deployments (e.g., Ollama + Llama 3) for tasks involving proprietary pipeline code. External API calls are reserved for non-sensitive analytical tasks only.
AI Carbon Accounting
The platform shall measure and report the AI subsystem's own energy consumption as a separate dashboard metric. This ensures that AI-driven optimizations deliver a net-positive carbon outcome: the AI must save more emissions than it consumes.
Requirements Summary
8. Manual vs. AI-Assisted: Comparison Table
The table below provides a definitive, side-by-side evaluation of all four paired requirements across six evaluation dimensions.