This Guidance demonstrates how you can automate your carbon footprint tracking with the Sustainability Insights Framework (SIF) on AWS. If you are looking to build a new carbon footprint tracking system or to improve an existing one, this Guidance will help you accelerate the design and automate tracking processes.
Architecture Diagram
[Architecture diagram description]
Overview
SIF is composed of a suite of modules, each focusing on a specific set of features. This conceptual architecture shows these modules and their interactions.
Step 1
Users interact with SIF through REST APIs.
Step 2
The Access Management module is where users and permissions are managed and resources are separated by groups.
Step 3
The Impacts module enables users to manage resources, such as impact factors, that can be referenced from within the Calculations and Pipelines modules when performing data processing calculations.
Step 4
The Reference Datasets module enables users to manage datasets, such as lookup tables. These datasets can be referenced from within the Calculations and Pipelines modules.
Step 5
The Calculations module enables users to define and manage equations or functions which can be referenced in other modules to perform data processing calculations.
Step 6
The Pipelines module enables users to configure data processing pipelines used to perform calculations.
Step 7
The Pipeline Processor module is responsible for orchestrating pipelines and performing pipeline aggregations.
Step 8
The Calculator module is a backend component that runs operations in a pipeline. This can include arithmetic or the lookup of resources.
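As an illustration of the interaction in Step 1, the following minimal TypeScript sketch calls a SIF module's REST API with an Amazon Cognito token. It assumes Node.js 18 or later for the built-in fetch; the environment variable names and the /calculations path are hypothetical, not documented SIF endpoints.

```typescript
// Hypothetical sketch: calling a SIF module REST API with a Cognito-issued JWT.
const apiBaseUrl = process.env.SIF_API_URL!;   // assumed: the API Gateway URL of a module
const idToken = process.env.COGNITO_ID_TOKEN!; // assumed: a token obtained from Amazon Cognito

async function listResources(path: string): Promise<unknown> {
  const response = await fetch(`${apiBaseUrl}${path}`, {
    headers: {
      Authorization: `Bearer ${idToken}`,
      Accept: 'application/json',
    },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}

// Usage (hypothetical path):
// const calculations = await listResources('/calculations');
```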
[Architecture diagram description]
Access Management
The Access Management Module uses the concepts of users and groups to allow for permissions management and segregation of resources within SIF. SIF users can define users and groups through an external REST API. Internally, other SIF modules make calls to the Access Management module to validate permissions. Each tenant has its own copy of the Access Management infrastructure.
Step 1
Users of SIF interact with the Access Management module through an externally available API.
Step 2
The externally available API consists of a REST API in Amazon API Gateway. The application logic is deployed in AWS Lambda.
Step 3
User authentication is done through tokens received from Amazon Cognito.
Step 4
The Access Management data is stored in an Amazon DynamoDB table.
Step 5
Access Management resource changes emit events to a message bus in Amazon EventBridge. Events can be tracked to update other components of the framework.
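The following minimal sketch shows how a resource change event like the one in Step 5 could be published with the AWS SDK for JavaScript v3. The bus name, event source, and detail shape are assumptions for illustration, not the event contract used by SIF.

```typescript
import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

// Minimal sketch of emitting a resource-change event to an EventBridge bus.
const client = new EventBridgeClient({});

export async function emitGroupCreatedEvent(groupId: string): Promise<void> {
  await client.send(
    new PutEventsCommand({
      Entries: [
        {
          EventBusName: 'sif-events',              // hypothetical bus name
          Source: 'com.example.accessManagement',  // hypothetical event source
          DetailType: 'groupCreated',
          Detail: JSON.stringify({ groupId }),
        },
      ],
    })
  );
}
```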
[Architecture diagram description]
Impacts
The Impacts Module enables users to manage impact-related resources. These resources can be referenced from within the Calculations and Pipelines modules when performing data processing calculations, such as emissions.
An example Impact could be the carbon dioxide equivalent (CO2e) of a specific activity, such as mobile diesel fuel consumption. The Impacts module has the capability to create many Impact resources in bulk through an Impact Tasks API. Impacts are versioned to provide traceability.
Step 1
Users of SIF interact with the Impacts module through an externally available API.
Step 2
The externally available API consists of a REST API in API Gateway. The application logic is deployed in Lambda.
Step 3
User authentication is done through tokens received from Amazon Cognito. Authorization is done through the Access Management module.
Step 4
Impact data is stored in a DynamoDB table.
Step 5
Amazon Simple Queue Service (Amazon SQS) is used along with a Lambda Impact Task Processor to orchestrate bulk Impact creation tasks.
Step 6
Amazon SQS asynchronously processes metadata updates to resources such as adding searchable tags.
Step 7
Impacts resource changes emit events to a message bus in EventBridge. Events can be tracked to update other components of the framework.
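As a minimal sketch of the queueing described in Step 5, the following code sends a bulk Impact creation task to Amazon SQS with the AWS SDK for JavaScript v3. The queue URL environment variable and message shape are assumptions for illustration.

```typescript
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';

// Minimal sketch of queueing a bulk Impact creation task for asynchronous
// processing by a Lambda task processor.
const sqs = new SQSClient({});

export async function queueImpactTask(taskId: string, impactCount: number): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: process.env.IMPACT_TASK_QUEUE_URL!, // hypothetical environment variable
      MessageBody: JSON.stringify({ taskId, impactCount }),
    })
  );
}
```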
[Architecture diagram description]
Reference Datasets
The Reference Datasets Module enables users to manage datasets, such as lookup tables. These datasets can be referenced from within the Calculations and Pipelines modules when performing data processing calculations, such as emissions.
An example Reference Dataset is a table that enables lookup of the mix of electricity generation (such as coal, nuclear, wind) for a particular location. Reference Datasets are versioned to provide traceability.
Step 1
Users of SIF interact with the Reference Datasets module through an externally available API and by uploading files to Amazon Simple Storage Service (Amazon S3) through a pre-signed URL.
Step 2
The externally available API consists of a REST API in API Gateway. The application logic is deployed in Lambda.
Step 3
User authentication is done through tokens received from Amazon Cognito. Authorization is done through the Access Management module.
Step 4
Data (such as dataset names) and metadata (such as tags) are stored in a DynamoDB table.
Step 5
The dataset is stored in Amazon S3.
Step 6
The dataset is indexed on create and update using AWS Step Functions.
Step 7
Amazon SQS asynchronously processes metadata updates to resources, such as adding searchable tags.
Step 8
Reference Dataset resource changes emit events to a message bus in EventBridge. Events can be tracked to update other components of the framework.
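The pre-signed upload in Step 1 can be illustrated with the AWS SDK for JavaScript v3 as in the minimal sketch below; the bucket environment variable and object key layout are assumptions, not SIF's actual naming.

```typescript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

// Minimal sketch of generating a pre-signed upload URL for a dataset file.
const s3 = new S3Client({});

export async function createDatasetUploadUrl(datasetId: string): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: process.env.DATASET_BUCKET!,        // hypothetical bucket name
    Key: `referenceDatasets/${datasetId}.csv`,  // hypothetical key layout
  });
  // URL expires after 5 minutes; the caller uploads the file with an HTTP PUT.
  return getSignedUrl(s3, command, { expiresIn: 300 });
}
```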
[Architecture diagram description]
Calculations
The Calculations Module enables users to define and manage equations or functions. These equations or functions can then be referenced from other Calculations or from the Pipelines module when performing data processing calculations, such as emissions.
Example calculations could be simple, such as unit conversions, or complex, such as the business's agreed-upon emissions calculation. Calculations are versioned to provide traceability.
Step 1
Users of SIF interact with the Calculations module through an externally available API.
Step 2
The externally available API consists of a REST API in API Gateway. The application logic is deployed in Lambda.
Step 3
User authentication is done through tokens received from Amazon Cognito. Authorization is done through the Access Management module.
Step 4
Calculations data is stored in a DynamoDB table.
Step 5
Amazon SQS asynchronously processes metadata updates to resources such as adding searchable tags.
Step 6
Calculations resource changes emit events to a message bus in EventBridge. Events can be tracked to update other components of the framework.
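To make the idea concrete, the sketch below shows what a request to define a simple unit-conversion calculation might look like. The field names and formula syntax are illustrative assumptions only; refer to the SIF sample code for the actual API contract.

```typescript
// Hypothetical request body for creating a calculation; field names and the
// expression syntax are assumptions, not the documented SIF schema.
const createCalculationRequest = {
  name: 'kwh_to_mwh',
  summary: 'Converts kilowatt-hours to megawatt-hours',
  formula: ':kwh/1000', // assumed expression syntax referencing the kwh parameter
  parameters: [{ index: 0, key: 'kwh', label: 'kWh', type: 'number' }],
  outputs: [{ name: 'mwh', description: 'Energy in megawatt-hours', type: 'number' }],
};

// A POST of this body to the Calculations API (hypothetical) would register the
// function so that pipelines and other calculations can reference it by name.
```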
[Architecture diagram description]
Pipelines
The Pipelines Module enables users to manage Pipeline configurations. These configurations define data processing pipelines used to perform calculations, such as emissions. A Pipeline can be configured to aggregate outputs across executions and groups into metrics. Metrics capture key performance indicators (KPIs), such as total emissions over time.
A user can request a dry run of a Pipeline configuration to have the configuration processed by the Calculator, checking for errors and validating it before creation. Pipeline configurations are versioned to provide traceability.
Step 1
Users of SIF interact with the Pipelines module through an externally available API.
Step 2
The externally available API consists of a REST API in API Gateway. The application logic is deployed in Lambda.
Step 3
User authentication is done through tokens received from Amazon Cognito. Authorization is done through the Access Management module.
Step 4
Pipeline configuration is stored in a DynamoDB table.
Step 5
The Pipelines module can directly invoke the Calculator module to dry run a pipeline configuration.
Step 6
Amazon SQS asynchronously processes metadata updates to resources such as adding searchable tags.
Step 7
Pipelines resource changes emit events to a message bus in EventBridge. Events can be tracked to update other components of the framework.
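The direct invocation in Step 5 can be sketched with the AWS SDK for JavaScript v3 as below. The function name environment variable and payload shape are assumptions for illustration, not the actual Calculator interface.

```typescript
import { LambdaClient, InvokeCommand } from '@aws-sdk/client-lambda';

// Minimal sketch of directly invoking the Calculator Lambda to dry run a
// pipeline configuration.
const lambda = new LambdaClient({});

export async function dryRunPipeline(pipelineConfig: unknown): Promise<unknown> {
  const response = await lambda.send(
    new InvokeCommand({
      FunctionName: process.env.CALCULATOR_FUNCTION_NAME!, // hypothetical
      Payload: Buffer.from(JSON.stringify({ dryRun: true, pipeline: pipelineConfig })),
    })
  );
  return JSON.parse(Buffer.from(response.Payload ?? new Uint8Array()).toString('utf-8'));
}
```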
[Architecture diagram description]
Pipeline Processor
The Pipeline Processor Module is responsible for the orchestration of Pipelines. This includes starting a pipeline execution in response to input files provided by a user and performing any aggregations defined in the pipeline configuration. The Pipeline Processor module also provides the status of pipeline executions.
Step 1
Users of SIF interact with the Pipeline Processor module through an externally available API.
Step 2
The externally available API consists of a REST API in API Gateway. The application logic is deployed in Lambda.
Step 3
User authentication is done through tokens received from Amazon Cognito. Authorization is done through the Access Management module.
Step 4
The REST API allows a user to query the execution status of a pipeline, query for activities processed by a pipeline, and query for metrics aggregated from activities.
Step 5
Activity data processed by a pipeline is stored in an Amazon Aurora Serverless v2 database.
Step 6
Metrics data processed by a pipeline is stored in a DynamoDB table.
Step 7
Pipeline execution is done through tasks defined in Step Functions. The state machine verifies the pipeline and input data, performs calculations by invoking the Calculator, performs aggregations on Calculator outputs, stores aggregations as metrics, and records the status of the execution.
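As a minimal sketch of how an execution like the one in Step 7 could be started with the AWS SDK for JavaScript v3, the code below starts a Step Functions state machine. The state machine ARN environment variable and input shape are assumptions for illustration.

```typescript
import { SFNClient, StartExecutionCommand } from '@aws-sdk/client-sfn';

// Minimal sketch of starting the state machine that executes a pipeline.
const sfn = new SFNClient({});

export async function startPipelineExecution(pipelineId: string, inputFileKey: string): Promise<string> {
  const result = await sfn.send(
    new StartExecutionCommand({
      stateMachineArn: process.env.PIPELINE_STATE_MACHINE_ARN!, // hypothetical
      input: JSON.stringify({ pipelineId, inputFileKey }),
    })
  );
  return result.executionArn!;
}
```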
[Architecture diagram description]
Calculator
The Calculator Module is a backend component that parses and executes the operations defined within a pipeline. These operations can include arithmetic operations or lookups of resources, such as Reference Datasets and Impacts.
The Calculator also captures an audit log of all operations performed in the pipeline, including input values and the version of each resource (for example, Reference Datasets, Impacts, Calculations) used in the execution.
Step 1
The Calculator module is invoked through a Step Functions state machine defined in the Pipeline Processor module.
Step 2
The Calculator uses the Pipeline configuration to execute all of the operations in the configuration.
Step 3
These operations may include looking up Reference Datasets, retrieving Impacts, or retrieving functions defined in the Calculations module. This is done by invoking the Lambda APIs for each module. Retrieved resources can be cached in DynamoDB.
Step 4
Outputs for each activity processed as part of a pipeline are written to the activity data store in an Aurora Serverless v2 database.
Step 5
Audit logs are written to an output location in Amazon S3 through Amazon Kinesis Data Firehose.
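The audit write in Step 5 can be sketched with the AWS SDK for JavaScript v3 as shown below; the delivery stream name and record shape are assumptions for illustration, not the actual SIF audit format.

```typescript
import { FirehoseClient, PutRecordCommand } from '@aws-sdk/client-firehose';

// Minimal sketch of writing an audit record to Kinesis Data Firehose, which
// delivers it to Amazon S3.
const firehose = new FirehoseClient({});

export async function writeAuditRecord(executionId: string, audit: Record<string, unknown>): Promise<void> {
  await firehose.send(
    new PutRecordCommand({
      DeliveryStreamName: process.env.AUDIT_STREAM_NAME!, // hypothetical
      Record: { Data: Buffer.from(JSON.stringify({ executionId, ...audit }) + '\n') },
    })
  );
}
```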
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Deployments for infrastructure and application code changes can be done through AWS CloudFormation and the AWS Cloud Development Kit (AWS CDK). Integration tests exist for all of the modules in addition to tests for end-to-end scenarios. These tests can be run to verify deployments.
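As a minimal AWS CDK sketch of a deployment entry point (the stack name here is hypothetical; the actual SIF stacks are defined in the sample code repository):

```typescript
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';

// Hypothetical stack for illustration; SIF defines its own stacks per module.
class ExampleSifStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // Module infrastructure (API Gateway, Lambda, DynamoDB, and so on) would be
    // declared here as CDK constructs.
  }
}

const app = new cdk.App();
new ExampleSifStack(app, 'ExampleSifStack');
// Deploy the synthesized CloudFormation stack with: npx cdk deploy ExampleSifStack
```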
Security
The infrastructure components of this Guidance were selected to help secure your workloads and minimize your security maintenance tasks. Amazon Cognito and the Access Management module are used for user authentication and authorization, respectively. Database services use encryption at rest, permissions are set between tenants, and tenant data is separated. Both external and internal interfaces are implemented in services that require TLS (HTTPS/SSL) to enforce data encryption in transit. Customer managed keys in AWS Key Management Service (AWS KMS) are used to encrypt data in Kinesis Data Firehose.
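A minimal AWS CDK sketch of such a customer managed key, assuming a hypothetical construct id and alias:

```typescript
import * as kms from 'aws-cdk-lib/aws-kms';
import { Construct } from 'constructs';

// Minimal sketch of a customer managed KMS key with rotation enabled, of the
// kind used to encrypt data in Kinesis Data Firehose.
export function createAuditKey(scope: Construct): kms.Key {
  return new kms.Key(scope, 'AuditLogKey', {
    alias: 'alias/sif-audit-logs', // hypothetical alias
    enableKeyRotation: true,
  });
}
```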
Reliability
For a workload to perform its intended function correctly and consistently, managed services including Lambda (for compute), API Gateway (for APIs), and Amazon SQS (for messaging) are used. This helps ensure that your core services are deployed across multiple Availability Zones.
Key components in this Guidance are split into separate microservices with clear REST interfaces defined between the services. Retries with backoff limits are implemented in clients between services, allowing for reliable application-level architecture.
Deployment of this Guidance can be done through infrastructure as code (IaC). This supports both one-off deployments and integration into continuous integration and continuous deployment (CI/CD) pipelines. Parameters and environment variables for the applications are handled through standard mechanisms such as AWS Systems Manager Parameter Store.
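A minimal sketch of configuring retries with backoff on an AWS SDK for JavaScript v3 client, in the spirit of the retry behavior described above (the attempt count is an assumption):

```typescript
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';

// Minimal sketch of client-level retry configuration.
const dynamo = new DynamoDBClient({
  maxAttempts: 5,        // retry transient failures up to 4 additional times
  retryMode: 'adaptive', // built-in exponential backoff with client-side rate limiting
});
```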
Performance Efficiency
Database services in this Guidance were chosen based on the access patterns and use cases required. DynamoDB was chosen for the NoSQL datastore use cases, and Aurora Serverless v2 was chosen for the data layer requiring relational access patterns. Additionally, deployment of this Guidance can be done through IaC. Customers can quickly deploy and test this Guidance with their data and use case, and they can terminate services just as quickly when they are done. Customers are able to select their preferred AWS Region to deploy this Guidance using the provided IaC tooling.
Cost Optimization
To help you build and operate cost-aware workloads, this Guidance gives you the option to enable a flexible pricing model. Compute Savings Plans can be enabled for Lambda to help reduce your costs. You can also assign cost-allocation tags to organize your resources and track your AWS costs on a detailed level. To help you scale using only the minimum resources, this Guidance utilizes services in layers. The compute layer uses Lambda while the data layer incorporates the auto scaling capabilities for Aurora and DynamoDB, ensuring resources are scaled based on demand.
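A minimal AWS CDK sketch of applying cost-allocation tags across an app (tag keys and values are assumptions for illustration):

```typescript
import * as cdk from 'aws-cdk-lib';

// Minimal sketch of tagging every resource in a CDK app for cost allocation.
const app = new cdk.App();
cdk.Tags.of(app).add('application', 'sif');        // hypothetical tag
cdk.Tags.of(app).add('costCenter', 'sustainability'); // hypothetical tag
```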
Sustainability
Primary services within the architecture, such as Lambda, DynamoDB, and Aurora, offer automated scaling, which optimizes resource utilization. These services can scale from zero to peak demands to ensure the minimum provisioned capacity is used to meet demand. This Guidance also follows a serverless architecture, in which compute can be scaled up and down with demand.
Implementation Resources
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin. There are two sample code options for this Guidance:
The Sustainability Insights Framework (SIF) sample code provides foundational software building blocks to help accelerate the design and implementation of your application to automate your carbon footprint tracking.
The SIF Command Line Interface (SIF-CLI) sample code is an open-source tool that empowers you to interact with SIF components through your command-line shell. With minimal configuration, SIF-CLI simplifies many of the complexities associated with managing SIF.
Related Content
Guidance for Carbon Accounting on AWS
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.