AWS Database Blog
Amazon DynamoDB re:Invent 2024 recap
For the Amazon DynamoDB team, AWS re:Invent 2024 was an incredible opportunity to connect and reconnect with our customers. The key themes this year were “better together” integrations, data modeling, and building globally resilient, scalable applications on DynamoDB. In case you missed some of these sessions, or want to catch up on why customers like Klarna, Krafton, Vanguard, Fidelity, and JPMorgan Chase are building on DynamoDB, this post summarizes some of the DynamoDB highlights from re:Invent 2024.
Transition from AWS DMS to zero-ETL to simplify real-time data integration with Amazon Redshift
Zero-ETL integrations automate data movement into Amazon Redshift, eliminating the need for traditional extract, transform, and load (ETL) pipelines. With zero-ETL integrations, you can reduce operational overhead, lower costs, and accelerate your data-driven initiatives, so your organization can focus more on deriving actionable insights and less on managing the complexities of data integration. In this post, we discuss best practices for migrating your ETL pipeline from AWS Database Migration Service (AWS DMS) to zero-ETL integrations for Amazon Redshift.
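As a rough sketch of what the migration target looks like, the following Python (boto3) snippet creates a zero-ETL integration between a source database and an Amazon Redshift namespace. The ARNs, Region, and integration name are placeholders; see the post for the full prerequisites on the source and target.

```python
import boto3

# Placeholder ARNs -- replace with your own source database and
# Redshift (or Redshift Serverless) namespace ARNs.
SOURCE_ARN = "arn:aws:rds:us-east-1:123456789012:db:my-source-db"
TARGET_ARN = ("arn:aws:redshift-serverless:us-east-1:123456789012:"
              "namespace/my-namespace")

rds = boto3.client("rds", region_name="us-east-1")

# Create the zero-ETL integration; once it becomes active, AWS manages
# ongoing replication into Redshift without an ETL pipeline.
response = rds.create_integration(
    IntegrationName="orders-zero-etl",
    SourceArn=SOURCE_ARN,
    TargetArn=TARGET_ARN,
)
print(response["Status"])
```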
Optimize Amazon RDS performance with io2 Block Express storage for production workloads
Choosing a storage configuration that meets performance requirements is a common challenge when creating and managing database instances. In this post, we provide an end-to-end guide to choosing a storage type based on your use case. In addition, we compare the performance of different storage volumes on the open source engines supported by Amazon RDS, to validate them from a database-centric perspective.
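For illustration, here is a minimal boto3 sketch of moving an existing instance to io2 Block Express storage. The instance identifier, volume size, and IOPS values are placeholders; size them from your own workload's observed throughput and latency needs.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Move an existing instance to io2 Block Express storage.
rds.modify_db_instance(
    DBInstanceIdentifier="my-prod-db",  # placeholder instance name
    StorageType="io2",
    AllocatedStorage=400,   # GiB
    Iops=12000,             # provisioned IOPS for the io2 volume
    ApplyImmediately=True,  # otherwise the change waits for the
                            # next maintenance window
)
```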
How Monzo Bank reduced cost of TTL from time series index tables in Amazon Keyspaces
At Monzo, we use Amazon Keyspaces (for Apache Cassandra) as our main operational database. Today, we store over 350 TB of data across more than 2,000 tables in Amazon Keyspaces, handling over 2,000,000 reads and 100,000 writes per second at peak. In this post, we share how we reduced the operating cost of an index by replacing the Amazon Keyspaces Time to Live (TTL) setting with a different row-expiry mechanism, while preserving the index's semantics.
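As a hedged illustration of this general idea (not necessarily Monzo's exact implementation, which the post describes), the following Python sketch buckets time series index rows into per-day tables so that expiry becomes a whole-table drop instead of per-row TTL deletes. All table and column names here are hypothetical.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30

def table_for(ts: datetime) -> str:
    # One index table per UTC day, e.g. tx_index_20241130.
    return f"tx_index_{ts:%Y%m%d}"

def ddl_for_day(ts: datetime) -> str:
    # Each day's index rows live in their own table.
    return (
        f"CREATE TABLE IF NOT EXISTS idx.{table_for(ts)} ("
        "  account_id text, tx_id text, tx_time timestamp,"
        "  PRIMARY KEY ((account_id), tx_time, tx_id))"
    )

def expired_tables(now: datetime) -> list[str]:
    # In practice you would list tables via system_schema and compare
    # against the cutoff; here we derive the one table aging out today.
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [table_for(cutoff)]

now = datetime.now(timezone.utc)
print(ddl_for_day(now))
for table in expired_tables(now):
    # Dropping a whole table expires every row in it at once.
    print(f"DROP TABLE IF EXISTS idx.{table}")
```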
Migrating Oracle databases from Exadata to Amazon RDS for Oracle: addressing performance considerations
In this post, we provide a comprehensive guide to addressing performance considerations when migrating Oracle databases from Exadata to Amazon RDS for Oracle. We explore methods to analyze Exadata workload characteristics, including determining Smart IO usage, examining database-level I/O patterns, and identifying SQL statements that use Exadata-specific features. We also discuss alternatives available on Amazon RDS for Oracle to mitigate potential performance impacts.
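As one example of gauging Smart IO usage, the following python-oracledb sketch compares offload-eligible bytes against total physical reads on the source system, using standard v$sysstat statistic names. The connection details are placeholders, and this is only a coarse first-pass signal, not the full analysis the post walks through.

```python
import oracledb  # pip install oracledb

# Placeholder credentials/DSN -- point this at the source Exadata DB.
conn = oracledb.connect(user="perf", password="...", dsn="exadata-scan/PDB1")

sql = """
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN (
        'cell physical IO bytes eligible for predicate offload',
        'physical read total bytes')
"""
with conn.cursor() as cur:
    stats = dict(cur.execute(sql).fetchall())

eligible = stats.get(
    'cell physical IO bytes eligible for predicate offload', 0)
total = stats.get('physical read total bytes', 1)

# A high share suggests the workload leans heavily on Smart Scan,
# which has no direct equivalent on RDS for Oracle.
print(f"Offload-eligible share of physical reads: {eligible / total:.1%}")
```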
Reduce latency and cost in read-heavy applications using Amazon DynamoDB Accelerator
Amazon DynamoDB Accelerator (DAX) is a fully managed, in-memory cache for DynamoDB. By using DAX with DynamoDB, you can reduce the latency of read requests in your application. In this post, we discuss how to improve latency and reduce cost when using DynamoDB for read-heavy applications.
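For illustration, here is a minimal read through DAX using the amazondax Python client, which mirrors the low-level DynamoDB client interface so cached reads are close to a drop-in change. The cluster endpoint, table name, and key are placeholders.

```python
from amazondax import AmazonDaxClient  # pip install amazondax

# Placeholder endpoint -- use your DAX cluster's discovery endpoint
# (daxs:// for an encrypted cluster).
dax = AmazonDaxClient(
    endpoint_url="daxs://my-cluster.xxxx.dax-clusters.us-east-1.amazonaws.com"
)

# Reads are served from the DAX item cache when possible and fall
# through to DynamoDB on a cache miss.
resp = dax.get_item(
    TableName="Orders",
    Key={"pk": {"S": "customer#123"}, "sk": {"S": "order#2024-11-30"}},
)
print(resp.get("Item"))
```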
Authenticate Amazon RDS for Db2 instances using on-premises Microsoft Active Directory and Kerberos
In this post, we demonstrate how you can extend your existing Microsoft Active Directory (AD) infrastructure and Kerberos authentication to Amazon RDS for Db2.
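As a sketch of what this looks like programmatically, the following boto3 call creates an RDS for Db2 instance joined to a self-managed (on-premises) Active Directory. The domain values, DNS IPs, and Secrets Manager ARN are placeholders, and other Db2 prerequisites (such as licensing configuration) are omitted; the post covers them in full.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# The referenced secret holds credentials of an AD user permitted to
# join computers to the domain.
rds.create_db_instance(
    DBInstanceIdentifier="db2-prod",
    Engine="db2-se",
    DBInstanceClass="db.m6i.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    ManageMasterUserPassword=True,
    LicenseModel="bring-your-own-license",
    # Self-managed AD join parameters (placeholders):
    DomainFqdn="corp.example.com",
    DomainOu="OU=RDS,DC=corp,DC=example,DC=com",
    DomainAuthSecretArn=(
        "arn:aws:secretsmanager:us-east-1:123456789012:secret:ad-join"
    ),
    DomainDnsIps=["10.0.0.10", "10.0.0.11"],
)
```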
FundApps’s journey from SQL Server to Amazon Aurora Serverless v2 with Babelfish
FundApps, founded in 2010, is one of the pioneers in the Regulatory Technology (RegTech) space, which includes compliance monitoring and reporting. FundApps decided to rearchitect their environment into a cloud-based architecture on AWS to better support the growth of their business. For more information, see Faster, cheaper, greener: Pick three — FundApps modernization journey. In this post, we focus on the persistence layer of the FundApps regulatory data service. You learn how FundApps improved the service's scalability, reduced cost, and streamlined operations by migrating from a SQL Server database to a cloud-centered solution combining Amazon Aurora Serverless v2 with Babelfish for Aurora PostgreSQL and Amazon Simple Storage Service (Amazon S3).
Shrink storage volumes for your RDS databases and optimize your infrastructure costs
Recently, Amazon RDS launched the ability to shrink storage volumes using Amazon RDS Blue/Green Deployments – a nice addition to the list of use cases that Blue/Green Deployments now support. In this post, we cover how to use the new storage volume shrink feature in Amazon RDS Blue/Green Deployments to minimize the downtime required to perform a storage size reduction. We also review mechanisms to monitor the progress of the storage shrink operation, and best practices for arriving at the optimal storage size for your task.
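As a rough sketch, assuming the TargetAllocatedStorage parameter introduced with this feature, the following boto3 snippet creates a green environment with a smaller volume than the blue one and checks the deployment status. Identifiers and sizes are placeholders; switchover and cleanup steps are covered in the post.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the green environment with a smaller allocated storage than
# the blue (source) instance.
bg = rds.create_blue_green_deployment(
    BlueGreenDeploymentName="shrink-orders-db",
    Source="arn:aws:rds:us-east-1:123456789012:db:orders-db",
    TargetAllocatedStorage=200,  # GiB, smaller than the blue volume
)

# Poll the deployment until the green environment is available, then
# switch over during a low-traffic window.
bg_id = bg["BlueGreenDeployment"]["BlueGreenDeploymentIdentifier"]
status = rds.describe_blue_green_deployments(
    BlueGreenDeploymentIdentifier=bg_id
)
print(status["BlueGreenDeployments"][0]["Status"])
```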
Best practices for creating a VPC for Amazon RDS for Db2
You can create an Amazon RDS for Db2 instance by using the AWS Management Console, AWS Command Line Interface (AWS CLI), AWS CloudFormation, Terraform by HashiCorp, AWS Lambda functions, or other methods. One of the prerequisites for creating an RDS for Db2 instance is an appropriately configured virtual private cloud (VPC). This post shows how to create a VPC that follows best practices for any Amazon RDS database in general, and Amazon RDS for Db2 in particular, through a one-click automated deployment.
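For illustration, here is a minimal boto3 sketch of the core VPC prerequisites: a VPC with DNS enabled, two subnets in different Availability Zones, and a DB subnet group. The CIDRs, names, and Region are placeholders, and a production setup (like the one-click deployment in the post) also configures route tables, security groups, and optionally VPC endpoints.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
rds = boto3.client("rds", region_name="us-east-1")

# Create the VPC and enable DNS support/hostnames, which RDS relies on.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})

# Two private subnets in different Availability Zones.
subnet_ids = []
for az, cidr in [("us-east-1a", "10.0.1.0/24"),
                 ("us-east-1b", "10.0.2.0/24")]:
    subnet = ec2.create_subnet(
        VpcId=vpc_id, AvailabilityZone=az, CidrBlock=cidr)
    subnet_ids.append(subnet["Subnet"]["SubnetId"])

# RDS requires a DB subnet group spanning at least two AZs.
rds.create_db_subnet_group(
    DBSubnetGroupName="db2-subnet-group",
    DBSubnetGroupDescription="Private subnets for RDS for Db2",
    SubnetIds=subnet_ids,
)
```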