An application was behaving sluggishly, and I decided to take a look to identify and fix the cause. The problem was narrowed down to the RDS database taking a long time to respond to requests, and the mitigation the team decided on was to "get a bigger instance." However, my analytical mind wanted to understand the root of the problem, to ensure that throwing more compute at it would actually solve it.
Periodic changes to production cloud resources should be expected, as the cloud offers elasticity to scale in and out with demand. Although some changes are riskier than others, the AWS RDS processes for applying (and rolling back) these changes have been battle-tested. Despite this, it is always good for organizations to have their own backup and restore strategies before riskier changes are applied - after all, the data does belong to them. In this post, I'll propose several methods to back up production AWS RDS databases that are managed via Terraform, along with the considerations for each.
The casual observer might wonder - if there are existing RDS offerings for both MySQL and PostgreSQL, why would Amazon Aurora be introduced? To understand its unique selling point, and its claims of scalability and cost effectiveness, we need to look at how traditional relational databases handle scaling out.
I came across and took an interest in the pg_auto_failover PostgreSQL extension, as I had previously written several posts about failovers in PostgreSQL on Docker. I realized that its monitor, which detects unhealthy nodes and triggers the failover, is itself a single point of failure. In this post, I suggest several ways to build some resiliency into the monitor.
Most databases allow database administrators to configure a variety of parameters. In this post, I explore and document the behaviors of MySQL 8.0 and PostgreSQL 12 when the TLS cipher suite list parameter is set to an empty string.
In my earlier post, I used MongoDB to identify some trends in several years of documents from a previous volunteer role. In this post, I'll share the process, a sample document schema, and the queries used to derive insights into those questions.
While tidying up 10 years' worth of digital documents from a previous volunteer role, I found my excuse to get familiar with MongoDB, identify trends, and answer questions that I had often pondered.
A lesser-known limit of the PostgreSQL JDBC driver is that at most 32767 parameters can be passed into a PreparedStatement. This upper bound is derived from the maximum value of the signed Short data type in Java. In this post, I will show that this limit is not present when using the psql CLI client to connect directly to the PostgreSQL database, and only appears when using the JDBC driver. Subsequently, I will suggest ways to address this limit.
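As a rough illustration of one workaround, the sketch below splits a large parameter list into chunks that each stay within the Short.MAX_VALUE bound, issuing one PreparedStatement per chunk. The table name `items` and the chunked-query approach are assumptions for the example, not necessarily what the post itself uses.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ChunkedQuery {
    // The driver encodes the bind-parameter count as a signed 2-byte
    // integer, so one statement can carry at most 32767 placeholders.
    static final int MAX_PARAMS = Short.MAX_VALUE; // 32767

    // Build a "?,?,...,?" list with n placeholders.
    static String placeholders(int n) {
        StringBuilder sb = new StringBuilder("?");
        for (int i = 1; i < n; i++) sb.append(",?");
        return sb.toString();
    }

    // Query a hypothetical `items` table in chunks that respect the limit.
    static void queryInChunks(Connection conn, long[] ids) throws SQLException {
        for (int start = 0; start < ids.length; start += MAX_PARAMS) {
            int len = Math.min(MAX_PARAMS, ids.length - start);
            String sql = "SELECT id FROM items WHERE id IN (" + placeholders(len) + ")";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < len; i++) {
                    ps.setLong(i + 1, ids[start + i]); // JDBC params are 1-indexed
                }
                try (ResultSet rs = ps.executeQuery()) {
                    // consume rows here
                }
            }
        }
    }
}
```

Chunking keeps each statement under the limit at the cost of extra round trips; alternatives such as passing an array parameter or staging ids in a temporary table trade differently.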
A while ago, I dived deep into time zones on MySQL. Here is my scratchpad.
I found that installing and starting PostgreSQL on Alpine Linux was not so straightforward a task. So here's a short post on how I did it.