This article discusses the results of benchmark tests made using pgbench to measure the performance of different PostgreSQL versions (v9.5.21 to v12.1) on an AWS m5.metal instance. The instance has 3.1 GHz Intel Xeon Platinum 8175 processors with the Intel Advanced Vector Extension (AVX-512) instruction set. Data is kept on a 300GB SSD with 2000 IOPS, and WAL is kept on another 150GB SSD with 2000 IOPS.

The most popular tool for measuring the performance of PostgreSQL is pgbench. By default, pgbench runs a TPC-B-like script that includes simple UPDATE, SELECT, and INSERT statements. You can provide your own scripts for the test, but in our case we used the default scripts for the pgbench read-write test. The scale factor that we used is 10,000, which generates approximately 150GB of data. We performed 30-minute read-write pgbench tests across different client counts (1, 16, 32, 64, 128, 256, and 512), invoked as `pgbench -c $threads -j $threads -T 1800 -M prepared postgres` (a minimal driver script tying these runs together is sketched at the end of this post). We ran pgbench on the same server as our database instance, so we can assume that the effect of network latency is negligible; the CPU and memory consumption of the pgbench processes are also negligible.

Poorly configured OS kernel parameters can cause degradation in database server performance, so it is imperative that we configure the OS parameters according to the database server and its workload. Here are the operating system settings that we tuned for the test. Using huge pages reduces overhead when working with large contiguous chunks of memory, as PostgreSQL does, particularly when using large values of shared_buffers. Since we are using a large shared buffer (100GB), we set the huge pages parameter so that the entire shared buffer fits into huge pages; the required number of huge pages is calculated from the peak memory usage of the postmaster process (grep VmPeak /proc/&lt;postmaster PID&gt;/status). Since the entire data set (approximately 150GB) fits into RAM (396GB), we minimized the swappiness. The kernel's connection backlog limit controls the number of "not-yet-accepted" connections that can be queued before new connections fail; setting it to a higher value (the default is 128) is needed on a high-load server where connection rates are high. We also need to increase the max_map_count kernel parameter to avoid running out of map areas when accommodating large numbers of processes.
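As an illustration of the kernel tuning described above, here is a minimal sketch. The specific sysctl values, the use of net.core.somaxconn as the backlog parameter, and the reliance on a PGDATA environment variable to locate postmaster.pid are assumptions for the example, not the exact configuration used in the test.

```bash
#!/bin/bash
# Illustrative kernel settings along the lines discussed above.
# Values are examples, not the article's published configuration.

# Allow a deeper backlog of not-yet-accepted connections (kernel default is 128).
sudo sysctl -w net.core.somaxconn=1024

# More memory map areas, to accommodate a large number of backend processes.
sudo sysctl -w vm.max_map_count=262144

# The working set fits in RAM, so discourage swapping.
sudo sysctl -w vm.swappiness=1

# Estimate the huge pages needed from the postmaster's peak memory usage (VmPeak),
# using the system huge page size; PGDATA must point at the running data directory.
postmaster_pid=$(head -n 1 "$PGDATA/postmaster.pid")
vmpeak_kb=$(awk '/^VmPeak/ {print $2}' "/proc/$postmaster_pid/status")
hugepage_kb=$(awk '/^Hugepagesize/ {print $2}' /proc/meminfo)
echo "Suggested vm.nr_hugepages: $(( (vmpeak_kb + hugepage_kb - 1) / hugepage_kb ))"
```

The huge pages estimate follows the general approach described in the PostgreSQL documentation of sizing huge pages from the postmaster's VmPeak value.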
We also need to tune the PostgreSQL parameters to get better performance; you can find the details of these parameters in the PostgreSQL documentation. We set shared_buffers to 25% of the RAM. We also had to increase max_wal_size in order to spread out the process of writing large quantities of new or changed data over a longer period of time. We ensured that at least one timed checkpoint happens after 15 minutes during the 30-minute test period, and the checkpoint writes are spread out as much as possible, since we set the completion target to 0.9. Since we used SSDs for storage, a higher value of effective_io_concurrency is recommended; this parameter controls the number of I/O requests that the storage can handle concurrently.

Although we performed the test for various client counts, we kept max_connections at 1000. As long as those connections are not used, we can ignore the effects of setting max_connections to a higher value. However, it is recommended to use a connection pooler instead of opening too many database connections.

Usually pgbench updates the tables uniformly. With a high scale factor, there is a high chance that a single page from a large table (in this case, "pgbench_accounts") will not be updated by many transactions. In other words, most of the transactions will cause full-page writes, which also increases the amount of write-ahead log (WAL) written per second. In our case, the size of "pgbench_accounts" is 126GB, which is 16,515,072 blocks at the default 8KB block size.
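To make the PostgreSQL parameter discussion above concrete, here is a hedged sketch of how those settings could be applied with ALTER SYSTEM. The max_wal_size and effective_io_concurrency values are placeholders (the post does not state them); the remaining values simply restate the figures mentioned above.

```bash
#!/bin/bash
# Illustrative PostgreSQL settings along the lines discussed above, applied via ALTER SYSTEM.
# Values marked "assumed" are placeholders, not the article's published configuration.
psql -U postgres -d postgres <<'SQL'
ALTER SYSTEM SET shared_buffers = '100GB';             -- roughly 25% of RAM
ALTER SYSTEM SET huge_pages = 'on';                    -- back shared_buffers with huge pages
ALTER SYSTEM SET max_connections = 1000;
ALTER SYSTEM SET checkpoint_timeout = '15min';         -- at least one timed checkpoint per 30-minute run
ALTER SYSTEM SET checkpoint_completion_target = 0.9;   -- spread out checkpoint writes
ALTER SYSTEM SET max_wal_size = '100GB';               -- assumed value; spreads out heavy WAL writes
ALTER SYSTEM SET effective_io_concurrency = 200;       -- assumed value; higher suits SSD storage
SQL
# shared_buffers, huge_pages, and max_connections only take effect after a server restart.
```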
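Finally, to tie the benchmark procedure together, here is a minimal sketch of a driver script for the runs described earlier. The initialization step and the database name are assumptions for the example; only the per-run pgbench invocation is taken from the post.

```bash
#!/bin/bash
# Minimal sketch of the benchmark runs described in this post:
# scale factor 10,000, 30-minute read-write runs, prepared statements.
set -euo pipefail

# One-time initialization; generates roughly 150GB of data (assumed invocation).
pgbench -i -s 10000 postgres

# 30-minute read-write run for each client count used in the post.
for threads in 1 16 32 64 128 256 512; do
    echo "=== clients: $threads ==="
    pgbench -c "$threads" -j "$threads" -T 1800 -M prepared postgres
done
```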