Jaz's Blog
A space where I rant about computers

Your Data Fits in Memory (GraphD Part 1)

We need a fast way to query multiple potentially large sets of data on-demand at interactive speeds. Sometimes the easiest solution to a hard problem is to build the right tool for the job.

Scaling Go to 192 Cores with Heavy I/O

When running on bare metal, however, we found two key limitations of the Go runtime so far:

1. Systems with a lot of RAM can have a lot of allocations, prompting the Go garbage collector to aggressively steal CPU.

2. Applications performing hundreds of thousands of requests per second may make use of thousands of TCP sockets, bottlenecking the Go runtime's network backend on syscalls.

Solving Thundering Herds with Request Coalescing in Go

Using request coalescing, we can serve a 200,000-user-strong thundering herd with only one request to our DB: every other identical request waits for the first request's result to hit the cache before resolving.

Speeding up Postgres Queries by 200x with Analyze

Postgres uses an internal table called 'pg_statistic' to keep track of metadata on all tables in the DB. Postgres's Planner uses these statistics when estimating the cost of operations, and if they're out of date, the Planner can pick a suboptimal plan for our query. To trigger an update of 'pg_statistic' manually for a table, we can run 'ANALYZE' on it, helping the Planner estimate costs better and, in some cases, speeding up queries dramatically.

How to use ChatGPT to Write Good Code Faster

ChatGPT has incredible potential for accelerating your development _flow_. When starting projects from scratch, it lets you rapidly iterate, make decisions that would usually mean a painful refactor, or make use of libraries and APIs you're unfamiliar with, without having to make 30 Google searches to read docs and StackOverflow samples.