Using request coalescing, we can serve the 200,000-user-strong thundering herd by making only one request to our DB: every other identical request waits for the result of that first request to hit the cache before it resolves.
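As a minimal sketch of that pattern in Go, using the 'golang.org/x/sync/singleflight' package (the 'fetchUser' function, the cache key, and the herd of five goroutines are illustrative stand-ins, not the original setup):

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"
)

// fetchUser stands in for the expensive DB query we want to coalesce.
func fetchUser(id string) (string, error) {
	fmt.Println("hitting the DB for", id) // printed once per key, not once per caller
	return "user:" + id, nil
}

func main() {
	var g singleflight.Group
	var wg sync.WaitGroup

	// Simulate a (tiny) thundering herd: five concurrent identical requests.
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// All callers sharing a key share one in-flight fetchUser call;
			// the rest block until that single result is available.
			v, _, _ := g.Do("user-42", func() (interface{}, error) {
				return fetchUser("42")
			})
			fmt.Println("got", v)
		}()
	}
	wg.Wait()
}
```

Every goroutine gets the same value back, but 'fetchUser' runs only once; in the real scenario, the winning call would also write its result into the cache.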
Instead of a naive full-table join, let's structure the query as a Common Table Expression and leverage the power of doing significantly less work to make things go faster. The CTE cuts our query time from 12 seconds to ~0.12 seconds!
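To make the shape of that rewrite concrete, here's a hedged sketch (the 'users'/'orders' schema and the connection string are invented for illustration, and how much a CTE helps depends on the query and your Postgres version):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver
)

// Naive version, for contrast: join the full tables and filter late.
//   SELECT u.id, SUM(o.total) FROM users u
//   JOIN orders o ON o.user_id = u.id
//   WHERE o.created_at > now() - interval '7 days'
//   GROUP BY u.id;
//
// CTE version: narrow the rows first, so the join touches far less data.
const cteQuery = `
WITH recent_orders AS (
    SELECT user_id, total
    FROM orders
    WHERE created_at > now() - interval '7 days'
)
SELECT u.id, SUM(r.total) AS total
FROM users u
JOIN recent_orders r ON r.user_id = u.id
GROUP BY u.id;`

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query(cteQuery)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id int64
		var total float64
		if err := rows.Scan(&id, &total); err != nil {
			log.Fatal(err)
		}
		fmt.Println(id, total)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```

(Worth noting: since Postgres 12 the planner can inline CTEs, so check the plan with 'EXPLAIN ANALYZE' rather than assuming the rewrite wins.)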
Postgres keeps an internal table called 'pg_statistic' that tracks statistics about the contents of every table in the DB. Postgres's Planner uses these statistics when estimating the cost of operations; if they're out of date, the Planner can pick a suboptimal plan for our query. To update 'pg_statistic' manually for a table, we can run 'ANALYZE' on it, helping the Planner estimate costs better and (in some cases) speeding up queries dramatically.
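Running it from application code is a one-liner; here's a sketch with Go's 'database/sql' (the 'orders' table and connection string are assumptions):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres driver, registered via its side-effect import
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Refresh the planner's statistics for a single table. A bare ANALYZE
	// (no table name) walks every table in the current database instead.
	if _, err := db.Exec(`ANALYZE orders;`); err != nil {
		log.Fatal(err)
	}
}
```

In practice, Postgres's autovacuum daemon keeps these statistics reasonably fresh on its own, so a manual 'ANALYZE' pays off mostly after bulk loads, large deletes, or other sudden shifts in a table's data distribution.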
ChatGPT has incredible potential for accelerating your development _flow_. When working on new projects and starting things from scratch, it allows you to rapidly iterate, make decisions that would usually mean a painful refactor, or make use of libraries and/or APIs you're unfamiliar with, without having to make 30 Google searches to read docs and StackOverflow samples.
I think we’ll soon find that LLMs have strong parallels with Cloud Compute, parallels that will keep them affordable and accessible, allowing the next generation of software projects and companies to thrive.
A closer examination of these contrasting trajectories reveals how the research-driven innovation behind LLMs can triumph over the consumerism-driven hype of the Metaverse, which puts the cart of "monetizable experiences" before the horse of a technology capable of sustaining them.