How HLS Works
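That "text files pointing at tiny video files" structure is easy to see in code: a media playlist is plain text, and extracting the segment URIs a player would fetch takes a few lines. A minimal Go sketch with a made-up playlist (segment names and durations are illustrative, not from the post):

```go
package main

import (
	"fmt"
	"strings"
)

// A made-up media playlist: plain text whose non-tag lines point at short
// video segments; #EXTINF tags carry each segment's duration in seconds.
const playlist = `#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXTINF:6.000,
segment0.ts
#EXTINF:6.000,
segment1.ts
#EXTINF:4.500,
segment2.ts
#EXT-X-ENDLIST`

// segmentURIs extracts the segment references a player would fetch in order.
func segmentURIs(m3u8 string) []string {
	var segs []string
	for _, line := range strings.Split(m3u8, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue // tags and comments both start with '#'
		}
		segs = append(segs, line)
	}
	return segs
}

func main() {
	fmt.Println(segmentURIs(playlist)) // [segment0.ts segment1.ts segment2.ts]
}
```

A real player repeats this for live streams, re-fetching the playlist as new segments appear at the end.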
HLS (HTTP Live Streaming) is basically just a bunch of text files in a trench coat pointing at a handful of itty-bitty, seconds-long video files.

An entire Social Network in 1.6GB (GraphD Part 2)
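The post builds on Roaring Bitmaps, a compressed bitmap format. As a rough sketch of the underlying idea only (a plain uncompressed bitset, not actual Roaring encoding, with made-up user IDs): set membership becomes one bit per user, and a query like "followers in common" becomes a word-wise AND.

```go
package main

import (
	"fmt"
	"math/bits"
)

// Bitset is an uncompressed uint64-word bitmap: bit i set means "user i is
// in the set". Roaring Bitmaps compress the same structure, which is what
// makes holding a whole follow graph in ~1.6GB feasible.
type Bitset []uint64

func (b Bitset) Set(i uint32) { b[i/64] |= 1 << (i % 64) }

// And intersects two equal-length bitsets with one AND per 64 users.
func (b Bitset) And(o Bitset) Bitset {
	out := make(Bitset, len(b))
	for w := range b {
		out[w] = b[w] & o[w]
	}
	return out
}

func (b Bitset) Count() int {
	n := 0
	for _, w := range b {
		n += bits.OnesCount64(w)
	}
	return n
}

func main() {
	// Followers of two accounts over a toy 256-user universe (IDs made up).
	alice, bob := make(Bitset, 4), make(Bitset, 4)
	for _, id := range []uint32{3, 64, 120, 200} {
		alice.Set(id)
	}
	for _, id := range []uint32{64, 120, 255} {
		bob.Set(id)
	}
	fmt.Println(alice.And(bob).Count()) // 2 followers in common
}
```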
Roaring Bitmaps offer an even more efficient way to store and query an entire social graph, fitting the full network of 5.5M users and 164M+ follows into a ~1.6GB SQLite DB on disk.

Your Data Fits in Memory (GraphD Part 1)
We need a fast way to query multiple potentially large sets of data on-demand at interactive speeds. Sometimes the easiest solution to a hard problem is to build the right tool for the job.

Scaling Go to 192 Cores with Heavy I/O
When running on bare metal, however, we found two key limitations of the Go runtime:

1. Systems with a lot of RAM can have a lot of allocations, prompting the Go Garbage Collector to aggressively steal CPU.
2. Applications performing hundreds of thousands of requests per second may make use of thousands of TCP sockets, bottlenecking the Go runtime's network backend on syscalls.

Solving Thundering Herds with Request Coalescing in Go
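A minimal sketch of the coalescing idea (the post's actual implementation isn't shown here, and in production you might reach for golang.org/x/sync/singleflight instead): the first request for a key performs the fetch, and every concurrent duplicate waits for that result rather than hitting the DB itself.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// call tracks one in-flight fetch; latecomers wait on done instead of
// issuing their own query.
type call struct {
	done chan struct{}
	val  string
}

// Coalescer runs the fetch for a given key at most once at a time and
// shares the result with every concurrent caller.
type Coalescer struct {
	mu     sync.Mutex
	flight map[string]*call
}

func NewCoalescer() *Coalescer {
	return &Coalescer{flight: make(map[string]*call)}
}

func (c *Coalescer) Do(key string, fetch func() string) string {
	c.mu.Lock()
	if cl, ok := c.flight[key]; ok {
		c.mu.Unlock()
		<-cl.done // coalesce: wait for the first request's result
		return cl.val
	}
	cl := &call{done: make(chan struct{})}
	c.flight[key] = cl
	c.mu.Unlock()

	cl.val = fetch() // only this caller pays for the DB hit
	c.mu.Lock()
	delete(c.flight, key)
	c.mu.Unlock()
	close(cl.done) // wake every waiter; cl.val is set before the close
	return cl.val
}

func main() {
	var fetches int32
	c := NewCoalescer()
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ { // a small thundering herd
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Do("profile:alice", func() string {
				atomic.AddInt32(&fetches, 1)
				time.Sleep(10 * time.Millisecond) // simulated slow DB query
				return "profile data"
			})
		}()
	}
	wg.Wait()
	fmt.Println("DB fetches for 1000 requests:", atomic.LoadInt32(&fetches))
}
```

After the result lands in a cache, subsequent herds don't even reach the coalescer.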
Using request coalescing, we can serve the 200,000-user-strong thundering herd by making only one request to our DB: every other identical request waits for the first request's results to hit the cache before it resolves.

Speeding Up Massive PostgreSQL Joins with Common Table Expressions
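As a hedged illustration on a hypothetical follows(actor_did, target_did) schema (not the post's actual tables or query), the restructuring looks roughly like this: instead of joining the full table against itself, materialize the small set of relevant rows first and join against that.

```go
package main

import "fmt"

// Hypothetical "follows of my follows" query. The naive version joins the
// whole follows table against itself and filters afterwards.
const naiveJoin = `
SELECT f2.target_did, COUNT(*) AS cnt
FROM follows f1
JOIN follows f2 ON f2.actor_did = f1.target_did
WHERE f1.actor_did = $1
GROUP BY f2.target_did
ORDER BY cnt DESC
LIMIT 50;`

// The CTE version computes the small set of accounts we follow first, so
// the big join touches far fewer rows. MATERIALIZED (Postgres 12+) forces
// the planner to evaluate the CTE once instead of inlining it.
const withCTE = `
WITH my_follows AS MATERIALIZED (
    SELECT target_did FROM follows WHERE actor_did = $1
)
SELECT f.target_did, COUNT(*) AS cnt
FROM my_follows m
JOIN follows f ON f.actor_did = m.target_did
GROUP BY f.target_did
ORDER BY cnt DESC
LIMIT 50;`

func main() {
	fmt.Println(withCTE)
}
```

Running EXPLAIN ANALYZE on both forms is the way to confirm the planner actually does less work for your data.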
Instead, let's structure the query as a Common Table Expression and leverage the power of doing significantly less work to make things go faster. Using a CTE instead of a naive full-table join cuts down our query time from 12 seconds to ~0.12 seconds!

Speeding up Postgres Queries by 200x with Analyze
Postgres uses an internal table called 'pg_statistic' to keep track of some metadata on all tables in the DB. Postgres's Planner uses these statistics when estimating the cost of operations, which, if out of date, can cause the Planner to pick a suboptimal plan for our query. To trigger an update of 'pg_statistic' manually for a table, we can run 'ANALYZE' on it, helping the Planner estimate costs better and speeding up queries dramatically (in some cases).

How to use ChatGPT to Write Good Code Faster
ChatGPT has incredible potential for accelerating your development _flow_. When working on new projects and starting things from scratch, it allows you to rapidly iterate, make decisions that would usually mean a painful refactor, or make use of libraries and/or APIs you're unfamiliar with, without having to make 30 Google searches to read docs and StackOverflow samples.

Workload Agnosticism in Large Language Models: The Foundation for the Next Generation of Computing
I think we’ll soon find that LLMs have strong parallels with Cloud Compute that will ensure they stay affordable and accessible resources, allowing the next generation of software projects and companies to thrive.

A Tale of Two Technologies: Why Large Language Models are the Future and the Metaverse Isn't
A closer examination of these contrasting trajectories reveals how the research-driven innovation behind LLMs can triumph over the consumerism-driven hype of the Metaverse, which puts the cart of "monetizable experiences" before the horse of a technology capable of sustaining them.