#632 – June 29, 2025

Everything I know about good system design
20 minutes by Sean Goedecke

If software design is how you assemble lines of code, system design is how you assemble services. The primitives of software design are variables, functions, classes, and so on. The primitives of system design are app servers, databases, caches, queues, event buses, proxies, and so on.

Bufstream: Schema-Driven Governance for Streaming Data
sponsored by Buf

Kafka offers no data quality guarantees, but Bufstream's Broker-side Schema Awareness eliminates bad data at the source, ensuring reliable, schema-adherent streams. Join Buf's workshop on July 10 for a technical deep dive, use cases, and deployment best practices. Your questions answered!

Implementing an undo/redo system
16 minutes by mlacast

Undo/redo systems in creative software are often invisible heroes—until they fail. Vital as they are, they're expected to "just work," and building one that does, especially in a complex visual app, is far from simple.
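The article describes the design in depth; the classic starting point is a command history with undo and redo stacks. A minimal sketch (names and the list-based document are illustrative, not taken from the article):

```python
# Minimal command-history sketch for undo/redo (illustrative only).
# Each action is a pair of callables: one to apply it, one to reverse it.
class History:
    def __init__(self):
        self._undo = []  # actions already applied
        self._redo = []  # actions undone, eligible for redo

    def execute(self, do, undo):
        do()
        self._undo.append((do, undo))
        self._redo.clear()  # a new action invalidates the redo branch

    def undo(self):
        if self._undo:
            do, undo = self._undo.pop()
            undo()
            self._redo.append((do, undo))

    def redo(self):
        if self._redo:
            do, undo = self._redo.pop()
            do()
            self._undo.append((do, undo))

# Usage: a "document" that is just a list of words.
doc = []
h = History()
h.execute(lambda: doc.append("hello"), lambda: doc.pop())
h.execute(lambda: doc.append("world"), lambda: doc.pop())
h.undo()   # doc is now ["hello"]
h.redo()   # doc is back to ["hello", "world"]
```

Real creative apps usually need more than this, such as merging rapid edits into one step and snapshotting state that is expensive to recompute, which is where the complexity the article describes comes in.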

Writing toy software is a joy
11 minutes by Joshua Barretto

In 2025, the beauty and craft of writing software is being eroded: AI threatens to replace us, and software development is increasingly commodified, measured, packaged, and industrialised. Software development needs more simple joy, and Joshua has found that writing toy programs is a great way to rediscover why he started working with computers in the first place.

Which data architecture should I use?
25 minutes by Dr. Fatih Hattatoglu

The article provides a comprehensive guide to selecting data architecture, comparing approaches like data warehouse, data lake, data lakehouse, and data mesh. Fatih shows advantages and challenges of each architecture type, recommends platforms for implementation, and stresses that architecture choice should align with data types, analytical needs, organizational structure, and long-term goals.

TPU Deep Dive
18 minutes by Henry Ko

TPUs' origins go back to 2006, when Google first evaluated whether to adopt GPUs, FPGAs, or custom ASICs. Back then, only a few applications needed specialized hardware, and Google decided those needs could be met with excess CPU capacity from its large datacenters. That changed in 2013, when Google's voice search feature began running on neural networks and internal projections suggested it would need far more compute if the feature took off.

And the most popular article from the last issue was:
