A few months ago I opened an empty Android Studio project at 10 p.m. on a Thursday, stared at MainActivity.kt, and asked myself a question I’d been avoiding for two years: do I actually need to build this, or do I just want to? I already had a full-time job in industrial IoT. I already had three half-finished side projects in various directories. I had no audience for the thing I was about to start, no commitment from anyone that they’d use it, no clear product-market-fit story. What I had was a shape in my head — a clock face with events drawn on it as pie slices — and the knowledge that every calendar app on my phone felt wrong to me in a way I couldn’t articulate in a bug report.

I built it anyway. And then I built three more things after it, each one starting from some private itch and each one reshaping into something I didn’t plan. A radial time visualizer. An interactive learning platform for hardware protocols I work with daily. A network camera CLI toolkit. And this site, the one you’re reading right now. Four apps, one question running underneath all of them:

How far can I take an idea before I know if it’s worth it?

That’s the thread. Not “how do I validate an idea fast” — I’m not good at that, and I’ve stopped pretending I am. Not “how do I ship an MVP” — half these things aren’t products. The question is more honest than either of those: when I’m building something I’m not sure anyone will use, how deep do I let myself go before the act of building either justifies itself or quietly tells me to stop?

I’ve ordered these four stories from least-audience to most-audience on purpose. First: the radial time visualizer, built entirely for me, with no intended users. Second: the hardware learning platform, also built for me but now used daily as a reference — audience of one, but that one is recurring. Third: the network camera CLI toolkit, built for a specific practitioner audience I happen to belong to, so “me” and “the users” overlap but aren’t identical. Fourth: this site, which exists so that I’d write more — meaning you, reading this right now, are the justification for its existence. The audience scales up story by story, but the question underneath each one stays the same.

Why do I keep building when I’m not sure? I used to have an answer that sounded good in interviews: “I learn by doing.” That’s true but it’s also a dodge. The more honest answer is that I don’t know how to tell if an idea is worth pursuing without putting my hands on it. Market research slides past me. Competitor analysis makes me anxious, not informed. The only signal I trust is the one I get about ten days into a build, when the shape of the problem has finally come into focus and I can feel whether I’m still leaning forward or already drifting. Every project below hit that ten-day mark and kept going. The story of each one is mostly the story of what I thought I was building versus what I realized I was actually building, usually somewhere around day twelve.

A Radial Time Visualization for Android

The frustration I couldn’t explain

I think in spatial chunks, not lists. When someone asks me what my Tuesday looks like, I don’t mentally scroll through a list — I see a wedge of morning, a wedge of afternoon, the gap around lunch, the evening compressing toward a bedtime I don’t want to admit. For years I used a read-only radial-clock app — the dominant one in the niche, the one with millions of downloads that shows up first when you search “clock widget” on the Play Store. It took my Google Calendar events and drew them as pie slices on a 24-hour face. It was perfect for one thing and completely broken for another: I couldn’t create, edit, or even drag an event. If I wanted to change my day, I had to tab out to Google Calendar, make the change in a list, and tab back to see the result on the clock.

So the original plan was straightforward: clone the dominant app, add CRUD on top. A “with extras” project. I even wrote that in my first planning doc. “Feature parity with [app] + event creation/editing/sync.” Clean. Contained. A weekend, maybe two.

What changed around week two

I was building the event-creation flow and hit a question I didn’t have a good answer for: when a user taps an empty wedge at 3:15 p.m., what should the default duration be? The dominant app doesn’t have to answer this question because it’s read-only. I started reading its reviews, sorting by most recent, and saw the same three complaints over and over — people asking for exactly the thing I was building. Not as a “nice to have” but as the thing they’d been waiting years for.

That’s when I realized feature parity wasn’t the goal, and that I’d been thinking about this project wrong. The existing app is read-only not because its creators forgot to add editing — it’s read-only because the niche is visual planners, and visual planners were being served a read-only display of a calendar that lived somewhere else. The job-to-be-done wasn’t “show me my calendar as a clock.” It was “let me plan my day the way I actually think about it.” Creation, editing, sync — those weren’t extras. Those were the whole point. The dominant app had been living in a degraded version of its own market for a decade.

I started thinking I was building a clone with extras. I ended thinking I was building the calendar I would have wanted in 2018, when I first realized lists weren’t how I scheduled things in my head.

That reframing made every subsequent decision easier. When I didn’t know whether to add a feature, I’d ask: does this serve “display someone else’s calendar” or does it serve “plan my day visually”? The second answer always won.

Tech decisions, with reasons

Kotlin and Jetpack Compose, because I wanted to practice Compose and because the rotation animations I had in mind would be painful in XML layouts. That turned out to be half-right.

The half that was wrong: I tried to build the clock face as a tree of Composables, one per wedge, with transformable() modifiers for rotation. It didn’t work. Rotation gestures on the clock as a whole versus individual wedges fought each other constantly, and performance tanked around 40 events. So I scrapped it and built the entire clock face as a single custom Canvas renderer — one Canvas composable, all the drawing math done by hand in a single draw pass. That fixed the gesture conflicts and got me back to a steady 60fps.
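For flavor, here's the kind of per-wedge angle math a single-pass renderer does. This is a from-scratch Python sketch, not the app's Kotlin, and wedge_angles is a name I invented for illustration:

```python
from datetime import time

DEG_PER_MIN = 360 / (24 * 60)  # 0.25 degrees per minute on a 24-hour face

def wedge_angles(start: time, end: time) -> tuple[float, float]:
    """Map an event to (start_angle, sweep_angle) in degrees.

    Canvas-style arc APIs measure angles clockwise from the 3 o'clock
    position, so we shift by -90 to put midnight at the top. The modulo
    on the sweep handles events that cross midnight.
    """
    def to_minutes(t: time) -> int:
        return t.hour * 60 + t.minute

    start_deg = to_minutes(start) * DEG_PER_MIN - 90
    sweep_deg = ((to_minutes(end) - to_minutes(start)) % (24 * 60)) * DEG_PER_MIN
    return (start_deg, sweep_deg)
```

A 9:00–10:30 event becomes a wedge starting at 45° with a 22.5° sweep; the real renderer just loops this over every event in one draw pass.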

Glance for widgets, because I wanted a home-screen clock that matched the in-app view. Glance is limited — you render a Compose tree into a bitmap and the system shows the bitmap — but that limitation matched my use case perfectly. I draw the same clock to an offscreen Canvas, hand Glance the bitmap, done.

Room for local persistence, plus a dual-write to the system CalendarProvider. This was the sync decision that caused me the most grief. The idea was: create an event in my app, Room gets the write immediately, and on the next sync tick I push it to the system calendar, which eventually propagates to Google Calendar via whatever account the user has configured. It works, mostly. It broke in one nasty way I’ll describe below.

OAuth2 with token refresh in an OkHttp interceptor, and EncryptedSharedPreferences for the refresh token. I’d never implemented an interceptor-based refresh flow before, and the first version had a race condition where two simultaneous 401 responses would both try to refresh and one would invalidate the other’s token. Fixed with a mutex around the refresh call. Obvious in hindsight.
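The shape of that fix translates to any language. Here is a minimal Python sketch of the pattern, with invented names (the real code is an OkHttp interceptor in Kotlin): the first request through the lock refreshes, and any request that arrives holding an already-rotated token reuses the new one instead of refreshing again.

```python
import threading

class TokenStore:
    """Mutex-guarded token refresh: only one concurrent 401 triggers
    an actual refresh; the rest pick up the rotated token."""

    def __init__(self, access_token: str, refresh_fn):
        self._lock = threading.Lock()
        self._access_token = access_token
        self._refresh_fn = refresh_fn  # exchanges the refresh token for a new access token

    def refresh_if_stale(self, failed_token: str) -> str:
        with self._lock:
            # If the stored token no longer matches the one that just
            # got a 401, another request already refreshed it.
            if self._access_token != failed_token:
                return self._access_token
            self._access_token = self._refresh_fn()
            return self._access_token
```

The comparison against the failed token is the whole trick: without it, the second 401 refreshes again and invalidates the first caller's fresh token.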

OSRM for routing, because some events are “drive to site, do work, drive back” and I wanted the clock to show the drive wedges automatically. OSRM because it’s free, self-hostable, and doesn’t rate-limit me the way Google Directions would.

Dead ends

The transformable() modifier for clock face rotation. I spent probably six hours fighting Compose’s gesture system before I accepted that Canvas drawing plus a manual pointerInput handler was the right answer. The lesson I internalized: Compose is great for UI trees, but when your UI is one drawing, stop trying to make it a tree.

“Newest wins” as a sync conflict resolution strategy. This is embarrassing. I built a sync that said: if the local version and the remote version of an event both changed, take whichever has the newer updatedAt timestamp. Seemed reasonable. Then I tested it across a timezone change — phone in one timezone, laptop creating an event in another, clocks drifting by a few seconds — and the “newest” version was nondeterministically one or the other depending on the order sync ticks arrived in. I fixed it with a proper per-field three-way merge against a stored “last synced” snapshot, which is the right way to do sync conflict resolution and which I should have known from the start. I’d read about three-way merges in the context of git. I just didn’t connect the dots until I had a reproducible bug on my own phone.
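To make the fix concrete, here's a toy per-field three-way merge in Python, with invented field names. The decision table is the point: each side's edits survive independently, and "newest wins" never enters into it.

```python
def three_way_merge(base: dict, local: dict, remote: dict) -> dict:
    """Merge local and remote edits against the last-synced snapshot.

    Per field: unchanged on both sides keeps base; changed on one side
    takes that side; changed on both is a true conflict (here we prefer
    remote, but a real app would surface it to the user).
    """
    merged = {}
    for field in base:
        local_changed = local[field] != base[field]
        remote_changed = remote[field] != base[field]
        if local_changed and not remote_changed:
            merged[field] = local[field]
        elif remote_changed and not local_changed:
            merged[field] = remote[field]
        elif not local_changed and not remote_changed:
            merged[field] = base[field]
        else:
            merged[field] = remote[field]  # conflict policy: remote wins, flag for review
    return merged
```

If I move an event locally while renaming it remotely, the merge keeps both the new time and the new title; timestamp-based resolution would have thrown one of them away, nondeterministically.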

The hard parts, summarized

Rotation gestures on a clock face at 60fps: fixed by going to raw Canvas. OAuth token refresh in an interceptor with concurrent requests: fixed with a mutex. Multi-calendar conflict resolution and DST: fixed with three-way merge. Widget bitmap rendering: solved once I accepted that Glance is a bitmap pipeline and stopped trying to use live Compose.

Honest verdict

Eleven milestones. Eighteen features. Around 180 files. It builds, it runs, I use it every day. The clock is on my home screen, the widget updates, sync works in both directions. I don’t know if anyone else will use it. I don’t have a launch plan. I have an APK I sideloaded to my own phone and a private git repo. And I’m increasingly OK with that, because somewhere around milestone seven I stopped building to ship and started building to finish, which are different projects with different success criteria. The success criterion for finish is: does it do the thing I wanted it to do? Yes. We’re done.

It took me four months of evenings. The question of whether it was worth it is still open. The question of whether I’d do it again is not.

The next thing I built started the same way — an itch I kept scratching with the same half-answer — but the build process taught me something about what I thought I was making.

An Interactive Learning Platform for Hardware Protocols

The re-googling problem

Here’s the itch. I work in industrial IoT. The protocols I touch in a given week include some subset of: NFC, CAN bus, UART, I2C, SPI, RS-485, a handful of wireless IoT stacks, whatever application-layer IoT protocol a given deployment is using, bits of IP networking where the embedded side meets the network side, and enough Linux to keep an embedded target alive. I don’t use all of these every day. Maybe three out of ten in any given week. Which means the other seven atrophy, and when I come back to them — say, because a project suddenly needs CAN arbitration math — I’m re-googling the same thing I re-googled three months ago.

I tried to fix this with notes. Obsidian vault. Didn’t stick. I tried a private GitHub wiki. Didn’t stick. Notes don’t stick for me because the act of writing a note is frictionless enough that I write bad ones, and bad notes are worse than none, because I have to sort through them the next time I’m looking. What I needed was something that forced me to structure the knowledge the first time I touched it, and to structure it in a way that would survive me coming back to it six months later.

So the original plan was a tutorial site. Beginner-friendly. “Hardware protocols for people getting started in embedded.” I even had a homepage mockup.

What changed on the first lab

I sat down to write the first lab: RS-485 differential signaling. Started writing “RS-485 is a serial communication standard that uses differential signaling over a twisted pair…” and immediately felt my brain disengage. I was writing for a version of me from five years ago who didn’t know what UART was. But that wasn’t the person who needed this. The person who needed this was me right now, who knew what UART was perfectly well and just couldn’t remember whether the A/B line convention was the same across all RS-485 transceivers or whether some vendors inverted it.

That’s when I understood the distinction that shaped the whole platform:

Practitioner docs are not learner docs. Practitioners want quick reference, working examples, and the crucial gotchas that will ruin your day. Learners want context, motivation, and incremental building of intuition. A document that tries to serve both serves neither.

I deleted the tutorial homepage. I rewrote the RS-485 lab as: one-screen reference card (pinout, voltage levels, max cable length, termination rules), three runnable examples (basic transmit, basic receive, master-slave poll), and a gotchas section with four specific things that had burned me in the field. The lab got tighter. I stopped resisting it.

Tech decisions

React + Vite for the front end, FastAPI for the back end. The separation was deliberate: content is markdown and JSON in the frontend, execution happens on the backend. That split meant I could add new labs by editing files without touching Python, and I could add new executable examples without touching React. It’s the kind of decision that feels over-engineered for a solo project until the third time you want to add something without redeploying everything.

JSON file persistence for progress tracking. No database. I considered SQLite and then asked myself: what happens if this file gets corrupted? Answer: I lose my progress and I re-mark a few labs as done. That’s fine. Premature database complexity is a trap I’ve fallen into before.

A CLI alongside the web UI. This one was a decision I made and then questioned for a week. Why both? Because terminal practice is part of the curriculum. If you’re learning embedded Linux, you should be at a terminal. The web UI is for browsing and reading; the CLI is for doing. They share the same content backend, just different front ends onto it.

Python modules per topic, where each module is both a data file and an executable example file in one. That meant the same file that defines the CAN arbitration rules can also run a little simulation showing arbitration resolving between two competing frames. The lab can embed the output of its own example, which means I can’t accidentally let the example drift from the reference — if the example breaks, the page breaks.

The hands-on problem without hardware

The hardest problem was this: how do you make a “hands-on” lab for hardware protocols without physical hardware? A reader on a laptop doesn’t have a CAN analyzer. They don’t have an RS-485 bus. They don’t have an SPI peripheral hanging off a breadboard. And the value of a hardware lab is supposed to be that you’re doing something.

I tried a few approaches. The one that stuck: every lab gets an anchor — something runnable that produces an output the reader can see and poke at. For RS-485 it’s a Python simulation of two devices on the bus with configurable termination, where you can set wrong termination and watch the waveform distort. For CAN it’s an arbitration simulator. For I2C it’s a simulated bus with addressable devices and a clock-stretching example. For the networking labs it’s the actual Linux networking stack on the reader’s own machine — ip, tcpdump, ss — because those tools are everywhere and they’re the real deal.
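As an example of how small these anchors can be, here's a toy version of the rule the CAN arbitration simulator demonstrates. This is a from-scratch sketch, not the platform's actual module: identifiers go out MSB-first, a dominant 0 beats a recessive 1 on the wire, and any node that reads a bit it didn't send drops out.

```python
def arbitrate(ids: list[int], bits: int = 11) -> int:
    """Resolve CAN arbitration among competing 11-bit identifiers.

    Each bit time, the bus level is the AND of all transmitted bits
    (dominant 0 wins); nodes that sent recessive 1 but see dominant 0
    stop transmitting. The lowest identifier always survives.
    """
    contenders = set(ids)
    for bit in range(bits - 1, -1, -1):
        bus = min((i >> bit) & 1 for i in contenders)  # dominant 0 wins the wire
        contenders = {i for i in contenders if (i >> bit) & 1 == bus}
        if len(contenders) == 1:
            break
    return contenders.pop()
```

In the real lab the simulation also prints the bit-by-bit trace, which is the part that makes the "lower ID equals higher priority" rule click.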

This approach has a cost. The simulations are not the real thing. Someone reading the RS-485 lab will not feel what it’s like to ground a scope probe to the wrong place and watch the frame corrupt in real time. But they’ll see the shape of the problem. That’s the best I can do without shipping hardware with the app, and I made peace with that trade-off early.

The dead end I want to remember

Pyodide. I spent a weekend trying to embed a Python sandbox in the browser so the reader could edit the example code in-page and rerun it without a backend roundtrip. Pyodide is impressive technology. It’s also slow to load (multiple megabytes on first visit), behaviorally different from CPython in edge cases that are exactly the kind of thing a hardware protocol example cares about (integer overflow semantics, timing), and added a whole second runtime to maintain. I cut it. The backend roundtrip for code execution is fine. The reader doesn’t care that there’s a Python process on a server; they care that they hit “run” and output appears. Which it does.

How the goal changed

I started thinking I’d teach hardware protocols. I ended thinking I’d capture everything I keep re-googling, in the format I wished existed.

Honest verdict

Seven topics. Twenty-eight labs. A hundred and ten discrete tasks across those labs. Used by exactly one person, which is me, and I open it weekly. The platform IS a reference now. When I forgot CAN arbitration last month, I opened my own site instead of Googling. That’s the only metric I care about, and it’s green.

Would it help someone else? I suspect it would help anyone whose career touches the same band of protocols mine does. Will I promote it? I don’t know. Right now it’s published, indexed, and there if someone finds it. That’s as far as I’ve taken the “is it worth it” question for this one — the answer, for me, is already yes, and the answer for anyone else is unresolved but not my problem today.

The next project pushed the audience question further. This one had actual users in mind — not just me.

A Network Camera CLI Toolkit

The knowledge gap

If you work with IP cameras — not consumer cameras, but the network cameras you’d see in a factory or a construction site or a logistics yard — you live in a specific kind of knowledge gap. The vendor documentation is reference material: complete but uncontextualized. You can look up every parameter, but you can’t tell which parameters matter. The forums are noise, or worse, they’re noise mixed with outdated advice from three firmware versions ago. There’s no single tool or document that says: here is what to do when you’re commissioning a site, here are the gotchas, here are the numbers you need to plug into the bandwidth calc, here is how to tell if a camera is going to give you trouble in six months.

I knew this because I’d been that person, crouched in a server room with a laptop and three vendor PDFs open, doing math on a napkin. The original plan was modest: a CLI with maybe five commands — network discovery, an RTSP test, a bandwidth calculator, a PoE budget calculator, a health check. Something I could pip-install and carry around. Weekend project, two weeks max.

What changed after the first three commands

I built discovery, RTSP, and bandwidth-calc first, because they were the ones I’d wanted for years. And when I was done with those three, I sat back and realized the real blocker for people working with these cameras isn’t tooling. It’s knowledge. The bandwidth calculator is a useful tool, but the harder question is: what framerate should you even be setting? And the answer depends on use case, codec, retention requirements, local regulations, and a dozen other things that nobody writes down in one place.

So I started writing a knowledge base alongside the tools. It was supposed to be a supplementary doc. Within a week it was the bigger half of the project by line count. And as I wrote it, the tools got better — because now I had a place to put all the institutional knowledge that had been living in my head, and the commands could link into it.

I started thinking I’ll build tools for camera installers. I ended thinking I’ll build the field manual I wished existed when I started, with the tools attached.

Tech decisions

Python over Node.js. This is not a religious decision; it’s a library ecosystem decision. The networking libraries I needed — raw socket access for ONVIF WS-Discovery, decent RTSP support, PoE math that I could adapt from existing Python implementations — were all better on the Python side. Node.js would have meant writing more from scratch.

Click over Typer. Typer is newer and has nicer type-hint ergonomics, but Click has been around long enough that every StackOverflow question about weird argparse-adjacent CLI behavior has been answered for Click. When I’m building something I want other people to install and use without hitting platform bugs, I pick the older library.

Rich for terminal UI. This one was an obvious win. When discover returns fifty cameras, you want a table. You want colored columns for health status. You want progress bars while a scan is running. Rich does all of that and degrades gracefully in dumb terminals. I spent maybe two hours on the UI layer and got something I’d normally have to spend two days on.

httpx instead of requests. Two reasons: async support, which I needed for the scanning commands because running a synchronous scan against a subnet is miserably slow; and HTTP/2 support, which some newer cameras prefer.

A documented HTTP API over the vendor’s binary SDK. The binary SDKs are fat, platform-specific, and change between versions. The HTTP API is documented enough to call directly, with HTTP digest auth and a handful of vendor quirks I had to write compatibility shims for. But the result is a pure-Python toolkit that doesn’t require shipping a binary SDK, and that was worth the shims.

The hard parts

ONVIF WS-Discovery. This is a multicast UDP protocol where you send an XML blob to a specific multicast address and listen for responses, and the responses are themselves XML blobs that you have to parse carefully because different vendors format them differently. I spent half a day getting the multicast socket right (platform-specific setup on Windows versus Linux versus macOS) and another half-day writing a tolerant parser. The payoff is that discover finds cameras on a subnet without needing any configuration. That’s the kind of thing where you feel the tool earn its keep.
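The core of the exchange is small. What follows is a stripped-down sketch of it (no per-platform socket options, no tolerant response parsing) that sends one Probe to the standard WS-Discovery multicast address and collects whatever answers within the timeout; the envelope fields are from the WS-Discovery spec, the function name is mine.

```python
import socket
import uuid

WS_DISCOVERY_ADDR = ("239.255.255.250", 3702)  # standard WS-Discovery multicast group

PROBE = f"""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery">
  <e:Header>
    <w:MessageID>uuid:{uuid.uuid4()}</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body><d:Probe/></e:Body>
</e:Envelope>"""

def probe(timeout: float = 3.0) -> list[tuple[str, bytes]]:
    """Send one Probe and collect raw ProbeMatch responses as (ip, xml)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.settimeout(timeout)
    sock.sendto(PROBE.encode(), WS_DISCOVERY_ADDR)
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65535)
            responses.append((addr[0], data))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return responses
```

Everything hard about the real feature lives outside this sketch: the Windows/Linux/macOS socket setup differences, and the parser that tolerates each vendor's idea of valid XML.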

Bandwidth math. This sounds trivial and isn’t. The naive formula is stream bitrate × duration. The real formula has to account for codec efficiency (H.264 versus H.265, which have different compression characteristics), resolution scaling (not linear), framerate, quality settings (I-frame intervals affect bandwidth in non-obvious ways), and scene complexity (a camera watching a still wall has different bandwidth than one watching a busy street). I have three different bandwidth modes now: conservative, typical, and aggressive. The conservative mode assumes worst-case scene complexity; the aggressive mode assumes you’ve tuned the camera well. The knowledge base explains when to use each.

PoE budgeting. IEEE 802.3af/at/bt define device classes 0 through 8, with different power budgets per class, and a PoE switch has a total budget that doesn’t perfectly equal the sum of per-port budgets because of overhead. Getting this right for mixed deployments — some PTZ cameras pulling 25+ watts, some bullet cameras pulling 4 — required building a proper budgeter that tracks class allocations and flags when you’re about to over-subscribe a switch. The worst thing you can do with PoE is guess. I’ve seen it go wrong in the field.
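The budgeter's core logic fits in a few lines. The class wattages below are the PSE-side figures from the standards; the headroom default and function names are my own illustrative choices:

```python
# PSE-side power allocation per IEEE 802.3af/at/bt class, in watts.
CLASS_WATTS = {0: 15.4, 1: 4.0, 2: 7.0, 3: 15.4, 4: 30.0,
               5: 45.0, 6: 60.0, 7: 75.0, 8: 90.0}

def check_budget(switch_budget_w: float, device_classes: list[int],
                 headroom: float = 0.9) -> dict:
    """Sum per-class allocations and flag over-subscription.

    Switches allocate by class, not by measured draw, so a deployment
    can over-subscribe long before any single camera maxes out.
    `headroom` keeps a margin below the switch's stated budget.
    """
    allocated = sum(CLASS_WATTS[c] for c in device_classes)
    usable = switch_budget_w * headroom
    return {"allocated_w": round(allocated, 1),
            "usable_w": round(usable, 1),
            "over_subscribed": allocated > usable}
```

Three Class 4 PTZs and one Class 2 bullet on a 100 W switch already allocates 97 W, which blows through a 90 W headroom budget even though the measured draw might be half that. That gap between allocated and measured is exactly where field guesses go wrong.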

Troubleshooting decision trees. When someone runs the troubleshoot command and says “camera is offline,” the tool walks them through a decision tree: is the link up, can you ping it, does the RTSP port respond, does the camera respond to its HTTP API, is the stream actually flowing, is the framerate what it’s supposed to be? I mapped out ten root cause categories and wrote the conditional branches by hand, and that hand-written structure is the thing that makes the command feel like it knows what it’s doing. I thought about using an expert-system library; I’m glad I didn’t. The explicit code is easier to audit and easier to extend.
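Structurally, the hand-written tree is just ordered checks with a diagnosis attached to each failure. A toy version, with stand-in probe functions, looks like this:

```python
def troubleshoot_offline(checks) -> str:
    """Walk ordered checks; the first failure names the likely layer.
    `checks` is a list of (label, probe_fn, diagnosis) tuples, ordered
    from physical layer up."""
    for label, probe, diagnosis in checks:
        if not probe():
            return f"{label} failed: {diagnosis}"
    return "all checks passed: re-check the recording side, not the camera"

# Stand-in probes; the real ones do ping, port checks, and API calls.
checks = [
    ("link", lambda: True,  "physical layer: cable, port, or PoE power"),
    ("ping", lambda: True,  "IP layer: wrong subnet, VLAN, or address conflict"),
    ("rtsp", lambda: False, "service layer: RTSP disabled or wrong port"),
    ("api",  lambda: True,  "application layer: HTTP API down or auth broken"),
]
```

The ordering encodes the expertise: each check only makes sense once everything below it has passed, which is why the flat list reads like a conversation with someone who has commissioned a lot of sites.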

The dead end I want to remember

Automated remote configuration sync. The idea was: you maintain a master config for a site, and the tool pushes that config to every camera. I built it. It worked on the first three test cameras. The fourth camera had slightly different firmware, and it silently rejected a handful of config fields — returned HTTP 200, didn’t apply the changes, didn’t tell me. I caught it because I spot-checked. Then I thought about deployments where someone might push a config to fifty cameras and not spot-check.

I pulled the feature. The replacement is config-backup and config-restore — the tool can snapshot a camera’s current config and restore it later, but it won’t push a config you wrote from scratch. If you want to change settings, you use the vendor tool and then snapshot. The constraint came from a real failure mode that I couldn’t defend against with code alone, and removing the feature was the right answer. This one stung because I liked the feature; it was the one I’d been most excited to build.

Honest verdict

Eleven modules. All working. The knowledge base is bigger than the tools by line count — 300+ KB of practitioner reference, searchable, cross-referenced, and linked into the commands so kb search ptz returns exactly the sections you need. That ratio, tools-to-docs, is inverted from what I planned. And the inversion turned out to be the feature. When I use it on a real site, I use kb as often as I use the commands themselves, and that wouldn’t have happened if I’d stuck to the original “weekend CLI” scope.

How far did I take this one before I knew if it was worth it? Further than any of the others, and I knew it was worth it earlier — somewhere around the time the knowledge base started growing faster than the commands. That was the signal: the project had found the thing it was actually about, which wasn’t the thing I’d planned to build.

Which brings me to the fourth project, the one with the strangest audience calculus of the four: me, but only insofar as “me-who-writes-more” is a future person I’m trying to nudge into existence.

This Site (zshns.dev)

The friction problem

For a while I’d been producing technical writing that didn’t go anywhere. I’d finish a debugging session at 1 a.m. with a clear understanding of some weird subsystem, type a few paragraphs into an Obsidian note, and then lose them forever because the Obsidian vault had become a junk drawer I never opened. Sometimes I’d make a GitHub gist. Sometimes the understanding ended up in a README for a throwaway project. Sometimes I’d DM it to myself on Slack, which is where ideas go to die. There was no canonical place. And the friction of answering the question “where do I put this?” was enough, on a tired evening, to make me not put it anywhere.

I needed a place to put my brain. The original plan was a polished portfolio site. Cards. Animations. A hero section with a gradient. The kind of site that’s designed to impress a recruiter.

What changed every time I tried to design it

Every time I started building the polished portfolio, I bounced off it. I’d get through the first landing-page mockup and lose interest. This happened three times over the course of a year before I understood what was going on: I wasn’t building a site that would help me write. I was building a site that would help me show off, and showing off is not what I need to do. I need to write.

I started looking at sites I actually enjoyed reading. They were almost all editorial. Long-form. Typography-forward. Substack has a lot to answer for here — its influence is everywhere — but the underlying principle is real: if you want people to read, you build for reading, not for chrome. Serif body text. Generous line height. No sidebars. No nav that moves. No card grids. The site is a surface for prose.

I started thinking I need a portfolio. I ended thinking I need a place to put my brain.

Once that flipped, everything about the build got easier. I wasn’t designing a showcase; I was designing a reading environment.

Tech decisions

Astro 5, because I wanted a static-first site with islands of interactivity where I needed them (search, theme toggle) and Astro’s content collections gave me typed frontmatter out of the box. I’d tried Next.js on a previous iteration and it was more framework than I needed for a content site. Astro is the right size.

Custom CSS over Tailwind. This is the decision I’m most likely to regret in public. I like Tailwind fine for apps. For a site where the whole point is typography, I found Tailwind’s utility-class density got in the way of the two things I cared most about: fine-grained control over font metrics and dark mode that felt warm instead of clinical. Writing the CSS by hand meant I could spend an afternoon on line-height and letter-spacing and have the result match what was in my head. The trade-off is that my CSS file is now mine to maintain, and I accept that cost.

Self-hosted fonts via Fontsource — Spectral for the body, Inter for UI. Spectral is a serif designed for screens, and it’s quietly lovely. I subsetted to Latin-only and kept only the weights I actually use (regular, italic, semibold, bold). That saved enough KB to matter on a mobile-first page weight budget.

Pagefind over Algolia. Algolia is great and I’ve used it in client work. For a personal site with 20-something pages, Algolia is overkill, and it’s also a third-party dependency I didn’t want. Pagefind generates a static search index at build time and runs client-side. Zero backend, zero API keys, works offline. The tradeoff is that the index is bigger than you’d get from a server-side engine, but at this scale it’s negligible.

@astrojs/rss for the feed, because RSS is still the best way to get my writing in front of people who read a lot and don’t want another newsletter. I’m a heavy RSS user myself; I’d feel wrong running a blog without one.

Three content types: Blog (finished posts), Garden (in-progress notes), and Projects (things I’ve built, including this page). The Garden idea is borrowed from the digital gardens community — Maggie Appleton, Andy Matuschak, others — and its virtue is that it gives me a place to publish unfinished thinking without the weight of “blog post” on my shoulders. A garden entry can be marked seedling, growing, or evergreen. I can publish a seedling in fifteen minutes. I can’t publish a blog post in fifteen minutes. The difference in friction is the difference between writing and not writing.

The hard parts

Editorial typography that doesn’t feel generic. The trap with serif-on-ivory is that it can look like every tech blog from 2012. I fought this by paying attention to three things: (1) the relationship between body size and line height — mine is 1.65, which is a touch tighter than the default Substack ratio and feels more bookish; (2) the headline weight and tracking — I use a heavy weight at tighter-than-default tracking, which gives the hierarchy some editorial heft; (3) the blockquote style, which I tuned until it felt like a pullquote in a printed magazine, with an oversized opening mark and a color shift for the body of the quote.

Dark mode that doesn’t lose warmth. The naive dark mode is to invert the ivory background to pure black. Pure black is harsh. My dark mode uses a warm charcoal — roughly #1a1816 — with body text in a slightly off-white and the golden accent desaturated so it reads as “dim lamp” rather than “neon.” I must have iterated on these five colors fifteen times before they felt right at night.

Pagefind styling. Pagefind ships with a default UI that works fine but doesn’t match your site. Restyling it is doable but the documentation assumes you know which CSS custom properties to override. I spent an evening reading Pagefind’s source and figuring out the shadow DOM structure. Worth it in the end, but it was an hour of staring at dev tools.

Garden status taxonomy. Seedling, growing, evergreen. This is a taxonomy I borrowed; the hard part was deciding what each one meant for me. I settled on: seedling is a thought I’ve had and want to come back to; growing is a thought I’ve developed but still expect to revise; evergreen is a thought I stand behind and don’t expect to change much. The status shows as a small badge on each garden entry. Knowing that readers will see the status freed me to publish things I wouldn’t have published as “posts.”

The dead end I want to remember

I spent an hour subsetting Spectral by hand, character-by-character, to save twelve kilobytes on the initial page load. I got it working. Then I realized Fontsource was already subsetting for me, that my hand-subset was worse than the Fontsource subset, and that I’d spent an hour to save zero kilobytes. I reverted. The lesson is mild but repeatable: before you optimize a dependency, check what the dependency is already doing for you.

Honest verdict

Live. Working. Dark mode feels right at night. RSS is publishing. Pagefind is indexing. The garden has things in it. I’m finally writing instead of redesigning, which is the actual success criterion and the only one that matters.

This page you’re reading right now is the proof. If the site hadn’t made the friction of publishing go to near-zero, this 7,000-word post would still be a set of bullet points in an Obsidian note I’d never look at again.

So: how far is far enough?

I want to come back to the question, because the four projects above answered it differently, and the differences are what I actually learned.

I used to think the question was “will this be used?” and I used to try to answer it before building. Market research. Talking to potential users. Mockups. The answers were never clear enough for me to act on, and I’d either stall out or build anyway and feel guilty about not having validated. Now I think the question is different:

Will this teach me something I needed to learn?

That question I can answer, and it doesn’t even require me to finish the project. I usually know around day twelve. The radial time visualizer taught me that a clone plus extras can quietly turn into a different product entirely if you let the problem shape you instead of forcing your original frame onto it. The hardware learning platform taught me that practitioner docs and learner docs are different animals, and that trying to write both at once produces neither. The camera CLI toolkit taught me that embedded institutional knowledge can be the feature, not the filler around the feature. This site taught me that the design I was reaching for wasn’t the design I needed, and that reducing friction is more valuable than adding polish.

There’s a pattern in that list, and I didn’t see it until I sat down to write this post. Every one of these projects started as a clone-or-utility — “build the thing I wish existed, with a small improvement” — and every one of them became something the act of building had reshaped. The reshape was the thing I was actually getting out of the project. Whether anyone else uses it is a separate question, and one I’m no longer in a rush to answer before I start.

What I still don’t know how to predict: which projects will outlive their first month. The radial clock and the learning platform and the camera toolkit all did. This site is doing it right now, live, as you read this. But I have a graveyard of projects that passed the day-twelve check and then quietly stopped mattering, and I can’t tell you why. My best guess is that the ones that outlive their first month are the ones whose underlying itch is recurring — something I’ll be annoyed about again next week, next month, next year — and the ones that die are the ones whose itch was acute and resolved. But that’s a pattern I notice in retrospect, not a test I can apply in advance.

Here’s the forward question I’ll leave you with, and the one I’m asking myself as I close the tab on this draft and open the next one:

What’s the thing you keep re-googling, or the tool you keep wishing existed, or the way you wish you could see your own schedule, or the place you wish you could put your brain — and what’s stopping you from building the smallest version of it tonight, even if no one else ever uses it? If the answer is “I don’t know if it’s worth it,” I’d gently suggest that the only way to find out is to get to day twelve and see whether you’re still leaning forward. That’s the test. Everything else is stalling.

I’m going to go start the next one.