blog.lmorchard.com

It's all spinning wheels & self-doubt until the first pot of coffee.

  • 2025 June 18

    • Hello world!
    • Since I'm bouncing between multiple teams' projects, this LLM agent-assisted coding thing reminds me of multi-box mining in EVE Online.
      • I haven't done that in years, but it was a way to make mining more interesting. You could fill in the lulls in gameplay by swapping between ships, treating it more like real-time strategy.
      • Apparently, EVE Online multi-boxing UI has gotten more sophisticated these days? I can only imagine this is the direction coding agent orchestration will head.
      • It's totally spinning plates and it's a more energy-consuming activity than I might have first expected.
      • I'm really leaning on the Command-Backtick button to cycle through IDE windows to shepherd the Claude Code sessions as they crunch through execution plans.
      • There is kind of a hyperfocus flow state available—not in the coding on individual projects, but in swapping between agents, keeping things running with answers to questions, performing rescues from ditches.
      • This seems appealing to my ADHD brain, until or unless I get distracted in a way that lets plates start falling.
      • I am finding that writing or generating gratuitous notes as context for both me and the LLM is really handy. Especially helps me remember what I was trying to accomplish when I last cycled into some particular IDE window.
    # 11:59 pm
    • miscellanea
  • On AI, anger, and the way from here

    Jason Santa Maria, Large Language Muddle:

    As someone who has spent their entire career and most of their life participating and creating online, this sucks. It feels like someone just harvested lumber from a forest I helped grow, and now wants to sell me the furniture they made with it.

    The part that stings most is they didn’t even ask. They just assumed they could take everything like it was theirs. The power imbalance is so great, they’ll probably get away with it. ... I imagine there will be a time when using these tools or not creates a rift, and maybe it will be difficult to sustain a career in our field without using them. Maybe something will change, and I’ll come around to using these services regularly. I don’t think I’ll ever not be angry about it.

    This is involuntary stone soup at scale. I'm also dismayed about how LLMs came to be, yet aware that the bomb still works regardless of my feelings. I'm convinced I need to understand this technology—I don't think I can afford to simply opt out.

    But I'm also staying tuned to skeptical takes, fighting to keep my novelty-seeking brain from falling into cult-like enthusiasm. While I can't dismiss this technology as pure sham, I refuse to swallow inflated claims about what it actually is. I want clear-eyed understanding.

    Jason's anger resonates because it points to a deeper loss:

    And still that anger. It’s not just that they didn’t ask. If these tools have so much promise, could this have been a communal effort rather than a heist? I don’t even know what that would’ve looked like, but I can say I would feel much differently about AI if I could use a model built on communal contributions and opt-ins, made for the advancement of everyone, not just those who can pay the monthly subscription.

    Behind that anger is sadness. How do we nurture curiosity and the desire for self-growth?

    I believe there's a path forward that can nurture curiosity and growth.

    I've seen how these models can surface insights and patterns from overwhelming pools of information—hallucinations are always possible, but it's surprising how often they don't happen. I've seen how their "spicy autocomplete" can help me get where I intended to go faster—like talking to a fellow ADHD'er who sees where I'm going and jumps straight there.

    And these models aren't disappearing, even if the companies burning cash do. The models already released openly will power unexpected developments for decades, even if just passed around as warez torrents.

    This feels like the dot-com bubble all over again. When that bubble burst, the web didn't die: people with spare time and leftover experience built the blogosphere, API mashups, and the foundations of Web 2.0.

    I suspect we're heading for a similar pattern. Maybe it's wishful thinking, but I kind of expect we'll see a bust followed by cheap, surplus capacity that—while not the communal effort we deserved—becomes accessible to anyone who wants to experiment and build something better.

    # 11:38 am
    • llms
    • genai
    • ai
  • 2025 June 17

    • Hello world!
    • It continues to be kind of a perfect storm to bring a halt to my recent rapid-fire blogging. 😔
      • I'm pitching in on two teams at work, which really cuts down on time to stop and smell the RSS feeds to find things to write about.
      • Even though I'm doing a lot of LLM-assisted coding lately, I'm doing it for more projects than usual.
      • But also, my homebrew RSS feed reader just broke and I've been too busy to fix it. So, I haven't been, you know, reading feeds much lately.
    • I did just reopen the books on this Pebbling Club side project I've had going off & on since last summer. So maybe I'll resume progress on that too?
      • One of the things I did was to get this thing working on a more mundane stack with redis and postgresql, deployed via docker-compose on a server in my basement.
      • And then I wrote this post-receive git hook that enables a git-push deploy process like I'm running a real PaaS next to my water heater.
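The post-receive deploy hook could be sketched roughly like this. To be clear, this is a hypothetical reconstruction, not the actual hook: the deploy path, branch name, and docker compose usage are all assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical git post-receive hook: deploy on pushes to main.
# DEPLOY_DIR and the branch name are assumptions, not the real setup.
DEPLOY_DIR="${DEPLOY_DIR:-/srv/pebbling-club}"

# Run a command, or just echo it when DRY_RUN=1 (handy for testing).
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi
}

# Deploy when the pushed ref is the main branch.
handle_ref() {
  case "$3" in
    refs/heads/main)
      run git --work-tree="$DEPLOY_DIR" checkout -f main
      run docker compose --project-directory "$DEPLOY_DIR" up -d --build
      ;;
  esac
}

# git feeds the hook "<oldrev> <newrev> <refname>" lines on stdin.
while read -r oldrev newrev ref; do
  handle_ref "$oldrev" "$newrev" "$ref"
done
```

Dropped into `hooks/post-receive` on a bare repo, a plain `git push` to the basement server becomes the whole deploy step.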
    • At some point, I need to sit down and actually write out a pitch or something for Pebbling Club. I'd like to make it into something, but the gears of time and motivation keep slipping.
      • It's kind of a mashup of everything I've been interested in building on the web for a very long time.
      • It's also currently a big mess.
      • I did make this sorta mind-map in an Obsidian canvas to try to sketch out the general aspirational concept though:
    • I need to work out a better way to post diagrams here. I've been meaning to do something with Mermaid and a web component, but... yeah.
    # 11:59 pm
    • miscellanea
  • 2025 June 11

    • Hello world!
    • Why is a celebrity podcast starting a mobile phone provider?
    • Finally watching Front 242's final live show and... dang.
    • Blogging has slowed here, but I'm hoping to pick it back up.
      • It was probably a shiny-new-toy phase for the first few weeks.
      • But also, I just changed projects at work and got suddenly a lot busier. So, my time for idle rumination has vanished for now.
      • I did post a relatively big thing on AI coding last weekend, so that's pretty good though?
    • My brain suddenly demands I play Terraria.
      • This happens every few years. And when it does, I get a sudden hyperfocus rabbit hole thing that lasts a week or so and evaporates abruptly.
      • I've only ever made it past the first few bosses in the game despite playing it since release back in 2011. 💀
      • I think that's my general M.O. with games: I get to a point where I'm like "ooh novelty" and then "ah, okay, I get how it goes from here" and wander off.
      • I very rarely want grinding or more of the same thing, once I see the pattern. Until, I guess, the anti-novelty wears off and it feels novel again? (Thus the repeat visits to Terraria)
      • Sometimes I'm really jealous of folks who can just lock into a Special Interest like Terraria and just milk endless reliable dopamine from the thing.
      • Meanwhile I'm like BORED NOW and have to go hunting again.
    # 11:59 pm
    • miscellanea
  • Moderating my Codegen Enthusiasm

    Most of my work has happened in Windsurf and Claude Code over recent weeks. I can picture a future where I'm essentially an LLM manager—keeping code-generation plates spinning and nudging toddling bots away from falling into ditches.

    Some folks claim they play games while the agent codes, but I'm actively reviewing as it writes. Turns out watching a bot write code for you takes surprising mental effort. 😅

    As I get deeper into this, I'm still processing the skeptical pushback. I know I'm drawn to novelty and clever tricks, so I'm trying to temper my enthusiasm and engage seriously with contrary opinions.

    Some people haven't had success with these tools, but "you're holding it wrong" is a bad response that doesn't address the real objections. I'm having concrete wins personally, but figuring out the precise how and why feels elusive—too many variables and RNG elements to be properly scientific about it.

    My main stake in AI coding is that it's what I'm paid to do right now in this industry. I am also rather fascinated with the stuff. Not exactly an unbiased position, but at least I'm not trying to sell anything other than my time & labor.

    I've seen arguments that this could all be Stockholm syndrome and excuse-making for the machine. Others warn that I shouldn't trust my own judgment on AI because I'm essentially self-dosing with cognitohazards.

    The more antagonistic responses make me sympathize with the guy who says his AI skeptic friends are all nuts—which feels like tit-for-tat, since accusations of mental instability seem to flow both ways.

    Honestly, I can also relate to just being done thinking about the whole thing for now. But, personally, I don't think I can afford to do that.

    # 2:05 pm
    • ai
    • llms
    • claude
    • codegen
    • genai
    • career
    • work
  • 2025 June 07

    • Hello world!
    • Well, this is kinda weird? I just noticed that all the H1s on my blog are the wrong sizes now.
      • Turns out Firefox redefined H1 sizes in its built-in browser styles based on nesting within article, aside, nav, and section elements. I guess this will become a thing in other browsers too?
      • I'll have to fix that. I don't like this.
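If it helps anyone else hitting this: the usual fix is to opt out of the UA's section-based sizing by setting an explicit size yourself. Something like this sketch (the exact values are just the old defaults, tune to taste):

```css
/* Pin h1 to one size regardless of nesting in article/aside/nav/section. */
h1 {
  font-size: 2em;
  margin-block: 0.67em;
}
```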
    • Oh hey: I just discovered that turning off Settings > General > Keyboards > Smart Punctuation on iOS means I can stop typing invalid JSON in Obsidian
    # 11:59 pm
    • miscellanea
  • Baby steps into semi-automatic coding

    So I did a thing. I spent time this week building an actual project using an AI coding agent. I ended up with 11,000 lines of code that actually work. To be clear: it wasn't great code—lots of boilerplate, plenty of "I would have written this eventually anyway" stuff—but it did what I intended it to do. More importantly, it got done without me having to fight my ADHD through every tedious implementation detail. [ ... 1017 words ... ]

    # 11:00 am
    • codegen
    • llm
    • ai
    • agents
    • windsurf
    • claude
    • gpt
  • 2025 June 06

    • Hello world!
    • My brain's been eaten by work for most of this week, so the blogging slowed down a bunch. Hoping to pick it up again soon.
      • I'm almost afraid to mention that I spent a bunch of this week deep down an LLM vibe-coding rabbit hole in Windsurf.
      • Just in time for Anthropic to cut Windsurf off from Claude models - oops.
    • We'll see how good it all ends up being, but I cycled through a handful of models and ended up with about 11,000 lines of code.
      • The code had unit tests and it pretty much did what I intended.
      • It wasn't great code - a lot of it was boilerplate - but it's mostly stuff I would have ended up doing myself more tediously while fighting my ADHD.
    • Trying to compose some thoughts somewhat along the lines of Harper Reed's LLM codegen workflow:
      • I settled on a workflow that wasn't just pestering the agent with wishes.
      • I had a series of discrete sessions, each started by creating a directory named for a new git branch. I wrote a shell script to semi-automate this.
      • In that directory, I wrote a couple hundred words of intention in a spec.md file.
      • I asked the agent to expand my intentions into a step-by-step plan.md file.
      • I edited the plan and asked the agent to review it critically and ask questions.
      • I answered the questions.
      • I asked the agent to review it again and tell me if the plan looked clear enough to start implementing.
      • When it said "yes", I told it to start implementing.
      • The agent started implementing while I watched.
      • Sometimes I interrupted and told it that it was on the wrong track. But, for long stretches I was just reviewing the code as it wrote.
      • When it claimed to be done, I asked it to review the current changes against the plan and judge if it was really done.
      • Sometimes it wasn't and it went back to work.
      • When it petered out finally, I told it to make sure all the tests passed and linting errors were fixed. It did that.
      • I made sure the tests made sense, myself, fixed a few that didn't. Then I told it to run the tests some more.
      • Finally, when I was okay with the results, I told it to review our entire chat history for this session and summarize the results in a notes.md file.
      • In particular, I told it to pay special attention to things we did that hadn't been captured in the plan. Try to come up with unexpected conditions and derive some lessons learned.
      • These notes ended up being actually pretty good?
      • These three artifacts - spec.md, plan.md, and notes.md - were committed along with the code. That marked the end of the session and the branch.
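The session-start step could be sketched as something like the following. The sessions/ directory layout and naming here are my guesses for illustration, not the exact script:

```shell
#!/bin/sh
# Rough sketch of a session-start helper: create a new git branch plus
# a matching session directory holding an empty spec.md.
# The sessions/ layout and naming are assumptions.
new_session() {
  branch="$1"
  dir="sessions/$branch"
  git checkout -b "$branch" >/dev/null 2>&1 &&
    mkdir -p "$dir" &&
    : > "$dir/spec.md" &&
    echo "$dir"
}

# Example: new_session better-error-handling
#   ...then write a couple hundred words of intent into the new spec.md.
```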
    • Now, I won't say that each of the sessions I ran went perfectly. But, I expected it to be an exploration.
      • I switched models a few times between Claude Sonnet 3.7, GPT-4.1, and SWE-1.
      • I found Claude to usually work the best. It just sort of got to work and did the needful without enticing many objections from me.
      • GPT-4.1 seemed to like to make very detailed plans (even after reading the plan.md), ask lots of questions, and then drive off into the ditch and need rescuing.
      • SWE-1 was about in the middle - but I ended up using it more because there's a promotion running right now that makes it free in Windsurf.
      • Occasionally, I'd switch models mid-session just to see what happened. I'm not sure how to characterize the differences, but they each had slightly different coding styles.
      • Claude and SWE-1 did better than GPT-4.1 at picking up from unfinished work in progress, I think?
      • Still, even with the needful babysitting, between these models I did get stuff implemented and it looked a lot like what I would have written if I'd had the executive function to work at it as doggedly.
    • I think I've learned that a focused scope and context window management are essential.
      • A few times, I think I asked the agent to bite off more than it could chew? Maybe I blew out the context windows? This is something I could get quantified answers around, if I paid attention to the metrics.
      • In those cases, I stopped the presses, backed up, and reworked the spec into a smaller scope.
      • Sometimes, I found it handy to get to the point of having the plan.md tuned up, then started a fresh chat with only the plan as context to start. That seemed to work pretty well - again, I think freeing up some of the context window with more condensed material.
    • Occasionally, I wandered off into the weeds myself and my session-based approach devolved into chatty iteration. That worked well for making very small tweaks and fussy updates.
      • I also learned that I'm good at juggling lots of git commits as save states. Whenever things were in a decent enough state, time to commit now and clean up later.
      • I forgot this a few times and lost some progress after driving into a ditch. But that wasn't too much of a hardship, since I could usually just scroll back in the chat and re-attempt the relevant bits of the session for similar results.
    • I should clean all these bullets up into a proper blog post, but maybe tomorrow. The tl;dr, I guess, is that I think I'm getting comfortable with this stuff.
      • It's surprising me with how much it gets done.
      • I'm getting less surprised with where & how it goes wrong.
      • The failures seem manageable and the results seem decent.
    • I had a kind of meta-chat with Claude about the above process, trying to think through some improvements.
      • One interesting notion was to use some big cloud models for the spec.md to plan.md stage.
      • But, then, switch to a local model running on my laptop for the actual process of implementing the plan.
      • Then, switch back to a big model for the notes.md summary.
      • If this worked, it could save a lot of tokens!
    • I could also see all the above being bundled up and semi-automated into its own agentic workflow.
    # 11:59 pm
    • miscellanea
  • 2025 June 04

    • Hello world!
    • The Verge, How to move a smart home
      • We've moved a lot. Mostly, I distrust smart home gadgets and don't have many. But, several of the houses we've owned had lingering smart devices. Many of them ended up useless. Occasionally a Nest thermostat could be coaxed to betray its former owner and work for me. For the most part, it's a mess.
    • Once upon a time in college, I got a dial-up network connection working to my Commodore Amiga 1200 in my dorm room. I sprinted across campus to a computer lab to telnet back into my A1200. It was so neat. And pointless. But neat.
      • This, of course, was before it occurred to me that anyone with my temporary IP address could have also telnetted into my A1200. 🤷‍♂️
    • Had some adventures in vibe coding, last night. Maybe I'll write about it? I keep reading folks saying this stuff doesn't work, but... it does?
    # 11:59 pm
    • miscellanea
  • Adventures in Vibe Coding with Grafana and Claude

    Since re-launching my blog, I wanted to monitor traffic and logs more closely. Nothing groundbreaking, but it had been a while since I'd run Grafana, Prometheus, and Loki on my own hardware.

    Turns out there's this handy all-in-one docker-compose setup that runs on Synology NAS. It fired up with minimal fuss, and soon I had metrics machinery humming in my basement—except the package didn't include Loki. A quick docs consultation got it running alongside the rest.
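Adding Loki amounted to roughly one more service in the compose file. A sketch, with the usual defaults rather than anything copied from that package:

```yaml
# Sketch of a Loki service to sit alongside Grafana and Prometheus.
# Image tag, port, and volume name are assumptions.
services:
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
    volumes:
      - loki-data:/loki
    restart: unless-stopped

volumes:
  loki-data:
```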

    My blog is a static site hosted via AWS S3 and CloudFront. Both services dump logs into an S3 bucket, but I'd never bothered reading them before—and didn't want to start now. Instead, I loaded up Claude.ai and described my problem:

    I want to get logs out of CloudFront. I have enabled new-style log delivery that stores gzipped JSON logs in an S3 bucket at s3://lmorchard-logs/blog.lmorchard.com/ with names like E5YXU82LZHZCM.2025-06-04-04.d024d283.gz

    Can you help me write a script for my home Loki server to download only new log files and push them into Loki?

    Claude stepped right up:

    I'll help you create a script to process CloudFront logs and push them to Loki. Let me write a Python script that tracks processed files and handles the gzipped JSON format.

    After some vibey iteration, we landed at this artifact:

    It's quite verbose and could use some tightening up. But, I really don't care—it does the quick & dirty needful.

    I wrote zero Python. I just henpecked Claude to add features until the script did what I needed. I wasn't even in an IDE, just the Claude.ai interface in a browser. An interesting thing to note is that Claude didn't have access to my AWS resources—I didn't even give it a sample of my logs. But, still, what I told it about JSON, S3, and CloudFront was enough for it to be off to the races.
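The artifact itself is Claude's and much longer, but the core of such a script boils down to something like this sketch. To be clear about assumptions: the label names, state file, and Loki URL are invented here, and fetching the new .gz objects from S3 (e.g. via boto3) is left out.

```python
# Sketch of the CloudFront-logs-to-Loki idea, not the actual generated
# script. Downloading new .gz objects from S3 (e.g. with boto3) is
# omitted; this covers parsing and building the Loki push payload.
import gzip
import json
import pathlib
import urllib.request

LOKI_URL = "http://localhost:3100/loki/api/v1/push"  # assumed local Loki
STATE_FILE = pathlib.Path("processed-logs.txt")      # remembers seen files

def already_processed() -> set:
    """Names of log files pushed in earlier runs."""
    if STATE_FILE.exists():
        return set(STATE_FILE.read_text().split())
    return set()

def read_records(path):
    """Parse one gzipped file of newline-delimited JSON log records."""
    with gzip.open(path, "rt") as f:
        return [json.loads(line) for line in f if line.strip()]

def to_loki_payload(records, labels=None):
    """Build a Loki push payload from records with 'timestamp' in epoch
    seconds. Loki wants nanosecond timestamps as strings; fractional
    seconds are dropped here to keep the integer math exact."""
    values = [
        [str(int(float(r["timestamp"])) * 1_000_000_000), json.dumps(r)]
        for r in records
    ]
    return {
        "streams": [
            {"stream": labels or {"job": "cloudfront"}, "values": values}
        ]
    }

def push(payload):
    """POST one payload to Loki's push endpoint."""
    req = urllib.request.Request(
        LOKI_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Wired into a cron entry every 5 minutes, a loop over the not-yet-processed files plus `push(to_loki_payload(...))` covers the rest.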

    Anyway, after a quick review and a satisfactory dry run, I dropped it into a cronjob to grab new logs every 5 minutes. Then I pestered Claude with Grafana dashboard questions I could have figured out myself. But why read docs when you can just ask? (Which I realize is ironic, since I wrote Too long? Read anyway. but I think I make an exception for LLMs.)

    Total time from idea to working dashboard: about an hour.

    Not revolutionary, but pretty satisfying for barely having to think about it.

    # 3:26 pm
    • grafana
    • claude
    • vibecoding
    • llms
    • ai
  • 2025 June 02

    • Hello world!
    • Jotted down a couple posts today on AI stuff that aren't particularly revelatory.
      • If anything, they're just me trying to think out loud and clarify.
      • I'm probably going to try writing more stuff like this, if only to be Wrong on the Internet and lure someone in to correct me. 😅
    • Dang it, I don't wanna go to bed, I just discovered strudel.cc
    # 11:59 pm
    • miscellanea
  • Quoting W. David Marx on Gen AI

    W. David Marx, GenAI is Our Polyester:

    Everyone knows what happened next: There was a massive cultural backlash against polyester, which led to the triumphant revaluation of natural fibers such as cotton and linen. The stigma against polyester persists even now. The backlash is often explained as a rejection of its weaknesses as a fiber: polyester's poor aeration makes it feel sticky. ... While polyester took a few decades to lose its appeal, GenAI is already feeling a bit cheesy. We're only a few years into the AI Revolution, and Facebook and X are filled to the brim with “AI slop.” Everyone around the world has near-equal access to these tools, and low-skilled South and Southeast Asian content farmers are the most active creators because their wages are low enough for the platforms' economic incentives to be attractive.

    This along with remembering that some professors are going back to handwritten essays (and also that handwriting is better for memory and learning) had me wondering if there's going to be a handcrafted backlash in the next few years?

    I write journal entries nearly every day by hand—albeit these days on an e-ink tablet. I think that helps me focus on what I want to dredge out of my head. I keep meaning to get back to that handwriting recognition project I started a few weeks ago, since no product I've tried yet has been able to turn my writing into clean machine-readable text.

    But, then again, maybe producing machine-illegible works by hand will be the next big trend?

    # 4:58 pm
    • genai
    • ai
    • llms
  • My New Rube Goldberg Blogging Machine

    According to the count in my archives, I've published over 50 blog posts in the past few weeks. That's roughly 50 more than I managed in the previous 10 years! These aren't masterpieces—mostly just random thoughts and half-baked ideas. But as I mentioned before, I'd rather throw a bunch of stuff at the wall and see what sticks than spend another decade crafting the perfect post that never gets published. So, here's how I tinkered my way into a writing setup that seems to actually be working. [ ... 873 words ... ]

    # 4:04 pm
    • obsidian
    • writing
    • metablogging
  • The Bomb Still Works: On LLM Denial and Magical Thinking

    I found myself in a frustrating argument with someone convinced that LLMs are pure vaporware—incapable of real work. Their reasoning? Since LLMs were trained on stolen material, the results they produce can't actually exist.

    Not that the results should be considered illegitimate or tainted—but that they're literally impossible. That the training data's questionable origins somehow prevents the technology from functioning at all.

    I couldn't convince them otherwise. But, life isn't fair and both things can be true simultaneously: the origin of something can be problematic and the results can be real.

    This analogy kept coming to mind: If someone steals materials to build a bomb and successfully builds it, they have a functioning bomb. The theft doesn't retroactively prevent the bomb from existing or reduce its explosive capability. Proving the theft might help with future bombs or justify going after the bomb-maker, but it doesn't cause the current bomb to magically self-dismantle.

    This seems obvious to me—embarrassingly so. Yet I keep encountering this form of reasoning about LLMs, and it strikes me as a particular kind of denial.

    There's something almost magical in the thinking: that moral illegitimacy can somehow negate physical reality. That if we disapprove strongly enough of how something was created, we can wish away its actual capabilities.

    The ethical questions around LLM training data are important and deserve serious discussion. But pretending the technology doesn't work because we don't like how it was built isn't engaging with reality—it's a form of wishful thinking that prevents us from dealing effectively with the situation we actually face.

    Whether we like it or not, the bomb has been built. Now we need to figure out what to do about it.

    # 12:20 pm
    • llms
    • ai
    • ml
  • Why Prompt Engineering Isn't Just Good Writing

    Someone told me that prompt engineering isn't real—that it's just techbros rebranding "good writing" and "using words well." I disagree, and here's why:

    Prompt engineering fundamentally differs from writing for human audiences because LLMs aren't people. When done rigorously, prompt engineering relies on automated evaluations and measurable metrics at a scale impossible with human communication. While we do test human-facing content through focus groups and A/B testing, the scale and precision (such as it is) here are entirely different.
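To make the evaluation point concrete, here's a toy sketch. Everything in it (the stand-in model, the variants, the single test case) is invented for illustration; a real harness would call an actual model API over many cases with richer metrics.

```python
# Toy sketch of automated prompt evaluation. The "model" below is a
# stand-in function, not a real LLM call: it only gives the terse
# answer an exact-match metric wants when the prompt constrains it.

def fake_model(prompt: str) -> str:
    """Stand-in for an LLM call."""
    if "Answer with a single word." in prompt and "capital of France" in prompt:
        return "Paris"
    return "The capital of France is Paris, a city famous for its museums."

PROMPT_VARIANTS = {
    "bare": "{question}",
    "constrained": "{question} Answer with a single word.",
}

TEST_CASES = [("What is the capital of France?", "Paris")]

def evaluate(variant: str) -> float:
    """Exact-match accuracy of one prompt variant over the test cases."""
    template = PROMPT_VARIANTS[variant]
    hits = sum(
        1
        for question, expected in TEST_CASES
        if fake_model(template.format(question=question)) == expected
    )
    return hits / len(TEST_CASES)
```

The "engineering" loop is the part human-facing writing doesn't have: change the wording, re-run the metric, keep whichever variant scores better.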

    The "engineering" aspect involves systematic tinkering—sometimes by humans tweaking language, sometimes by LLMs themselves—to activate specific emergent behaviors in models. Some of these techniques come from formal research; others are educated hunches that prove effective through testing.

    Effective prompts often resemble terrible writing. The ritual forms, repetitions, and structural patterns that improve LLM performance would make a professional editor cringe. Yet they produce measurable improvements in evaluation metrics.

    Consider adversarial prompts: they're often stuffed with tokens that are nonsense to humans but exploit specific model quirks. Here, the goal is explicitly to use language in ways that aren't human-legible, making attacks harder to detect during review.

    Good writing skills can help someone pick up prompt engineering faster, but mastering it requires learning to use words and grammar in weird, counterintuitive ways that are frankly sometimes horrifying.

    All-in-all, prompt engineering may still be somewhat hand-wavy as a discipline, but it's definitely real—and definitely not just rebranded writing advice.

    # 12:12 pm
    • ai
    • llms
    • promptengineering
  • 2025 May 31

    • Hello world!
    • I need to come up with a process here that keeps these miscellanea posts marked as a draft, if I never get past "Hello world!"
      • I start a new file every morning from a template, with the intent that I'll drop by and jot some things here throughout the day. But, this week turned out to be particularly busy. So, I went a few days never getting past "Hello world!" and that's not super interesting to publish.
      • At some point, I want to hook this stuff up to Mastodon and Bluesky accounts. I don't want to just post templated nonsense. (Just intentional nonsense.)
    • Maybe there's something in the air, because a week or two ago I got suddenly compelled to dive down a rabbit hole about the transformer robot watch I had when I was a kid in the 80s.
      • The one I had was confiscated by a teacher and never given back. I'm still salty about that.
      • But, just a couple days ago, I saw this video from Secret Galaxy on the history of the Kronoform watch
      • From there, I found this giant-sized printable version of the Takara Kronoform in desktop clock form - I'm going to have to give that a try.
      • I kind of want to try building some version of the robot watch with some smart guts. I probably won't get around to it, but why do smart watches have to be so boring?
      • Maybe I can split the difference by sticking a smart display in the desktop clock version? Hook it up to Home Assistant and make it do... I don't know what.
    # 11:59 pm
    • miscellanea
  • No-build frontend web development

    Simon Willison on no-build web dev:

    If you've found web development frustrating over the past 5-10 years, here's something that has worked great for me: give yourself permission to avoid any form of frontend build system (so no npm / React / TypeScript / JSX / Babel / Vite / Tailwind etc) and code in HTML and JavaScript like it's 2009.

    This blog has a "backend" build process to produce the static HTML. But, the frontend is pretty much build-free.

    Web development with "vanilla" JavaScript has gotten pretty good in the last decade, thanks to Modules, dynamic import(), Custom Elements, and a pile of other relatively recent APIs.

    The easy path at work these days tends to be Next.js, but I kind of hate it. All my side projects start with touch index.{html,js,css}. I roll on from there with maybe a live-reload HTTP server pointed at the directory (e.g. npx reload src).

    That said, I have started playing with carefully re-introducing some build tooling for a few side projects - but, only for external dependencies. I've tinkered a bit with using esbuild to compose bundles as JS modules importable by the rest of my unbundled modules.

    The nice thing about this is that I can treat those external dependencies as standalone utility modules without infecting the rest of my project with build machinery. I can even just check in a copy of the built asset to keep the project stable and usable, years later.
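That esbuild step amounts to roughly a one-liner per dependency. A sketch, where the entry point and output path are just examples:

```shell
# Bundle one external dependency into a standalone ES module that the
# rest of the unbundled modules can plain-import. Paths are examples.
npx esbuild vendor/marked-entry.js --bundle --format=esm \
  --outfile=src/vendor/marked.js
```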

    # 11:04 am
    • es6
    • js
    • javascript
    • webdev
  • 2025 May 29

    • Hello world!
    • Been doing a bunch of vibe coding lately in Windsurf, "pairing" with Claude. A thing I keep wondering is how to make this process more multiplayer.
      • Like, there's a conversation between Claude and me. But I can't easily share that transcript with another human teammate.
      • That conversation is about as important as the code for making sense of things. More so, if we start to consider the code as an increasingly derivative product of the conversation.
      • So, if my teammate is also working in Windsurf with Claude, they're missing all the context I built up that brought the project to its current state.
      • And this isn't even getting into the notion of "mob coding" where maybe there's 2-3 of us humans with an AI agent riding shotgun.
      • I'm thinking the conversation with the agent is a particular form of documentation that should be preserved - maybe as an artifact paired with each discrete git commit?
      • Of course, the conversation is messy, with lots of iteration. So maybe it would help if there's a summary or a tl;dr ginned up at commit time, too? (That could be the commit message, I guess?)
    • I like the notion of Architecture Decision Records (ADRs) - I wonder if something like that could work for iteration sessions with an AI agent?
      • If we can scope a session to something discrete like a feature and capture the conversation from start to end in one of a rolling series of markdown files, that might be interesting context for both human and AI.
    • I know all the above presupposes that coding with an AI agent is a real and valuable thing. But, after putting a bunch of hours into giving it a try, I've morphed from skeptical disbelief to cautious buy-in.
    # 11:59 pm
    • miscellanea
  • Quoting Jon Udell on MCP and RSS

    Jon Udell, MCP Is RSS for AI:

    It may sound impressive to say “I built an MCP” server, but the mechanics are delightfully trivial — which is why I’m inclined to think of MCP as RSS for AI. The beauty of RSS as a protocol was its simplicity. You can write an RSS feed by hand, or write very simple code to generate one. Writing an RSS reader was the starter project for many a beginning coder. It’s not quite that easy to work with MCP’s protocol, JSON-RPC, but vastly easier than working with, say, the protocols spoken by Fediverse or Bluesky clients and servers.

    I need to play with MCP more. I've gotten through the basic "hello world" tutorial and hacked together a server that emits random cat facts. That was pretty cool, asking Claude to do "research" on cats and vomit a little throwaway essay sourced from that.

    But, yeah, it was very simple. In fact, I worried it was too simple. It's kind of a slapdash protocol. But I think I worried the same thing about RSS when I first saw it, way back in 1999. I figured I'd have to learn SGML and XML and NewsML (?!?) to do anything interesting with syndicating content on the web. I don't even remember where I dug up references to NewsML back then.
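For a feel of how simple the JSON-RPC layer underneath is, here's a toy dispatcher in the spirit of that cat-facts server. To be clear, this is not the actual MCP SDK or its protocol surface, just a minimal illustration of the JSON-RPC 2.0 request/response shape; the `cat_fact` method name is made up.

```python
import json
import random

# A made-up "tool" in the spirit of the cat-facts server described above.
CAT_FACTS = [
    "Cats sleep for roughly two-thirds of their lives.",
    "A group of cats is called a clowder.",
    "Cats can squeeze through any gap wider than their whiskers.",
]


def handle_request(raw: str) -> str:
    """Dispatch a single JSON-RPC 2.0 request to our lone method."""
    req = json.loads(raw)
    if req.get("method") == "cat_fact":
        resp = {"jsonrpc": "2.0", "id": req.get("id"),
                "result": {"fact": random.choice(CAT_FACTS)}}
    else:
        # -32601 is JSON-RPC 2.0's standard "Method not found" code.
        resp = {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return json.dumps(resp)
```

That's roughly the level of ceremony involved: parse JSON, match a method name, return JSON. Hand-writable, much like an RSS feed was.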

    Like Anil Dash notes, "slightly under-specified protocols that quickly get adopted by all the players in a space are what wins". There do seem to be similar vibes coming from MCP as we got from things like RSS and XML-RPC back in the 2000s.

    # 5:27 pm
    • llm
    • ai
    • rss
    • mcp
  • 2025 May 28

    • Hello world!
    • Busy day, so not as many words spewed onto the internet.
    • But, even if I'm not exactly producing best-sellers here, I've been fairly pleased with having gotten into a daily groove of writing.
    # 11:59 pm
    • miscellanea
  • 2025 May 27

    • Hello world!
    • It's a caremad day, I guess.
    • This blog publishes every 10 minutes, if I have changes to the day's markdown document.
      • I'm starting to feel like that's a Pomodoro timer if I'm off on a rant. I need to beat the micro-deadline and be done with it.
    # 11:59 pm
    • miscellanea
  • Only the Metrics Care

    Dan Sinker, The Who Cares Era:

    The writer didn't care. The supplement's editors didn't care. The biz people on both sides of the sale of the supplement didn't care. The production people didn't care. And, the fact that it took two days for anyone to discover this epic fuckup in print means that, ultimately, the reader didn't care either.

    It's so emblematic of the moment we're in, the Who Cares Era, where completely disposable things are shoddily produced for people to mostly ignore.

    This hits me hard right now. It’s part of a broader sadness I’ve been feeling—especially around the shrinking prospects for paid work that actually feels career-meaningful.

    Dan calls it “disheartening,” and I feel that. He also writes, “It’s easy to blame this all on AI, but it’s not just that.” Exactly. This didn’t start with LLMs. They just sped things up—and ensured even fewer people get paid to produce an ever-growing volume of slop.

    What’s worse: much of this output isn’t even for people anymore.

    The user isn’t the customer. And they’re not the product either. The real product is behavioral optimization—metrics on a dashboard. The paying customer is somewhere else entirely, and the "content" is just a means to nudge behavior and juice KPIs.

    That’s why we see this flood of AI-generated blog posts, podcasts, and articles that barely say anything and just conjure a vibe. Why publish something devoid of editorial oversight or substance? Like, who is this for?! It meets a quota, hits a keyword target, triggers an engagement metric. But, it doesn't reach a person except incidentally or by accident.

    The point isn’t to communicate. It’s to simulate relevance in order to optimize growth. It's all goal-tracking, A/B tests, fake doors, and dark patterns.

    It’s not publishing. It’s performance art for algorithms. Interpretive dance for the bots. It's sympathetic magic—building the runways and replicas, hoping the traffic increases.

    And that’s what makes me so sad. It reveals such a grim meathook future ahead, a solipsistic view of humanity: most people reduced to NPCs in someone else's growth funnel. Not peers. Not audiences. Just marks—behavioral units to be nudged for another uptick.

    Anyway, I don’t have a better conclusion than Dan’s: "In the Who Cares Era, the most radical thing you can do is care." He’s right. And honestly, I’m still trying to figure out what that looks like, day to day—besides being caremad and grumpy all the time, that is.

    # 4:41 pm
    • slop
    • ai
    • webdev
    • career
  • Involuntary Stone Soup AI

    Alison Gopnik, "Stone Soup AI":

    Here is a modern version of the tale. Some tech execs came to the village of computer users and said, “We have a magic algorithm that will make artificial general intelligence from just gradient descent, next-token prediction, and transformers.” “Really,” said the users, “that does sound magical.” “Of course, it will be even better and more intelligent if we add more data — especially text and images,” said the execs. “That sounds good,” said the users. “We have some extra texts and images we created stashed away on the internet — we could put those in.”

    I kind of like this reframing of the story, except... while the three hungry travelers did provide the cauldron and the stones and the firewood, they also went ahead and helped themselves to the onions and carrots and chickens from the villagers' stores.

    Then they sell the soup to the villagers and cry that there'd be no soup to sell if they weren't allowed to dig around in folks' pantries without asking.

    # 11:56 am
    • ai
    • llms
  • Quoting Matthew Haughey about YouTube Ad Revenue

    Matthew Haughey, "YouTube revenue and recent good ones":

    For high-end channels that put out weekly videos, they often have a team of 4-5 people behind the scenes, shooting video and editing it all and getting everyone paid but it doesn't make sense based solely on YouTube/Google's ad income. Instead, they take a break into the middle of their video to talk about SquareSpace or BetterHealth, and the creators I follow admit they get $5k-$30k for those ads, while the same video might only make a few hundred bucks from YouTube. When I hear this, I'm both shocked at how high the ad rates are for an embedded ad read but also how little YouTube/Google pays their creators.

    I was just wondering the other day whether YouTube has optimized its ads to make folks think favorably of the advertiser, or to annoy users into ponying up for a Premium subscription? It really seems like the latter, way more than the former.

    Sounds like the cruddy ad interjections don't even make much money for the folks who make the videos that get interrupted—and I've heard from several of those folks that they make more money from Premium viewers anyway.

    # 11:10 am
    • youtube
    • business
    • money
    • ads
    • advertising
  • 2025 May 26

    • Hello world!
    • Figured I might write a bit more here over the weekend, but instead I mostly puttered around the house catching up on some needfuls.
    • There's a part of me that kind of wishes I'd taken photos and written about it - but meh, not everything needs documenting.
    • Dumped some thoughts about Glitch, but also feeling a further rant brewing about how I'm really not feeling like it's 2004 again anymore.
      • I'm not optimistic that anything like the open web as we know it survives the bots.
      • Maybe something different follows?
      • Maybe that looks like MCP?
      • Maybe the web goes the way of dial-up BBSes - they're still around, mind you, but just as a weird little niche hobby that won't fund my mortgage.
      • Mostly I'm really sad, lately. I'm trying to get past that to some renewed enthusiasm.
    # 11:59 pm
    • miscellanea
  • RIP Glitch

    Keith Kurson, "The End of Glitch (Even Though They Say It Isn't)":

    The thing that breaks my heart isn’t just that another platform is shutting down—it’s that we’re losing one of the last places on the internet that prioritized joy and experimentation over engagement metrics and revenue optimization.

    Glitch was always one of those platforms I wanted to love more than I actually did. I kept a paid membership for a while, built a few projects there, and genuinely rooted for what they were trying to do.

    But I kept bumping against its limits. The editor never quite clicked for me, and I found myself gravitating toward other hosting options: AWS, GitHub Pages, even hardware in my basement. When Fastly acquired Glitch, I assumed they'd evolve it toward something more like Amazon Cloud9 or GitHub Codespaces—powerful cloud development environments where I've actually gotten real work done. Fastly has all the pieces to build that kind of product—but maybe the Glitch folks could have done it with a bit of whimsy?

    The social aspects never hooked me either, though that's more a reflection of my own limitations than of Glitch. Being social takes effort for me, and I never quite got pulled into that community.

    The Sustainability Problem

    I admired how the Glitch team cultivated that community space, even if I didn't fully participate in it. Which makes its wind-down all the more frustrating.

    What breaks my heart is the same thing Kurson identifies: platforms that prioritize creativity over metrics rarely survive. Most of these efforts end up ephemeral unless they're self-funded labors of love by people who pay their bills elsewhere.

    I keep wishing Mozilla offered something like this—imagine if every Firefox Account came with hosting, storage, and compute, maybe with an IDE integrated right into MDN. But even Mozilla has struggled to find sustainable models for tools that help people create rather than just consume.

    Casualties of the AI Gold Rush

    There's something particularly galling about Glitch's timing. We're in the middle of an AI gold rush where capital flows freely to companies building the next ChatGPT wrapper, but platforms that actually help humans learn to code and create can't find sustainable footing. The bots are watching us dance and getting all the funding instead.

    It feels like 2000-2002 all over again—a lot of frothy investment that'll eventually crash, leaving us to figure out who was pets.com and who was Amazon. I don't expect it all to evaporate, but we're headed for serious churn until or unless genuinely generative new cycles emerge. Glitch feels like a casualty of that transition.

    The web needs more places that prioritize joy and experimentation. Losing them one by one makes the internet a little less magical, a little more extractive, and a lot less welcoming to the next generation of creators who might have learned to love building things on the web.

    # 10:05 pm
    • glitch
    • webdev
    • fastly
    • sustainability
  • 2025 May 22

    • Hello world!
    • Taking a couple days off work for a long weekend. First leisure activity: scrubbing down my balcony to open it for the season.
    • Home-ownership isn't so much a dream as it's a hobby and a part-time vocation.
    # 11:59 pm
    • miscellanea
  • 2025 May 21

    • Hello world!
    • Boy, Sam & Jony really like to talk. What was that all about?
    • After making the tweaks to this blog to make it easier to post and publish more frequently, I discovered that I'd doubled my AWS S3 bill! 😅
      • Turns out that the process I set up over 7 years ago just re-uploads the entire site, every time I push a change. That wasn't a noticeable problem until I started pushing multiple times per day.
      • So, I'm looking into a new workflow based on rclone that should be able to do more differential uploads based on changes.
      • The first version of that workflow deleted the contents of my blog. (Oops.) The second version re-uploaded everything in about 6 minutes.
      • One more thing I should write about if I get around to an entry describing the current state of this contraption.
    • Juggling a couple AI related ideas in my head that might turn into a longer post:
      • Seems to me like most AI-assisted tools these days are single-player and it's really hard to pass the baton of a project underway to someone else.
      • Seems to me like many folks using AI-assisted tools think they're the only Chosen One in the world with access to those tools, and that everyone else around is an agency-free NPC.
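The rclone-based workflow mentioned above might look something like this command fragment (the bucket and path names are hypothetical, and `--dry-run` comes first, given how my first attempt went):

```shell
# Preview what would change, without touching the bucket.
rclone sync ./public s3:my-blog-bucket --checksum --dry-run

# Then for real: only files whose checksums differ get uploaded.
rclone sync ./public s3:my-blog-bucket --checksum
```

`rclone sync` makes the destination match the source, so a wrong source path can empty the bucket; hence the dry run.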
    # 11:59 pm
    • miscellanea
  • 2025 May 20

    • Hello world!
    • Added support for a draft: true flag on entries here, which should at least help me keep half-broken things from deploying mid-rant
    • Now I just have to make sure to use that flag right so I still don't include half-broken things here 😅
    • I should figure out a decent way for showing really long strings here, like the path names in that cloud saves post today 🤔
    • How the hell do I remember things like "Pumas on Hoverbikes is at monkeybagel.com" but I have to remind myself to eat lunch as a separate step from making lunch?
    # 11:59 pm
    • miscellanea
  • Quoting Greg Storey on Minimum Viable Humans

    Greg Storey's "Minimum Viable Humans" hits like a funnel cloud:

    Companies aren't just cutting costs—they're conducting a fundamental reset to find their Minimum Viable Humans model. They're stripping organizations down to the ground—the irreducible human functions that (currently) can't be replaced or augmented by AI.

    This rings true to what I've been seeing. The cruelty feels incidental to the process—not to me and everyone I know, mind you—but indignation won't pay my mortgage.

    Trying to get your old job back is futile—that role, as you remember it, isn’t coming back. The opportunity now lies in positioning yourself for the roles and capabilities that will emerge as organizations rebuild from the bottom up—core human strengths like creativity and ethical judgment, the ability to collaborate with AI, your unique experience and expertise, and the kind of discernment that values judgment over process.

    Brutal, yet maybe positive? If drudgery gets distilled out, what's left are human creativity, judgment, and experience. That sounds nice, like what classic sci-fi promised us.

    But you can't mechanize that with a Taylorist time-and-motion study. Some jobs are just paychecks, some spark enthusiasm, and some flip between both unpredictably.

    Will companies make allowances for that, while rewarding what they're asking for? I know we're seeing a reckoning right now, a firm snatching back from privileged labor. Fine, I don't need a free lunch or a foosball table, but I do need a health plan and a few days off. What's the churn for all of us while they figure that out?

    The manager's role isn't just being eliminated—it's being fundamentally redefined from day-to-day oversight to exception handling, strategic communication, and resource allocation across much larger teams. It will keep going into augmenting the decision-making process for leaders and individual contributors alike. The boundaries of acceptable risk at all levels are now in a constant prototype state as AI becomes more powerful and faster, giving mere mortals more access to information and insight than ever before. The trick will be what training we need in order to make the pairing work.

    This might suit an ADHD-head like me. I could see myself as a human relay, occasionally reallocating pylons and vespene gas amongst largely autonomous Protoss units. I'm already having better luck than some with AI-assisted coding. Maybe I'm learning the magic invocations? Maybe I'm a special kind of moron? Either way, I'm ending up with working code and nodding heads. This gives me some hope that I can "stay current" - whatever that means in the current era.

    Admittedly, I'm more comfortable directing Starcraft units and AI agents like this. That feels like a kind of management to me. I'm not so comfortable directing human co-workers with the same interface, though.

    This isn't just another tech cycle. It's a fundamental recalibration of the human role in organizational life. And while there's no easy path through this transition, understanding what's actually happening might help you find your footing on shifting ground.

    I keep slipping into snark because part of me finds this exciting while another part is cynical and burnt out. I'm hoping optimism holds out longer than cynicism. Maybe the thumbless neurotic puma driving a maroon AMC Gremlin has the last laugh after all? (Anybody get that reference anymore?) I dunno, I'm just trying to figure how much of this is like that surface rupture in Myanmar last week and whether my Crocs can carry me to the right side of it.

    # 5:30 pm
    • career
    • ai
    • llms
  • Liberating save games from Xbox Cloud Gaming

    Ah, the problems of a masochistic nerd trying to game. For about a year now, my gaming PC has run Bazzite Linux because I got tired of Windows. I've also got a Game Pass subscription, prepaid for a long while from before I switched to Linux. This was not a well-thought-out plan.

    While other game stores work pretty great on Linux, the only way to use my Game Pass subscription there is via Xbox Cloud Gaming. The Xbox app doesn't run on Linux and won't install Game Pass titles locally. Still, streaming works pretty well for games I want to try and then ditch when I get bored. But it's ephemeral enough that I wouldn't want to commit to buying a game through it outright.

    So, I started playing Clair Obscur: Expedition 33 via Cloud Gaming. After about 13 hours in, I realized I wanted to buy the game. As it turns out, Steam likes Linux and the game runs well there. It was on sale in a bundle, so I went ahead and bought it from Steam instead of from Xbox.

    But, my 13-hour-old save game was trapped in the cloud. I guessed I'd just have to abandon it and start over. That is, until I pieced a few things together:

    Thanks to Xbox Play Anywhere, the saves in Cloud Gaming sync down to PC game installations. I had one old Windows laptop left in the house that would install Game Pass games - it just played them horribly. Once I installed the game locally from Game Pass and booted it up once, my cloud save descended onto my laptop hard drive.

    Then, I installed the game again from Steam - i.e. the copy I purchased. From there, I could follow this guide to transplant my save file from Game Pass to Steam. Once properly transplanted, the save game found its way onto Steam's cloud sync servers and then back onto my real gaming PC.

    I realize this sounds like the plot to a dork heist. But, it worked!

    # 11:40 am
    • gaming
    • xbox
    • cloud
    • msft
© 2024 Les Orchard <me@lmorchard.com>