Cleaning Up The RideHome Archive

I listen to the RideHome podcast every weekday, and some weekends when there are bonus episodes that pique my interest. Around November 24, 2018, I think I started hearing Brian, the host, say things like “I’m sure I’ve talked about this before,” and thought to myself that this should be a pretty easy problem to solve. Well, if it’s easy, it wasn’t easy for me. But that rarely stops me from trying…

I hacked together some really terrible python to grab the RSS feed, parse it, extract the links, and spit out Markdown. Why Markdown when you already had HTML? I was going to use GitHub Pages, and Markdown was the easiest path to that I knew of at the time. What I didn’t know was that Markdown was going to become the format of choice for LLMs. Anyways, once I got my hack-a-thon scripts working, I basically just ran them once in a while to update my Markdown files and push everything to GitHub for hosting. It mostly worked most of the time, except when the human making the show notes would change how they did the markup. Silly humans, always ruining everything with their capriciousness.
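The original scripts aren’t shown here, but the core idea (parse the RSS, pull out the episode links, emit Markdown) fits in a few lines of standard-library python. This is a minimal sketch with a fake two-item feed standing in for the real RideHome RSS; the function name and sample data are mine, not from the actual scripts:

```python
import xml.etree.ElementTree as ET

# A tiny hypothetical RSS document standing in for the real podcast feed.
RSS = """<rss><channel>
  <item><title>Ep 1</title><link>https://example.com/ep1</link></item>
  <item><title>Ep 2</title><link>https://example.com/ep2</link></item>
</channel></rss>"""

def feed_to_markdown(rss_text):
    """Parse an RSS document and emit one Markdown link per episode."""
    channel = ET.fromstring(rss_text).find("channel")
    lines = []
    for item in channel.findall("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        lines.append(f"- [{title}]({link})")
    return "\n".join(lines)

print(feed_to_markdown(RSS))
# - [Ep 1](https://example.com/ep1)
# - [Ep 2](https://example.com/ep2)
```

The fragile part in practice is exactly what the post describes: real show notes live inside the item’s description HTML, and any change to that markup breaks the extraction.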

Now, fast forward to December 8, 2025 and I drop a big “Claude Rework” into the repository. Yes, as regular readers know, I’ve been using Claude Code for my little one-off personal projects. What really helped though was Nick Tune’s claude-skillz. I used the TDD persona and refactored my crappy scripts all the way to the point where I could start asking Claude Code to make new features. What kind of new features? Well, I had always wanted to do some data analysis, but I was always too busy and then…this made it so easy. I made a Wrapped feature for 2025 and then ran it against all the previous data I had gathered.

At work I have a reputation as an “AI Hater”, which if you only want to characterize “AI” as “will replace all humans and reduce human labor dollars to zero”, then yes…that’s accurate. I use Claude, ChatGPT, and Gemini almost every day for something. Sometimes I’m just testing the same question across models. Sometimes I’m having them all independently evaluate something I’ve written, always with the prompt that the model is my intended audience and to ask me questions about what I’ve written to help me find the gaps I’ve left or where I was unclear. Seriously, they’re really good at this…if you can ignore the sycophancy.

So what? Now I have some python that’s less sucky than when I started…big deal. Well, sure…but for me it’s more about finding where the actual value can be derived. Is it worth it to back the systems that are actively trying to replace me and the ridiculous race to turn the planet into either energy plants or data centers to consume that energy? Ask again later, hopefully we and the planet are around long enough for that. Right now, for me…my little project is in a much better state and I can do things that I probably wouldn’t have done on my own. Maybe that’s enough.

MLS Soccer 2026 Schedule Shenanigans

The 2026 MLS Schedule has been released! That's the good news. If you want to see the entire season, it seems like you have to go to a team and then view the entire schedule from there. That's fine, but I wanted it in a Google Sheet, so I had Claude work on a solution.

I present to you the mls-schedule-generator. Is it "vibe coded"? Yeah, mostly. Is this good code? Probably not. Does it work? Yeah. You can see the results in this sheet.

AI is Biased and Doesn’t Share Your Values

I guess if the crypto traders say it, it’s real?

“It’s hard to say how much we can take away from this,” Azhang said. “One thing that we do know is that there are patterns in the models and they’re clearly biased and have preferences.

“For example, Claude almost always goes long and refuses to go short. It’s like an eternal optimist whereas Gemini is happy to short,” Azhang said. “They clearly have these inductive biases when it comes to trading.”

Simon Wardley on LinkedIn:

GPTs are a non kinetic form of warfare designed to embed the values of a small number of people into much wider communities by capturing the process of decision making. The delivery mechanism is the appearance of helpfulness i.e. coherent and authoritative arguments. The payload is helplessness and the creation of a new theocracy.

Simon Wardley on Medium:

By controlling the tools available to an individual, one can shape the type and quality of information they access. For example, limiting access to specific scientific instruments or technologies can constrain a person’s ability to gather empirical evidence and engage in scientific reasoning. Conversely, providing access to biased or misleading tools can lead individuals to draw erroneous conclusions or develop skewed perspectives about the world.

Simon Wardley on LinkedIn:

Do remember, LLMs are a non kinetic form of warfare. By controlling the language, medium and tools then they influence how you reason about any space. If the hypothesis holds then over time, your values and hence your behaviour will become altered by your exposure. This is no different to how we have used art to change societies.

Microsoft’s AI chief Mustafa Suleyman in the Wall Street Journal (Apple News link):

AI “is going to become more humanlike, but it won’t have the property of experiencing suffering or pain itself, and therefore we shouldn’t over-empathize with it,” Suleyman said in an interview. “We want to create types of systems that are aligned to human values by default. That means they are not designed to exceed and escape human control.”

Which human values do you think he means?

Posted in AI

Artificially Inflated #1

I do not gather these links to say that Generative AI has no value or utility, only that perhaps we don’t believe the hype.

This first batch of links is entirely me going back through my Messages with Dr Kimberly Jaxon, who is dealing with Generative AI in many courses where text represents the artifact that is required to complete an assignment.

Not new, but relatively new to me. I Will Fucking Piledrive You If You Mention AI Again — Ludicity.

Simon Wardley, of Wardley Maps fame, has been absolutely on fire lately. Do you use ChatGPT for search?

Camille Fournier has entered the chat. Things I Currently Believe About AI and Tech Employment

Speaking of text-based artifacts, here’s a fun thread on a study of human success at detecting AI generated writing.

Not sure how I can not include a fart joke at ChatGPT’s expense.

What if it’s mostly grift and this particular grift is draining funds from the California State University? via Dr. Cat Hick.

Neal Stephenson, one of my favorite authors, went to New Zealand and said some words. Remarks on AI from NZ

I follow conversations among professional educators who all report the same phenomenon, which is that their students use ChatGPT for everything, and in consequence learn nothing. We may end up with at least one generation of people who are like the Eloi in H.G. Wells’s The Time Machine, in that they are mental weaklings utterly dependent on technologies that they don’t understand and that they could never rebuild from scratch were they to break down.

You know jwz has had enough. College student asks for her tuition fees back after catching her professor using ChatGPT

I really enjoyed Hazel Weakly’s post, Stop Building AI Tools Backwards.


How do I keep up? I don’t. This is but a tiny, minuscule fraction of a fraction of the words and hand-wringing (including my own!) that are flying around the internet. I take advantage of Listen Later to convert long articles and PDFs into mp3 for podcast feed consumption. Yes, it uses AI…like I said, I don’t deny the utility.

Posted in AI

Coding with Claude

There was something I wanted to do and I really didn’t have time for a side quest, so I asked Claude for help. As with most of my coding side quests, I spent a lot of time familiarizing myself with Python, again. Then for this particular quest I would also have to interact with the GitHub API. Oh, and I should probably figure out python virtual environments, because why not?

Ugh, all I want to do is label a bunch of pull requests based on specific files being modified!

If you buy into the AI hype, we’re either weeks away from being replaced or have already been replaced and we just don’t know it yet. Yes, it’s tiresome. Welcome to Technology! So I just threw everything into a prompt to see what would come back and SPOILER ALERT, it wasn’t great. It didn’t even work. Now, that was probably more than a little bit my fault, as I had left out some key details, like authenticating with GitHub Personal Access Tokens (PAT) and that there were close to 10,000 pull requests in this repository.

But, hey…AI is super intelligent! Except, this isn’t artificial intelligence, it’s generative AI. It’s probabilistically picking the next best token based on the garbage input I gave it. Over and over until it’s “done”, at which point it tells you that it’s done and everything is awesome.

A small aside: that most chat bots are so “happy” when you point out how wrong they are is irritating.

It never asked questions to get more context, to get a deeper understanding of the problem domain, or even to think of cases where the generated solution might not be a good solution at all. Nope, it needed to spit out text that would likely be interpreted by the python runtime. Which, that it’s even this good at doing that is truly wild. And yet, I was not feeling great about completing my side quest.

I then threw away everything and decided to start over and be more helpful to my partner in crime. The first iteration was just using a PAT to connect to GitHub and print out basic repository information. It worked! Okay, now let’s get the ten latest pull requests and print them out. It worked! Okay, now let’s test if any files in a specific directory were part of the pull request. It worked! Everything’s coming up Milhouse.
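That last step, checking whether a pull request touched a specific directory, is the one piece that works as a pure function, independent of PyGithub (which hands you the changed filenames). This is my own sketch, not the generated script, and the function name is hypothetical:

```python
from pathlib import PurePosixPath

def touches_directory(changed_files, directory):
    """Return True if any changed file in a pull request lives under `directory`.

    Compares path components rather than raw prefixes, so "docs2/x.md"
    does not count as being under "docs".
    """
    dir_parts = PurePosixPath(directory).parts
    return any(
        PurePosixPath(f).parts[: len(dir_parts)] == dir_parts
        for f in changed_files
    )

print(touches_directory(["docs/intro.md", "src/app.py"], "docs"))  # True
print(touches_directory(["src/app.py"], "docs"))                    # False
```

Keeping this logic separate from the API calls is also what makes it testable without hitting GitHub at all.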

Now we come to the part where the basics are handled and you run into the cold, hard reality that the API doesn’t really handle what you want to do and that you’re going to need to get creative. Unsurprisingly, Claude was not very creative at coming up with solutions. Luckily, Claude had a partner that could help with the problem, which was that we were looking at going through way too many pull requests. Maybe we can batch them? The API supports pagination! Oh, but we’re still just paging over 10,000 pull requests.
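Why pagination alone doesn’t save you: GitHub’s REST API caps `per_page` at 100, so listing everything is still a lot of round trips. The arithmetic, as a tiny sketch:

```python
def pages_needed(total_items, per_page=100):
    """API requests required to page through a listing.

    per_page=100 is GitHub's maximum page size; ceiling division
    because a final partial page still costs one request.
    """
    return -(-total_items // per_page)

print(pages_needed(10_000))  # 100 requests just to enumerate the pull requests
```

A hundred requests before you’ve labeled anything, and most of those pages are pull requests you don’t care about.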

Claude wasn’t aware that I really only wanted to label pull requests after a certain date. So, I provided that detail and we still ran into roadblocks, which I believe are mostly inherent either in the GitHub API or in PyGithub. Again, this was a side quest and I didn’t want to spend a ton of time on this, so we turn to hacks! I knew the ID of the pull request that would be the starting date and I just told Claude to start at that ID and then incrementally iterate through pull requests until we processed them all. It worked!
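The hack is simple enough to sketch. Here the API is replaced with a hypothetical in-memory stub (`fetch_pr`, `FAKE_PRS` are mine, purely for illustration), and unlike the script described below, this version bolts on a stop condition so it doesn’t ask for pull requests forever:

```python
# Hypothetical stand-in for the GitHub API: maps a pull request number
# to its list of changed files, or None if no such PR exists.
FAKE_PRS = {
    101: ["src/app.py"],
    102: ["docs/guide.md"],
    103: ["docs/api.md", "src/util.py"],
}

def fetch_pr(number):
    return FAKE_PRS.get(number)

def label_from(start_id, directory, max_misses=3):
    """Walk pull request IDs upward from a known starting point,
    collecting the ones that touch `directory`. Give up after a few
    consecutive missing IDs instead of looping forever."""
    labeled, misses, number = [], 0, start_id
    while misses < max_misses:
        files = fetch_pr(number)
        if files is None:
            misses += 1
        else:
            misses = 0
            if any(f.startswith(directory + "/") for f in files):
                labeled.append(number)
        number += 1
    return labeled

print(label_from(101, "docs"))  # → [102, 103]
```

In the real script the labeling call would go where `labeled.append` is; with PyGithub that’s a `pr.add_to_labels(...)` on the pull request object.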

There are still tons of problems, the most glaring being that if you don’t stop the script it will just keep trying to request pull requests that don’t exist, because error checking is for humans I guess. I can imagine Claude yelling at me, “well, you never asked for error checking, you idiot. I would have been happy to provide that but noooooooo, you didn’t seem to think it was needed.” There’s a parable here about turning the implicit into the explicit, but that’s out of scope.

All this is to say that, at least for me, slowly walking through a process was far more effective than trying to get it to do everything at once. Anybody that has learned how to program, or even just solve problems, will say, “well, of course you should work in small incremental chunks,” but the AI hype is working against that. There’s also the inference cost and the context window growing too long, so much so that Claude told me I should start a new chat because this side quest was going to blow through my quota faster.

Side quest, completed.