Rewilding Software Engineering Chapter 6

Simon Wardley and Tudor Girba have posted the latest chapter to their book, Rewilding Software Engineering, titled "Myths we tell ourselves."

As these posts are on Medium but also licensed under Creative Commons BY-NC-SA 4.0, I turned the chapters into a book that you can download in PDF form. The "code", as it were, is freely available, as it's just the Medium posts converted into AsciiDoc format.

I very much appreciate them writing and publishing the book and I hope this format makes it easier for more people to read.

The Forces Working Against UX

Why do we keep running into situations where a company seems intent on making its systems, or its people, incapable of taking our money for services we actively want to buy?

I had the good fortune to hang out with Joshua Kerievsky at Øredev 2025. We were on the same flight home, so we made our way to the Copenhagen airport and decided to buy a day pass to the SAS (Scandinavian Airlines System) lounge. We had arrived at the airport the recommended three hours before our flight, so the lounge seemed like a good idea. Well, at the time at least. It quickly turned into a user experience case study of how not to delight a customer.

The first hurdle was the front desk. The associate asked when our flight was, and we told her it was at noon. She told us that anything less than three hours before departure isn't eligible. I held up my app and explained that it was allowing me to purchase, but rules are rules: she still said she couldn't help, though we were welcome to try the app. Joshua proceeded to navigate the SAS phone app to the screen where you could purchase a pass, which was a nontrivial navigation experience, but we got there. He tapped the button and nothing happened. He tried again. And again. "Maybe it's because I'm using the app as a guest," he wondered aloud. He didn't want to create an account because he doesn't typically fly SAS, so why bother with yet another username/password (no passkey support) and the inevitable barrage of unwanted marketing emails that we are opted into by default everywhere.

All of this happened standing at the lounge front counter, so we decided to find a place to sit down and see if creating an account would unlock the functionality to give SAS the $55 USD they desired for lounge access. He gets out his laptop and begins the account creation process. We all know the pain of trying to use most websites on spotty Wi-Fi, and flysas.com was no exception. He gets to the create-account page and it's protected by a CAPTCHA, which would not load. If you can't solve the CAPTCHA, you can't create an account. So we wrestle with various networks and phone tethering, and finally get it to load. Then the password rules. Then resetting the password because it didn't save in the password manager. Then messaging the password between devices because syncing isn't going great, because Wi-Fi. It may shock you to learn that some airport Wi-Fi networks are not great.

Finally he’s in the app, gets to the buried part of the app where you can add lounge access and the price has gone up since we started! They wanted to charge him more money to spend less time in the lounge! We both make our purchases and head in, although it’s certainly not the relaxing experience we had expected.

So what was going on? Why were the app and website so resistant to selling him a lounge pass? The app issue is a pure UX issue. If a user can't do something because they aren't logged in, you have to tell them. The website, I suspect, has more to do with a sequence of decisions made in the name of security and fraud prevention that did not take into account the cumulative negative impact on the user. "We have to use a CAPTCHA to thwart the bots!" Did anybody think about the fact that this introduces an external dependency that can have performance issues of its own? Many websites have started using services that flag suspicious behavior to prevent fraud, which sometimes relies on noticing that a user is suddenly far away from their last usage. Weird! Unless you're traveling…

I’ve been on the implementation side of this fence, and the Internet is a very nasty place. Web application firewalls, bot mitigation, and fraud prevention all introduce friction, and you have to be aware of how it’s accumulating and impacting your customers.

What about the human who simply refused to sell us access before the time window and could have prevented this whole technology ordeal? That limit likely exists to prevent customers from missing their flight while stuck in passport control. The current passport-control delay could be shared with customers, though, rather than encoded as a fixed, unchangeable time window that doesn't even seem to be enforced in the app. We see signs like this on roads all the time, telling you how long it's going to take to get to certain destinations. From the customer perspective, enough information to understand the barrier probably would have been sufficient. Instead, it was just an "I can't" with no explanation.

Companies, I assume, are not intentionally making suboptimal experiences for their customers. To whoever made each decision in isolation, it probably made all the sense in the world. “We have to block the bots!” “We have to prevent fraud!” “We can’t let our lounge be the reason why customers missed their flight!” Each decision contributes to the system drifting away from its original purpose: to serve the customer. We all need to remember that our decisions live in a larger system and that every local safeguard has global consequences, up and downstream.

Cleaning Up The RideHome Archive

I listen to the RideHome podcast every weekday, and some weekends when there are bonus episodes that pique my interest. Around November 24, 2018, I think, I started hearing Brian, the host, say things like “I’m sure I’ve talked about this before” and thought to myself that this should be a pretty easy problem to solve. Well, if it was easy, it wasn’t easy for me. But that rarely stops me from trying…

I hacked together some really terrible Python to grab the RSS feed, parse it, extract the links, and spit out Markdown. Why Markdown when I already had HTML? I was going to use GitHub Pages, and Markdown was the easiest path there that I knew of at the time. What I didn’t know was that Markdown was going to become the format of choice for LLMs. Anyways, once I got my hack-a-thon scripts working, I basically just ran them once in a while to update my Markdown files and push everything to GitHub for hosting. It mostly worked most of the time, except when the human making the show notes would change how they did the markup. Silly humans, always ruining everything with their capriciousness.
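For flavor, here is a rough sketch of what that kind of feed-to-Markdown script looks like. The feed snippet, field names, and link-extraction regex are illustrative, not the actual RideHome feed or my actual scripts:

```python
# Sketch: parse an RSS feed, pull links out of each episode's show
# notes, and emit Markdown. The sample feed below is made up.
import re
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item>
      <title>Episode 1</title>
      <description>&lt;a href="https://example.com/story"&gt;A story&lt;/a&gt;</description>
    </item>
  </channel>
</rss>"""

def feed_to_markdown(rss_text: str) -> str:
    """Convert an RSS feed's items into Markdown headings and link lists."""
    root = ET.fromstring(rss_text)
    lines = []
    for item in root.iter("item"):
        lines.append(f"## {item.findtext('title', default='(untitled)')}")
        description = item.findtext("description", default="")
        # Naive link extraction -- exactly the sort of markup-sensitive
        # hack that breaks when the show-notes format changes.
        for href, text in re.findall(r'<a href="([^"]+)"[^>]*>([^<]+)</a>', description):
            lines.append(f"- [{text}]({href})")
    return "\n".join(lines)

print(feed_to_markdown(SAMPLE_RSS))
```

The regex is the weak point, which is exactly where the "silly humans changed the markup" breakage bites.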

Now, fast forward to December 8, 2025 and I drop a big “Claude Rework” into the repository. Yes, as regular readers know, I’ve been using Claude Code for my little one-off personal projects. What really helped though was Nick Tune’s claude-skillz. I used the TDD persona and refactored my crappy scripts all the way to the point where I could start asking Claude Code to make new features. What kind of new features? Well, I had always wanted to do some data analysis, but I was always too busy and then…this made it so easy. I made a Wrapped feature for 2025 and then ran it against all the previous data I had gathered.

At work I have a reputation as an “AI Hater,” which, if you only characterize “AI” as “will replace all humans and reduce human labor dollars to zero,” then yes…that’s accurate. I use Claude, ChatGPT, and Gemini almost every day for something. Sometimes I’m just testing the same question across models. Sometimes I’m having them all independently evaluate something I’ve written, always with the prompt that the model is my intended audience and that it should ask me questions about what I’ve written to help me find the gaps I’ve left or where I was unclear. Seriously, they’re really good at this…if you can ignore the sycophancy.

So what? Now I have some python that’s less sucky than when I started…big deal. Well, sure…but for me it’s more about finding where the actual value can be derived. Is it worth it to back the systems that are actively trying to replace me and the ridiculous race to turn the planet into either energy plants or data centers to consume that energy? Ask again later, hopefully we and the planet are around long enough for that. Right now, for me…my little project is in a much better state and I can do things that I probably wouldn’t have done on my own. Maybe that’s enough.

MLS Soccer 2026 Schedule Shenanigans

The 2026 MLS Schedule has been released! That's the good news. If you want to see the entire season, it seems like you have to go to a team and then view the entire schedule from there. That's fine, but I wanted it in a Google Sheet, so I had Claude work on a solution.

I present to you the mls-schedule-generator. Is it "vibe coded"? Yeah, mostly. Is this good code? Probably not. Does it work? Yeah. You can see the results in this sheet.

Coding with Claude

There was something I wanted to do and I really didn’t have time for a side quest, so I asked Claude for help. As with most of my coding side quests, I spend a lot of time familiarizing myself with Python, again. Then for this particular quest I would also have to interact with the GitHub API. Oh, and I should probably figure out Python virtual environments, because why not?

Ugh, all I want to do is label a bunch of pull requests based on specific files being modified!

If you buy into the AI hype, we’re either weeks away from being replaced or have already been replaced and just don’t know it yet. Yes, it’s tiresome. Welcome to Technology! So I just threw everything into a prompt to see what would come back and, SPOILER ALERT, it wasn’t great. It didn’t even work. Now, that was probably more than a little bit my fault, as I had left out some key details, like authenticating with a GitHub Personal Access Token (PAT) and that there were close to 10,000 pull requests in this repository.

But, hey…AI is super intelligent! Except, this isn’t artificial intelligence, it’s generative AI. It’s probabilistically picking the next best token based on the garbage input I gave it. Over and over until it’s “done”, at which point it tells you that it’s done and everything is awesome.

A small aside: it’s irritating how “happy” most chatbots are when you point out how wrong they are.

It never asked questions to get more context, to get a deeper understanding of the problem domain, or even to consider cases where the generated solution might not be a good solution at all. Nope, it just needed to spit out text that would likely be accepted by the Python runtime. That it’s even this good at doing that is truly wild. And yet, I was not feeling great about completing my side quest.

I then threw everything away and decided to start over and be more helpful to my partner in crime. The first iteration was just using a PAT to connect to GitHub and print out basic repository information. It worked! Okay, now let’s get the ten latest pull requests and print them out. It worked! Okay, now let’s test whether any files in a specific directory were part of the pull request. It worked! Everything’s coming up Milhouse.
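A sketch of roughly those incremental steps, assuming PyGithub. The token, repository name, and target directory are all placeholders, not my actual values, and the import is deferred so the pure helper works even without PyGithub installed:

```python
# Sketch of the step-by-step approach: connect with a PAT, print basic
# repo info, list recent PRs, check which ones touched a directory.
try:
    from github import Auth, Github  # PyGithub; only needed for the API calls
except ImportError:
    Github = None

def touches_directory(filenames, directory):
    """Pure helper: did any changed file land under the given directory?"""
    prefix = directory.rstrip("/") + "/"
    return any(name.startswith(prefix) for name in filenames)

def main():
    if Github is None:
        raise SystemExit("pip install PyGithub to run the API portion")
    g = Github(auth=Auth.Token("ghp_your_token_here"))  # placeholder PAT
    repo = g.get_repo("someowner/somerepo")             # placeholder repo
    print(repo.full_name, repo.description)             # step 1: basic info

    # Step 2: the ten most recent pull requests.
    for pr in repo.get_pulls(state="all", sort="created", direction="desc")[:10]:
        # Step 3: check whether the PR touched a specific directory.
        changed = [f.filename for f in pr.get_files()]
        if touches_directory(changed, "docs"):
            print(f"#{pr.number} touched docs/: {pr.title}")

if __name__ == "__main__":
    main()  # requires network access and a real token
```

Each step is verifiable on its own before the next one is bolted on, which is the whole point.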

Now we come to the part where the basics are handled and you run into the cold, hard reality that the API doesn’t really handle what you want to do and that you’re going to need to get creative. Unsurprisingly, Claude was not very creative at coming up with solutions. Luckily, Claude had a partner who could help with the problem: we were looking at going through way too many pull requests. Maybe we can batch them? The API supports pagination! Oh, but we’re still just paging over 10,000 pull requests.
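A back-of-the-envelope calculation shows why pagination alone doesn't save you here. With GitHub's default page size of 30 items per request, enumerating the whole backlog is still hundreds of API calls:

```python
# Why "the API supports pagination!" isn't a win by itself:
# page count grows linearly with the number of pull requests.
import math

def pages_needed(total_items: int, per_page: int = 30) -> int:
    """How many list requests it takes to walk every item."""
    return math.ceil(total_items / per_page)

print(pages_needed(10_000))  # 334 requests just to enumerate them
```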

Claude wasn’t aware that I really only wanted to label pull requests after a certain date. So I provided that detail, and we still ran into roadblocks, which I believe are mostly inherent either in the GitHub API or in PyGithub. Again, this was a side quest and I didn’t want to spend a ton of time on it, so we turned to hacks! I knew the ID of the pull request that marked the starting date, so I just told Claude to start at that ID and incrementally iterate through pull requests until we had processed them all. It worked!

There are still tons of problems, the most glaring being that if you don’t stop the script, it will just keep requesting pull requests that don’t exist, because error checking is for humans, I guess. I can imagine Claude yelling at me, “well, you never asked for error checking, you idiot. I would have been happy to provide that, but noooooooo, you didn’t seem to think it was needed.” There’s a parable here about turning the implicit into the explicit, but that’s out of scope.
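A hedged sketch of what that missing error checking might look like, assuming PyGithub, whose `repo.get_pull()` raises `UnknownObjectException` on a 404. The tolerance exists because PR numbers share a sequence with issue numbers, so a single miss doesn't mean you've reached the end; the function and argument names are mine, not from any real script:

```python
# Sketch: walk pull request numbers upward from a known starting ID
# and stop after a run of misses instead of requesting forever.
def should_stop(consecutive_misses: int, tolerance: int = 20) -> bool:
    """Pure stop condition: give up after `tolerance` misses in a row
    (a miss may just be an issue number, which shares the PR sequence)."""
    return consecutive_misses >= tolerance

def label_from(repo, start_number: int, label: str) -> None:
    try:
        from github import UnknownObjectException  # PyGithub
    except ImportError:  # PyGithub not installed; nothing to iterate
        return
    number, misses = start_number, 0
    while not should_stop(misses):
        try:
            pr = repo.get_pull(number)
        except UnknownObjectException:  # 404: no PR with this number
            misses += 1
        else:
            misses = 0
            pr.add_to_labels(label)
        number += 1
```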

All this is to say that, at least for me, slowly walking through the process was far more effective than trying to get it to do everything at once. Anybody who has learned how to program, or even just how to solve problems, will say, “well, of course you should work in small incremental chunks,” but the AI hype is working against that. There’s also the inference cost and the context window growing too long, so much so that Claude told me I should start a new chat because this side quest was going to blow through my quota faster.

Side quest, completed.