Jump to Using gorelease below.
I’m neither a lover nor a hater when it comes to Go and its ecosystem. Generally, things in the Go world are a bit rudimentary but very open and accessible. You may have to do some work yourself, but you’ll be able to get results.
There are simple best practices around versioning and releasing software in general, regardless of language or tool:

- Use semantic version numbers.
- Tag your source at every release.

Do those two things and you’ve got an orderly, identifiable, reproducible product.
So what does Go expect around releases? If you guessed exactly the minimum best practice, you’d be right :-) For Go, you need to use semantic version numbers and tag your source. Do that and your stuff will work properly in the Go module system. …
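As a concrete sketch of that minimum (everything below runs in a throwaway repo, and v1.2.3 is a placeholder version, not one from any real project):

```shell
# The minimum Go expects of a release: a commit tagged with a
# semantic version (vMAJOR.MINOR.PATCH).
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "first release"
git tag v1.2.3     # the tag the Go module system resolves
git tag --list     # prints: v1.2.3
```

In a real repository you’d follow the tag with `git push origin v1.2.3` so the tag is visible to consumers of your module.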
I wanted to really understand the flow of Go code to production at my work. I looked at all the IaC code they had in place and was overwhelmed — so many tools involved, so much configuration. So I did the code monkey thing, and went off to a dark corner and implemented the bare minimum myself to learn.
This is a long post, but stick with it, look in the associated repository, and you’ll be able to create a program and deploy it into a Kubernetes cluster all on your own machine — not too shabby.
The overall lesson plan here is to achieve, all on a single…
I believe in keeping software dependencies up to date in a timely fashion. I’ve never liked the “if it ain’t broke, don’t update it” approach. I’ve written about this before. With Go projects I periodically run:
$ go get -u the/module/name
to update the dependencies in a project, where the/module/name is the module path of the current project. Usually this just grinds against the go.mod file and does the right thing, updating all the modules and their dependencies to the latest and greatest. Sometimes, though, a dependency has introduced a breaking change, and your code won’t build or your tests fail. …
I’ve been working with Go professionally for a bit, and I’m still not convinced it's the enterprise language it’s touted as, at least not to a software craftsman. It’s too nuts and bolts. The underdeveloped support for things like encapsulation, composition, and code reuse leads to too much code rot.
That said, looking at its origin at Google, as a response to a growing plague of production Python and Bash scripting, Go is a great nail and hammer for that problem. You can produce decent code that you won’t regret next week pretty damn quickly.
In the past, I incorporated the Pomodoro Technique regularly in my workday, but when I started working from home, the mechanical timer I used didn’t fit into my home office (i.e. desk in the living room). The apps I tried were all way too feature-busy, as though they felt compelled to be more than a timer. I fell out of the habit of using the technique. I decided I needed to pick it up again, looked at my old apps and alternatives, and decided instead to whip one up myself in Go. In about 75 minutes (or three timers :-) I had…
I think this scenario will be familiar to many developers. A project with database requirements is moving fast, which puts pressure on testing. No strategy for testing persistence is agreed upon in advance, and three anti-patterns emerge. For simple tests, people mock the data. Others, without a database, defer the persistence tests to integration testing. Finally, someone spins up a “test” database instance somewhere and starts using that. These solutions very quickly become entrenched and add to the technical debt pile.
I’m not going to address the fundamental position that “database access and persistence are by definition part of integration testing”. …
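One common way out of those anti-patterns is to put persistence behind a small interface so tests can use an in-memory fake instead of ad-hoc mocks or a shared “test” database. The names below (UserStore, memStore) are illustrative, not from the original post:

```go
// Persistence behind a port: production code depends only on the
// interface, so tests can swap in an in-memory implementation.
package main

import (
	"errors"
	"fmt"
)

type User struct {
	ID   int
	Name string
}

// UserStore is the persistence port the production code depends on.
type UserStore interface {
	Save(u User) error
	Find(id int) (User, error)
}

// memStore is a test double honoring the same contract.
type memStore struct{ users map[int]User }

func newMemStore() *memStore { return &memStore{users: map[int]User{}} }

func (m *memStore) Save(u User) error { m.users[u.ID] = u; return nil }

func (m *memStore) Find(id int) (User, error) {
	u, ok := m.users[id]
	if !ok {
		return User{}, errors.New("not found")
	}
	return u, nil
}

func main() {
	var store UserStore = newMemStore()
	store.Save(User{ID: 1, Name: "ada"})
	u, _ := store.Find(1)
	fmt.Println(u.Name) // prints ada
}
```

A real database-backed implementation satisfies the same interface, so the tests exercise the exact contract production relies on.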
To learn and test new things in Kubernetes, I like to do it in a local cluster. Being able to spin things up on your laptop is simply a win. This article won’t address the “why Istio” at all, just the “how”. Previously I wrote on getting a local Kubernetes cluster going with Docker Desktop and Helm. Continuing down that path I added Istio into the mix. The work was done in a branch of the same repository.
You might want to grab the repository to follow along.
Remember when everything was configured using XML? Seemed good, but the content’s meaning was mysterious, and once you encountered that first error it was downhill from there. They solved that with schema support, right? No. That added more implementation work and barely addressed syntax issues, doing almost nothing for the larger issue of semantics.
Thank goodness YAML came along! I’m being sarcastic. The name alone, Yet Another Markup Language, should have set expectations, and, proving we just don’t learn, YAML configurations went through the same evolution as XML. When things started getting out of control, people added “API versions,” which made the tooling more complex, marginally helped on syntax, and didn’t really address semantics. Sigh. …
Recently a software project I worked on required releases for multiple platforms. Here’s how I implemented it using the GitHub Actions matrix strategy.
You can just look at the code here.
I started a project with just one target platform. I had never implemented a release process with artifacts in GitHub, but since I’ve recently been using GitHub Actions for my CI/CD, I looked there for a solution. I found actions/create-release, which let me, with about a page of YAML, do the following on a version tag push:
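The matrix strategy that later handled multiple platforms can be sketched roughly like this. This is a hedged outline, not the workflow from the repository; the binary name, Go version, and target list are placeholders:

```yaml
# Sketch: one release job fanned out per GOOS/GOARCH pair on a tag push.
on:
  push:
    tags: ['v*']
jobs:
  release:
    strategy:
      matrix:
        goos: [linux, darwin, windows]
        goarch: [amd64, arm64]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - run: GOOS=${{ matrix.goos }} GOARCH=${{ matrix.goarch }} go build -o myapp-${{ matrix.goos }}-${{ matrix.goarch }} .
```

Each matrix combination runs as its own job, so the six builds happen in parallel and each can upload its own artifact.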
If you work on multiple software projects with unique tool requirements you may have already adopted asdf. It’s a simple and effective way to manage many versions of many tools. It’s basically a context-sensitive package manager. With asdf in place, any directory can contain a .tool-versions file, and when you change to that directory all the tools and versions listed there will be made available. It’s not a new concept — there have long been similar things for Ruby, Java, etc. — but asdf already supports hundreds of packages, and it’s easy to add new ones if you’re so inclined.
There was one missing bit, though: syncing up. What I mean is, let’s say you’re on a team and everyone uses asdf. Many repositories have .tool-versions files. As you work in various projects, asdf will ensure that you have the right versions of the right things — provided you have the appropriate plugins and versions installed. For example, you’ve been working in Java and hop over to a repository needing Go for the first time. For asdf to do its magic based on the .tool-versions, you need the Go plugin and the specified version installed. If you work on a lot of varied projects, reverse engineering those requirements can become annoying. …
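To make the sync problem concrete, here is what a .tool-versions file looks like (the tools and versions are made-up samples) and how the plugin requirements can be read straight out of it:

```shell
# A sample .tool-versions as asdf expects it: one "<plugin> <version>" per line.
set -e
dir=$(mktemp -d); cd "$dir"
cat > .tool-versions <<'EOF'
golang 1.22.1
nodejs 20.11.1
EOF
# The first column is the plugin list a newcomer would need installed:
cut -d' ' -f1 .tool-versions
```

With asdf installed, that list feeds straight into setup: `cut -d' ' -f1 .tool-versions | xargs -n1 asdf plugin add` followed by `asdf install` gets a fresh machine up to date for the repository.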
Test Driven Development (TDD) fits my way of approaching problems. Given a goal, to find a solution, you decompose it, identifying manageable tasks, and begin tackling them. To tackle a task you define what success would be, and then you do what’s needed to achieve it. TDD is just a process for divide and conquer¹. There are more benefits to it, but my goal here today isn’t to evangelize TDD; I’m just framing the discussion. I’ve used TDD with different degrees of formality for a very long time and it works for me.
It’s a common objection. I don’t agree with it. The theory goes that testing strategies infiltrate your production code and degrade it. Testing requires observability, predictability, isolation of functionality, and that should fit right in with things like Pure Functions, SOLID, Hexagonal, Layered, Onion… good practices in design and architecture. Writing testable code should help drive better code. …
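A tiny example of what “testable code” buys you, in the spirit of the argument above. Slugify is a made-up function, not from the original post; because it is pure, a test is nothing more than inputs and expected outputs:

```go
// Testability in practice: a pure function is observable, predictable,
// and isolated, so checking it is just a table of cases.
package main

import (
	"fmt"
	"strings"
)

// Slugify turns a title into a URL-safe slug. Pure: same input,
// same output, no hidden state.
func Slugify(title string) string {
	s := strings.ToLower(strings.TrimSpace(title))
	return strings.ReplaceAll(s, " ", "-")
}

func main() {
	// A table-driven check, the idiomatic Go testing shape.
	cases := map[string]string{
		"Hello World":    "hello-world",
		"  Go and TDD  ": "go-and-tdd",
	}
	for in, want := range cases {
		if got := Slugify(in); got != want {
			panic(fmt.Sprintf("Slugify(%q) = %q, want %q", in, got, want))
		}
	}
	fmt.Println("all cases pass")
}
```

Nothing about the function had to bend to be tested, which is the point: the design pressures that make code testable are the same ones that make it good.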