Setting up a web development workflow in Go (part 1)


1. Backstory

More than a year ago, a few other developers and I decided to try Go to build microservices for our new project. Previously, we used Python frameworks such as Django, Flask, and Starlette/FastAPI. They all have their ups and downs; FastAPI in particular stands out because it comes with Swagger UI and a data validation layer using Pydantic.

Python is okayish most of the time, and it is still one of my go-to languages. However, I had always wanted to try Go on a large project, and the rest of the developers had the same idea, so we gave it a try.

2. Intro

This article is about how I set up local development, designed the project, and set up CI/CD for one of the microservice applications I developed in Go. I am not going to explain what a microservice is. If you want to know, go and read Martin Fowler's article. It is a good read, and even if you already know what a microservice is, I still recommend reading it.

I will tell you what functionalities my service needs to provide and what tools/frameworks I used to implement them. Lastly, I will cover how I set up testing with CI and dockerized the service so that it integrates nicely with the overall project. Let me break it down into several sections:

  1. Frameworks I used for the functionalities I need to provide
  2. Local development setup, configuration management, and dockerization
  3. CI/CD (also visualizing the test coverage)
  4. How my service integrates with other services in the project overall

3. Frameworks

Web Service and Database

Since it is an API, I need a database (I used Postgres) for storage, a web framework (Gin) with Pop as the ORM, and the Soda CLI to handle database migrations.

Previously, I used a database driver (sqlx) instead of Pop. In most cases, I would recommend an ORM over a SQL driver for one main reason: to avoid writing SQL queries inside the application code. If you still want to use a SQL driver, that is not an issue either, because you can still use a database migration tool like the Soda CLI to handle migrations separately. However, I would not say dropping the database migration tool is a good idea, because it makes your life so much easier. Trust me, doing database migrations with raw .sql files is a nightmare.

For those coming from Python, it is very similar to Django with all its migration tooling. The only difference is that I am picking out just the frameworks I need for the requirements instead of using a whole ecosystem.

If you are into the complete package, you can look into the Buffalo ecosystem. However, I do not like frameworks like Django anymore because they are kind of overkill. Even in Python, I prefer smaller frameworks like Starlette, which ships with no template engine out of the box; you install one only if you need it.

In addition to all those, my service also needs to provide an async job/queue for internal service communication. Therefore, I used Machinery with the Redis backend. Additionally, the service I am working on needs to schedule some tasks, and Machinery is used to trigger those cronjobs. I will explain in detail in part 2 of this article how my team and I handled cronjobs among our microservices.

Configuration Management

In general, every service has four application environments: Development, Testing, Staging, and Production. Your application will need to store some configuration such as the database URL string, a debug flag, the HTTP server port, etc. The database URL string, for example, will differ between staging and production among the four stages I mentioned above, and a configuration management tool is quite helpful for handling that.

Traditionally, people handle it with environment variables, and that obviously works. But using a config management tool is far cleaner. For my service, I used Viper to handle configs. Cobra is then used to define the command-line arguments for my program.
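For instance, a per-environment config file that Viper could load might look like the following (the keys, file layout, and URL are illustrative, not my actual config; Viper can read YAML, JSON, TOML, and environment variables):

```yaml
# config/staging.yaml (hypothetical layout)
debug: false
http_port: 8080
database_url: postgres://user:pass@staging-db:5432/app?sslmode=disable
```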

Why do I need that? I want to build my API service into one standalone binary that can provide several functionalities such as running the migration, running the web server, etc. This is a lot cleaner than splitting things into multiple scripts and figuring out which script to run for which functionality.


service run migration
service run api
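Stripped of Cobra's flag parsing and help generation, the subcommand dispatch behind those two calls can be sketched with just the standard library (the messages and command names here are illustrative; with Cobra each branch would be its own cobra.Command):

```go
package main

import (
	"fmt"
	"os"
)

// dispatch picks a handler based on the subcommand, e.g. "service run api".
// Cobra gives you the same routing plus per-command flags and --help output.
func dispatch(args []string) string {
	if len(args) < 2 || args[0] != "run" {
		return "usage: service run [api|migration]"
	}
	switch args[1] {
	case "api":
		return "starting HTTP server"
	case "migration":
		return "running database migrations"
	default:
		return "unknown command: " + args[1]
	}
}

func main() {
	fmt.Println(dispatch(os.Args[1:]))
}
```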

4. Local Development Setup

I used to develop with virtual environments, like PyEnv/virtualenv in Python. That is honestly quite a good approach. However, that changed when I was introduced to Docker four years ago. Initially, I just used Docker to run the database. Along the way, I shifted my workflow from running directly on the machine to containerization with Docker.


At work, I set up the Dockerfile and docker-compose files for local development as well. It takes some time to set those up, but there is a clear advantage when you work with a team: I do not need to worry about what operating system my colleagues are running. As long as Docker is installed on their system, a simple docker-compose up will work.
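A minimal docker-compose file for this kind of setup might look like the following (service names, image versions, and ports are assumptions, not my actual file):

```yaml
# docker-compose.yml (illustrative)
services:
  db:
    image: postgres:13-alpine
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
  api:
    build: .
    depends_on:
      - db
    ports:
      - "8080:8080"
```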

In addition to the simplicity, there is another benefit, especially when working with a microservice architecture. Imagine I am working on one service, and it depends on another service developed by a colleague. With the help of Docker, I do not need to download their source code. All I need is the Docker image built by the CI/CD pipeline, which I will explain later.

In my opinion, Docker is a really powerful tool that every web developer should get familiar with, because it simplifies not only deployment but also development.

Running Test

In most cases, running tests is a pretty easy task. However, running tests against a database is not so simple. For testing a service with a database, there are two approaches:

  1. Mock database queries: in Go, you can use go-sqlmock, but after working with it for a few weeks, I dropped this approach for one main reason. Mocking database query calls is very time-consuming and makes test cases unnecessarily complicated.

  2. Spawn a test database with Docker: compared to mocking database calls, this is much simpler. With the help of Viper, which I mentioned in the Configuration Management section, you can simply define a different database URL so that when the tests run they connect to the test database.

Spawning test database

At AcePointer, we have our own open-source shared library built on top of ory/dockertest to spawn a test database easily. You can spawn the test database in Docker as below:

func DatabaseBootstrap(configPath string) func(t *testing.T) {
	return func(t *testing.T) {
		// load test config using viper, then spawn the
		// Postgres container via ory/dockertest
	}
}
There is a reason why you should spawn the database from Go rather than from Docker directly: spawning the test database takes a few seconds, and the program needs to wait until the database is ready before running the tests. If it does not wait, the database connection will fail and the tests will fail with it.


Docker is awesome, but you still need to type a lot of commands. For example, to run tests, you spin up the test database, run the migration, then run go test. Once the tests finish, the database container is still there, and you have to delete it so that it does not conflict with the next run. Using make, you can create a proper workflow as below:

# Makefile
test:
	go clean -testcache
	-go test -tags=unit,integration -p 1 ./apps/...
	docker stop test-db
	docker rm test-db

Live reload

Using Docker has one disadvantage: you need to rebuild the entire image to see changes. That is not ideal in a development environment, because the developer makes a lot of changes (obviously, he is writing code, duh), and rebuilding the image every time would be very tedious.

Using air and a Docker volume mount, you can configure it so that when the code changes, air will simply rebuild the app inside the container and reload it. Basically, using a volume mount:

  • mount the source code folder into the container
  • then use air to watch for changes inside the container
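In docker-compose terms, those two steps could look roughly like this (the service name and mount path are assumptions):

```yaml
# docker-compose.yml fragment for live reload (illustrative)
services:
  api:
    build: .
    command: air
    working_dir: /app
    volumes:
      - ./:/app   # mount the source so air sees changes without an image rebuild
```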

A few months ago, I created a Docker image called sparkling and shared it on Docker Hub. It is an image pre-built with air and the Soda CLI inside. For local development, you can set up the Dockerfile as below:

FROM 1iquid/sparkling:latest
CMD ["air"]


Linting

Linting is important when it comes to coding. Every language has its own standards for code quality. In Python, there is PEP 8, and black is the de facto formatting tool. It is not surprising that Go also has its own guidelines and tools. Usually, I set up linting in two stages:

  1. pre-commit hooks
  2. CI stage

I will explain the CI stage in the second part of this article. For now, I will explain the pre-commit setup. The Go toolchain comes with a command called gofmt, which formats your Go code. Along with that, I use three other tools for linting Go code:

  1. goimports for formatting package imports
  2. errcheck to check unhandled errors in the program
  3. go vet to check the correctness of the Go program

Of course, I will not run each tool manually before every commit, so I usually set those tools up with pre-commit as below:

# sh scripts/pre-commit-err-check.sh
for DIR in $(echo "$@"|xargs -n1 dirname|sort -u); do
    errcheck ./"$DIR"
done

# sh scripts/pre-commit-go-vet.sh
go vet --vettool=$(which shadow) ./...

# .pre-commit-config.yaml
  - repo: local
    hooks:
      - id: gofmt
        name: gofmt
        entry: gofmt -s -w .
        language: system
        types: [ go ]
        description: Format your Go code

      - id: goimports
        name: goimports
        entry: goimports -w .
        language: system
        types: [ go ]
        description: Format your Go imports

      - id: errcheck
        name: errcheck
        entry: sh scripts/pre-commit-err-check.sh
        files: '\.go$'
        language: system
        description: Check your Go source code with errcheck

      - id: govet
        name: govet
        entry: sh scripts/pre-commit-go-vet.sh
        files: '\.go$'
        language: system
        description: Analyze your Go code

I will stop part 1 here. If you are looking for how I set up the example Gin API server, you can check the sample server here.

In the next part, I will explain how the project is configured at the CI/CD stage, how Docker images are built for each microservice, and how everything is put together for the overall project.

continue reading part 2