During my first week at a new job, the build scripts provided by the development team erased my system. I had diligently read wiki pages relating to building the product, cloned several repositories, installed all prerequisites, and was ready to start contributing. Knowing the project was rather large, I kicked off the build and moved on to other tasks.
Upon returning, I noticed several messages from the build script reporting, “rm: cannot remove ‘/some/backup/file/or/another’.”
The files it could not remove were not in the build directory; they were on a separately mounted volume. Understandably, I was concerned.
The Source of the Chaos
I later learned the reason for the chaos: the entire development team built only with Eclipse on Windows and had never built the product on Linux outside of a Jenkins environment. The script assumed that the WORKSPACE environment variable was set and, instead of using Jenkins's built-in facilities for clearing the workspace, performed a manual, recursive deletion. With WORKSPACE unset, the script effectively changed to the root of my filesystem and recursively deleted everything it could reach.
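The failure mode can be sketched in a few lines of Bash. The original script is long gone, so the paths and function name below are hypothetical, but the pattern is the classic one: an unquoted deletion rooted at an unset variable, and the one-character fix that would have prevented it.

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the dangerous pattern. If WORKSPACE is
# unset, "$WORKSPACE/"* expands to /* and the script starts deleting
# from the root of the filesystem:
#
#   rm -rf "$WORKSPACE/"*      # DO NOT run this with WORKSPACE unset

# Safer: ${WORKSPACE:?message} aborts the script with an error when the
# variable is unset or empty, so the deletion never runs.
clean_workspace() {
    rm -rf "${WORKSPACE:?WORKSPACE is not set}/"*
}

# Demonstration without touching the real filesystem: the subshell exits
# with an error before rm ever executes.
unset WORKSPACE
if ( clean_workspace ) 2>/dev/null; then
    echo "deleted"
else
    echo "refused: WORKSPACE unset"   # prints "refused: WORKSPACE unset"
fi
```

Running the script with `set -u` (abort on any unset variable) would have caught the bug just as well, and costs one line at the top of the file.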
It was not the best first week to have at a new job.
As a new developer joining a team, it is imperative to be able to build and test the product locally, ideally without the build destroying the local filesystem. This need is especially critical on large teams distributed across different time zones.
Docker Is the Solution
Using Docker to build your product both locally and in your CI/CD pipeline is one of the easiest ways to ensure that if the build and tests pass locally, they will also pass elsewhere. While Docker isn't new, many startups don't take advantage of it when setting up new development environments, often due to a perceived lack of time.
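As a taste of what this looks like, here is a minimal, hypothetical Dockerfile for a C++ project (the base image, packages, and build commands are placeholders, not a recommendation for any particular stack). The point is that the toolchain lives in the image and the build runs inside the container's filesystem, so a buggy clean step can, at worst, wreck the container rather than the host.

```dockerfile
# Hypothetical sketch: developers and CI build inside the same image,
# so "works on my machine" drift disappears.
FROM ubuntu:22.04

# Install the toolchain once, in the image, instead of on each laptop.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake \
    && rm -rf /var/lib/apt/lists/*

# Copy the source and build inside the container; the host filesystem
# is never touched.
WORKDIR /src
COPY . .
RUN cmake -S . -B build && cmake --build build
```

Every developer, and the CI server, runs the same `docker build .` and gets the same environment.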
In my next post, I'll cover how to repeatably and reliably build projects with Docker. Until then, back up your work often!