I was working on setting up a new project while following a tutorial. The tutorial introduced a multi-step Docker setup as a shell script (.sh) file. My preference is to break these kinds of scripts up so that each step can be run individually in the case of a partial setup or failure. It's an approach I've seen before, and some of my co-workers use it as a standard. But is it the best, or only, option?
It's common to need access to your environment variables in multiple places. Your code, your build scripts, and your pipeline may all need them. It's not always simple to propagate these variables from one end to the other: there's often a fair amount of exporting and importing involved, and sometimes a degree of duplication.
Example setup script with a series of steps and some variables:
echo "stopping and removing old docker [$SERVER] and starting a fresh instance of [$SERVER]";
(docker kill $SERVER || :) && \
(docker rm $SERVER || :) && \
docker run --name $SERVER -e POSTGRES_PASSWORD=$PW \
-e PGPASSWORD=$PW \
-p 5432:5432 \
-d postgres;
# wait for pg to start
echo "sleep wait for pg-server [$SERVER] to start";
sleep 5;
# create the db
echo "CREATE DATABASE $DB ENCODING 'UTF-8';" | docker exec -i $SERVER psql -U postgres
echo "\l" | docker exec -i $SERVER psql -U postgres
Yes, I could do what has been done before. I could break each step out into its own .sh file. But most of my projects don't need that many scripts, and the simpler I can make things, the better.
This got me thinking:
Do I need external script files? Is this the most valuable approach? Does having more files just make it harder to find where things are happening?
How would it be possible to skip writing these files at all?
How can we have a single source of secret variables that are used everywhere?
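Before reaching for any tooling, it's worth noting that plain shell can already treat one env file as a shared source. A minimal sketch (the file path and variable values are placeholders; the variable names come from the script above):

```shell
# Write a hypothetical env file — one place for the variables.
cat > /tmp/demo.env <<'EOF'
SERVER=pg-server
DB=mydb
EOF

set -a            # auto-export everything that gets sourced
. /tmp/demo.env   # load the shared variables
set +a

echo "$SERVER/$DB"   # → pg-server/mydb
```

This works fine inside shell scripts, but it doesn't help package.json scripts or Windows shells, which is where the tools below come in.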
Dotenv is a well-known and widely used tool that makes loading environment variables easy. It's built into many of the tools, bundlers, and frameworks you may already be using, and it has a series of plugins and extensions, like dotenv-safe, dotenv-expand, and dotenv-cli, that make the development experience smoother and more robust.
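As a sketch, dotenv-cli (one of those extensions) can load a .env file in front of any command in a package.json script; the script name and the container name `pg-server` here are illustrative:

```json
{
  "scripts": {
    "psql": "dotenv -e .env -- docker exec -it pg-server psql -U postgres"
  }
}
```

Everything after the `--` separator runs with the variables from .env already in its environment.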
Admittedly, these two are not the same tool or interchangeable. But they are both so useful that they’re worth mentioning.
Cross-env makes setting environment variables work across platforms.
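For example, `FOO=bar command` works in bash but not in Windows' cmd; cross-env smooths that over. A typical package.json usage (the webpack build is illustrative):

```json
{
  "scripts": {
    "build": "cross-env NODE_ENV=production webpack"
  }
}
```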
Cross-var makes substituting environment variables work across platforms. We’ll be using this in our example.
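A sketch of cross-var on its own (script name is illustrative, and it assumes SERVER is already set in the environment). Note the %VAR% syntax: it works on both platforms, and on Unix it keeps the shell from expanding the variable before cross-var sees it:

```json
{
  "scripts": {
    "db:stop": "cross-var docker kill %SERVER%"
  }
}
```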
With our powers combined…
By using dotenv and cross-var together, we can read in whichever .env files we want, or consume existing environment variables (from the CLI, .bash_profile, CI, etc.), and then easily substitute them into our package.json scripts, and it works across development platforms!
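Putting the pieces together, the steps from the setup script above could be sketched as package.json scripts. This is an illustrative sketch, not the only layout: the script names are my own, dotenv-cli loads .env before each command, and cross-var's %VAR% syntax defers substitution until after the shell has parsed the line:

```json
{
  "scripts": {
    "db:down": "dotenv -- cross-var docker kill %SERVER%",
    "db:up": "dotenv -- cross-var docker run --name %SERVER% -e POSTGRES_PASSWORD=%PW% -e PGPASSWORD=%PW% -p 5432:5432 -d postgres",
    "db:create": "dotenv -- cross-var docker exec -i %SERVER% psql -U postgres -c \"CREATE DATABASE %DB%\""
  }
}
```

Each step can be run individually (`npm run db:up`, `npm run db:create`, ...), which was the original goal of splitting the shell script, without any extra .sh files.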