On application deployment, we often want associated services ready to go. After all, what good is a web application without a database? Ideally, when you first deployed your application, a database and role would automatically be created using the same settings that your application uses. In this post I’ll discuss two methods of sharing database (or other) settings between the deployment script and your application.
Flask and Python-Style Config Files
Running scripts on package install or upgrade is fundamental to the DPKG system. They are normal shell scripts, and like any shell script they can include (source) other scripts.
If you can get your application and its setup script to “speak the same language”, then variables can be set in a single file and read by both.
Flask makes this easy because it can read configuration directly from a Python file (or something that looks like a Python file) using the app.config.from_pyfile() method.
You can create a Python file containing just variables, like this:
DB_HOST="localhost"
DB_NAME="production_database"
DB_USER="production_user"
DB_PASSWORD="production_password"
While not PEP 8 compliant (PEP 8 wants spaces around the = signs, and Bash forbids them), Python can interpret this file, and so can Bash. Note that the overlap between the two syntaxes is small: stick to simple quoted strings, and avoid characters like $ that Bash expands inside double quotes. Let’s say this file is saved at the path /etc/shared-config.cfg. You can use it in a bash script like this:
$ cat test-script.sh
#!/bin/bash
. /etc/shared-config.cfg # include the config file
echo ${DB_HOST} # the variables are now in scope
$ ./test-script.sh
localhost
$
To use the configuration in a Flask application:
from flask import Flask

app = Flask(__name__)

# configure the flask app with the same "Python" configuration file
app.config.from_pyfile("/etc/shared-config.cfg")

# the variables are now available on the app.config dictionary
db_connection = connect_db(app.config["DB_HOST"], app.config["DB_NAME"], ...)
My DPKG setup scripts also include some reusable functions for checking if DBs and users exist, so I can use the variables like this:
#!/bin/bash
# snip a whole bunch of standard debian pre-install stuff

# include the "Python" shared config file
. /etc/shared-config.cfg

# include my postgres functions
. /usr/share/bbit/bash/postgres/postgres-functions.sh

case "$1" in
    configure)
        # call function that will check if DB exists. If not, create it (and role)
        # with provided database name, username and password
        setup_postgres_database ${DB_NAME} ${DB_USER} ${DB_PASSWORD}
    ;;
# snip more standard things
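The postgres-functions.sh helpers aren’t shown in this post, but the idea behind setup_postgres_database is a straightforward check-then-create. As a rough illustration only (in Python with psycopg2 rather than the Bash of the real script, and with hypothetical connection details), the logic looks something like this:

# Hypothetical sketch of the check-then-create logic behind
# setup_postgres_database; the real postgres-functions.sh is Bash.
import psycopg2

def setup_postgres_database(db_name, db_user, db_password):
    # Connect to the default "postgres" database as a superuser
    # (connection details are assumptions, adjust for your setup)
    conn = psycopg2.connect(dbname="postgres", user="postgres")
    conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
    cur = conn.cursor()

    # create the role only if it does not already exist
    cur.execute("SELECT 1 FROM pg_roles WHERE rolname = %s", (db_user,))
    if cur.fetchone() is None:
        # identifiers cannot be passed as query parameters; this assumes
        # the name is trusted, since it comes from your own config file
        cur.execute(
            "CREATE ROLE {} LOGIN PASSWORD %s".format(db_user), (db_password,)
        )

    # create the database only if it does not already exist
    cur.execute("SELECT 1 FROM pg_database WHERE datname = %s", (db_name,))
    if cur.fetchone() is None:
        cur.execute("CREATE DATABASE {} OWNER {}".format(db_name, db_user))

    cur.close()
    conn.close()

Because the existence checks make the whole thing idempotent, a function like this is safe to call on every package upgrade, not just the first install.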
Django and JSON Configuration Files
That’s Flask, but what about Django? It depends on how you configure your application settings. I’m not sure how many people use this method, but I’m fond of keeping settings in a JSON file like this:
{ "DEBUG": false, "ALLOWED_HOSTS": ["example.com"], "SECRET_KEY": "abc123notarealkey", "STATIC_ROOT": "/srv/www/django/static/", "MEDIA_ROOT": "/srv/www/django/media/", "DATABASES": { "default": { "ENGINE": "django.db.backends.postgresql_psycopg2", "NAME": "production_database", "USER": "production_user", "PASSWORD": "production_password", "HOST": "127.0.0.1", "PORT": "5432" } } }
Then read it into the Django settings.py file like this:
import json
import sys

# SETTINGS_PATH is defined earlier in settings.py
try:
    with open(SETTINGS_PATH) as settings_file:
        extra_settings = json.load(settings_file)
    for key, value in extra_settings.items():
        locals()[key] = value
except (IOError, ValueError) as e:
    sys.stderr.write("Error loading {0}:\n{1}\n".format(SETTINGS_PATH, str(e)))
    sys.exit(1)
Essentially it replaces (or adds to) the variables defined in settings.py with values from the JSON file; this works because, at module level, locals() is the module’s own namespace. JSON also makes it easy to define dictionaries, arrays, numbers, and booleans.
Of course Bash doesn’t natively speak JSON, so I’ve built a tool, creatively called json2shell, that reads a JSON file and prints it as Bash-compatible variable assignments.
It can be used on the Django example above by running ./json2shell /etc/django/example.json. The output looks like this:
STATIC_ROOT=/srv/www/django/static/
MEDIA_ROOT=/srv/www/django/media/
DATABASES_default_ENGINE=django.db.backends.postgresql_psycopg2
DATABASES_default_NAME=production_database
DATABASES_default_HOST=127.0.0.1
DATABASES_default_USER=production_user
DATABASES_default_PASSWORD=production_password
DATABASES_default_PORT=5432
DEBUG=false
ALLOWED_HOSTS_0=example.com
SECRET_KEY=abc123notarealkey
The output variable names are scoped to their full JSON path, separated by underscores. Array indexes are included in the variable names (see ALLOWED_HOSTS_0).
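json2shell itself is on GitHub (more on that below), but the core of it is just a recursive walk over the parsed JSON. Here is a minimal sketch of the idea (not the tool’s actual source) that produces output in the same NAME_path=value shape:

# Minimal sketch of the json2shell idea (not the actual tool):
# walk the JSON tree and emit one NAME_path=value line per leaf,
# joining dictionary keys and array indexes with underscores.
import json
import sys

def flatten(value, prefix=""):
    if isinstance(value, dict):
        for key, child in value.items():
            flatten(child, "{}_{}".format(prefix, key) if prefix else key)
    elif isinstance(value, list):
        for index, child in enumerate(value):
            flatten(child, "{}_{}".format(prefix, index) if prefix else str(index))
    else:
        if isinstance(value, bool):
            # JSON booleans become lowercase "true"/"false" strings
            value = str(value).lower()
        # NOTE: values containing spaces or shell metacharacters would
        # need quoting before being safe to eval; omitted for brevity
        print("{}={}".format(prefix, value))

with open(sys.argv[1]) as json_file:
    flatten(json.load(json_file))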
So you can eval the command’s output and use it in your post install scripts, like this:
#!/bin/bash
# snip a whole bunch of standard debian pre-install stuff

# The same config path used by Django settings.py
CONFIG_PATH=/etc/django/example.json

# evaluate the output of json2shell to load the environment variables
eval $(python /usr/local/bin/json2shell ${CONFIG_PATH})

# include my postgres functions
. /usr/share/bbit/bash/postgres/postgres-functions.sh

case "$1" in
    configure)
        # call function that will check if DB exists. If not, create it (and role)
        # with provided database name, username and password
        setup_postgres_database ${DATABASES_default_NAME} ${DATABASES_default_USER} ${DATABASES_default_PASSWORD}
    ;;
# snip more standard things
Once again your configuration is kept in one place and can be reused by both the setup script and application.
Of course there are other ways of sharing settings, such as through environment variables using the Django Configurations application. You would then define all your settings in a Bash-style file that is accessible from the post install script and is also used to populate the environment that Django runs in.
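I haven’t gone down that route myself, but stripped of any particular library, the pattern would look roughly like this in settings.py (the DJANGO_* variable names here are hypothetical, not an API anything defines for you):

# Rough sketch of the environment-variable approach; the DJANGO_*
# names are hypothetical and would be exported by the same Bash-style
# file that the post install script sources.
import os

DEBUG = os.environ.get("DJANGO_DEBUG", "false") == "true"
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]  # fail loudly if missing
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": os.environ["DJANGO_DB_NAME"],
        "USER": os.environ["DJANGO_DB_USER"],
        "PASSWORD": os.environ["DJANGO_DB_PASSWORD"],
        "HOST": os.environ.get("DJANGO_DB_HOST", "127.0.0.1"),
    }
}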
Use JSON Variables in Bash
The “read JSON from bash” problem seems to be an unsolved question on StackOverflow, so I have released my json2shell tool on GitHub. It requires Python to run. It’s a niche tool, but I hope some of you find it handy.
Conclusion
I used to have to put configuration in multiple places, which was far from ideal. This method of shared configuration allows a single file to be deployed and reused, which is much better. It also makes it easier to use different credentials for Staging and Production releases, as they are not defined inside the application DPKG.
I’m sure there are more improvements yet to come, maybe using something like etcd to store credentials. I’ll definitely write about any advancements when I find them.