Deploying OpenFoodNetwork to Staging on AWS EC2

$ aws ec2 create-security-group --group-name devenv-sg --description "security group for development environment in EC2"
"GroupId": "sg-e5835b83"

aws ec2 authorize-security-group-ingress --group-name devenv-sg --protocol tcp --port 22 --cidr

Replace the CIDR range above with the address range you'll actually connect from, for better security. You can use the `aws ec2 describe-security-groups` command to admire your handiwork.

$ aws ec2 create-key-pair --key-name devenv-key --query 'KeyMaterial' --output text > devenv-key.pem
Change permissions on the local file we just created with the key info:

chmod 400 devenv-key.pem

aws ec2 run-instances --image-id ami-29ebb519 --count 1 --instance-type t2.micro --key-name devenv-key --security-groups devenv-sg --query 'Instances[0].InstanceId'

A client error (InvalidAMIID.NotFound) occurred when calling the RunInstances operation: The image id '[ami-29ebb519]' does not exist

Looks like this is because the --image-id needs to coincide with the region. There’s a list of images here. There’s a table here showing which instance types provide various AMIs. And the meaning of the instance types is here.

T2 use cases are described as "Development environments, build servers, code repositories, low-traffic web applications, early product experiments, small databases", which is probably perfect for this case. So we want HVM EBS-Backed 64-bit, which is AMI (Amazon Machine Image) ami-e3106686. Let's update our command:

aws ec2 run-instances --image-id ami-e3106686 --count 1 --instance-type t2.micro --key-name devenv-key --security-groups devenv-sg --query 'Instances[0].InstanceId'

That command returned "i-cda5d372".

“The instance will take a few moments to launch. Once the instance is up and running, the following command will retrieve the public IP address that you will use to connect to the instance.”

aws ec2 describe-instances --instance-ids i-cda5d372 --query 'Reservations[0].Instances[0].PublicIpAddress'

(Notice we put the instance ID returned from the previous command into the above one.) This returns the IP address of our EC2 instance:

Apparently it creates a user called ubuntu. Let’s try to connect: ssh -i devenv-key.pem ubuntu@

Authenticity can’t be established. Are we sure we want to connect? Yes. RSA key added to known hosts.
Permission denied (publickey)..

Seen this before.

$ ssh-keyscan -t rsa >> ~/.ssh/known_hosts

Nope. That's not the answer. Off to Stack Overflow.

Apparently in this case the user isn't ubuntu, but ec2-user. Curious how this ties in with being able to sudo. We'll soon find out.

The AWS Rails on Elastic Beanstalk tutorial is recommending using RVM to manage ruby versions. Let’s also look at the rbenv option and at least see if OFN recommends one or the other.

I did go ahead and add the gpg keys for rvm (can’t hurt, right?).

gpg --keyserver hkp:// --recv-keys D39DC0E3

Might want rbenv. What Ubuntu version is this?

$ lsb_release -a
-bash: lsb_release: command not found

$ uname -a
Linux ip-172-31-52-12 4.1.7-15.23.amzn1.x86_64 #1 SMP Mon Sep 14 23:20:33 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

That wasn’t very illuminating. Maybe this isn’t ubuntu? Wait! I think 4.1.7-15.23 is the Linux kernel version. Does Amazon have their own Linux version?

This EC2 chart is somewhat informative. But our specific machine image isn’t in it. Fucking annoying and confusing.

This is weird. Let’s try using rvm as I’m not seeing anything… No. Looks like ofn_deployment ansible scripts are using rbenv. Let’s try to install it.

sudo yum install -y git

Looks like sudo works!

git clone git:// .rbenv
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bash_profile
echo 'eval "$(rbenv init -)"' >> ~/.bash_profile
mkdir -p ~/.rbenv/plugins
cd ~/.rbenv/plugins
git clone git://

exit out of the shell and ssh in again (or `source ~/.bash_profile` I believe).
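The reason for the re-login: the running shell only picks up the new PATH entry once `~/.bash_profile` is re-read. A scratch demonstration of the same PATH-prepend trick, using a stub binary in a temp directory instead of the real `~/.rbenv/bin`:

```shell
# Create a stub "rbenv" in a scratch directory and prepend that directory to
# PATH, exactly as the echo >> ~/.bash_profile line does for ~/.rbenv/bin.
DEMO="$(mktemp -d)"
printf '#!/bin/sh\necho rbenv-stub\n' > "$DEMO/rbenv"
chmod +x "$DEMO/rbenv"
export PATH="$DEMO:$PATH"
RESULT="$(rbenv)"   # resolves via the updated PATH
echo "$RESULT"      # → rbenv-stub
```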

sudo yum install -y gcc make zlib zlib-devel openssl openssl-devel
rbenv install 1.9.3-p194

Apparently this can be a very slow process in a micro instance, as it’s relatively CPU-intensive.

Build failed! After about three minutes.

BUILD FAILED (Amazon Linux AMI 2015.09 using ruby-build 20150928-2-g717a54c)

Inspect or clean up the working tree at /tmp/ruby-build.20151021044920.22730
Results logged to /tmp/ruby-build.20151021044920.22730.log

Last 10 log lines:
ossl_pkey_ec.c:816:29: error: ‘EC_GROUP_new_curve_GF2m’ undeclared (first use in this function)
new_curve = EC_GROUP_new_curve_GF2m;
ossl_pkey_ec.c:816:29: note: each undeclared identifier is reported only once for each function it appears in
make[2]: *** [ossl_pkey_ec.o] Error 1
make[2]: Leaving directory /tmp/ruby-build.20151021044920.22730/ruby-1.9.3-p194/ext/openssl'
make[1]: *** [ext/openssl/all] Error 2
make[1]: Leaving directory
make: *** [build-ext] Error 2

Let’s try again… Nope.

Wait. We want version 1.9.3-p392 anyway, based on OFN_Deployment scripts. Low hopes, but one never knows, does one? Looks like we’re not the first people to receive this error. Looks like we need a patch for the 1.9.3-p392 version of ruby. This may do the trick. Will try tomorrow.

Tomorrow ended up going other places and is basically over, but I did wonder if simply allowing the OFN_deployment ansible playbooks to provision – no – I’m fairly sure Ruby needs to be installed prior to provisioning. Does it? I also haven’t created SSL certificates before, but it looks like it’s fairly simple for AWS. It’s a “self-signed” certificate for testing purposes and I left the password blank.

Hmmm. Digging a little deeper into this, I’m feeling like maybe the path of least resistance at this point would be to use Digital Ocean for staging. Well. Let’s push a tad further. Want to note that to make a non-default ssh key this command can be used:

ssh-keygen -f ~/.ssh/ofn_do_ill -C "ofn_do_ill"

Poking around at OFN_Deployment setup. Copied the privatekey.pem generated for the AWS SSL to ofn_deployment/files/ssl.key and in the same directory combined the csr.pem and server.crt files into a single file: ssl.crt.

I created a hosted zone via Amazon Route53, pointing to:

And pointed the domain name to them via the registrar (namecheap).

I created a Record Set for that Hosted Zone and directed it to our EC2 IP address: set Alias to No, and an A record was created.

Then I created a file called staging containing:

# file: staging

[ofn_servers]
ansible_ssh_host=

Let’s see if our DNS has propagated with traceroute

Seems to be poking around all over the place.

That’s not the way to do it, fool! dig is what we want. Don’t you understand how DNS works? It’s resolving. It’s resolving!

OK. The OFN deployment script recommends moving the single vars.yml file used for a single deployment into a directory, host_vars and having one for each deployment: host_vars/production.vars.yml, host_vars/staging.vars.yml, etc. But when I run the following command:

ansible-playbook install.yml -i staging

It's still looking for user "vagrant" and we want user "ec2-user" as configured in the staging.vars.yml file. So I still haven't exactly RT-Ansible-FM, but simply changed the paths under vars_files in the following two files, provision.yml and deploy.yml, from vars.yml to host_vars/staging.vars.yml.

Now we're kind of rocking. Ansible was trying to use apt-get to install packages, but the AWS EC2 instance runs Amazon's own OS, which is apparently a cross between CentOS and Red Hat and uses yum to manage packages. So I updated the main.yml file accordingly. I'm also saving the updates to my own git repository – even though I think this whole AWS approach is going to be temporary.

Have gotten halfway through the provisioning. Installing nodejs might be weird, so I moved it into its own task. Apparently the AWS yum repositories don't have python-software-properties, so I'm learning to add a repository to the EC2 yum config. But how do I find out which repo contains python-software-properties? For apt I thought the command would be dpkg -L python-software-properties. What's the yum equivalent? Not! I was confused. dpkg -L and rpm -ql just list the files of an installed package.

I guess the EC2 Linux is what you call RPM-based package management, and this web portal is a directory of repositories. This is a bit of a long shot. Wait a minute. Some of these packages aren't even relevant to this OS. Chris Angelico tells me that python-software-properties is for managing apt-installed packages. libpq-dev is Debian-specific. The CentOS equivalent is postgresql-libs.

Unattended upgrades are also different on CentOS: you use `yum-cron`, and it also needs to be configured and run. Package `tklib` is not being found either. Let's enable the epel repo: `sudo yum-config-manager --enable epel`. Looks like the `tk` library is irrelevant if you're not using a GUI for the OS: "Tk applications run on the desktop, not in the browser." So we can probably remove that from the task.

Don’t know if we need `zlib1g-dev` (which is also not found). It’s part of the zlib compression library. Let’s try the available `zlib-devel`. For `libssl-dev` we want to use `openssl2-devel` and for `libxml2-dev` we use `libxml2-devel`. Same with `libxslt1-dev`: just add the `el` at the end of the name. Nope. It’s `libxslt-devel`. This is computer programming? This feels a lot more like hacking. More like poking.

We'll replace `ranger` with Midnight Commander, a terminal-based file manager that runs on CentOS. Yay. Three Ansible tasks successfully run. Three out of like twenty!

Moving right along. Had to add `- epel-release` to the `nodejs` and `npm` install task and just updated the language and database roles’ `main.yml` files to use Yum instead of Apt. Chris says I could have used an Amazon Ubuntu disc image and avoided this whole mess.

Commented out the language package from `staging.vars.yml`, which may or may not work. Looks like CentOS installs with English support? Ended up commenting out two of the three tasks in `languages/tasks/main.yml`, leaving just the task that "exports" `LANG`, `LC_ALL` and `LC_CTYPE`.

Now we’re rocking – or seem to be. Getting `rbenv` installed and a bunch of `gems` it looks like.

This doesn’t look terribly good, but might not be important:

failed: [] => (item={'comment': 'Main user', 'home': u'/home/ec2-user', 'name': u'ec2-user'}) => {"changed": true, "cmd": "$SHELL -lc \"rbenv versions | grep 1.9.3-p392\"", "delta": "0:00:00.084338", "end": "2015-10-24 05:53:29.276847", "item": {"comment": "Main user", "home": "/home/ec2-user", "name": "ec2-user"}, "rc": 1, "start": "2015-10-24 05:53:29.192509", "warnings": []}


failed: [] => (item=[{u'cmd': u'$SHELL -lc "rbenv versions | grep 1.9.3-p392"', u'end': u'2015-10-24 05:53:29.276847', u'stderr': u'', u'stdout': u'', u'changed': True, u'rc': 1, 'item': {'comment': 'Main user', 'home': u'/home/ec2-user', 'name': u'ec2-user'}, u'warnings': [], u'delta': u'0:00:00.084338', 'invocation': {'module_name': u'shell', 'module_complex_args': {}, 'module_args': u'$SHELL -lc "rbenv versions | grep 1.9.3-p392"'}, 'stdout_lines': [], u'start': u'2015-10-24 05:53:29.192509'}, {'comment': 'Main user', 'home': u'/home/ec2-user', 'name': u'ec2-user'}]) => {"changed": true, "cmd": "$SHELL -lc \"rbenv install 1.9.3-p392\"", "delta": "0:02:21.728975", "end": "2015-10-24 05:55:54.201526", "item": [{"changed": true, "cmd": "$SHELL -lc \"rbenv versions | grep 1.9.3-p392\"", "delta": "0:00:00.084338", "end": "2015-10-24 05:53:29.276847", "invocation": {"module_args": "$SHELL -lc \"rbenv versions | grep 1.9.3-p392\"", "module_complex_args": {}, "module_name": "shell"}, "item": {"comment": "Main user", "home": "/home/ec2-user", "name": "ec2-user"}, "rc": 1, "start": "2015-10-24 05:53:29.192509", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}, {"comment": "Main user", "home": "/home/ec2-user", "name": "ec2-user"}], "rc": 1, "start": "2015-10-24 05:53:32.472551", "warnings": []}
stderr: Downloading yaml-0.1.6.tar.gz...
Installing yaml-0.1.6...
Installed yaml-0.1.6 to /home/ec2-user/.rbenv/versions/1.9.3-p392

Downloading ruby-1.9.3-p392.tar.gz...
Installing ruby-1.9.3-p392...

BUILD FAILED (Amazon Linux AMI 2015.09 using ruby-build 20141225)

Inspect or clean up the working tree at /tmp/ruby-build.20151024055332.16232
Results logged to /tmp/ruby-build.20151024055332.16232.log

Last 10 log lines:
ossl_pkey_ec.c:816:29: error: ‘EC_GROUP_new_curve_GF2m’ undeclared (first use in this function)
new_curve = EC_GROUP_new_curve_GF2m;
ossl_pkey_ec.c:816:29: note: each undeclared identifier is reported only once for each function it appears in
make[2]: *** [ossl_pkey_ec.o] Error 1
make[2]: Leaving directory `/tmp/ruby-build.20151024055332.16232/ruby-1.9.3-p392/ext/openssl'
make[1]: *** [ext/openssl/all] Error 2
make[1]: Leaving directory `/tmp/ruby-build.20151024055332.16232/ruby-1.9.3-p392'
make: *** [build-ext] Error 2

failed: [] => (item=[{u'cmd': u'$SHELL -lc "rbenv versions | grep 1.9.3-p392"', u'end': u'2015-10-24 05:53:29.276847', u'stderr': u'', u'stdout': u'', u'changed': True, u'rc': 1, 'item': {'comment': 'Main user', 'home': u'/home/ec2-user', 'name': u'ec2-user'}, u'warnings': [], u'delta': u'0:00:00.084338', 'invocation': {'module_name': u'shell', 'module_complex_args': {}, 'module_args': u'$SHELL -lc "rbenv versions | grep 1.9.3-p392"'}, 'stdout_lines': [], u'start': u'2015-10-24 05:53:29.192509'}, {'comment': 'Main user', 'home': u'/home/ec2-user', 'name': u'ec2-user'}]) => {"changed": true, "cmd": "$SHELL -lc \"rbenv global 1.9.3-p392 && rbenv rehash\"", "delta": "0:00:00.049277", "end": "2015-10-24 05:55:57.240157", "item": [{"changed": true, "cmd": "$SHELL -lc \"rbenv versions | grep 1.9.3-p392\"", "delta": "0:00:00.084338", "end": "2015-10-24 05:53:29.276847", "invocation": {"module_args": "$SHELL -lc \"rbenv versions | grep 1.9.3-p392\"", "module_complex_args": {}, "module_name": "shell"}, "item": {"comment": "Main user", "home": "/home/ec2-user", "name": "ec2-user"}, "rc": 1, "start": "2015-10-24 05:53:29.192509", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}, {"comment": "Main user", "home": "/home/ec2-user", "name": "ec2-user"}], "rc": 1, "start": "2015-10-24 05:55:57.190880", "warnings": []}
stderr: rbenv: version `1.9.3-p392' not installed

Succeeding at 30 tasks!


`msg: Destination directory /etc/monit/conf.d does not exist`

In `roles/app/tasks/main.yml` we need to update `/etc/monit/conf.d` to `/etc/monit.d`.

Update `postgresql` to `postgresql94`, and same with `-contrib`. The deployment script also calls for `-client`, which isn't returned by `yum list postgres*`.

Rbenv is working and we got `1.9.3-p392` installed.
[ec2-user@ip-172-31-52-12 ~]$ rbenv versions
* system (set by /home/ec2-user/.rbenv/version)

If I was smart I’d look at the provisioning script that did it and see how.

In the meantime: 32 tasks successful.

Error: `sudo: unknown user: postgres`

Added `postgresql94-server` to install list. Then ran `sudo service postgresql94 start`.

Failed: `/var/lib/pgsql94/data is missing. Use "service postgresql94 initdb" to initialize the cluster first.`

So. `sudo service postgresql94 initdb`. Response: OK.

Now: `sudo service postgresql94 start`.

Response: `Starting postgresql94 service: [ OK ]`

How to tell Ansible to do that!?!?!

Let’s poke around a little more on the server first.

Need to configure, I think. `vim /var/lib/pgsql94/data/pg_hba.conf`

Update the METHOD column of the following two lines:

local   all   all             peer
host    all   all             ident

from `peer` and `ident` to `trust`.
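A hedged sketch of making that edit non-interactively (say, from a provisioning script), demonstrated on a scratch copy rather than the live config — on the server the target would be `/var/lib/pgsql94/data/pg_hba.conf`:

```shell
# Rewrite the METHOD column (the last field) of the local/host lines from
# peer/ident to trust, on a scratch copy of pg_hba.conf.
PGHBA="$(mktemp)"
cat > "$PGHBA" <<'EOF'
local   all   all   peer
host    all   all   ident
EOF
sed -i 's/peer$/trust/; s/ident$/trust/' "$PGHBA"
cat "$PGHBA"   # both METHOD fields now read "trust"
```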

Now restart the psql server with `sudo service postgresql94 restart`.


ec2-user@ip-172-31-52-12:~$ psql -U postgres
psql (9.4.4)
Type "help" for help.


And look at the base DBs:

postgres=# \list
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(3 rows)

Here's some YAML that looks like it might direct Ansible to do the above steps, if added to the dbserver tasks file:

- name: Initiate database
  command: service postgresql94 initdb
  sudo: yes

- name: Start PostgreSQL and enable at boot
  service: name=postgresql94 state=started enabled=yes

- name: Ensure PostgreSQL is listening on all localhost
  lineinfile: dest=/var/lib/pgsql94/data/postgresql.conf line="listen_addresses = ''"
  notify: restart postgresql94

- lineinfile: dest=/var/lib/pgsql94/data/pg_hba.conf line='host all all md5'
  notify: restart postgresql94

New file `roles/dbserver/handlers/main.yml` containing:

- name: restart postgresql94
  service: name=postgresql94 state=restarted

Let’s stop psql and try running Ansible deploy again. `sudo service postgresql94 stop`.

Had to do the above with "superuser do", so to each task add the line `sudo: yes`.

`msg: the python psycopg2 module is required`

OK. Add psycopg2 for OFN, so `- python-psycopg2` goes in the with_items list.

Same error. And it is installed, as per server message.

Error is coming from this item:

- name: create db user
sudo: yes
sudo_user: postgres
postgresql_user: name={{ db_user }} password={{ db_pass }} role_attr_flags=SUPERUSER

There’s only the Superuser in existence so far as per:

postgres=# \du
List of roles
Role name | Attributes | Member of
postgres | Superuser, Create role, Create DB, Replication | {}

Wait a minute! `python-psycopg2` is installed via Yum, but within python:

>>> import psycopg2
Traceback (most recent call last):
  File "", line 1, in
ImportError: No module named psycopg2

And `pip list` or `pip freeze` don’t return it.
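A sanity check worth running at this point (a hedged sketch, using python3 as a fallback so it runs anywhere): which interpreter answers to `python`, and where does it search for modules? yum-installed and pip-installed packages can easily land in different site-packages directories, which would explain a package being "installed" yet unimportable.

```shell
# Ask the interpreter itself where it lives and where it looks for modules.
PY="$(command -v python || command -v python3)"
PY_EXE="$("$PY" -c 'import sys; print(sys.executable)')"
PY_PATHS="$("$PY" -c 'import sys; print("\n".join(sys.path))')"
echo "$PY_EXE"
echo "$PY_PATHS"
```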

Looks like I’m not the first person to have this issue (although in our case seems simpler to solve). Let’s add this to the task:

- name: install psycopg2 python module
pip: name=psycopg2

Mmmm not quite. Let’s try:

- name: upgrade pip
sudo: yes
command: pip install --upgrade pip

- name: install psycopg2 python module
sudo: yes
pip: name=psycopg2

Aaaah. Looks like we may need a more complex task list:

To `staging.vars.yml`:

# Pip variables
pip_download_dest: /tmp
python: python
pip: pip

To the dbserver/tasks/main.yml (for now):

# Causes an error if we try and which something that doesn't exist so use this
# as a workaround.
- name: check to see if pip is already installed
  command: "{{ pip }} --version"
  ignore_errors: true
  register: pip_is_installed
  changed_when: false

- name: download pip
  get_url: url= dest={{ pip_download_dest }}
  when: pip_is_installed.rc != 0

- name: install pip
  command: "{{ python }} {{ pip_download_dest }}/"
  sudo: yes
  when: pip_is_installed.rc != 0

- name: delete
  file: state=absent path={{ pip_download_dest }}/
  when: pip_is_installed.rc != 0

# $ pip --version
# pip 1.5.2 from /usr/local/lib/python2.7/dist-packages (python 2.7)
- name: check to see if pip is installed at the correct version
  shell: "{{ pip }} --version | awk '{print $2}'"
  register: pip_installed_version
  changed_when: false
  when: pip_version != None or pip_version != "LATEST"

- name: install required version of pip
  command: "{{ pip }} install pip=={{ pip_version }}"
  sudo: yes
  when: pip_version != None and pip_installed_version.stdout != pip_version and pip_version != "LATEST"

- name: Upgrade to latest version of pip
  command: "{{ pip }} install -U pip"
  register: pip_latest_output
  sudo: yes
  changed_when: pip_latest_output.stdout.find('Requirement already up-to-date') == -1
  when: pip_version == None or pip_version == "LATEST"
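The version-detection task above boils down to this pipeline — awk grabs the second whitespace-separated field of the `pip --version` output:

```shell
# Parse the version number out of a sample `pip --version` line.
SAMPLE='pip 1.5.2 from /usr/local/lib/python2.7/dist-packages (python 2.7)'
VERSION="$(echo "$SAMPLE" | awk '{print $2}')"
echo "$VERSION"   # → 1.5.2
```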

Also added `- python27-devel` to items list, although maybe not necessary.

Was getting error `DistributionNotFound: pip==7.1.0` from `sudo pip install -U pip`, which apparently indicates a broken python package, so tried removing the previously installed pip with `[sudo] python -m pip uninstall pip setuptools`.

Back to an error I was getting the other night:

`error while evaluating conditional: pip_latest_output.stdout.find('Requirement already up-to-date') == -1`

Commenting out that line and the following again.

ec2-user@ip-xxx:~$ sudo python
Requirement already up-to-date: pip in /usr/local/lib/python2.7/site-packages
ec2-user@ip-xxx:~$ sudo pip install -U pip
sudo: pip: command not found

Apparently pip is not accessible to sudo, because sudo doesn't use the usual $PATH but its own "secure path". So, we get the secure path:

$ sudo bash -c 'echo $PATH'

Get the pip path:

$ which pip

Make the link:

$ sudo ln -s /usr/local/bin/pip /usr/bin/pip
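The same idea in a scratch reproduction, with temp directories standing in for `/usr/local/bin` (where pip actually lives) and `/usr/bin` (which is on sudo's secure path):

```shell
# A tool installed outside the "secure" search path, made visible by
# symlinking it into a directory that is on that path.
TOOL_DIR="$(mktemp -d)"     # stands in for /usr/local/bin
SECURE_DIR="$(mktemp -d)"   # stands in for /usr/bin
printf '#!/bin/sh\necho pip-ok\n' > "$TOOL_DIR/pip"
chmod +x "$TOOL_DIR/pip"
ln -s "$TOOL_DIR/pip" "$SECURE_DIR/pip"
LINKED_OUT="$("$SECURE_DIR/pip")"
echo "$LINKED_OUT"          # → pip-ok
```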

OK OK. I got as far as getting to the ansible-galaxy package ofn_deployment uses for installing NGINX and Rails, and beginning to replace it with a CentOS/Red Hat one, when it finally dawned on me that AWS EC2 does in fact offer Ubuntu images. So I'm gonna abandon this path for the time being and see about getting an EC2 Ubuntu image going. Found one which is HVM and eligible for the free tier for 12 months in the us-east-1 region: ami-c42749c4.

$ aws ec2 run-instances --image-id ami-c5ff89af --count 1 --instance-type t2.micro --key-name devenv-key --security-groups devenv-sg --query 'Instances[0].InstanceId'
$ aws ec2 describe-instances --instance-ids i-ba2f0505 --query 'Reservations[0].Instances[0].PublicIpAddress'
$ ssh -i ~/.ssh/ofn_aws_key.pem ubuntu@

That was five minutes. Let’s visit route56 again. Make that route53. Update the A record to point to the new IP address.

Again – comment out the install-language-packages task; maybe AWS Ubuntu comes with them?

Changed app path in `roles/app/templates/post-receive.j2` to `APP_PATH="$HOME/apps/ofn_america"`.

Hitting a snag at `roles/deploy/tasks/main` where `"database ofn_america does not exist"`. Let's see what databases exist.

Let's poke around. `psql -U postgres` is complaining again: `psql: FATAL: Peer authentication failed for user "postgres"`.

Need to configure `/etc/postgresql/9.4/main/pg_hba.conf`.

local   all   all             peer
host    all   all             md5

Update `peer` and `md5` to `trust`.

Now restart postgres server. Reload first? `sudo /etc/init.d/postgresql reload`.

`sudo /etc/init.d/postgresql restart`

It's a little different on Ubuntu (Debian-style init scripts) than on CentOS, but the foundation is the same.

Still `psql: FATAL: Peer authentication failed for user "postgres"`.

Wait. Read the fucking manual, iLL! `sudo -u postgres psql postgres`

psql (9.4.5)
Type "help" for help.

postgres=# \list
Name | Owner | Encoding | Collate | Ctype | Access privileges
ofn_america | ofn_user | UTF8 | en_US.utf8 | en_US.utf8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
(4 rows)


So it looks like the database does exist. Maybe it's a rails issue? No, the check is a plain psql command. Let's try it manually:

$ psql -h localhost -U ofn_user -d ofn_america -c "SELECT true FROM pg_tables WHERE tablename = 'order_cycles';"
(0 rows)

No complaining. Maybe our new configs solved the issue. Let’s run deploy again… And go to bed again.

Couldn't seem to get the unicorn restart working using Ansible's `service` module, so using a workaround with `raw`:

- name: restart unicorn step 2
  # sudo: yes
  raw: sudo systemctl restart unicorn_ofn_america.service
  # service:
  #   name: unicorn_{{ app }}
  #   state: restarted
  when: table_exists.stderr.find('does not exist') == -1
  # If unicorn isn't actually started yet we probably need this:
  notify: start unicorn

Now deployment completes with eighty-something tasks, and I can ssh in and run the rails app from the `apps/ofn_america/current` directory: `bundle exec rails server -b`. Unfortunately it's not accessible in a browser. I can confirm it's running by running `wget` from a terminal on the host; it returns the `index.html` file.

Maybe port `80` isn’t open on the EC2 instance. It’s open now. Still nothing in the browser.

Let’s look in the `apps/ofn_america/current` directory and see where our URL is being specified and if there’s a config we’re not aware of:

$ grep -r "staging.usfoodcoop" .


In the entire `apps` directory? Nothing.

But the domain does get set a couple of times in the Ansible playbook. Once in the `staging.vars.yml` file and once in the `staging` file. So Ansible must be configuring something else.

Is anything running on port 80? How would you find out?

$ sudo netstat -tulpn | grep --color :80
tcp 0 0* LISTEN 18680/nginx -g daem

The web server, nginx. Ah. Nginx has a config file. Duh! Where would that be?

$ whereis nginx
nginx: /usr/sbin/nginx /etc/nginx /usr/share/nginx /usr/share/man/man8/nginx.8.gz

I think config files are often in the `etc` directory.

ls /etc/nginx/

There it is (among other things). Let’s look at it:

$ sudo cat /etc/nginx/nginx.conf

A couple of lines catch the eye:

access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
include /etc/nginx/sites-enabled/*;


$ ls -l /etc/nginx/sites-enabled/


sudo cat /etc/nginx/sites-enabled/ofn_america

server {
    listen 443;

    include ofn_america_ssl;

    rewrite ^(.*)$1 permanent;

Hey – what's this about listening on `443`? Is that a port? Maybe we're not on port 80.

$ sudo netstat -tulpn | grep --color :443
tcp 0 0* LISTEN 18680/nginx -g daem

Ah. Interesting!

Let's go back to that EC2 security panel and see if we can open port `443` in the security group our instance is assigned to. So we go to add another "rule" and one of the options is `https`. Wait! The Ansible playbook docs mention that we're running `https`. That's why we needed the SSL certificates. And lo and fucking behold, the default port for https is fuckin'… wait for it: four forty mother loving three! Open that bad boy up, yo!

And now in the browser we have….

A page! With a picture of a mad cat… that says “…something went wrong…”. Is that a `500` server error page? In the `/etc/nginx/sites-enabled/ofn_america` file it mentions a page called `500.html`.

Let's find that file: `find -name '500.html'`.

There’s a bunch of them, many in our `apps/releases` directories. Let’s look at one.

$ cat apps/ofn_america/current/public/500.html

Want to let us know what went wrong? Email us at: …

That’s us. Anything revealing in the nginx error log: `$ tail -10 /var/log/nginx/error.log`?

2015/11/01 07:34:48 [error] 18683#18683: *13 connect() to unix:/home/ubuntu/apps/ofn_america/shared/sock/unicorn.ofn_america.sock failed (111: Connection refused) while connecting to upstream, client:, server:, request: "GET / HTTP/1.1", upstream: "http://unix:/home/ubuntu/apps/ofn_america/shared/sock/unicorn.ofn_america.sock:/", host: ""

Nothing obvious to me there. Let’s try the `apps` logs.

$ cd apps/ofn_america/current
$ ls log/
$ tail -10 log/staging.log
Copied binary asset to jquery.alerts/images/title.gif
Copied binary asset to jquery.jstree/themes/apple/bg.jpg
Copied binary asset to jquery.jstree/themes/apple/d.png
Copied binary asset to jquery.jstree/themes/apple/dot_for_ie.gif
Stripped digests, copied to jquery.jstree/themes/apple/style.css, and created gzipped asset
Copied binary asset to jquery.jstree/themes/apple/throbber.gif
Copied binary asset to select2.png
Copied binary asset to select2x2.png
Copied binary asset to
Generated non-digest assets in 3487ms

Not very revealing. Glad assets are being copied (generated?).

$ tail -10 log/newrelic_agent.log
[10/31/15 20:28:07 +0000 ip-172-31-53-234 (21960)] INFO : Installing Rack::Builder middleware instrumentation
[10/31/15 20:28:07 +0000 ip-172-31-53-234 (21960)] INFO : Installing ActiveRecord instrumentation
[10/31/15 20:28:07 +0000 ip-172-31-53-234 (21960)] INFO : Installing Rails 3 Controller instrumentation
[10/31/15 20:28:07 +0000 ip-172-31-53-234 (21960)] INFO : Installing Rails 3.1/3.2 view instrumentation
[10/31/15 20:28:07 +0000 ip-172-31-53-234 (21960)] INFO : Installing Rails 3 Error instrumentation
[10/31/15 20:28:07 +0000 ip-172-31-53-234 (21960)] INFO : Finished instrumentation
[10/31/15 20:28:19 +0000 ip-172-31-53-234 (21960)] INFO : Doing deferred dependency-detection before Rack startup
[10/31/15 20:28:37 +0000 ip-172-31-53-234 (22008)] INFO : delayed_job not available: No DJ worker present. Skipping DJ queue sampler
[10/31/15 20:28:37 +0000 ip-172-31-53-234 (22008)] ERROR : Invalid license key, please contact
[10/31/15 20:28:37 +0000 ip-172-31-53-234 (22008)] ERROR : Visit to obtain a valid license key, or to upgrade your account.

Huh! Looking through Ansible playbook, our `newrelic_key: none`. And that didn’t stop the Vagrant box instance from running.

$ tail -10 log/development.log

Nothing of note.

$ tail -10 log/delayed_job.log
# Logfile created on 2015-10-31 12:03:20 +0000 by logger.rb/31641

Doesn’t look too alarming. Let’s just try to rerun the rails server. Maybe without the binding flag: `bundle exec rails server`. Hi mad cat.

Check bugsnag? Nope.

Play around in the rails console?

$ bundle exec rails console
Loading development environment (Rails 3.2.21)
irb(main):001:0> app.get("/")

No complaints there. A bunch of “spree” output. Mmmm, but let’s specify the `staging` environment:

$ bundle exec rails c staging
irb(main):001:0> app.get("/")
=> 301

That’s interesting. We can look at a list of all our routes with:

$ bundle exec rake routes

No shortage of routes there. Let's go back to that nginx error: `.sock failed (111: Connection refused)`.

$ ls -l /home/ubuntu/apps/ofn_america/shared/sock/
total 0
srwxrwxrwx 1 ubuntu ubuntu 0 Oct 31 12:01 unicorn.ofn_america.sock

Owned by ubuntu. `777` permissions so everyone can do everything. Is unicorn running? I wonder if the problem is that we’re managing unicorn with systemd (systemctl) as opposed to whatever method Ansible “service” employs.

$ sudo systemctl status unicorn_ofn_america.service

Does say it’s running.

Ansible docs for service:

“Synopsis: Controls services on remote hosts. Supported init systems include BSD init, OpenRC, SysV, Solaris SMF, systemd, upstart.” So should just be using systemd, right?
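If so, the `raw` workaround could presumably be replaced by the `service` version it comments out. A sketch (untested, and assuming the unit really is installed as `unicorn_<app>.service`):

```yaml
- name: restart unicorn
  sudo: yes
  service: name=unicorn_{{ app }} state=restarted
  when: table_exists.stderr.find('does not exist') == -1
```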

I wonder if by running systemctl with sudo, I’ve made unicorn unavailable to the `ubuntu` user.