SSH To A Docker Container

Docker containers are awesome. Docker lets you create development environments in a matter of minutes, and it gives you the ability to package, ship, and share your Docker image with anyone. Once your image is pushed to a Docker registry, anyone can pull it down and run it on their own operating system.

To keep track of the containers on your system (the -a flag includes stopped ones), type ‘docker ps -a’ in your terminal.

$ docker ps -a
# gives a result similar to this ...
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
f322d2370415        moul/icecast        "/"              2 hours ago         Up 2 hours          0.0.0.0:8000->8000/tcp   icecast_icecast_1

If you would like a shell inside a running Docker container (strictly speaking this is not SSH — no SSH server is needed), just run the following from the terminal.

docker exec -t -i f322d2370415 /bin/bash
# this drops you into a bash prompt inside the container

f322d2370415 is the container ID, while /bin/bash is the shell that you would like to use.
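Not every image ships with bash, and sometimes you only need to run a single command rather than an interactive shell. A couple of variations, using the container ID and name from the example above:

```shell
# Fall back to sh when the image has no bash
docker exec -it f322d2370415 /bin/sh

# Run a single command without opening an interactive shell
docker exec f322d2370415 cat /etc/os-release

# The container name works in place of the ID
docker exec -it icecast_icecast_1 /bin/bash
```

Note that -it is just the combined shorthand for the -t -i flags used above.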

Start EC2 AMI from AWS CLI

Amazon Web Services exposes its API via a CLI (command line interface), which gives users the ability to manage servers from a remote host. The AWS CLI must be installed on the host computer and authenticated to AWS. Once authenticated, a user can perform management tasks such as starting and stopping EC2 instances.

How to start EC2 instance from AWS CLI

Requires an image ID, an instance count, an instance type, a key pair name, and a security group ID.

aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type c4.2xlarge --key-name your-key --security-group-ids sg-xxxxxxxx

How to Associate an Elastic IP Address to an Instance

Requires an instance ID and an Elastic IP address.

aws ec2 associate-address --instance-id i-xxxxxxxxxxxxxxxx --public-ip xxx.xxx.xxx.xxx

How to Terminate an Instance

Requires one or more instance IDs.

aws ec2 terminate-instances --instance-ids i-xxxxxxxxxxxxxxxx
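Putting these commands together, a minimal launch-and-terminate script might look like the sketch below. The --query and --output text flags tell the AWS CLI to print just the new instance ID, and aws ec2 wait blocks until the instance is actually running. All IDs and names here are placeholders:

```shell
# Launch the instance and capture its ID
INSTANCE_ID=$(aws ec2 run-instances \
  --image-id ami-xxxxxxxx --count 1 --instance-type c4.2xlarge \
  --key-name your-key --security-group-ids sg-xxxxxxxx \
  --query 'Instances[0].InstanceId' --output text)

# Block until the instance reaches the running state
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"

# ... do some work with the instance, then tear it down
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"
```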

The jq Processor

The AWS CLI spits out JSON after each successful execution. If you need to grab the result, assign it to a variable, and use it in subsequent scripts, you need some kind of JSON parser. You can use a tool like jq, which will process or filter the result for you. From jq’s website:

jq is a tool for processing JSON inputs, applying the given filter to its JSON text inputs and producing the filter’s results as JSON on standard output. The simplest filter is ., which is the identity filter, copying jq’s input to its output unmodified (except for formatting).

In this example, we will use the AWS CLI to give us a list of running EC2 instances. We will dump the output into a file called output.json, filter out the “InstanceId” with the jq processor, assign the result to a variable called INSTANCEID, and finally use that variable to associate our instance with an Elastic IP address.

aws ec2 describe-instances --filters Name=instance-state-name,Values=running > output.json
INSTANCEID=$(jq --raw-output '.Reservations[].Instances[].InstanceId' output.json)
aws ec2 associate-address --instance-id "$INSTANCEID" --public-ip xxx.xxx.xxx.xxx

Filtering nested JSON can be a bit tricky. In this particular case, the filter is the expression inside the single quotes right after the jq command. To strip the quotes from the result, I’m using the --raw-output switch. Finally, I associate the instance with an Elastic public IP address.

jq is a very handy tool.
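As a self-contained illustration (no AWS account required), you can pipe any JSON into jq and apply the same kind of filter used above:

```shell
echo '{"Reservations":[{"Instances":[{"InstanceId":"i-abc123"}]}]}' \
  | jq --raw-output '.Reservations[].Instances[].InstanceId'
# prints: i-abc123
```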

Install Boto3 AWS SDK for Python

Boto3 is the Amazon Web Services (AWS) SDK for Python, which allows Python developers to write software that makes use of Amazon services like S3 and EC2. Boto3 provides an easy-to-use, object-oriented API as well as low-level direct service access.

To install on Mac

sudo easy_install pip
sudo pip install --ignore-installed six boto3

You need to set up your credentials and config files to authenticate to AWS first. The files are:

~/.aws/credentials
~/.aws/config

After it’s installed and configured, try using Boto3 to fetch your AWS S3 buckets.

import boto3

s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
Your S3 bucket names will be printed after execution.

Boto3 also has access to other AWS services such as EC2, meaning you can start and stop instances from Python.
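For comparison, the AWS CLI offers the same start and stop operations directly from the shell; the instance ID below is a placeholder:

```shell
# Start a stopped instance
aws ec2 start-instances --instance-ids i-xxxxxxxxxxxxxxxx

# Stop it again (a stopped instance keeps its EBS volumes,
# unlike a terminated one)
aws ec2 stop-instances --instance-ids i-xxxxxxxxxxxxxxxx
```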

Multisite to Single Site

I have a couple of WordPress blogs that were running multisite. I decided I didn’t need the other blogs, so I converted both sites to standard, single-site WordPress installs. The good news is that there’s no need to reinstall WordPress; it requires just a simple edit of a couple of WordPress files. The conversion was not as bad as I initially thought. Just follow the steps below.

5 Easy Steps

  1. Back up your database — just in case.
  2. Delete the blogs you don’t want.
  3. Edit wp-config.php.
  4. Edit .htaccess.
  5. Clean up your database.

1. Back up your database, just in case. You can use several tools to back up your blogs, such as phpMyAdmin or a WordPress backup plugin. You can also use the WordPress Export feature to save the blog in XML format in case you want to restore it in the future.

2. Delete the blogs you don’t want. Go to All Blogs in the WordPress dashboard. If you have multiple blogs, they will be listed there. Delete the ones you don’t want.

3. Edit wp-config.php. Remove the following Multisite entries:

define( 'MULTISITE', true );
define( 'SUBDOMAIN_INSTALL', false );
$base = '/wordpress/';
define( 'DOMAIN_CURRENT_SITE', 'localhost' );
define( 'PATH_CURRENT_SITE', '/wordpress/' );
define( 'SITE_ID_CURRENT_SITE', 1 );
define( 'BLOG_ID_CURRENT_SITE', 1 );

4. Edit .htaccess. Change it back to the standard WordPress configuration:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
# END WordPress

5. Clean up your database. Finally, remove the database tables that are no longer needed.
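Assuming the default wp_ table prefix and that the deleted sub-site had blog ID 2, the cleanup might look something like the sketch below. Verify the table names against your own database before dropping anything; wp_blogmeta only exists on newer WordPress versions:

```shell
mysql -u your_user -p your_database <<'SQL'
-- Network-wide multisite tables
DROP TABLE IF EXISTS wp_blogs, wp_blogmeta, wp_site, wp_sitemeta,
                     wp_signups, wp_registration_log;
-- Per-site tables for the deleted blog (ID 2)
DROP TABLE IF EXISTS wp_2_posts, wp_2_postmeta, wp_2_comments,
                     wp_2_commentmeta, wp_2_terms, wp_2_termmeta,
                     wp_2_term_taxonomy, wp_2_term_relationships,
                     wp_2_options, wp_2_links;
SQL
```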