Auditing SSH activity with Docker containers

Juan Berner
Feb 18, 2016

Since I have received several requests about similar setups, I’m writing a post on a way to log SSH connections with Docker, while also mentioning other options.

In a previous post I described a simple way to ship the commands executed on a Unix-like server as logs (https://medium.com/@89berner/monitoring-the-activity-of-your-users-in-unix-like-servers-f37a4b70c043), but there are many ways for a user to avoid having their commands logged.

I was introduced to the approach of using a tool such as rootsh (http://linux.die.net/man/1/rootsh) to log all executed commands, combined with a bastion host so the user cannot alter or stop the logging. Since the logged-in user can only modify the destination host, while all logging happens on the bastion host, this provides a higher degree of confidence than tools that have to coexist with a potential attacker in the same environment as the logging processes.

A different way of accomplishing this is with Docker containers. Since each container provides jail-like isolation, this improves control over what a user can do once they have logged in to a server.

How it works:

When a user logs in to the server, they get a specific shell which simply runs a Docker container where all activity is monitored and access to the actual host is restricted. This confines the user inside the Docker container and gives us the auditing and logging functionality that Docker provides. I also added rootsh to the flow as another layer of activity logging. In short, what happens is:

ssh session -> runs a custom shell -> starts rootsh session -> enters the docker container
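As a rough sketch of how the pieces could be tied together (this is not the exact script from the docker-logging repository; the image name and the use of rootsh’s -i/--no-interactive option to run a command are assumptions on my part):

#!/bin/bash
# Hypothetical custom login shell, set as the user's shell in /etc/passwd.
# rootsh records the whole terminal session to syslog on the host, and the
# only thing it runs is a throwaway container, so the user never gets a
# shell directly on the host.
exec rootsh -i docker run -it --rm --hostname "audit-$USER" ubuntu:14.04 /bin/bash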

Once the user ends their container session, the SSH session dies. Even if the user managed to break out of the container, their activity would still have been logged by rootsh, and additional checks could be added to kill the SSH connection if that were to happen.

To try it, you can execute the following on an Ubuntu 14.04 host:

cd /tmp && git clone https://github.com/89berner/docker-logging.git && bash docker-logging/install.sh

After logging in to the test user with su - exampleuser you should see:

ubuntu@ip-172-31-63-214:/tmp$ sudo su - exampleuser
No directory, logging in with HOME=/

exampleuser@98652fadf3e3:/$ ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var

exampleuser@98652fadf3e3:/$ ps
PID TTY TIME CMD
1 ? 00:00:00 bash
16 ? 00:00:00 ps

After that all commands in the container will be logged, for example:

Feb 13 11:32:42 ip-172-31-63-214 rootsh[05b49]: ubuntu: 018: exampleuser@98652fadf3e3:/$ ls
Feb 13 11:32:42 ip-172-31-63-214 rootsh[05b49]: ubuntu: 019: bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
Feb 13 11:32:42 ip-172-31-63-214 rootsh[05b49]: ubuntu: 020: exampleuser@e5741b6afcfb:/$ ps
Feb 13 11:32:42 ip-172-31-63-214 rootsh[05b49]: ubuntu: 021: PID TTY TIME CMD
Feb 13 11:32:42 ip-172-31-63-214 rootsh[05b49]: ubuntu: 022: 1 ? 00:00:00 bash
Feb 13 11:32:42 ip-172-31-63-214 rootsh[05b49]: ubuntu: 023: 19 ? 00:00:00 ps
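Since rootsh writes to syslog, a minimal sketch of forwarding these entries to a central collector with rsyslog could look like this (assuming rsyslog is the local syslog daemon; logs.example.com is a placeholder):

# Hypothetical /etc/rsyslog.d/30-rootsh.conf: forward everything logged by
# rootsh to a remote collector over TCP so it cannot be tampered with locally.
:programname, isequal, "rootsh" @@logs.example.com:514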

This way the logs can be redirected to your favourite logging tool. Besides that, we can look at the activity from the container’s side with some of the following Docker commands:

docker logs: It will show the same kind of logging but from the container itself:

root@ip-172-31-63-214:/var/log# docker logs e5741b6afcfb

exampleuser@e5741b6afcfb:/$ ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
exampleuser@e5741b6afcfb:/$ ps
PID TTY TIME CMD
1 ? 00:00:00 bash
16 ? 00:00:00 ps

docker inspect: To get metadata from the container
docker diff: To get changes in the file system
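As a quick illustration, using the container ID from the docker logs example above (the --format template fields are just one possible selection of the available metadata):

# Show metadata about the container, e.g. when it started and from which image:
docker inspect --format '{{.State.StartedAt}} {{.Config.Image}}' e5741b6afcfb

# Show files added (A), changed (C) or deleted (D) inside the container:
docker diff e5741b6afcfb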

Also, since all the activity happens inside the container, you can save an image of the container for later inspection and set limits on the container to cap how much processing power/RAM/disk the user can consume, for example with:

limit nofile 524 1048
limit nproc 524 1048
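Another option, assuming a Docker version recent enough to support per-container limit flags such as --memory and --ulimit, is to apply the limits directly when starting the container (the values here are just illustrative):

# Hypothetical example: cap memory, CPU shares and per-process limits for the audited container.
docker run -it --rm \
  --memory 256m \
  --cpu-shares 512 \
  --ulimit nofile=524:1048 \
  --ulimit nproc=524:1048 \
  ubuntu:14.04 /bin/bash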

Some problems that might arise:

Home directories for containers: In this case you can map the containers’ home directories to a shared filesystem (see the volume-mount sketch below).

Can’t scp to the container: This is solved by using other means, like S3 storage, to retrieve and store files.

Can’t run tunnels to the container: For this you might need a regular SSH connection and different logging methods.
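For the home directory problem, a minimal sketch of the volume mapping could look like this (the /shared/home path is a placeholder for whatever shared filesystem is mounted on the host):

# Hypothetical: bind-mount the user's home directory from a shared filesystem
# on the host into the container, so files survive between sessions.
docker run -it --rm \
  -v /shared/home/exampleuser:/home/exampleuser \
  -e HOME=/home/exampleuser \
  ubuntu:14.04 /bin/bash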

Let me know if you try it!

Originally published at secureandscalable.wordpress.com on February 18, 2016.

