My experience with a recent Docker forensics case and the importance of logging



On 13th May I got a message on Signal from the co-founder of ‘xyz’. Apparently, their database had been dropped by a hacker. He said the database had been restored and asked me to do a forensic investigation on the compromised host the following day. To begin with, he told me the name of the mobile application. I did not have the patience to wait until 14th May, so I thought: why not try to find the entry points the attacker’s way? I started doing passive information gathering.


My initial guess was SQL injection, so I quickly downloaded the application, decompiled it, and started grepping for public URLs and GET and POST requests. I found some endpoints but did not test them for SQL injection, because this was forensics work. I resolved the public URLs to IPs and immediately searched them on Shodan and Censys to get information on other open ports.
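The recon step looks roughly like this: decode the APK (e.g. with `apktool d app.apk`) and grep the decompiled sources for URLs. A minimal sketch of the grep, run here against a made-up smali snippet instead of a real decompiled app — the directory, file and endpoint are all stand-ins:

```shell
# Stand-in for a decompiled APK directory; the endpoint is fictional
mkdir -p /tmp/app_src
printf 'const-string v0, "https://api.example.com/v1/login"\n' > /tmp/app_src/Api.smali

# Pull every unique http(s) URL out of the decompiled sources
# -r recurse, -h no filenames, -o print only the match, -E extended regex
grep -rhoE 'https?://[A-Za-z0-9./_-]+' /tmp/app_src | sort -u
# prints: https://api.example.com/v1/login
```

The same one-liner works on a real `apktool` output directory; resolving the resulting hostnames gives the IPs to feed into Shodan and Censys.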

It yielded a few open ports – 80, 443 and 22. Nothing interesting here. I then checked the historic port records, which gave me some interesting results.

Ports 3306 and 6379 had been open? Port 3306 is MySQL’s default port, and exposing it on the Internet is not a good idea. Neither is exposing 6379, Redis’ default port.

Looking at the Shodan result for 6379, the service seemed to ask for authentication. That meant the attacker probably did not exploit it, unless the password was brute-forced.

At this point, I went back to the Android app to hunt for credentials. I could not find any, but I enumerated a few services the server might have used. I went to Censys this time to see the historic port-scan results and found an additional port, 5000, which ran the Microsoft Kestrel web server. At this point I thought of two possibilities:

  1. The API server was probably running on port 5000 and was later put behind the reverse proxy. I would have had to download older APKs to find out if the URL pointed to port 5000; I did not do this as I was too lazy.
  2. The second possibility was that the port was exposed on the Internet and later reverted.

Anyway, I now knew that the API server was built on Microsoft Kestrel, a web server for ASP.NET Core, and that the operating system was Ubuntu, as grabbed from Shodan’s passive port-scanning records. So far my understanding of the architecture was:

App → Nginx → API Server → Database

I searched for other sub-domains to find the staging and dev servers, but that did not turn out to be useful. I did not have a clone of the disk, logs or anything else. I had to wait until the following day.

We met the following day and I understood the problems and the server architecture in more detail. They suspected the connections had come through the exposed ports – the same ones I had found in the historic port-scanning results of Shodan and Censys. I SSH’ed into the server and quickly ran docker ps. The containers had been started 20 hours ago. Containers are ephemeral, and the logs were lost: MySQL, the API server and everything else ran in Docker, and there was no way to get those logs back. The logs were also not forwarded to a centralized logging solution. By default, Docker captures a container’s standard output with the json-file logging driver and stores it alongside the container, so the logs are deleted together with the container.

The screenshot here shows “Created 2 days ago” because I took the screenshot while writing this up. The only log I could analyze was that of the reverse proxy, Nginx. I looked into the access logs but did not find anything interesting except some bots continuously brute-forcing an API endpoint. There was no WAF or anything of the sort.

I read the docker-compose file from which those containers were spawned and noticed a weak password used there. I immediately looked into rockyou.txt – the password was there. Before those containers were removed, the site owner had looked into the docker logs and found many failed log-in attempts. This was a confirmation that the attacker probably brute-forced the MySQL password and got in.

There was also a solr:8.2 container, which was vulnerable to unauthenticated remote code execution, and a malware named “Kingsin” had been dropped there. I spent some time trying to figure out how that could have happened: it was inside a Docker container, it was not exposed on the Internet, and I could not find any records of it in Shodan or Censys. There would have to be an SSRF-like vulnerability in the API server for that to happen. I looked into the access logs for possible SSRF exploitation but found none. Then the co-founder said that the port had been exposed on the Internet for around one day – now it made sense how it might have been compromised. I made a rough layout of what the architecture might look like:
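The weak-password check itself is a one-liner: `grep -Fx` looks for the exact password as a full line of the wordlist. A sketch with a stand-in password and a tiny stand-in wordlist rather than the real credentials and rockyou.txt:

```shell
# Tiny stand-in wordlist; on a real engagement this would be rockyou.txt
printf 'password\n123456\nhunter2\nqwerty\n' > /tmp/wordlist.txt

# -F: fixed string (no regex), -x: match the whole line, -q: quiet, exit code only
if grep -Fxq 'hunter2' /tmp/wordlist.txt; then
    echo "password found in wordlist"
fi
# prints: password found in wordlist
```

If the password appears in a common wordlist, any off-the-shelf brute-forcer pointed at an exposed 3306 would find it quickly.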

The assumptions here are :

  1. The attacker found the MySQL port exposed and brute-forced the password to get in. Once in, the attacker dropped the database.
  2. The attacker found the Solr port exposed and exploited it to drop the Kingsin malware.
  3. The attacker could not get into another MySQL instance that was not publicly exposed.
  4. The Redis server was also not attacked, because the password used there was quite strong and no public exploits could be located.

I checked for various indicators of compromise to see if the attacker had escaped from a Docker container to the host. I listed all the files modified in the past 15 days with

find / -type f -mtime -15 -ls 

I could not find any suspicious files. While checking the running processes, however, I found an interesting one: headless Chrome.

Why would there be a headless Chrome process running? I quickly ran

ps -ef --forest

This turned out to be running in a container, and upon further verification I found that it was used for PDF exporting and certificate generation – so it was legitimate. Furthermore, I checked the entries in /etc/passwd and found one user which looked interesting.


Upon further verification, it turned out to be a user created by DigitalOcean. Besides, its shell was /bin/false, and there were no suspicious entries in the authorized_keys and authorized_keys2 files. From the offensive perspective, I could not find a way to break out of a container into the host; nevertheless, I checked whether the attacker had. During the investigation, I also came across some ways to escalate low-level privileges to root, but no signs of escalation were found.
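A quick way to triage /etc/passwd is to list only the accounts that could actually log in: non-system UIDs with a usable shell. An account like the DigitalOcean one above drops out because of its /bin/false shell. A sketch against a stand-in passwd file (the user names are invented); point the awk at the real /etc/passwd on the host:

```shell
# Stand-in /etc/passwd for demonstration
cat > /tmp/passwd.sample <<'EOF'
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
deploy:x:1001:1001::/home/deploy:/bin/bash
do-agent:x:1002:1002::/home/do-agent:/bin/false
EOF

# Non-system accounts (UID >= 1000) whose shell actually allows a login
awk -F: '$3 >= 1000 && $7 !~ /(false|nologin)$/ {print $1 " -> " $7}' /tmp/passwd.sample
# prints: deploy -> /bin/bash
```

Any account that survives this filter deserves a look at its home directory, authorized_keys and last-login records.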

The process should have been better: taking a snapshot for forensics and doing the investigation on that. I took the other route because there were some complications initially, and most of the evidence was lost because the Docker containers had been removed and instantiated again. If you are doing Docker forensics, I highly recommend making images of the containers before you proceed with the investigation. The following command creates a new image out of a container:

docker commit -m "New Image for forensics" container_id new_image_name

You can use docker diff container_id to see which files have been added or changed inside the container and check whether any interesting files were dropped or modified.

root@host:~# docker diff container_id
C /root
A /root/.local
A /root/.local/share
A /root/.local/share/applications
A /root/.local/share/applications/mimeapps.list
C /gotenberg
A /gotenberg/tmp

Tools like docker-explorer by Google can then be used once you have the snapshot. I looked into bash_history, the apt logs, /var/log/auth.log, and various other potential sources in there. Nothing stood out. The takeaway here: always enable logging. It makes investigators’ lives so much easier, and it is also a win for debugging and troubleshooting purposes.
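As a concrete example of that takeaway, the logging driver can be set per service in the docker-compose file so that logs survive container removal. The service name, image and syslog address below are placeholders for whatever collector you actually run:

```yaml
services:
  api:
    image: myapi:latest            # placeholder image name
    logging:
      driver: syslog
      options:
        syslog-address: "udp://logs.example.internal:514"
```

Even keeping the default json-file driver with `max-size`/`max-file` rotation options, plus a log shipper on the host, would have preserved the evidence in this case.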


My recommendations were:

  1. A proper security assessment of the mobile applications, API endpoints, and associated network infrastructure. This should be the first phase.
  2. An architectural review of the applications.
  3. Centralized logging for the containers and applications. This was lacking, and upon deletion of the old containers all the evidence was lost. It is crucial to ship container logs somewhere outside the containers.
  4. Ports that don’t have to be exposed on the Internet should be closed. For example, there is no point in exposing 3306, 6379, or other applications sitting behind the reverse proxy. Firewall rules can be added from the DigitalOcean console.
  5. Passwords should not be kept in plain text in configuration files. We can use vaults for secrets and password management.
  6. The latest secure images should be pulled from the registry.
  7. Minimal images can be used, which decreases the ease of exploitation.
  8. Harden the underlying operating system. If the same OS is used everywhere, the hardened OS can be used as a base for the other servers.
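For point 4, besides cloud firewall rules, the exposure can also be fixed in the compose file itself: binding a published port to 127.0.0.1 keeps it reachable from the host only. A sketch (service and image names assumed):

```yaml
services:
  db:
    image: mysql:8.0
    ports:
      - "127.0.0.1:3306:3306"    # host-only; not reachable from the Internet
```

Better still, drop the `ports:` mapping entirely and let the other containers reach the database over the compose network, which never publishes the port on the host at all.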

Coded Brain

Hi, I am an information security enthusiast from Nepal.
