
Developer Notes

Docker Notes

Oracle Container

See: Oracle XE Container Docs

The Oracle XE Docker Container has a few important considerations:

  • It's over a 1.5 GB download
  • It may take a minute to run "db setup" the first time
  • To avoid the "db setup" time cost on every run, you can skip running the container fresh each time like most containers and instead cache the built database files on a mount (see the sketch below), though this makes things more complicated. Treat it as a last resort: if you can, make your Oracle container ephemeral and re-build it each time like a Phoenix. It's much more reproducible and far fewer headaches - just a little longer boot (or get a faster computer). This gvenzl container is way smaller and faster than the old "official" Oracle container.
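
For example, a minimal sketch of both approaches, assuming the gvenzl/oracle-xe image and its /opt/oracle/oradata data directory (password and container name are placeholders):

# Ephemeral (preferred): database files are rebuilt on every fresh run
docker run -d --name oracle -p 1521:1521 -e ORACLE_PASSWORD=changeit gvenzl/oracle-xe

# Cached (last resort): persist the built database files on a host mount
# so the "db setup" cost is only paid the first time
docker run -d --name oracle -p 1521:1521 -e ORACLE_PASSWORD=changeit \
  -v "$(pwd)/oradata:/opt/oracle/oradata" gvenzl/oracle-xe
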
  1. Connect to container:
docker exec -it --user oracle oracle bash
  2. Access SQL CLI:
sqlplus / as sysdba
  3. Execute script:
@/container-entrypoint-initdb.d/01_users.sql

NOTE: The Docker container uses the Oracle pluggable database ("container") XEPDB1, so you'll likely want to start with:

alter session set container = XEPDB1;

Note: On Linux, the "db setup" step creates the oradata directory at the volume mount point with root permissions, and the container later attempts to write to that directory as a different user (and fails). This isn't an issue on Windows (unless you're launching from within WSL2). The workaround for now is to create the oradata directory in advance with permissive permissions: sudo chmod 777 oradata. The owner must also be "oracle:oinstall", which corresponds to uid/gid 54321:54321: chown -R 54321:54321 oradata. The errors you get if you don't do this are not intuitive. This appears to be a hotly debated topic in Docker, with some complex workarounds like custom entrypoint scripts / base containers that accept UID arguments so the container runs with the same UID as the host user.
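
In short, the workaround boils down to pre-creating the directory before the first run (the directory name should match whatever host path you bind mount):

mkdir -p oradata
sudo chmod 777 oradata
sudo chown -R 54321:54321 oradata   # oracle:oinstall inside the container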

Note: The "IF NOT EXISTS" clause doesn't work in Oracle, so if you need to re-run a script that has already been run at startup/setup try to ignore the already exists errors.

Note: Oracle has several official images, but they either require a login to pull from DockerHub or are older versions with painful management on Oracle's private registry: https://container-registry.oracle.com/

Keycloak Container

It turns out that the export/import feature of Keycloak changes often and is not as useful as you'd expect: it is incredibly verbose, hard to use, and in some cases only does partial exports. The best approach is to use the admin CLI, as that's easiest to maintain and tweak - tweaking and understanding the huge export JSON is painful. See the Keycloak CLI bash scripts and Keycloak init scripts instead.
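
For reference, a minimal sketch of the admin CLI approach (paths match the /opt/jboss/keycloak layout used elsewhere on this page; realm, user, and password values are placeholders):

# Authenticate the CLI against the running server
/opt/jboss/keycloak/bin/kcadm.sh config credentials \
  --server http://localhost:8080/auth --realm master \
  --user admin --password admin

# Create a realm, create a test user, then set the user's password
/opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=test-realm -s enabled=true
/opt/jboss/keycloak/bin/kcadm.sh create users -r test-realm -s username=tester -s enabled=true
/opt/jboss/keycloak/bin/kcadm.sh set-password -r test-realm --username tester --new-password changeit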

Using the import/export

Again, don't do this, but here are my old notes:

We want to easily import/export a test realm of "dummy" data. The Docker container will handle the import for us if you provide a realm JSON file via the KEYCLOAK_IMPORT environment variable. However, the export feature in the Keycloak admin GUI is basically worthless because it does a partial export, excluding all users and all passwords and scrambling confidential client credentials. Instead, you can run a separate instance of Keycloak on a different port that will export all of the real data - they call this a "migration". So once you've configured the test realm like you want, do the following to export:

  1. Get a shell on the container
docker exec -it keycloak bash
  2. Run a second instance of Keycloak just to do the export (replace "test-realm" with your realm name if different)
/opt/jboss/keycloak/bin/standalone.sh \
-Djboss.socket.binding.port-offset=100 \
-Dkeycloak.migration.action=export \
-Dkeycloak.migration.provider=singleFile \
-Dkeycloak.migration.realmName=test-realm \
-Dkeycloak.migration.usersExportStrategy=REALM_FILE \
-Dkeycloak.migration.file=/tmp/realm.json
  3. Wait for the export to run, then kill Keycloak once it has started up and is past the export step
...
<ctrl-C>
  4. From the host, copy the file
docker cp keycloak:/tmp/realm.json .
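
To load the exported realm on the next startup, pass it back in via the KEYCLOAK_IMPORT variable mentioned above. A minimal sketch for the legacy jboss/keycloak image (container name, admin credentials, and host path are placeholders):

docker run -d --name keycloak -p 8080:8080 \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin \
  -e KEYCLOAK_IMPORT=/tmp/realm.json \
  -v "$(pwd)/realm.json:/tmp/realm.json" \
  jboss/keycloak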

Wildfly Container

We need to add both configuration and a deployment artifact to the Wildfly container. We can:

  1. Build a new container based on a published Wildfly container, with copies of our config and deployment artifact already included
  2. Mount the configuration and deployment artifact at runtime (bind volume)
  3. Run Wildfly directly on localhost and only run dependent services in Docker <-- This is generally the best option - see the journey of issues below

Building your own container means any change requires re-building the container, whereas mounting configuration directories/files is more dynamic. For development, having to rebuild the entire container each time is a no-go, so a volume bind mount seems like the way to go. However, bind mounts often don't work (see below). Running Wildfly directly on the host is not great either, because it means setting up build tools on the host instead of in an easily reproducible container. On the plus side, running directly on the host means you can leverage IDE developer comforts like hot-deploy.

Bind mounts sometimes do not work

I've run into this issue: 9P client truncates timestamps. When sharing a filesystem from a Linux container to a Windows host, the 9P distributed file system protocol is used, and it sometimes causes an infinite redeploy loop. A workaround is to work directly from within WSL. This requires an IDE that can run on Windows while reading a WSL drive (Visual Studio Code and IntelliJ, for example). This workaround is also a little buggy with IntelliJ (it's slow, and IntelliJ sometimes forgets where git is installed).

This situation is further aggravated by an issue where redeploys on Wildfly with OIDC require a restart, meaning even if you don't hit the redeploy loop, you'll likely hit this other issue where you must restart Wildfly for it to "remember" the OIDC config. In a Docker container that runs Wildfly as the primary PID, it's impossible to restart Wildfly without re-architecting the container to NOT run Wildfly as the root PID and instead run something else, like "sleep infinity" (see the sketch below). Restarting the whole container is another option.
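
A rough sketch of that re-architecture, assuming the jboss/wildfly image and its default /opt/jboss/wildfly paths (adjust to whatever Wildfly image the project actually uses):

# Run something harmless as PID 1 so Wildfly is not the root process
docker run -d --name wildfly --entrypoint sleep jboss/wildfly infinity

# Start Wildfly inside the container
docker exec -d wildfly /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0

# Later, stop just Wildfly (the container stays up) and start it again
docker exec wildfly /opt/jboss/wildfly/bin/jboss-cli.sh --connect command=:shutdown
docker exec -d wildfly /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0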

standalone.xml File

It would be nice to simply volume mount just the standalone.xml file itself; Docker allows single-file mounts. However, Wildfly will occasionally overwrite standalone.xml (via standalone.xml.tmp) at runtime. This is bad because deleting (moving and replacing) the root of a Docker volume mount causes Docker to get angry (sometimes silently, which is worse). This results in:

java.nio.file.FileSystemException: /opt/jboss/wildfly/standalone/configuration/standalone.xml: Device or resource busy

The workaround is to mount the parent configuration directory. Not great, because it means you now must provide ALL of the configuration files. We have a docker/wildfly directory that holds the static "template" configuration to use. However, instead of bind mounting that directory, we copy it (via a Gradle task) to a run directory that is in .gitignore. We do this because we want to separate static versioned files from files that are touched at runtime. If you don't, you'll constantly have to roll back changes to standalone.xml such as deployment entries, and if the host OS differs from the container's, all the line endings will have changed too.

See: Wildfly Docker standalone.xml mount
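
A sketch of the directory-level mount described above (host paths and image name are assumptions, and the copy step is normally done by the Gradle task):

# Copy the static template into a runtime directory that is safe to dirty
mkdir -p docker/wildfly/run
cp -r docker/wildfly/configuration docker/wildfly/run/

# Bind mount the whole configuration directory, not standalone.xml itself
docker run -d --name wildfly \
  -v "$(pwd)/docker/wildfly/run/configuration:/opt/jboss/wildfly/standalone/configuration" \
  jboss/wildfly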

Gradle War File Updating

Our Gradle build creates the war file in the build/libs directory. We could simply mount build/libs as the bind volume, but this is problematic if you run the Gradle "clean" task, as it deletes the build directory entirely - which confuses the heck out of Docker (you've deleted the root of the volume mount). To avoid this, you can modify your build.gradle to include a war.doLast block that copies the war to a directory that isn't deleted on clean.
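
In shell terms, the copy that the war.doLast block performs amounts to roughly the following (paths are placeholders following the run directory convention above; the real copy lives in build.gradle):

./gradlew war
mkdir -p docker/wildfly/run/deployments
cp build/libs/*.war docker/wildfly/run/deployments/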

Restarting Wildfly Container in Compose

Often you need to restart Wildfly without restarting the other containers (the Oracle database, Keycloak). "docker compose down" doesn't work as you'd expect (it removes ALL containers). Use the following instead to restart only the wildfly container:

docker-compose rm -svf wildfly && docker-compose up -d wildfly

Note: I'm currently having to restart constantly due to Wildfly Issue WFLY-16000

Trace Logging

I always forget how to set the log level to TRACE, so a reminder:

/opt/jboss/wildfly/bin/jboss-cli.sh --connect
# First, allow the console handler to log TRACE
/subsystem=logging/console-handler=CONSOLE:write-attribute(name=level,value=TRACE)

# Next, add a logger for the specific package (assumes the logger doesn't already exist)
/subsystem=logging/logger=org.wildfly.security.http.oidc:add(level=TRACE)

# If the logger already exists, then:
/subsystem=logging/logger=org.wildfly.security.http.oidc:write-attribute(name=level,value=TRACE)

Puppet Show Container

When running an app locally via IntelliJ or another IDE, be aware that by default Wildfly binds only to localhost. This is problematic if you are trying to test Puppet Show HTML-to-image/PDF rendering, because localhost is non-unique: from inside the puppet-show container it refers to the container itself, not the host. You can tweak the Wildfly start command to include -b 0.0.0.0 to bind to all interfaces, then set the Smoothness BACKEND_SERVER_URL param to the host IP, a unique hostname for the host, or the special name host.docker.internal. The config would look like:

PUPPET_SHOW_SERVER_URL=http://localhost:3000
FRONTEND_SERVER_URL=https://localhost:8443
BACKEND_SERVER_URL=https://host.docker.internal:8443
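
For reference, the bind tweak itself is just an extra argument to the Wildfly start command (default install path assumed; when launching from an IDE, add -b 0.0.0.0 to the server run configuration instead):

/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0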

Note: The scenario discussed here assumes you're running the "other" containers with docker compose -f deps.yml up