Password authentication failed when using non-default POSTGRES_PASSWORD #253

@cruunnerr

Describe the bug

When attempting the initial setup of Open Archiver using the official docker-compose.yml, the database migration (db:migrate) consistently fails with the error FATAL: password authentication failed for user "admin", unless the POSTGRES_PASSWORD environment variable is set to the default value password.

If the POSTGRES_PASSWORD is set to anything else (e.g., supersecret123), the application fails to connect to the PostgreSQL service, even though the configuration appears correct.

To Reproduce
Use the official docker-compose.yml (which specifies postgres:17-alpine).

Create a .env file with the following critical lines:

# Fails with any value other than 'password'
POSTGRES_USER=admin
POSTGRES_PASSWORD=supersecret123  # <-- Any non-default password
POSTGRES_DB=open_archive
DATABASE_URL="postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}"
# ... ensure all required ENCRYPTION_KEYS are present ...
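One pitfall worth ruling out when composing DATABASE_URL by hand (likely not the cause here, since supersecret123 contains no reserved characters): special characters in the password must be percent-encoded, or the URL parser will misread the credentials and authentication fails with the same symptom. A minimal sketch, using a hypothetical password:

```python
from urllib.parse import quote

# Hypothetical credentials; this password contains URL-reserved characters.
user = "admin"
password = "super:secret/123"
db = "open_archive"

# Percent-encode user and password before splicing them into the URL;
# safe='' ensures '/' and ':' are escaped as well.
database_url = (
    f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}"
    f"@postgres:5432/{db}"
)
print(database_url)
# postgresql://admin:super%3Asecret%2F123@postgres:5432/open_archive
```

If the raw password works when encoded this way, the bug is in URL construction rather than in Postgres itself.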

Expected behavior
The open-archiver container successfully connects to the postgres service with any password, not only the default one.

System:

  • Open Archiver Version:
    latest -> 0.4.0

Relevant logs:

open-archiver  | Scope: all 5 workspace projects
open-archiver  | Lockfile is up to date, resolution step is skipped
open-archiver  | Progress: resolved 1, reused 0, downloaded 0, added 0
open-archiver  | Packages: +630
open-archiver  | ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
open-archiver  | Progress: resolved 630, reused 0, downloaded 14, added 0
open-archiver  | Progress: resolved 630, reused 0, downloaded 140, added 137
open-archiver  | Progress: resolved 630, reused 0, downloaded 246, added 243
open-archiver  | Progress: resolved 630, reused 0, downloaded 516, added 514
open-archiver  | Progress: resolved 630, reused 0, downloaded 624, added 624
open-archiver  | Progress: resolved 630, reused 0, downloaded 629, added 629
open-archiver  | Progress: resolved 630, reused 0, downloaded 630, added 630, done
open-archiver  | .../[email protected]/node_modules/esbuild postinstall$ node install.js
open-archiver  | .../[email protected]/node_modules/esbuild postinstall$ node install.js
open-archiver  | .../[email protected]/node_modules/esbuild postinstall: Done
open-archiver  | .../[email protected]/node_modules/esbuild postinstall: Done
open-archiver  | 
open-archiver  | dependencies:
open-archiver  | + concurrently 9.2.0
open-archiver  | + dotenv-cli 8.0.0
open-archiver  | 
open-archiver  | devDependencies: skipped
open-archiver  | 
open-archiver  | ╭ Warning ─────────────────────────────────────────────────────────────────────╮
open-archiver  | │                                                                              │
open-archiver  | │   Ignored build scripts: msgpackr-extract, sqlite3.                          │
open-archiver  | │   Run "pnpm approve-builds" to pick which dependencies should be allowed     │
open-archiver  | │   to run scripts.                                                            │
open-archiver  | │                                                                              │
open-archiver  | ╰──────────────────────────────────────────────────────────────────────────────╯
open-archiver  | 
open-archiver  | packages/frontend prepare$ svelte-kit sync || echo ''
open-archiver  | packages/frontend prepare: Missing Svelte config file in /app/packages/frontend — skipping
open-archiver  | packages/frontend prepare: Done
open-archiver  | Done in 8.7s using pnpm v10.13.1
open-archiver  | 
open-archiver  | > [email protected] db:migrate /app
open-archiver  | > dotenv -- pnpm --filter @open-archiver/backend db:migrate
open-archiver  | 
open-archiver  | 
open-archiver  | > @open-archiver/[email protected] db:migrate /app/packages/backend
open-archiver  | > node dist/database/migrate.js
open-archiver  | 
open-archiver  | [[email protected]] injecting env (0) from .env (tip: ⚙️  suppress all logs with { quiet: true })
open-archiver  | Running migrations...
postgres       | 2025-12-14 04:36:10.294 UTC [33] FATAL:  password authentication failed for user "admin"
postgres       | 2025-12-14 04:36:10.294 UTC [33] DETAIL:  Connection matched file "/var/lib/postgresql/data/pg_hba.conf" line 128: "host all all all scram-sha-256"
open-archiver  | Migration failed! DrizzleQueryError: Failed query: CREATE SCHEMA IF NOT EXISTS "drizzle"
open-archiver  | params: 
open-archiver  |     at PostgresJsPreparedQuery.queryWithCache (/app/node_modules/.pnpm/[email protected][email protected][email protected][email protected]/node_modules/drizzle-orm/pg-core/session.cjs:67:15)
open-archiver  |     at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
open-archiver  |     at async PgDialect.migrate (/app/node_modules/.pnpm/[email protected][email protected][email protected][email protected]/node_modules/drizzle-orm/pg-core/dialect.cjs:56:5)
open-archiver  |     at async migrate (/app/node_modules/.pnpm/[email protected][email protected][email protected][email protected]/node_modules/drizzle-orm/postgres-js/migrator.cjs:27:3)
open-archiver  |     at async runMigrate (/app/packages/backend/dist/database/migrate.js:20:5) {
open-archiver  |   query: 'CREATE SCHEMA IF NOT EXISTS "drizzle"',
open-archiver  |   params: [],
open-archiver  |   cause: PostgresError: password authentication failed for user "admin"
open-archiver  |       at ErrorResponse (/app/node_modules/.pnpm/[email protected]/node_modules/postgres/cjs/src/connection.js:794:26)
open-archiver  |       at handle (/app/node_modules/.pnpm/[email protected]/node_modules/postgres/cjs/src/connection.js:480:6)
open-archiver  |       at Socket.data (/app/node_modules/.pnpm/[email protected]/node_modules/postgres/cjs/src/connection.js:315:9)
open-archiver  |       at Socket.emit (node:events:519:28)
open-archiver  |       at addChunk (node:internal/streams/readable:561:12)
open-archiver  |       at readableAddChunkPushByteMode (node:internal/streams/readable:512:3)
open-archiver  |       at Readable.push (node:internal/streams/readable:392:5)
open-archiver  |       at TCP.onStreamRead (node:internal/stream_base_commons:189:23) {
open-archiver  |     severity_local: 'FATAL',
open-archiver  |     severity: 'FATAL',
open-archiver  |     code: '28P01',
open-archiver  |     file: 'auth.c',
open-archiver  |     line: '329',
open-archiver  |     routine: 'auth_failed'
open-archiver  |   }
open-archiver  | }
open-archiver  | /app/packages/backend:
open-archiver  |  ERR_PNPM_RECURSIVE_RUN_FIRST_FAIL  @open-archiver/[email protected] db:migrate: `node dist/database/migrate.js`
open-archiver  | Exit status 1
open-archiver  |  ELIFECYCLE  Command failed with exit code 1.
open-archiver exited with code 0
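A likely explanation worth ruling out (an assumption based on the postgres Docker image's documented behavior, not confirmed for this issue): POSTGRES_USER/POSTGRES_PASSWORD are only applied when the data directory is initialized for the first time. If the pgdata volume was created during an earlier run with the default password, changing the variable afterwards has no effect, and the server keeps rejecting the new password with exactly this 28P01 error. A sketch of the reset, which destroys existing database data:

```shell
# ASSUMPTION: the stack was previously started with POSTGRES_PASSWORD=password,
# so the pgdata volume still holds credentials from that first initialization.
# The postgres image only reads POSTGRES_* variables when the data directory
# is empty; on later starts they are silently ignored.

docker compose down

# WARNING: this deletes all data stored in Postgres. The volume name is
# usually prefixed with the compose project name, e.g. <project>_pgdata.
docker volume rm $(docker volume ls -q | grep pgdata)

# Re-create the stack; Postgres re-initializes with the new password.
docker compose up -d
```

If migrations succeed after a fresh volume, the problem is stale initialization state rather than the compose file or .env.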

Additional context
The following compose.yaml and .env were used to reproduce the error:

compose.yaml:

services:
  open-archiver:
    image: logiclabshq/open-archiver:latest
    container_name: open-archiver
    restart: unless-stopped
    ports:
      - 3000:3000 # Frontend
    env_file:
      - .env
    volumes:
      - ${STORAGE_LOCAL_ROOT_PATH}:${STORAGE_LOCAL_ROOT_PATH}
    depends_on:
      - postgres
      - valkey
      - meilisearch
    networks:
      - open-archiver-net
  postgres:
    image: postgres:17-alpine
    container_name: postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - open-archiver-net
  valkey:
    image: valkey/valkey:8-alpine
    container_name: valkey
    restart: unless-stopped
    command: valkey-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - valkeydata:/data
    networks:
      - open-archiver-net
  meilisearch:
    image: getmeili/meilisearch:v1.15
    container_name: meilisearch
    restart: unless-stopped
    environment:
      MEILI_MASTER_KEY: ${MEILI_MASTER_KEY}
    volumes:
      - meilidata:/meili_data
    networks:
      - open-archiver-net
  tika:
    image: apache/tika:3.2.2.0-full
    container_name: tika
    restart: always
    networks:
      - open-archiver-net
volumes:
  pgdata:
    driver: local
  valkeydata:
    driver: local
  meilidata:
    driver: local
networks:
  open-archiver-net:
    driver: bridge

.env:

# --- Application Settings ---
# Set to 'production' for production environments
NODE_ENV=development
PORT_BACKEND=4000
PORT_FRONTEND=3000
# The public-facing URL of your application. This is used by the backend to configure CORS.
APP_URL=http://localhost:3000
# This is used by the SvelteKit Node adapter to determine the server's public-facing URL.
# It should always be set to the value of APP_URL.
ORIGIN=$APP_URL
# The frequency of continuous email syncing. Default is every minute, but you can change it to another value based on your needs.
SYNC_FREQUENCY='* * * * *'
# Set to 'true' to include Junk and Trash folders in the email archive. Defaults to false.
ALL_INCLUSIVE_ARCHIVE=false

# --- Docker Compose Service Configuration ---
# These variables are used by docker-compose.yml to configure the services. Leave them unchanged if you use Docker services for Postgresql, Valkey (Redis) and Meilisearch. If you decide to use your own instances of these services, you can substitute them with your own connection credentials.

# PostgreSQL
POSTGRES_DB=open_archive
POSTGRES_USER=admin
POSTGRES_PASSWORD=password # <-- this works
#POSTGRES_PASSWORD=passwordo # <-- this doesn't work
DATABASE_URL="postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}"

# Meilisearch
MEILI_MASTER_KEY=masterkeypass
MEILI_HOST=http://meilisearch:7700
# The number of emails to batch together for indexing. Defaults to 500.
MEILI_INDEXING_BATCH=500


# Redis (We use Valkey, which is Redis-compatible and open source)
REDIS_HOST=valkey
REDIS_PORT=6379
REDIS_PASSWORD=securepass
# If you run Valkey service from Docker Compose, set the REDIS_TLS_ENABLED variable to false.
REDIS_TLS_ENABLED=false


# --- Storage Settings ---
# Choose your storage backend. Valid options are 'local' or 's3'.
STORAGE_TYPE=local
# The maximum request body size to accept in bytes including while streaming. The body size can also be specified with a unit suffix for kilobytes (K), megabytes (M), or gigabytes (G). For example, 512K or 1M. Defaults to 512kb. Or the value of Infinity if you don't want any upload limit.
BODY_SIZE_LIMIT=100M

# --- Local Storage Settings ---
# The path inside the container where files will be stored.
# This is mapped to a Docker volume for persistence.
# This is not an optional variable, it is where the Open Archiver service stores application data. Set this even if you are using S3 storage.
# Make sure the user that runs the Open Archiver service has read and write access to this path.
# Important: It is recommended to create this path manually before installation, otherwise you may face permission and ownership problems.
STORAGE_LOCAL_ROOT_PATH=/mnt/container-data/open-archiver

# --- S3-Compatible Storage Settings ---
# These are only used if STORAGE_TYPE is 's3'.
STORAGE_S3_ENDPOINT=
STORAGE_S3_BUCKET=
STORAGE_S3_ACCESS_KEY_ID=
STORAGE_S3_SECRET_ACCESS_KEY=
STORAGE_S3_REGION=
# Set to 'true' for MinIO and other non-AWS S3 services
STORAGE_S3_FORCE_PATH_STYLE=false

# --- Storage Encryption ---
# IMPORTANT: Generate a secure, random 32-byte hex string for this key.
# You can use `openssl rand -hex 32` to generate a key.
# This key is used for AES-256 encryption of files at rest.
# This is an optional variable, if not set, files will not be encrypted.
STORAGE_ENCRYPTION_KEY=secretkey123456

# --- Security & Authentication ---

# Enable or disable deletion of emails and ingestion sources. Defaults to false.
ENABLE_DELETION=false

# Rate Limiting
# The window in milliseconds for which API requests are checked. Defaults to 60000 (1 minute).
RATE_LIMIT_WINDOW_MS=60000
# The maximum number of API requests allowed from an IP within the window. Defaults to 100.
RATE_LIMIT_MAX_REQUESTS=100



# JWT
# IMPORTANT: Change this to a long, random, and secret string in your .env file
JWT_SECRET=superlongsecret
JWT_EXPIRES_IN="7d"


# Master Encryption Key for sensitive data (Such as Ingestion source credentials and passwords)
# IMPORTANT: Generate a secure, random 32-byte hex string for this
# You can use `openssl rand -hex 32` to generate a key.
ENCRYPTION_KEY=secretkey123

# Apache Tika Integration
# ONLY active if TIKA_URL is set
TIKA_URL=http://tika:9998

Metadata

Labels: bug (Something isn't working)