
Conversation


@VietND96 VietND96 commented Dec 10, 2025

User description

Thanks for contributing to the Docker-Selenium project!
A well-described PR helps maintainers review and merge it quickly.

Before submitting your PR, please check our contributing guidelines, which apply to this repository.
Avoid large PRs; help reviewers by keeping them as simple and short as possible.

Description

  • Stop recording in async mode (graceful FFmpeg stop to preserve file integrity), so recording for a new session can start immediately.
  • Upload via Rclone is also async, waiting until the recorder has finished writing the file before uploading it.
  • Graceful shutdown stops the last recording and uploads it, ensuring pending tasks complete within the grace period (after SIGTERM and before the container engine sends SIGKILL). A simplified sketch of this ordering is shown below.
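
A minimal, self-contained sketch of that shutdown ordering (illustrative only; the function names, timings, and the toy background task are assumptions, not the actual Video/video.sh or upload.sh code):

#!/usr/bin/env bash
# Sketch: trap SIGTERM and finish pending background work before exiting.

pending_pids=()

start_background_task() {
  # Stands in for the async FFmpeg finalization or a pending Rclone upload
  ( sleep 2; echo "background task finished" ) &
  pending_pids+=($!)
}

graceful_exit() {
  echo "SIGTERM received, waiting for pending tasks within the grace period"
  wait "${pending_pids[@]}" 2>/dev/null
  exit 0
}
trap graceful_exit SIGTERM SIGINT

start_background_task
# Keep the main process alive until the container engine sends SIGTERM
sleep infinity &
wait $!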

Motivation and Context

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist

  • I have read the contributing document.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have added tests to cover my changes.
  • All new and existing tests passed.

PR Type

Enhancement


Description

  • Add retry logic with configurable attempts and delays for upload failures

  • Implement async graceful shutdown with background process tracking

  • Add file integrity verification and checksum validation for uploads

  • Enhance logging with session IDs and detailed upload statistics

  • Implement graceful FFmpeg stop with metadata finalization grace period
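
To illustrate the session-scoped logging and statistics summary points above, a rough sketch (the helper name, log format, and example values are assumptions, not the scripts' actual output):

# Hypothetical logging helper with session context
process_name="video.recorder"
log() {
  local session_id="${1:-unknown}"; shift
  echo "$(date -u +"%Y-%m-%dT%H:%M:%SZ") [${process_name}] - [session: ${session_id}] $*"
}

# Example usage and the kind of statistics summary logged at shutdown
log "1a2b3c" "Stopping FFmpeg gracefully (pid 123)"
log "1a2b3c" "Upload attempt 1/3 succeeded"
log "summary" "Upload statistics: success=5 failed=0"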


Diagram Walkthrough

flowchart LR
  A["Video Recording"] -->|Session Change| B["Graceful FFmpeg Stop"]
  B -->|Async Background| C["File Finalization"]
  C -->|After Grace Period| D["Upload with Retry"]
  D -->|Verify File Ready| E["Calculate Checksum"]
  E -->|Retry Loop| F["Rclone Upload"]
  F -->|Success/Failure| G["Update Statistics"]
  H["Graceful Shutdown Signal"] -->|Wait Active| I["Background Uploads"]
  I -->|Wait Finalization| J["Log Statistics"]
  J -->|Cleanup| K["Exit"]

File Walkthrough

Relevant files
Enhancement
upload.sh
Add retry logic, file verification, and graceful shutdown

Video/upload.sh

  • Add new configuration variables for retry attempts, delays, file
    readiness wait, and checksum verification
  • Implement verify_file_ready() function to check file existence,
    readiness, and stability before upload
  • Implement calculate_checksum() function to compute MD5 checksums for
    file integrity verification
  • Implement rclone_upload_with_retry() function with configurable retry
    logic, checksum validation, and statistics tracking
  • Add wait_for_active_uploads() function to wait for background upload
    processes during graceful shutdown
  • Enhance graceful_exit() to prevent duplicate execution, wait for
    active uploads, and log upload statistics
  • Add graceful shutdown checks in main loop to exit cleanly when
    shutdown is initiated
+188/-2 
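
A hedged sketch of the retry flow described in this entry (the readiness check, checksum step, and retry loop mirror the bullets above; variable names, defaults, and the plain "rclone copy" call are simplifications/assumptions, not the exact Video/upload.sh code):

#!/usr/bin/env bash

UPLOAD_RETRY_MAX_ATTEMPTS=${UPLOAD_RETRY_MAX_ATTEMPTS:-3}
UPLOAD_RETRY_DELAY=${UPLOAD_RETRY_DELAY:-2}

verify_file_ready() {
  # File exists, is non-empty, and its size is stable across two checks (recorder finished writing)
  local file="$1" size1 size2
  [ -s "${file}" ] || return 1
  size1=$(stat -c%s "${file}")
  sleep 1
  size2=$(stat -c%s "${file}")
  [ "${size1}" -eq "${size2}" ]
}

upload_with_retry() {
  local source="$1" target="$2" attempt=1 checksum
  verify_file_ready "${source}" || return 1
  checksum=$(md5sum "${source}" | awk '{print $1}')   # recorded for integrity verification
  while [ "${attempt}" -le "${UPLOAD_RETRY_MAX_ATTEMPTS}" ]; do
    if rclone copy "${source}" "${target}"; then
      echo "Upload succeeded (md5 ${checksum}): ${source} -> ${target}"
      return 0
    fi
    echo "Upload attempt ${attempt}/${UPLOAD_RETRY_MAX_ATTEMPTS} failed, retrying in ${UPLOAD_RETRY_DELAY}s"
    sleep "${UPLOAD_RETRY_DELAY}"
    attempt=$((attempt + 1))
  done
  return 1
}
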
video.sh
Implement async graceful shutdown with background finalization

Video/video.sh

  • Add graceful_stop_delay configuration for FFmpeg metadata finalization
    grace period
  • Add default BASIC_AUTH header initialization before conditional
    override
  • Implement stop_ffmpeg_graceful_async() function for async FFmpeg
    termination with background finalization
  • Implement wait_for_pipe_to_drain() function to wait for upload queue
    processing before shutdown
  • Implement wait_for_background_finalization() function to track and
    wait for background finalization processes
  • Refactor stop_recording() to support both async and sync modes with
    proper session tracking
  • Add background finalization process tracking array and enhance logging
    with session IDs
  • Update graceful_exit() to wait for background finalization and pipe
    draining
  • Improve log messages with session context and FFmpeg process IDs
+143/-15
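
A minimal sketch of the async stop pattern summarized in this entry (the SIGINT-based graceful stop, the grace period, and the PID-tracking array follow the bullets above; exact names, timings, and the hand-off to the uploader are assumptions):

#!/usr/bin/env bash

declare -a finalization_pids=()
GRACEFUL_STOP_DELAY=${GRACEFUL_STOP_DELAY:-5}

stop_ffmpeg_graceful_async() {
  local ffmpeg_pid="$1" video_file="$2"
  # SIGINT asks FFmpeg to stop encoding and finish writing the container metadata/trailer
  kill -INT "${ffmpeg_pid}" 2>/dev/null || true
  (
    # Wait for FFmpeg to exit, then allow a short grace period for file finalization
    while kill -0 "${ffmpeg_pid}" 2>/dev/null; do sleep 0.5; done
    sleep "${GRACEFUL_STOP_DELAY}"
    echo "Finalized: ${video_file}"   # e.g. hand the file name to the upload pipe here
  ) &
  finalization_pids+=($!)
  # Returns immediately, so recording for the next session can start right away
}

wait_for_background_finalization() {
  local pid
  for pid in "${finalization_pids[@]}"; do
    wait "${pid}" 2>/dev/null || true
  done
}
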
Configuration changes
recorder.conf
Remove fixed stop wait timeout                                                     

Video/recorder.conf

  • Remove stopwaitsecs=30 configuration to allow graceful shutdown to
    complete naturally
+0/-1     
uploader.conf
Remove fixed stop wait timeout                                                     

Video/uploader.conf

  • Remove stopwaitsecs=30 configuration to allow graceful shutdown to
    complete naturally
+0/-1     
Documentation
docker-compose-v3-video-upload-standalone-arm64.yml
Add ARM64 standalone video upload docker-compose example 

docker-compose-v3-video-upload-standalone-arm64.yml

  • Add new docker-compose file for standalone ARM64 video recording with
    FTP upload
  • Configure FTP server, file browser, and standalone Chrome/Firefox
    containers
  • Set up RCLONE FTP configuration for video upload to local FTP server
  • Configure stop_grace_period: 30s for graceful shutdown of containers
+79/-0   


qodo-code-review bot commented Dec 10, 2025

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
Insecure temp files

Description: The stats update uses a fixed, predictable lock file '/tmp/upload_stats.lock' and data
file '/tmp/upload_stats.txt' in /tmp, which could be replaced or symlinked by another
process leading to TOCTOU or log injection; use mktemp-secured files in a dedicated
directory with restricted permissions.
upload.sh [151-159]

Referred Code
(
  flock -x 200
  local current_success=$(grep -oP 'upload_success_count=\K\d+' /tmp/upload_stats.txt 2>/dev/null || echo 0)
  echo "upload_success_count=$((current_success + 1))" >/tmp/upload_stats.txt.tmp
  local current_failed=$(grep -oP 'upload_failed_count=\K\d+' /tmp/upload_stats.txt 2>/dev/null || echo 0)
  echo "upload_failed_count=${current_failed}" >>/tmp/upload_stats.txt.tmp
  mv /tmp/upload_stats.txt.tmp /tmp/upload_stats.txt
) 200>/tmp/upload_stats.lock
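
One way to apply the mktemp recommendation above, as a sketch only (how the directory path is shared between the recorder and uploader processes is left out, and the names are illustrative):

# Create an unpredictable, mode-700 directory for the stats and lock files
STATS_DIR=$(mktemp -d /tmp/upload_stats.XXXXXX)
STATS_FILE="${STATS_DIR}/upload_stats.txt"
STATS_LOCK="${STATS_DIR}/upload_stats.lock"

record_success() {
  (
    flock -x 200
    local current
    current=$(grep -oP 'upload_success_count=\K\d+' "${STATS_FILE}" 2>/dev/null || echo 0)
    printf 'upload_success_count=%d\n' "$((current + 1))" > "${STATS_FILE}.tmp"
    mv "${STATS_FILE}.tmp" "${STATS_FILE}"
  ) 200>"${STATS_LOCK}"
}
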
Symlink attack risk

Description: Repeats writing to '/tmp/upload_stats.txt' with flock on '/tmp/upload_stats.lock' without
validating that the files are not symlinks, enabling a malicious symlink attack to
overwrite arbitrary files; ensure creation with safe permissions and refuse symlinks.
upload.sh [176-184]

Referred Code
# Update statistics file (for cross-process tracking)
(
  flock -x 200
  local current_success=$(grep -oP 'upload_success_count=\K\d+' /tmp/upload_stats.txt 2>/dev/null || echo 0)
  echo "upload_success_count=${current_success}" >/tmp/upload_stats.txt.tmp
  local current_failed=$(grep -oP 'upload_failed_count=\K\d+' /tmp/upload_stats.txt 2>/dev/null || echo 0)
  echo "upload_failed_count=$((current_failed + 1))" >>/tmp/upload_stats.txt.tmp
  mv /tmp/upload_stats.txt.tmp /tmp/upload_stats.txt
) 200>/tmp/upload_stats.lock
Plaintext credentials

Description: The example compose file includes plaintext router credentials and an rclone obscured
password in environment variables, which can be exposed via 'docker inspect' or logs;
recommend using Docker secrets or env files with restricted permissions and avoiding
hard-coded demo credentials.
docker-compose-v3-video-upload-standalone-arm64.yml [37-52]

Referred Code
- SE_ROUTER_USERNAME=admin
- SE_ROUTER_PASSWORD=admin
- SE_RECORD_VIDEO=true
- SE_SUB_PATH=/selenium
- SE_VIDEO_RECORD_STANDALONE=true
- SE_VIDEO_FILE_NAME=auto
- SE_VIDEO_UPLOAD_ENABLED=true
# Remote name and destination path to upload
- SE_UPLOAD_DESTINATION_PREFIX=myftp://ftp/seluser
# All configs required for RCLONE to upload to remote name myftp
- RCLONE_CONFIG_MYFTP_TYPE=ftp
- RCLONE_CONFIG_MYFTP_HOST=ftp_server
- RCLONE_CONFIG_MYFTP_PORT=21
- RCLONE_CONFIG_MYFTP_USER=seluser
# Password encrypted using command: rclone obscure <your_password>
- RCLONE_CONFIG_MYFTP_PASS=KkK8RsUIba-MMTBUSnuYIdAKvcnFyLl2pdhQig
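
One hypothetical way to follow this recommendation, assuming the compose file is adjusted to reference these values via ${VAR} interpolation instead of literals (file name and values below are placeholders):

# Keep credentials in a locked-down env file instead of the compose file
umask 077
{
  echo "SE_ROUTER_USERNAME=admin"
  echo "SE_ROUTER_PASSWORD=change-me"
  echo "RCLONE_CONFIG_MYFTP_PASS=$(rclone obscure 'change-me')"
} > .env.video-upload
docker compose --env-file .env.video-upload \
  -f docker-compose-v3-video-upload-standalone-arm64.yml up -d
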
Weak default auth

Description: A default basic auth header 'Authorization: Basic YWRtaW46YWRtaW4=' (admin:admin) is set
if env vars are absent, which could cause unintended authentication with weak credentials;
default to no auth or require explicit credentials.
video.sh [37-41]

Referred Code
BASIC_AUTH="Authorization: Basic YWRtaW46YWRtaW4="
if [ -n "${SE_ROUTER_USERNAME}" ] && [ -n "${SE_ROUTER_PASSWORD}" ]; then
  BASIC_AUTH="$(echo -en "${SE_ROUTER_USERNAME}:${SE_ROUTER_PASSWORD}" | base64 -w0)"
  BASIC_AUTH="Authorization: Basic ${BASIC_AUTH}"
fi
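
A possible alternative consistent with this recommendation (a sketch, not the project's current behaviour):

# Only build an Authorization header when credentials are explicitly provided
BASIC_AUTH=""
if [ -n "${SE_ROUTER_USERNAME:-}" ] && [ -n "${SE_ROUTER_PASSWORD:-}" ]; then
  BASIC_AUTH="Authorization: Basic $(echo -en "${SE_ROUTER_USERNAME}:${SE_ROUTER_PASSWORD}" | base64 -w0)"
fi
# Callers then add -H "${BASIC_AUTH}" only when the variable is non-empty.
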
Ticket Compliance
🎫 No ticket provided
  • Create ticket/issue
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed


Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status: Passed


Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status: Passed


Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
Action context: New upload workflow logs many events but lacks an authenticated user identifier in
entries, which may limit auditability depending on system requirements.

Referred Code
# Retry loop
while [ ${attempt} -le ${UPLOAD_RETRY_MAX_ATTEMPTS} ]; do
  echo "$(date -u +"${ts_format}") [${process_name}] - Upload attempt ${attempt}/${UPLOAD_RETRY_MAX_ATTEMPTS}: ${source} to ${target}"

  # Execute rclone command
  if rclone --config ${RCLONE_CONFIG} ${UPLOAD_COMMAND} ${UPLOAD_OPTS} "${source}" "${target}"; then
    echo "$(date -u +"${ts_format}") [${process_name}] - SUCCESS: Upload completed: ${source} to ${target}"

    # Verify checksum if enabled and using copy command
    if [ "${UPLOAD_VERIFY_CHECKSUM}" = "true" ] && [ -n "${source_checksum}" ] && [ "${UPLOAD_COMMAND}" = "copy" ]; then
      # For copy command, verify source file still has same checksum
      local post_upload_checksum=$(calculate_checksum "${source}")
      if [ "${source_checksum}" = "${post_upload_checksum}" ]; then
        echo "$(date -u +"${ts_format}") [${process_name}] - Checksum verification passed: ${source_checksum}"
      else
        echo "$(date -u +"${ts_format}") [${process_name}] - WARNING: Checksum mismatch after upload (before: ${source_checksum}, after: ${post_upload_checksum})"
      fi
    fi

    # Update statistics file (for cross-process tracking)
    (


 ... (clipped 27 lines)


Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status:
Plain credentials: The example compose file sets BASIC auth-like credentials and FTP credentials in
environment variables and comments, which could be sensitive in some deployments.

Referred Code
standalone_chrome:
  image: selenium/standalone-chromium:4.38.0-20251101
  shm_size: 2gb
  ports:
    - "4444:4444"
  environment:
    - SE_ROUTER_USERNAME=admin
    - SE_ROUTER_PASSWORD=admin
    - SE_RECORD_VIDEO=true
    - SE_SUB_PATH=/selenium
    - SE_VIDEO_RECORD_STANDALONE=true
    - SE_VIDEO_FILE_NAME=auto
    - SE_VIDEO_UPLOAD_ENABLED=true
    # Remote name and destination path to upload
    - SE_UPLOAD_DESTINATION_PREFIX=myftp://ftp/seluser
    # All configs required for RCLONE to upload to remote name myftp
    - RCLONE_CONFIG_MYFTP_TYPE=ftp
    - RCLONE_CONFIG_MYFTP_HOST=ftp_server
    - RCLONE_CONFIG_MYFTP_PORT=21
    - RCLONE_CONFIG_MYFTP_USER=seluser
    # Password encrypted using command: rclone obscure <your_password>


 ... (clipped 26 lines)


Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status:
Exposed secrets: The compose adds explicit FTP user/password and rclone secret in environment variables
which, if used beyond an example, could constitute improper secret handling.

Referred Code
    - RCLONE_CONFIG_MYFTP_TYPE=ftp
    - RCLONE_CONFIG_MYFTP_HOST=ftp_server
    - RCLONE_CONFIG_MYFTP_PORT=21
    - RCLONE_CONFIG_MYFTP_USER=seluser
    # Password encrypted using command: rclone obscure <your_password>
    - RCLONE_CONFIG_MYFTP_PASS=KkK8RsUIba-MMTBUSnuYIdAKvcnFyLl2pdhQig
    - RCLONE_CONFIG_MYFTP_FTP_CONCURRENCY=10
  stop_grace_period: 30s

standalone_firefox:
  image: selenium/standalone-firefox:4.38.0-20251101
  shm_size: 2gb
  ports:
    - "5444:4444"
  environment:
    - SE_ROUTER_USERNAME=admin
    - SE_ROUTER_PASSWORD=admin
    - SE_RECORD_VIDEO=true
    - SE_SUB_PATH=/selenium
    - SE_VIDEO_RECORD_STANDALONE=true
    - SE_VIDEO_FILE_NAME=auto


 ... (clipped 10 lines)


Compliance status legend:
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label


qodo-code-review bot commented Dec 10, 2025

PR Code Suggestions ✨

Explore these optional code suggestions:

Category | Suggestion | Impact
High-level
Use a single script for recording and uploading

Merge the video.sh and upload.sh scripts into a single script. This would
simplify the architecture by removing complex inter-process communication for
state management, synchronization, and graceful shutdown.

Examples:

Video/upload.sh [249-284]
function graceful_exit() {
  # Prevent duplicate execution (trap catches both SIGTERM and EXIT)
  if [ "$graceful_exit_called" = "true" ]; then
    return 0
  fi
  graceful_exit_called=true

  echo "$(date -u +"${ts_format}") [${process_name}] - Trapped SIGTERM/SIGINT/x so shutting down uploader"

  # Signal the pipe consumer to stop accepting new files

 ... (clipped 26 lines)
Video/video.sh [331-338]
function graceful_exit() {
  echo "$(date -u +"${ts_format}") [${process_name}] - Trapped SIGTERM/SIGINT/x so shutting down recorder"
  stop_if_recording_inprogress
  wait_for_background_finalization
  wait_for_pipe_to_drain
  send_exit_signal_to_uploader
  wait_util_uploader_shutdown
}

Solution Walkthrough:

Before:

# video.sh
function stop_recording() {
  # Stop ffmpeg, possibly async
  stop_ffmpeg_graceful_async(...)
}

function graceful_exit() {
  wait_for_background_finalization
  wait_for_pipe_to_drain # Checks for rclone processes
  send_exit_signal_to_uploader
}
trap graceful_exit SIGTERM

# upload.sh
function rclone_upload_with_retry() {
  # ... upload logic ...
  # Update stats in /tmp/upload_stats.txt with flock
}

function graceful_exit() {
  wait_for_active_uploads
  # Log stats from /tmp/upload_stats.txt
}
trap graceful_exit SIGTERM

After:

# video_and_upload.sh
declare -a background_pids
upload_success_count=0
upload_failed_count=0

function upload_video_async() {
  (
    # ... upload logic with retry ...
    upload_success_count=$((...))
  ) &
  background_pids+=($!)
}

function stop_recording_and_upload() {
  # Stop ffmpeg, wait for finalization
  # ...
  upload_video_async(...)
}

function graceful_exit() {
  # Stop any active recording
  # Wait for all PIDs in background_pids
  # Log stats from variables
  exit
}
trap graceful_exit SIGTERM
Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies that separating video.sh and upload.sh introduces significant complexity in synchronization, state management, and shutdown, which a unified script would simplify, improving robustness.

Impact: Medium
Possible issue
Ensure graceful video finalization on shutdown

Modify the stop_if_recording_inprogress function to default to an asynchronous
stop to ensure graceful video finalization during shutdown.

Video/video.sh [305-310]

 function stop_if_recording_inprogress() {
-  local use_async=${1:-false}
+  local use_async=${1:-true}
   if [[ "$recording_started" = "true" ]] || check_if_ffmpeg_running; then
     stop_recording "${use_async}"
   fi
 }
Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies that graceful_exit should use an asynchronous stop to leverage the new background finalization logic, preventing abrupt termination and potential video corruption on shutdown.

Impact: Medium
Learned best practice
Pin docker image tag

Replace the 'latest' tag with a specific, known-good version tag to make the
compose setup reproducible.

docker-compose-v3-video-upload-standalone-arm64.yml [8]

-image: delfer/alpine-ftp-server:latest
+image: delfer/alpine-ftp-server:1.1.5
Suggestion importance[1-10]: 6


Why:
Relevant best practice - Pin external tool/image versions instead of using 'latest' tags to ensure reproducible deployments.

Impact: Low
Quote and brace variables

Quote variable expansions and add braces to avoid word-splitting or globbing
issues when configs, commands, or options contain whitespace.

Video/upload.sh [136]

-rclone --config ${RCLONE_CONFIG} ${UPLOAD_COMMAND} ${UPLOAD_OPTS} "${source}" "${target}"
+rclone --config "${RCLONE_CONFIG}" "${UPLOAD_COMMAND}" ${UPLOAD_OPTS} "${source}" "${target}"

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 5


Why:
Relevant best practice - Quote variable expansions and use braces in shell commands to safely handle paths/filenames and options.

Impact: Low

@qodo-code-review

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: Test Selenium Grid on Kubernetes / Test K8s (v1.30.14, minikube, v3.15.4, 27.5.1, 3.12, true, false, ubuntu-22.04, true, job_hostname)

Failed stage: Set up containerd image store feature [❌]

Failure summary:

The workflow failed due to two errors:
- During nick-invision/retry@master running make
setup_dev_env, the action crashed with Error: kill EPERM from
/home/runner/work/_actions/nick-invision/retry/master/dist/index.js:1931, indicating the retry
action attempted to kill a process and lacked permission (or the process was already terminated),
causing the step to fail.
- Later, actions/upload-artifact@main failed with Input required and not
supplied: path, meaning the required input path was not provided to the artifact upload step.

These failures are from the workflow/action configuration rather than a repository test or code
assertion.

Relevant error logs:
1:  ##[group]Runner Image Provisioner
2:  Hosted Compute Agent
...

168:
169:    sudo rm -rf /opt/ghc || true
170:    sudo rm -rf /usr/local/.ghcup || true
171:
172:    AFTER=$(getAvailableSpace)
173:    SAVED=$((AFTER-BEFORE))
174:    printSavedSpace $SAVED "Haskell runtime"
175:  fi
176:
177:  # Option: Remove large packages
178:  # REF: https://github.com/apache/flink/blob/master/tools/azure-pipelines/free_disk_space.sh
179:
180:  if [[ false == 'true' ]]; then
181:    BEFORE=$(getAvailableSpace)
182:
183:    sudo apt-get remove -y '^aspnetcore-.*' || echo "::warning::The command [sudo apt-get remove -y '^aspnetcore-.*'] failed to complete successfully. Proceeding..."
184:    sudo apt-get remove -y '^dotnet-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^dotnet-.*' --fix-missing] failed to complete successfully. Proceeding..."
185:    sudo apt-get remove -y '^llvm-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^llvm-.*' --fix-missing] failed to complete successfully. Proceeding..."
186:    sudo apt-get remove -y 'php.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y 'php.*' --fix-missing] failed to complete successfully. Proceeding..."
187:    sudo apt-get remove -y '^mongodb-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^mongodb-.*' --fix-missing] failed to complete successfully. Proceeding..."
188:    sudo apt-get remove -y '^mysql-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^mysql-.*' --fix-missing] failed to complete successfully. Proceeding..."
189:    sudo apt-get remove -y azure-cli google-chrome-stable firefox powershell mono-devel libgl1-mesa-dri --fix-missing || echo "::warning::The command [sudo apt-get remove -y azure-cli google-chrome-stable firefox powershell mono-devel libgl1-mesa-dri --fix-missing] failed to complete successfully. Proceeding..."
190:    sudo apt-get remove -y google-cloud-sdk --fix-missing || echo "::debug::The command [sudo apt-get remove -y google-cloud-sdk --fix-missing] failed to complete successfully. Proceeding..."
191:    sudo apt-get remove -y google-cloud-cli --fix-missing || echo "::debug::The command [sudo apt-get remove -y google-cloud-cli --fix-missing] failed to complete successfully. Proceeding..."
192:    sudo apt-get autoremove -y || echo "::warning::The command [sudo apt-get autoremove -y] failed to complete successfully. Proceeding..."
193:    sudo apt-get clean || echo "::warning::The command [sudo apt-get clean] failed to complete successfully. Proceeding..."
194:
...

562:  git switch -
563:  Turn off this advice by setting config variable advice.detachedHead to false
564:  HEAD is now at 0b960f5 Merge 98b8da6da1d7a3e5540216efdcdf68640faa7cfa into 98a4923baad389c0e67b116605950c2a3e85f7f5
565:  ##[endgroup]
566:  [command]/usr/bin/git log -1 --format=%H
567:  0b960f57e14eb5d9029f6191eee8f9638a45de88
568:  ##[group]Run nick-invision/retry@master
569:  with:
570:  timeout_minutes: 10
571:  max_attempts: 3
572:  command: make setup_dev_env
573:  
574:  retry_wait_seconds: 10
575:  polling_interval_seconds: 1
576:  warning_on_retry: true
577:  continue_on_error: false
578:  env:
...

591:  ##[endgroup]
592:  ##[group]Attempt 1
593:  ./tests/charts/make/chart_setup_env.sh ; \
594:  exit_code=$? ; \
595:  make set_containerd_image_store ; \
596:  exit $exit_code ;
597:  + echo 'Set ENV variables'
598:  Set ENV variables
599:  + CLUSTER=minikube
600:  + DOCKER_VERSION=27.5.1
601:  + DOCKER_ENABLE_QEMU=true
602:  + HELM_VERSION=v3.15.4
603:  + KUBERNETES_VERSION=v1.30.14
604:  + INSTALL_DOCKER=true
605:  + [[ true != \t\r\u\e ]]
606:  + trap on_failure ERR
607:  + echo 'Installing Docker for AMD64 / ARM64'
...

844:  Get:11 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 gcc-11-cross-base all 11.4.0-1ubuntu1~22.04cross1 [15.5 kB]
845:  Get:12 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 gcc-12-cross-base all 12.3.0-1ubuntu1~22.04cross1 [15.7 kB]
846:  Get:13 http://azure.archive.ubuntu.com/ubuntu jammy/main amd64 libc6-arm64-cross all 2.35-0ubuntu1cross3 [1147 kB]
847:  Get:14 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libgcc-s1-arm64-cross all 12.3.0-1ubuntu1~22.04cross1 [39.8 kB]
848:  Get:15 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libgomp1-arm64-cross all 12.3.0-1ubuntu1~22.04cross1 [122 kB]
849:  Get:16 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libitm1-arm64-cross all 12.3.0-1ubuntu1~22.04cross1 [28.0 kB]
850:  Get:17 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libatomic1-arm64-cross all 12.3.0-1ubuntu1~22.04cross1 [10.6 kB]
851:  Get:18 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libasan6-arm64-cross all 11.4.0-1ubuntu1~22.04cross1 [2228 kB]
852:  Get:19 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 liblsan0-arm64-cross all 12.3.0-1ubuntu1~22.04cross1 [1034 kB]
853:  Get:20 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libtsan0-arm64-cross all 11.4.0-1ubuntu1~22.04cross1 [2223 kB]
854:  Get:21 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libstdc++6-arm64-cross all 12.3.0-1ubuntu1~22.04cross1 [616 kB]
855:  Get:22 http://azure.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libubsan1-arm64-cross all 12.3.0-1ubuntu1~22.04cross1 [964 kB]
856:  /home/runner/work/_actions/nick-invision/retry/master/dist/index.js:1931
857:  throw err;
858:  ^
859:  Error: kill EPERM
860:  at process.kill (node:internal/process/per_thread:225:13)
...

881:  include-hidden-files: false
882:  env:
883:  CLUSTER: minikube
884:  KUBERNETES_VERSION: v1.30.14
885:  ARTIFACT_NAME: v1.30.14-job_hostname
886:  HELM_VERSION: v3.15.4
887:  DOCKER_VERSION: 27.5.1
888:  TEST_UPGRADE_CHART: true
889:  SERVICE_MESH: false
890:  CHECK_RECORD_OUTPUT: true
891:  SAUCE_ACCESS_KEY: ***
892:  SAUCE_USERNAME: ***
893:  SAUCE_REGION: ***
894:  TEST_PATCHED_KEDA: 
895:  ##[endgroup]
896:  ##[error]Input required and not supplied: path
897:  ##[group]Run actions/upload-artifact@main

Signed-off-by: Viet Nguyen Duc <[email protected]>
Signed-off-by: Viet Nguyen Duc <[email protected]>
