
make server threads configurable with server.properties file #9260


Open · wants to merge 1 commit into main

Conversation

ahmedali6 (Contributor)

Description

Fixes: #9161
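
For context: this change makes the management server's Jetty thread pool bounds configurable through two new settings in server.properties (quoted here from the diff discussed later in this thread):

# Thread pool configuration
threads.min=10
threads.max=500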

Types of changes

  • Enhancement (improves an existing feature and functionality)

Feature/Enhancement Scale

  • Minor

How Has This Been Tested?

Tested manually by building and running the server.

How did you try to break this feature and the system with this change?


codecov bot commented Jun 14, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 15.32%. Comparing base (cb9b313) to head (de779c3).
Report is 529 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff              @@
##               main    #9260      +/-   ##
============================================
- Coverage     15.32%   15.32%   -0.01%     
  Complexity    11687    11687              
============================================
  Files          5459     5459              
  Lines        477294   477294              
  Branches      62055    59052    -3003     
============================================
- Hits          73140    73138       -2     
- Misses       396074   396076       +2     
  Partials       8080     8080              
Flag       Coverage Δ
uitests    4.19% <ø> (ø)
unittests  16.07% <ø> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.


@DaanHoogland (Contributor)

@blueorangutan package

@blueorangutan

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✖️ debian ✔️ suse15. SL-JID 9978

@DaanHoogland reopened this Jun 17, 2024
@DaanHoogland (Contributor)

@blueorangutan package

@blueorangutan

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 10321

@DaanHoogland (Contributor)

@blueorangutan test alma9 kvm-alma9

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (alma9 mgmt + kvm-alma9) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-10805)
Environment: kvm-alma9 (x2), Advanced Networking with Mgmt server a9
Total time taken: 55648 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr9260-t10805-kvm-alma9.zip
Smoke tests completed. 135 look OK, 2 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test                                       Result   Time (s)  Test File
test_06_purge_expunged_vm_background_task  Failure  331.49    test_purge_expunged_vms.py
test_01_redundant_vpc_site2site_vpn        Failure  470.34    test_vpc_vpn.py
test_01_vpc_site2site_vpn                  Failure  325.52    test_vpc_vpn.py

@DaanHoogland (Contributor) left a comment

clgtm

@JoaoJandre (Contributor) left a comment

LGTM, did not test it

Comment on lines +106 to +107
private int minThreads = 10;
private int maxThreads = 500;

This is a small nitpick, but you don't need to initialize these values, since the properties.getProperty call already supplies them as defaults.


Will you change this, @ahmedali6? No big deal if you don't, but if you do, you can also comment out the new properties for the same reason.
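
To illustrate the pattern under discussion, here is a minimal sketch (not the PR's actual code; the class and method names are hypothetical) of reading both properties with the defaults supplied at the getProperty call, which is what makes the field initializers redundant:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ServerThreadConfig {
    // No field initializers needed: the defaults live in the getProperty calls below.
    private int minThreads;
    private int maxThreads;

    public void load(String path) throws IOException {
        Properties properties = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            properties.load(in);
        }
        // The second argument to getProperty is the fallback used when the key is
        // absent, e.g. when the lines are commented out in server.properties.
        minThreads = Integer.parseInt(properties.getProperty("threads.min", "10"));
        maxThreads = Integer.parseInt(properties.getProperty("threads.max", "500"));
    }
}

The parsed values would then size Jetty's pool, e.g. new QueuedThreadPool(maxThreads, minThreads).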


This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.

@shwstppr (Contributor) left a comment

code LGTM (w/ a minor comment)

Comment on lines +56 to +58
# Thread pool configuration
threads.min=10
threads.max=500

Minor: we are setting the default value in too many places. These lines could have been comments instead.
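
For illustration, the commented-out form suggested here might look like the sketch below (hypothetical), leaving the defaults to come only from the code's getProperty fallbacks:

# Thread pool configuration (defaults: threads.min=10, threads.max=500)
#threads.min=10
#threads.max=500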

@shwstppr (Contributor)

@ahmedali6 can you please address the comments and resolve conflicts?

@sureshanaparti (Contributor)

Hi @ahmedali6, can you check/address the outstanding comments and resolve the conflicts?

@rosi-shapeblue self-assigned this Jul 23, 2025

@rosi-shapeblue (Collaborator) left a comment

LGTM. Threads are configurable in the server.properties file, and the set min/max thresholds are respected (verified with a script I developed for this purpose). Test results can be seen below:

NOTE: Attached is the script I developed, in case it's needed for further testing:
cloudstack-thread-monitor.sh.txt
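
For reference, the core check such a script performs can be approximated with a JVM thread dump. A sketch, assuming (as the output below shows) that Jetty pool threads are named qtp<id>-<n> and that MS_PID holds the management server PID:

# Count the management server's Jetty QueuedThreadPool threads.
jstack "$MS_PID" | grep -c '^"qtp'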

Test Results with threads.min=3 and threads.max=10

[root@ref-trl-9054-k-Mol8-rositsa-kyuchukova-mgmt1 ~]# ./cloudstack-thread-monitor.sh 
Checking for existing script instances...
Current script PID: 2520864
Current script name: cloudstack-thread-monitor.sh
Instance check completed
Lock file created: /tmp/cloudstack-thread-monitor.lock (PID: 2520864)
Instance check completed. No conflicts detected.

======================================================
CloudStack Management Server - Thread Pool Monitoring
======================================================
Management Server PID: 1112726
Configuration:
  threads.min = 3
  threads.max = 10
======================================================

Initializing CloudMonkey...
Running: cmk sync
CMK sync output: Discovered 788 APIs
CMK exit code: 0
CloudMonkey initialized successfully

================================================================
SYSTEM HEALTH VERIFICATION
================================================================
Pre-flight System Check:
  Running scripts (excluding this one): 0
  Background CMK processes: 0
  Current system load: 0.38
  Available memory: 1.7Gi
  CloudStack service: active

System Readiness Assessment:
  No conflicting scripts detected
  No background CMK processes
  System load is optimal (0.38)
  CloudStack Management Server is healthy

SYSTEM STATUS: OPTIMAL - Perfect conditions for testing
================================================================

=============================
CURRENT JETTY THREAD ANALYSIS
=============================

Current QueuedThreadPool (Jetty) threads:
  TID 1113446: qtp253011924-18
  TID 2424851: qtp253011924-10
  TID 2467273: qtp253011924-10
  TID 2478268: qtp253011924-10
  TID 2483201: qtp253011924-10
  TID 2505282: qtp253011924-10
  TID 2520851: qtp253011924-10
  TID 2520852: qtp253011924-10
  TID 2520854: qtp253011924-10
  TID 2520855: qtp253011924-10
Total QueuedThreadPool threads: 10
PASS: QueuedThreadPool thread count (10) is within configured limits (3-10)

==================================
MONITORING QTP THREADS UNDER LOAD
==================================

Baseline measurement (no load):
QueuedThreadPool threads before load: 10

================================================
MODERATE LOAD TEST - Lightweight API Operations
================================================

API Commands for Moderate Load:
  * list users              -> Admin user accounts
  * list zones              -> CloudStack zones
  * list accounts           -> Customer accounts
  * list domains            -> Domain hierarchy
  * list capabilities       -> Server capabilities
  * list serviceofferings   -> VM service offerings
  * list diskofferings      -> Disk offerings
  * list networks           -> Network configurations
  * list events (recent)    -> System events (limited)
  * list alerts (recent)    -> System alerts (limited)

Load Pattern: 5 concurrent processes, random API selection, 1s intervals (REDUCED LOAD)
Expected Impact: Light thread usage without system overload

Generating moderate load (5 concurrent cmk requests)...
Moderate load verification:
  Active connections to port 8080: 11
  Background cmk processes: 5
  Server load average: 0.51

Real-time Monitoring Table:
 ________________________________________________________________________________________________________
|   Time   | QueuedThreadPool | Total Threads | Active TCP Connections | Within Limit? |     Status      |
|----------|------------------|---------------|------------------------|---------------|-----------------|
| 11:07:28 |               10 |           216 |                      1 |          PASS |   WITHIN LIMITS |
| 11:07:32 |               10 |           216 |                      1 |          PASS |   WITHIN LIMITS |
| 11:07:36 |               10 |           216 |                      1 |          PASS |   WITHIN LIMITS |
| 11:07:40 |               10 |           216 |                      1 |          PASS |   WITHIN LIMITS |
| 11:07:44 |               10 |           216 |                      2 |          PASS |   WITHIN LIMITS |
| 11:07:47 |               10 |           216 |                      2 |          PASS |   WITHIN LIMITS |
| 11:07:51 |               10 |           216 |                      2 |          PASS |   WITHIN LIMITS |
| 11:07:56 |               10 |           216 |                     10 |          PASS |   WITHIN LIMITS |
| 11:08:00 |               10 |           216 |                      4 |          PASS |   WITHIN LIMITS |
| 11:08:04 |               10 |           216 |                      0 |          PASS |   WITHIN LIMITS |
| 11:08:09 |                9 |           215 |                      2 |          PASS |   WITHIN LIMITS |
| 11:08:13 |                9 |           215 |                      1 |          PASS |   WITHIN LIMITS |
| 11:08:17 |                9 |           215 |                      3 |          PASS |   WITHIN LIMITS |
| 11:08:22 |                9 |           215 |                      3 |          PASS |   WITHIN LIMITS |
| 11:08:26 |                9 |           215 |                      1 |          PASS |   WITHIN LIMITS |
 --------------------------------------------------------------------------------------------------------

Measuring peak thread usage during moderate load...

Peak usage under moderate load:
  QueuedThreadPool threads: 9
  Total threads: 215
  Configured max: 10
  Active connections: 5
  Server CPU usage: 1.7% (CORRECTED)

Stopping moderate load...
Moderate load stopped

=========================================
EXTREME LOAD TEST - Heavy API Operations
=========================================

API Commands for Extreme Load:
  * list configurations     -> All config parameters (HEAVY)
  * list isos               -> ISO images catalog (HEAVY)
  * list systemvms          -> System virtual machines
  * list routers            -> Virtual routers
  * list hosts              -> Physical hosts
  * list clusters           -> Host clusters
  * list storagepools       -> Storage pools
  * list volumes            -> Storage volumes
  * list snapshots          -> Volume snapshots
  * list events (full)      -> All system events (HEAVY)
  * list alerts (full)      -> All system alerts (HEAVY)
  * list asyncjobs          -> Background jobs
  * list affinitygroups     -> VM affinity groups
  * list networks           -> Available Networks

Load Pattern: 10 concurrent processes, continuous heavy API calls, 0.5s intervals (REDUCED LOAD)
Expected Impact: Moderate thread pool stress without system overload

Generating extreme load (10 concurrent cmk requests)...
Extreme load verification:
  Active connections to port 8080: 13
  Background cmk processes: 3
  Server load average: 4.70

Real-time Extreme Load Monitoring:
 ________________________________________________________________________________________________________
|   Time   | QueuedThreadPool | Total Threads | Active TCP Connections | Within Limit? |     Status      |
|----------|------------------|---------------|------------------------|---------------|-----------------|
| 11:08:53 |               10 |           216 |                      2 |          PASS |   WITHIN LIMITS |
| 11:09:14 |               10 |           216 |                     13 |          PASS |   WITHIN LIMITS |
| 11:09:26 |               10 |           216 |                     19 |          PASS |   WITHIN LIMITS |
| 11:09:50 |                9 |           216 |                     20 |          PASS |   WITHIN LIMITS |
| 11:10:09 |               10 |           216 |                      4 |          PASS |   WITHIN LIMITS |
| 11:10:27 |               10 |           216 |                      9 |          PASS |   WITHIN LIMITS |
| 11:10:44 |                9 |           216 |                     22 |          PASS |   WITHIN LIMITS |
| 11:10:56 |               10 |           216 |                      0 |          PASS |   WITHIN LIMITS |
| 11:11:14 |               10 |           216 |                      1 |          PASS |   WITHIN LIMITS |
| 11:11:23 |               10 |           216 |                     25 |          PASS |   WITHIN LIMITS |
| 11:11:37 |                9 |           216 |                     27 |          PASS |   WITHIN LIMITS |
| 11:11:56 |               10 |           216 |                     14 |          PASS |   WITHIN LIMITS |
| 11:12:14 |               10 |           216 |                     17 |          PASS |   WITHIN LIMITS |
| 11:12:31 |               10 |           216 |                     11 |          PASS |   WITHIN LIMITS |
| 11:12:44 |                9 |           216 |                      8 |          PASS |   WITHIN LIMITS |
 --------------------------------------------------------------------------------------------------------

Measuring peak thread usage during extreme load...

Peak usage under extreme load:
  QueuedThreadPool threads: 10
  Total threads: 216
  Configured max: 10
  Active connections: 24
  Server CPU usage: 1.7% (CORRECTED)

Stopping extreme load...
Extreme load stopped

===================================================
FINAL ANALYSIS - THREAD POOL CONFIGURATION RESULTS
===================================================

Configuration Analysis:
  Configured: threads.min=3, threads.max=10
  Baseline QueuedThreadPool threads: 10
  Maximum QueuedThreadPool threads observed: 10
  Total Management Server threads: 216

Thread Pool Behavior Assessment:
RESULT: OPTIMAL - QueuedThreadPool reached configured maximum (10)
STATUS: Thread pool is properly respecting the configured limits
EVIDENCE: Peak usage (10) exactly matches threads.max setting

System Load Warning Check:
Current system load average: 13.73
WARNING: System load is high (13.73 > 5.0)
   This may indicate I/O saturation or too many concurrent processes
   Consider reducing load or checking system resources

Key Evidence:
  * Auto-detected CloudStack Management Server (PID: 1112726)
  * Configuration loaded from server.properties
  * CloudMonkey properly authenticated and synced
  * QueuedThreadPool behavior monitored under realistic API load
  * Both lightweight and heavy API operations tested with real-time monitoring

Final Conclusion - Thread Pool Configuration Assessment:
PASS: Thread pool configuration is working correctly.
      The threads.min and threads.max settings are being respected.

================================================================
SYSTEM CLEANUP AND RESOURCE RECOVERY
================================================================
Terminating all background processes...
Clearing system caches...
Verifying cleanup...
  Remaining cmk processes: 0
  Current system load: 12.71
  Memory status:
Mem:          4.8Gi       2.9Gi       1.5Gi       225Mi       366Mi       1.5Gi
Swap:            0B          0B          0B
All background processes terminated successfully
System cleanup completed
================================================================

================================================================
Script Execution Time: 5m 56s
================================================================

Removing lock file...

Test Results with threads.min=3 and threads.max=5

[root@ref-trl-9054-k-Mol8-rositsa-kyuchukova-mgmt1 ~]# ./cloudstack-thread-monitor.sh 
Checking for existing script instances...
Current script PID: 2625492
Current script name: test.sh
Instance check completed
Lock file created: /tmp/cloudstack-thread-monitor.lock (PID: 2625492)
Instance check completed. No conflicts detected.

======================================================
CloudStack Management Server - Thread Pool Monitoring
======================================================
Management Server PID: 2614238
Configuration:
  threads.min = 3
  threads.max = 5
======================================================

Initializing CloudMonkey...
Running: cmk sync
CMK sync output: Discovered 860 APIs
CMK exit code: 0
CloudMonkey initialized successfully

================================================================
SYSTEM HEALTH VERIFICATION
================================================================
Pre-flight System Check:
  Running scripts (excluding this one): 0
  Background CMK processes: 0
  Current system load: 0.04
  Available memory: 1.9Gi
  CloudStack service: active

System Readiness Assessment:
  No conflicting scripts detected
  No background CMK processes
  System load is optimal (0.04)
  CloudStack Management Server is healthy

SYSTEM STATUS: OPTIMAL - Perfect conditions for testing
================================================================

=============================
CURRENT JETTY THREAD ANALYSIS
=============================

Current QueuedThreadPool (Jetty) threads:
  TID 2614262: qtp253011924-17
  TID 2615263: qtp253011924-35
  TID 2615571: qtp253011924-36
  TID 2615900: qtp253011924-37
  TID 2615901: qtp253011924-37
Total QueuedThreadPool threads: 5
PASS: QueuedThreadPool thread count (5) is within configured limits (3-5)

==================================
MONITORING QTP THREADS UNDER LOAD
==================================

Baseline measurement (no load):
QueuedThreadPool threads before load: 5

================================================
MODERATE LOAD TEST - Lightweight API Operations
================================================

API Commands for Moderate Load:
  * list users              -> Admin user accounts
  * list zones              -> CloudStack zones
  * list accounts           -> Customer accounts
  * list domains            -> Domain hierarchy
  * list capabilities       -> Server capabilities
  * list serviceofferings   -> VM service offerings
  * list diskofferings      -> Disk offerings
  * list networks           -> Network configurations
  * list events (recent)    -> System events (limited)
  * list alerts (recent)    -> System alerts (limited)

Load Pattern: 5 concurrent processes, random API selection, 1s intervals (REDUCED LOAD)
Expected Impact: Light thread usage without system overload

Generating moderate load (5 concurrent cmk requests)...
Moderate load verification:
  Active connections to port 8080: 0
  Background cmk processes: 0
  Server load average: 0.11

Real-time Monitoring Table:
 ________________________________________________________________________________________________________
|   Time   | QueuedThreadPool | Total Threads | Active TCP Connections | Within Limit? |     Status      |
|----------|------------------|---------------|------------------------|---------------|-----------------|
| 13:03:04 |                5 |           208 |                      0 |          PASS |   WITHIN LIMITS |
| 13:03:07 |                5 |           208 |                      0 |          PASS |   WITHIN LIMITS |
| 13:03:10 |                5 |           208 |                      0 |          PASS |   WITHIN LIMITS |
| 13:03:13 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:03:16 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:03:19 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:03:22 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:03:24 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:03:27 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:03:30 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:03:33 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:03:36 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:03:39 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:03:42 |                5 |           208 |                      0 |          PASS |   WITHIN LIMITS |
| 13:03:45 |                5 |           208 |                      0 |          PASS |   WITHIN LIMITS |
 --------------------------------------------------------------------------------------------------------

Measuring peak thread usage during moderate load...

Peak usage under moderate load:
  QueuedThreadPool threads: 5
  Total threads: 208
  Configured max: 5
  Active connections: 0
  Server CPU usage: 5.5% (CORRECTED)

Stopping moderate load...
Moderate load stopped

=========================================
EXTREME LOAD TEST - Heavy API Operations
=========================================

API Commands for Extreme Load:
  * list configurations     -> All config parameters (HEAVY)
  * list isos               -> ISO images catalog (HEAVY)
  * list systemvms          -> System virtual machines
  * list routers            -> Virtual routers
  * list hosts              -> Physical hosts
  * list clusters           -> Host clusters
  * list storagepools       -> Storage pools
  * list volumes            -> Storage volumes
  * list snapshots          -> Volume snapshots
  * list events (full)      -> All system events (HEAVY)
  * list alerts (full)      -> All system alerts (HEAVY)
  * list asyncjobs          -> Background jobs
  * list affinitygroups     -> VM affinity groups
  * list networks           -> Available Networks

Load Pattern: 10 concurrent processes, continuous heavy API calls, 0.5s intervals (REDUCED LOAD)
Expected Impact: Moderate thread pool stress without system overload

Generating extreme load (10 concurrent cmk requests)...
Extreme load verification:
  Active connections to port 8080: 0
  Background cmk processes: 0
  Server load average: 2.52

Real-time Extreme Load Monitoring:
 ________________________________________________________________________________________________________
|   Time   | QueuedThreadPool | Total Threads | Active TCP Connections | Within Limit? |     Status      |
|----------|------------------|---------------|------------------------|---------------|-----------------|
| 13:04:01 |                5 |           208 |                      0 |          PASS |   WITHIN LIMITS |
| 13:04:05 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:04:09 |                5 |           208 |                      0 |          PASS |   WITHIN LIMITS |
| 13:04:13 |                5 |           208 |                      4 |          PASS |   WITHIN LIMITS |
| 13:04:17 |                5 |           208 |                      2 |          PASS |   WITHIN LIMITS |
| 13:04:21 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:04:25 |                5 |           208 |                      3 |          PASS |   WITHIN LIMITS |
| 13:04:28 |                5 |           208 |                      2 |          PASS |   WITHIN LIMITS |
| 13:04:33 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:04:37 |                5 |           208 |                      2 |          PASS |   WITHIN LIMITS |
| 13:04:42 |                5 |           208 |                      0 |          PASS |   WITHIN LIMITS |
| 13:04:46 |                5 |           208 |                      2 |          PASS |   WITHIN LIMITS |
| 13:04:51 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
| 13:04:55 |                5 |           208 |                      2 |          PASS |   WITHIN LIMITS |
| 13:05:00 |                5 |           208 |                      1 |          PASS |   WITHIN LIMITS |
 --------------------------------------------------------------------------------------------------------

Measuring peak thread usage during extreme load...

Peak usage under extreme load:
  QueuedThreadPool threads: 5
  Total threads: 208
  Configured max: 5
  Active connections: 1
  Server CPU usage: 5.3% (CORRECTED)

Stopping extreme load...
Extreme load stopped

===================================================
FINAL ANALYSIS - THREAD POOL CONFIGURATION RESULTS
===================================================

Configuration Analysis:
  Configured: threads.min=3, threads.max=5
  Baseline QueuedThreadPool threads: 5
  Maximum QueuedThreadPool threads observed: 5
  Total Management Server threads: 208

Thread Pool Behavior Assessment:
RESULT: OPTIMAL - QueuedThreadPool reached configured maximum (5)
STATUS: Thread pool is properly respecting the configured limits
EVIDENCE: Peak usage (5) exactly matches threads.max setting

System Load Warning Check:
Current system load average: 7.75
WARNING: System load is high (7.75 > 5.0)
   This may indicate I/O saturation or too many concurrent processes
   Consider reducing load or checking system resources

Key Evidence:
  * Auto-detected CloudStack Management Server (PID: 2614238)
  * Configuration loaded from server.properties
  * CloudMonkey properly authenticated and synced
  * QueuedThreadPool behavior monitored under realistic API load
  * Both lightweight and heavy API operations tested with real-time monitoring

Final Conclusion - Thread Pool Configuration Assessment:
PASS: Thread pool configuration is working correctly.
      The threads.min and threads.max settings are being respected.

================================================================
SYSTEM CLEANUP AND RESOURCE RECOVERY
================================================================
Terminating all background processes...
Clearing system caches...
Verifying cleanup...
  Remaining cmk processes: 0
  Current system load: 7.13
  Memory status:
Mem:          4.8Gi       2.5Gi       2.0Gi       225Mi       352Mi       1.9Gi
Swap:            0B          0B          0B
All background processes terminated successfully
System cleanup completed
================================================================

================================================================
Script Execution Time: 2m 18s
================================================================

Removing lock file...

@sureshanaparti (Contributor)

Ping @ahmedali6: can you check the outstanding comments and resolve the conflicts?

Projects: In Progress
Development: merging this pull request may close "Configure server max and min threads." (#9161)
7 participants