Conversation

hsato03 (Collaborator) commented Nov 28, 2025

Description

When migrating a volume snapshot from one secondary storage to another using the migrateSecondaryStorageData and migrateResourceToAnotherSecondaryStorage APIs, the snapshot's physical size is set to 0.

This PR changes that behavior so that the snapshot's physical size is preserved after migration.
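
At its core, the fix changes how the destination snapshot store entry is looked up: it is now resolved by the destination data store ID and snapshot ID instead of the source parameters (see the review snippet further below). A minimal sketch of the corrected lookup; the pre-fix behavior is inferred from the review discussion, not copied from the diff:

// Resolve the snapshot's row on the *destination* image store, then copy the
// physical size from the source row onto it. Before the fix, the destination
// row presumably kept its default physical_size of 0 after migration.
SnapshotDataStoreVO destSnapshotStore = snapshotStoreDao.findByStoreSnapshot(
        DataStoreRole.Image,
        destData.getDataStore().getId(), // destination store id, not the source's
        destData.getId());               // snapshot id
destSnapshotStore.setPhysicalSize(snapshotStore.getPhysicalSize());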

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • Build/CI
  • Test (unit or integration test code)

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

Bug Severity

  • BLOCKER
  • Critical
  • Major
  • Minor
  • Trivial

Screenshots (if appropriate):

How Has This Been Tested?

I migrated a volume snapshot using the migrateSecondaryStorageData and migrateResourceToAnotherSecondaryStorage APIs and verified that its physical size had not changed.

How did you try to break this feature and the system with this change?

hsato03 (Collaborator, Author) commented Nov 28, 2025

@blueorangutan package

@blueorangutan

@hsato03 a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

codecov bot commented Nov 28, 2025

Codecov Report

❌ Patch coverage is 0% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 16.18%. Comparing base (e4414d1) to head (dc7c20d).
⚠️ Report is 62 commits behind head on 4.20.

Files with missing lines Patch % Lines
...ack/storage/image/SecondaryStorageServiceImpl.java 0.00% 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##               4.20   #12166      +/-   ##
============================================
- Coverage     16.18%   16.18%   -0.01%     
- Complexity    13300    13301       +1     
============================================
  Files          5657     5657              
  Lines        498478   498478              
  Branches      60501    60501              
============================================
- Hits          80668    80664       -4     
- Misses       408827   408830       +3     
- Partials       8983     8984       +1     
Flag Coverage Δ
uitests 4.00% <ø> (ø)
unittests 17.03% <0.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.


Copilot AI (Contributor) left a comment

Pull request overview

This PR fixes a bug where volume snapshot physical size was incorrectly set to 0 after migration between secondary storages using the migrateSecondaryStorageData and migrateResourceToAnotherSecondaryStorage APIs.

  • Corrected the lookup of the destination snapshot store to use destination data store ID and snapshot ID instead of source parameters
  • Aligned the snapshot handling pattern with the existing VolumeInfo and TemplateInfo handling in the same method
Comments suppressed due to low confidence (1)

engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/SecondaryStorageServiceImpl.java:291

  • The fix in the updateDataObject method lacks test coverage. Consider adding a test case verifying that, after migrating a snapshot between secondary storages, the physical size is correctly preserved; this would prevent similar regressions in the future.
    private void updateDataObject(DataObject srcData, DataObject destData) {
        if (destData instanceof SnapshotInfo) {
            SnapshotDataStoreVO snapshotStore = snapshotStoreDao.findBySourceSnapshot(srcData.getId(), DataStoreRole.Image);
            SnapshotDataStoreVO destSnapshotStore = snapshotStoreDao.findByStoreSnapshot(DataStoreRole.Image, destData.getDataStore().getId(), destData.getId());
            if (snapshotStore != null && destSnapshotStore != null) {
                destSnapshotStore.setPhysicalSize(snapshotStore.getPhysicalSize());
                destSnapshotStore.setCreated(snapshotStore.getCreated());
                if (snapshotStore.getParentSnapshotId() != destSnapshotStore.getParentSnapshotId()) {
                    destSnapshotStore.setParentSnapshotId(snapshotStore.getParentSnapshotId());
                }
                snapshotStoreDao.update(destSnapshotStore.getId(), destSnapshotStore);
            }
        }
        // ... the method goes on to handle VolumeInfo and TemplateInfo analogously
    }

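Copilot's coverage remark could be addressed with a small unit test around updateDataObject. A minimal sketch, assuming the method's visibility is relaxed from private to package-private, the DAO field is injected with Spring's ReflectionTestUtils, and SnapshotDataStoreVO has its usual public no-arg constructor; class and package names are recalled from the CloudStack tree rather than taken from this PR:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;
import org.springframework.test.util.ReflectionTestUtils;

import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreVO;

import com.cloud.storage.DataStoreRole;

public class SecondaryStorageServiceImplTest {

    @Test
    public void updateDataObjectPreservesPhysicalSizeOnDestinationStore() {
        SnapshotDataStoreDao snapshotStoreDao = mock(SnapshotDataStoreDao.class);

        // Source row carries the real physical size; the destination row starts
        // at 0, which is exactly the pre-fix symptom.
        SnapshotDataStoreVO srcStoreRef = new SnapshotDataStoreVO();
        srcStoreRef.setPhysicalSize(1510604800L);
        SnapshotDataStoreVO destStoreRef = new SnapshotDataStoreVO();

        SnapshotInfo srcData = mock(SnapshotInfo.class);
        SnapshotInfo destData = mock(SnapshotInfo.class);
        DataStore destStore = mock(DataStore.class);
        when(srcData.getId()).thenReturn(1L);
        when(destData.getId()).thenReturn(1L);
        when(destData.getDataStore()).thenReturn(destStore);
        when(destStore.getId()).thenReturn(2L);

        // Stub the two DAO lookups the fixed method performs.
        when(snapshotStoreDao.findBySourceSnapshot(1L, DataStoreRole.Image)).thenReturn(srcStoreRef);
        when(snapshotStoreDao.findByStoreSnapshot(DataStoreRole.Image, 2L, 1L)).thenReturn(destStoreRef);

        SecondaryStorageServiceImpl service = new SecondaryStorageServiceImpl();
        ReflectionTestUtils.setField(service, "snapshotStoreDao", snapshotStoreDao);

        service.updateDataObject(srcData, destData);

        // The destination row must end up with the source's physical size, not 0.
        assertEquals(1510604800L, destStoreRef.getPhysicalSize());
        verify(snapshotStoreDao).update(destStoreRef.getId(), destStoreRef);
    }
}

The asserted value, 1510604800 bytes, is the same size exercised in the manual tests later in this thread.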

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ el10 ✔️ debian ✔️ suse15. SL-JID 15860

bernardodemarco (Member) left a comment

code lgtm

@DaanHoogland (Contributor)

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

sureshanaparti (Contributor) left a comment

clgtm

@blueorangutan

[SF] Trillian test result (tid-14899)
Environment: kvm-ol8 (x2), zone: Advanced Networking with Mgmt server ol8
Total time taken: 52964 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr12166-t14899-kvm-ol8.zip
Smoke tests completed. 150 look OK, 0 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
(none; all 150 smoke tests passed)

shwstppr (Contributor) commented Dec 4, 2025

@hsato03 should this go in 4.20 branch?

github-actions bot commented Dec 4, 2025

This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.

@hsato03 hsato03 changed the base branch from main to 4.20 December 4, 2025 14:04
@hsato03 hsato03 closed this Dec 4, 2025
@hsato03 hsato03 reopened this Dec 4, 2025
hsato03 (Collaborator, Author) commented Dec 4, 2025

> @hsato03 should this go in 4.20 branch?

Sure, I changed the base branch to 4.20.

@DaanHoogland (Contributor)

@blueorangutan package

@blueorangutan

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ el10 ✔️ debian ✔️ suse15. SL-JID 15941

@DaanHoogland (Contributor)

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@sureshanaparti sureshanaparti added this to the 4.20.3 milestone Dec 8, 2025
@blueorangutan

[SF] Trillian test result (tid-14915)
Environment: kvm-ol8 (x2), zone: Advanced Networking with Mgmt server ol8
Total time taken: 57300 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr12166-t14915-kvm-ol8.zip
Smoke tests completed. 138 look OK, 3 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
test_uservm_host_control_state Failure 17.08 test_host_control_state.py
test_01_sys_vm_start Failure 0.10 test_secondary_storage.py
test_03_secured_to_nonsecured_vm_migration Error 399.71 test_vm_life_cycle.py
test_04_nonsecured_to_secured_vm_migration Error 0.01 test_vm_life_cycle.py

@RosiKyu RosiKyu self-assigned this Jan 12, 2026
RosiKyu (Collaborator) commented Jan 12, 2026

@blueorangutan test

@blueorangutan

@RosiKyu a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-15167)
Environment: kvm-ol8 (x2), zone: Advanced Networking with Mgmt server ol8
Total time taken: 52226 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr12166-t15167-kvm-ol8.zip
Smoke tests completed. 140 look OK, 1 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
ContextSuite context=TestClusterDRS>:setup Error 0.00 test_cluster_drs.py

RosiKyu (Collaborator) left a comment

LGTM

Tested on a KVM environment with two secondary storages. Verified that the snapshot's physical size is correctly preserved after migration using both the migrateResourceToAnotherSecondaryStorage and migrateSecondaryStorageData APIs. Also confirmed that the migrated snapshot remains functional by creating a volume from it and attaching that volume to a VM.

Test Execution Summary

Test Case | Description | API Tested | Result
TC1 | Snapshot physical size preserved after single resource migration | migrateResourceToAnotherSecondaryStorage | PASS
TC2 | Snapshot physical size preserved after bulk migration | migrateSecondaryStorageData | PASS
TC3 | Migrated snapshot remains usable | createVolume + attachVolume | PASS

Detailed Test Report:

TC1: Verify volume snapshot physical size is preserved after migration between secondary storages

Objective: Verify that when a volume snapshot is migrated from one secondary storage to another using the migrateResourceToAnotherSecondaryStorage API, its physical size is correctly preserved and not set to 0.

Test Steps:

  1. Deploy a VM with a root volume
  2. Stop the VM
  3. Create a volume snapshot
  4. Record the snapshot's physical size from the database
  5. Migrate the snapshot to a different secondary storage using migrateResourceToAnotherSecondaryStorage
  6. Verify the physical size is preserved on the destination secondary storage

Expected Result: The snapshot's physical size on the destination secondary storage should match the original physical size (1510604800 bytes), not 0.

Actual Result: The snapshot's physical size was correctly preserved as 1510604800 bytes after migration to the destination secondary storage.

Test Evidence:

Environment: 1 MS, 2 Secondary Storages
Secondary storages:

(localcloud) 🐱 > list imagestores
{
  "count": 2,
  "imagestore": [
    {
      "id": "09dbab49-82da-43e1-9031-815dd0fc5fb8",
      "name": "NFS://10.0.32.4/acs/secondary/ref-trl-10708-k-Mol9-rositsa-kyuchukova/ref-trl-10708-k-Mol9-rositsa-kyuchukova-sec1",
      ...
    },
    {
      "id": "d6d105da-939e-4e60-b369-8a4a5ed6bb0a",
      "name": "NFS://10.0.32.4/acs/secondary/ref-trl-10708-k-Mol9-rositsa-kyuchukova/ref-trl-10708-k-Mol9-rositsa-kyuchukova-sec2",
      ...
    }
  ]
}
  • Deploy and stop VM:
(localcloud) 🐱 > deploy virtualmachine name=test-vm serviceofferingid=5de91f25-5d53-485f-8c63-050499315fe1 templateid=e2b7cd99-f87b-11f0-983f-1e00fa000343 zoneid=6b3966e1-7372-4033-bf45-1af3c5bcbc25 networkids=6bde88da-a68b-4bf0-938a-c849bf85c85a
{
  "virtualmachine": {
    "id": "4e98d18e-3f1c-4d9f-8b64-b2facc7e8e71",
    "name": "test-vm",
    "state": "Running",
    ...
  }
}
  • Create snapshot (after stopping VM):
(localcloud) 🐱 > create snapshot volumeid=b196a9aa-2d9e-4d8f-9a72-01ea3820bb84
{
  "snapshot": {
    "id": "0d9550f3-7f93-4c5f-91f5-bd81b12daca7",
    "name": "test-vm_ROOT-4_20260126065644",
    "physicalsize": 1510604800,
    "state": "BackedUp",
    "volumeid": "b196a9aa-2d9e-4d8f-9a72-01ea3820bb84",
    ...
  }
}
  • Verify snapshot physical size before migration (on sec1, store_id=1):
mysql> SELECT id, snapshot_id, store_id, store_role, state, physical_size, install_path 
       FROM cloud.snapshot_store_ref 
       WHERE snapshot_id IN (SELECT id FROM cloud.snapshots WHERE uuid='0d9550f3-7f93-4c5f-91f5-bd81b12daca7');
+----+-------------+----------+------------+-------+---------------+------------------------------------------------------------------------------------------+
| id | snapshot_id | store_id | store_role | state | physical_size | install_path                                                                             |
+----+-------------+----------+------------+-------+---------------+------------------------------------------------------------------------------------------+
|  1 |           1 |        1 | Primary    | Ready |    8589934592 | /mnt/a4e6c7f4-aad3-3063-bef4-45d2face66df/snapshots/5ce8ff95-9983-4b1f-895b-de7612d487c0 |
|  2 |           1 |        1 | Image      | Ready |    1510604800 | snapshots/2/3/5ce8ff95-9983-4b1f-895b-de7612d487c0                                       |
+----+-------------+----------+------------+-------+---------------+------------------------------------------------------------------------------------------+
  • Migrate snapshot from sec1 to sec2:
(localcloud) 🐱 > migrateResourceToAnotherSecondaryStorage snapshots=0d9550f3-7f93-4c5f-91f5-bd81b12daca7 srcpool=09dbab49-82da-43e1-9031-815dd0fc5fb8 destpool=d6d105da-939e-4e60-b369-8a4a5ed6bb0a
{
  "imagestore": {
    "message": "Migration completed. successful migrations: 1",
    "success": true
  }
}
  • Verify physical size is preserved after migration (on sec2, store_id=2):
mysql> SELECT id, snapshot_id, store_id, store_role, state, physical_size, install_path 
       FROM cloud.snapshot_store_ref 
       WHERE snapshot_id IN (SELECT id FROM cloud.snapshots WHERE uuid='0d9550f3-7f93-4c5f-91f5-bd81b12daca7');
+----+-------------+----------+------------+-------+---------------+------------------------------------------------------------------------------------------+
| id | snapshot_id | store_id | store_role | state | physical_size | install_path                                                                             |
+----+-------------+----------+------------+-------+---------------+------------------------------------------------------------------------------------------+
|  1 |           1 |        1 | Primary    | Ready |    8589934592 | /mnt/a4e6c7f4-aad3-3063-bef4-45d2face66df/snapshots/5ce8ff95-9983-4b1f-895b-de7612d487c0 |
|  3 |           1 |        2 | Image      | Ready |    1510604800 | snapshots/2/3/5ce8ff95-9983-4b1f-895b-de7612d487c0                                       |
+----+-------------+----------+------------+-------+---------------+------------------------------------------------------------------------------------------+

Test Result: PASS. The physical size (1510604800 bytes) was correctly preserved on the destination secondary storage (store_id=2) after migration.

TC2: Verify volume snapshot physical size is preserved using migrateSecondaryStorageData API (bulk migration)

Objective: Verify that when volume snapshots are migrated using the bulk migrateSecondaryStorageData API, the physical size is correctly preserved and not set to 0.

Test Steps:

  1. Using the existing snapshot from TC1 (currently on sec2)
  2. Migrate all data from sec2 to sec1 using migrateSecondaryStorageData with migrationtype=Complete
  3. Verify the physical size is preserved on the destination secondary storage

Expected Result: The snapshot's physical size on the destination secondary storage should match the original physical size (1510604800 bytes), not 0.

Actual Result: The snapshot's physical size was correctly preserved as 1510604800 bytes after bulk migration to the destination secondary storage.

Test Evidence:

  • Initial state - snapshot on sec2 (store_id=2) from TC1:
mysql> SELECT id, snapshot_id, store_id, store_role, state, physical_size, install_path 
       FROM cloud.snapshot_store_ref 
       WHERE snapshot_id IN (SELECT id FROM cloud.snapshots WHERE uuid='0d9550f3-7f93-4c5f-91f5-bd81b12daca7');
+----+-------------+----------+------------+-------+---------------+------------------------------------------------------------------------------------------+
| id | snapshot_id | store_id | store_role | state | physical_size | install_path                                                                             |
+----+-------------+----------+------------+-------+---------------+------------------------------------------------------------------------------------------+
|  1 |           1 |        1 | Primary    | Ready |    8589934592 | /mnt/a4e6c7f4-aad3-3063-bef4-45d2face66df/snapshots/5ce8ff95-9983-4b1f-895b-de7612d487c0 |
|  3 |           1 |        2 | Image      | Ready |    1510604800 | snapshots/2/3/5ce8ff95-9983-4b1f-895b-de7612d487c0                                       |
+----+-------------+----------+------------+-------+---------------+------------------------------------------------------------------------------------------+
  • Bulk migrate from sec2 to sec1:
(localcloud) 🐱 > migrateSecondaryStorageData srcpool=d6d105da-939e-4e60-b369-8a4a5ed6bb0a destpools=09dbab49-82da-43e1-9031-815dd0fc5fb8 migrationtype=Complete
{
  "imagestore": {
    "message": "Migration completed. successful migrations: 3",
    "migrationtype": "COMPLETE",
    "success": true
  }
}
  • Verify physical size is preserved after migration (on sec1, store_id=1):
mysql> SELECT id, snapshot_id, store_id, store_role, state, physical_size, install_path 
       FROM cloud.snapshot_store_ref 
       WHERE snapshot_id IN (SELECT id FROM cloud.snapshots WHERE uuid='0d9550f3-7f93-4c5f-91f5-bd81b12daca7');
+----+-------------+----------+------------+-------+---------------+------------------------------------------------------------------------------------------+
| id | snapshot_id | store_id | store_role | state | physical_size | install_path                                                                             |
+----+-------------+----------+------------+-------+---------------+------------------------------------------------------------------------------------------+
|  1 |           1 |        1 | Primary    | Ready |    8589934592 | /mnt/a4e6c7f4-aad3-3063-bef4-45d2face66df/snapshots/5ce8ff95-9983-4b1f-895b-de7612d487c0 |
|  4 |           1 |        1 | Image      | Ready |    1510604800 | snapshots/2/3/5ce8ff95-9983-4b1f-895b-de7612d487c0                                       |
+----+-------------+----------+------------+-------+---------------+------------------------------------------------------------------------------------------+

Test Result: PASS. The physical size (1510604800 bytes) was correctly preserved on sec1 (store_id=1) after bulk migration using migrateSecondaryStorageData.

TC3: Verify migrated snapshot remains usable (create and attach volume from snapshot)

Objective: Verify that a snapshot which has been migrated between secondary storages remains fully functional and can be used to create new volumes.

Test Steps:

  1. Using the snapshot from TC1/TC2 that has been migrated between secondary storages
  2. Create a new volume from the migrated snapshot
  3. Attach the volume to a VM to verify it's fully usable

Expected Result: Volume should be created successfully from the migrated snapshot and attach to the VM without errors.

Actual Result: Volume was created successfully with state "Ready" and attached to the VM.

Test Evidence:

  • Create volume from migrated snapshot:
(localcloud) 🐱 > create volume name=vol-from-snapshot snapshotid=0d9550f3-7f93-4c5f-91f5-bd81b12daca7
{
  "volume": {
    "id": "c28f97c6-3a19-48cc-9dd4-2641a59695b4",
    "name": "vol-from-snapshot",
    "snapshotid": "0d9550f3-7f93-4c5f-91f5-bd81b12daca7",
    "size": 8589934592,
    "state": "Ready",
    "storage": "ref-trl-10708-k-Mol9-rositsa-kyuchukova-kvm-pri2",
    "type": "DATADISK",
    ...
  }
}
  • Attach volume to VM:
(localcloud) 🐱 > attach volume id=c28f97c6-3a19-48cc-9dd4-2641a59695b4 virtualmachineid=4e98d18e-3f1c-4d9f-8b64-b2facc7e8e71
{
  "volume": {
    "id": "c28f97c6-3a19-48cc-9dd4-2641a59695b4",
    "name": "vol-from-snapshot",
    "attached": "2026-01-26T07:15:30+0000",
    "deviceid": 1,
    "state": "Ready",
    "virtualmachineid": "4e98d18e-3f1c-4d9f-8b64-b2facc7e8e71",
    "vmdisplayname": "test-vm",
    "vmstate": "Stopped",
    ...
  }
}

Test Result: PASS

@borisstoyanov borisstoyanov merged commit 36edd92 into apache:4.20 Jan 26, 2026
24 of 26 checks passed
DaanHoogland pushed a commit that referenced this pull request Jan 26, 2026
* 4.22:
  fix install path for systemvm templates when introducing new sec storage (#11605)
  fix Sensitive Data Exposure Through Exception Logging in OVM Hypervis… (#12032)
  Fix snapshot physical size after migration (#12166)
  ConfigDrive: use file absolute path instead of canonical path to create ISO (#11623)
  Add log for null templateVO (#12406)
  snapshot: fix listSnapshots for volume which got delete and whose storage pool got deleted (#12433)
  Notify user if template upgrade is not required (#12483)
  Fix: proper permissions for systemvm template registrations on hardened systems (#12098)
  Allow modification of user vm details if user.vm.readonly.details is empty (#10456)
  NPE fix while deleting storage pool when pool has detached volumes (#12451)