
kvm: ref-count storage pool usage #9498

Open
wants to merge 1 commit into 4.19 from 4.19-kvm-refcount-storagepool-usage

Conversation

rp-
Contributor

@rp- rp- commented Aug 7, 2024

Description

If a storage pool is used by, for example, two concurrent snapshot->template actions, the first action to finish removed the netfs mount point out from under the other action.
The storage pools are now usage ref-counted and are only deleted once there are no more users.

Fixes: #8899
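
A minimal sketch of the idea in plain Java (illustrative class and method names, not the PR's exact code, which lives in LibvirtStorageAdaptor): every user of a pool increments a per-UUID counter, and only the last user actually tears down the netfs mount point.

    import java.util.HashMap;
    import java.util.Map;

    class RefCountedPoolSketch {
        private final Map<String, Integer> refCounts = new HashMap<>();

        /** Called whenever an action starts using the pool. */
        synchronized void acquire(String uuid) {
            refCounts.merge(uuid, 1, Integer::sum);
        }

        /** @return true if the caller was the last user and may remove the mount point */
        synchronized boolean release(String uuid) {
            Integer count = refCounts.get(uuid);
            if (count == null || count <= 1) {
                refCounts.remove(uuid);
                return true;   // no more users: safe to unmount
            }
            refCounts.put(uuid, count - 1);
            return false;      // still in use by another concurrent action
        }
    }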

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • build/CI
  • test (unit or integration test code)

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

Bug Severity

  • BLOCKER
  • Critical
  • Major
  • Minor
  • Trivial

Screenshots (if appropriate):

How Has This Been Tested?

Ran several snapshot-to-template actions that are executed on the same host.

How did you try to break this feature and the system with this change?

@rp- rp- self-assigned this Aug 7, 2024

codecov bot commented Aug 7, 2024

Codecov Report

Attention: Patch coverage is 59.09091% with 9 lines in your changes missing coverage. Please review.

Project coverage is 15.11%. Comparing base (a0932b0) to head (c599a58).
Report is 1 commit behind head on 4.19.

Files with missing lines Patch % Lines
.../hypervisor/kvm/storage/LibvirtStorageAdaptor.java 59.09% 8 Missing and 1 partial ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##               4.19    #9498      +/-   ##
============================================
+ Coverage     15.08%   15.11%   +0.02%     
+ Complexity    11192    11190       -2     
============================================
  Files          5406     5406              
  Lines        473215   473237      +22     
  Branches      61680    58357    -3323     
============================================
+ Hits          71386    71523     +137     
- Misses       393880   393906      +26     
+ Partials       7949     7808     -141     
Flag Coverage Δ
uitests 4.76% <ø> (+0.46%) ⬆️
unittests 15.80% <59.09%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.


Contributor

@DaanHoogland DaanHoogland left a comment

clgtm. You do have a good test scenario for this, do you @rp-? Or is it only intermittent (i.e. not automatable)?

@rp-
Contributor Author

rp- commented Aug 12, 2024

clgtm. You do have a good test scenario for this, do you @rp-? Or is it only intermittent (i.e. not automatable)?

I'm not sure it is easy to automate that reproducibly, as it is a timing/parallelism issue.
I haven't checked yet whether NFS primary storage uses the same code paths, but I might do that this week to see if it would also be affected.

But we have 2 customers who haven't reported any issues with this yet.

@DaanHoogland
Contributor

@blueorangutan package

@blueorangutan

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 10620

@DaanHoogland
Contributor

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-11065)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 47025 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr9498-t11065-kvm-ol8.zip
Smoke tests completed. 127 look OK, 6 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
test_01_add_primary_storage_disabled_host Error 0.33 test_primary_storage.py
test_01_primary_storage_nfs Error 0.37 test_primary_storage.py
ContextSuite context=TestStorageTags>:setup Error 0.63 test_primary_storage.py
test_01_primary_storage_scope_change Error 0.21 test_primary_storage_scope.py
ContextSuite context=TestCpuCapServiceOfferings>:setup Error 0.00 test_service_offerings.py
test_02_list_snapshots_with_removed_data_store Error 8.74 test_snapshots.py
test_02_list_snapshots_with_removed_data_store Error 8.75 test_snapshots.py
test_01_deploy_vm_on_specific_host Error 0.11 test_vm_deployment_planner.py
test_04_deploy_vm_on_host_override_pod_and_cluster Error 0.14 test_vm_deployment_planner.py
test_01_migrate_VM_and_root_volume Error 83.40 test_vm_life_cycle.py
test_02_migrate_VM_with_two_data_disks Error 50.91 test_vm_life_cycle.py
test_01_secure_vm_migration Error 134.41 test_vm_life_cycle.py
test_01_secure_vm_migration Error 134.42 test_vm_life_cycle.py
test_08_migrate_vm Error 0.06 test_vm_life_cycle.py

@rohityadavcloud rohityadavcloud added this to the 4.19.2.0 milestone Sep 3, 2024
@rohityadavcloud
Member

@blueorangutan package

@blueorangutan

@rohityadavcloud a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 10927

@blueorangutan

Packaging result [SF]: ✖️ el8 ✖️ el9 ✔️ debian ✖️ suse15. SL-JID 10950

@rp-
Contributor Author

rp- commented Sep 5, 2024

I guess the failed packaging is not related to this PR?

@DaanHoogland
Contributor

I guess the failed packaging is not related to this PR?

11:02:25 [ERROR] Failures: 
11:02:25 [ERROR]   VMSchedulerImplTest.testScheduleNextJobScheduleCurrentSchedule:262 expected:<Wed Sep 04 09:02:00 UTC 2024> but was:<Wed Sep 04 09:03:00 UTC 2024>

Looks like a test was too slow, so it might have to do with an overly busy container. Retrying, @rp-.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 10988

@DaanHoogland
Contributor

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-11364)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 56881 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr9498-t11364-kvm-ol8.zip
Smoke tests completed. 125 look OK, 8 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
test_01_add_primary_storage_disabled_host Error 0.66 test_primary_storage.py
test_01_primary_storage_nfs Error 0.33 test_primary_storage.py
ContextSuite context=TestStorageTags>:setup Error 0.62 test_primary_storage.py
test_01_primary_storage_scope_change Error 0.22 test_primary_storage_scope.py
ContextSuite context=TestCpuCapServiceOfferings>:setup Error 0.00 test_service_offerings.py
test_02_list_snapshots_with_removed_data_store Error 9.77 test_snapshots.py
test_02_list_snapshots_with_removed_data_store Error 9.77 test_snapshots.py
test_01_volume_usage Failure 848.98 test_usage.py
test_01_deploy_vm_on_specific_host Error 0.10 test_vm_deployment_planner.py
test_04_deploy_vm_on_host_override_pod_and_cluster Error 0.13 test_vm_deployment_planner.py
test_01_migrate_VM_and_root_volume Error 87.68 test_vm_life_cycle.py
test_02_migrate_VM_with_two_data_disks Error 52.01 test_vm_life_cycle.py
test_01_secure_vm_migration Error 316.92 test_vm_life_cycle.py
test_02_unsecure_vm_migration Error 459.21 test_vm_life_cycle.py
test_08_migrate_vm Error 0.09 test_vm_life_cycle.py
test_06_download_detached_volume Error 310.28 test_volumes.py

@DaanHoogland
Contributor

@blueorangutan package

@blueorangutan

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 11033

@DaanHoogland
Contributor

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-11408)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 44373 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr9498-t11408-kvm-ol8.zip
Smoke tests completed. 127 look OK, 6 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
test_01_add_primary_storage_disabled_host Error 0.31 test_primary_storage.py
test_01_primary_storage_nfs Error 0.30 test_primary_storage.py
ContextSuite context=TestStorageTags>:setup Error 0.60 test_primary_storage.py
test_01_primary_storage_scope_change Error 0.21 test_primary_storage_scope.py
ContextSuite context=TestCpuCapServiceOfferings>:setup Error 0.00 test_service_offerings.py
test_02_list_snapshots_with_removed_data_store Error 8.63 test_snapshots.py
test_02_list_snapshots_with_removed_data_store Error 8.63 test_snapshots.py
test_01_deploy_vm_on_specific_host Error 0.09 test_vm_deployment_planner.py
test_04_deploy_vm_on_host_override_pod_and_cluster Error 0.14 test_vm_deployment_planner.py
test_01_migrate_VM_and_root_volume Error 82.29 test_vm_life_cycle.py
test_02_migrate_VM_with_two_data_disks Error 50.76 test_vm_life_cycle.py
test_01_secure_vm_migration Error 134.37 test_vm_life_cycle.py
test_01_secure_vm_migration Error 134.37 test_vm_life_cycle.py
test_08_migrate_vm Error 0.08 test_vm_life_cycle.py

@rp-
Contributor Author

rp- commented Sep 11, 2024

Are there more logs related to this "needs access to storage pool" message?
Or is this another problem?

@DaanHoogland
Contributor

Are there more logs related to this "needs access to storage pool" message? Or is this another problem?

Sorry @rp-, I'm missing context here. If you are referring to the smoke tests, the download does contain the management server logs.

@rp-
Contributor Author

rp- commented Sep 11, 2024

I see exceptions like this:

2024-09-06 16:25:57,620 DEBUG [c.c.a.t.Request] (AgentManager-Handler-13:null) (logid:) Seq 1-8334192585425814054: Processing:  { Ans: , MgmtId: 32986405799053, via: 1, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: libvirt failed to mount storage pool 97fc931d-601a-3ec4-b2bd-5634380ea92b at /mnt/97fc931d-601a-3ec4-b2bd-5634380ea92b
	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.checkNetfsStoragePoolMounted(LibvirtStorageAdaptor.java:284)
	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:787)
	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:364)
	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:358)
	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1929)
	at com.cloud.agent.Agent.processRequest(Agent.java:683)
	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1106)
	at com.cloud.utils.nio.Task.call(Task.java:83)
	at com.cloud.utils.nio.Task.call(Task.java:29)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
","wait":"0","bypassHostMaintenance":"false"}}] }

But here the agent logs would be interesting.

@DaanHoogland
Contributor

Ah, no, I don't have those. I'll run again without the teardown step and have a look.

@DaanHoogland
Contributor

@blueorangutan test keepEnv

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-11452)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 43615 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr9498-t11452-kvm-ol8.zip
Smoke tests completed. 127 look OK, 6 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
test_01_add_primary_storage_disabled_host Error 0.41 test_primary_storage.py
test_01_primary_storage_nfs Error 0.30 test_primary_storage.py
ContextSuite context=TestStorageTags>:setup Error 0.61 test_primary_storage.py
test_01_primary_storage_scope_change Error 0.29 test_primary_storage_scope.py
ContextSuite context=TestCpuCapServiceOfferings>:setup Error 0.00 test_service_offerings.py
test_02_list_snapshots_with_removed_data_store Error 8.68 test_snapshots.py
test_02_list_snapshots_with_removed_data_store Error 8.68 test_snapshots.py
test_01_deploy_vm_on_specific_host Error 0.09 test_vm_deployment_planner.py
test_04_deploy_vm_on_host_override_pod_and_cluster Error 0.17 test_vm_deployment_planner.py
test_01_migrate_VM_and_root_volume Error 81.31 test_vm_life_cycle.py
test_02_migrate_VM_with_two_data_disks Error 53.91 test_vm_life_cycle.py
test_01_secure_vm_migration Error 134.43 test_vm_life_cycle.py
test_01_secure_vm_migration Error 134.43 test_vm_life_cycle.py
test_08_migrate_vm Error 0.07 test_vm_life_cycle.py

Comment on lines 641 to 673
    /**
     * Thread-safe increment storage pool usage refcount
     * @param uuid UUID of the storage pool to increment the count
     */
    private void incStoragePoolRefCount(String uuid) {
        synchronized (storagePoolRefCounts) {
            int refCount = storagePoolRefCounts.computeIfAbsent(uuid, k -> 0);
            refCount += 1;
            storagePoolRefCounts.put(uuid, refCount);
        }
    }

    /**
     * Thread-safe decrement storage pool usage refcount for the given uuid and return if storage pool still in use.
     * @param uuid UUID of the storage pool to decrement the count
     * @return true if the storage pool is still used, else false.
     */
    private boolean decStoragePoolRefCount(String uuid) {
        synchronized (storagePoolRefCounts) {
            Integer refCount = storagePoolRefCounts.get(uuid);
            if (refCount != null && refCount > 1) {
                s_logger.debug(String.format("Storage pool %s still in use, refCount %d", uuid, refCount));
                refCount -= 1;
                storagePoolRefCounts.put(uuid, refCount);
                return true;
            } else {
                storagePoolRefCounts.remove(uuid);
                return false;
            }
        }
    }
Contributor

@rp-, should inc and dec maybe both call the same bit of synchronised code, so that an inc and a dec cannot happen at the same time either? The storagePoolRefCounts can be a synchronised map to prevent strange things happening to the map itself, but we also want to prevent concurrent access to its elements.

Suggested change
    /**
     * Thread-safe increment storage pool usage refcount
     * @param uuid UUID of the storage pool to increment the count
     */
    private void incStoragePoolRefCount(String uuid) {
        synchronized (storagePoolRefCounts) {
            int refCount = storagePoolRefCounts.computeIfAbsent(uuid, k -> 0);
            refCount += 1;
            storagePoolRefCounts.put(uuid, refCount);
        }
    }

    /**
     * Thread-safe decrement storage pool usage refcount for the given uuid and return if storage pool still in use.
     * @param uuid UUID of the storage pool to decrement the count
     * @return true if the storage pool is still used, else false.
     */
    private boolean decStoragePoolRefCount(String uuid) {
        synchronized (storagePoolRefCounts) {
            Integer refCount = storagePoolRefCounts.get(uuid);
            if (refCount != null && refCount > 1) {
                s_logger.debug(String.format("Storage pool %s still in use, refCount %d", uuid, refCount));
                refCount -= 1;
                storagePoolRefCounts.put(uuid, refCount);
                return true;
            } else {
                storagePoolRefCounts.remove(uuid);
                return false;
            }
        }
    }

    /**
     * adjust refcount
     */
    private int adjustStoragePoolRefCount(String uuid, int adjustment) {
        synchronized (uuid) {
            // some access on the storagePoolRefCounts element for uuid
            Integer refCount = storagePoolRefCounts.computeIfAbsent(uuid, k -> 0);
            refCount += adjustment;
            storagePoolRefCounts.put(uuid, refCount);
            if (refCount < 1) {
                storagePoolRefCounts.remove(uuid);
            } else {
                storagePoolRefCounts.put(uuid, refCount);
            }
            return refCount;
        }
    }

    /**
     * Thread-safe increment storage pool usage refcount
     * @param uuid UUID of the storage pool to increment the count
     */
    private void incStoragePoolRefCount(String uuid) {
        adjustStoragePoolRefCount(uuid, 1);
    }

    /**
     * Thread-safe decrement storage pool usage refcount for the given uuid and return if storage pool still in use.
     * @param uuid UUID of the storage pool to decrement the count
     * @return true if the storage pool is still used, else false.
     */
    private boolean decStoragePoolRefCount(String uuid) {
        return adjustStoragePoolRefCount(uuid, -1) > 0;
    }

???

Contributor Author

I can't fully follow you here; any map access is synchronized, so there can't be any concurrent access to elements?

Contributor Author

Or, it just struck me that maybe the LibvirtStorageAdaptor is not a singleton object and there could be more instances, but then it would probably be good enough to make the map static?

Contributor

two remarks:

  1. Can you consider a thread-safe collection? i.e. a concurrent map.
  2. The map can be guarded (as a concurrent map or by your own synchronised blocks), but I don't think that will guard its elements from concurrent access, only the structure of the map itself.
  3. (there is always a third) Blocking access to the map globally may impact performance (though in this case that would only matter on very big hosts).

I think you are right about making the map static, btw.

Contributor Author

  1. Sure, I can also use a ConcurrentHashMap.
  2. Sure, if you leak the elements out of the synchronization blocks they are not thread-safe anymore, but the current code doesn't do that.
  3. As far as I've read, ConcurrentHashMap only locks at element level, so switching to that would already be good (see the sketch below).
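
For illustration, a sketch of what element-level atomicity with ConcurrentHashMap could look like (class and method names are assumptions, not the PR's code): merge() and compute() update a single key atomically, so the counter needs no explicit synchronized block.

    import java.util.concurrent.ConcurrentHashMap;

    class PoolRefCountSketch {
        private static final ConcurrentHashMap<String, Integer> REF_COUNTS = new ConcurrentHashMap<>();

        static void inc(String uuid) {
            REF_COUNTS.merge(uuid, 1, Integer::sum);   // atomic per key
        }

        /** @return true if the pool is still in use after the decrement */
        static boolean dec(String uuid) {
            // compute() runs atomically for this key; returning null removes the entry
            Integer remaining = REF_COUNTS.compute(uuid, (k, v) -> (v == null || v <= 1) ? null : v - 1);
            return remaining != null;
        }
    }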

If a storage pool is used by e.g. 2 concurrent snapshot->template
actions, the first action to finish removed the netfs mount
point for the other action.
Now the storage pools are usage ref-counted and will only be
deleted if there are no more users.
@rp- rp- force-pushed the 4.19-kvm-refcount-storagepool-usage branch from 669f29d to c599a58 Compare September 17, 2024 14:45
@rp-
Contributor Author

rp- commented Sep 17, 2024

@blueorangutan package

@blueorangutan

@rp- a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 11134

@rp-
Contributor Author

rp- commented Sep 17, 2024

@blueorangutan test

1 similar comment
@DaanHoogland
Contributor

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-11507)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 44477 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr9498-t11507-kvm-ol8.zip
Smoke tests completed. 127 look OK, 6 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
test_01_add_primary_storage_disabled_host Error 0.42 test_primary_storage.py
test_01_primary_storage_nfs Error 0.33 test_primary_storage.py
ContextSuite context=TestStorageTags>:setup Error 0.66 test_primary_storage.py
test_01_primary_storage_scope_change Error 0.27 test_primary_storage_scope.py
ContextSuite context=TestCpuCapServiceOfferings>:setup Error 0.00 test_service_offerings.py
test_02_list_snapshots_with_removed_data_store Error 10.76 test_snapshots.py
test_02_list_snapshots_with_removed_data_store Error 10.76 test_snapshots.py
test_01_deploy_vm_on_specific_host Error 0.11 test_vm_deployment_planner.py
test_04_deploy_vm_on_host_override_pod_and_cluster Error 0.14 test_vm_deployment_planner.py
test_01_migrate_VM_and_root_volume Error 81.46 test_vm_life_cycle.py
test_02_migrate_VM_with_two_data_disks Error 56.21 test_vm_life_cycle.py
test_01_secure_vm_migration Error 134.43 test_vm_life_cycle.py
test_01_secure_vm_migration Error 134.44 test_vm_life_cycle.py
test_08_migrate_vm Error 0.07 test_vm_life_cycle.py

Comment on lines +650 to +655
            storagePoolRefCounts.put(uuid, refCount);
            if (refCount < 1) {
                storagePoolRefCounts.remove(uuid);
            } else {
                storagePoolRefCounts.put(uuid, refCount);
            }
Contributor

Suggested change
            storagePoolRefCounts.put(uuid, refCount);
            if (refCount < 1) {
                storagePoolRefCounts.remove(uuid);
            } else {
                storagePoolRefCounts.put(uuid, refCount);
            }
            if (refCount < 1) {
                storagePoolRefCounts.remove(uuid);
            } else {
                storagePoolRefCounts.put(uuid, refCount);
            }

Why put the refCount first, only to remove it or put it again in the next instruction?

     * adjust refcount
     */
    private int adjustStoragePoolRefCount(String uuid, int adjustment) {
        synchronized (uuid) {
Contributor

Will this synchronized work as expected? Do we guarantee that every String with the same uuid as value is the same object?
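
For context (illustrative only, not part of the PR): two equal UUID strings are in general distinct objects, so synchronizing on the uuid parameter does not guarantee mutual exclusion. One hedged alternative is to lock on a canonical per-key object; the class and field names below are assumptions.

    import java.util.concurrent.ConcurrentHashMap;

    class UuidLockSketch {
        // Canonical lock object per uuid: equal uuid strings map to the same monitor.
        private final ConcurrentHashMap<String, Object> locks = new ConcurrentHashMap<>();
        private final ConcurrentHashMap<String, Integer> refCounts = new ConcurrentHashMap<>();

        int adjust(String uuid, int adjustment) {
            synchronized (locks.computeIfAbsent(uuid, k -> new Object())) {
                int refCount = refCounts.getOrDefault(uuid, 0) + adjustment;
                if (refCount < 1) {
                    refCounts.remove(uuid);
                } else {
                    refCounts.put(uuid, refCount);
                }
                return refCount;
            }
        }

        public static void main(String[] args) {
            // Equal values but different objects, hence different monitors:
            String a = new String("97fc931d-601a-3ec4-b2bd-5634380ea92b");
            String b = new String("97fc931d-601a-3ec4-b2bd-5634380ea92b");
            System.out.println(a.equals(b) && a != b);   // prints true
        }
    }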
