volatile_memory: Only use volatile copy for small objects #117

Conversation

@rbradford (Contributor) commented Oct 2, 2020

Here, small objects are those that are smaller than the native data width of the platform. This ensures that volatile, alignment-safe reads and writes are used when updating structures that are sensitive to this, such as virtio devices, where the spec requires writes to be atomic.

Fixes: cloud-hypervisor/cloud-hypervisor#1258
Fixes: #100

Signed-off-by: Rob Bradford [email protected]
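
For illustration only, here is a minimal sketch of the size-based dispatch described above; the function name, the exact threshold comparison, and the alignment check are assumptions for the example, not the actual vm-memory code.

```rust
use std::mem::{align_of, size_of};
use std::ptr;

/// Illustrative sketch (not the vm-memory implementation): write `val` into
/// guest memory at `dst`, using a single volatile store when the object is
/// no wider than the platform's native data width and the destination is
/// suitably aligned, so the write cannot be torn or elided by the compiler.
/// Larger or misaligned objects fall back to a plain, faster byte copy.
unsafe fn write_obj_sketch<T: Copy>(val: T, dst: *mut T) {
    let aligned = (dst as usize) % align_of::<T>() == 0;
    if size_of::<T>() <= size_of::<usize>() && aligned {
        // Small, aligned object (e.g. a virtio ring index): one volatile
        // store, which is atomic on common ISAs for naturally aligned data.
        ptr::write_volatile(dst, val);
    } else {
        // Bulk path: an ordinary memcpy-style copy; tearing is acceptable
        // here and this is much faster for large structures.
        ptr::copy_nonoverlapping(&val as *const T as *const u8, dst as *mut u8, size_of::<T>());
    }
}
```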

@rbradford force-pushed the 2020-10-02-only-use-torn-path-for-slow-writes branch from 2042106 to 6f2f710 on October 2, 2020 14:50
@rbradford (Contributor, Author) commented:

Consensus in #100 seems to be that this is the right approach. It would be nice if we could drop our fork of vm-memory.

@jiangliu (Member) commented Oct 2, 2020

The vm-memory crate now has interfaces for atomic access. Would it be suitable to split the existing vm-memory interfaces into byte-stream access and object access?
Currently, read_obj()/write_obj() could be classified as object access, and all other existing interfaces of GuestMemory are byte-stream accesses. So instead of a heuristic algorithm, we could do the following (a rough sketch follows the list):

  1. use copy_slice() for read_obj()/write_obj() with a correctly aligned address
  2. revert to the original slice.write(buf) for the other access methods
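
A rough illustration of that split (the trait and method signatures below are hypothetical, simplified stand-ins rather than the real Bytes/GuestMemory API):

```rust
/// Hypothetical trait used only to illustrate the proposed split; the real
/// vm-memory traits differ in naming, generics, and error handling.
trait GuestAccessSketch {
    /// Object access: keeps the volatile, alignment-aware copy_slice() path
    /// so device-visible fields (e.g. virtio ring indexes) cannot be torn.
    fn read_obj<T: Copy + Default>(&self, offset: usize) -> T;
    fn write_obj<T: Copy>(&self, val: T, offset: usize);

    /// Byte-stream access: reverts to the plain slice.write(buf)-style copy,
    /// the fastest path for bulk data where tearing does not matter.
    fn read_slice(&self, buf: &mut [u8], offset: usize);
    fn write_slice(&self, buf: &[u8], offset: usize);
}
```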

@rbradford force-pushed the 2020-10-02-only-use-torn-path-for-slow-writes branch from 6f2f710 to 6ec2101 on October 2, 2020 15:09
@rbradford (Contributor, Author) commented:


Sorry @jiangliu I don't really follow what you are saying. Could you elaborate more, giving some concrete examples? I think you're saying that we need to check everywhere that the code is being used to ensure we're using the correct API?

@jiangliu (Member) commented Oct 2, 2020


Sorry for my poor English. What I mean is:

  1. revert #95 ("DoS issue when using virtio with rust-vmm/vm-memory")
  2. only use copy_slice() (introduced by #95) for read_obj()/write_obj()

@rbradford (Contributor, Author) commented:


Ah, cool, I get it. I think that might solve our problem as we use the .write_slice()/.read_slice() entry points on our critical paths. Are you going to propose a patch?

@jiangliu (Member) commented Oct 2, 2020


We are on Chinese National Holiday this week, so it would be great if you could help on this:)

@alexandruag (Collaborator) commented:

Hi, sorry for the late review. The changes look good! #95 was necessary because we were using read/write_obj as if they had atomic access semantics (which is required in several places, such as virtio queue handling). The recently published v0.3.0 adds the load and store methods to Bytes, which are guaranteed to perform atomic accesses (with a user-specified ordering).

At this point we can go back to a world where read/write_obj do not have to provide any guarantees w.r.t. concurrent access, which simplifies the code and reverses the performance impact. However, we first have to ensure (to the extent possible) that vm-memory users switch to the new methods for accesses that need to be atomic. We can use this patch while the transition is going on. Does that sound reasonable?
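
For reference, a consumer switching from read_obj/write_obj to the new atomic accessors might look roughly like this; the snippet assumes vm-memory v0.3.x with the backend-mmap feature enabled, and the memory layout and addresses are made up for the example.

```rust
use std::sync::atomic::Ordering;
use vm_memory::{Bytes, GuestAddress, GuestMemoryMmap};

fn main() {
    // Example setup: a single 4 KiB guest memory region starting at 0x0.
    let mem = GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 0x1000)]).unwrap();

    // A device-visible 16-bit field (think virtio ring index): use the
    // atomic store/load added in v0.3.0 instead of write_obj/read_obj.
    mem.store(1u16, GuestAddress(0x100), Ordering::Release).unwrap();
    let idx: u16 = mem.load(GuestAddress(0x100), Ordering::Acquire).unwrap();
    assert_eq!(idx, 1);
}
```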

@alexandruag (Collaborator) commented:

Hi again! Should we merge this? Also, what do you think about reverting #95 entirely at some point in the future? (including for read/write_obj since they are not actually supposed to be used for atomic accesses; we need to clarify that as part of the interface documentation as well)
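
To make that last point concrete, the contract could be documented along these lines; this is suggested wording on a stand-in trait, not the actual vm-memory docs:

```rust
/// Stand-in trait used only to sketch possible interface documentation.
trait ObjAccessDocSketch<A> {
    /// Writes an object into the container at `addr`.
    ///
    /// This method is *not* guaranteed to perform a single atomic access:
    /// a concurrent reader (for example, the guest) may observe a torn or
    /// partially written value. When atomicity is required (e.g. virtio
    /// ring indexes), use `store()` with an explicit `Ordering` instead.
    fn write_obj<T: Copy>(&self, val: T, addr: A);
}
```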

@jiangliu (Member) commented Nov 6, 2020

It sounds reasonable to merge this PR as a transitional solution to avoid regressions. Eventually we should revert to copy_from_slice.

@slp (Contributor) commented Nov 6, 2020

I was wondering when the right time would be to revert #95. Perhaps we can use the recent jump to 0.3.x as an opportunity to do it now? Users needing the old behavior can stay on 0.2.2.

@alexandruag (Collaborator) commented:

I think the revert should happen as part of a subsequent series (i.e. 0.4.x), so that people can use 0.3.x to transition to the new atomic methods while the old read/write_obj behaviour remains in place. After that, the main concerns are that consumers which still rely on the old semantics must be careful not to inadvertently upgrade to the newer versions, and we'll probably have to backport fixes for a while. I guess we should start by engaging the known customers of vm-memory (and also open an issue about it) to see how fast we can move forward. I'll start doing that (and double-check once more and merge this PR if there are no further comments) by tomorrow.

@slp (Contributor) commented Nov 9, 2020

@alexandruag Sounds like a sensible strategy to me.

@alexandruag merged commit ca43226 into rust-vmm:master on Nov 12, 2020

Successfully merging this pull request may close these issues.

  - Performance degradation after fixing #95
  - Major performance change with vm-memory bump
4 participants