Query performance optimization #3
v1 was developed this way. The main problem with this approach is that if your game adds, removes and modifies a lot of entities, performance would be horrible. I haven't found an efficient way to implement a similar caching system. The perf bottleneck comes from pushing every matched entity into an intermediate array on each query. One way around it would be to expose the query as a generator:

```js
// entity_manager.js
EntityManager.prototype.query = function *() {
  const mask = this._components.masks(...arguments)
  for (var id = 0; id < this._entityCounter; id++) {
    // yield only entities that are alive and hold every queried component
    if (this._entityFlag[id] === ENTITY_ALIVE && (this._entityMask[id] & mask) === mask) {
      yield this._entityInst[id]
    }
  }
}
```

```js
// my_system.js
update(em) {
  for (let entity of em.query(A, B, C)) {
    // do stuff
  }
}
```

This would solve all perf issues since no intermediate array would be created.
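For contrast, the array-building query that the bottleneck refers to would look roughly like this. This is a reconstruction from the discussion, not the actual source; only the `push` into `result` differs from the generator above:

```js
// Hypothetical array-building query: each call allocates and fills an
// intermediate array, which is the push cost discussed in this thread.
EntityManager.prototype.query = function() {
  const mask = this._components.masks(...arguments)
  const result = []
  for (var id = 0; id < this._entityCounter; id++) {
    if (this._entityFlag[id] === ENTITY_ALIVE && (this._entityMask[id] & mask) === mask) {
      result.push(this._entityInst[id])
    }
  }
  return result
}
```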
If you look closer at the benchmarks you will see that the numbers are currently pretty good! Let's do some quick math: it comes out to 15,000 ticks/s, i.e. roughly 66 µs per tick. If you succeed in implementing a fast, bug-free caching system, I would be happy to merge it! 😉
Interesting! I like the yield idea.
So I did an initial implementation and the tests are passing, but I'm adding more tests to be absolutely sure it works, since it's a very error-prone thing.

Initial test results: [figures for 100k / 10k / 1k entities, the cached implementation vs. your current one]

Not as good as I had hoped: it's actually slower when dealing with a smaller number of entities, but it scales better, and when returning only a few matches the query time stays constant regardless of the number of alive entities. Like you said, push is in fact the biggest bottleneck.
If you have a public branch I could take a look and suggest some optimizations.
Ok, but let me make sure it works correctly first. I need to test absolutely every scenario, because if even one thing is wrong it won't function correctly. Just the cache index calculation looks like this (yeah I know, I will make it readable lol):

```js
EntityManager.prototype._getCacheIndex = function(cache, entity) {
  // recover the entity's current slot from the index/length recorded when it was cached
  var index = (cache.arr.length - 1) - ((entity._oldCacheIndex - entity._oldCacheLength) + (entity._oldCacheLength - (cache.arr.length - 1)))
  // wrap the slot around the cache's start offset (cache.arr is used as a ring buffer)
  return (cache.start + index) % cache.arr.length
}
```

This one is only used when deleting/removing entities from one mask array in the cache, to avoid having to loop over each entity.
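To make the role of that function concrete, a hypothetical call site (the `_removeFromCache` name and the `splice` removal are my assumptions, not code from the branch):

```js
// Hypothetical caller: the entity's slot is computed, not searched for,
// so finding it is O(1) — but splice still shifts the trailing elements.
EntityManager.prototype._removeFromCache = function(cache, entity) {
  var index = this._getCacheIndex(cache, entity)
  cache.arr.splice(index, 1)
}
```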
So I came up with a much, much nicer way to do this! It's much simpler too (can't believe I was so stupid not to think of it). I think you'll be happy with the results. I'm currently implementing it, should be ready soon!
Check it out! The solution: instead of using splice to remove an entity from each cache, we simply replace it with the last element in the cache array. This means we don't have to update anyone else's index, since we are just moving one element around and leaving the rest unaffected. This gives us constant time for destroying entities and for adding/removing components. A minimal sketch is below.
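A minimal sketch of that swap-remove idea, assuming each cached entity records its slot in a `cacheIndex` field (the field and function names are illustrative, not the actual branch code):

```js
// Swap-remove: overwrite the removed entity's slot with the last element,
// then shrink the array. At most one index (the moved element's) changes.
function removeFromCache(cache, entity) {
  const index = entity.cacheIndex
  const last = cache.pop() // detach the last element
  if (last !== entity) {
    cache[index] = last // drop it into the freed slot
    last.cacheIndex = index // the only index that needs updating
  }
}

// Usage: removal is O(1) regardless of cache size or the entity's position.
const e1 = { cacheIndex: 0 }, e2 = { cacheIndex: 1 }, e3 = { cacheIndex: 2 }
const cache = [e1, e2, e3]
removeFromCache(cache, e1) // cache is now [e3, e2] and e3.cacheIndex === 0
```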
Can you make a PR to enable code annotations?
Done #4
At the moment the speed of querying depends on the number of entities. As this should be one of the most common operations performed, I think it should be optimized. I did a little experiment where I store all entities sharing the same mask in an array, and those arrays can be looked up by mask through a hashmap (one array per distinct mask, etc.; see the sketch below).
The entities then move between arrays as their mask changes. This gives us a constant query time regardless of the number of entities, by simply testing the query mask against each key in the hashmap. Initial tests gave ~5 million queries per second regardless of the number of entities (with hardly any optimizations applied).
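A hypothetical sketch of that layout and query loop (structure and names are my illustration of the description above, not the experiment's actual code):

```js
// One array per distinct component mask, reachable through a map.
const cacheByMask = new Map() // integer mask -> array of entities

function addToCache(entity) {
  let bucket = cacheByMask.get(entity.mask)
  if (!bucket) cacheByMask.set(entity.mask, bucket = [])
  bucket.push(entity)
}

// Querying never touches individual entities: the query mask is tested
// once per distinct mask, so the cost tracks the number of distinct
// masks, not the number of alive entities.
function* query(queryMask) {
  for (const [mask, bucket] of cacheByMask) {
    if ((mask & queryMask) === queryMask) {
      yield* bucket
    }
  }
}

// e.g. with component bits A = 1, B = 2, C = 4:
addToCache({ mask: 1 | 2, name: 'ab' })
addToCache({ mask: 1 | 2 | 4, name: 'abc' })
for (const e of query(1 | 2)) console.log(e.name) // -> 'ab', 'abc'
```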
The problem is that you'd have to keep track of each entity's index in its hashMap[entityMask] array, in order to remove it from there when its mask changes or it is destroyed. You'd also have to update the indices of the other entities in that mask array, which in turn makes any operation that changes an entity's mask slower (the speed would now be determined by the number of entities sharing the same mask and the index of the entity that was removed).
Thoughts? Or perhaps other ideas on how to do it? If something was unclear I'll explain it better.