
Run until no more work is pending #134

Merged: 2 commits merged into vapor:main on Aug 22, 2024
Conversation

@danpalmer (Contributor) commented Jul 28, 2024

These changes are now available in 1.15.1

First pass at fixing #124.

Currently, workers run a single task per refresh interval, which means the maximum task throughput per refresh interval is capped at the number of workers.

With this change, workers run tasks every refresh interval until no more tasks are pending. Task throughput is then tied to the speed of processing tasks rather than just the number of workers.
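To make the behavioural difference concrete, here is a minimal sketch of the intended loop, assuming a simplified queue API. `ExampleQueue`, `popTask()`, and `run(_:)` are hypothetical stand-ins rather than the actual Queues types; the real change lives in Sources/Queues/QueueWorker.swift.

```swift
// A sketch only: simplified stand-ins for the driver's dequeue/execute operations.
protocol ExampleQueue {
    associatedtype Task
    func popTask() async throws -> Task?   // nil means no pending work
    func run(_ task: Task) async throws
}

func drainOnce<Q: ExampleQueue>(_ queue: Q) async throws {
    // Before this PR: one popTask()/run() per refresh interval, so throughput
    // per interval is capped at the number of workers.
    // After this PR: keep pulling tasks until the queue reports none pending,
    // then sleep until the next refresh interval checks again.
    while let task = try await queue.popTask() {
        try await queue.run(task)
    }
}
```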

Open to feedback about the approach, and I haven't added any tests here yet. This is a fairly major behavioural change, but should result in a large speed increase for most users.

@petrpavlik

Big fan of this. This would match my SQS polling, where every x seconds I poll SQS and process whatever messages are there, not just one. I can imagine this also being controlled by a boolean or an integer representing the max number of tasks to process, so it could ship as opt-in behavior.
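A hypothetical shape for that knob, reusing the `ExampleQueue` sketch above; `WorkerOptions` and `maxTasksPerInterval` are illustrative names, not part of the Queues API:

```swift
// Illustrative only: sketches the boolean/integer cap suggested above.
struct WorkerOptions {
    /// nil means "drain the queue" (the behaviour proposed in this PR);
    /// a value caps how many tasks one worker runs per refresh interval.
    var maxTasksPerInterval: Int? = nil
}

func drainOnce<Q: ExampleQueue>(_ queue: Q, options: WorkerOptions) async throws {
    var processed = 0
    while let task = try await queue.popTask() {
        try await queue.run(task)
        processed += 1
        // Stop early if an opt-in cap is configured.
        if let cap = options.maxTasksPerInterval, processed >= cap { break }
    }
}
```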

@danpalmer (Contributor, author) commented Jul 29, 2024

Regarding whether this should be opt-in or opt-out... I see this quite clearly as a bug, and a fairly major one at that. I've used quite a few queueing systems, and they all basically attempt to do the work as fast as they can; they don't wait between jobs. Some provide a mechanism such as a refresh interval, which is the time they sleep before checking for more work after having run out, but that's not what we have here, even though I suspect it was the intention.

Based on this, I'd suggest this should either be opt-out, or not given as a configuration option at all. If reviewers would like it to be a configuration option, I'm not sure what it should even be called. perJobDelay? I can't see a way to document such an option that wouldn't seem weird to anyone reading it for the first time.

That all said, this is based on my current understanding of the library, through the lens of the Fluent driver. It's still possible I've got this all wrong, and I'd be very happy to be shown my misunderstanding and an alternative way to fix the queues to run at full speed.

@petrpavlik

I don't personally insist on having an opt-out; I was just thinking out loud. Happy to try deploying this change to my backend to beta test it.

@danpalmer (Contributor, author)

@petrpavlik amazing, that would be really helpful if you could. I don't have any Swift in production (this is for a personal project), so while I can run this, check it does what I expect, etc., I'm not super confident in it.

@petrpavlik

Yes, will do. I've already tried; your fork has a different name for some reason (vapor-queues vs. queues), which started yelling at me about some conflicts in my Package.swift. I'll try again; I didn't have much time for it.

(Review comment on Sources/Queues/QueueWorker.swift: outdated, resolved)
@danpalmer (Contributor, author)

@petrpavlik

> your fork has a different name for some reason

Apologies. I didn't want a repo just named "queues" on my profile, as it's not very descriptive outside the context of the Vapor org. Hopefully we can get these changes in soon anyway, so it won't require a diff for too long.

danpalmer requested a review from @0xTim on August 1, 2024, 10:00
@petrpavlik

Thanks, I'll try to deploy it later today. Worst case, I fork this repo myself and push the changes so I don't have to deal with the rename-related issues.

@danpalmer (Contributor, author)

FWIW, when I switched to my fork it was a one-line Package.swift change for the dependency. None of the code has been renamed; it's just the git repository URL, which I assume would need to change anyway. Unless I've misunderstood the problem.
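For reference, a hedged sketch of what that one-line swap might look like; the fork URL and branch name here are illustrative, not the exact ones used:

```swift
// Package.swift (excerpt): pointing the Queues dependency at a fork.
// Only the repository URL changes; none of the code or module names do.
dependencies: [
    // Upstream:
    // .package(url: "https://github.com/vapor/queues.git", from: "1.15.0"),
    // Fork (hypothetical URL and branch):
    .package(url: "https://github.com/danpalmer/vapor-queues.git", branch: "run-until-empty"),
],
```

One caveat: SwiftPM derives the package identity from the last path component of the URL by default, so a `.product(name: "Queues", package: "queues")` reference in a target would likely need its `package:` argument updated to match the fork's repository name, which may be the source of the conflicts mentioned above.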

@petrpavlik

Ok, I have this deployed now, seems to work fine so far. Will keep an eye on it.

@petrpavlik

@0xTim @danpalmer ok, I've been running this for a few weeks. It seems to work perfectly fine.

@0xTim (Member) left a comment


Ok great, I'm happy with this change so we can release

0xTim added the semver-patch (Internal changes only) label on Aug 22, 2024

codecov bot commented Aug 22, 2024

Codecov Report

Attention: Patch coverage is 83.33333% with 2 lines in your changes missing coverage. Please review.

Project coverage is 83.16%. Comparing base (15829c5) to head (16b5a3c).
The report is 1 commit behind head on main.

Files with missing lines: Sources/Queues/QueueWorker.swift (patch 83.33%, 2 lines missing) ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #134      +/-   ##
==========================================
+ Coverage   82.86%   83.16%   +0.29%     
==========================================
  Files          22       22              
  Lines         671      677       +6     
==========================================
+ Hits          556      563       +7     
+ Misses        115      114       -1     
File coverage Δ:
Sources/Queues/QueueWorker.swift 95.45% <83.33%> (+2.12%) ⬆️

0xTim merged commit d867b5b into vapor:main on Aug 22, 2024. 8 checks passed.