
Move out from SteveJobs Meteor Queue #47

Closed
joseconstela opened this issue Sep 9, 2019 · 3 comments
Labels: 🏆 epic, 🙏🏼 help wanted

Comments

@joseconstela
Member

joseconstela commented Sep 9, 2019

The main objective is to separate the queue engine from the main MeteorJS application.

There's an open issue on the SteveJobs package - msavin/SteveJobs#63 - about possible DB query hammering problems.

It would be interesting to get an overview of how this is affecting Tideflow right now, and to consider trying https://github.com/wildhart/meteor.jobs, moving to Bull - https://github.com/OptimalBits/bull - Kafka, or another queue system.

joseconstela added the 🙏🏼 help wanted and 🏆 epic labels on Sep 9, 2019
@wildhart

It should be fairly easy to test how this is affecting Tideflow right now. Simply run `Jobs.configure({interval: 60000})` to change the polling interval from the default of 3 seconds to 1 minute, and monitor the CPU usage before and after this change. A polling interval of 1 minute shouldn't really have any negative impact even on a production deployment, depending on what the jobs are used for, but you can change it to a shorter interval if you want.
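
For reference, here is a minimal sketch of what that test could look like in server startup code (the file name is hypothetical, the import assumes the package's documented `Jobs` export, and the `interval` option and values are as described above):

```js
// server/jobs-config.js (hypothetical file name)
import { Jobs } from 'meteor/msavin:sjobs';

// Poll the jobs collection once per minute instead of the default 3 seconds,
// then compare CPU usage before and after the change.
Jobs.configure({
  interval: 60000, // milliseconds
});
```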

Since my original graph in that bug report, msavin:sjobs has been improved so that the DB isn't updated on every interval for every job type, but the DB is still read on every interval for every job type. The more job types you configure, the more setIntervals are created and the more times the DB is read every minute. I see that Tideflow has 9 job types ('queues') right now, so at the default 3-second interval that's roughly 180 DB reads per minute.

If someone could try this and report back it would be really interesting.

It should also be fairly trivial to switch out msavin:sjobs for wildhart:meteor-jobs (of which I am the author). Note that there are a few API differences, but in most cases it should be a simple switch. wildhart:meteor-jobs works more efficiently by creating a single observer on the jobs queue and setting a single setTimeout for the next due job. This means that most of the time it is doing absolutely nothing - no regular setIntervals and no DB reads or writes.
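
To illustrate the difference in approach, here is a rough sketch of that single-observer idea (not the package's actual code; `JobsCollection` and `runJob` are hypothetical stand-ins): one observer on the jobs collection plus a single timer for the next due job, instead of one setInterval per job type.

```js
// JobsCollection and runJob are hypothetical stand-ins, not package internals.
let nextTimer = null;

function scheduleNext() {
  if (nextTimer) clearTimeout(nextTimer);
  // A single read finds the next due job across all job types.
  const next = JobsCollection.findOne({ state: 'pending' }, { sort: { due: 1 } });
  if (!next) return; // nothing queued: no timers, no polling, no DB traffic
  nextTimer = setTimeout(() => runJob(next), Math.max(0, next.due - Date.now()));
}

// One observer re-arms the timer whenever the queue changes.
JobsCollection.find({ state: 'pending' }).observeChanges({
  added: scheduleNext,
  changed: scheduleNext,
  removed: scheduleNext,
});
```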

I'm using my package in two production apps right now and will continue to do so, and I'm happy to keep supporting it.

@wildhart

wildhart commented Oct 4, 2019

I've just completed my own tests with my updated app, which now uses 20 job types, running on a fresh server with no user connections and no jobs actually running during the measurement period.

See comment here: msavin/SteveJobs#63 (comment)

| Package       | 20 jobs defined | 40 jobs defined |
| ------------- | --------------- | --------------- |
| msavin:sjobs  | 1.63% CPU       | 2.62% CPU       |
| wildhart:jobs | 0.37% CPU       | 0.39% CPU       |

@joseconstela
Member Author

Closing in favor of #73: Implement pure NodeJS jobs queue backed by Bull.
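
For context, a minimal sketch of what a Bull-backed queue could look like (queue name, payload, and Redis URL are assumptions here, not the #73 design):

```js
const Queue = require('bull');

// Bull keeps job state in Redis, so the worker can run as a pure Node.js
// process, separate from the Meteor application.
const flowsQueue = new Queue('flows', 'redis://127.0.0.1:6379');

// Producer side: enqueue a flow execution with retries.
flowsQueue.add({ flowId: 'example-flow' }, { attempts: 3, backoff: 5000 });

// Worker side: process jobs as they become due.
flowsQueue.process(async (job) => {
  // executeFlow is a hypothetical stand-in for Tideflow's flow runner.
  return executeFlow(job.data.flowId);
});
```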
