Unicorn LB not aware of node status without service restart #44
Comments
As a workaround, I created 2 systemd services on the VM that run at boot and at shutdown and restart the LB on the rpi4. It works, but it would be great to have the LB figure it out on its own. I'm not sure what CUSTOM_SCORES_TIMEOUT in the LB config does, but as I understand it, it should consider a transcoder node unavailable after x seconds. The Transcoder config also has a ping frequency line. Any reply would be greatly appreciated.
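For reference, the workaround described above could be sketched as a single oneshot systemd unit on the VM (using ExecStart/ExecStop instead of two separate services). The unit name, SSH user/host, and the LB service name are all assumptions for illustration:

```ini
# /etc/systemd/system/restart-unicorn-lb.service  (on the VM; names/paths illustrative)
[Unit]
Description=Restart the Unicorn LB on the rpi4 at boot and shutdown
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# Keep the unit "active" after ExecStart so ExecStop fires at shutdown.
RemainAfterExit=yes
# Runs when the VM boots...
ExecStart=/usr/bin/ssh pi@rpi4.domain.com 'sudo systemctl restart unicorn-lb'
# ...and again when the VM shuts down.
ExecStop=/usr/bin/ssh pi@rpi4.domain.com 'sudo systemctl restart unicorn-lb'

[Install]
WantedBy=multi-user.target
```

With `RemainAfterExit=yes`, one unit covers both the boot and shutdown cases the two separate services handle.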
Interesting... Normally, when a transcoder doesn't ping the LB for 30 seconds, the LB flags it as "unavailable" and stops redirecting streams to it. When your transcoder is down, maybe it keeps pinging the load balancer?
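The behaviour described above (flag a node as unavailable after ~30 s without a ping) can be sketched roughly like this. All names here are hypothetical, not UnicornLoadBalancer's actual API; it only illustrates the last-ping timestamp check:

```javascript
// Minimal sketch of a ping-timeout check: the LB records the time of each
// ping, and any node whose last ping is older than the timeout is skipped
// when picking a transcoder.

const PING_TIMEOUT_MS = 30 * 1000; // the ~30 s window mentioned above

const lastPing = new Map(); // instance URL -> timestamp of last ping

function recordPing(instanceUrl) {
    lastPing.set(instanceUrl, Date.now());
}

function availableNodes(nodes, now = Date.now()) {
    return nodes.filter((url) => {
        const seen = lastPing.get(url);
        return seen !== undefined && now - seen < PING_TIMEOUT_MS;
    });
}

// Example: transcoder-1 pinged recently, transcoder-2 went silent a minute ago.
recordPing('https://transcoder-1.domain.com');
lastPing.set('https://transcoder-2.domain.com', Date.now() - 60 * 1000);

console.log(availableNodes([
    'https://transcoder-1.domain.com',
    'https://transcoder-2.domain.com',
]));
// -> [ 'https://transcoder-1.domain.com' ]
```

If the logs show the LB still receiving PINGs for a node after it is shut down, a check like this would never trip, which matches the symptom in this issue.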
Here I launched the shutdown command on the VM (transcoder-2):
Feb 12 10:51:16 XXXX npm[5549]: 2021-02-12T09:51:16.938Z UnicornLoadBalancer PING 9bdc5c59b76960b8-com-plexapp-android [https://transcoder-2.domain.com]
It went like this for some time...
Feb 12 11:01:49 XXXX npm[22497]: 2021-02-12T10:01:49.560Z UnicornLoadBalancer PING 9bdc5c59b76960b8-com-plexapp-android [https://transcoder-2.domain.com]
Hi,
First, I want to say thanks for this great piece of software!
Here is my setup: PMS, Unicorn LB, and a Transcoder run on an rpi4, and I have set up a VM on my desktop for when I need help with transcodes, with Unicorn Transcoder and PMS running on it. Both machines run a systemd service for the Unicorn Transcoder, and only the rpi4 runs a systemd service for the load balancer.
I managed to install all the pieces of software correctly (or at least I think I did), as everything works fine. When the max number of transcodes is reached on the VM transcoder, the next one goes to the pi. The problem is that when the VM goes down for any reason, the LB doesn't know it unless a restart is issued, and it keeps redirecting to the offline node. After a restart, it sees that it has only 1 node. The same happens vice versa: if the LB knows about 1 node and a 2nd one comes online, it won't see it without a restart.
Here are the parts of the config files I think are relevant to the issue:
rpi4 Unicorn LB config:
timeout: env.int('CUSTOM_SCORES_TIMEOUT', 10)
list: env.array('CUSTOM_SERVERS_LIST', 'string', ['https://transcoder-2.domain.com/','https://transcoder-1.domain.com/'])
rpi4 Unicorn Transcoder config:
transcoder_decay_time: env.int('TRANSCODER_DECAY_TIME', 120),
loadbalancer_address: env.string('LOADBALANCER_ADDRESS', 'https://plex.domain.com'),
ping_frequency: env.int('PING_FREQUENCY', 10),
instance_address: env.string('INSTANCE_ADDRESS', 'https://transcoder-1.domain.com'),
maxSessions: env.int('MAX_SESSIONS', 0),
VM Unicorn Transcoder config:
transcoder_decay_time: env.int('TRANSCODER_DECAY_TIME', 120),
loadbalancer_address: env.string('LOADBALANCER_ADDRESS', 'https://plex.domain.com'),
ping_frequency: env.int('PING_FREQUENCY', 10),
instance_address: env.string('INSTANCE_ADDRESS', 'https://transcoder-2.domain.com'),
maxSessions: env.int('MAX_SESSIONS', 2),
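Since all of these values are read from environment variables (via env.int/env.string), they can also be overridden without editing the config file, e.g. through a systemd drop-in for the transcoder service. The unit and file names below are just an illustration:

```ini
# e.g. /etc/systemd/system/unicorn-transcoder.service.d/override.conf
[Service]
Environment=PING_FREQUENCY=5
Environment=TRANSCODER_DECAY_TIME=120
Environment=MAX_SESSIONS=2
```

After adding a drop-in, `systemctl daemon-reload` followed by a restart of the service is needed for the new values to take effect.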