Conversation
This reverts commit dd182bd
…appens automatically
…operations, see also:
- eclipse-ee4j/jersey#3772
- #808
Codecov Report
@@             Coverage Diff              @@
##            master      #982      +/-  ##
===========================================
- Coverage     58.51%     54.5%    -4.02%
- Complexity      642       646        +4
===========================================
  Files            54        56        +2
  Lines          4127      4455      +328
  Branches        370       413       +43
===========================================
+ Hits           2415      2428       +13
- Misses         1493      1804      +311
- Partials        219       223        +4
@yosserO @tstern I'd also appreciate it if you could hang out on the #zalenium channel in the Selenium Slack, in case people have questions. What do you think?
Hi Diego,
Really eager to be one of the first users :)
Creating a new service each time a new container is needed is not as good as pre-defining the service and then updating its scaling.
Refer: https://docs.docker.com/engine/swarm/swarm-tutorial/scale-service/
final ServiceSpec serviceSpec = buildServiceSpec(taskSpec, nodePort, noVncPort);

try {
    ServiceCreateResponse service = SwarmUtilities.createService(serviceSpec);
I think it would be better if we pre-defined the worker services and then updated their scaling, as in https://docs.docker.com/engine/swarm/swarm-tutorial/scale-service/
Hi @trinhpham ,
Service scaling was our first approach for creating and deleting browser tasks.
The problem was that when scaling down the number of replicas, Swarm doesn't take into account which tasks are active. It killed containers that were still running tests and made those tests fail, and we found no way to explicitly specify which task to shut down when reducing the replica count.
So if we scaled the number of replicas up instead of creating new services, we would have to scale them down again when the tests end, and that would lead to this wrong behaviour.
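The trade-off described above can be illustrated without Docker at all. This is a minimal, hypothetical Java sketch (the `Task` record and method names are invented for illustration, not Zalenium code): when the orchestrator scales a service down, it picks the replicas to stop itself, so a task still running a test may be the one killed; with one service per test session, the caller removes exactly the task it knows has finished.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the two strategies discussed above; not Zalenium code.
public class ScalingTradeoff {

    // A replica/task with an id and a flag marking whether it is running a test.
    record Task(String id, boolean busy) {}

    // Swarm-style scale-down: the orchestrator decides which replicas survive,
    // so a busy task (one still running a test) may be among those killed.
    static List<Task> scaleDown(List<Task> tasks, int replicas) {
        return new ArrayList<>(tasks.subList(0, replicas)); // arbitrary choice
    }

    // One service per test session: the caller removes exactly the task it
    // knows has finished, so busy tasks are never touched.
    static List<Task> removeById(List<Task> tasks, String id) {
        List<Task> remaining = new ArrayList<>(tasks);
        remaining.removeIf(t -> t.id().equals(id));
        return remaining;
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(new Task("idle-1", false),
                                   new Task("busy-1", true));
        // Scaling to 1 replica here happens to keep the idle task and kill the
        // busy one, which is exactly the failure mode described above.
        System.out.println(scaleDown(tasks, 1));
        // Removing the finished task by id leaves the busy one running.
        System.out.println(removeById(tasks, "idle-1"));
    }
}
```

This is why the pull request creates and deletes a dedicated service per browser session instead of adjusting a replica count.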
Got your idea. Thanks for explaining it to me :)
Ok, let's merge this and do a release!
We created a pull request to add documentation: #984
Comes from #907, thanks to @yosserO and @tstern
Running all CI