Add the posibility to dynamically set the number of replicas in the Kubernetes Deployment file #1354
Comments
Depending on how you generate your resource descriptors, you can already do this. I.e. if you are relying on the enricher to generate a default controller, configure:

```xml
<enricher>
  <config>
    <fmp-controller>
      <replicaCount>2</replicaCount>
    </fmp-controller>
  </config>
</enricher>
```

Or in the general case, you can also use a fragment by putting a file with the spec:

```yaml
replicas: 2
```
@rhuss thanks for your quick reply. I am using a generator, so the enricher property fits perfectly. However, I am having trouble making this all fit into a Jenkins pipeline; could you suggest how to do it, please? This was my initial scenario:
The problem here is: in the "Build and Push Docker Image" stage an image is pushed to my private registry with the tag 0.0.1-SNAPSHOTsnapshot-180813-094821-0461, but in the "Deploy Staging" stage the generated kubernetes.yaml points at an unknown image, 0.0.1-SNAPSHOTsnapshot-180813-095003-0795, and the startup of the application fails (ImagePullBackOff). Thus, I am forced to execute this in the same stage so that the tag used in the image push and in the kubernetes.yml resource is the same:
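The workaround described above can be sketched as a scripted Jenkinsfile; this is only an illustration, assuming the Maven wrapper and stage names similar to the ones mentioned, not the poster's actual pipeline:

```groovy
node {
    stage('Build and Push Docker Image') {
        // Run build, resource generation and push in one Maven invocation,
        // so the timestamp-based snapshot tag is computed only once and the
        // generated kubernetes.yml references the image that was actually pushed.
        sh "./mvnw clean package fabric8:build fabric8:resource fabric8:push"
    }
    stage('Deploy Staging') {
        // Apply the resources generated in the previous stage; no regeneration here.
        sh "./mvnw fabric8:apply"
    }
}
```

The key point is that `fabric8:build` and `fabric8:resource` run in the same Maven invocation, which is exactly why the per-environment resource generation can no longer be split out.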
Now I cannot customize the replica count per environment, since I cannot separate the fabric8:resource generation per environment. Any suggestions?
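One way to vary the replica count per environment without a dedicated property is to parameterize the enricher configuration with a Maven property and select its value via profiles. This is only a sketch of that idea; the property and profile names are illustrative, not from the thread:

```xml
<profiles>
  <profile>
    <id>staging</id>
    <properties>
      <!-- referenced as ${replica.count} inside the fmp-controller enricher config -->
      <replica.count>1</replica.count>
    </properties>
  </profile>
  <profile>
    <id>production</id>
    <properties>
      <replica.count>3</replica.count>
    </properties>
  </profile>
</profiles>
```

With `<replicaCount>${replica.count}</replicaCount>` in the enricher configuration, the same `fabric8:resource` invocation can then be switched with `-Pstaging` or `-Pproduction`.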
Actually, the time-based tagging of snapshot images is flawed if … See also #1093 for further explanation. You can change the strategy by setting the property … Does this help?
The downside I see is that relying on latest (%l) could lead to some uncertainty about what you have in staging: you build a dev image (1.0.0-SNAPSHOT) and apply it to the staging env. Someone builds a prod image (1.0.0) and applies it to the prod env. You don't know the new image exists and, for some reason, restart your pod. It's going to pull latest again, which is now 1.0.0, not 1.0.0-SNAPSHOT anymore.
I have also come across a problem using %l for SNAPSHOT versions. Google Cloud Registry complains about using the same tag (latest) since it already exists:
Config:
This issue has been automatically marked as stale because it has not had any activity in the last 90 days. It will be closed if no further activity occurs within 7 days. Thank you for your contributions!
Hello @rhuss, is the property …
Seems like a regression; we split DefaultControllerEnricher into two enrichers in the 4.x versions. Could you please provide a reproducible sample if possible? We'll try to fix it before the next release (coming in 1-2 weeks).
I created a minimal example, and in it this is working for both versions 3.5.42 and 4.4.0. The benefit of having the property set implicitly over setting it explicitly is minimal, so feel free to close the ticket again.
Description
Let's suppose I have a Jenkins pipeline with two stages, deploy-staging and deploy-pro.

sh "./mvnw fabric8:resource"

generates a kubernetes.yaml file with this number of replicas. Now, imagine I want one replica for staging but n for production. I can manually do

kubectl scale deployment --replicas=2

but as soon as I do

mvn fabric8:apply

with, for instance, a new image to use, the replicas will be downscaled to 1 again. It'd be great to have a fabric8.replicas property to allow us to override this value.
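For reference, the part of the generated kubernetes.yaml that is in question is the replicas field of the Deployment spec. A minimal sketch (the deployment name is illustrative, and the generated file contains much more than this):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  # the value fabric8:resource hardcodes, and which the requested
  # fabric8.replicas property would override per invocation
  replicas: 1
```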
Info
- f-m-p version: 3.5.41
- Maven version (mvn -v): Apache Maven 3.5.3
- Kubernetes / OpenShift setup and version: Kubernetes 1.11