
ProxySSL ingress annotations do not work as expected when not all annotations are provided #10264

Closed
staizen-stephen opened this issue Aug 1, 2023 · 11 comments · May be fixed by #11490
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@staizen-stephen

What happened:

SSL certificate on backend was not verified despite providing annotations.

I0801 10:53:39.076948       7 annotations.go:184] "Parsing Ingress annotation" name="ProxySSL" ingress="OMITTED/OMITTED" value={"secret":"","caFilename":"","caSha":"","crlFileName":"","crlSha":"","pemFilename":"","ciphers":"","protocols":"","proxySSLName":"","verify":"","verifyDepth":0,"proxySSLServerName":""}

What you expected to happen:

I expected options verifyDepth and verify to be configured.

I discovered the cause in proxyssl/main.go#L119.

In the absence of client certificate data (secret, certificate), no other options are considered. The missing-annotation error is returned to the calling function, but that function (perhaps correctly) does not treat it as an error.
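The early-return pattern described above can be illustrated with a simplified sketch. This is not the actual controller source: the function name parseProxySSL, the Config fields, and the map-based annotation lookup are all illustrative stand-ins.

```go
package main

import (
	"errors"
	"fmt"
)

// Config mirrors (in simplified form) the proxy-ssl config shape seen
// in the log line above.
type Config struct {
	Secret      string
	Verify      string
	VerifyDepth int
}

var errMissingAnnotation = errors.New("ingress rule without annotations")

// parseProxySSL sketches the behaviour reported in this issue: if the
// proxy-ssl-secret annotation is absent, the function returns
// immediately, so proxy-ssl-verify and proxy-ssl-verify-depth are
// never read. The map stands in for the ingress annotations.
func parseProxySSL(annotations map[string]string) (*Config, error) {
	cfg := &Config{}

	secret, ok := annotations["proxy-ssl-secret"]
	if !ok {
		// Early return: the verify options below are skipped even
		// when they are present on the ingress.
		return cfg, errMissingAnnotation
	}
	cfg.Secret = secret

	cfg.Verify = annotations["proxy-ssl-verify"]
	fmt.Sscanf(annotations["proxy-ssl-verify-depth"], "%d", &cfg.VerifyDepth)
	return cfg, nil
}

func main() {
	// No secret annotation: the verify settings are silently dropped.
	cfg, err := parseProxySSL(map[string]string{
		"proxy-ssl-verify":       "on",
		"proxy-ssl-verify-depth": "10",
	})
	fmt.Println(cfg.Verify, cfg.VerifyDepth, err != nil)
}
```

The caller treating the missing-annotation error as benign is what makes the verify settings disappear without any warning in the logs.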

The docs describe the use of these annotations for client authentication, but I do not need client authentication. My primary goal is to ensure that I have step-wise encryption and no unexpected man-in-the-middle attack. So I need each step in my flow to: 1) use TLS; 2) verify target server; 3) verify that a trusted root certificate was used.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

-------------------------------------------------------------------------------

NGINX Ingress controller

  Release:       v1.8.0

  Build:         35f5082ee7f211555aaff431d7c4423c17f8ce9e

  Repository:    https://github.com/kubernetes/ingress-nginx

  nginx version: nginx/1.21.6



-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.

Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:20:07Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}

Kustomize Version: v5.0.1

Server Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.3-eks-a5565ad", GitCommit:"78c8293d1c65e8a153bf3c03802ab9358c0e1a14", GitTreeState:"clean", BuildDate:"2023-06-16T17:32:40Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS, EKS

  • OS (e.g. from /etc/os-release): Amazon Linux 2

  • Kernel (e.g. uname -a): 5.10.179-166.674.amzn2.x86_64 #1 SMP Mon May 8 16:54:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools:

    • Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
    • Terraform scripts, custom node scripts based on AWS EKS node template AMI
  • Basic cluster related info:

    • kubectl version
    • see above
    • kubectl get nodes -o wide
    • not disclosed
  • How was the ingress-nginx-controller installed:

    • If helm was used then please show output of helm ls -A | grep -i ingress
    • If helm was used then please show output of helm -n <ingresscontrollernamepspace> get values <helmreleasename>
    • If helm was not used, then copy/paste the complete precise command used to install the controller, along with the flags and options used
    • if you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances
    • manifests adapted from GitHub; do not use Helm
  • Current State of the controller:

    • Not provided, as the output is not sanitized; I don't think it is necessary for this issue. The code responsible for the behaviour is easy to interpret.
  • Current state of ingress object, if applicable:

    • Annotations added to ingress:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "10"
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
  • Others:
    • Any other related information, such as:
      • copy/paste of the snippet (if applicable)
      • kubectl describe ... of any custom configmap(s) created and in use
      • Any other related information that may help

How to reproduce this issue:

  • Edit the controller: add the flag --v=5 to enable debug-level logging
  • Create a workload with a pod and service exposing HTTPS
  • Create an ingress for HTTPS pointing to the service. Ensure the ingress has the following annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "10"
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
  • Review the controller logs to see that the proxy-ssl-* annotations have been ignored
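For reference, a minimal Ingress manifest carrying the annotations above might look like the following. All names, the host, and the backend port are placeholders, not taken from the original report.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-https-backend        # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "10"
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
spec:
  ingressClassName: nginx
  rules:
    - host: example.internal         # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-https-service   # placeholder service
                port:
                  number: 443
```

Because no nginx.ingress.kubernetes.io/proxy-ssl-secret annotation is present, the proxy-ssl-* options above end up ignored, as shown in the log line in the report.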

Anything else we need to know:

It would be good to make clear in the annotation documentation that these annotations form a group that must be configured as a single set.

@staizen-stephen staizen-stephen added the kind/bug Categorizes issue or PR as related to a bug. label Aug 1, 2023
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Aug 1, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@staizen-stephen
Author

staizen-stephen commented Aug 1, 2023

I have been able to achieve my objective by doing the following:

  • Add the following configuration to the ingress-nginx controller config map:
  http-snippet: |
    proxy_ssl_verify       on;
    proxy_ssl_verify_depth 5;
    proxy_ssl_protocols    TLSv1.2 TLSv1.3;
    proxy_ssl_trusted_certificate /etc/nginx/ca-root-bundle.pem;
    proxy_ssl_ciphers      HIGH:!aNULL:!MD5;
  • Add the following annotations to my ingress:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_ssl_name          <<cluster-service-fqdn>>;
      proxy_ssl_session_reuse on;
  • Inject a volume into the controller to present my trusted CA certificate bundle at /etc/nginx/ca-root-bundle.pem
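Putting the first step of this workaround together, the ConfigMap entry might look like the following. The ConfigMap name and namespace are assumed from a typical install, not taken from the comment above.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller     # name/namespace assumed from a default install
  namespace: ingress-nginx
data:
  http-snippet: |
    proxy_ssl_verify       on;
    proxy_ssl_verify_depth 5;
    proxy_ssl_protocols    TLSv1.2 TLSv1.3;
    proxy_ssl_trusted_certificate /etc/nginx/ca-root-bundle.pem;
    proxy_ssl_ciphers      HIGH:!aNULL:!MD5;
```

Note that http-snippet applies these directives globally in the http block, so every proxied backend is verified against the mounted CA bundle; the per-ingress server-snippet then sets the expected certificate name per backend.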

@strongjz
Member

strongjz commented Aug 3, 2023

/assign @rikatz

@github-actions

github-actions bot commented Sep 3, 2023

This is stale, but we won't close it automatically; just bear in mind that the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any questions or want to prioritize this, please reach out on #ingress-nginx-dev on the Kubernetes Slack.

@github-actions github-actions bot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Sep 3, 2023
@g1franc
Contributor

g1franc commented Jun 20, 2024

I have the same issue and plan to provide a PR to address it quickly.

@longwuyuan
Contributor

You may not want to use client cert auth, but you still need to tell the controller about the CA that signed the certificate presented by the HTTPS server at the backend, on the other side of the "backend-protocol: HTTPS" annotation. See below.

[image attachment]

@longwuyuan
Contributor

There have been no user updates in a year, so there is no action item for the project here.

The fundamental issue is an expectation to use SSL/TLS and yet not require the CA and other details. That is just not feasible, as that approach increases the risk of unauthorized connections.

This can be discussed in the community meetings if the user joins, or in Slack. But there is no action item for the project here, so I am closing this issue. The tally of open issues without action items is in the hundreds, so this issue will stay closed until an action item emerges from it.

/close

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@staizen-stephen
Author

staizen-stephen commented Sep 13, 2024

There are multiple issues and PRs that are all linked to the same underlying issue. We can keep this closed and refer to #10557 for example.

The current implementation is clearly favouring the mTLS use case at the expense of absolutely standard server authentication modes (as used by almost every public website and many other systems). The other PRs can be used and there's no further update from me because I am using the configuration snippet approach to work around this limitation.

To be clear, mTLS adds overhead, and in many cases it isn't necessary or worthwhile. Without base server authentication, the client cannot trust that the hop is safely encrypted. Client authentication (i.e. adding mTLS) only proves the identity of the sending server to the receiving server, and that is often accomplished with other data in the transmission anyway, so it is a completely separate design choice. Not least because here it only proves the sender is the ingress reverse proxy, not the originating client. That can equally be accomplished by network security policies, for example.

@longwuyuan
Contributor

@staizen-stephen grateful for your comments.

There are more updates that may help move this discussion forward.

  • There is no contention that some users would benefit from the implementation being proposed here. I agree 100% on that.

  • But there is no data that ALL users would prefer a default in which backend-protocol: HTTPS does not verify the CA and other details. To the best of our knowledge, it is only some users. Hence the decision: what you expect could "potentially" be implemented, but it would have to be optional (via a configMap/annotation/flag etc.) rather than the default behavior.

  • This update also comes against the backdrop that the project has deprecated many useful and popular features (like TCP/UDP forwarding, for example) due to a shortage of resources to maintain/support them. So there is not much chance that the project can allocate resources to this.

  • And finally, there have been CVEs that were highlighted prominently, and recurring CVEs related to the SSL libraries for which we are constantly patching when possible. That consumed a lot of resources and resulted in a change of direction, such as giving priority to secure-by-default shipping of the controller. Hence there is very little chance that the collective decision will come down to allocating resources to implement a new capability of the controller that allows TLS without a CA (even between internal hops). Even Kubernetes internal hops don't do TLS without a CA.

The creator of the issue is very welcome to re-open it, but the hope is that re-opening would start tracking an action item on someone. The project is unfortunately in no position to allocate resources.

@staizen-stephen
Author

Absolutely fine. And fully understand the resource pressures.

I would add that I'm not trying to remove verification; my entire goal was to ensure that verification took place against a certificate authority configured as trusted on the host and injected into the container as well. In my case I simply did not have a client certificate and secret available to supply.

Again, no issues closing this. Thanks for your fast response.
