[BUG] Service Account not working different namespace #2103
@devscheffer Could you provide detailed information about how you installed the Helm chart? Is this service account created manually or by the chart?

It is created by the Helm chart. Here is the HelmRelease:

```yaml
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  labels:
    app: spark-operator
  name: spark-operator
  namespace: spark-operator
spec:
  chart:
    spec:
      chart: spark-operator
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: spark-operator
      version: 1.4.0
  interval: 5m0s
  releaseName: spark-operator
  values:
    image:
      repository: docker.io/kubeflow/spark-operator
      pullPolicy: IfNotPresent
      tag: ""
    rbac:
      create: false
      createRole: true
      createClusterRole: true
      annotations: {}
    serviceAccounts:
      spark:
        create: true
        name: "spark-sa"
        annotations: {}
      sparkoperator:
        create: true
        name: "spark-operator-sa"
        annotations: {}
    sparkJobNamespaces:
      - spark-operator
      - team-1
    webhook:
      enable: true
      port: 443
      portName: webhook
      namespaceSelector: ""
      timeout: 30
    metrics:
      enable: true
      port: 10254
      portName: metrics
      endpoint: /metrics
      prefix: ""
    tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
        effect: "NoSchedule"
```
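Note that the values above list only `spark-operator` and `team-1` under `sparkJobNamespaces`. If SparkApplications should also run in another namespace, that namespace has to be included so the chart sets up the spark service account and RBAC there. An illustrative fragment (the extra namespace name is an assumption):

```yaml
# Illustrative values fragment: list every namespace where
# SparkApplications will run, so the chart creates the spark
# service account and its RBAC in each of them.
sparkJobNamespaces:
  - spark-operator
  - team-1
  - spark-jobs   # assumed additional job namespace, not in the original values
```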
It works when I apply the manifest manually from the terminal; however, when I execute it from Airflow I get this error. Here is the task in Airflow:

```python
spark_kpo = KubernetesPodOperator(
    task_id="kpo",
    name="spark-app-submission",
    namespace=namespace,
    image="bitnami/kubectl:1.28.11",
    cmds=["/bin/bash", "-c"],
    arguments=[f"echo '{spark_app_manifest_content}' | kubectl apply -f -"],
    in_cluster=True,
    get_logs=True,
    service_account_name=service_account_name,
    on_finish_action="keep_pod",
)
```
@devscheffer The service account …
Hello.
My values.yaml for it was like:
And right after that, if I run a DAG from Airflow, I get a spark-submit pod which fails with the following error:
This can be fixed by adding:
With all of the above in mind, I'd like to ask why this fix wasn't added to the Helm chart?
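The snippet the comment refers to did not survive the page capture. As an illustration only, and not necessarily the commenter's exact change, a fix of this kind typically grants the submitting service account access to SparkApplication objects in the job namespace via a Role and RoleBinding. All names and the namespace below are assumptions:

```yaml
# Hypothetical Role granting CRUD on SparkApplications in the job namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-app-submitter   # illustrative name
  namespace: spark-jobs       # assumed job namespace
rules:
  - apiGroups: ["sparkoperator.k8s.io"]
    resources: ["sparkapplications"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Hypothetical RoleBinding attaching the Role to the submitting account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-app-submitter
  namespace: spark-jobs
subjects:
  - kind: ServiceAccount
    name: spark-sa            # assumed: the account used for submission
    namespace: spark-jobs
roleRef:
  kind: Role
  name: spark-app-submitter
  apiGroup: rbac.authorization.k8s.io
```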
Description
I use the Helm chart of the Spark operator; it is deployed in the namespace spark-operator. In the HelmRelease I configure sparkJobNamespaces: spark-jobs, which is the namespace where I want to run the jobs.
However, I'm getting this error:

```
Name: "pyspark-pi", Namespace: "spark-jobs"
from server for: "STDIN": sparkapplications.sparkoperator.k8s.io "pyspark-pi" is forbidden: User "system:serviceaccount:spark-jobs:spark-sa" cannot get resource "sparkapplications" in API group "sparkoperator.k8s.io" in the namespace "spark-jobs"
```
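For context, the manifest being applied is a SparkApplication custom resource. A minimal sketch of its shape, modeled on the operator's pyspark-pi example; the image, application file path, and Spark version are assumptions:

```yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: pyspark-pi
  namespace: spark-jobs
spec:
  type: Python
  mode: cluster
  image: spark:3.5.0   # assumed image
  mainApplicationFile: local:///opt/spark/examples/src/main/python/pi.py  # assumed path
  sparkVersion: "3.5.0"  # assumed version
  driver:
    serviceAccount: spark-sa   # the name from serviceAccounts.spark.name in the values
    cores: 1
    memory: 512m
  executor:
    instances: 1
    cores: 1
    memory: 512m
```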