
Error pulumi install #191

Open
efcunha opened this issue Aug 26, 2022 · 8 comments

Comments

efcunha commented Aug 26, 2022

Describe the bug
Error installing:
kubernetes:core/v1:ConfigMap

Your environment
$ pulumi version
v3.38.0

$ node --version
v18.8.0

$ python --version
Python 3.9.13

KIC v1.0.1

Ubuntu 22.04

$ cat Pulumi.devops.yaml

config:
  aws:profile: default
  aws:region: us-west-2
  kubernetes:infra_type: AWS

To Reproduce


$ ./start.sh

Updating (devops)

View Live: https://app.pulumi.com/efcunha/aws-eks/devops/updates/6

 Type                                   Name                                    Status       Info
+ pulumi:pulumi:Stack aws-eks-devops creating public subnets: ['sub
+ ├─ aws:iam:Role ec2-nodegroup-iam-role created
+ ├─ aws:iam:Role eks-iam-role created
+ pulumi:pulumi:Stack aws-eks-devops creating.. public subnets:
+ pulumi:pulumi:Stack aws-eks-devops creating... public subnets:
+ ├─ aws:iam:RolePolicyAttachment ec2-container-ro-policy-attachment created
+ pulumi:pulumi:Stack aws-eks-devops creating public subnets:
+ ├─ aws:iam:RolePolicyAttachment eks-service-policy-attachment created
+ ├─ aws:iam:RolePolicyAttachment eks-cluster-policy-attachment created
+ ├─ eks:index:Cluster aws-eks-devops created
+ │ ├─ eks:index:ServiceRole aws-eks-devops-instanceRole created
+ │ │ ├─ aws:iam:Role aws-eks-devops-instanceRole-role created
+ │ │ ├─ aws:iam:RolePolicyAttachment aws-eks-devops-instanceRole-03516f97 created
+ │ │ ├─ aws:iam:RolePolicyAttachment aws-eks-devops-instanceRole-3eb088f2 created
+ │ │ └─ aws:iam:RolePolicyAttachment aws-eks-devops-instanceRole-e1b295bd created
+ │ ├─ eks:index:RandomSuffix aws-eks-devops-cfnStackName created
+ │ ├─ aws:ec2:SecurityGroup aws-eks-devops-eksClusterSecurityGroup created
+ │ ├─ aws:iam:InstanceProfile aws-eks-devops-instanceProfile created
+ │ ├─ aws:ec2:SecurityGroupRule aws-eks-devops-eksClusterInternetEgressRule created
+ │ ├─ aws:eks:Cluster aws-eks-devops-eksCluster created
+ │ ├─ aws:ec2:SecurityGroup aws-eks-devops-nodeSecurityGroup created
+ │ ├─ eks:index:VpcCni aws-eks-devops-vpc-cni created
+ │ └─ pulumi:providers:kubernetes aws-eks-devops-eks-k8s created
+ └─ kubernetes:core/v1:ConfigMap aws-eks-devops-nodeAccess failed 1

Diagnostics:
kubernetes:core/v1:ConfigMap (aws-eks-devops-nodeAccess):
error: failed to initialize discovery client: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

pulumi:pulumi:Stack (aws-eks-devops):
aws default profile
vpc id: vpc-025729a9f6169ec49
warning: aws:ec2/getSubnetIds:getSubnetIds verification warning: Deprecated Resource
public subnets: ['subnet-0064d171ae22441ae', 'subnet-0370ad0da9abcc517', 'subnet-0316b175509d86c3d', 'subnet-0278bf3e9620b24b5']
warning: aws:ec2/getSubnetIds:getSubnetIds verification warning: Deprecated Resource
public subnets: ['subnet-0bab25610049dbc2f', 'subnet-0143a10ed96adf3cd', 'subnet-02082be33e689f992', 'subnet-033cc97107714b619']
error: Resource monitor has terminated, shutting down

Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[1].key: beta.kubernetes.io/arch is deprecated since v1.14; use "kubernetes.io/arch" instead

Resources:
+ 23 created

Duration: 10m47s

(base) efcunha@devops:~/GitHub/kic-reference-architectures$
...
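The ConfigMap failure above is the Kubernetes client rejecting an exec credential plugin that still uses the old client.authentication.k8s.io/v1alpha1 API version, which newer clients no longer accept. A quick diagnostic sketch for checking which exec credential version your local tooling produces (the cluster name, region, and profile below are taken from the output above; adjust for your environment):

```
# Which exec credential API version does each kubeconfig user request?
kubectl config view --raw \
  -o jsonpath='{range .users[*]}{.name}{"\t"}{.user.exec.apiVersion}{"\n"}{end}'

# Which version does the AWS CLI itself emit when generating a token?
aws eks get-token --cluster-name aws-eks-devops-eksCluster-719a8f4 \
  --region us-west-2 --profile default --output json | grep apiVersion
```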

qdzlug (Contributor) commented Aug 31, 2022

Hi @efcunha - please try again with the latest updates. One of the changes in the last PR addresses this issue. Note that this PR changes the way you start the MARA process - you now use ./pulumi/python/runner - and the start.sh script is deprecated and only used for workstation-based installs now. The script will tell you this as well.

Let me know if you have issues.

Jay
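For reference, a minimal runner invocation looks like the following; the stack name (devops) and provider (aws) match the ones used later in this thread, so substitute your own values:

```
# Run from the repository root.
./pulumi/python/runner -s devops -p aws validate   # create or validate the stack configuration (see the transcript below)
./pulumi/python/runner -s devops -p aws up         # run the full deployment
```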

efcunha (Author) commented Aug 31, 2022

~/kic-reference-architectures/pulumi/python$ ./runner -s devops -p aws validate
stack configuration file [/home/efcunha/GitHub/kic-reference-architectures/config/pulumi/Pulumi.devops.yaml] does not exist
creating new configuration based on user input
AWS region to use [us-east-1]: us-west-2
AWS region: us-west-2
AWS profile to use [none] (enter "none" for none): default
AWS profile: default
AWS availability zones to use with VPC [us-west-2a, us-west-2b, us-west-2c, us-west-2d] (separate with commas):
AWS availability zones: us-west-2a, us-west-2b, us-west-2c, us-west-2d
EKS Kubernetes version [1.21]: 1.23
EKS Kubernetes version: 1.23
EKS instance type [t2.large]:
EKS instance type: t2.large
Minimum number compute instances for EKS cluster [3]:
EKS minimum cluster size: 3
Maximum number compute instances for EKS cluster [12]:
EKS maximum cluster size: 12
Desired number compute instances for EKS cluster [3]:
EKS maximum cluster size: 3
Prometheus administrator password:
Bank of Sirius Accounts Database password:
Bank of Sirius Ledger Database password:
Bank of Sirius demo site login username [testuser]:
Bank of Sirius demo site login password [password]:

$ cat ~/GitHub/kic-reference-architectures/config/pulumi/Pulumi.devops.yaml

config:
  aws:profile: default
  aws:region: us-west-2
  eks:desired_capacity: 3
  eks:instance_type: t2.large
  eks:k8s_version: '1.23'
  eks:max_size: 12
  eks:min_size: 3
  kubernetes:infra_type: AWS
  vpc:azs:
    - us-west-2a
    - us-west-2b
    - us-west-2c
    - us-west-2d

kic-reference-architectures/pulumi/python$ ./runner -s devops -p aws up

```
Duration: 11m49s

ERROR:root:Error running Pulumi operation [up] with provider [aws] for stack [devops]
Traceback (most recent call last):
File "/home/efcunha/GitHub/kic-reference-architectures/pulumi/python/automation/main.py", line 562, in
main()
File "/home/efcunha/GitHub/kic-reference-architectures/pulumi/python/automation/main.py", line 277, in main
raise e
File "/home/efcunha/GitHub/kic-reference-architectures/pulumi/python/automation/main.py", line 273, in main
pulumi_cmd(provider=provider, env_config=env_config)
File "/home/efcunha/GitHub/kic-reference-architectures/pulumi/python/automation/main.py", line 538, in up
pulumi_project.on_success(params)
File "/home/efcunha/GitHub/kic-reference-architectures/pulumi/python/automation/providers/aws.py", line 206, in _update_kubeconfig
res, err = external_process.run(cmd)
File "/home/efcunha/GitHub/kic-reference-architectures/pulumi/python/venv/lib/python3.9/site-packages/kic_util/external_process.py", line 23, in run
raise ExternalProcessExecError(msg, cmd)
kic_util.external_process.ExternalProcessExecError: aws --region us-west-2 --profile default eks update-kubeconfig --name aws-eks-devops-eksCluster-719a8f4 when running: Failed to execute external process: aws --region us-west-2 --profile default eks update-kubeconfig --name aws-eks-devops-eksCluster-719a8f4

Error:
'NoneType' object is not iterable
```
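The traceback shows the automation failing when it shells out to the AWS CLI to update the kubeconfig. Running that command by hand (copied verbatim from the error above) is a simple way to surface the underlying AWS CLI error:

```
# Copied from the error message above; run it manually to see why it fails.
aws --region us-west-2 --profile default eks update-kubeconfig \
    --name aws-eks-devops-eksCluster-719a8f4
```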

qdzlug (Contributor) commented Sep 1, 2022

Hi @efcunha,

Do you have multiple entries in your kubeconfig file, or do you have included directories with additional kubeconfig files? This error - NoneType object is not iterable - is something I saw in the past when the AWS Kubernetes configuration was added to an existing kubeconfig. If this is the case, could you try with an empty kubeconfig file (`~/.kube/config`) and a `KUBECONFIG` environment variable that points just to the standard config (i.e., `KUBECONFIG=~/.kube/config`)?

If this is not the case, let me know and I can look deeper into it; I just tested myself and this should work on AWS (it's passing both our deployment tests, and I deployed it locally as well).

Cheers,

Jay
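A minimal sketch of what is being suggested here - back up the existing kubeconfig, start from an empty one, and point KUBECONFIG at it (the backup filename is illustrative):

```
mv ~/.kube/config ~/.kube/config.bak   # keep the old file around
touch ~/.kube/config                   # fresh, empty kubeconfig for the MARA run
export KUBECONFIG=~/.kube/config       # consult only this file
```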

efcunha (Author) commented Sep 1, 2022

$ cat ~/GitHub/kic-reference-architectures/config/pulumi/Pulumi.devops.yaml

config:
  aws:profile: default
  aws:region: us-west-2
  eks:desired_capacity: 3
  eks:instance_type: t2.large
  eks:k8s_version: '1.23'
  eks:max_size: 12
  eks:min_size: 3
  kubernetes:infra_type: AWS
  vpc:azs:
    - us-west-2a
    - us-west-2b
    - us-west-2c
    - us-west-2d
kic-reference-architectures/pulumi/python$ ./runner -s devops -p aws up

+ kubernetes:helm.sh/v3:Release elastic creating warning: Helm release "elastic" was created but has a failed status. Use the helm command to investigate the error, correct it, then retry. Reason: timed out waiting for the condition
+ kubernetes:helm.sh/v3:Release elastic creating error: 1 error occurred:
+ kubernetes:helm.sh/v3:Release elastic creating failed error: 1 error occurred:
+ pulumi:pulumi:Stack logstore-devops creating error: update failed
+ pulumi:pulumi:Stack logstore-devops creating failed 1 error

Diagnostics:
kubernetes:helm.sh/v3:Release (elastic):
warning: Helm release "elastic" was created but has a failed status. Use the helm command to investigate the error, correct it, then retry. Reason: timed out waiting for the condition
error: 1 error occurred:
* Helm release "logstore/elastic" was created, but failed to initialize completely. Use Helm CLI to investigate.: failed to become available within allocated timeout. Error: Helm Release logstore/elastic: timed out waiting for the condition

pulumi:pulumi:Stack (logstore-devops):
error: update failed

Resources:
+ 3 created

Duration: 6m9s

stderr:
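The warning in this output already points at the next step: inspect the failed release with the Helm and kubectl CLIs. A short sketch using the release and namespace names from the log above (logstore/elastic):

```
helm status elastic -n logstore     # overall release state
kubectl get pods -n logstore        # look for pods stuck in Pending or CrashLoopBackOff
kubectl describe pods -n logstore   # events usually show why (unschedulable, failed probes, etc.)
```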

qdzlug (Contributor) commented Sep 2, 2022

@efcunha - thanks for the output; running some tests now.

qdzlug (Contributor) commented Sep 2, 2022

@efcunha - thanks for your patience. The error above - where logstore (Elasticsearch) times out - is tied to how long Elasticsearch takes to become available. By default, Pulumi waits for 5 minutes and then bails out. This is normally long enough, but in my testing here I see that it is taking about 7 minutes to become available.

There is, fortunately, a tunable for this - you can set it via

pulumi -C ./pulumi/python/kubernetes/logstore config set logstore:helm_timeout 600

This value is in seconds, and I normally set it to 600 (10 minutes), which is usually sufficient.

Please give it a try with this set and let me know how it goes - you will need to destroy the existing deployment first, since Helm leaves things in a weird state when it fails like this.

Cheers,

Jay
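A sketch of the full sequence described above, assuming the runner exposes a destroy operation alongside the validate/up operations shown earlier in this thread:

```
# Raise the Helm timeout for the logstore project to 10 minutes (value is in seconds).
pulumi -C ./pulumi/python/kubernetes/logstore config set logstore:helm_timeout 600

# Tear down the failed deployment, then run it again.
./pulumi/python/runner -s devops -p aws destroy   # "destroy" is assumed; only "validate" and "up" appear in this thread
./pulumi/python/runner -s devops -p aws up
```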

efcunha (Author) commented Sep 2, 2022

Hello, no problem - I like to keep testing, and anything new is a pleasure for me.

falyoun commented Feb 27, 2023

Same here
