
Conversation

@pcfreak30

Hello.

So this PR might be over-complicated/over-engineered. I have not worked with Kubernetes before, and it feels like I spent several days figuring this out, with a lot of AI help.

The key things it adds are:

ingressHost := pmanifest.IngressHost(lid, b.Name())
env = addIfNotPresent(envVarsAlreadyAdded, env, envVarAkashIngressHostname, fmt.Sprintf("%s.%s", ingressHost, b.settings.DeploymentIngressDomain))

svc := &b.deployment.ManifestGroup().Services[b.serviceIdx]

// Add hostnames from service expose configurations
for _, expose := range svc.Expose {
	if expose.IsIngress() {
		// Add custom hostnames if specified
		for idx, hostname := range expose.Hosts {
			env = addIfNotPresent(envVarsAlreadyAdded, env,
				fmt.Sprintf("%s_%d_%d", envVarAkashIngressCustomHostname, expose.Port, idx),
				hostname)
		}
	}

	if expose.Global {
		// Add external port mappings
		if svc := b.service; svc != nil {
			if nodePort, exists := svc.portMap[int32(expose.Port)]; exists {
				env = addIfNotPresent(envVarsAlreadyAdded, env,
					fmt.Sprintf("%s_%d", envVarAkashExternalPort, expose.Port),
					fmt.Sprintf("%d", nodePort))
			}
		}
	}
}

The hostnames are now much easier to deal with overall. The ports, however, put me through rabbit-hole hell.

  • I am not sure whether a Service can or should be created before the StatefulSet/Deployment.
  • NodePorts are only assigned AFTER the Service is created, so we have to run the service apply twice.
  • We ALSO have to re-apply the StatefulSet/Deployment so it picks up the generated env vars.
  • We have to hack the builder factories because of the odd struct embedding of workload: we pass an optional service pointer to the statefulset/deployment builder factory and have it call a new setService helper. I also found it has to be called directly rather than on the embedded workload reference, because of Go's method promotion through struct embedding.
  • waitForProcessedVersion was created, and the Kubernetes retry helper imported, because I was concerned about cached data being returned; some things appear to be processed asynchronously, and I hit a conflict error at least once. (A rough sketch of this two-pass flow follows this list.)
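
To make the two-pass flow concrete, here is a minimal sketch, not the PR's actual code, of what the list above describes using plain client-go: apply the Service first, read back the NodePorts the API server assigned, then update the workload's env vars under a conflict-retry. The helper names (collectNodePorts, patchWorkloadEnv), the AKASH_EXTERNAL_PORT_<port> variable name, and the single-container assumption are all illustrative.

package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// collectNodePorts (hypothetical helper) reads the Service back after it has been
// applied, because NodePort values only exist once the API server has allocated them.
func collectNodePorts(ctx context.Context, kc kubernetes.Interface, ns, name string) (map[int32]int32, error) {
	svc, err := kc.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	ports := make(map[int32]int32, len(svc.Spec.Ports))
	for _, p := range svc.Spec.Ports {
		ports[p.Port] = p.NodePort
	}
	return ports, nil
}

// patchWorkloadEnv (hypothetical second pass) re-fetches the Deployment and appends
// env vars carrying the assigned NodePorts, retrying on conflict because the object
// may have been modified concurrently or served from a stale cache.
func patchWorkloadEnv(ctx context.Context, kc kubernetes.Interface, ns, name string, ports map[int32]int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		dep, err := kc.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for port, nodePort := range ports {
			// Assumes a single container for brevity.
			dep.Spec.Template.Spec.Containers[0].Env = append(
				dep.Spec.Template.Spec.Containers[0].Env,
				corev1.EnvVar{
					Name:  fmt.Sprintf("AKASH_EXTERNAL_PORT_%d", port),
					Value: fmt.Sprintf("%d", nodePort),
				})
		}
		_, err = kc.AppsV1().Deployments(ns).Update(ctx, dep, metav1.UpdateOptions{})
		return err
	})
}

In the PR itself this corresponds to the service apply → builder → re-apply sequence described above, with waitForProcessedVersion covering the caching concern.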

This PR is currently a draft to get feedback on how this should best be implemented versus what I have done. I also generally don't write unit tests, so if they are needed, please advise where they should go.

This is what has worked for me so far, and I will be deploying it to prod for my project's workloads.

My use case is MySQL master & slave servers that have to identify themselves (host and port) and register with etcd. I am building a Docker image that acts as one side of the cluster at https://github.com/LumeWeb/akash-mysql, with https://github.com/LumeWeb/akash-proxysql being the other.
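
For context on the consumer side, here is a rough sketch of how a container could use the new variables to build the address it advertises before registering with etcd. The AKASH_EXTERNAL_PORT_3306 name is my assumption of how the per-port variable would look for a MySQL workload, and the actual etcd registration is omitted.

package main

import (
	"fmt"
	"os"
)

func main() {
	host := os.Getenv("AKASH_INGRESS_HOST")
	// 3306 is just an example container port; the suffix follows the
	// <VAR>_<port> scheme used by the builder code above.
	port := os.Getenv("AKASH_EXTERNAL_PORT_3306")
	if host == "" || port == "" {
		fmt.Fprintln(os.Stderr, "external access information not available")
		os.Exit(1)
	}
	advertise := fmt.Sprintf("%s:%s", host, port)
	fmt.Println("would register in etcd:", advertise)
}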

Kudos.

Add environment variables to expose:
- Generated ingress hostname (AKASH_INGRESS_HOST)
- Custom ingress hostnames per port (AKASH_INGRESS_CUSTOM_HOST_<port>_<index>)

This allows containers to be aware of their external access points.

- Add port mapping to track assigned NodePorts for container ports
- Modify deployment/statefulset builders to use service NodePort info
- Update workload env vars to use actual NodePort values
- Add retry and version tracking for deployment/statefulset updates
- Change deployment order to ensure services exist before workloads

This change ensures external port environment variables reflect actual
NodePort assignments rather than using static external port values.
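
As a purely illustrative note on the naming scheme above, a container could list everything the change injects by filtering its environment; the example values are made up, and the external-port name is assumed from the builder code rather than stated in the commit messages.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Print every Akash-related variable visible to the container.
	for _, kv := range os.Environ() {
		if strings.HasPrefix(kv, "AKASH_") {
			fmt.Println(kv)
		}
	}
	// Hypothetical output:
	//   AKASH_INGRESS_HOST=abcd1234.ingress.provider.example.com
	//   AKASH_INGRESS_CUSTOM_HOST_80_0=www.example.com
	//   AKASH_EXTERNAL_PORT_3306=31234
}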
@troian
Member

troian commented Feb 14, 2025

@pcfreak30 thanks for bringing this up.
Can you first create an issue in https://github.com/akash-network/support and describe the issue and proposed solution?

@pcfreak30
Author

@pcfreak30 thanks for bringing this up. Can you first create an issue in https://github.com/akash-network/support and describe the issue and proposed solution?

I can; however, pretty much everything needed has already been said here. Please clarify?
