Create user server role #113
base: master
Walkthrough

New Ansible playbooks, tasks, handlers, templates, and configuration files were introduced to automate the setup of user servers, including FRP and Slurm services, NIS, NFS, system configuration, and package management. The inventory and playbooks were expanded to include new hosts and groups, with supporting service unit files and templates added for deployment.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Ansible
    participant UserServer
    participant Systemd
    participant Services
    Ansible->>UserServer: Deploys configuration files, templates, and scripts
    Ansible->>Systemd: Installs and reloads service unit files (FRP, Slurm, NFS, NIS)
    Ansible->>UserServer: Installs packages, sets up directories and permissions
    Systemd->>Services: Starts/restarts FRP, Slurm, NIS, NFS, and other services
    Services->>UserServer: Services become active and configured
```
Actionable comments posted: 23
♻️ Duplicate comments (2)
files/frpc_cloud2.service (1)
5-11: Repeat hardening & whitespace fixes as in `frpc_cloud1_bk.service`. Apply the exact same sandboxing directives and whitespace cleanup to keep both unit files consistent and secure.
roles/userserver/tasks/frpc.yml (1)
58-68: Copy destination depends on the previous bug. Because of the nesting issue above, these copies will point to a non-existent path and fail. Fixing the unarchive step will resolve this.
🧹 Nitpick comments (15)
files/frpc_cloud1_bk.service (2)
1-4: Trim trailing spaces in section headers. Extra spaces after `[Unit]` & `[Service]` aren't fatal but clutter `systemctl cat` output and break copy-pasting.

```diff
-[Unit] 
+[Unit]
 ...
-[Service] 
+[Service]
```
5-11: Harden the service sandbox. FRP runs as a root-capable binary; add minimal hardening:

```diff
 [Service]
 Type=simple
 User=frpclient
+CapabilityBoundingSet=CAP_NET_BIND_SERVICE
+NoNewPrivileges=true
+PrivateTmp=true
 Restart=always
 RestartSec=5s
```

roles/userserver/tasks/packages.yml (1)
80-85: Consider `apt` state pinning for Docker packages. Mixing upstream Docker packages with distro ones often causes dependency drift after upgrades. Pin versions or add

```yaml
    update_cache: yes
    cache_valid_time: 3600
```

so nightly runs don't unexpectedly upgrade the Docker engine.

roles/userserver/tasks/quarantine.yml (1)
1-2: Fix typo and align module syntax. 'quarnantine' → 'quarantine'. Also prefer the full mapping form for clarity:

```diff
-- name: Configure quarantine in /etc/bash.bashrc
-  copy: src=files/bash.bashrc dest=/etc/bash.bashrc
+- name: Configure quarantine bashrc
+  copy:
+    src: files/bash.bashrc
+    dest: /etc/bash.bashrc
```

roles/userserver/templates/watchdog.j2 (1)
32-33: Consider bumping `priority`. `realtime=yes` without an explicit `sched_rt_runtime_us` adjustment can starve other kernel threads on heavily loaded boxes. Validate that `priority=1` is low enough for your workload or document the rationale.

roles/userserver/tasks/main.yml (1)

1-6: Duplicate task names hinder `--list-tasks` diagnostics. Both tasks share the identical name "enable ssh keys"; this makes troubleshooting harder.
```diff
-- name: enable ssh keys
+- name: enable ssh key for beatom
 ...
-- name: enable ssh keys
+- name: enable ssh key for devgab
```

roles/userserver/handlers/main.yml (2)
1-22: Inconsistent module usage. Mixing the legacy `service:` shortcut with the `systemd:` module complicates handler maintenance and feature parity (e.g., `enabled=` is not honoured by `service` on all distros). Stick to `ansible.builtin.systemd` for systemd hosts.

Also applies to: 30-32, 51-52
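For reference, a converted handler might look like the following sketch; the handler name `restart frpc` and unit name `frpc` are illustrative assumptions, not taken from this PR:

```yaml
# Sketch: handler using ansible.builtin.systemd (handler/unit names assumed)
- name: restart frpc
  ansible.builtin.systemd:
    name: frpc
    state: restarted
    enabled: true
    daemon_reload: true  # picks up edited unit files before restarting
```

Using `daemon_reload: true` on the handler also removes the need for a separate "reload systemd" handler in many cases.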
33-35: Use `command` for idempotent grub recreation. `update-grub` needs no shell features. Switching to `ansible.builtin.command` avoids `/bin/sh -c` overhead and lint warnings.

roles/userserver/tasks/repos.yml (1)
1-3: `pro config` likely always reports "changed". The command has no side-effects after the first run, yet Ansible will mark the task as changed. Add `changed_when: false` to keep reports clean.

files/slurmctld.service (1)
6-18: Service runs as root without PIDFile or ExecReload. Running `slurmctld` as root increases the blast radius; the upstream unit typically uses the `slurm` user. Also consider uncommenting `PIDFile=` and adding an `ExecReload` so `systemctl reload` works cleanly.

roles/userserver/templates/slurm.conf.j2 (2)
57-59: `LLN=yes` on both partitions may starve interactive jobs. `LLN` (least-loaded-node allocation) is powerful but can mis-prioritise if all nodes belong to both partitions. Ensure this aligns with your scheduling policy.
64-67: Hard-coded path to `task_prolog.sh`. If the script is updated via Ansible, remember to add a handler to restart slurmd so changes propagate.

roles/userserver/tasks/file-config.yml (1)
36-38: Task label vs. value inconsistency. The task title says "enable softdog driver" but you set `watchdog_module="iTCO_wdt"`. Either rename the task or use `softdog` to avoid confusion for the next operator.

roles/userserver/tasks/nfs.yml (1)
7-10: Misleading task name. Two tasks are both labelled "autofs installed"; the second actually removes `cachefilesd`. Consider renaming to avoid confusion in play output.

roles/userserver/tasks/slurm.yml (1)
76-96: Compile step not idempotent. `configure` is re-run on every play, marking `slurm_configure` as changed each time, which triggers a full rebuild. Add `creates: /opt/slurm/24.05.1/bin/srun` (or similar) to the configure task to make it idempotent.
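Under that suggestion, the guarded configure task could be sketched roughly as follows; the source directory and configure flags are assumptions, only the `creates:` path comes from the comment above:

```yaml
- name: slurm_configure  # sketch; chdir and flags are assumed
  ansible.builtin.command:
    cmd: ./configure --prefix=/opt/slurm/24.05.1
    chdir: /opt/slurm/src
    creates: /opt/slurm/24.05.1/bin/srun  # skip the rebuild once installed
```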
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (29)

- files/frpc_cloud1_bk.service (1 hunks)
- files/frpc_cloud1_bk.toml (1 hunks)
- files/frpc_cloud2.service (1 hunks)
- files/frpc_cloud2.toml (1 hunks)
- files/slurmctld.service (1 hunks)
- files/slurmdbd.service (1 hunks)
- files/users.exports (1 hunks)
- files/yp.conf (1 hunks)
- initial-setup.yml (1 hunks)
- inventory (1 hunks)
- roles/userserver/handlers/main.yml (1 hunks)
- roles/userserver/tasks/file-config.yml (1 hunks)
- roles/userserver/tasks/frpc.yml (1 hunks)
- roles/userserver/tasks/hosts.yml (1 hunks)
- roles/userserver/tasks/main.yml (1 hunks)
- roles/userserver/tasks/nfs.yml (1 hunks)
- roles/userserver/tasks/nis.yml (1 hunks)
- roles/userserver/tasks/packages.yml (1 hunks)
- roles/userserver/tasks/quarantine.yml (1 hunks)
- roles/userserver/tasks/repos.yml (1 hunks)
- roles/userserver/tasks/slurm.yml (1 hunks)
- roles/userserver/tasks/system-config.yml (1 hunks)
- roles/userserver/templates/99-backports.j2 (1 hunks)
- roles/userserver/templates/hosts.j2 (1 hunks)
- roles/userserver/templates/slurm.conf.j2 (1 hunks)
- roles/userserver/templates/slurmdbd.conf.j2 (1 hunks)
- roles/userserver/templates/watchdog.j2 (1 hunks)
- site.yml (1 hunks)
- vars/slurm.yml (1 hunks)
🔇 Additional comments (16)
vars/slurm.yml (1)
8-14: LGTM! Proper use of Ansible Vault for sensitive data. The encrypted password variable follows security best practices for protecting database credentials.
inventory (2)
71-75: LGTM! Proper addition of compute nodes. The new compute nodes follow the existing naming convention and are correctly placed in the computenodes group.
82-85: LGTM! Well-structured userservers group. The role assignments (nis_role, slurmctld) are clearly defined and align with the infrastructure requirements.
roles/userserver/templates/hosts.j2 (1)
43-44: Fix duplicate IPMI entries. Lines 43-44 contain identical entries for the backup server IPMI interface.

```diff
-172.16.69.64 ipmi-bs ipmi-cicbs
-172.16.69.64 ipmi-bs ipmi-cicbs
+172.16.69.64 ipmi-bs ipmi-cicbs
```

Likely an incorrect or invalid review comment.
files/frpc_cloud2.toml (1)
17-24: LGTM! Proper FRP proxy configuration. The SSH proxy configuration with health checks is correctly configured.
files/yp.conf (1)
15-15: LGTM! NIS server update aligns with inventory. The change from cicus04 to dnpus01 is consistent with the new userservers group configuration.
site.yml (1)
20-25: Confirm inventory group consistency. The play targets `userservers`, which is correct, but be sure the inventory file contains that exact group name and not `userserver` to avoid silent skips. Align naming across all playbooks (see issue flagged in initial-setup.yml).
1-3: Looks good. Template is minimal and correct; no issues spotted.
roles/userserver/templates/slurmdbd.conf.j2 (1)
6-9: Expose `StoragePort` explicitly. If MySQL is not listening on the default port (3306) the daemon will silently fail to connect. Add an explicit port parameter (`StoragePort=`) or make it templated:

```
StoragePort={{ mysql_slurmdb_port | default(3306) }}
```

roles/userserver/tasks/quarantine.yml (1)
11-12: Ensure `/scratch` survives reboots with correct perms. If `/scratch` is on a separate mount or tmpfs it may vanish. Add a `mount:` task or place the directory creation in a handler triggered after filesystem mounts.
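A `mount:` task along the lines suggested could look like this sketch; the device and filesystem type are placeholders, since the PR does not show where /scratch actually lives:

```yaml
- name: mount /scratch  # sketch; src and fstype are hypothetical
  ansible.posix.mount:
    path: /scratch
    src: /dev/mapper/vg0-scratch
    fstype: ext4
    opts: defaults
    state: mounted
```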
1-5: Modifying root's GECOS can break automated tooling. Some LDAP/PAM setups rely on the default comment. Double-check downstream expectations before merging.
roles/userserver/tasks/repos.yml (1)
10-14: Blindly deleting `trusted.gpg` may break other repos. That file can contain keys used by Ubuntu itself. Consider pruning specific keys with `apt_key` instead of wholesale removal.
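Pruning a single key instead could be sketched as follows; the key ID shown is a placeholder, not a real key from this repository:

```yaml
- name: remove only the unwanted repo key  # sketch; key id is hypothetical
  ansible.builtin.apt_key:
    id: 0123456789ABCDEF0123456789ABCDEF01234567
    state: absent
```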
46-55: Conflicting `YPPWDDIR` values. `/etc/default/nis` sets `YPPWDDIR=/etc`, but later you overwrite the Makefile to `/var/lib/extrausers`. Keep them consistent to avoid map build failures.
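One way to keep the two in sync, assuming `/var/lib/extrausers` is the intended value (that choice is the author's call, not confirmed here):

```yaml
- name: align YPPWDDIR with the NIS Makefile  # sketch
  ansible.builtin.lineinfile:
    path: /etc/default/nis
    regexp: '^YPPWDDIR='
    line: YPPWDDIR=/var/lib/extrausers
```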
50-58: `NodeName=DEFAULT` lacks `Sockets`/`Cores`/`Threads`/`RealMemory`. Many Slurm sub-commands break when these fields are absent. Add conservative defaults or override per-node.
31-35: Handler name mismatch. You notify `restart timesyncd`, but most distros ship a `systemd-timesyncd` unit. Ensure the handler name matches the actual service or the restart will silently skip.
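A handler matching the real unit name might be sketched as follows (the handler name must equal whatever string the tasks notify):

```yaml
- name: restart timesyncd  # must match the notify string in the tasks
  ansible.builtin.systemd:
    name: systemd-timesyncd  # actual unit name on most distros
    state: restarted
```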
114-118: `set_fact` loop overwrites itself. Looping over every host means the last matching host wins, which may be non-deterministic. Break after the first assignment or use `selectattr | first` to pick a single master.
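The `selectattr | first` variant could be sketched like this; the group name and the `slurmctld` hostvar are assumptions based on the inventory described above:

```yaml
- name: pick a single slurm master  # sketch; group and variable names assumed
  ansible.builtin.set_fact:
    slurm_master: >-
      {{ groups['userservers']
         | map('extract', hostvars)
         | selectattr('slurmctld', 'defined')
         | map(attribute='inventory_hostname')
         | first }}
  run_once: true
```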
```
#Login/User Servers IPMI
172.16.69.50 ipmi-us01 ipmi-cicus01
172.16.69.51 ipmi-us02 ipmi-cicus02
172.16.69.65 cicus03-mgmt.douglasrtss.qc.ca cicus03-mgmt ipmi-us03
```
Fix typo in domain name.
Missing dot in domain name "douglasrtss.qc.ca" should be "douglas.rtss.qc.ca".
```diff
-172.16.69.65 cicus03-mgmt.douglasrtss.qc.ca cicus03-mgmt ipmi-us03
+172.16.69.65 cicus03-mgmt.douglas.rtss.qc.ca cicus03-mgmt ipmi-us03
```
```toml
tls_enable = true
#transport.protocol = "kcp"
#transport.protocol = "quic"
auth.token = "raiCeeshie6ahz1ahre2Eegha5eeng8Shahx1Ma8vae0xeepheiShieshaeN5eic"
```
Security concern: Hardcoded authentication token.
The authentication token is stored in plaintext. Consider using Ansible variables to inject this from an encrypted source.
```diff
-auth.token = "raiCeeshie6ahz1ahre2Eegha5eeng8Shahx1Ma8vae0xeepheiShieshaeN5eic"
+auth.token = "{{ frp_auth_token }}"
```
```yaml
- name: generate hostsfile
  template: src=hosts.j2 dest=/etc/hosts
```
🛠️ Refactor suggestion
Specify owner, group and mode for /etc/hosts.
Relying on the template module defaults can leave file permissions inconsistent across distros or after prior manual edits. Add explicit settings to guarantee idempotence.
```diff
-  template: src=hosts.j2 dest=/etc/hosts
+  template:
+    src: hosts.j2
+    dest: /etc/hosts
+    owner: root
+    group: root
+    mode: '0644'
```
```yaml
- hosts:
  - computenodes
  - workstations
  - userserver
```
Group name typo breaks targeting.
`userserver` here does not match the new inventory group `userservers` (plural) used elsewhere, so this play never runs on the intended hosts.
```diff
-  - userserver
+  - userservers
```
🛠️ Refactor suggestion
`hosts:` expects a scalar, not a YAML list. Although some YAML parsers tolerate it, Ansible expects `hosts` to be a single string pattern. Join the groups with a comma for portability.
```diff
-- hosts:
-  - computenodes
-  - workstations
-  - userservers
+- hosts: computenodes,workstations,userservers
```

Committable suggestion skipped: line range outside the PR's diff.
```
/tank 172.16.67.0/24(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash) \
      127.0.0.1(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash) \
      172.16.67.231(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash) \
      10.116.134.0/24(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash) \
      10.116.130.0/24(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash) \
      cichm*(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash) \
      cicws*(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash) \
      ciccs*(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash) \
      dnpws*(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash) \
      cicss*(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash)
```
Duplicate `fsid=0` options can confuse exportfs.
`fsid=0` should appear only once per export path. Repeating it for every client spec may trigger warnings or undefined behaviour. Keep it on the first entry and drop it from the continuations.
```diff
-/tank 172.16.67.0/24(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash) \
-      127.0.0.1(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash) \
+/tank 172.16.67.0/24(rw,async,fsid=0,crossmnt,no_subtree_check,root_squash) \
+      127.0.0.1(rw,async,crossmnt,no_subtree_check,root_squash) \
       172.16.67.231(rw,async,crossmnt,no_subtree_check,root_squash) \
       10.116.134.0/24(rw,async,crossmnt,no_subtree_check,root_squash) \
       10.116.130.0/24(rw,async,crossmnt,no_subtree_check,root_squash) \
       cichm*(rw,async,crossmnt,no_subtree_check,root_squash) \
       cicws*(rw,async,crossmnt,no_subtree_check,root_squash) \
       ciccs*(rw,async,crossmnt,no_subtree_check,root_squash) \
       dnpws*(rw,async,crossmnt,no_subtree_check,root_squash) \
       cicss*(rw,async,crossmnt,no_subtree_check,root_squash)
```

Committable suggestion skipped: line range outside the PR's diff.
```yaml
- name: watchdog enable
  lineinfile: dest=/etc/default/watchdog regexp=^run_watchdog line=run_watchdog=1
  notify:
  - restart watchdog
```
Handlers are commented out – configuration changes never take effect.
The two watchdog-related tasks add/replace config but the `notify:` block is commented. Unless another task restarts watchdog, the service keeps the old config.
Uncomment the notifications or add a single final task that restarts watchdog if any of these files changed.
Also applies to: 36-45
```yaml
- name: enable unattended upgrades
  copy: src=files/50unattended-upgrades dest=/etc/apt/apt.conf.d/

- name: enable periodic upgrades
  copy: src=files/10periodic dest=/etc/apt/apt.conf.d/

- name: enable apt proxy
  copy: src=files/99proxy dest=/etc/apt/apt.conf.d/
```
🛠️ Refactor suggestion
Drop the `files/` prefix for role-local copies.
Inside a role, `copy:` automatically looks in the role's `files` directory. Prefixing the path with `files/` resolves to `roles/userserver/files/files/…` at runtime, which will 404.
```diff
-  copy: src=files/50unattended-upgrades dest=/etc/apt/apt.conf.d/
+  copy: src=50unattended-upgrades dest=/etc/apt/apt.conf.d/
```

Repeat for every other copy using the same pattern in this task file.
```yaml
- name: Ensure FRP version directory exists
  file:
    path: /opt/frp/frp_0.53.2_linux_amd64
    state: directory
    owner: root
    group: frpclient
    mode: '0770'

- name: Extract FRP
  unarchive:
    src: /opt/frp/frp_0.53.2_linux_amd64.tar.gz
    dest: /opt/frp/frp_0.53.2_linux_amd64
    remote_src: yes
    creates: /opt/frp/frp_0.53.2_linux_amd64/frpc
    owner: root
    group: frpclient
    mode: '0770'
```
Extract path will nest the archive one level too deep.
The tarball already contains a top-level directory `frp_0.53.2_linux_amd64`. Extracting it into the directory of the same name yields `/opt/frp/frp_0.53.2_linux_amd64/frp_0.53.2_linux_amd64/...`. Subsequent tasks expect `/opt/frp/frp_0.53.2_linux_amd64/frpc`, so the play will loop forever or fail.
```diff
-    dest: /opt/frp/frp_0.53.2_linux_amd64
+    dest: /opt/frp/frp_0.53.2_linux_amd64
+    extra_opts: ["--strip-components=1"]
```

Add `--strip-components=1` or extract to /opt/frp instead.
```yaml
- name: Verify that slurm directories exist
  file:
    path: "{{ item }}"
    state: directory
    owner: slurm
    group: slurm
    mode: '0755'
  loop:
    - /opt/slurm
    - /opt/slurm/logs
    - /opt/slurm/etc
    - /var/spool/slurmctld
```
`slurm` user/group are referenced before creation.
The `file:` loop sets ownership to `slurm:slurm`, but no user/group is created earlier.
Add:

```yaml
- name: Ensure slurm user and group exist
  ansible.builtin.user:
    name: slurm
    system: yes
    shell: /usr/sbin/nologin
```

before this block.
```yaml
- name: Copy slurmctld and slurmdbd service files
  copy:
    src: "{{ item }}"
    dest: /etc/systemd/system/
  loop:
    - files/slurmctld.service
    - files/slurmdbd.service
  notify: reload systemd
```
🛠️ Refactor suggestion
Use role-relative paths for service units.
`src: files/slurmctld.service` assumes the file is under the playbook root.
When these tasks live in a role, drop the prefix:
```diff
-    src: "{{ item }}"
+    src: "{{ item | basename }}"
```

…and set the loop list to just the filenames.
gdevenyi left a comment
This can be pared way back.
- remove all the repo additions
- remove all package installation but the packages mandatory for the services needed
- update the frp version to latest (all other servers need this too)
- vault the frp keys (already rotated see servers)
- no UPS attached to these
- no apptainer
- don't modify grub config
- update ntp to ca.pool.ntp.org
- Slurm from new deb packages, so no need for build scripts
- password required for sudoers
- drop adding hosts file
- drop watchdog
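For the NTP item in the list above, a minimal sketch of pointing systemd-timesyncd at the requested pool (the module choice and handler name are assumptions, not taken from this PR):

```yaml
- name: set NTP server to ca.pool.ntp.org  # sketch
  ansible.builtin.lineinfile:
    path: /etc/systemd/timesyncd.conf
    regexp: '^#?NTP='
    line: NTP=ca.pool.ntp.org
  notify: restart timesyncd  # assumes such a handler exists
```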
Summary by CodeRabbit
New Features
Bug Fixes
Chores