Description
Part of: #408
Story: SSH Key Cluster Sync Service
[Conversation Reference: "This includes moving cached objects to postgres" for SSH keys]
Story Overview
Objective: Implement a per-node sync service that reads SSH key metadata and encrypted key material from PostgreSQL and writes the corresponding key files to the local ~/.ssh/ directory. This ensures all cluster nodes have the same SSH keys available for git operations (clone, pull) on golden repositories.
User Value: When an SSH key is added via the admin UI on any node, all cluster nodes automatically receive the key and can use it for git operations. No manual key distribution across nodes.
Acceptance Criteria
AC1: SSH Key Material Stored in PostgreSQL
Scenario: SSH keys are stored in PostgreSQL with encrypted key material.
Given an admin adds an SSH key via the admin UI
When the key is stored
Then the key metadata (name, host assignment, fingerprint) is in PostgreSQL
And the private key material is stored in PostgreSQL (same encryption as current SQLite storage)
And the public key is stored in PostgreSQL
And the key is accessible from any cluster node via the SSHKeysBackend
Technical Requirements:
- SSHKeysPostgresBackend (Story 6) stores complete key data including private key material
- Encryption at rest: same approach as current SQLite backend
- Key material column: TEXT (PEM encoded, encrypted)
AC2: Per-Node Sync Service Writes Keys to Local ~/.ssh/
Scenario: Each node syncs SSH keys from PostgreSQL to its local filesystem.
Given SSH keys exist in PostgreSQL
When the SSH Key Sync Service runs on a node
Then it reads all SSH keys from PostgreSQL
And it writes each private key to ~/.ssh/<key_name>
And it writes each public key to ~/.ssh/<key_name>.pub
And it sets correct file permissions (600 for private, 644 for public)
And it updates ~/.ssh/config with host assignments
Technical Requirements:
- SSHKeySyncService in src/code_indexer/server/services/ssh_key_sync_service.py
- Decrypt key material from PostgreSQL
- Write to ~/.ssh/ with correct permissions
- Update ~/.ssh/config entries for host-key mappings
- Idempotent: re-running produces the same result
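A minimal sketch of the permission-safe write step described above. The function name `write_key_pair` and its signature are illustrative assumptions, not the actual service API; the 700/600/644 modes come from the acceptance criteria.

```python
import os
from pathlib import Path


def write_key_pair(key_name: str, private_pem: str, public_key: str) -> None:
    """Write one key pair into ~/.ssh/ with 600 (private) / 644 (public)."""
    ssh_dir = Path.home() / ".ssh"
    ssh_dir.mkdir(mode=0o700, exist_ok=True)

    priv_path = ssh_dir / key_name
    pub_path = ssh_dir / f"{key_name}.pub"

    # Create the private key with mode 600 from the start, so it is never
    # readable by other users even momentarily.
    fd = os.open(priv_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(private_pem)
    os.chmod(priv_path, 0o600)  # enforce mode if the file already existed

    pub_path.write_text(public_key)
    os.chmod(pub_path, 0o644)
```

Passing the mode to `os.open` (rather than writing first and chmod-ing after) avoids a window where the private key is created with the default umask.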
AC3: Sync on Startup and on Change
Scenario: Keys are synced at startup and whenever keys change.
Given the server starts in cluster mode
When startup runs
Then the SSH Key Sync Service performs a full sync from PostgreSQL to local ~/.ssh/
And after startup, it periodically checks for changes (every 60 seconds)
When a new key is added or an existing key is deleted in PostgreSQL
Then the next sync cycle writes or removes the key in local ~/.ssh/
Technical Requirements:
- Full sync on startup
- Periodic check every 60 seconds
- Detect changes: compare PostgreSQL key list with local key list
- Add new keys, remove deleted keys, update modified keys
- Log each sync action: "Added key 'github-deploy' to ~/.ssh/"
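The "detect changes" requirement above can be sketched as a set diff keyed on key name and fingerprint. The helper name `diff_keys` and the map shape (`key_name -> fingerprint`) are assumptions for illustration, not the service's actual data model.

```python
def diff_keys(
    pg_keys: dict[str, str], local_keys: dict[str, str]
) -> tuple[list[str], list[str], list[str]]:
    """Compare PostgreSQL keys with locally synced keys.

    Both arguments map key_name -> fingerprint. Returns the key names to
    add, update, and remove on this node.
    """
    to_add = [n for n in pg_keys if n not in local_keys]
    to_update = [
        n for n in pg_keys if n in local_keys and pg_keys[n] != local_keys[n]
    ]
    to_remove = [n for n in local_keys if n not in pg_keys]
    return to_add, to_update, to_remove
```

Comparing fingerprints (rather than file contents) keeps the periodic 60-second check cheap: no decryption is needed for unchanged keys.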
AC4: Key Deletion Sync
Scenario: Deleting a key from PostgreSQL removes it from all nodes.
Given an SSH key "old-key" exists on all nodes
When an admin deletes "old-key" via the admin UI
Then the next sync cycle on each node detects the deletion
And each node removes ~/.ssh/old-key and ~/.ssh/old-key.pub
And each node removes the corresponding ~/.ssh/config entry
Technical Requirements:
- Track known keys: compare PostgreSQL set with local set
- Remove local files for keys no longer in PostgreSQL
- Remove ~/.ssh/config entries for deleted keys
- Log: "Removed key 'old-key' from ~/.ssh/ (deleted from cluster)"
AC5: Standalone Mode Compatibility
Scenario: In standalone mode, SSH keys continue to work as-is.
Given the server is running in standalone mode
When SSH keys are managed
Then the existing local-only behavior is preserved
And no sync service runs
And keys are written directly to ~/.ssh/ as they are now
Technical Requirements:
- SSHKeySyncService only active in cluster mode
- Standalone mode: existing SSH key management unchanged
Implementation Status
- Core implementation complete
- Unit tests passing
- Integration tests passing
- E2E tests passing
- Code review approved
- Manual E2E testing completed
- Documentation updated
Technical Implementation Details
File Structure
src/code_indexer/server/services/
ssh_key_sync_service.py # SSHKeySyncService
Sync Algorithm
1. Read all SSH keys from PostgreSQL (via SSHKeysBackend)
2. List existing CIDX-managed keys in ~/.ssh/ (tracked via a manifest file)
3. For each key in PostgreSQL:
a. If not on disk: write key files, add to manifest
b. If on disk but different: update key files
c. If on disk and same: skip
4. For each key on disk (in manifest) but not in PostgreSQL:
a. Delete key files from disk
b. Remove from manifest
5. Regenerate ~/.ssh/config entries
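The five steps above can be sketched as a single pass. The manifest is modeled as a `key_name -> fingerprint` dict, and `write_key_files`, `remove_key_files`, and `regenerate_ssh_config` are hypothetical helpers standing in for the real file operations; none of these names come from the codebase.

```python
def sync_once(pg_keys, manifest, write_key_files, remove_key_files,
              regenerate_ssh_config):
    """One full sync pass.

    pg_keys:  key_name -> (fingerprint, private_pem, public_key)
    manifest: key_name -> fingerprint of keys this service wrote to disk
    Returns the updated manifest.
    """
    # Steps 3a/3b: write keys that are missing on disk or have changed.
    for name, (fingerprint, private_pem, public_key) in pg_keys.items():
        if manifest.get(name) != fingerprint:
            write_key_files(name, private_pem, public_key)
            manifest[name] = fingerprint
        # Step 3c: fingerprint matches -> skip.

    # Step 4: remove keys we wrote earlier that are gone from PostgreSQL.
    for name in list(manifest):
        if name not in pg_keys:
            remove_key_files(name)
            del manifest[name]

    # Step 5: rebuild ~/.ssh/config entries from the current key set.
    regenerate_ssh_config(pg_keys)
    return manifest
```

Because the pass only acts when the manifest disagrees with PostgreSQL, re-running it on an already-synced node performs no file writes, which is the idempotency requirement from AC2.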
Manifest File
~/.ssh/.cidx-managed-keys.json -- tracks which keys were written by the sync service to avoid touching user-managed keys.
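A possible shape for the manifest helpers. The file name matches the story; the JSON layout (`key_name -> fingerprint`) and the function names are assumptions for illustration.

```python
import json
from pathlib import Path


def _manifest_path() -> Path:
    # File name from the story; lives alongside the managed keys.
    return Path.home() / ".ssh" / ".cidx-managed-keys.json"


def load_manifest() -> dict[str, str]:
    """Read the manifest, or return an empty one on first run."""
    path = _manifest_path()
    if path.exists():
        return json.loads(path.read_text())
    return {}


def save_manifest(manifest: dict[str, str]) -> None:
    path = _manifest_path()
    path.parent.mkdir(mode=0o700, exist_ok=True)
    path.write_text(json.dumps(manifest, indent=2, sort_keys=True))
```

Keeping the manifest as a separate file means the sync service only ever deletes keys it wrote itself; user-managed keys in ~/.ssh/ are never listed in the manifest and are therefore never touched.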
Testing Requirements
- Automated: Sync writes keys from PostgreSQL to ~/.ssh/ with correct permissions.
- Automated: Key deletion in PostgreSQL leads to file removal.
- Automated: Idempotent re-sync produces no changes.
- Automated: Manifest tracks CIDX-managed keys correctly.
- Manual E2E: Add SSH key via admin UI, verify it appears on a second cluster node within 60 seconds. Delete the key, verify removal on both nodes.
Definition of Done
- SSHKeySyncService writes keys from PostgreSQL to local ~/.ssh/
- Sync on startup and periodic (60s) change detection
- Key deletion synced across nodes
- Correct file permissions (600/644)
- ~/.ssh/config updated with host assignments
- Manifest file prevents touching user-managed keys
- Standalone mode unchanged
- All tests pass