
"unable to parse volume ID 'local-zfs'" when adding volume to existing LXC container #1493

Open
andreaswolf opened this issue Aug 17, 2024 · 0 comments
Labels
🐛 bug Something isn't working

andreaswolf commented Aug 17, 2024

Describe the bug
Adding a new volume to an existing LXC container fails (presumably this only affects ZFS-backed storage).

The error returned is "Error: error updating container: received an HTTP 500 response - Reason: unable to parse volume ID 'local-zfs'"

To Reproduce
Steps to reproduce the behavior:

  1. Create an LXC container with a single root disk (see below for minimum example)
  2. Run terraform apply
  3. Add a mount point (see below)
  4. Run terraform apply

Minimal Terraform configuration that reproduces the issue:

resource "proxmox_virtual_environment_container" "volumetest" {
  provider = proxmox-bpg

  node_name = "proxmox01"

  unprivileged = true

  initialization {
    hostname = "volumetest"
  }

  console {
    enabled = true
    type    = "console"
  }

  disk {
    datastore_id = "local-zfs"
    size         = 10
  }

  operating_system {
    #template_file_id = proxmox_virtual_environment_file.latest_ubuntu_24_jammy_lxc_img.id
    # Or you can use a volume ID, as obtained from a "pvesm list <storage>"
    template_file_id = "local:vztmpl/nixos-24.05-20240707-2134_x86-64.tar.xz"
    type             = "nixos"
  }
}

The mount point I added in step 3:

  mount_point {
    volume    = "local-zfs"
    size      = "50G"
    path      = "/mnt/data"
    backup    = true
    replicate = true
  }
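For context (this is an assumption on my part, based on the pct.conf documentation rather than on provider internals): Proxmox only allocates a *new* mount-point volume when the volume is passed in the `STORAGE_ID:SIZE_IN_GiB` form; a bare storage ID such as `local-zfs` is instead parsed as an existing volume ID, which would explain the 500 error. The container config line Proxmox expects for a fresh allocation would look roughly like:

```
mp0: local-zfs:50,mp=/mnt/data,backup=1,replicate=1
```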

Expected behavior
The subvolume is created on "local-zfs" and attached to the container.

Additional context
Adding the volume works if it is already present at container creation. Removing the volume later then fails silently: Terraform reports success, but the volume is not actually detached.
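Given the behavior above, a possible stopgap is to declare the mount point in the initial resource so it is allocated together with the container. A minimal sketch (untested, reusing the reproduction config; all other blocks unchanged):

```hcl
resource "proxmox_virtual_environment_container" "volumetest" {
  # ... same settings as the reproduction config above ...

  disk {
    datastore_id = "local-zfs"
    size         = 10
  }

  # Declared at creation time, which works; only adding this block
  # to an already-existing container triggers the parse error.
  mount_point {
    volume    = "local-zfs"
    size      = "50G"
    path      = "/mnt/data"
    backup    = true
    replicate = true
  }
}
```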

  • Single or clustered Proxmox: clustered, two nodes
  • Proxmox version: 8.2.4
  • Provider version (ideally it should be the latest version): 0.62.0
  • Terraform/OpenTofu version: OpenTofu 1.7.1
  • OS (where you run Terraform/OpenTofu from): Ubuntu 22.04
  • Debug logs (TF_LOG=DEBUG terraform apply):
[…]
OpenTofu will perform the following actions:

  # proxmox_virtual_environment_container.volumetest will be updated in-place
  ~ resource "proxmox_virtual_environment_container" "volumetest" {
        id             = "116"
        tags           = []
        # (11 unchanged attributes hidden)

      ~ initialization {
            # (1 unchanged attribute hidden)

          + ip_config {
              + ipv4 {
                  + address = "dhcp"
                }
            }
        }

      + mount_point {
          + acl       = false
          + backup    = true
          + path      = "/mnt/data"
          + quota     = false
          + read_only = false
          + replicate = true
          + shared    = false
          + size      = "50G"
          + volume    = "local-zfs"
        }

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
2024-08-17T15:28:09.758+0200 [DEBUG] command: asking for input: "\nDo you want to perform these actions?"

Do you want to perform these actions?
  OpenTofu will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

2024-08-17T15:28:30.133+0200 [INFO]  backend/local: apply calling Apply
2024-08-17T15:28:30.133+0200 [DEBUG] Building and walking apply graph for NormalMode plan

[redacted]

2024-08-17T15:28:30.143+0200 [DEBUG] provider: starting plugin: path=.terraform/providers/registry.opentofu.org/bpg/proxmox/0.62.0/linux_amd64/terraform-provider-proxmox_v0.62.0 args=[".terraform/providers/registry.opentofu.org/bpg/proxmox/0.62.0/linux_amd64/terraform-provider-proxmox_v0.62.0"]
2024-08-17T15:28:30.143+0200 [DEBUG] provider: plugin started: path=.terraform/providers/registry.opentofu.org/bpg/proxmox/0.62.0/linux_amd64/terraform-provider-proxmox_v0.62.0 pid=3831884
2024-08-17T15:28:30.143+0200 [DEBUG] provider: waiting for RPC address: path=.terraform/providers/registry.opentofu.org/bpg/proxmox/0.62.0/linux_amd64/terraform-provider-proxmox_v0.62.0
2024-08-17T15:28:30.148+0200 [INFO]  provider.terraform-provider-proxmox_v0.62.0: configuring server automatic mTLS: timestamp="2024-08-17T15:28:30.148+0200"
2024-08-17T15:28:30.160+0200 [DEBUG] provider: using plugin: version=6
2024-08-17T15:28:30.160+0200 [DEBUG] provider.terraform-provider-proxmox_v0.62.0: plugin address: network=unix address=/tmp/plugin1758826353 timestamp="2024-08-17T15:28:30.159+0200"
2024-08-17T15:28:30.172+0200 [INFO]  provider.terraform-provider-proxmox_v0.62.0: Configuring the Proxmox provider...: @module=proxmox tf_mux_provider="*proto6server.Server" tf_req_id=35546dad-58ca-6af1-3fda-436fdcba1312 tf_rpc=ConfigureProvider @caller=/home/runner/work/terraform-provider-proxmox/terraform-provider-proxmox/fwprovider/provider.go:236 tf_provider_addr=registry.terraform.io/bpg/proxmox timestamp="2024-08-17T15:28:30.172+0200"
2024-08-17T15:28:30.184+0200 [WARN]  Provider "registry.opentofu.org/bpg/proxmox" produced an invalid plan for proxmox_virtual_environment_container.volumetest, but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .description: planned value cty.StringVal("") for a non-computed attribute
      - .timeout_clone: planned value cty.NumberIntVal(1800) for a non-computed attribute
      - .started: planned value cty.True for a non-computed attribute
      - .timeout_update: planned value cty.NumberIntVal(1800) for a non-computed attribute
      - .template: planned value cty.False for a non-computed attribute
      - .timeout_delete: planned value cty.NumberIntVal(60) for a non-computed attribute
      - .timeout_start: planned value cty.NumberIntVal(300) for a non-computed attribute
      - .start_on_boot: planned value cty.True for a non-computed attribute
      - .tags: planned value cty.ListValEmpty(cty.String) for a non-computed attribute
      - .timeout_create: planned value cty.NumberIntVal(1800) for a non-computed attribute
      - .console[0].tty_count: planned value cty.NumberIntVal(2) for a non-computed attribute
      - .mount_point[0].acl: planned value cty.False for a non-computed attribute
      - .mount_point[0].quota: planned value cty.False for a non-computed attribute
      - .mount_point[0].read_only: planned value cty.False for a non-computed attribute
      - .mount_point[0].shared: planned value cty.False for a non-computed attribute
proxmox_virtual_environment_container.volumetest: Modifying... [id=116]
2024-08-17T15:28:30.184+0200 [INFO]  Starting apply for proxmox_virtual_environment_container.volumetest
2024-08-17T15:28:30.185+0200 [DEBUG] proxmox_virtual_environment_container.volumetest: applying the planned Update change
2024-08-17T15:28:30.188+0200 [DEBUG] provider.terraform-provider-proxmox_v0.62.0: Sending authentication request: @caller=/home/runner/work/terraform-provider-proxmox/terraform-provider-proxmox/proxmox/api/ticket_auth.go:71 @module=proxmox path=/api2/json/access/ticket tf_req_id=3b2a9723-8920-a520-bcae-c641adb2ab58 tf_mux_provider=tf5to6server.v5tov6Server tf_provider_addr=registry.terraform.io/bpg/proxmox tf_resource_type=proxmox_virtual_environment_container tf_rpc=ApplyResourceChange timestamp="2024-08-17T15:28:30.188+0200"
2024-08-17T15:28:30.188+0200 [DEBUG] provider.terraform-provider-proxmox_v0.62.0: Sending HTTP Request: Content-Type=application/x-www-form-urlencoded Host=proxmox01[redacted]:8006 tf_http_req_body="[redacted]" tf_http_req_method=POST @caller=/home/runner/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/logging/logging_http_transport.go:162 @module=proxmox Content-Length=37 tf_http_req_uri=/api2/json/access/ticket tf_req_id=3b2a9723-8920-a520-bcae-c641adb2ab58 tf_resource_type=proxmox_virtual_environment_container tf_http_req_version=HTTP/1.1 User-Agent=Go-http-client/1.1 tf_http_op_type=request tf_rpc=ApplyResourceChange Accept-Encoding=gzip tf_http_trans_id=1eb452d9-e4e8-6958-4d11-77ea457e051a tf_mux_provider=tf5to6server.v5tov6Server tf_provider_addr=registry.terraform.io/bpg/proxmox timestamp="2024-08-17T15:28:30.188+0200"
2024-08-17T15:28:30.290+0200 [DEBUG] provider.terraform-provider-proxmox_v0.62.0: Received HTTP Response: tf_http_res_version=HTTP/1.1 tf_http_trans_id=1eb452d9-e4e8-6958-4d11-77ea457e051a tf_mux_provider=tf5to6server.v5tov6Server tf_provider_addr=registry.terraform.io/bpg/proxmox @module=proxmox Date="Sat, 17 Aug 2024 13:28:30 GMT" Pragma=no-cache Expires="Sat, 17 Aug 2024 13:28:30 GMT" tf_http_res_body="{\"data\":{\"CSRFPreventionToken\":\"66C0A57E:8SMzgsdez60O87XLSAGXIeCvs651H2pcLeyulv4lRFg\",\"cap\":{\"vms\":{\"VM.Backup\":1,\"VM.Config.HWType\":1,\"VM.Migrate\":1,\"VM.Snapshot.Rollback\":1,\"VM.PowerMgmt\":1,\"VM.Config.Options\":1,\"Permissions.Modify\":1,\"VM.Monitor\":1,\"VM.Snapshot\":1,\"VM.Clone\":1,\"VM.Config.Cloudinit\":1,\"VM.Config.Memory\":1,\"VM.Console\":1,\"VM.Config.Disk\":1,\"VM.Config.Network\":1,\"VM.Audit\":1,\"VM.Config.CPU\":1,\"VM.Config.CDROM\":1,\"VM.Allocate\":1},\"sdn\":{\"SDN.Audit\":1,\"SDN.Allocate\":1,\"Permissions.Modify\":1,\"SDN.Use\":1},\"nodes\":{\"Sys.Console\":1,\"Sys.Modify\":1,\"Permissions.Modify\":1,\"Sys.Incoming\":1,\"Sys.Syslog\":1,\"Sys.PowerMgmt\":1,\"Sys.Audit\":1,\"Sys.AccessNetwork\":1},\"access\":{\"User.Modify\":1,\"Group.Allocate\":1,\"Permissions.Modify\":1},\"mapping\":{\"Mapping.Audit\":1,\"Mapping.Use\":1,\"Mapping.Modify\":1,\"Permissions.Modify\":1},\"dc\":{\"Sys.Audit\":1,\"SDN.Audit\":1,\"SDN.Allocate\":1,\"SDN.Use\":1,\"Sys.Modify\":1},\"storage\":{\"Datastore.AllocateSpace\":1,\"Permissions.Modify\":1,\"Datastore.Allocate\":1,\"Datastore.AllocateTemplate\":1,\"Datastore.Audit\":1}},\"clustername\":\"[redacted]\",\"username\":\"root@pam\",\"ticket\":\"PVE:root@pam:66C0A57E::bCecuKYQD2mQS2SBPs89kDikKPB24gvsEzbJN3B9bs7FRXnIkLDlj11LOysnx9QorDPwsuADJvdxdQXkPusPifDtsvImGs8Lv62bMTIy66EdreayoVAF9jdGNTB8Dhmar37+ylWLDD8E8vVzvrUUH/jOSoghq3eTrBInJbsY7xhPZ0f16lzi36syuMxTAM3L6bfDffb4ugsmmMNbCbfxtf+zXPmL8hdjSM6MTKjvL/0DC3kc1yxwEzhOg4v5uRPpxC/T9xS6pB/j9LbiG97G6khDzjks+f9wVxd+SQmuLPKUE61wmvtAxhme6/vQOMGrOialWYYLYWIm
R8PhsxmW5w==\"}}" Cache-Control=max-age=0 tf_http_op_type=response tf_http_res_status_code=200 tf_http_res_status_reason="200 OK" tf_req_id=3b2a9723-8920-a520-bcae-c641adb2ab58 @caller=/home/runner/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/logging/logging_http_transport.go:162 Server=pve-api-daemon/3.0 Content-Type=application/json;charset=UTF-8 tf_resource_type=proxmox_virtual_environment_container tf_rpc=ApplyResourceChange timestamp="2024-08-17T15:28:30.289+0200"
2024-08-17T15:28:30.290+0200 [DEBUG] provider.terraform-provider-proxmox_v0.62.0: Sending HTTP Request: Accept=application/json tf_http_req_uri=/api2/json/nodes/proxmox01/lxc/116/config tf_http_trans_id=000c026b-7390-3769-65ac-1286d677a371 tf_req_id=3b2a9723-8920-a520-bcae-c641adb2ab58 tf_resource_type=proxmox_virtual_environment_container @module=proxmox Content-Length=271 Host=proxmox01[redacted]:8006 User-Agent=Go-http-client/1.1 tf_http_req_version=HTTP/1.1 tf_http_req_body="delete=net0&delete=net1&delete=net2&delete=net3&delete=net4&delete=net5&delete=net6&delete=net7&description=&hostname=volumetest&mp0=acl%3D0%2Cbackup%3D1%2Cmp%3D%2Fmnt%2Fdata%2Cquota%3D0%2Cro%3D0%2Creplicate%3D1%2Cshared%3D0%2Cvolume%3Dlocal-zfs&nameserver=&searchdomain=" tf_http_req_method=PUT tf_mux_provider=tf5to6server.v5tov6Server tf_rpc=ApplyResourceChange @caller=/home/runner/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/logging/logging_http_transport.go:162 Accept-Encoding=gzip Content-Type=application/x-www-form-urlencoded Cookie="PVEAuthCookie=PVE:root@pam:66C0A57E::bCecuKYQD2mQS2SBPs89kDikKPB24gvsEzbJN3B9bs7FRXnIkLDlj11LOysnx9QorDPwsuADJvdxdQXkPusPifDtsvImGs8Lv62bMTIy66EdreayoVAF9jdGNTB8Dhmar37+ylWLDD8E8vVzvrUUH/jOSoghq3eTrBInJbsY7xhPZ0f16lzi36syuMxTAM3L6bfDffb4ugsmmMNbCbfxtf+zXPmL8hdjSM6MTKjvL/0DC3kc1yxwEzhOg4v5uRPpxC/T9xS6pB/j9LbiG97G6khDzjks+f9wVxd+SQmuLPKUE61wmvtAxhme6/vQOMGrOialWYYLYWImR8PhsxmW5w==" Csrfpreventiontoken=66C0A57E:8SMzgsdez60O87XLSAGXIeCvs651H2pcLeyulv4lRFg tf_http_op_type=request tf_provider_addr=registry.terraform.io/bpg/proxmox timestamp="2024-08-17T15:28:30.290+0200"
2024-08-17T15:28:30.315+0200 [DEBUG] provider.terraform-provider-proxmox_v0.62.0: Received HTTP Response: tf_rpc=ApplyResourceChange Date="Sat, 17 Aug 2024 13:28:30 GMT" Server=pve-api-daemon/3.0 tf_http_res_body="{\"data\":null}" Content-Length=13 tf_http_res_version=HTTP/1.1 tf_req_id=3b2a9723-8920-a520-bcae-c641adb2ab58 tf_resource_type=proxmox_virtual_environment_container @caller=/home/runner/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/logging/logging_http_transport.go:162 Cache-Control=max-age=0 @module=proxmox tf_http_op_type=response tf_http_res_status_code=500 tf_http_trans_id=000c026b-7390-3769-65ac-1286d677a371 tf_mux_provider=tf5to6server.v5tov6Server tf_provider_addr=registry.terraform.io/bpg/proxmox Content-Type=application/json;charset=UTF-8 Expires="Sat, 17 Aug 2024 13:28:30 GMT" Pragma=no-cache tf_http_res_status_reason="500 unable to parse volume ID 'local-zfs'" timestamp="2024-08-17T15:28:30.315+0200"
2024-08-17T15:28:30.316+0200 [ERROR] provider.terraform-provider-proxmox_v0.62.0: Response contains error diagnostic: diagnostic_detail="" diagnostic_severity=ERROR tf_proto_version=6.6 tf_rpc=ApplyResourceChange tf_req_id=3b2a9723-8920-a520-bcae-c641adb2ab58 tf_resource_type=proxmox_virtual_environment_container @caller=/home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/tfprotov6/internal/diag/diagnostics.go:58 @module=sdk.proto diagnostic_summary="error updating container: received an HTTP 500 response - Reason: unable to parse volume ID 'local-zfs'" tf_provider_addr=registry.terraform.io/bpg/proxmox timestamp="2024-08-17T15:28:30.316+0200"
2024-08-17T15:28:30.319+0200 [DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot
2024-08-17T15:28:30.319+0200 [ERROR] vertex "proxmox_virtual_environment_container.volumetest" error: error updating container: received an HTTP 500 response - Reason: unable to parse volume ID 'local-zfs'
╷
│ Error: error updating container: received an HTTP 500 response - Reason: unable to parse volume ID 'local-zfs'
│ 
│   with proxmox_virtual_environment_container.volumetest,
│   on containers_volumetest.tf line 2, in resource "proxmox_virtual_environment_container" "volumetest":
│    2: resource "proxmox_virtual_environment_container" "volumetest" {
│ 
╵
2024-08-17T15:28:30.332+0200 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-08-17T15:28:30.335+0200 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.opentofu.org/bpg/proxmox/0.62.0/linux_amd64/terraform-provider-proxmox_v0.62.0 pid=3831884
2024-08-17T15:28:30.335+0200 [DEBUG] provider: plugin exited
@andreaswolf andreaswolf added the 🐛 bug Something isn't working label Aug 17, 2024