diff --git a/content/SCALE/GettingStarted/Configure/FirstTimeLogin.md b/content/SCALE/GettingStarted/Configure/FirstTimeLogin.md
index f4c25948c0..c1aa3385ad 100644
--- a/content/SCALE/GettingStarted/Configure/FirstTimeLogin.md
+++ b/content/SCALE/GettingStarted/Configure/FirstTimeLogin.md
@@ -57,10 +57,10 @@ The browser you use can impact the quality of your user experience. We generally
With the implementation of rootless login, root is no longer the default administrator username, rather you use the new admin user created during the installation process.
We recommend creating the admin user during the installation process and using it to log into SCALE.
-Based on the authentication method selected in step 4 of the SCALE [TrueNAS installer Console Setup]({{< relref "InstallingScale.md" >}}) process, you could see one of three sign-in splash screen options for the web UI.
+Based on the authentication method selected in step 4 of the SCALE [TrueNAS installer Console Setup]({{< relref "InstallingScale.md#using-the-truenas-installer-console-setup" >}}) process, you could see one of three sign-in splash screen options for the web UI.
* Selecting **1. Administrative user (admin)** opens the SCALE sign-in screen to log in with the admin username and password created during installation.
-* Selecting **2. Root user (not recommended** opens the SCALE sign-in screen to log in with the root username and the root password created during installation.
+* Selecting **2. Root user (not recommended)** opens the SCALE sign-in screen to log in with the root username and the root password created during installation.
* Selecting **3. Configuring using Web UI** opens a SCALE sign-in screen where you select the option for either the admin or root user and create the password.
If you select option 1, the root user still exists but with the password disabled by default, which means only the admin user can log into the system.
@@ -68,7 +68,7 @@ You can activate the password for the root user for some limited uses, but you s
### Logging In as Admin
-If you set up the admin user during the installation, enter the username **admin** and password you set up.
+If you created the admin user during installation using option **1. Administrative user (admin)**, enter the username **admin** and the password you created.
![LoginScreenSCALE](/images/SCALE/22.12/LoginScreenSCALE.png "TrueNAS SCALE Login Screen")
@@ -82,9 +82,13 @@ To create an admin user go to **Credentials > Local Users**, and click **Add** t
Follow the directions in [Managing Users]({{< relref "ManageLocalUsersScale.md" >}}) to create an admin user with all the settings it requires.
### Creating an Administrator Account at First Log in
-Selecting the option to create the root or administration user when you first log into SCALE presents a sign-in splash screen with two radio buttons. Select either the admin or root user option, then enter the password to use with that user. After selecting the option another sign-in splash screen displays where you enter the password for the administration user option you selected.
+If you select option **3. Configuring using Web UI** during installation, SCALE asks you to create the root or administrative user when you first log in. This option presents a sign-in splash screen with two radio buttons.
-After creating the login account, go to **Credentials > Local Users** screen. [Create the admin account]({{< relref "ManageLocalUsersSCALE.md" >}}) immediately after you enter the UI. Create or edit the [admin user account settings]({{< relref "ManageLocalUsersSCALE.md" >}}), enable the password, and click **Save**. After setting up the admin user, then edit the root user to disable the password and resume rootless login security hardening.
+![FirstTimeLoginInstallOpt3SCALE](/images/SCALE/22.12/FirstTimeLoginInstallOpt3SCALE.png "TrueNAS SCALE Login Screen Set Admin Password")
+
+Select either the admin or root user (not recommended) option, then enter the password to use with that user.
+
+If you chose **Root user (not recommended)** as the TrueNAS authentication method, go to the **Credentials > Local Users** screen and [create the admin account]({{< relref "ManageLocalUsersSCALE.md" >}}) immediately after you enter the UI. Configure the [admin user account settings]({{< relref "ManageLocalUsersSCALE.md" >}}), enable the password, and click **Save**. After setting up the admin user, edit the root user to disable its password and resume rootless login security hardening.
{{< expand "What happens if I disable both admin and root passwords at the same time?" "V">}}
If you disabled the root user password and did not create the admin user and enable that password, or you disable both admin and root user passwords and your session times out before you enable one of the passwords, SCALE displays a sign-in screen that allows you create a temporary password for one-time access.
diff --git a/content/SCALE/GettingStarted/Install/InstallEnterpriseHASCALE.md b/content/SCALE/GettingStarted/Install/InstallEnterpriseHASCALE.md
index d3457b8bd0..7d7acaa265 100644
--- a/content/SCALE/GettingStarted/Install/InstallEnterpriseHASCALE.md
+++ b/content/SCALE/GettingStarted/Install/InstallEnterpriseHASCALE.md
@@ -11,7 +11,7 @@ tag:
{{< toc >}}
{{< enterprise >}}
-TrueNAS SCALE Enterprise will be generally available with the release of SCALE 22.12.2.
+TrueNAS SCALE Enterprise is generally available with the release of SCALE 22.12.2.
Do not attempt to install Enterprise High Availability systems with TrueNAS SCALE until it becomes generally available or the deployment is experimental in nature.
Installing TrueNAS SCALE on High Availability (HA) systems is complicated and should be guided by Enterprise level support.
@@ -27,75 +27,116 @@ Incorrect use of CLI commands can further disrupt your system access and can pot
## Installing SCALE for an Enterprise (HA) System
-This article outlines a procedure to do a clean install of a SCALE Enterprise (HA) systems using an iso file.
-HA systems are dual controller systems. Execute this procedure on both controllers in the system. SCALE includes features and functions to help guide you with completing the process after you get to the SCALE UI.
+This article outlines a procedure for a clean install of a SCALE Enterprise High Availability (HA) system using an iso file.
+
+HA systems are dual-controller systems. The primary controller is referred to as controller 1 (sometimes also controller A) and the secondary controller as controller 2 (or controller B).
+{{< include file="/content/_includes/HAControllerInstallBestPracticeSCALE.md" type="page" >}}
+
+SCALE includes features and functions that help guide you through the configuration process after you install SCALE and access the web interface.
### Preparing for a Clean Install
For a list of SCALE Enterprise (HA) preparation information, see [Preparing for SCALE UI Configuration (Enterprise)]({{< relref "InstallPrepEnterprise.md" >}}).
Have this information handy to complete this procedure:
-* All the assigned network addresses and host names (VIP, controller A and B IP addresses)
+* All the assigned network addresses and host names (VIP, controller 1 and 2 IP addresses)
* Other network information including domain name(s), and DNS server, default gateway, alias or other static IP addresses
* The IPMI access addresses for each controller and the administration credentials for IPMI access to these addresses
* SCALE license file provided by iXsystems.
-* SCALE Storage Controller A and B serial numbers (refer to contracts or documentation provided with the system, or contact iXsystems Support and provide your contract number)
+* SCALE Storage Controller 1 (A) and 2 (B) serial numbers (refer to contracts or documentation provided with the system, or contact iXsystems Support and provide your contract number)
{{< hint info >}}
-HA system controllers each have serial numbers, the lower number assigned is for controller A (e.g. of two controller serial numbers assigned *A1-12345* and *A1-12346*, the *A1-12345* is for controller A and *A1-12346* is for controller B).
+HA system controllers each have a serial number. The lower number is assigned to controller 1 (for example, of two controller serial numbers *A1-12345* and *A1-12346*, *A1-12345* is for controller 1 and *A1-12346* is for controller 2).
{{< /hint >}}
When restoring after a clean install, also have ready:
* Storage data backups to import into the Enterprise HA system.
* System configuration file from the previous TrueNAS install.
-### Overview of the Install Procedure
+### Overview of the Installation Procedure
+{{< hint warning >}}
+{{< include file="/content/_includes/HAControllerInstallBestPracticeSCALE.md" type="page" >}}
+{{< /hint >}}
+
+There are two ways to install the HA dual controller system to ensure controller 1 comes online as the primary controller:
+
+* Install both controllers simultaneously, beginning with controller 1 and then immediately starting the install on controller 2.
+* Install each controller individually to specific points in the installation process.
+
+Simultaneous installation must start with controller 1 so it comes online first.
+Installing each controller individually follows a particular method to ensure controller 1 comes online as the primary controller.
+
+The sections in this article cover the primary steps as a simultaneous installation:
+
+1. [Download](#downloading-the-scale-install-file) the iso file from the TrueNAS website and prepare the USB flash drives if not using IPMI for remote access.
+2. [Log into your IPMI](#using-ipmi-to-install-the-iso-on-a-controller) system using the network address assigned to controller 1, and then establish a second connection with controller 2 in a new browser session.
+3. [Install SCALE using the iso file](#using-ipmi-to-install-the-iso-on-a-controller) and select the **Fresh Install** option.
+ Install on controller 1, then immediately begin installing on controller 2 in the other IPMI session to simultaneously install SCALE on both controllers.
+
+4. Use the DHCP-assigned IP address or assign the controller 1 IP address using the [Console Setup Menu](#configuring-the-network-with-console-setup-menu) to gain access to the SCALE UI.
+
+ Use the SCALE UI for system configuration as it has safety mechanisms in place to prevent disrupting network access that could require you to repeat the clean install to access your system.
+ However, if you are experienced with the Console setup menu, you can use it to configure the rest of the controller 1 network settings.
-The sections in this article cover these primary steps:
+5. [Log into the SCALE UI](#configuring-settings-in-the-scale-ui) for controller 1 to sign the EULA agreement and apply the system HA license.
+6. Disable failover to configure the rest of the network settings and edit the primary network interface on controller 1, and then enable failover.
+7. Complete the minimum storage requirement by adding or importing one pool on controller 1.
+8. Sign in using the Virtual IP (VIP) address.
+9. With controller 2 powered up, sync to peer from controller 1 to complete the install and make controller 2 the standby controller.
-1. [Download](#downloading-the-scale-install-file) the iso file from the TrueNAS website and prepare a USB flash drive to use if not using IPMI for remote access.
-2. [Log into your IPMI](#using-ipmi-to-install-the-iso-on-a-controller) system using the network address assigned to for controller A.
-3. [Install SCALE using the iso file](#using-ipmi-to-install-the-iso-on-a-controller) and use the **Fresh Install** option on controller A, and when complete, then repeat on controller B in the other IPMI session.
-4. Use the DHCP-assigned IP address or assign the controller A IP address using the [Console Setup Menu](#configuring-the-network-with-console-setup-menu) to gain access to the SCALE UI.
+The sections that follow describe these steps in detail.
- Use the SCALE UI for system configuration as it has safety mechanisms in place to prevent disrupting network access that could require you to repeat the clean install to access your system. However, if you are experienced with the Console Setup Menu and are using it to configure network settings you can configure the rest of the controller A network settings with the Console Setup Menu.
+#### Overview of the Alternative Installation Process
+This process of installing each controller sequentially has two methods:
-5. In a separate web browser session, log into the system IPMI using the network address assigned for controller B.
- Leave the controller A IPMI connection up and where you left it at after completing step 4, and then [repeat step 3 for controller B](#using-ipmi-to-install-the-iso-on-a-controller).
-6. [Log into the SCALE UI](#configuring-settings-in-the-scale-ui) to sign the EULA agreement on controller A and apply the system license.
-7. Disable failover to configure the rest of the network settings and edit the primary network interface on controller A, and then enable failover.
-8. Complete the minimum storage requirement by adding or importing one pool on controller A.
-9. Sign in using the Virtual IP (VIP) address.
+* Install and configure controller 1 up to the point where you are ready to sync to controller 2.
+ Then install controller 2 and reboot. When the console setup menu displays, switch back to controller 1 and sync to peer.
+ This synchronizes the completed configuration from controller 1 to controller 2 and keeps controller 1 designated as the primary controller.
+* Alternatively, begin installing controller 2 immediately after installing controller 1. When controller 2 finishes installing, power it off and keep it powered down.
+ When finished configuring controller 1, power up controller 2 and wait for it to finish booting. Switch back to controller 1 and sync the configuration to controller 2.
-These steps are described in detail in the sections that follow.
+The steps below outline the method where controller 2 is either powered off or not yet installed while you install and configure controller 1.
+These steps are nearly identical to the simultaneous installation steps above.
+
+1. Use either the prepared USB flash drive inserted into a USB port for controller 1 or log into an IPMI session and install SCALE on controller 1.
+ Finish the installation and allow controller 1 to complete its first boot.
+2. Use either the prepared USB flash drive inserted into a USB port for controller 2 or log into an IPMI session for controller 2 to install SCALE.
+ When the installation finishes, power down controller 2.
+3. Configure network settings on controller 1 either with the Console setup menu or using the UI.
+4. Log into controller 1 using the IP address assigned to controller 1.
+ Apply the HA license, sign the EULA, and complete the UI configuration to the point where you are ready to sync to peer on controller 1, but do not sync yet.
+5. Power up controller 2 and wait for it to complete the boot process.
+6. Log into controller 1, go to **System Settings > Failover**, and click **Sync to Peer**.
+ This synchronizes controller 2 with controller 1 and reboots controller 2. Controller 2 becomes the standby controller when it finishes rebooting.
### Downloading the SCALE Install File
-[Download](https://www.truenas.com/download-tn-scale/) the .iso file.
+[Download](https://www.truenas.com/download-tn-scale/) the .iso file.
If you are remote to the system and are installing through an IPMI connection you do not need to save the .iso file to a USB flash drive.
-If you are physically present with the TrueNAS SCALE system, burn the .iso file to a USB flash drive and use that as the install media.
+If you are physically present with the TrueNAS SCALE system, burn the .iso file to a USB flash drive and use that as the install media.
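+
+If you are writing the install media from a Linux workstation, one common approach is `dd`. This is a minimal sketch: the device name `/dev/sdX` is a placeholder, so confirm the correct flash drive device (for example, with `lsblk`) before writing, because `dd` overwrites the target device.
+
+```bash
+# Write the SCALE iso to the flash drive (replace /dev/sdX with your USB device).
+sudo dd if=TrueNAS-SCALE-22.12.1.iso of=/dev/sdX bs=1M status=progress conv=fsync
+```
+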
### Using IPMI to Install the ISO on a Controller
-{{< hint info >}}
-Use this process to install the iso file on controller A, and then after completing [Using the SCALE Installer](#using-the-scale-installer) on controller A, repeat this process for controller B.
-{{< /hint >}}
+
+Use this process to install the iso file on both controller 1 and controller 2. Best practice is to begin the install on controller 1, then immediately start it on controller 2.
+
{{< expand "Installing ISO Steps" "v" >}}
-1. Enter the IP address assigned to the controller A IPMI port into a web browser and log into your IPMI system with admin credentials.
+1. Enter the IP address assigned to the controller 1 IPMI port into a web browser and log into your IPMI system with admin credentials.
2. Select **Remote Control > iKVM/HTML5** to open the Console Setup window.
IPMI interfaces can vary but they generally have options for **Remote Control** and **iKVM/HTML5** to open a console session on the platform.
-3. Install the .iso file. Select the **Virtual Media > CD-ROM image** option in your IPMI system.
+3. Install the .iso file. Select the **Virtual Media > CD-ROM image** option in your IPMI system.
- a. Enter the IP address of where you downloaded the .iso file into **Share Host**.
+ a. Enter the IP address of where you downloaded the .iso file into **Share Host**.
You might need assistance from your Network or IT department to obtain this address.
- b. Enter the path to the .iso file.
+ b. Enter the path to the .iso file.
For example, if you stored the file in an *iso* folder enter **/iso/TrueNAS-SCALE-22.12.1.iso** in **Path to Image**.
- c. Click **Save**, then **Mount**. You should see the .iso file under **Device 1** or the device name your IPMI configures.
+ c. Click **Save**, then **Mount**. You should see the .iso file under **Device 1** or the device name your IPMI configures.
3. Return to the **Remote Control > iKVM/HTML5** window opened in step 2. Either use your keyboard or open the keyboard in the window then:
- a. Type **8** to reboot controller A, and type **y** to confirm and reboot.
+ a. Type **8** to reboot controller 1 (also repeat for controller 2), and type **y** to confirm and reboot.
b. As the system reboots, be prepared to hit the F11 key when you first see the **TrueNAS Open Storage** splash screen.
Alternatively you can start clicking on the **F11** key on the online keyboard until you see the TrueNAS SCALE Installer screen.
@@ -104,7 +145,7 @@ Use this process to install the iso file on controller A, and then af
{{< /expand >}}
### Using the SCALE Installer
{{< hint info >}}
-If you are doing a clean install from the SCALE .iso file to recover from an issue that requires you to re-install SCALE from the .iso, have your network configuration information ready to use after the installation completes.
+If you are doing a clean install from the SCALE .iso file to recover from an issue, have your network configuration information for controller 1 ready to use after the installation completes. Do not configure network settings on controller 2.
Also have your SCALE system configuration file and data backups handy so you can recover your system settings and import your data into the recovered SCALE clean-install system.
{{< /hint >}}
{{< expand "SCALE Installer Steps" "v" >}}
@@ -113,19 +154,19 @@ Also have your SCALE system configuration file and data backups handy so you can
7. Select **OK** after you see **The TrueNAS installation on succeeded** displays. The Console setup menu screen displays.
8. Enter **3** to **Reboot System** and immediately return to the IPMI **Virtual Media > CD-ROM image** screen to click **Unmount**. Click **Save**.
- If you fail to unmount the iso image before the system completes the reboot, the bootstrap install continues in a boot loop.
+ If you fail to unmount the iso image before the system completes the reboot, the bootstrap install continues in a boot loop.
-SCALE is now installed on controller A. Repeat this process for controller B starting with [Using IPMI to Install the ISO on a Controller](#using-ipmi-to-install-the-iso-on-a-controller).
+SCALE is now installed on controller 1. If you did not install both controllers simultaneously, repeat this process for controller 2, starting with [Using IPMI to Install the ISO on a Controller](#using-ipmi-to-install-the-iso-on-a-controller).
{{< /expand >}}
### Configuring the Network with Console Setup Menu
-After installing the both controller A and B .iso file and finishing the TrueNAS SCALE Installer process, if the TrueNAS SCALE server is connected to the network where DHCP is not enabled, use the Console setup menu to assign controller A main network interface the static IP address to allow access to the SCALE UI.
+After installing the .iso file on both controller 1 and controller 2 and finishing the TrueNAS SCALE Installer process, if the TrueNAS SCALE server is connected to a network where DHCP is not enabled, use the Console setup menu to assign the controller 1 main network interface a static IP address to allow access to the SCALE UI.
TrueNAS SCALE uses DHCP to assign an IP address to the primary network interface to allow access to the SCALE UI.
{{< hint warning >}}
-Only users with experience configuring network settings and using the Console setup menu should use it to configure all network settings. All other users should only use the Console Setup Menu to configure a static IP address for the primary network interface for Controller A to allow access to the SCALE UI.
+Only users with experience configuring network settings and using the Console setup menu should use it to configure all network settings. All other users should only use the Console setup menu to configure a static IP address for the primary network interface for controller 1 to allow access to the SCALE UI.
The SCALE UI has safeguards in place to prevent network connectivity issues that could require a clean install of SCALE to restore access.
{{< /hint >}}
-To use the Console setup menu to change the primary network interface IP address:
+To use the Console setup menu to change the primary network interface IP address on controller 1:
1. Type 1 and then press Enter to open the **Configure Network Interfaces** screen.
2. Use either Tab or the arrow keys to select the interface assigned as your primary network interface.
@@ -134,36 +175,42 @@ To use the Console setup menu to change the primary network interface IP address
4. Type q to return to the main Console setup menu.
### Configuring Settings in the SCALE UI
+{{< hint info >}}
+This section only applies to controller 1. Do not configure settings on controller 2.
+{{< /hint >}}
SCALE UI Enterprise customers see the End User License Agreement (EULA) screen the first time they log in.
Sign the agreement to open the main SCALE **Dashboard**.
Apply the system license next.
Go to **System Settings > General** and click **Add License** on the **Support** widget. Copy your license and paste it into the **License** field, then click **Save License**.
-The **Reload** dialog opens. Click **Reload Now**. Controller A restarts, and displays the EULA for controller B. Sign the EULA agreement for controller B, and add the license.
+The **Reload** dialog opens. Click **Reload Now**. Controller 1 restarts and displays the EULA for controller 2. Sign the EULA agreement for controller 2, and add the license.
-The A and B controller serial numbers display on the **Support** widget on the **System Settings > General** screen.
+The controller 1 and 2 (or A and B) serial numbers display on the **Support** widget on the **System Settings > General** screen.
#### Configure Network Settings
{{< hint warning >}}
You must disable the failover service before you can configure network settings!
+
+Only configure network settings on controller 1! When you sync to peer, SCALE applies the settings to controller 2.
{{< /hint >}}
SCALE Enterprise (HA) systems use three static IP addresses for access to the UI:
* VIP to provide UI access regardless of which controller is active.
- If your system fails over from controller A to B, then fails over back to controller A later you might not know which controller is active.
-* IP for controller A. If enabled on your network, DHCP assigns only the Controller A IP address. If not enabled, you must change this to the static IP address your network administrator assigned to this controller.
-* IP for Controller B. DHCP does not assign the second controller an IP address.
+ If your system fails over from controller 1 to controller 2, then fails back to controller 1 later, you might not know which controller is active.
+* IP for controller 1. If enabled on your network, DHCP assigns only the controller 1 IP address.
+ If DHCP is not enabled, you must change this to the static IP address your network administrator assigned to this controller.
+* IP for controller 2. DHCP does not assign the second controller an IP address.
-Have your list of network addresses, host and domain names ready so you can complete the network configuration without disruption or system timeouts.
+Have your list of network addresses, host and domain names ready so you can complete the network configuration on controller 1 without disruption or system timeouts.
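+
+For illustration only, a hypothetical addressing plan could look like the following. These addresses are placeholders; use the values your network administrator assigned.
+
+| Address | Placeholder example | Where it is entered |
+|---------|---------------------|---------------------|
+| Virtual IP (VIP) | 10.20.0.10 | **Virtual IP Address (Failover Address)** |
+| Controller 1 IP | 10.20.0.11 | **IP Address (This Controller)** |
+| Controller 2 IP | 10.20.0.12 | **IP Address (TrueNAS Controller 2)** |
+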
SCALE safeguards allow a default of 60 seconds to test and save changes to a network interface before reverting changes. This is to prevent users from breaking their network connection in SCALE.
{{< expand "Configuration Steps" "v">}}
-To configure network settings:
+To configure network settings on controller 1:
1. Disable the failover service.
Go to **System Settings > Services** locate the **Failover** service and click edit.
Select **Disable Failover** and click **Save**.
-2. [Edit the Global Network settings]({{< relref "AddingGlobalConf.md" >}}) to add the host and domain names, DNS name server and default gateway address.
+2. [Edit the global network settings]({{< relref "AddingGlobalConf.md" >}}) to add the host and domain names, DNS name server and default gateway address.
If enabled on your network, TrueNAS uses DHCP to assign global network addresses as well as the SCALE UI access IP address. If not enabled in your network, you must enter these values yourself.
Review the **Global Configuration** settings to verify they match the information your network administrator provided.
@@ -178,21 +225,21 @@ To configure network settings:
![EditInterfaceFailoveSettingsrHA](/images/SCALE/22.12/EditInterfaceFailoveSettingsrHA.png "Edit Network Interface Failover Settings")
- c. Add the virtual IP (VIP) and controller B IP. Click **Add** for **Aliases** to displays the additional IP address fields.
+ c. Add the virtual IP (VIP) and controller 2 IP. Click **Add** for **Aliases** to display the additional IP address fields.
![EditInterfaceAddAliasesHA](/images/SCALE/22.12/EditInterfaceAddAliasesHA.png "Edit Network Interface Add Alias IP Addresses")
- 1. Type the IP address for controller A into **IP Address (This Controller)** and select the CIDR number from the dropdown list.
+ 1. Type the IP address for controller 1 into **IP Address (This Controller)** and select the CIDR number from the dropdown list.
- 2. Type the controller B IP address into **IP Address (TrueNAS Controller 2)** field.
+ 2. Type the controller 2 IP address into the **IP Address (TrueNAS Controller 2)** field.
- 3. Type the VIP address into **Virtual IP Address (Failover Address) field.
+ 3. Type the VIP address into the **Virtual IP Address (Failover Address)** field.
4. Click **Save**
After editing the interface settings the **Test Changes** button displays. You have 60 seconds to test and then save changes before they revert. If this occurs, edit the interface again.
-3. Create or import a storage pool from a backup. You must have at least one storage pool on controller A.
+3. Create or import a storage pool from a backup. You must have at least one storage pool on controller 1.
For more information on how to create a new pool [click here for more information]({{< relref "CreatePoolSCALE.md" >}}).
For more information on how to import a pool [click here for more information]({{< relref "ImportPoolSCALE.md" >}}).
@@ -200,14 +247,28 @@ To configure network settings:
Go to **System Settings > Services** locate the **Failover** service and click edit.
Select **Disable Failover** to clear the checkmark and turn failover back on, then click **Save**.
- The system might reboot. Use IPMI to monitor the status of controller B and wait until the controller is back up and running then click **Sync To Peer**.
- Select **Reboot standby TrueNAS controller** and **Confirm**, then click **Proceed** to start the sync operation. This sync controller B with controller A which adds the network settings and pool to controller B.
+ The system might reboot. Use IPMI to monitor the status of controller 2 and wait until the controller is back up and running.
+
+5. Log out of the controller 1 UI, and log in using the VIP address.
+
+ With controller 2 powered on but not configured, click **Sync To Peer** on controller 1.
+ Select **Reboot standby TrueNAS controller** and **Confirm**, then click **Proceed** to start the sync operation. This syncs controller 2 with controller 1, which adds the network settings and pool to controller 2.
![FailoverSyncToPeerDialog](/images/SCALE/22.12/FailoverSyncToPeerDialog.png "Failover Sync To Peer")
-When the system comes back up, log into SCALE using the virtual IP address. The main **Dashboard** should now have two **System Information** widgets, one for controller A with the serial number and the host name that includes the letter **a** and the other for controller B labeled as **Standby Controller** and that includes the serial number and the host name that includes the leter **b**. Take note of this information.
+When the system comes back up, log into SCALE using the virtual IP address.
+The main **Dashboard** should now have two **System Information** widgets: one for controller 1, showing its serial number and a host name that includes the letter **a**, and one for controller 2, labeled **Standby Controller**, showing its serial number and a host name that includes the letter **b**.
+Take note of this information.
![HAMainDashboard](/images/SCALE/22.12/HAMainDashboard.png "Main Dashboard for HA Systems")
+
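+Optionally, you can confirm which controller is currently active from a system shell. This is a hedged sketch that assumes the `midclt` middleware client and its `failover.status` method are available on this SCALE Enterprise release; the **System Information** widgets above remain the primary reference.
+
+```bash
+# Assumed middleware call: typically reports MASTER on the active controller
+# and BACKUP on the standby controller.
+midclt call failover.status
+```
+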
+#### Troubleshooting HA Installation
+If controller 2 comes online as the primary and controller 1 as the standby, you installed and configured these controllers incorrectly.
+Go to **System Settings > Failover**, clear the **Default TrueNAS Controller** option, and click **Save**.
+The system reboots and fails over to the current standby controller (in this case, to controller 1).
+Log back into the UI with the VIP address, go to **System Settings > Failover**, select **Default TrueNAS Controller** to make controller 1 the primary controller, and then select **Sync to Peer**.
+SCALE makes controller 2 the standby controller and syncs the configuration on controller 1 to controller 2.
+Click **Save**.
{{< /expand >}}
{{< taglist tag="scaleinstall" title="Related Installation Articles" limit="20" >}}
diff --git a/content/SCALE/GettingStarted/Migrate/MigrateCOREHAtoSCALEHA.md b/content/SCALE/GettingStarted/Migrate/MigrateCOREHAtoSCALEHA.md
new file mode 100644
index 0000000000..2d7cb35e7d
--- /dev/null
+++ b/content/SCALE/GettingStarted/Migrate/MigrateCOREHAtoSCALEHA.md
@@ -0,0 +1,36 @@
+---
+title: "Migrating a TrueNAS HA system from CORE to SCALE"
+description: "This article discusses migrating a TrueNAS CORE High Availability (HA) system to SCALE."
+weight: 25
+aliases:
+tags:
+- scalemigrate
+- scaleinstall
+- scaleconfig
+---
+
+{{< toc >}}
+
+{{< include file="/content/_includes/MigrateCOREtoSCALEWarning.md" type="page" >}}
+
+Due to software differences between CORE and SCALE, customers with CORE Enterprise High Availability (HA) systems cannot migrate the system directly to SCALE.
+Instead, the process is to clean install SCALE on the system and reimport the storage pools.
+
+## Moving an HA System from CORE to SCALE
+
+First, back up your data storage and export your pools.
+
+Review the list of preparation steps in [Preparing for SCALE UI Configuration (Enterprise)]({{< relref "InstallPrepEnterprise.md" >}}) and gather the information you need before you begin installing SCALE.
+
+Next, do a [clean install of SCALE]({{< relref "InstallEnterpriseHASCALE.md" >}}) using the iso file. You must observe the proper sequence for controller 1 and controller 2 so the system comes up with controller 1 as the primary and controller 2 as the standby controller.
+
+Remember:
+
+After installing SCALE, [configure controller 1 using the SCALE UI]({{< relref "UIConfigurationSCALE.md" >}}) up to the point just before you sync to peer. Power up controller 2, which has SCALE installed and is at the Console setup menu screen but is not configured, and then sync to peer from controller 1.
+
+After configuring the network on controller 1, import all your pools.
+Creating a new pool before importing pools could result in accidentally wiping disks currently used with an exported pool.
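+
+If you want to double-check which exported pools the system detects before importing them through the UI, one read-only option from the SCALE system shell is `zpool import` with no arguments. This is a sketch rather than the supported configuration path; it only lists importable pools and does not import or change anything.
+
+```bash
+# List exported pools that are visible to this system without importing them.
+sudo zpool import
+```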
+
+{{< taglist tag="scalemigrate" limit="10" title="Related Migration Articles" >}}
+{{< taglist tag="scaleconfig" limit="10" title="Related Configuration Articles" >}}
diff --git a/content/SCALE/GettingStarted/Migrate/MigratePrep.md b/content/SCALE/GettingStarted/Migrate/MigratePrep.md
new file mode 100644
index 0000000000..2e05d8e31d
--- /dev/null
+++ b/content/SCALE/GettingStarted/Migrate/MigratePrep.md
@@ -0,0 +1,43 @@
+---
+title: "Preparing to Migrate CORE to SCALE"
+description: "This article guides CORE users about elements they should prepare before beginning the one-way CORE to SCALE migration process."
+weight: 10
+aliases:
+tags:
+- scalemigrate
+- scaleconfigure
+---
+
+{{< toc >}}
+
+{{< include file="/content/_includes/MigrateCOREtoSCALEWarning.md" type="page" >}}
+
+## What Can or Cannot Migrate?
+
+{{< include file="/content/_includes/COREMigratesList.md" type="page" >}}
+
+## Preparing for Migration
+
+Before you attempt to migrate your CORE system to SCALE:
+
+1. Upgrade your CORE system to the most recent publicly-available CORE version.
+ TrueNAS systems on 12.0x or earlier should upgrade to the latest CORE 13.0 release (for example, 13.0-U4 or newer) prior to migrating to SCALE.
+ CORE systems at the latest 13.0 release can use the [iso upgrade]({{< relref "MigratingFromCORE.md#migrating-using-an-iso-file-to-upgrade" >}}) method to migrate to SCALE.
+
+2. Verify the root user is not locked.
+ Go to **Accounts > Users**, use **Edit** for the root user to view current settings and confirm **Lock User** is not selected.
+
+3. After updating to the latest publicly-available release of CORE, download your system configuration file and a debug file.
+ Keep these files in a safe place in case you need to revert back to CORE with a clean install of the CORE iso file.
+
+4. Back up your stored data files.
+ If you need to do a clean install with the SCALE iso file, you can import your data pools into SCALE.
+
+5. Write down your network configuration information to use if you do a clean install of SCALE from an iso file (example commands to capture these settings from the CORE shell follow this list).
+ {{< include file="/_includes/NetworkInstallRequirementsSCALE.md" type="page" >}}
+
+6. Back up any critical data!
+
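+For step 5 above, this is a minimal sketch of commands you can run from the CORE console or an SSH session to capture the current network settings before migrating (output formats vary by release):
+
+```bash
+ifconfig -a           # interface names, IP addresses, and netmasks
+netstat -rn           # routing table, including the default gateway
+cat /etc/resolv.conf  # DNS name servers and search domains
+hostname              # current host name
+```
+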
+Download the [SCALE ISO file](https://www.truenas.com/download-tn-scale/) or the SCALE upgrade file and save it to your computer or a USB drive (see the **Physical Hardware** tab in [Installing SCALE]({{< relref "InstallingSCALE.md" >}})) to use if you upgrade from the physical system.
+
+{{< taglist tag="scalemigrate" limit="10" >}}
diff --git a/content/SCALE/GettingStarted/Migrate/MigratePrepEnterprise.md b/content/SCALE/GettingStarted/Migrate/MigratePrepEnterprise.md
new file mode 100644
index 0000000000..6803814e4f
--- /dev/null
+++ b/content/SCALE/GettingStarted/Migrate/MigratePrepEnterprise.md
@@ -0,0 +1,59 @@
+---
+title: "Preparing for CORE to SCALE Migration (Enterprise HA)"
+description: "This article provides information for CORE Enterprise (HA) users planning to migrate to SCALE, and what you need to know and have ready before beginning the one-way process."
+weight: 15
+aliases:
+tags:
+- scalemigrate
+- scaleconfigure
+---
+
+{{< toc >}}
+
+{{< include file="/content/_includes/MigrateCOREtoSCALEWarning.md" type="page" >}}
+
+## What Can or Cannot Migrate?
+
+{{< include file="/content/_includes/COREMigratesList.md" type="page" >}}
+
+## Before Migrating to SCALE
+
+You cannot directly migrate a TrueNAS Enterprise High Availability (HA) system from CORE to SCALE!
+Instead, the system can be freshly installed with TrueNAS SCALE and storage data pools reimported after the install process is complete.
+
+This section outlines actions to take or consider to prepare for the clean installation of SCALE for an Enterprise (HA) system.
+
+Before you begin the clean install of SCALE, on CORE:
+
+1. Back up your stored data files and any critical data!
+ If you need to do a clean install with the SCALE iso file, you can import your data into SCALE.
+
+2. Write down your network configuration information to use after the clean install of SCALE.
+ {{< include file="/_includes/NetworkInstallRequirementsSCALE.md" type="page" >}}
+
+3. Identify your system dataset.
+ If you want to use the same dataset for the system dataset in SCALE, note the pool and system dataset.
+ When you set up the first required pool on SCALE, import this pool first.
+
+4. Review and document any system configuration information in CORE that you want to duplicate in SCALE. Areas to consider:
+
+ * Tunables on CORE.
+ SCALE does not use **Tunables** the way CORE does. SCALE provides script configuration on the **System Settings > Advanced** screen as **Sysctl** scripts.
+ A future release of SCALE could introduce tunables options similar to those found in CORE, but for now they are not available.
+
+ * CORE init/shutdown scripts to add to SCALE.
+
+ * Cron jobs configured in CORE, if you want to set up the same jobs in SCALE.
+
+ * The global self-encrypting drive (SED) password to configure in SCALE, or unlock these drives in CORE before you clean install SCALE.
+
+ * Cloud storage backup provider credentials configured in CORE, if you do not have these recorded elsewhere in a secure location outside of CORE.
+
+ * Replication, periodic snapshot, cloud sync, or other task settings to reconfigure in SCALE if you want to duplicate these tasks.
+
+ * Make sure you have backed-up copies of certificates used in CORE to import or configure in SCALE.
+
+Download the [SCALE ISO file](https://www.truenas.com/download-tn-scale/) or the SCALE upgrade file and save it to your computer or to two USB drives (see the **Physical Hardware** tab in [Installing SCALE]({{< relref "InstallingSCALE.md" >}})).
+
+{{< taglist tag="scalemigrate" limit="10" title="Related Migration Articles" >}}
+{{< taglist tag="scaleconfigure" limit="10" title="Related Configuration Articles" >}}
\ No newline at end of file
diff --git a/content/SCALE/GettingStarted/Migrate/MigratingFromCORE.md b/content/SCALE/GettingStarted/Migrate/MigratingFromCORE.md
index 41574c40df..b257125fb9 100644
--- a/content/SCALE/GettingStarted/Migrate/MigratingFromCORE.md
+++ b/content/SCALE/GettingStarted/Migrate/MigratingFromCORE.md
@@ -1,5 +1,5 @@
---
-title: "Migrating from TrueNAS CORE"
+title: "Migrating from TrueNAS CORE to SCALE"
description: "This article provides instructions on migrating from TrueNAS CORE to SCALE. Migration methods include using an ISO file or a manual update file."
weight: 20
aliases:
@@ -15,68 +15,19 @@ tags:
This article provides information and instructions for migrating from TrueNAS CORE to SCALE.
-{{< hint danger >}}
-Migrating TrueNAS from CORE to SCALE is a one-way operation.
-Attempting to activate or roll back to a CORE boot environment can break the system.
+{{< include file="/_includes/MigrateCOREtoSCALEWarning.md" type="page" >}}
-High Availability systems cannot migrate from CORE to SCALE.
-Enterprise customers should contact iXsystems Support before attempting any migration.
+### What Can and Cannot Migrate?
-Migrating from CORE to SCALE is not recommended when custom modifications have been made to the system database.
-If any such modifications are present in CORE, these must be reverted before attempting a migration to SCALE.
-
-{{< expand "Contacting Support" "v" >}}
-{{< include file="static/includes/General/iXsystemsSupportContact.html.part" html="true" >}}
-{{< /expand >}}
-{{< /hint >}}
-
-### What Can and Can't Migrate?
-
-Although TrueNAS attempts to keep most of your CORE configuration data when upgrading to SCALE, some CORE-specific items do not transfer.
-These are the items that don't migrate from CORE:
-
-* FreeBSD GELI encryption. If you have GELI-encrypted pools on your system that you plan to import into SCALE, you must migrate your data from the GELI pool to a non-GELI encrypted pool *before* migrating to SCALE.
-* Malformed certificates. TrueNAS SCALE validates the system certificates when a CORE system migrates to SCALE. When a malformed certificate is found, SCALE generates a new self-signed certificate to ensure system accessibility.
-* CORE Plugins and Jails. Save the configuration information for your plugin and back up any stored data. After completing the SCALE install, add the equivalent SCALE application using the **Apps** option. If your CORE plugin is not listed as an available application in SCALE, use the **Launch Docker Image** option to add it as an application and import data from the backup into a new SCALE dataset for the application.
-* NIS data
-* System tunables
-* ZFS Boot Environments
-* AFP shares also do not transfer, but migrate into an SMB share with AFP compatibility enabled.
-* CORE `netcli` utility. A new CLI utility is used for the [Console Setup Menu]({{< relref "ConsoleSetupMenuSCALE.md" >}}) and other commands issued in a CLI.
-
-VM storage and its basic configuration transfer over during a migration, but you need to double-check the VM configuration and the network interface settings specifically before starting the VM.
-
-Init/shutdown scripts transfer, but can break. Review them before use.
-
-After migration, it is strongly recommended to review each area of the UI that was previously configured in CORE.
+{{< include file="/_includes/COREMigratesList.md" type="page" >}}
### Migration Methods
-You can migrate from CORE to SCALE through an upgrade or clean install using an iso file.
+You can migrate from CORE to SCALE through an upgrade or clean install using an iso file.
Alternately, some CORE 13.0 releases can migrate using the CORE UI Upgrade function with the SCALE update file downloaded from the website.
The easiest method is to upgrade from the CORE system UI, but your system must have the CORE 13.0 major release installed to use this method.
Note the CORE 13.0-U3 release might not work when updating from the CORE UI using the Update function.
-If you do a clean-install with a SCALE iso file, you need to reconfigure your CORE settings in SCALE and import your data.
-
-## Preparing for Migration
-
-Before you attempt to migrate your CORE system to SCALE:
-
-1. Upgrade your CORE system to the most recent publicly-available CORE version.
- TrueNAS systems on 12.0x or lower should update to the latest CORE 13.0 release (e.g 13.0-U2 or U4 when released) prior to migrating to SCALE.
- CORE systems at release 13.0-Ux can use the [iso upgrade](#migrating-using-an-iso-file-to-upgrade) method to migrate to SCALE.
- Lower releases of CORE (12.0-Ux) must do a clean install with the SCALE iso file.
-2. Verify the root user is not locked.
- Go to **Accounts > Users**, use **Edit** for the root user to view current settings and confirm **Lock User** is not selected.
-3. After updating to the latest publicly-available release of CORE, download your system configuration file and a debug file.
- Keep these files in a safe place in case you need to revert back to CORE with a clean install of the CORE iso file.
-4. Back up your stored data files.
- If you need to do a clean install with the SCALE iso file, you can import your data into SCALE.
-5. Write down your network configuration information to use if you do a clean install of SCALE from an iso file.
- {{< include file="/_includes/NetworkInstallRequirementsSCALE.md" type="page" >}}
-6. Back up any critical data!
-
-Download the SCALE [SCALE ISO file](https://www.truenas.com/download-tn-scale/) or the SCALE upgrade file and save it to your computer or a USB drive (see the **Physical Hardware tab** in [Installing SCALE]({{< relref "InstallingSCALE.md" >}})) to use if you upgrade from the physical system.
+If you do a clean-install with a SCALE iso file, you need to reconfigure your CORE settings in SCALE and import your data.
## Migrating Using an ISO File to Upgrade
@@ -136,30 +87,12 @@ After the update completes, reboot the system if it does not reboot automaticall
![SCALESidegradeReboot](/images/SCALE/SidegradeRestart.png "Reboot to Finish")
+After migration, we strongly recommend you review each area of the UI that was previously configured in CORE.
+
## Migrating by Clean Install
-If it becomes necessary to do a clean install to migrate your CORE system to SCALE using the iso file, follow the instructions in the [Install]({{< relref "/SCALE/GettingStarted/Install/_index.md" >}}) articles.
-
-## Parallel SCALE CLI Commands
-
-The following CLI commands are available after migrating from CORE to SCALE.
-{{< expand "List of CLI Commands" "v" >}}
-The CORE equivalent CLI commands are for reference. These commands are for diagnostic use. Making configuration changes using the SCALE OS CLI is not recommended.
-
-| CORE CLI Command | SCALE CLI Command | Description |
-|-----------------|-------------------|-------------|
-| [camcontrol devlist](https://www.freebsd.org/cgi/man.cgi?query=camcontrol&sektion=8) | [lshw -class disk -short sfdisk -l](https://linux.die.net/man/1/lshw) | Use `lshw -class disk -short sfdisk -l` to get detailed information on hardware (disk) configuration that includes memory, mainboard and cache configuration, firmware version, CPU version and speed. |
-| [geom disk list](https://www.freebsd.org/cgi/man.cgi?geom(4)) | [lsblk](https://manpages.debian.org/testing/util-linux/lsblk.8.en.html), [hdparm](https://manpages.debian.org/bullseye/hdparm/hdparm.8.en.html) | Use `lsblk` to lists block devices or `hwparm` to get or set SATA/IDE device parameters. |
-| [glabel status](https://www.freebsd.org/cgi/man.cgi?glabel(8)) | [blkid](https://linux.die.net/man/8/blkid) | Use `blkid` to locate or print block device attributes. |
-| [gstat -pods](https://www.freebsd.org/cgi/man.cgi?gstat(8)) | [iostat](https://manpages.debian.org/testing/sysstat/iostat.1.en.html)
iostat -dtx | Use `iostat -dtx` to display the device utiilization report with the time for each report displayed and includes extended statistics. |
-| [ifconfig](https://www.freebsd.org/cgi/man.cgi?ifconfig(8))
ifconfig -l | [ip addr](https://linux.die.net/man/8/ip)
[ifconfig -s](https://linux.die.net/man/8/ifconfig)
[lshw -class network -short](https://linux.die.net/man/1/lshw)
[ethtool *devname*](https://linux.die.net/man/8/ethtool) | Use `ip addr` to show or manipulate routing, devices, or policy routing and tunnels.
Use `ifconfig -s` configure a network interface.
Use `lshw -class network -short` to display a network device tree showing hardware paths.
Use `ethtool *devnam*` to query or control network driver and hardware settings. |
-| [netstat -i](https://www.freebsd.org/cgi/man.cgi?query=netstat&sektion=1) | [ifstat -i](https://linux.die.net/man/1/ifstat) | Use `ifstat -i` to get interface statisitcs on a list of interfaces to monitor. |
-| [nvmecontrol devlist](https://www.freebsd.org/cgi/man.cgi?query=nvme&sektion=4) | [nvme list](https://manpages.org/nvme-list-ctrl) | Use `nvme list` to identify the list of NVMe devices on your system. |
-| [pmcstat](https://www.freebsd.org/cgi/man.cgi?query=pmcstat&sektion=8) | [profile-bpfcc](https://manpages.debian.org/unstable/bpfcc-tools/profile-bpfcc.8.en.html) | Use `profile-bpfcc` to get a CPU usage profile obtaine by sampling stack traces. |
-| [systat -ifstat](https://www.freebsd.org/cgi/man.cgi?query=systat&sektion=1&manpath=FreeBSD+4.9-RELEASE) | [iftop](https://linux.die.net/man/8/iftop)
[netstat](https://linux.die.net/man/8/netstat) | Use `iftop` to display interface bandwidth usage by host and `netstat` to print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. |
-| [top -SHIzP](https://www.freebsd.org/cgi/man.cgi?top(1)) | [top -Hi](https://linux.die.net/man/1/top) | Use `top -Hi` to display Linux tasks for all individual threads and starts with the last remembered *i* state reversed. |
-| [vmstat -P](https://www.freebsd.org/cgi/man.cgi?query=vmstat&apropos=0&sektion=0&manpath=2.8+BSD&format=html) | [sar -P ALL](https://linux.die.net/man/1/sar) | Use `sar -P ALL` to get reports with statistics for each individual processor and global statistics among all processors. |
-{{< /expand >}}
+If it becomes necessary to do a clean install to migrate your CORE system to SCALE using the iso file, follow the instructions in the [Install]({{< relref "/SCALE/GettingStarted/Install/_index.md" >}}) articles.
+
{{< taglist tag="scalemigrate" limit="10" >}}
{{< taglist tag="scaleinstall" limit="10" title="Related Installation Articles" >}}
diff --git a/content/SCALE/GettingStarted/Migrate/_index.md b/content/SCALE/GettingStarted/Migrate/_index.md
index 3447befc9f..3a5bce7041 100644
--- a/content/SCALE/GettingStarted/Migrate/_index.md
+++ b/content/SCALE/GettingStarted/Migrate/_index.md
@@ -2,14 +2,21 @@
title: "Migrating Instructions"
geekdocCollapseSection: true
weight: 40
+aliases:
+tags:
+- scalemigrate
---
-This section provides information for CORE users migrating to SCALE.
+This section provides information and instructions for CORE users wanting to migrate to SCALE.
+
+{{< include file="/content/_includes/MigrateCOREtoSCALEWarning.md" type="page" >}}
Linux treats device names differently than FreeBSD so please read [Component Naming]({{< relref "ComponentNaming.md" >}}) for more information.
The ZFS flag feature merged into the TrueNAS fork of OpenZFS for developers to test and integrage with other parts of the system on June 29,2021 is also removed. Please read [ZFS Feature Flags Removed]({{< relref "ScaleZFSFlagRemoval.md" >}}) for details on this change.
+After migration, it is strongly recommended to review each area of the UI that was previously configured in CORE.
+
## Migration Articles
{{< children depth="2" description="true" >}}
\ No newline at end of file
diff --git a/content/SCALE/SCALE22.12.md b/content/SCALE/SCALE22.12.md
index 8494ddc552..834bb656b7 100644
--- a/content/SCALE/SCALE22.12.md
+++ b/content/SCALE/SCALE22.12.md
@@ -8,17 +8,17 @@ weight: 7
{{< toc >}}
+## Software Lifecycle
+
{{< hint danger >}}
Early releases are intended for testing and early feedback purposes only.
Do not use early release software for critical tasks.
{{< /hint >}}
-Want to get involved by collaborating on TrueNAS SCALE? Join our [Official Discord Server.](https://discord.com/invite/Q3St5fPETd)
-
-## Software Lifecycle
-
{{< include file="/static/includes/General/LifecycleTable.html.part" html="true" >}}
+Want to collaborate on TrueNAS SCALE? Join our [Official Discord Server](https://discord.com/invite/Q3St5fPETd).
+
{{< include file="/content/_includes/SoftwareStatusPage.md" type="page" >}}
## SCALE Schedule
@@ -27,10 +27,6 @@ Want to get involved by collaborating on TrueNAS SCALE? Join our [Official Disco
| Version | Checkpoint | Scheduled Date |
|---------|------------|----------------|
-| SCALE 22.12.2 | Code-freeze | 08 March 2023 |
-| SCALE 22.12.2 | Internal Testing Sprints | 13 March 2023 - 7 April 2023 |
-| SCALE 22.12.2 | Tag | 10 April 2023 |
-| SCALE 22.12.2 | Release | 11 April 2023 |
| SCALE 22.12.3 | Code-freeze | 10 May 2023 |
| SCALE 22.12.3 | Internal Testing Sprints | 11 - 26 May 2023 |
| SCALE 22.12.3 | Tag | 29 May 2023 |
@@ -45,11 +41,11 @@ Want to get involved by collaborating on TrueNAS SCALE? Join our [Official Disco
{{< hint warning >}}
* SCALE is developed as an appliance that uses specific Linux packages with each release. Attempting to update SCALE with `apt` or methods other than the SCALE web interface can result in a nonfunctional system.
* TrueNAS SCALE has only been validated with systems up to 250 Drives. We currently recommend that users with higher drive counts run TrueNAS Enterprise.
-* HA migration in Bluefin 22.12.0 is not recommended for critical-use Enterprise HA systems yet. Enterprise General Availability (GA) is planned for the 22.12.2 release. HA migrations from CORE are not recommended before Enterprise GA is announced.
+* HA migration in Bluefin 22.12.0 is not recommended for critical-use Enterprise HA systems yet. Enterprise General Availability (GA) is planned for the 22.12.2 release, but HA migrations from CORE are not recommended without first consulting iXsystems Support.
* All auxiliary parameters are subject to change between major versions of TrueNAS due to security and development issues.
We recommend removing all auxiliary parameters from TrueNAS configurations before upgrading.
-* New security checks are present for host paths in use by various system services. If you have host paths that are shared by multiple system services (e.g. Apps and SMB), please read the 22.12.0 [Known Issues](#known-issues) and take steps to create unique host paths for each in-use system service.
-* As part of security hardening, users upgrading to 22.12.0 Bluefin are prompted to create a separate administrative user for UI logins. TrueNAS shows an informational alert when only the **root** account is present and reminds to create the administrative user for logins. Future security updates to TrueNAS SCALE could disable the root account. Please create an administrative user as soon as possible after upgrading. See the [Managing Users Tutorial]({{< relref "ManageLocalUsersSCALE.md" >}}) for more details.
+* New security checks are present for host paths in use by various system services. If you have host paths that are shared by multiple system services (e.g. Apps and SMB), please read the 22.12 [Known Issues](#known-issues-with-a-future-resolution) and take steps to create unique host paths for each in-use system service.
+* As part of security hardening, users upgrading to 22.12 Bluefin are prompted to create a separate administrative user for UI logins. TrueNAS shows an informational alert when only the **root** account is present and reminds to create the administrative user for logins. Future security updates to TrueNAS SCALE could disable the root account. Please create an administrative user as soon as possible after upgrading. See the [Managing Users Tutorial]({{< relref "ManageLocalUsersSCALE.md" >}}) for more details.
{{< /hint >}}
To download an .iso file for installing SCALE Bluefin, go to https://www.truenas.com/truenas-scale/ and click **Download**.
@@ -57,8 +53,131 @@ Manual update files are also available at this location.
To upgrade an existing SCALE install, log in to your SCALE web interface and go to **System Settings > Update**.
-## 22.12.1
+## 22.12.2
+
+**April 11, 2023**
+
+iXsystems is pleased to release TrueNAS SCALE 22.12.2!
+
+{{< enterprise >}}
+22.12.2 is the first SCALE release that supports TrueNAS Enterprise systems!
+For TrueNAS Enterprise systems that are already deployed with TrueNAS CORE installed, you can [contact iXsystems Support]({{< relref "GetSupportSCALE.md" >}}) to verify SCALE compatibility and schedule a migration.
+To purchase a new [TrueNAS Enterprise system](https://www.truenas.com/truenas-enterprise/), please contact iXsystems for a quote!
+{{< /enterprise >}}
+
+22.12.2 includes many new features and improved functionality that span SCALE Enterprise High Availability (HA), applications, the rootless login administrative user, enclosure management, and replication:
+
+* Adding sudo options to user and replication configuration screens
+* SSH service option for the administration user
+* Application advanced settings changes that add a force flag option
+* Replication task improvements that add reasons why tasks are waiting to run
+* (Enterprise only) Applications new Kubernetes passthrough functionality
+* (Enterprise only) New enclosure management for the R30 and Mini R platforms
+
+It also implements fixes to pool status reporting, application options, reporting functions, cloud sync and replication tasks, iSCSI shares, SMB service in HA systems, various UI issues, UI behavior related to isolated GPU and USB passthrough in VMs, and changes to setting options and failover on HA systems.
+
+### Component Versions
+
+TrueNAS SCALE is built from many different software components.
+This list has up-to-date information on which versions of Linux, ZFS, and NVIDIA drivers are included with this TrueNAS SCALE release.
+Click the component version number to see the latest release notes for that component.
+
+
+
+## 22.12.2 Change Log
+
+### New Feature
+
+* [NAS-119055](https://ixsystems.atlassian.net/browse/NAS-119055) Add new Kubernetes Passthrough Functionality
+* [NAS-120019](https://ixsystems.atlassian.net/browse/NAS-120019) New SSH service field: \`adminlogin\`
+* [NAS-120049](https://ixsystems.atlassian.net/browse/NAS-120049) Add \`sudo\` field to replication and credentials forms
+* [NAS-120452](https://ixsystems.atlassian.net/browse/NAS-120452) Add MinIO to enterprise train
+* [NAS-120660](https://ixsystems.atlassian.net/browse/NAS-120660) Branch out mirrors for 22.12.2
+
+### Epic
+
+[NAS-120108](https://ixsystems.atlassian.net/browse/NAS-120108) SCALE enterprise enclosure functionality
+
+### Improvement
+
+* [NAS-119244](https://ixsystems.atlassian.net/browse/NAS-119244) Log equivalent info to mprutil output in scale
+* [NAS-119687](https://ixsystems.atlassian.net/browse/NAS-119687) Missing force flag in Apps -> Advanced Settings
+* [NAS-119748](https://ixsystems.atlassian.net/browse/NAS-119748) Partial enclosure Management for R30 SCALE
+* [NAS-119753](https://ixsystems.atlassian.net/browse/NAS-119753) Enclosure Management for R30 SCALE \(UI\)
+* [NAS-120101](https://ixsystems.atlassian.net/browse/NAS-120101) Display waiting reason for WAITING replication task
+* [NAS-120160](https://ixsystems.atlassian.net/browse/NAS-120160) Need plx\_eeprom tool updated for and installed on scale
+* [NAS-120253](https://ixsystems.atlassian.net/browse/NAS-120253) No need to check update of a dangling image
+* [NAS-120264](https://ixsystems.atlassian.net/browse/NAS-120264) Dataset names don't have to begin with an alphanumeric character.
+* [NAS-120289](https://ixsystems.atlassian.net/browse/NAS-120289) Add min\_memory field to VM edit/create screen in UI
+* [NAS-120424](https://ixsystems.atlassian.net/browse/NAS-120424) Add a migration to add community train to preferred trains in official catalog
+* [NAS-120605](https://ixsystems.atlassian.net/browse/NAS-120605) Add reporting of NVDIMM Operational Statistics.
+* [NAS-120648](https://ixsystems.atlassian.net/browse/NAS-120648) Add deps for building spotlight elasticsearch backend
+* [NAS-121111](https://ixsystems.atlassian.net/browse/NAS-121111) Adapt to IPMI API changes
+* [NAS-121152](https://ixsystems.atlassian.net/browse/NAS-121152) Add sas3flash tool
+
+### Bug
+
+* [NAS-117906](https://ixsystems.atlassian.net/browse/NAS-117906) Cannot clear optional integer value - nodePort
+* [NAS-119427](https://ixsystems.atlassian.net/browse/NAS-119427) SCALE 22.12.0, /usr/local/bin/snmp-agent.py constantly using high CPU
+* [NAS-119431](https://ixsystems.atlassian.net/browse/NAS-119431) Bluefin Apps: Error "\[emptyDirVolume\] A dict was expected" when adding Memory Backed Volume
+* [NAS-119452](https://ixsystems.atlassian.net/browse/NAS-119452) \[Apps\] No GUI option to force selection of "partially initialized" pool
+* [NAS-119515](https://ixsystems.atlassian.net/browse/NAS-119515) hot-spares do not auto detach from zpool after they have been activated and the failed drive replaced
+* [NAS-119605](https://ixsystems.atlassian.net/browse/NAS-119605) Replication Tasks between two TrueNAS SCALE 22.12: Destination part requires mandatory root user to initiate replication
+* [NAS-119682](https://ixsystems.atlassian.net/browse/NAS-119682) scale - ui/credentials/certificates section slide-out window for certificates each have the name "edit certificate authority"
+* [NAS-119707](https://ixsystems.atlassian.net/browse/NAS-119707) \[Bluefin 22.12.0\] Angelfish sysctl values don't persist following Bluefin upgrade
+* [NAS-119750](https://ixsystems.atlassian.net/browse/NAS-119750) Sorting snapshots by space used not working as intended
+* [NAS-119838](https://ixsystems.atlassian.net/browse/NAS-119838) \[TrueNAS SCALE Bluefin\] VM USB Passthru does not see USB device
+* [NAS-119868](https://ixsystems.atlassian.net/browse/NAS-119868) View Enclosure -> Enclosure label in model "undefined"
+* [NAS-119886](https://ixsystems.atlassian.net/browse/NAS-119886) iSCSI Wizard - Add Listen should be a required field
+* [NAS-119945](https://ixsystems.atlassian.net/browse/NAS-119945) Decimals not saved in quota and reserved fields
+* [NAS-119955](https://ixsystems.atlassian.net/browse/NAS-119955) Network speed mismatch between Key-Max-Mean-Min values
+* [NAS-119980](https://ixsystems.atlassian.net/browse/NAS-119980) Missing icon in dataset details header
+* [NAS-120011](https://ixsystems.atlassian.net/browse/NAS-120011) Do not check for failover.upgrade\_pending on non-HA systems
+* [NAS-120092](https://ixsystems.atlassian.net/browse/NAS-120092) Cannot edit SMART test task
+* [NAS-120113](https://ixsystems.atlassian.net/browse/NAS-120113) Do not toggle ZFS\_ARCHIVE on mtime updates
+* [NAS-120126](https://ixsystems.atlassian.net/browse/NAS-120126) Resetting config doesn't properly go to signin page
+* [NAS-120128](https://ixsystems.atlassian.net/browse/NAS-120128) Static routes added through WebUI are not reflected in kernel
+* [NAS-120133](https://ixsystems.atlassian.net/browse/NAS-120133) Cloud Sync Task to Google Drive fails 50% of the time
+* [NAS-120157](https://ixsystems.atlassian.net/browse/NAS-120157) User creation wizard does in fact not create a subdir if the given path does not end with username
+* [NAS-120246](https://ixsystems.atlassian.net/browse/NAS-120246) IntegrityError ISCSI
+* [NAS-120269](https://ixsystems.atlassian.net/browse/NAS-120269) nss\_winbind can return results for passdb-backed local users
+* [NAS-120296](https://ixsystems.atlassian.net/browse/NAS-120296) remove netbiosname for standby controller from webui for services->SMB.
+* [NAS-120303](https://ixsystems.atlassian.net/browse/NAS-120303) iSCSI Extent not enforcing selected Logical Block Size
+* [NAS-120310](https://ixsystems.atlassian.net/browse/NAS-120310) Jobs dropdown progress bar is broken for very long job descriptions
+* [NAS-120352](https://ixsystems.atlassian.net/browse/NAS-120352) unable to setup SSH over Rsync
+* [NAS-120372](https://ixsystems.atlassian.net/browse/NAS-120372) High Memory Usage - /usr/bin/python3 /usr/bin/cli --menu --pager
+* [NAS-120376](https://ixsystems.atlassian.net/browse/NAS-120376) Storage - Manage Devices incorrectly presents the "EXTEND" button for member disks in RAIDZ VDEVs
+* [NAS-120387](https://ixsystems.atlassian.net/browse/NAS-120387) When loading currently running jobs for task manager, exclude the ones that have \`transient: true\`
+* [NAS-120403](https://ixsystems.atlassian.net/browse/NAS-120403) GUI behavior for isolated GPUs
+* [NAS-120426](https://ixsystems.atlassian.net/browse/NAS-120426) When creating "admin" user it says "root account password"
+* [NAS-120432](https://ixsystems.atlassian.net/browse/NAS-120432) Replication tasks cannot be edited
+* [NAS-120435](https://ixsystems.atlassian.net/browse/NAS-120435) Button-field for type in "Reporting" disappears when pressing "Reporting" again
+* [NAS-120446](https://ixsystems.atlassian.net/browse/NAS-120446) py-libzfs should have a non-blank default history prefix
+* [NAS-120489](https://ixsystems.atlassian.net/browse/NAS-120489) \[SCALE/Apps\]: Setting \`immutable\` on a \`string\` causes it to be locked even at install time.
+* [NAS-120490](https://ixsystems.atlassian.net/browse/NAS-120490) \[SCALE/Apps\]: Editing an app with a \`type: list\` variable with added items, fails to render
+* [NAS-120492](https://ixsystems.atlassian.net/browse/NAS-120492) \[SCALE/Apps\]: Initial value of a dropdown is null, but after picking something and reverting to empty is no longer null
+* [NAS-120493](https://ixsystems.atlassian.net/browse/NAS-120493) \[SCALE/Apps\]: Weird behavior with show\_if under subquestions
+* [NAS-120498](https://ixsystems.atlassian.net/browse/NAS-120498) Pool reporting as unhealthy due to aborted smart tests, not failures
+* [NAS-120672](https://ixsystems.atlassian.net/browse/NAS-120672) Fix disk\_resize to work with solidigm \(Intel\) P5430 \(D5-P5316\) 30TB NVMe drives
+* [NAS-121074](https://ixsystems.atlassian.net/browse/NAS-121074) Non-critical interfaces may have BACKUP vrrp state on Active Controller
+* [NAS-121133](https://ixsystems.atlassian.net/browse/NAS-121133) Manual reboot of active controller via SSH breaks HA on SCALE
+## 22.12.1
+{{< expand "22.12.1" "v">}}
February 21, 2023
TrueNAS SCALE 22.12.1 has been released. It includes many new features and improved functionality that span initial effort for high availability (HA) feature support and improvements, and new or improved features in SCALE applications, services, ACLs, and shares:
@@ -146,11 +265,11 @@ TrueNAS SCALE 22.12.1 has been released. It includes many new features and impro
* [NAS-118803](https://ixsystems.atlassian.net/browse/NAS-118803) VM deletion performs a check on systems virtualization capability
* [NAS-118859](https://ixsystems.atlassian.net/browse/NAS-118859) add minio/operator app and use logsearchapi entrypoint override
* [NAS-118870](https://ixsystems.atlassian.net/browse/NAS-118870) Sharing/SMB/Add Name field not updated after first folder click
-* [NAS-118895](https://ixsystems.atlassian.net/browse/NAS-118895) \[Apps\] Installing App without kubernetes objects \(empty\), leads to error and middleware lockup
+* [NAS-118895](https://ixsystems.atlassian.net/browse/NAS-118895) \[Apps\] Installing App without Kubernetes objects \(empty\), leads to error and middleware lockup
* [NAS-118921](https://ixsystems.atlassian.net/browse/NAS-118921) \[Apps\] Helm charts are recreated/upgraded on restart before cluster is ready
* [NAS-118992](https://ixsystems.atlassian.net/browse/NAS-118992) Verify that the update to Syncthing 1.22.0 works out of the box w/latest versions of SCALE
* [NAS-119007](https://ixsystems.atlassian.net/browse/NAS-119007) API call "pool.dataset.details" responds to an object with a field "snapshot\_count = 0"
-* [NAS-119037](https://ixsystems.atlassian.net/browse/NAS-119037) Critical alert : Failed to start kubernetes cluster for Applications : \[EFAULT\] Failed to configure PV/PVCs support
+* [NAS-119037](https://ixsystems.atlassian.net/browse/NAS-119037) Critical alert : Failed to start Kubernetes cluster for Applications : \[EFAULT\] Failed to configure PV/PVCs support
* [NAS-119081](https://ixsystems.atlassian.net/browse/NAS-119081) Do not disallow failover when system versions mismatch
* [NAS-119110](https://ixsystems.atlassian.net/browse/NAS-119110) Zpool status is not showing the last scheduled Scrub event
* [NAS-119113](https://ixsystems.atlassian.net/browse/NAS-119113) Head template error when reloading the page
@@ -249,6 +368,7 @@ TrueNAS SCALE 22.12.1 has been released. It includes many new features and impro
* [NAS-120099](https://ixsystems.atlassian.net/browse/NAS-120099) desktop.ini files break permissions
* [NAS-120126](https://ixsystems.atlassian.net/browse/NAS-120126) Resetting config doesn't properly go to signin page
+{{< /expand >}}
## 22.12.0
{{< expand "22.12.0" "v" >}}
@@ -315,7 +435,7 @@ TrueNAS SCALE 22.12.0 has been released and includes many new features and impro
* [NAS-115222](https://ixsystems.atlassian.net/browse/NAS-115222) Explicitly ask for user's input on websockify port of display devices
* [NAS-115308](https://ixsystems.atlassian.net/browse/NAS-115308) Update Bluefin apt mirrors
* [NAS-115390](https://ixsystems.atlassian.net/browse/NAS-115390) Remove repository logic from repo-mgmt as we don't have any anymore
-* [NAS-115402](https://ixsystems.atlassian.net/browse/NAS-115402) Update kubernetes and related dependencies
+* [NAS-115402](https://ixsystems.atlassian.net/browse/NAS-115402) Update Kubernetes and related dependencies
* [NAS-115407](https://ixsystems.atlassian.net/browse/NAS-115407) Have automatic updates for collabora app
* [NAS-115409](https://ixsystems.atlassian.net/browse/NAS-115409) Improve apt sources generation in builder
* [NAS-115479](https://ixsystems.atlassian.net/browse/NAS-115479) Get usage stats of docker images being used by ix-chart
@@ -388,7 +508,7 @@ TrueNAS SCALE 22.12.0 has been released and includes many new features and impro
* [NAS-119047](https://ixsystems.atlassian.net/browse/NAS-119047) Bluefin Kernel updates to fix several CVEs
* [NAS-119056](https://ixsystems.atlassian.net/browse/NAS-119056) Refactor truecommand wireguard interface name
* [NAS-119059](https://ixsystems.atlassian.net/browse/NAS-119059) Remove numberValidator
-* [NAS-119084](https://ixsystems.atlassian.net/browse/NAS-119084) Remove kubernetes asyncio from scale build
+* [NAS-119084](https://ixsystems.atlassian.net/browse/NAS-119084) Remove Kubernetes asyncio from scale build
* [NAS-119103](https://ixsystems.atlassian.net/browse/NAS-119103) Highlight degraded vdevs in Devices
* [NAS-119132](https://ixsystems.atlassian.net/browse/NAS-119132) Allow 3rd party catalogs to benefit from catalog sync performance improvements
* [NAS-119136](https://ixsystems.atlassian.net/browse/NAS-119136) Add glusterfs.filesystem tests
@@ -427,12 +547,12 @@ TrueNAS SCALE 22.12.0 has been released and includes many new features and impro
* [NAS-117990](https://ixsystems.atlassian.net/browse/NAS-117990) Service running toggle state incorrect after canceling
* [NAS-118236](https://ixsystems.atlassian.net/browse/NAS-118236) Trouble expanding pool, error "\[EZFS\_NOCAP\] cannot relabel '/dev/disk/by-partuuid/905647b7-3ca7-11e9-a8f0-8cae4cfe7d0f': unable to read disk capacity"
* [NAS-118492](https://ixsystems.atlassian.net/browse/NAS-118492) Datasets detail cards should realign to fill horizontal space first
-* [NAS-118571](https://ixsystems.atlassian.net/browse/NAS-118571) Apps Used port detection, does not read kubernetes services
+* [NAS-118571](https://ixsystems.atlassian.net/browse/NAS-118571) Apps Used port detection, does not read Kubernetes services
* [NAS-118660](https://ixsystems.atlassian.net/browse/NAS-118660) Cloud sync task "Bandwidth Limit" pop-up help text appears to be incorrect
* [NAS-118691](https://ixsystems.atlassian.net/browse/NAS-118691) NoVNC Not working for Some VMS on Scale Bluefin Beta 2
* [NAS-118738](https://ixsystems.atlassian.net/browse/NAS-118738) \[SCALE\]: svclb pods are getting created on kube-system namespace and there are also couple of stuck svclb pods from previous installation
* [NAS-118756](https://ixsystems.atlassian.net/browse/NAS-118756) Deleting a dataset removes snapshot tasks assigned to the parent of a dataset
-* [NAS-118759](https://ixsystems.atlassian.net/browse/NAS-118759) \[SCALE\] Failed to start kubernetes cluster for Applications
+* [NAS-118759](https://ixsystems.atlassian.net/browse/NAS-118759) \[SCALE\] Failed to start Kubernetes cluster for Applications
* [NAS-118765](https://ixsystems.atlassian.net/browse/NAS-118765) SMB Share ACLs do not open/work on TrueNAS Scale 22.12-BETA.2
* [NAS-118803](https://ixsystems.atlassian.net/browse/NAS-118803) VM deletion performs a check on systems virtualization capability
* [NAS-118819](https://ixsystems.atlassian.net/browse/NAS-118819) Apps failing to list any thing, spinning circle, after a reboot
@@ -443,7 +563,7 @@ TrueNAS SCALE 22.12.0 has been released and includes many new features and impro
* [NAS-118867](https://ixsystems.atlassian.net/browse/NAS-118867) \[Scale\] Apps does not respect the selected version.
* [NAS-118868](https://ixsystems.atlassian.net/browse/NAS-118868) \[SCALE\] Apps UI goes into a back and forth loop between tabs.
* [NAS-118891](https://ixsystems.atlassian.net/browse/NAS-118891) Used snapshot size not showed on the storage page
-* [NAS-118895](https://ixsystems.atlassian.net/browse/NAS-118895) \[Apps\] Installing App without kubernetes objects \(empty\), leads to error and middleware lockup
+* [NAS-118895](https://ixsystems.atlassian.net/browse/NAS-118895) \[Apps\] Installing App without Kubernetes objects \(empty\), leads to error and middleware lockup
* [NAS-118897](https://ixsystems.atlassian.net/browse/NAS-118897) Fix invalid token on the Shell page after manual reload
* [NAS-118898](https://ixsystems.atlassian.net/browse/NAS-118898) \[SCALE\] Editing an app does not show the default values for fields under a checkbox
* [NAS-118902](https://ixsystems.atlassian.net/browse/NAS-118902) Minio app update to 2022-10-29\_1.6.59 stuck at “Deploying”. Requires Roll Back to 1.6.58
@@ -465,7 +585,7 @@ TrueNAS SCALE 22.12.0 has been released and includes many new features and impro
* [NAS-119022](https://ixsystems.atlassian.net/browse/NAS-119022) \[Bluefin RC1\] Sysctl - 'field was not expected' error when trying to disable sysctl
* [NAS-119025](https://ixsystems.atlassian.net/browse/NAS-119025) Errors not shown when ACME certificate is not created
* [NAS-119034](https://ixsystems.atlassian.net/browse/NAS-119034) after 22.12-BETA.2 to RC.1 upgrade, can no longer log into web UI
-* [NAS-119037](https://ixsystems.atlassian.net/browse/NAS-119037) Critical alert : Failed to start kubernetes cluster for Applications : \[EFAULT\] Failed to configure PV/PVCs support
+* [NAS-119037](https://ixsystems.atlassian.net/browse/NAS-119037) Critical alert : Failed to start Kubernetes cluster for Applications : \[EFAULT\] Failed to configure PV/PVCs support
* [NAS-119038](https://ixsystems.atlassian.net/browse/NAS-119038) \[Bluefin RC1\] Chart tooltips/descriptions on headers are not rendered in the UI
* [NAS-119039](https://ixsystems.atlassian.net/browse/NAS-119039) \[Bluefin RC1\] Pod Shell/Logs applications link incorrectly goes to dashboard
* [NAS-119043](https://ixsystems.atlassian.net/browse/NAS-119043) Fix and improve config.save and config.upload
@@ -523,7 +643,7 @@ SCALE 22.12-RC.1 introduces a change in Applications. Users upgrading to 22.12-R
* [NAS-118325](https://ixsystems.atlassian.net/browse/NAS-118325) Add USB passthrough support in the UI
* [NAS-118446](https://ixsystems.atlassian.net/browse/NAS-118446) add MISMATCH\_VERSIONS to webUI
* [NAS-118505](https://ixsystems.atlassian.net/browse/NAS-118505) R50BM needs to be added to webUI codebase
-* [NAS-118593](https://ixsystems.atlassian.net/browse/NAS-118593) Update kubernetes to 1.25 and related deps
+* [NAS-118593](https://ixsystems.atlassian.net/browse/NAS-118593) Update Kubernetes to 1.25 and related deps
* [NAS-118642](https://ixsystems.atlassian.net/browse/NAS-118642) Allow users to specify USB vendor/product id in UI
* [NAS-118701](https://ixsystems.atlassian.net/browse/NAS-118701) add new public endpoint to return whether or not truenas is clustered
* [NAS-118749](https://ixsystems.atlassian.net/browse/NAS-118749) Branchout mirrors for RC1
@@ -721,7 +841,7 @@ TrueNAS SCALE 22.12-BETA.2 has been released and includes many new features and
* [NAS-118101](https://ixsystems.atlassian.net/browse/NAS-118101) Function clean up for Datasets module
* [NAS-118058](https://ixsystems.atlassian.net/browse/NAS-118058) Sync \[visual-ui\] data on the Pool and Storage widgets
* [NAS-118044](https://ixsystems.atlassian.net/browse/NAS-118044) Refactor console message footer
-* [NAS-118041](https://ixsystems.atlassian.net/browse/NAS-118041) Do not backup catalogs dataset on kubernetes backup
+* [NAS-118041](https://ixsystems.atlassian.net/browse/NAS-118041) Do not backup catalogs dataset on Kubernetes backup
* [NAS-118039](https://ixsystems.atlassian.net/browse/NAS-118039) Clean up topbar.component
* [NAS-118007](https://ixsystems.atlassian.net/browse/NAS-118007) Remove BaseService
* [NAS-118006](https://ixsystems.atlassian.net/browse/NAS-118006) Refactor ReportsDashboard module
@@ -1090,7 +1210,7 @@ Additional feature in future Bluefin releases:
* [NAS-117841](https://ixsystems.atlassian.net/browse/NAS-117841) Ban `res` as variable name
* [NAS-117803](https://ixsystems.atlassian.net/browse/NAS-117803) Blank dashboard of the first login
* [NAS-117802](https://ixsystems.atlassian.net/browse/NAS-117802) Use truenas tls endpoint for usage stats
-* [NAS-117775](https://ixsystems.atlassian.net/browse/NAS-117775) Update kubernetes related dependencies from upstream
+* [NAS-117775](https://ixsystems.atlassian.net/browse/NAS-117775) Update Kubernetes related dependencies from upstream
* [NAS-117769](https://ixsystems.atlassian.net/browse/NAS-117769) Add support for multi selection in ix-explorer
* [NAS-117719](https://ixsystems.atlassian.net/browse/NAS-117719) Do not run CI checks when only RE tests were changed
* [NAS-117707](https://ixsystems.atlassian.net/browse/NAS-117707) Merge zfs-2.1.6-staging
@@ -1300,7 +1420,7 @@ Additional feature in future Bluefin releases:
* [NAS-117307](https://ixsystems.atlassian.net/browse/NAS-117307) Investigate/fix ix-volumes being migrated on apps migration
* [NAS-117306](https://ixsystems.atlassian.net/browse/NAS-117306) Fix ctdb jobs on pnn 0
* [NAS-117305](https://ixsystems.atlassian.net/browse/NAS-117305) fill in app information in pool.dataset.details
-* [NAS-117303](https://ixsystems.atlassian.net/browse/NAS-117303) use ejson in kubernetes backup plugin
+* [NAS-117303](https://ixsystems.atlassian.net/browse/NAS-117303) use ejson in Kubernetes backup plugin
* [NAS-117293](https://ixsystems.atlassian.net/browse/NAS-117293) Deprecate legacy behavior to allow empty homes path
* [NAS-117289](https://ixsystems.atlassian.net/browse/NAS-117289) Attempting to delete VM causes system crash
* [NAS-117285](https://ixsystems.atlassian.net/browse/NAS-117285) \[required\] validator from FormsModule conflicts with \* [required\] input ix-\* components
@@ -1371,7 +1491,7 @@ Additional feature in future Bluefin releases:
## Known Issues
Known issues are those found during internal testing or reported by the community and are listed in three tables:
-* Notices that are provided to provide more detail about Bluefin specific changes.
+* Notices that provide more detail about Bluefin-specific changes.
* Issues from a release that will be resolved in a future targeted release(s).
* Issues resolved in a particular version.
@@ -1379,52 +1499,73 @@ Known issues are those found during internal testing or reported by the communit
| Notice or Behavior | Details |
|--------------------|---------|
+| TrueNAS does not create alerts for SMR disks. | TrueNAS SCALE and TrueCommand have never created alerts when SMR disks are used. |
+| TrueNAS SCALE does not support T10-DIF drives. | We are currently working on a procedure to resolve the issue. |
| Unable to mount an NFS export after migrating from CORE > SCALE or updating to 22.02.0. | The /etc/exports file is no longer generated when the NFS configuration contains mapall or maproot entries for unknown users or groups. If you are unable to mount an NFS export, review your NFS share configuration and change any mapall or maproot entries to a user or group known to your environment, such as root or wheel. |
| SCALE Gluster/Cluster. | Gluster/Cluster features are still in testing. Administrators should use caution when deploying and avoid use with critical data. |
| AFP sharing is removed from TrueNAS SCALE. | The AFP protocol is deprecated and no longer receives development effort or security fixes. TrueNAS SCALE automatically migrates any existing AFP shares into an SMB configuration that is preset to function like an AFP share. |
| Alderlake GPU acceleration | TrueNAS SCALE Bluefin includes Linux Kernel 5.15, which can enable Alderlake GPU acceleration by using the following boot loader tunable and rebooting: `midclt call system.advanced.update '{"kernel_extra_options": "i915.force_probe=4690" }'`. Replace `4690` with the PCI device ID of your specific Alderlake GPU (see the sketch after this table). |
+| TrueNAS Bluefin no longer supports MS-DOS based SMB clients. | As of SCALE 22.12 (Bluefin), TrueNAS uses Samba 4.17. The Samba project deprecated the SMB1 protocol and disabled it by default as of version 4.11, and continues to remove older SMB1 protocol commands and unused dialects that are replaced in more modern protocol versions. Refer to the [Samba](https://www.samba.org/samba/latest_news.html) release notes for more information. |
+| Cannot mount WebDAV share in Windows when WebDAV service is set to Basic Authentication | If the TrueNAS WebDAV service is set to Basic Authentication, you cannot mount the share in Windows. This is a security protection on the part of Windows, as Basic Authentication is considered an insecure way to input passwords. While the Windows Registry can be edited to allow Basic Authentication, this is not recommended. Instead, access WebDAV shares using a browser with HTTPS enabled or mount shares with Digest Authentication enabled. |
+| App deployment can get stuck in validation when the same host path is used by Apps and TrueNAS sharing services (e.g. SMB and NFS). | Shared host paths are considered insecure and are not recommended. Review host paths used by Apps and sharing services and adjust the paths to be unique. As a last resort, **Host Path Safety Checks** can be disabled in **Apps > Settings > Advanced Settings**, but this can result in system and app instability. |
+| Apps fail to start | There are known issues where applications fail to start after reboot. The fixed-in release is not known at this time. |
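+
+The Alderlake tunable above takes the PCI device ID of the GPU. As a minimal sketch (the `grep` pattern here is only an illustration), you can look up that ID from **System Settings > Shell** before setting the tunable:
+
+```shell
+# List display controllers with numeric [vendor:device] IDs; for an Intel GPU,
+# the four hex digits after "8086:" are the value to use with i915.force_probe.
+lspci -nn | grep -Ei 'vga|display'
+```
+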
### Known Issues with a Future Resolution
| Seen In | Key | Summary | Workaround | Resolution Target |
|---------|-----|---------|------------|-------------------|
-| 22.12.1 | NAS-120432 | Existing replication tasks updated from 22.12.0 cannot be edited in 22.12.1 | Replication tasks that can't be edited can be remade as a new replication task to work around this bug. | Targeted 22.12.2 |
-| 22.12.1 | NAS-120368 | Selecting UI Configuration for User During Fresh Install leads to default login screen instead of User Configuration screen | When doing a fresh install of SCALE using the iso file, when presented with the option to configure the user after completing the install and upon first UI login, the default sign in splash screen displays rather than the user configuration screen. This means the user cannot log in as admin or root because the root user password is disabled by default. Create the admin user as part of the iso installation process. | Targeted 22.12.2 |
-| 22.12.1 | NAS-120366 | The Available Applications don't load after selecting the pool | On an Enterprise HA system, using the admin user and after creating a pool, Apps were loading until the pool to use for applications was selected. The screen shows a spinner but doesn't load the Available Applications screen. This could be a screen cache issue. Try switching to another screen and back to Apps or clear the browser session cache. | Targeted 22.12.2 |
-| 22.12.1 | NAS-120361 | TrueNAS Mini Enclosure view should not have Identify Drive button | The View Enclosure screen for TrueNAS Mini should not have the Identify Drive button on the disk information screen after selecting a drive on the image. | Unassigned |
-| 22.12.1 | NAS-120348 | Storage and Topology page does not update when a pool gets degraded | After removing a drive from a pool, the main Storage and the Topology pages do not update the status to show a degraded state unless you do a hard refresh of the screen. | Targeted 23.10-ALPHA.2 |
-| 22.12.1 | NAS-120319 | Cannot Replicate from an Encrypted Dataset to an Unencrypted Dataset | When creating a replication task from an encrypted dataset to a remote system with an unencrypted dataset, and creating a new SSH connection, the task fails with a permissions error. | Targeted 22.12.2 |
-| 22.12.1 | NAS-120316 | Backblaze Bucket Folders do not Display Properly in Cloud Sync Task Form | When creating a new cloud sync task for Backblaze, after selecting the Bucket the Folders dropdown list only has some of the folders in the selected bucket impacting the ability to properly configure a Backblaze B2 cloud sync task. | Targeted 22.12.2 |
-| 22.12.1 | NAS-120266 | Stopping VM Popup Does Not Go Away after VM Stops | After clicking Force Stop After Timeout, the Stopping VM dialog does not close after the VM stops. | Targeted 22.12.3<br>23.10.ALPHA.1 |
-| 22.12.1 | NAS-120249 | Enclosure View does not update when drive is removed from a pool/system | The image on the View Enclosure screen does not update after removing a drive from a pool or system. This appears to be related to a screen caching issue. | Targeted 22.12.2 |
-| 22.12.1 | NAS-120243 | Using Non-Supported Characters while creating a Boot Environment Creates a CallError | SCALE allows you to save a new boot environment created with non-supported characters but it results in a call error. Do not use bracket ([]), brace ({}), pipe (|), colon (:), and comma (,) characters in the Name field. | Targeted 22.12.2 |
-| 22.12.1 | NAS-120232 | Apps not starting after failover | With Apps deployed on an HA system, after failover the apps stick a deploying but never finish and the system generates alerts in the UI. Issue is pools are degraded after failover, and docker is not able to get image details. | Targeted 22.12.2 |
-| 22.12.1 | NAS-120216 | Need Validation for IP address when setting up Replication | When setting up replication to a remote system, the IP address entered must include http:// as part of the IP address entered. | Unassigned |
-| 22.12.1 | NAS-120210 | Enclosure View does not Function Properly on TrueNAS Minis | View Enclosure only displays the top and bottom rows of slots as active and clickable, but the middle row is inactive even when loaded with drives. | Targeted 22.12.1<br>23.10-ALPHA.1 (Cobia) |
-| 22.12.1 | NAS-120191 | Dataset not fully migrated to the pool before starting rrdcached | Collectd can't talk to rrdcached. We automatically move the system dataset to the zpool on first pool creating and we have a race where the system dataset was not fully migrated to the pool before we tried to start rrdcached. | Targeted 22.12.2 |
-| 22.12.1 | NAS-120145 | The web UI isn't updating various parts on the screen (Jobs) | On and Enterprise HA system with some alerts, the Jobs screen shows tasks running and trying to gather the information for listed alerts. | Targeted 22.12.2 |
-| 22.12.1 | NAS-120136 | App Collabora installation failed with error values.config: not a string values.config.extra_params not a string. | | Targeted 22.12.2<br>23.10-ALPHA.1 (Cobia) |
-| 22.12.1 | NAS-120118 | Network Dashboard Card is missing some Up Interfaces | On an Enterprise HA system with multiple network interfaces configured and active, the Network widget on the main Dashboard does not list all active interfaces and the Reports screen does not include the same missing active interfaces. | Targeted 22.12.2 |
-| 22.12.1 | NAS-120069 | 2FA SSH not functional for non root users | After installing SCALE using the admin user (non-root option), setting the SSH service to allow admin to log in, and then enabling 2FA. After logging out and verifying 2FA works for the UI, and then changing the 2FA settings to Enable Tow-Factor Auth for SSH, the SSH session asks for the admin password several times before the session disconnects. | Targeted 23.10-ALPHA.1 |
-| 22.12.0 | N/A | TrueNAS SCALE does not support T10-DIF drives. | We are currently working on a procedure to resolve the issue. | Targeted 22.12.2 |
-| 22.12.0 | NAS-119608 | Middleware halt after upgrade from Angelfish to Bluefin due to multiple iSCSI portals with the same IP address. | Before upgrading from SCALE Angelfish, remove any iSCSI portals with duplicate IP addresses. | 22.12.1<br>23.10-ALPHA.1 (Cobia) |
-| 22.12.0 | NAS-119374 | Non-root administrative users cannot access installed VM or App console. | For Apps console access, log in to the UI as the **root** user. For VM console access, go to **Credentials > Local Users** and edit the non-root administrative user. In the **Auxiliary Groups** field, open the dropdown and select the **root** option. Click **Save**. | 22.12.1<br>23.10-ALPHA.1 (Cobia) |
-| 22.12.0 | N/A | App deployment can get stuck in validation when the Host Path is used between Apps and TrueNAS sharing services (e.g. SMB and NFS). | Shared host paths are considered insecure and are not recommended. Review host paths used by Apps and Sharing services and adjust paths to be unique. As a last resort that can result in system and app instability, **Host Path Safety Checks** can be disabled in **Apps > Settings > Advanced Settings**. | N/A |
-| 22.12.0 | NAS-119270 | One Time Replication of Same System to A Different System Fails with Traceback | Unable to perform a run once operation for a replication task without getting a traceback, or to set an option from the UI replication wants. | Targeted 23.10-ALPHA.1 (Cobia) |
-| 22.12-BETA.2 | NAS-118613 | Cannot mount WebDAV share in Windows when WebDAV service is set to Basic Authentication | If the TrueNAS WebDAV service is set to Basic Authentication, you cannot mount the share in Windows. This is a security protection on the part of Windows as Basic Authentication is considered an insecure way to input passwords. While the Windows Registry can be edited to allow for basic authentication, this is not recommended. It is recommended to access WebDAV shares using a browser with https security enabled or mounting shares with Digest Authentication enabled. | N/A |
-| 22.12-BETA.2 | N/A | TrueNAS Bluefin no longer supports MS-DOS based SMB clients. | As of SCALE 22.12, Bluefin, TrueNAS now uses Samba 4.17. Samba 4.16 announced in their release notes that they deprecated and disabled the whole SMB1 protocol as of 4.11. If needed for security purposes or code maintenance they continue to remove older protocol commands and unused dialects or that are replaced in more modern SMB1 version. Refer to [Samba](https://www.samba.org/samba/latest_news.html) release notes for more information. | N/A |
+| 22.12.2 | NAS-121383 | Replicating a pull encrypted to existing non-encrypted dataset gets error saying Has Data when the dataset is empty | After correctly configuring a pull replication task with Source set to On a Different System, selecting the SSH Connection and Use Sudo for ZFS Commands, then setting Destination to On this System, selecting Custom Snapshots, and setting the Destination path to a local unencrypted dataset, selecting Run Once and Start Replication fails the replication task with an error when it should succeed. | 22.12.3<br>23.10.ALPHA.1 |
+| 22.12.2 | NAS-121378 | Sudo Enabled dialog appears for Local source location replication | When setting up a replication task with the Source set to On a Different System, then selecting an SSH Connection, the Sudo Enabled dialog displays, allowing selection of Use Sudo for ZFS Commands. The Source automatically changes to On this System as it should, but the Sudo Enabled dialog displays when it should not. | TBD |
+| 22.12.2 | NAS-121371 | Web portal for Apps launches after failover but was closed pre-failover | After installing and successfully deploying the MinIO app on an HA system with the app active, successfully logging into it, and then closing the MinIO web browser, failing over the HA system and logging back in automatically launched the MinIO web portal as expected, but the MinIO browser URL contained the VIP address without the :30001 port, so it had no connection and was blank. | 22.12.3<br>23.10.ALPHA.1 |
+| 22.12.2 | NAS-121367 | User creation field validation - spaces accepted | When creating a new user, a number of fields marked mandatory allow a space character but should not. The affected fields are Full Name and the password fields. | TBD |
+| 22.12.2 | NAS-121358 | Need method for userspace to report arm status info from ACPI tables | Older Micron nvdimms do not properly report arm information status until the second power cycle occurs and after the nvdimm is unarmed. This unarmed state during power loss can result in data loss. Find a way for userspace to identify and report the ACPI nvdimm arm information status on TrueNAS CORE and SCALE. | 22.12.3<br>23.10.ALPHA.1 |
+| 22.12.2 | NAS-121356 | Required for starting Kubernetes alert after clean rootless HA install, Apps pool selected and failover. | After an HA clean install as the admin user (rootless), configuring HA, creating a pool for Apps, and selecting it in Apps, the MinIO install attempt fails but does not send an alert about failing to configure the Kubernetes cluster for applications until the HA system fails over. At that time, when the standby controller is active, the critical Failed to configure Kubernetes cluster for Applications alert is received. | 22.12.3 |
+| 22.12.2 | NAS-121333 | Post Reset Network interface cannot be set up blocking bring up of system | On an HA system, after configuring the HA license, networking, and confirming failover works, then resetting the configuration through an API call, selecting option 1 in the Console setup menu to configure the interface aliases, failover group, failover_aliases, and failover_virtual_aliases returns an error after saving the configuration changes. The DHCP option on HA systems is hidden in the Console setup menu, so you cannot disable it to allow this change. To work around this issue, use the UI to make network setting changes to the primary interface. | 22.12.3<br>23.10.ALPHA.1 |
+| 22.12.2 | NAS-121274 | Error when attempting disk wipe post install | On an HA system, after installing SCALE, performing a Disk Wipe during the initial configuration of a new storage pool fails if the selected disk is part of an exported pool. SCALE prevents this operation by design to avoid accidental destruction of data on a disk associated with an exported pool. | 22.12.3<br>23.10.ALPHA.1 |
+| 22.12.2 | NAS-121244 | Custom Schedule window displays partially off screen | The Custom Schedule window displays partially off screen when scheduling a task, such as a periodic snapshot task, on a system with a smaller screen like a laptop running at 125% zoom in Windows. The issue is not present on a normal monitor. | 22.12.3<br>23.10.ALPHA.1 |
+| 22.12.2 | NAS-121237 | HA system does not fail over and log out after selecting Disable Failover. | On an HA system, after selecting Disable Failover, clearing Default TrueNAS controller, then confirming you want to failover, the system does not failover but does log out of the UI. | 22.12.3 |
+| 22.12.2 | NAS-121207 | Removed drive from pool does not show attached after reinsertion, spare remains in use | Removing a drive from a mirrored pool degrades the pool so the spare attaches, but after reinserting the drive and the pool status returning to healthy, the spare still shows an in-use status in the Enclosure view and the reinserted drive does not show as part of the pool. | 22.12.3 |
+| 22.12.2 | NAS-121128 | Reporting CPU chart only ever shows in percentage | The Reporting CPU option to report kernel time does not show in the report when selected. Reports only show percentage (%) usage values. | 22.12.3<br>23.10.ALPHA.1 |
+| 22.12.2 | NAS-121085 | SCALE CLI debug command failure | The SCALE CLI system debug command fails to generate a system debug, and returns errors when attempting the command and the command with additional arguments. | 22.12.3<br>23.10.ALPHA.1 |
+| 22.12.2 | NAS-121065 | x-series ntb failing to initialize on receive buffer | The x-series NTB fails to initialize on the receive buffer. | 22.12.3<br>23.10.ALPHA.1 |
+| 22.12.2 | NAS-121064 | Fix x-series controller logic | Corrects the method of calculating the controller position in the chassis, which prevented IP address assignment of the ntb0 interface. | 22.12.3<br>23.10-ALPHA.1 (Cobia) |
+| 22.12.2 | NAS-121035 | Web UI not updating pool column in Storage > Disks page (disk.get_unused) | If you perform a Disk Wipe operation on a disk associated with an exported zpool, the pool column does not update and the disk shows the wrong state. | 22.12.3<br>23.10-ALPHA.1 (Cobia) |
+| 22.12.2 | NAS-121030 | ES12 Enclosure View not updating after drive insertion and recognition (HA/SCALE) | An HA system Enclosure View for an ES12 does not update after drive insertion and recognition by SCALE. | 22.12.3 |
+| 22.12.2 | NAS-121014 | System configured on LA time but Network dashboard card is in EST | The system time is configured for PST while the main Dashboard Network card displays EST. | 22.12.3<br>23.10-ALPHA.1 (Cobia) |
+| 22.12.2 | NAS-120961 | HA: Connected TrueCommand Cloud status is disabled after HA failover | After adding an HA system to TrueCommand, failing over the HA system changes the TrueCommand cloud status to Disabled. TrueCommand should remain enabled and connected. | 22.12.3 |
+| 22.12.2 | NAS-120669 | HA: Pending Upgrade screen presented again after upgrade completes | After completing the upgrade, including clicking Continue on the Pending Upgrade window, both controllers show the same version and HA is up. After later logging into the system, the Pending Upgrade window still shows up, and clicking Continue results in the error message Failed to Activate the BE. | 22.12.3 |
+| 22.12.2 | NAS-120594 | After clean install of 22.12.2 build, upon login get InstanceNotFound call err/traceback | After completing a clean install of a 22.12.2 build, an InstanceNotFound error traceback stating PoolDataset does not exist is received immediately after first login. When setting up the initial storage pool, two disks were found associated with a pool, but importing (creating) the first pool worked without issue. | 23.10.BETA.1 |
+| 22.12.1<br>22.12.2 | NAS-121087 | TrueNAS CLI and Console User Shell Options Don't Work | After creating a user and setting Shell type to TrueNAS Console or TrueNAS CLI, then changing the SSH service to enable Allow Password Authentication, an error is received when attempting to set up an SSH session. | 22.12.3<br>23.10.ALPHA.1 |
+| 22.12.1 | NAS-120366 | The Available Applications do not load after selecting the pool | On an Enterprise HA system, using the admin user and after creating a pool, Apps load until you select the pool to use for applications, then stop. The screen shows a spinner but does not load the Available Applications screen. This could be a screen cache issue. Try switching to another screen and back to Apps or clear the browser session cache. | 22.12.3 |
+| 22.12.1 | NAS-120348 | Storage and Topology page does not update when a pool gets degraded | After removing a drive from a pool, the main Storage and the Topology pages do not update the status to show a degraded state unless you do a hard refresh of the screen. | 23.10-ALPHA.2 |
+| 22.12.1 | NAS-120238 | Widget Sizes and Fonts Differ Between Web Browsers | Widget sizes and text in the web UI differ between web browsers. | 23.10-BETA.1 |
+| 22.12.1 | NAS-120069 | 2FA SSH not functional for non root users | After installing SCALE using the admin user (non-root option), setting the SSH service to allow admin to log in, enabling 2FA, logging out and verifying 2FA works for the UI, and then changing the 2FA settings to Enable Two-Factor Auth for SSH, the SSH session asks for the admin password several times before the session disconnects. | 23.10-ALPHA.1 |
+| 22.12.0 | NAS-119270 | One Time Replication of Same System to A Different System Fails with Traceback | Unable to perform a run once operation for a replication task without getting a traceback, or to set an option from the UI that replication requires. | 23.10-ALPHA.1 (Cobia) |
+| 22.12.0 | NAS-115112 | Zpool status upon removal/reinsertion of spare does not update | After removing a spare from a pool, the zpool status showed the spare as removed, but after reinserting the spare, the zpool status does not update and continues to show the spare as removed. The disk does show in inventory. | 23.10-ALPHA.1 |
| 22.12-BETA.1 | NAS-117974 | Replication Task Wizard Source and Destination fields cut off the path information | The **Source** and **Destination** fields in the **Replication Task Wizard** window are cutoff. UI form issue that positions the paths in the fields such that only part of the value is visible. | Backlog |
-| 22.12-BETA.1 | NAS-118063 | SCALE Cluster growth/resize features | Currently, there is no way to grow or resize an existing cluster without the user destroying their cluster and starting with a new cluster. This issue looks to implement a solution using TrueCommand and TrueNAS API that provides the ability to have shared volumes that do not occupy all nodes in the cluster, add one or more nodes to a cluster without impacting existing shared volumes, "grow" a shared volume, and temporarily remove nodes from a cluster without destroying the cluster. | Backlog |
+| 22.12-BETA.1 | NAS-118063 | SCALE Cluster growth/resize features | Currently, there is no way to grow or resize an existing cluster without the user destroying their cluster and starting with a new cluster. This issue looks to implement a solution using TrueCommand and TrueNAS API that provides the ability to have shared volumes that do not occupy all nodes in the cluster, add one or more nodes to a cluster without impacting existing shared volumes, "grow" a shared volume, and temporarily remove nodes from a cluster without destroying the cluster. | Backlog |
| 22.12-BETA.1 | NAS-118095 | Core dumps on ctdb at startup | Traceback received that indicates ctdb core-dumps when starting nodes after a fresh install. | Unscheduled |
-| 22.02.1 |NAS-116473 | Large Drive Count Issues | iX is investigating issues with booting SCALE on systems with more than 100 Disks. | Targeted 23.10-ALPHA.1 (Cobia) |
+| 22.02.4 | NAS-118894 | Web UI changes in Data Protection Section | Issues with spacing on the Cloud Sync Task and Scrub Task widgets between Next Run, Enabled, and State. The orange color for running tasks is misapplied; pending or aborted tasks display in orange. Running tasks should be blue. | 23.10-BETA.1 (Cobia) |
+| 22.02.1 | NAS-116473 | Large Drive Count Issues | iX is investigating issues with booting SCALE on systems with more than 100 Disks. | 23.10-ALPHA.1 (Cobia) |
+| 21.06-BETA.1 | NAS-111805 | Cannot configure static IP on HA B node | On an HA system with the NTB card removed and the interface working with a DHCP-assigned IP after a clean install, attempting to set a static IP and test the changes results in the network interfaces and default gateway disappearing from the serial console screen, and the system does not respond on a new IP address. After the test time expires, these network settings reappear and the system responds on the DHCP-assigned IP address. | 23.10-ALPHA.1 (Cobia) |
+
### Resolved Known Issues
{{< expand "Resolved Known Issues List" "v">}}
| Seen In | Resolved In | Key | Summary | Workaround |
|---------|-------------|-----|---------|------------|
-| 22.12.1 | NAS-120136 | App Collabora installation failed with error values.config: not a string values.config.extra_params not a string. | | 22.12.1 |
+| 22.12.2 | 22.12.2<br>23.10.ALPHA.1 | NAS-120623 | Only have one sudoers file entry as only the last one is taken into account | When setting user sudo options, set only one option, as the system only takes the last one set into account. For example, when setting sudo for replication tasks as the admin user, set only one of the sudo options, such as Allow all sudo commands with no password. |
+| 22.12.1 | 22.12.2<br>23.10.ALPHA.1 | NAS-120432 | Existing replication tasks updated from 22.12.0 cannot be edited in 22.12.1 | Replication tasks that can't be edited can be remade as a new replication task to work around this bug. |
+| 22.12.1 | 22.12.2<br>23.10.ALPHA.1 | NAS-120361 | TrueNAS Mini Enclosure view should not have Identify Drive button | The View Enclosure screen for TrueNAS Mini should not have the Identify Drive button on the disk information screen after selecting a drive on the image. |
+| 22.12.1 | 22.12.2<br>23.10.ALPHA.1 | NAS-120319 | Cannot Replicate from an Encrypted Dataset to an Unencrypted Dataset | When creating a replication task from an encrypted dataset to a remote system with an unencrypted dataset, and creating a new SSH connection, the task fails with a permissions error. |
+| 22.12.1 | 22.12.2<br>23.10.ALPHA.1 | NAS-120266 | Stopping VM Popup Does Not Go Away after VM Stops | After clicking Force Stop After Timeout, the Stopping VM dialog does not close after the VM stops. |
+| 22.12.1 | 22.12.2<br>23.10-ALPHA.1 (Cobia) | NAS-120243 | Using Non-Supported Characters while creating a Boot Environment Creates a CallError | SCALE allows you to save a new boot environment created with non-supported characters but it results in a call error. Do not use bracket ([]), brace ({}), pipe (\|), colon (:), and comma (,) characters in the Name field. |
+| 22.12.1 | 22.12.2 | NAS-120232 | Apps not starting after failover | With Apps deployed on an HA system, after failover the apps stick at deploying but never finish, and the system generates alerts in the UI. The issue is that pools are degraded after failover and docker is not able to get image details. |
+| 22.12.1 | 22.12.2<br>23.10-ALPHA.1 (Cobia) | NAS-120145 | The web UI isn't updating various parts on the screen (Jobs) | On an Enterprise HA system with some alerts, the Jobs screen shows tasks running and trying to gather the information for listed alerts. |
+| 22.12.1 | 22.12.2<br>23.10-ALPHA.1 (Cobia) | NAS-120136 | App Collabora installation failed with error values.config: not a string values.config.extra_params not a string. | |
+| 22.12.1 | 22.12.1<br>23.10-ALPHA.1 (Cobia) | NAS-120210 | Enclosure View does not Function Properly on TrueNAS Minis | View Enclosure only displays the top and bottom rows of slots as active and clickable, but the middle row is inactive even when loaded with drives. |
| 22.12.0 | 22.12.1<br>23.10-ALPHA.1 (Cobia) | NAS-119608 | Middleware halt after upgrade from Angelfish to Bluefin due to multiple iSCSI portals with the same IP address. | Before upgrading from SCALE Angelfish, remove any iSCSI portals with duplicate IP addresses. |
-| 22.12.0 | NAS-119390 | SMB Access Based Share Enumeration fails due to inaccessible Share ACL DB (share_inf.tdb) | SMB shares with Access Based Share Enumeration enabled and share ACLs are not browsable. Disabling Access Based Share Enumeration makes them browsable. Download the replacement Samba package attached to this ticket to correct this issue and make shares with Access Based Share Enumeration enabled browsable. | 22.12.1 |
+| 22.12.0 | 22.12.1 | NAS-119390 | SMB Access Based Share Enumeration fails due to inaccessible Share ACL DB (share_inf.tdb) | SMB shares with Access Based Share Enumeration enabled and share ACLs are not browsable. Disabling Access Based Share Enumeration makes them browsable. Download the replacement Samba package attached to this ticket to correct this issue and make shares with Access Based Share Enumeration enabled browsable. |
| 22.12.0 | 22.12.1<br>23.10-ALPHA.1 (Cobia) | NAS-119374 | Non-root administrative users cannot access installed VM or App console. | For Apps console access, log in to the UI as the **root** user. For VM console access, go to **Credentials > Local Users** and edit the non-root administrative user. In the **Auxiliary Groups** field, open the dropdown and select the **root** option. Click **Save**. |
| 22.12.0 | 22.12.1<br>23.10-ALPHA.1 (Cobia) | NAS-119233 | Validation error received when modifying HTTP/S Port Setting in the Web UI | A validation error can occur if using the iw.iso8 keyboard map where the system interprets digits "81" as the text "us". |
| 22.12.0 | 22.12.1<br>23.10-ALPHA.1 (Cobia) | NAS-119279 | Missing an option to promote dataset | After cloning a snapshot to a dataset, the option to promote that dataset is missing from the UI. |
diff --git a/content/SCALE/SCALETutorials/Apps/Customizing Advanced (Kubernetes) Settings/Configuring Host Path Safety Checks.md b/content/SCALE/SCALETutorials/Apps/Customizing Advanced (Kubernetes) Settings/Configuring Host Path Safety Checks.md
new file mode 100644
index 0000000000..4313300ef2
--- /dev/null
+++ b/content/SCALE/SCALETutorials/Apps/Customizing Advanced (Kubernetes) Settings/Configuring Host Path Safety Checks.md
@@ -0,0 +1,34 @@
+---
+title: "Configuring Host Path Validation"
+description: "This article provides information on host path validation in SCALE."
+weight: 10
+aliases:
+tags:
+- scaleapps
+---
+
+{{< toc >}}
+
+TrueNAS SCALE uses host path safety checks to ensure that host path volumes are secure when creating apps. We recommend creating datasets for applications that do not share the same host path as an SMB or NFS share.
+
+Since TrueNAS considers shared host paths non-secure, apps that use shared host paths (such as an SMB shared dataset) fail to deploy.
+
+## Using Shared Host Paths with Safety Checks Enabled
+
+If you group share and application data under a common dataset (such as *media*) where both use a path such as */tank/media/*, the application fails to deploy.
+
+You can still group shares and applications under *media*, but you must alter the path for shares and apps, such as */tank/media-shares* or */tank/media/shares/sharename*, and */tank/media-apps* or */tank/media/apps/appname*.
+
+The paths differ enough to pass host path validation and avoid issues that prevent application deployment.
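+
+As a minimal shell sketch of this layout (assuming a pool named *tank*; the dataset names are only examples), you could create the share and app host paths as separate datasets:
+
+```shell
+# Keep share data and app data on distinct host paths so host path
+# validation does not see the same path used by both an app and a share.
+zfs create tank/media          # parent dataset (skip if it already exists)
+zfs create tank/media/shares   # host path used by the SMB or NFS share
+zfs create tank/media/apps     # host path used by the application
+```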
+
+## Using Shared Host Paths with Safety Checks Disabled
+
+{{< hint warning >}}
+We do not recommend disabling host path safety checks since shared host paths are non-secure.
+{{< /hint >}}
+
+If you want apps to deploy in a shared host path, disable **Enable Host Path Safety Checks** in **Applications > Settings > Advanced Settings**.
+
+![AppsAdvancedSettingsKubernetesSettings](/images/SCALE/22.12/AppsAdvancedSettingsKubernetesSettings.png "Apps Advanced Settings")
+
+Disabling host path safety checks might be helpful if you intend to have an app running in a shared dataset. For example, you might have apps that perform virus detection or media management and want them to work on files in your shared dataset.
\ No newline at end of file
diff --git a/content/SCALE/SCALETutorials/Apps/Customizing Advanced (Kubernetes) Settings/_index.md b/content/SCALE/SCALETutorials/Apps/Customizing Advanced (Kubernetes) Settings/_index.md
new file mode 100644
index 0000000000..6da9e04483
--- /dev/null
+++ b/content/SCALE/SCALETutorials/Apps/Customizing Advanced (Kubernetes) Settings/_index.md
@@ -0,0 +1,13 @@
+---
+title: "Customizing Advanced (Kubernetes) Settings"
+geekdocCollapseSection: true
+weight: 55
+---
+
+The **Kubernetes Settings** screen allows users to customize network, system, and cluster settings for all apps in TrueNAS SCALE.
+
+![AppsAdvancedSettingsKubernetesSettings](/images/SCALE/22.12/AppsAdvancedSettingsKubernetesSettings.png "Apps Advanced Settings")
+
+## Article Summaries
+
+{{< children depth="2" description="true" >}}
\ No newline at end of file
diff --git a/content/SCALE/SCALETutorials/Apps/Docker.md b/content/SCALE/SCALETutorials/Apps/Docker.md
index 400893547f..3eb0759e63 100644
--- a/content/SCALE/SCALETutorials/Apps/Docker.md
+++ b/content/SCALE/SCALETutorials/Apps/Docker.md
@@ -14,24 +14,24 @@ tags:
SCALE includes the ability to run Docker containers using Kubernetes.
{{< expand "What is Docker?" "v" >}}
-Docker is an open platform for developing, shipping, and running applications. Docker enables the separation of applications from infrastructure through OS-level virtualization to deliver software in containers.
+Docker is an open-source platform for developing, shipping, and running applications. Docker enables the separation of applications from infrastructure through OS-level virtualization to deliver software in containers.
{{< /expand >}}
{{< expand "What is Kubernetes?" "v" >}}
-Kubernetes is a portable, extensible, open-source container-orchestration system for automating computer application deployment, scaling, and management with declarative configuration and automation.
+Kubernetes (K8s) is an open-source system for automating deployment, scaling, and managing containerized applications.
{{< /expand >}}
Always read through the Docker Hub page for the container you are considering installing so that you know all of the settings that you need to configure.
-To set up a Docker image, first determine if you want the container to use its own dataset. If yes, [create a dataset]({{< relref "DatasetsSCALE.md" >}}) for host volume paths before you click **Launch Docker Image**.
+To set up a Docker image, first determine if you want the container to use its own dataset. If yes, [create a dataset]({{< relref "DatasetsSCALE.md" >}}) for host volume paths before you click **Launch Docker Image**.
## Adding Custom Applications
{{< hint warning >}}
-If your application requires directory paths, specific datasets, or other storage arrangements, configure these before you starting the **Launch Docker Image** wizard.
+If your application requires directory paths, specific datasets, or other storage arrangements, configure these before you start the **Launch Docker Image** wizard.
-You cannot interrupt the configuration wizard and save settings to leave and go create data storage or directories in the middle of the process.
+You cannot exit the configuration wizard and save settings to create data storage or directories in the middle of the process. If you are unsure about any configuration settings, review the [Launch Docker Image UI reference article]({{< relref "LaunchDockerImageScreens.md" >}}) before creating a Docker image.
-To create directories in a dataset on SCALE use **System Settings > Shell** before you begin installing the container.
+To create directories in a dataset on SCALE, use **System Settings > Shell** before you begin installing the container.
{{< /hint >}}
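+
+For example, a minimal sketch of creating directories from **System Settings > Shell** before launching the wizard (the pool, dataset, and directory names here are placeholders):
+
+```
+mkdir -p /mnt/tank/appdata/mycontainer/config
+mkdir -p /mnt/tank/appdata/mycontainer/data
+```
+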
When you are ready to create a container, go to the **APPS** screen, select the **Available Applications** tab, and then click **Launch Docker Image**.
@@ -39,26 +39,26 @@ When you are ready to create a container, go to the **APPS** screen, select the
![AvailableApplicationsScreen](/images/SCALE/22.02/AvailableApplicationsScreen.png "Available Applications")
1. Fill in the **Application Name** and the current version information in **Version**.
- Add the Github repository URL in **Image Repository** for the docker container you are setting up.
+ Add the GitHub repository URL in **Image Repository** for the docker container.
![LaunchDockerImageAppNameVerContainerImage](/images/SCALE/22.12/LaunchDockerImageAppNameVerContainerImage.png "Launch Docker Image")
2. Enter the Github repository for the application you want to install in **Image Repository**.
If the application requires it, enter the correct setting values in **Image Tag** and select the **Image Pull Policy** to use.
- If the application requires it, enter the executables you want or need to run after starting the container in **Container Entrypoint**. Click **Add** for **Container CMD** to add a command, or for **Container Arg** to add a container argument.
+ If the application requires it, enter the executables you want or need to run after starting the container in **Container Entrypoint**. Click **Add** for **Container CMD** to add a command. Click **Add** for **Container Arg** to add a container argument.
![LaunchDockerImageAddContainerEntrypoints](/images/SCALE/22.12/LaunchDockerImageAddContainerEntrypoints.png "Add Container Entrypoints")
3. Enter the **Container Environment Variables**. Not all applications use environment variables.
- Check the Docker Hub for details on the application you want to install to verify which variables are required for that particular application.
+ Check the Docker Hub for details on the application you want to install to verify which variables that particular application requires.
![LaunchDockerImageAddContainerEnvironmentVariables](/images/SCALE/22.12/LaunchDockerImageAddContainerEnvironmentVariables.png "Add Container Environmental Variables")
4. Enter the networking settings.
a. Enter the external network interface to use.
- Click **Add** to display the **Host Interface** and **IPAM Type** fields required when setting up networking.
+ Click **Add** to display the **Host Interface** and **IPAM Type** fields required when configuring network settings.
![LaunchDockerImageAddNetworking](/images/SCALE/22.12/LaunchDockerImageAddNetworking.png "Add Networking")
@@ -67,49 +67,49 @@ When you are ready to create a container, go to the **APPS** screen, select the
![LaunchDockerImageAddDNS](/images/SCALE/22.12/LaunchDockerImageAddDNS.png "Add DNS Policy and Settings")
5. Enter the **Port Forwarding** settings.
- Click **Add** for each port you need to enter. TrueNAS SCALE requires setting all **Node Ports** above 9000.
+   Click **Add** for each port you need to enter. TrueNAS SCALE requires setting all **Node Ports** above 9000.
![LaunchDockerImageAddPortForwarding](/images/SCALE/22.12/LaunchDockerImageAddPortForwarding.png "Add Port Forwarding")
- Enter the required **Container Port** and **Node Port** settings, and select the protocol to use for these ports. Repeat for each port you need to add.
+   Enter the required **Container Port** and **Node Port** settings, and select the protocol for these ports. Repeat for each port you need to add.
6. Add the **Storage** settings.
- Click **Add** for each host path you need to enter for the application. Add any memory-backed or other volumes you want to use.
+ Click **Add** for each application host path. Add any memory-backed or other volumes you want to use.
![LaunchDockerImageAddStorage](/images/SCALE/22.12/LaunchDockerImageAddStorage.png "Add Storage Paths and Volumes")
You can add more volumes to the container later if they are needed.
-7. Enter any additional settings required for your application. Such as workload details or adding container settings for your application.
+7. Enter any additional settings required for your application, such as workload details or container settings.
- Select the **Update Strategy** to use. Default is to **Kill existing pods before creating new ones**.
+ Select the **Update Strategy** to use. The default is **Kill existing pods before creating new ones**.
Set any resource limits you want to impose on this application.
8. Enter or select any **Portal Configuration** settings to use.
-9. Click **Save** to complete the configuration and deployment. TrueNAS SCALE deploys the container.
- If correctly configured, the application widget displays on the **Installed Applications** screen.
+9. Click **Save** to deploy the container.
+ If you correctly configured the app, the widget displays on the **Installed Applications** screen.
When complete, the container becomes active. If the container does not automatically start, click **Start** on the widget.
Click on the App card reveals details.
-### Defining Container Settings
+### Defining Container Settings
Define any [commands and arguments](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) to use for the image.
These can override any existing commands stored in the image.
You can also [define additional environment variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) for the container.
Some Docker images can require additional environment variables.
-Be sure to check the documentation for the image you are trying to deploy and add any required variables here.
+Check the documentation for the image you are trying to deploy and add any required variables here.
### Defining Networking
To use the system IP address for the container, set Docker [Host Networking](https://docs.docker.com/network/host/).
-The container is not given a separate IP address and the container port number is appended to the end of the system IP address.
+TrueNAS does not give the container a separate IP address, and the container port number is appended to the end of the system IP address.
See the [Docker documentation](https://docs.docker.com/network/host/) for more details.
Users can create additional network interfaces for the container if needed.
-Users can also give static IP addresses and routes to new interface.
+Users can also give static IP addresses and routes to a new interface.
By default, containers use the DNS settings from the host system.
You can change the DNS policy and define separate nameservers and search domains.
@@ -117,10 +117,10 @@ See the Docker [DNS services documentation](https://docs.docker.com/config/conta
### Defining Port Forwarding List
Choose the protocol and enter port numbers for both the container and node.
-You can define multiple port forwards.
+You can define multiple ports to forward to the workload.
{{< hint info >}}
The node port number must be over **9000**.
-Make sure no other containers or system services are using the same port number.
+Ensure no other containers or system services are using the same port number.
{{< /hint >}}
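+
+For example, you could check from **System Settings > Shell** whether a candidate node port is already in use (a minimal sketch; *9001* is a placeholder port, and no output means the port is free):
+
+```
+ss -tln | grep 9001
+```
+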
### Defining Host Path Volumes
@@ -138,7 +138,7 @@ To view created container datasets, go to **Datasets** and expand the dataset tr
### Setting Up Persistent Volume Access
-Users developing applications should be mindful that if an application uses Persistent Volume Claims (PVC), those datasets are not mounted on the host, and therefore are not accessible within a file browser. This is upstream zfs-localpv behavior used for managing PVC(s)
+Users developing applications should be mindful that if an application uses Persistent Volume Claims (PVC), those datasets are not mounted on the host and therefore are not accessible within a file browser. This is the upstream zfs-localpv behavior used to manage PVCs.
If you want to consume or have file browser access to data that is present on the host, set up your custom application to use host path volumes.
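+
+To see which persistent volume claims exist on the system, you can list them from **System Settings > Shell** using the built-in k3s tooling (a sketch only; output varies with the apps installed):
+
+```
+k3s kubectl get pvc --all-namespaces
+```
+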
@@ -155,7 +155,7 @@ To copy a remote pod file locally:
## Accessing the Shell in an Active Container
-To access the shell in an active container, first identify the namespace and pod for the container.
+To access the shell in an active container, first identify the namespace and pod for the container.
In the Scale UI, go to **System Settings > Shell** to begin entering commands:
To view container namespaces: `k3s kubectl get namespaces`.
diff --git a/content/SCALE/SCALETutorials/Apps/InstallNetdataApp.md b/content/SCALE/SCALETutorials/Apps/InstallNetdataApp.md
new file mode 100644
index 0000000000..4fe321edbd
--- /dev/null
+++ b/content/SCALE/SCALETutorials/Apps/InstallNetdataApp.md
@@ -0,0 +1,109 @@
+---
+title: "Adding the Netdata app"
+description: "This article provides information on how to install and configure the Netdata app on TrueNAS SCALE."
+weight: 30
+tags:
+- scalenetdata
+- scaleapps
+- scaleadmin
+---
+
+{{< toc >}}
+
+
+## Before You Begin
+
+Before using SCALE to install the Netdata application, you need to configure TrueNAS SCALE storage for the Netdata application to use.
+
+Verify the [local administrator]({{< relref "ManageLocalUsersSCALE.md" >}}) account has sudo permissions enabled.
+
+Set up an account with Netdata if you do not already have one.
+
+## Installing Netdata on SCALE
+
+In this procedure you:
+
+1. Add the storage Netdata uses
+
+2. Install the Netdata app in SCALE
+
+### Adding Netdata Storage
+
+Netdata needs a primary dataset for the application (netdata).
+
+SCALE Bluefin creates the **ix-applications** dataset in the pool you set as the application pool when you first go to the **Apps** screen. This dataset is internally managed, so you cannot use it as the parent when you create the required Netdata dataset.
+
+To create the Netdata dataset, go to **Datasets**, select the dataset you want to use as the parent dataset, then click **Add Dataset** to [add a dataset]({{< relref "DatasetsScale.md" >}}). In this example, we create the *apnetdat* dataset under the root parent dataset **tank**.
+
+![InstallNetDAppDatasetsSCALE](/images/SCALE/22.12/InstallNetDAppDatasetsSCALE.png "Netdata Dataset")
+
+### Installing Netdata in SCALE
+
+Go to **Apps** to open the **Applications** screen and then click on the **Available Applications** tab.
+
+1. Set the pool SCALE applications use.
+
+ If you have not installed an application yet, SCALE opens the **Choose a pool for Apps** dialog. Select the pool where you created the Netdata dataset from the **Pools** dropdown list and then click **Choose** to set the pool for all applications.
+
+ After SCALE finishes configuring the system to use this pool, a confirmation dialog displays. Click **Close**.
+
+2. Locate the **netdata** widget and then click **Install** to open the **netdata** configuration wizard.
+
+ ![InstallNetDAppAvailAppSCALE](/images/SCALE/22.12/InstallNetDAppAvailAppSCALE.png "Available Applications")
+
+3. Enter a name for the app in **Application Name** and then click **Next**. This example uses *netdata*.
+
+   ![InstallNetDAppNameSCALE](/images/SCALE/22.12/InstallNetDAppNameSCALE.png "Add Netdata Application Name")
+
+4. For a basic installation, you can leave the default values in all settings.
+   TrueNAS populates **Node Port to use for Netdata UI** with the default port number of 20489. If you wish to add an image environment variable, click **Add** and enter a **Name** and **Value**.
+
+ ![InstallNetDAppServiceConfSCALE](/images/SCALE/22.12/InstallNetDAppServiceConfSCALE.png "Add Netdata Configuration Data")
+
+5. **Storage** for Netdata is configured by default without host path volumes enabled.
+
+   You can enable host paths for the Netdata Configuration, Cache, and Library volumes by selecting their respective checkboxes. You can also specify additional host path volumes by clicking **Add** next to **Extra Host Path Volumes**, then entering the location where the volume mounts inside the pod and the path to the host storage.
+
+ ![InstallNetDAppServiceConfHostPSCALE](/images/SCALE/22.12/InstallNetDAppServiceConfHostPSCALE.png "Add Netdata Storage Data")
+
+6. The default **DNS Configuration** should be sufficient for a basic installation. If you want to specify additional DNS options, click the **Add** button next to **DNS Options** to enter a DNS **Option Name** and **Option Value**.
+
+ ![InstallNetDAppAdvancedDNSSettingsSCALE](/images/SCALE/22.12/InstallNetDAppAdvancedDNSSettingsSCALE.png "Add Netdata DNS Configuration")
+
+7. The checkbox for **Enable Pod Resource limits** is not selected by default.
+
+ ![InstallNetDAppResourceSCALE](/images/SCALE/22.12/InstallNetDAppResourceSCALE.png "Add Netdata Resources Configuration")
+
+ When selected, additional fields display where you can specify CPU resource and memory limits.
+
+8. Click **Save**. The Netdata app installation process begins. Go to **Apps** > **Applications** and click on the **Installed Applications** tab. The **netdata** widget shows the status of *DEPLOYING*.
+
+ ![InstallNetDAppDeployingSCALE](/images/SCALE/22.12/InstallNetDAppDeployingSCALE.png "Netdata App Status")
+
+ Once installed, the **netdata** widget shows the status of *ACTIVE*. Clicking on the vertical ellipsis provides additional options to interact with the agent.
+
+ ![InstallNetDAppRunningOptionsSCALE](/images/SCALE/22.12/InstallNetDAppRunningOptionsSCALE.png "Netdata App Active")
+
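+If you prefer to watch the deployment from the command line, you can check the pods in the app namespace from **System Settings > Shell** (a sketch only; SCALE typically prefixes app namespaces with *ix-*, and the namespace here assumes the app name *netdata*):
+
+```
+k3s kubectl get pods -n ix-netdata
+```
+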
+## Using the Netdata Web Portal
+
+A successfully installed Netdata app displays in the **Installed Applications** tab with a status of **Active**.
+
+ ![InstallNetDAppRunningSCALE](/images/SCALE/22.12/InstallNetDAppRunningSCALE.png "Netdata App Installed")
+
+1. Click the **Web Portal** button to open the Netdata agent dashboard. The dashboard provides a system overview that displays CPU usage and other vital statistics.
+
+ ![InstallNetDAppNetDAgentCropSCALE](/images/SCALE/22.12/InstallNetDAppNetDAgentCropSCALE.png "Netdata Agent Dashboard")
+
+2. The Netdata agent displays a limited portion of the reporting capabilities of the Netdata app. Click the *Node View* tab to better understand the differences between the Netdata agent and Netdata Cloud, and evaluate your system reporting needs.
+
+ ![InstallNetDAppNetDAgentDashNoCloudSCALE](/images/SCALE/22.12/InstallNetDAppNetDAgentDashNoCloudSCALE.png "Netdata Agent Node View")
+
+3. To sign in to Netdata Cloud, click the **Sign in to Netdata Cloud!** button.
+
+ ![InstallNetDAppCloudSignUpSCALE](/images/SCALE/22.12/InstallNetDAppCloudSignUpSCALE.png "Netdata Cloud Sign In")
+
+4. To stop the Netdata app, return to the **Installed Applications** tab and click the **Stop** button on the **netdata** widget.
+
+ ![InstallNetDAppRunningSCALE](/images/SCALE/22.12/InstallNetDAppRunningSCALE.png "Stopping the Netdata App")
+
+{{< taglist tag="scaleapps" limit="10" >}}
diff --git a/content/SCALE/SCALETutorials/Apps/InstallNextCloudMedia.md b/content/SCALE/SCALETutorials/Apps/InstallNextCloudMedia.md
index e263be0b12..59683734a7 100644
--- a/content/SCALE/SCALETutorials/Apps/InstallNextCloudMedia.md
+++ b/content/SCALE/SCALETutorials/Apps/InstallNextCloudMedia.md
@@ -70,17 +70,21 @@ Go to **Apps** to open the **Applications** screen and then click on the **Avail
2. Locate the **nextcloud** widget and then click **Install** to open the **Nextcloud** configuration wizard.
- ![AvailableApplications](/images/SCALE/22.02/AvailableApplications.png "Available Applications")
+ ![AddNextcloudAvailableAppsSCALE](/images/SCALE/22.12/AddNextcloudAvailableAppsSCALE.png "Available Applications")
3. Enter a name for the app in **Application Name** and then click **Next**. This example uses *nextcloud*.
- ![AddNextcloudEnterApplicationName](/images/SCALE/22.12/AddNextcloudEnterApplicationName.png "Add Nextcloud Application Name")
+ ![AddNextcloudAppNameSCALE](/images/SCALE/22.12/AddNextcloudAppNameSCALE.png "Add Nextcloud Application Name")
-4. Enter a user name and password to use as a Nextcloud login on the **Nextcloud Configuration** settings screen, and then click **Next**.
+4. Enter a user name and password to use as a Nextcloud login on the **Nextcloud Configuration** settings screen.
For a basic installation you can leave the default values in all settings except **Username** and **Password**. This example uses *admin* as the user.
- TrueNAS populates **Nextcloud host** with the IP address for your server, **Nextcloud data directory** with the correct path, and **Node Port to use for Nextcloud** with the correct port number.
+ TrueNAS populates **Nextcloud host** with the IP address for your server and **Nextcloud data directory** with the correct path. The checkbox for **Install ffmpeg** is not selected by default. If selected, the utility *FFmpeg* is automatically installed when the container starts.
- ![AddNextcloudUsernameAndPassword](/images/SCALE/22.12/AddNextcloudUsernameAndPassword.png "Add Nextcloud User Name and Password")
+ ![AddNextcloudConfigurationSCALE](/images/SCALE/22.12/AddNextcloudConfigurationSCALE.png "Add Nextcloud User Name and Password")
+
+ TrueNAS populates the **Node Port to use for Nextcloud** field with the correct port number. To specify an optional **Nextcloud environment** name and value, click the **Add** button.
+
+ ![AddNextcloudEnvironmentSCALE](/images/SCALE/22.12/AddNextcloudEnvironmentSCALE.png "Add Nextcloud Environment")
5. Enter the storage settings for each of the four datasets created for Nextcloud.
diff --git a/content/SCALE/SCALETutorials/Apps/UsingApps.md b/content/SCALE/SCALETutorials/Apps/UsingApps.md
index d5ca389541..ae829f3818 100644
--- a/content/SCALE/SCALETutorials/Apps/UsingApps.md
+++ b/content/SCALE/SCALETutorials/Apps/UsingApps.md
@@ -20,24 +20,15 @@ The first time you open the **Applications** screen, the UI asks you to choose a
![AppsSettingsChoosePool](/images/SCALE/22.02/AppsSettingsChoosePool.png "Choosing a Pool for Apps")
-We recommend users keep the container use case in mind when choosing a pool.
-Select a pool that has enough space for all the application containers you intend to use.
-TrueNAS creates an *ix-applications* dataset on the chosen pool and uses it to store all container-related data. This is for internal use only.
-Set up a new dataset before installing your applications if you want to store your application data in an location separated from other storage on your system.
-For example, create the datasets for Nextcloud application, and if installing Plex, create the dataset(s) for Plex data storage needs.
+We recommend keeping the container use case in mind when choosing a pool.
+Select a pool with enough space for all the application containers you intend to use.
+TrueNAS creates an *ix-applications* dataset on the chosen pool and uses it to store all container-related data. The dataset is for internal use only.
+Set up a new dataset before installing your applications if you want to store your application data in a location separate from other storage on your system.
+For example, create the datasets for the Nextcloud application, and, if installing Plex, create the dataset(s) for Plex data storage needs.
-{{< hint warning >}}
-Since TrueNAS considers shared host paths non-secure, apps that use shared host paths (such as those services like SMB are using) fail to deploy.
-Best practice is to create datasets for applications that do not share the same host path as an SMB or NFS share.
-If you want apps to deploy in a shared host path, either disable **Enable Host Path Safety Checks** in **Applications > Settings > Advanced Settings** or alter the path for shares and applications.
-For example, if you want to group the share and application data under a common dataset such as *media*, where both use a path such as */tank/media/*, and you want to enable host path validation, this can result in the application not moving past the deployment stage.
-You can still group shares and applications under *media* but alter the path for shares and apps, such as */tank/media-shares* or */tank/media/shares/sharename* and */tank/media-apps* or */tank/media/apps/appname*.
-This differs enough to use host path validation and avoid issues that prevent application deployment.
-{{< /hint >}}
+{{< include file="/content/_includes/AppsVMsNoHTTPS.md" type="page" >}}
-You can find additional options for configuring general network interfaces and IP addresses for application containers in **Apps > Settings > Advanced Settings**.
-
-![AppsAdvancedSettingsKubernetesSettings](/images/SCALE/22.12/AppsAdvancedSettingsKubernetesSettings.png "Apps Advanced Settings")
+![SystemSettingsGUISettingsSCALE](/images/SCALE/22.12/SystemSettingsGUISettingsSCALE.png "General System Settings")
## Deploying Official Applications
@@ -45,7 +36,7 @@ Official applications are pre-configured and only require a name during deployme
![AppAddPlexApplicationName](/images/SCALE/22.12/AppAddPlexApplicationName.png "Plex App Wizard Application Name")
-A button to open the application web interface displays when the container is deployed and active.
+A button to open the application web interface displays after the container deploys and becomes active.
![AppsInstalledPlexWidgetActive](/images/SCALE/22.12/AppsInstalledPlexWidgetActive.png "Plex App: Active")
@@ -64,7 +55,7 @@ Official applications use the default system-level Kubernetes Node IP settings i
You can change the Kubernetes Node IP to assign an external interface to your apps, separate from the web UI interface.
-We recommend using the default Kubernetes Node IP (0.0.0.0) to ensure apps function properly.
+We recommend using the default Kubernetes Node IP (0.0.0.0) to ensure apps function correctly.
## Deploying Custom Application Containers
@@ -74,7 +65,7 @@ To deploy a custom application container in the SCALE web interface, go to **App
Custom applications use the system-level Kubernetes Node IP settings by default. You can assign an external interface to custom apps by setting one on the **Networking** section of the **Launch Docker Image** form.
-Unless you need to run an application separately from the Web UI, we recommend using the default Kubernetes Node IP (0.0.0.0) to ensure apps function properly.
+Unless you need to run an application separately from the Web UI, we recommend using the default Kubernetes **Node IP** (0.0.0.0) to ensure apps function correctly.
## Upgrading Apps
@@ -88,7 +79,7 @@ To upgrade multiple apps, select the checkbox in the widget of each app you want
## Deleting Apps
-To delete an application, click **Stop** and wait for the status to change to stopped.
+To delete an application, click **Stop** and wait for the status to show stopped.
Click the in an app widget to see the list of app options, then select **Delete**.
![DeleteStoppedApp](/images/SCALE/22.12/DeleteStoppedApp.png "Delete App in Stopped State")
@@ -99,12 +90,12 @@ If you attempt to delete the application before it fully deploys, a dialog opens
Click **Confirm** and then **OK** to delete the application.
-If you only select **Confirm** to delete the selected application and you do not select **Delete docker images used by the app** the docker image remains on the list of images on the **Manage Docker Images** screen.
+If you only select **Confirm** to delete the application and do not select **Delete docker images used by the app**, the docker image remains on the image list on the **Manage Docker Images** screen.
To remove the image, go to **Manage Docker Images**, click the and then **Delete**.
![AppsManageDockerImageDelete](/images/SCALE/22.12/AppsManageDockerImageDelete.png "Delete Docker Image")
-Click **Confirm** and **Force delete** then click **Delete** to remove the docker image from the system.
+Click **Confirm** and **Force delete**, then click **Delete** to remove the docker image from the system.
{{< taglist tag="scaleapps" limit="10" >}}
diff --git a/content/SCALE/SCALETutorials/Credentials/ManageLocalGroups.md b/content/SCALE/SCALETutorials/Credentials/ManageLocalGroups.md
index a08edfe441..c5ff7df2f5 100644
--- a/content/SCALE/SCALETutorials/Credentials/ManageLocalGroups.md
+++ b/content/SCALE/SCALETutorials/Credentials/ManageLocalGroups.md
@@ -18,21 +18,20 @@ If the network uses a directory service, import the existing account information
To see saved groups, go to **Credentials > Local Groups**.
-![LocalGroupsSCALE](/images/SCALE/22.02/LocalGroupsSCALE.png "Local Groups Built-In List")
+![GroupsListedSCALE](/images/SCALE/22.12/GroupsListedSCALE.png "Local Groups Built-In List")
By default, TruNAS hides the system built-in groups.
-To see built-in groups, click settings **Toggle Built-In Groups** icon. The **Show Built-In Groups** dialog opens. Click **Show**.
-Click settings **Toggle Built-In Groups** icon again to open the **Hide Built-In Groups** dialog. Click **Hide** to show only non-built-in groups on the system.
+To see built-in groups, click the **Show Built-In Groups** toggle. The toggle turns blue, and all built-in groups display. Click the **Show Built-In Groups** toggle again to show only non-built-in groups on the system.
## Adding a New Group
To create a group, go to **Credentials > Local Groups** and click **Add**.
-![AddGroupSCALE](/images/SCALE/22.02/AddGroupSCALE.png "Add Group")
+![AddGroupGIDConfigSCALE](/images/SCALE/22.12/AddGroupGIDConfigSCALE.png "Add Group")
-Enter a unique number for the group ID in **GID** that TrueNAS uses to identify a Unix group. Enter a number above 1000 for a group with user accounts or for a system service enter the default port number for the service as the GID. Enter a name for the group. The group name cannot begin with a hyphen (-) or contain a space, tab, or any of these characters: colon (:), plus (+), ampersand (&), hash (#), percent (%), carat (^), open or close parentheses ( ), exclamation mark (!), at symbol (@), tilde (~), asterisk (*), question mark (?) greater or less than (<) (>), equal ). You can only use the dollar sign ($) as the last character in a user name.
+Enter a unique number for the group ID in **GID**. TrueNAS uses the GID to identify a Unix group. Enter a number above 1000 for a group with user accounts or, for a system service, enter the default port number for the service as the GID. Enter a name for the group. The group name cannot begin with a hyphen (-) or contain a space, tab, or any of these characters: colon (:), plus (+), ampersand (&), hash (#), percent (%), caret (^), open or close parenthesis ( ), exclamation mark (!), at symbol (@), tilde (~), asterisk (*), question mark (?), greater or less than (<) (>), or equal sign (=). You can only use the dollar sign ($) as the last character in a group name.
-If giving this group administration permissions, select **Permit Sudo**.
+The **Allowed sudo commands**, **Allow all sudo commands**, **Allowed sudo commands with no password**, and **Allow all sudo commands with no password** options allow group members to act as the system administrator using the [sudo](https://www.sudo.ws/) command. Leave these options disabled for better security.
To allow Samba permissions and authentication to use this group, select **Samba Authentication**.
@@ -42,7 +41,7 @@ To allow more than one group to have the same group ID (not recommended), select
To manage group membership, go to **Credentials > Local Groups**, expand the group entry, and click **Members** to open the **Update Members** screen.
-![LocalGroupsUpdateMembersSCALE](/images/SCALE/22.02/LocalGroupsUpdateMembersSCALE.png "Update Members Screen")
+![GroupsManageMembersSCALE](/images/SCALE/22.12/GroupsManageMembersSCALE.png "Update Members Screen")
To add user accounts to the group, select users and then click .
Select **All Users** to move all users to the selected group, or select multiple users by holding Ctrl while clicking each entry.
diff --git a/content/SCALE/SCALETutorials/Credentials/ManageLocalUsersSCALE.md b/content/SCALE/SCALETutorials/Credentials/ManageLocalUsersSCALE.md
index 35d5508452..4c3063a0c3 100644
--- a/content/SCALE/SCALETutorials/Credentials/ManageLocalUsersSCALE.md
+++ b/content/SCALE/SCALETutorials/Credentials/ManageLocalUsersSCALE.md
@@ -87,7 +87,7 @@ Setting **Disable Password** toggle to active (blue toggle) disables several opt
### Configuring User ID and Groups Settings
-![AddUser-UserIDAndGroupSettings](/images/SCALE/22.12/AddUser-UserIDAndGroupSettings.png "Add User User Id and Groups Settings")
+![AddUser-UserIDAndGroupSettings](/images/SCALE/22.12/AddUser-UserIDAndGroupSettings.png "Add User ID and Groups Settings")
Next, you must set a user ID (UID).
TrueNAS suggests a user ID starting at **1000**, but you can change it if you wish.
@@ -96,14 +96,16 @@ New users can be created with a UID of **0**.
By default, TrueNAS creates a new primary group with the same name as the user. This happens when the **Create New Primary Group** toggle is enabled.
To add the user to an existing primary group instead, disable the **Create New Primary Group** toggle and search for a group in the **Primary Group** field.
-You can add the user to more groups using the **Auxiliary Groups** drop-down list.
+You can add the user to more groups using the **Auxiliary Groups** drop-down list.
### Configuring Directories and Permissions Settings
+![AddUserHomeDirPermSCALE](/images/SCALE/22.12/AddUserHomeDirPermSCALE.png "Add User Home Directory")
+
When creating a user, the home directory path is set to /nonexistent, which does not create a home directory for the user.
-To set a user home directory, select a path using the file browser.
+To set a user home directory, enter a path in **Home Directory** or select it using the file browser.
If the directory exists and matches the user name, TrueNAS sets it as the user home directory.
-When the path does not end with a sub-directory matching the user name, TrueNAS creates a new sub-directory.
+When the path does not end with a sub-directory matching the user name, TrueNAS creates a new sub-directory if the **Create Home Directory** checkbox is enabled.
TrueNAS shows the path to the user home directory when editing a user.
You can set the home directory permissions directly under the file browser.
@@ -111,7 +113,7 @@ You cannot change TrueNAS default user account permissions.
### Configuring Authentication Settings
-![AddUserDirPermAuthSCALE](/images/SCALE/22.12/AddUserDirPermAuthSCALE.png "Add User Directories, Permissions and Authentication Settings")
+![AddUserHomeDirAuthSCALE](/images/SCALE/22.12/AddUserHomeDirAuthSCALE.png "Add User Directories, Permissions and Authentication Settings")
You can assign a public SSH key to a user for key-based authentication by entering or pasting the *public* key into the **Authorized Keys** field.
diff --git a/content/SCALE/SCALETutorials/Credentials/RootlessLogin.md b/content/SCALE/SCALETutorials/Credentials/RootlessLogin.md
index 3ef8752598..f854bde16e 100644
--- a/content/SCALE/SCALETutorials/Credentials/RootlessLogin.md
+++ b/content/SCALE/SCALETutorials/Credentials/RootlessLogin.md
@@ -1,27 +1,26 @@
---
-title: "Using Rootless Log In"
-description: "This article explains the rootless log in functions, provides instructions on properly configuring SSH, working with the admin and root user passwords, and other functions to be aware of."
+title: "Using Rootless Login"
+description: "This article explains the rootless login functions, provides instructions on properly configuring SSH, working with the admin and root user passwords, and other functions to be aware of."
weight: 5
aliases:
tags:
-- scalelogin
- scaleadmin
- scale2fa
- scalessh
+- scalemigrate
---
-The initial implementation of TrueNAS SCALE rootless login still permits users to use the root user but encourages users to create the local administrator account when first [installing SCALE]({{< relref "InstallingSCALE.md" >}}).
-Some screens and UI settings might still refer to the *root* account.
-These references are updating to point to the **administrator account** in future release of SCALE.
+The initial implementation of TrueNAS SCALE rootless login permits logging in as the root user but encourages users to create the local administrator account when first [installing SCALE]({{< relref "InstallingSCALE.md" >}}).
-{{< include file="/_includes/RootLoginDeprecatedSCALE.md" type="page" >}}
+{{< include file="/content/_includes/RootLoginDeprecatedSCALE.md" type="page" >}}
If migrating from CORE to SCALE, when [first logging into SCALE]({{< relref "FirstTimeLogin.md" >}}) as the root user, you are advised to create the administrator account.
-All users should [create the local administrator account]({{< relref "ManageLocalUsersSCALE.md" >}}) and stop using root.
+All users should [create the local administrator account]({{< relref "ManageLocalUsersSCALE.md" >}}) and use this account for web interface access.
+To improve system security, disable the root account password after creating the local administrator account. This restricts root access to the system.
{{< hint info >}}
-Some screens and UI settings still refer to the root account. These references should change to the administrator account in future release of SCALE.
+Some UI screens and settings still refer to the root account, but these references will change to the administrator account in a future release of SCALE.
{{< /hint >}}
## About Admin and Root Logins and Passwords
@@ -29,34 +28,48 @@ Some screens and UI settings still refer to the root account. These references s
At present, SCALE has both the root and local administrator user logins and passwords.
If properly set up, the local administrator (admin) account performs the same functions and has the same access the root user has.
-The root user is no longer the default user so you must add a password to use the root user.
+The root user is no longer the default user, so you must add and enable a password to use the root account.
-### Disabling Passwords
+### Disabling Root and Admin User Passwords
-To security-harden your system, disable the root user password but do not disable the admin account password at the same time.
-If you have both the root and admin account passwords disabled and the session times out you can still log into the system using a one-time sign-in screen.
+As a security measure, the root user password is disabled when you create the admin user during installation.
+Do not disable the admin account and root passwords at the same time.
+If both root and admin account passwords become disabled at the same time and the web interface session times out, a one-time sign-in screen allows access to the system.
![ResetRootAccountPasswordSignIn](/images/SCALE/22.12/ResetRootAccountPasswordSignIn.png "Reset Root Password Sign-In Screen")
-Enter and confirm a password to gain access to the UI, but then immediately go to **Credentials > Local Users** to enable either the root or admin password.
-This password is not saved as a new password and it does not enable the admin or root passwords.
-It only gives temporary sign in access if you lock yourself out of the box.
+Enter and confirm a password to gain access to the UI. After logging in, immediately go to **Credentials > Local Users** to enable either the root or admin password before the session times out again.
+This temporary password is not saved as a new password and does not enable the admin or root passwords; it only provides one-time access to the UI.
-If you disable the password for UI login, it is also disabled for SSH access.
+Disabling a password for UI login also disables it for SSH access.
-## Accessing the System Using SSH
+## Accessing the System Through an SSH Session
-Use the administrator account when using SSH to access the system.
+To enable SSH access to the system as the root or admin user:
-To enable SSH access, select the **Log in as Root with Password** on the **System Settings > Services > SSH** screen.
-Selecting this option to allows administrator account access to the system with the admin or root accounts.
+1. Configure the SSH service.
-If you want to SSH into the system as the root, you must create and enable a password for the root user.
-If the root password password is disabled in the UI you cannot use it to gain SSH access to the system.
+ a. Go to **System Settings > Services**, then select **Configure** for the SSH service.
+
+ b. Select **Log in as Root with Password** to enable the root user to sign in as root.
+
+ Select **Log in as Admin with Password** and **Allow Password Authentication** to enable the admin user to sign in as admin. Select both options.
+
+ c. Click **Save** and restart the SSH service.
+
+2. Configure or verify the user configuration options to allow SSH access.
+
+   If you want to SSH into the system as the root user, you must create and enable a password for the root user.
+   If the root password is disabled in the UI, you cannot use it to gain SSH access to the system.
+
+   To allow the admin user to issue commands in an SSH session, edit the admin user and select which sudo options are allowed.
+
+ Select **Allow all sudo commands with no password**.
+   You might see a prompt in the SSH session to enter a password the first time you enter a sudo command, but you do not see this prompt again in the same session.
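+
+After completing these steps, an SSH session as the admin user might look like the following sketch (the hostname and the command shown are placeholders):
+
+```
+ssh admin@truenas.local
+sudo zpool status
+```
+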
## Two-Factor Authentication (2FA) and Administrator Account Log In
-To use two-factor authentication with the administrator account (root or admin user) first configure and enable SSH service to allow SSH access, then [configure two-factor authentication]({{< relref "2faSCALE.md" >}}).
+To use two-factor authentication with the administrator account (root or admin user), first configure and enable SSH service to allow SSH access, then [configure two-factor authentication]({{< relref "2faSCALE.md" >}}).
If you have the root user configured with a password and enable it, you can SSH into the system with the root user. Security best practice is to disable the root user password and only use the local administrator account.
## Rootless Log In and TrueCommand
diff --git a/content/SCALE/SCALETutorials/DataProtection/Replication/AddReplicationSCALE.md b/content/SCALE/SCALETutorials/DataProtection/Replication/AddReplicationSCALE.md
deleted file mode 100644
index aea27056cb..0000000000
--- a/content/SCALE/SCALETutorials/DataProtection/Replication/AddReplicationSCALE.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: "Adding Replication Tasks"
-description: "This article provides an overview of setting up simple replication and a video demonstration of setting up a remote replication. "
-weight: 10
-aliases: /scale/scaletutorials/dataprotection/addreplicationscale/
-tags:
- - scalereplication
- - scalebackup
----
-
-{{< toc >}}
-
-
-To streamline creating simple replication tasks use the **Replication Wizard**. The wizard assists with creating a new SSH connection and automatically creates a periodic snapshot task for sources that have no existing snapshots.
-
-## Before You Begin
-
-Configure SSH in TrueNAS before creating a remote replication task. This ensures that new snapshots are regularly available for replication.
-
-
-## Setting Up Simple Replications
-
-{{< expand "Process Summary" "v" >}}
-
-* **Data Protection > Replication Tasks**
- * Choose sources for snapshot replication.
- * Remote sources require an SSH connection.
- * TrueNAS shows the number of snapshots available to replicate.
- * Define the snapshot destination.
- * A remote destination requires an SSH connection.
- * Choose a destination or define it manually by typing a path.
- * Adding a new name at the end of the path creates a new dataset.
- * Choose replication security.
- * iXsystems always recommend replication with encryption.
- * Disabling encryption is only meant for absolutely secure and trusted destinations.
- * Schedule the replication.
- * You can schedule standardized presets or a custom-defined schedule.
- * Running once runs the replication immediately after creation.
- * Task is saved, and you can rerun or edit it.
- * Choose how long to keep the replicated snapshots.
-{{< /expand >}}
-
-This video tutorial presents a simple example of setting up remote replication:
-
-{{< embed-video name="scaleangelfishreplication" >}}
-
diff --git a/content/SCALE/SCALETutorials/DataProtection/Replication/LocalReplicationSCALE.md b/content/SCALE/SCALETutorials/DataProtection/Replication/LocalReplicationSCALE.md
index 6850fb4498..1e8cddee53 100644
--- a/content/SCALE/SCALETutorials/DataProtection/Replication/LocalReplicationSCALE.md
+++ b/content/SCALE/SCALETutorials/DataProtection/Replication/LocalReplicationSCALE.md
@@ -1,87 +1,63 @@
---
title: "Setting Up a Local Replication Task"
-description: "This article provides instructions on adding a replication task on the same TrueNAS system."
+description: "This article provides instructions on adding a replication task using different pools or datasets on the same TrueNAS system."
weight: 20
aliases:
tags:
- - scalereplication
- - scalebackup
+- scalereplication
+- scalebackup
+- scalesnapshots
---
{{< toc >}}
-## Local Replication
+## Using Local Replication
-### Process Summary
+A local replication creates a ZFS snapshot and saves it to another location on the same TrueNAS SCALE system, using a different pool, dataset, or zvol.
+This allows users with only one system to take quick backups or snapshots of their data.
+In this scenario, create a dataset on the same pool to store the replication snapshots. You can create and use a zvol for this purpose.
+If configuring local replication on a system with more than one pool, create a dataset to use for the replicated snapshots on one of those pools.
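+
+Conceptually, a local replication task automates the equivalent of taking a ZFS snapshot and sending it to another dataset on the same system, as in this sketch (the pool, dataset, and snapshot names are placeholders):
+
+```
+zfs snapshot tank/mydata@manual-backup
+zfs send tank/mydata@manual-backup | zfs receive backup/mydata
+```
+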
-{{< expand "Process Summary" "v" >}}
+While we recommend regularly scheduled replications to a remote location as the optimal backup scenario, local replication is useful when no remote backup locations are available, or when a disk is in immediate danger of failure.
-* Requirements: Storage pools and datasets created in **Storage > Pools**.
+{{< include file="/content/_includes/ZvolSpaceWarning.md" type="page" >}}
-* Go to **Data Protection > Replication Tasks** and click **ADD**
- * Choose **Sources**
- * Set the source location to the local system
- * Use the file browser or type paths to the sources
- * Define a **Destination** path
- * Set the destination location to the local system
- * Select or manually define a path to the single destination location for the snapshot copies.
- * Set the **Replication schedule** to run once
- * Define how long the snapshots are stored in the **Destination**
- * Clicking **START REPLICATION** immediately snapshots the chosen sources and copies those snapshots to the destination
- * Dialog might ask to delete existing snapshots from the destination. Be sure that all important data is protected before deleting anything.
-* Clicking the task **State** shows the logs for that replication task.
-{{< /expand >}}
+With the implementation of rootless login and the admin user, setting up replication tasks as an admin user differs slightly from setting up replication tasks when logged in as root.
-### Quick Local Backups with the Replication Wizard
+{{< include file="/content/_includes/ReplicationIntroSCALE.md" type="page" >}}
-TrueNAS provides a wizard for quickly configuring different simple replication scenarios.
+## Setting Up a Simple Replication Task Overview
+This section provides an overview of setting up a replication task, whether local or remote.
+It also covers the related steps you should take before configuring a replication task.
-![TasksReplicationTasksAdd](/images/CORE/12.0/TasksReplicationTasksAdd.png "New Replication Task")
+{{< include file="/content/_includes/BasicReplicationProcess.md" type="page" >}}
-While we recommend regularly scheduled replications to a remote location as the optimal backup scenario, the wizard can very quickly create and copy ZFS snapshots to another location on the same system.
-This is useful when no remote backup locations are available, or when a disk is in immediate danger of failure.
+## Configuring a Local Replication Task
-The only things you need before creating a quick local replication are datasets or zvols in a storage pool to use as the replication source and (preferably) a second storage pool to use for storing replicated snapshots.
-You can set up the local replication entirely in the **Replication Wizard**.
+The replication wizard allows users to create and copy ZFS snapshots to another location on the same system.
-To open the **Replication Wizard**, go to **Data Protection > Replication Tasks** and click **ADD**.
-Set the source location to the local system and pick which datasets to snapshot.
-The wizard takes new snapshots of the sources when no existing source snapshots are found.
-Enabling **Recursive** replicates all snapshots contained within the selected source dataset snapshots.
-Local sources can also use a naming schema to identify any custom snapshots to include in the replication.
-A naming schema is a collection of [strftime](https://www.freebsd.org/cgi/man.cgi?query=strftime) time and date strings and any identifiers that a user might have added to the snapshot name.
+If you have an existing replication task, you can select it on the **Load Previous Replication Task** dropdown list to load the configuration settings for that task into the wizard, and then make changes such as assigning a different destination, schedule, or retention lifetime.
+Saving changes to the configuration creates a new replication task without altering the task you loaded into the wizard.
-![TasksReplicationTasksAddLocalSourceSCALE](/images/SCALE/RepWizardLocalSourceSCALE.png "Replication with Local Source")
+{{< include file="/content/_includes/ReplicationCreateDatasetAndAdminHomeDirSteps.md" type="page" >}}
-Set the destination to the local system and define the path to the storage location for replicated snapshots.
-When manually defining the destination, be sure to type the full path to the destination location.
+3. Go to **Data Protection** and click **Add** on the **Replication Tasks** widget to open the **Replication Task Wizard**. Configure the following settings:
+
+ ![CreateLocalReplicationTask](/images/SCALE/22.12/CreateLocalReplicationTask.png "New Local Replication Task")
+
+ a. Select **On this System** on the **Source Location** dropdown list.
+ Browse to the location of the pool or dataset you want to replicate and select it so it populates **Source** with the path.
+   Selecting **Recursive** includes snapshots of all child datasets contained within the selected source dataset.
-![TasksReplicationTasksAddLocalDestSCALE](/images/SCALE/RepWizardLocalDestSCALE.png "Local Destination")
+ b. Select **On this System** on the **Destination Location** dropdown list.
+ Browse to the location of the pool or dataset you want to use to store replicated snapshots and select it so it populates the **Destination** with the path.
-TrueNAS suggests a default name for the task based on the selected source and destination locations, but you can type your own name for the replication.
-You can load any saved replication task into the wizard to make creating new replication schedules even easier.
+   c. (Optional) Enter a name for the replication task in **Task Name**.
+   SCALE populates this field with a default name using the source and destination paths separated by a hyphen, but this default can make locating the snapshot in the destination dataset a challenge.
+   To make it easier to find the snapshot, give the task a name that is easy for you to identify, for example, *dailyfull* for a full file system snapshot taken daily.
+
+{{< include file="/content/_includes/ReplicationScheduleAndRetentionSteps.md" type="page" >}}
-You can define a specific schedule for this replication or choose to run it immediately after saving the new task.
-TrueNAS saves unscheduled tasks in the replication task list. You can run saved tasks manually or edit them later to add a schedule.
-
-The destination lifetime is how long copied snapshots are stored in the destination before they are deleted.
-We usually recommend defining a snapshot lifetime to prevent storage issues.
-Choosing to keep snapshots indefinitely can require you to manually clean old snapshots from the system if or when the destination fills to capacity.
-
-![TasksReplicationTasksAddLocalSourceLocalDestCustomLife](/images/CORE/12.0/TasksReplicationTasksAddLocalSourceLocalDestCustomLife.png "Custom Lifetime")
-
-Clicking **START REPLICATION** saves the new task and immediately attempts to replicate snapshots to the destination.
-When TrueNAS detects that the destination already has unrelated snapshots, it asks to delete the unrelated snapshots and do a full copy of the new snapshots.
-This can delete important data, so ensure you can delete any existing snapshots or back them up in another location.
-
-TrueNAS adds the simple replication to the replication task list and shows that it is currently running.
-Clicking the task state shows the replication log with an option to download the log to your local system.
-
-![TasksReplicationTasksLogSCALE](/images/SCALE/RepLogSCALE.png "Replication Log")
-
-To confirm that snapshots are replicated, go to **Storage > Snapshots > Snapshots** and verify the destination dataset has new snapshots with correct timestamps.
-
-![TasksReplicationTasksLocalSnapshotsSCALE](/images/SCALE/RepLocalSnaphots.png "Local Replicated Snapshots")
-
-{{< taglist tag="scalereplication" limit="10" >}}
\ No newline at end of file
+{{< taglist tag="scalereplication" limit="10" >}}
+{{< taglist tag="scalesnapshots" limit="10" title="Related Snapshot Articles" >}}
\ No newline at end of file
diff --git a/content/SCALE/SCALETutorials/DataProtection/Replication/RemoteReplicationSCALE.md b/content/SCALE/SCALETutorials/DataProtection/Replication/RemoteReplicationSCALE.md
index d0d469434f..b2a65cb749 100644
--- a/content/SCALE/SCALETutorials/DataProtection/Replication/RemoteReplicationSCALE.md
+++ b/content/SCALE/SCALETutorials/DataProtection/Replication/RemoteReplicationSCALE.md
@@ -1,112 +1,91 @@
---
title: "Setting Up a Remote Replication Task"
-description: "This article provides instructions on adding a replication task with a remote system (TrueNAS or other)."
-weight: 60
+description: "This article provides instructions on adding a replication task with a remote system."
+weight: 30
aliases:
tags:
- - scalereplication
- - scalebackup
+- scalereplication
+- scalebackup
---
{{< toc >}}
-## Creating a Remote Replication Task
-
-To create a new replication, go to **Data Protection > Replication Tasks** and click **ADD**.
-
-![TasksReplicationTasksAddSCALE](/images/SCALE/RepWhatWhereSCALE.png "Add new Replication Task")
-
-You can load any saved replication to prepopulate the wizard with that configuration.
-Saving changes to the configuration creates a new replication task without altering the task you loaded into the wizard.
-This saves some time when creating multiple replication tasks between the same two systems.
-
-### Set up the Sources
-
-{{< expand "Source" "v" >}}
-Start by configuring the replication sources.
-Sources are the datasets or zvols with snapshots to use for replication.
-Choosing a remote source requires selecting an SSH connection to that system.
-Expanding the directory browser shows the current datasets or zvols that are available for replication.
-You can select multiple sources or manually type the names into the field.
-
-TrueNAS shows how many snapshots are available for replication.
-We recommend you manually snapshot the sources or create a periodic snapshot task *before* creating the replication task.
-However, when the sources are on the local system and don't have any existing snapshots, TrueNAS can create a basic periodic snapshot task and snapshot the sources immediately before starting the replication. Enabling **Recursive** replicates all snapshots contained within the selected source dataset snapshots.
-
-![TasksReplicationTasksAddSourceSCALE](/images/SCALE/RepAddSourceSCALE.png "Choosing a Local Source")
-
-Local sources can also use a naming schema to identify any custom snapshots to include in the replication.
-Remote sources require entering a *snapshot naming schema* to identify the snapshots to replicate.
-A naming schema is a collection of [strftime](https://www.freebsd.org/cgi/man.cgi?query=strftime) time and date strings and any identifiers that a user might have added to the snapshot name.
-{{< /expand >}}
-
-### Configure the Destination
-
-{{< expand "Destination" "v" >}}
-The destination is where replicated snapshots are stored.
-Choosing a remote destination requires an SSH connection to that system.
-Expanding the directory browser shows the current datasets that are available for replication.
-You can select a destination dataset or manually type a path in the field.
-You cannot use zvols as a remote replication destination.
-Adding a name to the end of the path creates a new dataset in that location.
+## Using Remote Replication
-![TasksReplicationTasksAddRemoteDestSCALE](/images/SCALE/RepAddDestinationSCALE.png "Replication with Remote Destination")
+TrueNAS SCALE replication allows users to create one-time or regularly scheduled snapshots of data stored in pools, datasets, or zvols on their SCALE system as a way to back up stored data.
+When properly configured and scheduled, remote replication takes regular snapshots of storage pools or datasets and saves them in the destination location on another system.
-{{< hint info >}}
-To use encryption when replicating data click the **Encryption** box. After selecting the box these additional encryption options become available:
+Remote replication can occur between your TrueNAS SCALE system and another TrueNAS system (SCALE or CORE) that you want to use to store your replicated snapshots.
-* **Encryption Key Format** allows the user to choose between a hex (base 16 numeral) or passphrase (alphanumeric) style encryption key.
-* **Store Encryption key in Sending TrueNAS database** allows the user to either store the encryption key in the sending TrueNAS database (box checked) or choose a temporary location for the encryption key that decrypts replicated data (box unchecked)
-{{< /hint >}}
+With the implementation of rootless login and the admin user, setting up replication tasks as an admin user differs slightly from setting up replication tasks when logged in as root. Setting up remote replication while logged in as the admin user requires selecting **Use Sudo For ZFS Commands**.
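+
+Conceptually, the replication task runs ZFS send and receive commands over the SSH connection, which is why a non-root admin account needs sudo for ZFS commands. A rough sketch of the kind of operation the wizard automates (all names are placeholders):
+
+```
+zfs send tank/mydata@auto-2023-06-01 | ssh admin@remote-nas sudo zfs receive backup/mydata
+```
+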
-{{< /expand >}}
+{{< include file="/content/_includes/ReplicationIntroSCALE.md" type="page" >}}
-### Security and Task Name
+Remote replication requires setting up an SSH connection in TrueNAS before creating a remote replication task.
-{{< expand "Security and Task Name" "v" >}}
-{{< hint info >}}
-Using encryption for SSH transfer security is always recommended.
-{{< /hint >}}
+## Setting Up a Simple Replication Task Overview
+This section provides an overview of setting up a replication task, whether local or remote.
+It also covers the related steps you should take before configuring a replication task.
-In situations where two systems within an absolutely secure network are used for replication, disabling encryption speeds up the transfer.
-However, the data is completely unprotected from eavesdropping.
+{{< include file="/content/_includes/BasicReplicationProcess.md" type="page" >}}
-Choosing **no encryption** for the task is less secure but faster. This method uses common port settings but these can be overridden by switching to the advanced options screen or editing the task after creation.
-
-![TasksReplicationTaskSecuritySCALE](/images/SCALE/RepSecurityTaskSCALE.png "Replication Security and Task Name")
-
-TrueNAS suggests a name based off the selected sources and destination, but this can be overwritten with a custom name.
-{{< /expand >}}
-
-### Define a Schedule and Snapshot Lifetime
-
-{{< expand "Schedule and Lifetime" "v" >}}
-
-Adding a schedule automates the task to run according to your chosen times.
-You can choose between a number of preset schedules or create a custom schedule for when the replication runs.
-Choosing to run the replication once runs the replication immediately after saving the task, but you must manually trigger any additional replications.
-
-Finally, define how long you want to keep snapshots on the destination system.
-We generally recommend defining snapshot lifetime to prevent cluttering the system with obsolete snapshots.
-
-![TasksReplicationTasksScheduleLifeSCALE](/images/SCALE/RepScheduleSCALE.png "Custom Lifetimes")
-{{< /expand >}}
-
-### Starting the Replication
-
-{{< expand "Starting the Replication" "v" >}}
-**Start Replication*** saves the new replication task.
-New tasks are enabled by default and activate according to their schedule or immediately when no schedule is chosen.
-The first time a replication task runs, it takes longer because the snapshots must be copied entirely fresh to the destination.
-
-![TasksReplicationTasksSuccessSCALE](/images/SCALE/RepSuccessSCALE.png "Remote Replication Success")
+## Creating a Remote Replication Task
-Later replications run faster since the task only replicates subsequent changes to snapshots.
-Clicking the task state opens the log for that task.
+To streamline creating simple replication tasks, use the **Replication Task Wizard** to create and copy ZFS snapshots to another system.
+The wizard assists with creating a new SSH connection and automatically creates a periodic snapshot task for sources that have no existing snapshots.
-![TasksReplicationTasksLogSCALE](/images/SCALE/RepLogSCALE.png "Replication Log")
+If you have an existing replication task, you can select it on the **Load Previous Replication Task** dropdown list to load the configuration settings for that task into the wizard, and then make changes such as assigning it a different destination, schedule, or retention lifetime.
+Saving changes to the configuration creates a new replication task without altering the task you loaded into the wizard.
+This saves some time when creating multiple replication tasks between the same two systems.
-{{< /expand >}}
+{{< include file="/content/_includes/ReplicationCreateDatasetAndAdminHomeDirSteps.md" type="page" >}}
+
+3. Go to **Data Protection** and click **Add** on the **Replication Tasks** widget to open the **Replication Task Wizard**. Configure the following settings:
+
+ ![CreateRemoteReplicationTask](/images/SCALE/22.12/CreateRemoteReplicationTask.png "New Remote Replication Task")
+
+ a. Select either **On this System** or **On a Different System** on the **Source Location** dropdown list.
+ If your source is a remote system, select **On a Different System**. The **Destination Location** automatically changes to **On this System**.
+ If your source is the local TrueNAS SCALE system, you must select **On a Different System** from the **Destination Location** dropdown list to do remote replication.
+
+    TrueNAS shows the number of snapshots available for replication.
+
+    b. Select an existing SSH connection to the remote system, or select **Create New** to open the **[New SSH Connection](#configuring-a-new-ssh-connection)** configuration screen.
+
+ c. Browse to the source pool/dataset(s), then click on the dataset(s) to populate the **Source** with the path.
+ You can select multiple sources or manually type the names into the **Source** field.
+    Selecting **Recursive** also replicates snapshots of all child datasets contained within the selected source dataset.
+
+ d. Repeat to populate the **Destination** field.
+ You cannot use zvols as a remote replication destination. Add a name to the end of the path to create a new dataset in that location.
+
+    e. Select **Use Sudo for ZFS Commands**. This option only displays when logged in as the admin user (or the name given to your administrative user).
+    Selecting it removes the need to issue the `zfs allow` command in the Shell on the remote system (a minimal sketch of that command appears after this procedure).
+    When the dialog displays, click **Use Sudo For ZFS Commands**. If you close this dialog, select the option on the **Add Replication Task** wizard screen.
+
+ ![UseSudoForZFSCommandsDialog](/images/SCALE/22.12/UseSudoForZFSCommandsDialog.png "Select Use Sudo for ZFS Commands")
+
+    f. Select **Replicate Custom Snapshots**, then leave the default value in **Naming Schema**.
+    If you know how to enter the schema you want, enter it in **Naming Schema**.
+    Remote sources require entering a snapshot naming schema to identify the snapshots to replicate.
+    A naming schema is a pattern for naming the custom snapshots you want to replicate.
+    Enter the name and [strftime(3)](https://man7.org/linux/man-pages/man3/strftime.3.html) %Y, %m, %d, %H, and %M strings that match the snapshots to include in the replication. Separate entries by pressing Enter. The number of snapshots matching the patterns displays.
+
+    g. (Optional) Enter a name for the snapshot in **Task Name**.
+    SCALE populates this field with a default name using the source and destination paths separated by a hyphen, but this default can make locating the snapshot in the destination dataset a challenge.
+    To make it easier to find the snapshot, give it a name that is easy for you to identify. For example, a replication task named *dailyfull* for a full file system snapshot taken daily.
+
+{{< include file="/content/_includes/ReplicationScheduleAndRetentionSteps.md" type="page" >}}
+
+For information on replicating encrypted pools or datasets, see [Setting Up an Encrypted Replication Task]({{< relref "ReplicationWithEncryptionSCALE.md" >}}).
+
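+For reference, selecting **Use Sudo for ZFS Commands** in step 3e takes the place of manually delegating ZFS permissions to the administrative account on the remote system. The following is a minimal sketch of that manual delegation; the user name (*admin*) and destination dataset (*tank/backups*) are placeholder values, not settings taken from your system:
+
+```
+# Run in the Shell on the remote system to let the admin account
+# receive replicated snapshots without sudo (names are examples only).
+zfs allow admin create,mount,receive tank/backups
+
+# Review the delegated permissions on the dataset.
+zfs allow tank/backups
+```
+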
+### Configuring a New SSH Connection
+
+{{< include file="/content/_includes/ReplicationConfigNewSSHConnection.md" type="page" >}}
+
+### Using SSH Transfer Security
+
+{{< include file="/content/_includes/ReplicationSSHTransferSecurity.md" type="page" >}}
{{< taglist tag="scalereplication" limit="10" >}}
\ No newline at end of file
diff --git a/content/SCALE/SCALETutorials/DataProtection/Replication/ReplicationWithEncryptionSCALE.md b/content/SCALE/SCALETutorials/DataProtection/Replication/ReplicationWithEncryptionSCALE.md
new file mode 100644
index 0000000000..d7d48a9b0c
--- /dev/null
+++ b/content/SCALE/SCALETutorials/DataProtection/Replication/ReplicationWithEncryptionSCALE.md
@@ -0,0 +1,157 @@
+---
+title: "Setting Up an Encrypted Replication Task"
+description: "This article provides instructions on adding a replication task to a remote system and using encryption."
+weight: 40
+aliases:
+tags:
+- scalereplication
+- scalebackup
+---
+
+{{< toc >}}
+
+
+## Using Encryption in Replication Tasks
+
+TrueNAS SCALE replication allows users to create replicated snapshots of data stored in encrypted pools, datasets, or zvols on their SCALE system as a way to back up stored data to a remote system. You can also use encrypted datasets in a local replication.
+
+{{< hint warning >}}
+You can set up a replication task for a dataset encrypted with a passphrase or a hex encryption key, but you must unlock the dataset before the task runs or the task fails.
+{{< /hint>}}
+
+With the implementation of rootless login and the admin user, setting up remote replication tasks while logged in as an admin user requires selecting **Use Sudo For ZFS Commands**.
+
+{{< include file="/content/_includes/ReplicationIntroSCALE.md" type="page" >}}
+
+Remote replication with datasets also requires an SSH connection in TrueNAS. You can use an existing SSH connection if it has the same user credentials you want to use for the new replication task.
+
+## Setting Up a Simple Replication Task Overview
+
+This section provides a simple overview of setting up a remote replication task for an encrypted dataset.
+It also covers the related steps you should take prior to configuring the replication task.
+
+{{< expand "Replication Task General Overview" "v" >}}
+
+1. Set up the data storage for where you want to save replicated snapshots.
+
+2. Make sure the admin user has a home directory assigned.
+   In the SCALE Bluefin early release, when creating the admin user at installation, the home directory default is set to **/nonexistent**. To create an SSH connection to use in a remote replication, you must assign a home directory path.
+
+ Later releases of SCALE Bluefin set the admin user home directory to one created by SCALE during the installation process, but you need to select the option to create the admin user home directory.
+
+3. Create an SSH connection between the local SCALE system and the remote system.
+   You can do this either from **Credentials > Backup Credentials > SSH Connection** by clicking **Add**, or from the **Replication Task Wizard** using the **Generate New** option in the settings for the remote system.
+
+4. Unlock the encrypted dataset(s) and export the encryption key to a text editor like Notepad.
+
+5. Go to **Data Protection > Replication Tasks** and click **Add** to open the **Replication Task Wizard**.
+   You then specify the source and destination locations, the task name, and the schedule.
+
+   Setting options change based on the source selections. Replicating to or from a local source does not require an SSH connection.
+
+This completes the general process for all replication tasks.
+{{< /expand >}}
+
+## Creating a Remote Replication Task for an Encrypted Dataset
+
+To streamline creating simple replication tasks, use the **Replication Task Wizard** to create and copy ZFS snapshots to another system.
+The wizard assists with creating a new SSH connection and automatically creates a periodic snapshot task for sources that have no existing snapshots.
+
+If you have an existing replication task, you can select it on the **Load Previous Replication Task** dropdown list to load the configuration settings for that task into the wizard, and then make changes such as assigning it a different destination, selecting encryption options, or changing the schedule or retention lifetime.
+Saving changes to the configuration creates a new replication task without altering the task you loaded into the wizard.
+This saves some time when creating multiple replication tasks between the same two systems.
+
+{{< include file="/content/_includes/ReplicationCreateDatasetAndAdminHomeDirSteps.md" type="page" >}}
+
+3. Unlock the source dataset and export the encryption key to a text editor such as Notepad.
+   Go to **Datasets**, select the source dataset, locate the **ZFS Encryption** widget, and unlock the dataset if locked.
+ Export the key and paste it in any text editor such as Notepad. If you set up encryption to use a passphrase, you do not need to export a key.
+
+4. Go to **Data Protection** and click **Add** on the **Replication Tasks** widget to open the **Replication Task Wizard**. Configure the following settings:
+
+ ![CreateRemoteReplicationTask](/images/SCALE/22.12/CreateRemoteReplicationTask.png "New Remote Replication Task")
+
+ a. Select **On this System** on the **Source Location** dropdown list.
+ If your source is the local TrueNAS SCALE system, you must select **On a Different System** from the **Destination Location** dropdown list to do remote replication.
+
+ If your source is a remote system, create the replication task as the root user and select **On a Different System**. The **Destination Location** automatically changes to **On this System**.
+
+ TrueNAS shows the number of snapshots available for replication.
+
+ b. Select an existing SSH connection to the remote system or create a new connection.
+ Select **Create New** to open the **[New SSH Connection](#configure-a-new-ssh-connection)** configuration screen.
+
+ c. Browse to the source pool/dataset(s), then click on the dataset(s) to populate the **Source** with the path.
+ You can select multiple sources or manually type the names into the **Source** field. Separate multiple entries with commas.
+    Selecting **Recursive** also replicates snapshots of all child datasets contained within the selected source dataset.
+
+ d. Repeat to populate the **Destination** field.
+ You cannot use zvols as a remote replication destination.
+ Add a **/*datasetname*** to the end of the destination path to create a new dataset in that location.
+
+ e. (Optional) Select **Encryption** to add a [second layer of encryption](#adding-additional-encryption) over the already encrypted dataset.
+
+    f. Select **Use Sudo for ZFS Commands**. This option only displays when logged in as the admin user (or the name given to your administrative user).
+    Selecting it removes the need to issue the `zfs allow` command in the Shell on the remote system.
+    When the dialog displays, click **Use Sudo For ZFS Commands**. If you close this dialog, select the option on the **Add Replication Task** wizard screen.
+
+ ![UseSudoForZFSCommandsDialog](/images/SCALE/22.12/UseSudoForZFSCommandsDialog.png "Select Use Sudo for ZFS Commands")
+
+    If you do not select this option, you must issue the `zfs allow` command in the Shell on the remote system to delegate the required ZFS permissions to the administrative account.
+
+    g. Select **Replicate Custom Snapshots**, then accept the default value in **Naming Schema**.
+    Remote sources require entering a snapshot naming schema to identify the snapshots to replicate.
+    A naming schema is a pattern for naming the custom snapshots you want to replicate (see the example schema after this procedure).
+    If you want to change the default schema, enter the name and [strftime(3)](https://man7.org/linux/man-pages/man3/strftime.3.html) %Y, %m, %d, %H, and %M strings that match the snapshots to include in the replication.
+    Separate entries by pressing Enter. The number of snapshots matching the patterns displays.
+
+    h. (Optional) Enter a name for the snapshot in **Task Name**.
+    SCALE populates this field with a default name using the source and destination paths separated by a hyphen, but this default can make locating the snapshot in the destination dataset a challenge.
+    To make it easier to find the snapshot, give it a name that is easy for you to identify. For example, a replication task named *dailyfull* for a full file system snapshot taken daily.
+
+{{< include file="/content/_includes/ReplicationScheduleAndRetentionSteps.md" type="page" >}}
+
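+As an example of the naming schema described in step 4g, the following sketch shows a hypothetical schema, a snapshot name it matches, and a command you can run in the Shell to review existing snapshot names on a source dataset. The dataset name (*tank/data*) and the schema itself are placeholders, not values from your system:
+
+```
+# Hypothetical schema entered in Naming Schema:
+#   replsnap-%Y-%m-%d_%H-%M
+# A source snapshot that this schema matches:
+#   tank/data@replsnap-2023-06-01_14-30
+
+# List existing snapshot names for the source dataset.
+zfs list -t snapshot -o name tank/data
+```
+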
+### Unlocking the Destination Dataset
+
+After the replication task runs and creates the snapshot on the destination, you must unlock the destination dataset to access the data. Use the encryption key exported from the source dataset or pool, or, if you locked the dataset with a passphrase, enter the passphrase to unlock the dataset on the remote destination system.
+
+### Replication to an Unencrypted Destination Dataset
+
+To replicate an encrypted dataset to an unencrypted dataset on the remote destination system, follow the instructions above to configure the task, then:
+
+1. Select the task on the **Replication Task** widget. The **Edit Replication Task** screen opens.
+
+2. Scroll down to **Include Dataset Properties** and select it to clear the checkbox.
+
+ ![EditReplicationTaskIncludeDatasetProperties](/images/SCALE/22.12/EditReplicationTaskIncludeDatasetProperties.png "Edit Replication Task Include Dataset Properties")
+
+3. Click **Save**.
+
+This replicates the unlocked encrypted source dataset to an unencrypted destination dataset.
+
+### Adding Additional Encryption
+When you replicate an encrypted pool or dataset you have one level of encryption applied at the data storage level. Use the passphrase or key created or exported from the dataset or pool to unlock the dataset on the destination server.
+
+To add a second layer of encryption at the replication task level, select **Encryption**, then select the type of encryption you want to apply.
+
+![ReplicationTaskEncryptionOptions](/images/SCALE/22.12/ReplicationTaskEncryptionOptions.png "Replication Task Encryption Options")
+
+Select either **Hex** (base 16 numeral format) or **Passphrase** (alphanumeric format) from the **Encryption Key Format** dropdown list to open settings for that type of encryption.
+
+Selecting **Hex** displays **Generate Encryption Key** preselected. Select the checkbox to clear it and display the **Encryption Key** field where you can import a custom hex key.
+
+Selecting **Passphrase** displays the **Passphrase** field where you enter your alphanumeric passphrase.
+
+Select **Store Encryption key in Sending TrueNAS database** to store the encryption key in the sending TrueNAS database, or leave it unselected to choose a temporary location for the encryption key that decrypts replicated data.
+
+### Configure a New SSH Connection
+
+{{< include file="/content/_includes/ReplicationConfigNewSSHConnection.md" type="page" >}}
+
+### Using SSH Transfer Security
+
+{{< include file="/content/_includes/ReplicationSSHTransferSecurity.md" type="page" >}}
+
+
+{{< taglist tag="scalereplication" limit="10" >}}
\ No newline at end of file
diff --git a/content/SCALE/SCALETutorials/DataProtection/Replication/UseAdvancedReplicationSCALE.md b/content/SCALE/SCALETutorials/DataProtection/Replication/UseAdvancedReplicationSCALE.md
index 00c0e4608a..77da2f3c43 100644
--- a/content/SCALE/SCALETutorials/DataProtection/Replication/UseAdvancedReplicationSCALE.md
+++ b/content/SCALE/SCALETutorials/DataProtection/Replication/UseAdvancedReplicationSCALE.md
@@ -1,7 +1,7 @@
---
title: "Setting Up Advanced Replication Tasks"
description: "This article provides instruction on using the advanced replication task creation screen to add a replication task."
-weight: 30
+weight: 60
aliases:
tags:
- scalereplication
diff --git a/content/SCALE/SCALETutorials/DataProtection/Replication/_index.md b/content/SCALE/SCALETutorials/DataProtection/Replication/_index.md
index fde0dfe99f..1a1d4934a1 100644
--- a/content/SCALE/SCALETutorials/DataProtection/Replication/_index.md
+++ b/content/SCALE/SCALETutorials/DataProtection/Replication/_index.md
@@ -1,10 +1,34 @@
---
title: "Replication Tasks"
geekdocCollapseSection: true
+aliases:
+ - /scale/scaletutorials/dataprotection/addreplicationscale/
+ - /scale/scaletutorials/dataprotection/replication/addreplicationscale/
weight: 100
---
+TrueNAS SCALE replication allows users to create one-time or regularly scheduled snapshots of data stored in pools, datasets or zvols on their SCALE system as a way to back up stored data.
+When properly configured and scheduled, replication takes regular snapshots of storage pools or datasets and saves them in the destination location either on the same system or a different system.
+
+Local replication occurs on the same TrueNAS SCALE system using different pools or datasets.
+Remote replication can occur between your TrueNAS SCALE system and another TrueNAS system, or with some other remote server you want to use to store your replicated data.
+Local and remote replication can involve encrypted pools or datasets.
+
+With the implementation of rootless login and the admin user, setting up replication tasks as an admin user differs slightly from setting up replication tasks when logged in as root. Each of the tutorials in this section includes these configuration differences.
+
+The first snapshot taken for a task creates a full file system snapshot, and all subsequent snapshots taken for that task are incremental, capturing only the changes that occurred since the previous snapshot.
+
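+TrueNAS replication is built on ZFS snapshots and the ZFS send/receive mechanism. As a rough sketch of what the full and incremental transfers look like at the ZFS level (the pool, dataset, snapshot, and host names below are placeholders, and the web UI tasks handle these steps for you):
+
+```
+# First run: send the entire snapshot to the destination system.
+zfs send tank/data@snap-1 | ssh admin@backup-host zfs receive backuppool/data
+
+# Later runs: send only the changes between the previous and current snapshots.
+zfs send -i tank/data@snap-1 tank/data@snap-2 | ssh admin@backup-host zfs receive backuppool/data
+```
+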
+Scheduling options allow users to run replication tasks daily, weekly, monthly, or on a custom schedule.
+Users also have the option to run a scheduled job on demand.
+
+## Setting Up Simple Replications
+This section provides a simple overview of setting up a replication task regardless of the type of replication, local or remote.
+It also covers the related steps to take prior to configuring a replication task.
+Articles linked in the [Article Summaries](#article-summaries) section provide the specific setup instructions by replication type.
+
+{{< include file="/content/_includes/BasicReplicationProcess.md" type="page" >}}
+
## Article Summaries
{{< children depth="2" description="true" >}}
\ No newline at end of file
diff --git a/content/SCALE/SCALETutorials/Network/ManageNetworkSettingsHA.md b/content/SCALE/SCALETutorials/Network/ManageNetworkSettingsHA.md
new file mode 100644
index 0000000000..217a0efcee
--- /dev/null
+++ b/content/SCALE/SCALETutorials/Network/ManageNetworkSettingsHA.md
@@ -0,0 +1,70 @@
+---
+title: "Managing Network Settings (Enterprise HA)"
+description: "This article provides instructions on how to make changes to network settings on SCALE Enterprise (HA) systems."
+weight: 25
+aliases:
+tags:
+- scaleenterprise
+- scalefailover
+- scalenetwork
+---
+
+
+{{< enterprise >}}
+The instructions in this article only apply to SCALE Enterprise (HA) systems.
+{{< /enterprise >}}
+
+{{< include file="/_includes/EnterpriseHANetworkIPs.md" type="page" >}}
+
+## Configuring Enterprise (HA) Network Settings
+{{< hint warning >}}
+You must disable the failover before you can configure network settings!
+{{< /hint >}}
+
+To configure network settings:
+
+1. Disable the failover service.
+ Go to **System Settings > Failover**.
+ Select **Disable Failover** and click **Save**.
+
+2. [Edit the Global Network settings]({{< relref "AddingGlobalConf.md" >}}) to add or change the host and domain names, DNS name server and default gateway address.
+   If DHCP is enabled on your network, TrueNAS uses it to assign global network addresses as well as the SCALE UI access IP address. If DHCP is not enabled on your network, you must enter these values manually.
+ Review the **Global Configuration** settings to verify they match the information your network administrator provided.
+
+3. Edit the primary network interface.
+ Go to **Network** and click on the primary interface **eno1** to open the **Edit Interface** screen for this interface.
+
+ a. Turn DHCP off. Select **DHCP** to clear the checkbox.
+
+ ![EditInterfaceInterfaceSettingsHA](/images/SCALE/22.12/EditInterfaceInterfaceSettingsHA.png "Edit Network Interface Settings")
+
+ b. Add the failover settings. Select **Critical**, and then select **1** on the **Failover Group** dropdown list.
+
+ ![EditInterfaceFailoveSettingsrHA](/images/SCALE/22.12/EditInterfaceFailoveSettingsrHA.png "Edit Network Interface Failover Settings")
+
+ c. Add the virtual IP (VIP) and controller 2 IP. Click **Add** for **Aliases** to display the additional IP address fields.
+
+ ![EditInterfaceAddAliasesHA](/images/SCALE/22.12/EditInterfaceAddAliasesHA.png "Edit Network Interface Add Alias IP Addresses")
+
+ 1. Type the IP address for controller 1 into **IP Address (This Controller)** and select the CIDR number from the dropdown list.
+
+ 2. Type the controller 2 IP address into **IP Address (TrueNAS Controller 2)** field.
+
+ 3. Type the VIP address into **Virtual IP Address (Failover Address)** field.
+
+      4. Click **Save**.
+
+ After editing the interface settings, the **Test Changes** button displays. You have 60 seconds to test and then save changes before they revert. If this occurs, edit the interface again.
+
+4. Turn failover back on.
+   Go to **System Settings > Failover**.
+ Select **Disable Failover** to clear the checkmark and turn failover back on, then click **Save**.
+
+ The system might reboot. Monitor the status of controller 2 and wait until the controller is back up and running, then click **Sync To Peer**.
+ Select **Reboot standby TrueNAS controller** and **Confirm**, then click **Proceed** to start the sync operation. The controller reboots, and SCALE syncs controller 2 with controller 1, which adds the network settings and pool to controller 2.
+
+ ![FailoverSyncToPeerDialog](/images/SCALE/22.12/FailoverSyncToPeerDialog.png "Failover Sync To Peer")
+
+
+{{< taglist tag="scalefailover" limit="10" title="Related Failover Articles" >}}
+{{< taglist tag="scaleenterprise" limit="10" title="Related Enterprise Articles" >}}
diff --git a/content/SCALE/SCALETutorials/Shares/SMB/SetUpBasicTimeMachineSMBShare.md b/content/SCALE/SCALETutorials/Shares/SMB/SetUpBasicTimeMachineSMBShare.md
new file mode 100644
index 0000000000..49dd97047e
--- /dev/null
+++ b/content/SCALE/SCALETutorials/Shares/SMB/SetUpBasicTimeMachineSMBShare.md
@@ -0,0 +1,77 @@
+---
+title: "Adding a Basic Time Machine SMB Share"
+description: "This article provides instructions on adding an SMB share and enabling basic time machine."
+weight: 25
+aliases:
+tags:
+- scalesmb
+- scaleafp
+- scaleshares
+---
+
+{{< toc >}}
+
+
+SCALE uses predefined setting options to establish an SMB share that fits a specific purpose, such as a basic time machine share.
+
+## Setting Up a Basic Time Machine SMB Share
+
+To set up a basic time machine share:
+
+1. [Create the user(s)]({{< relref "ManageLocalUsersSCALE.md" >}}) that use this share. Go to **Credentials > Local Users** and click **Add**.
+
+2. [Create a dataset](#creating-the-dataset-for-the-share) for the share to use.
+
+3. [Modify the SMB service settings](#modify-the-smb-service).
+
+4. [Create the share](#create-the-basic-time-machine-smb-share) with **Purpose** set to **Basic time machine share**.
+
+After creating the share, enable the SMB service.
+
+### Creating the Dataset for the Share
+
+When adding a share, first create the dataset you plan to use for the new share.
+
+{{< include file="/content/_includes/CreateDatasetSCALE.md" type="page" >}}
+
+Select this dataset as the mount path when you create your SMB share that uses the **Basic time machine share** setting.
+
+### Modify the SMB Service
+
+Go to **System Settings > Services** and scroll down to **SMB**.
+
+1. Click the toggle to turn off the SMB service if it is running, then click edit **Configure** to open the **SMB Service** settings screen.
+
+2. Click **Advanced Settings**.
+
+3. Verify or select **Enable Apple SMB2/3 Protocol Extension** to enable it, then click **Save**.
+
+4. Click the toggle to restart the SMB service.
+
+### Create the Basic Time Machine SMB Share
+
+Go to **Shares** and click **Add** on the **Windows SMB Share** widget to open the **Add SMB Share** screen.
+
+1. Enter the SMB share **Path** and **Name**.
+
+ The **Path** is the directory tree on the local file system that TrueNAS exports over the SMB protocol.
+
+ The **Name** is the SMB share name, which forms part of the full share pathname when SMB clients perform an SMB tree connect.
+ Because of how the SMB protocol uses the name, it must be less than or equal to 80 characters. It cannot have invalid characters as specified in Microsoft documentation MS-FSCC section 2.1.6.
+ If you do not enter a name, the share name becomes the last component of the path.
+ If you change the name, follow the naming conventions for:
+ * [Files and directories](https://learn.microsoft.com/en-us/windows/win32/fileio/naming-a-file#naming-conventions)
+ * [Share names](https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/dc9978d7-6299-4c5a-a22d-a039cdc716ea)
+
+2. Select **Basic time machine share** from the **Purpose** dropdown list.
+
+3. (Optional) Enter a **Description** to help explain the share purpose.
+
+4. Select **Enabled** to allow sharing of this path when the SMB service is activated.
+ Leave it cleared if you want to disable the share without deleting the configuration.
+
+5. Click **Save** to create the share and add it to the **Shares > Windows (SMB) Shares** list.
+
+You can also choose to enable the SMB service at this time.
+
+{{< taglist tag="scalesmb" limit="10" >}}
\ No newline at end of file
diff --git a/content/SCALE/SCALETutorials/Storage/Datasets/AddManageZvols.md b/content/SCALE/SCALETutorials/Storage/Datasets/AddManageZvols.md
index 9ffc3eaf7c..0b450a6674 100644
--- a/content/SCALE/SCALETutorials/Storage/Datasets/AddManageZvols.md
+++ b/content/SCALE/SCALETutorials/Storage/Datasets/AddManageZvols.md
@@ -12,11 +12,8 @@ tags:
A ZFS Volume (zvol) is a [dataset]({{< relref "DatasetsSCALE.md" >}}) that represents a block device or virtual disk drive.
TrueNAS requires a zvol when configuring [iSCSI Shares]({{< relref "/SCALE/SCALEUIReference/Shares/_index.md" >}}). Adding a virtual machine also creates a zvol to use for storage.
-{{< hint warning >}}
-Storage space you allocate to a zvol is only used by that volume, it does not get reallocated back to the total storage capacity of the pool or dataset where you create the zvol if it goes unused.
-Plan your anticipated storage need before you create the zvol to avoid creating a zvol that exceeds your storage needs for this volume.
-Do not assign capacity that exceeds what is required for SCALE to operate properly. For more information, see [SCALE Hardware Guide]({{< relref "SCALEHardwareGuide.md" >}}) for CPU, memory and storage capacity information.
-{{< /hint >}}
+
+{{< include file="/content/_includes/ZvolSpaceWarning.md" type="page" >}}
## Adding a Zvol
diff --git a/content/SCALE/SCALETutorials/Storage/Datasets/ZvolSpaceWarning.md b/content/SCALE/SCALETutorials/Storage/Datasets/ZvolSpaceWarning.md
new file mode 100644
index 0000000000..7160b3feaf
--- /dev/null
+++ b/content/SCALE/SCALETutorials/Storage/Datasets/ZvolSpaceWarning.md
@@ -0,0 +1,8 @@
+---
+---
+
+{{< hint warning >}}
+Storage space you allocate to a zvol is only used by that volume, it does not get reallocated back to the total storage capacity of the pool or dataset where you create the zvol if it goes unused.
+Plan your anticipated storage need before you create the zvol to avoid creating a zvol that exceeds your storage needs for this volume.
+Do not assign capacity that exceeds what is required for SCALE to operate properly. For more information, see [SCALE Hardware Guide]({{< relref "SCALEHardwareGuide.md" >}}) for CPU, memory and storage capacity information.
+{{< /hint >}}
\ No newline at end of file
diff --git a/content/SCALE/SCALETutorials/SystemSettings/General/GeneralSettings.md b/content/SCALE/SCALETutorials/SystemSettings/General/GeneralSettings.md
index 2b0fc4b58d..a693046647 100644
--- a/content/SCALE/SCALETutorials/SystemSettings/General/GeneralSettings.md
+++ b/content/SCALE/SCALETutorials/SystemSettings/General/GeneralSettings.md
@@ -34,8 +34,9 @@ Select the cryptographic protocols for securing client/server connections from t
To redirect HTTP connections to HTTPS, select **Web Interface HTTP -> HTTPS Redirect**. A GUI SSL Certificate is required for HTTPS.
Activating this also sets the [HTTP Strict Transport Security (HSTS)](https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security) maximum age to 31536000 seconds (one year).
This means that after a browser connects to the web interface for the first time, the browser continues to use HTTPS and renews this setting every year.
-A warning displays when setting this function. Setting HTTPS redirects can have unintended consequences if an app does not support secure connections.
-If this occurs, to reset, clear this option and click **Save**. Then clear the browser cache before trying to connect to the app again.
+A warning displays when setting this function.
+
+{{< include file="/_includes/AppsVMsNoHTTPS.md" type="page" >}}
To send failed HTTP request data which can include client and server IP addresses, failed method call tracebacks, and middleware log file contents to iXsystems, select **Crash Reporting**.
diff --git a/content/SCALE/SCALETutorials/SystemSettings/General/ManageSysConfigSCALE.md b/content/SCALE/SCALETutorials/SystemSettings/General/ManageSysConfigSCALE.md
index 8767cc321d..386f4e2383 100644
--- a/content/SCALE/SCALETutorials/SystemSettings/General/ManageSysConfigSCALE.md
+++ b/content/SCALE/SCALETutorials/SystemSettings/General/ManageSysConfigSCALE.md
@@ -32,14 +32,17 @@ All passwords are reset if the uploaded configuration file was saved without the
{{< /hint >}}
### Resetting to Defaults
-The **Reset to Defaults** option resets the system configuration to factory settings.
-After the configuration resets, the system restarts and users must set a new login password.
-{{< hint danger >}}
+{{< enterprise >}}
+Enterprise High Availability (HA) systems should never reset their system configuration to defaults.
+[Contact iXsystems Support]({{< relref "GetSupportSCALE.md" >}}) when a system configuration reset is required.
+{{< /enterprise >}}
+
Save the system current configuration with the **Download File** option before resetting the configuration to default settings!
-
If you do not save the system configuration before resetting it, you could lose data that was not backed up, and you cannot revert to the previous configuration.
-{{< /hint >}}
+
+The **Reset to Defaults** option resets the system configuration to factory settings.
+After the configuration resets, the system restarts and users must set a new login password.
{{< taglist tag="scalesettings" limit="10" >}}
{{< taglist tag="scalebackup" limit="10" title="Related Backup Articles" >}}
diff --git a/content/SCALE/SCALETutorials/SystemSettings/Services/FTPServiceSCALE.md b/content/SCALE/SCALETutorials/SystemSettings/Services/FTPServiceSCALE.md
index 4325581d05..a983719a75 100644
--- a/content/SCALE/SCALETutorials/SystemSettings/Services/FTPServiceSCALE.md
+++ b/content/SCALE/SCALETutorials/SystemSettings/Services/FTPServiceSCALE.md
@@ -8,6 +8,7 @@ tags:
- scaleftp
- scalesftp
- scaletftp
+ - scalefiletransfer
---
@@ -15,12 +16,12 @@ tags:
The [File Transfer Protocol (FTP)](https://tools.ietf.org/html/rfc959) is a simple option for data transfers.
-The SSH and Trivial FTP options provide secure or simple config file transfer methods respectively.
+The SSH options provide secure transfer methods for critical objects like configuration files, while the Trivial FTP options provide simple file transfer methods for non-critical files.
Options for configuring **FTP**, **SSH**, and **TFTP** are in **System Settings > Services**.
Click the edit to configure the related service.
-## Configuring FTP Services Storage
+## Configuring FTP For Any Local User
FTP requires a new dataset and a local user account.
Go to **Storage** to add a new [dataset](https://www.truenas.com/docs/scale/scaletutorials/storage/pools/datasetsscale/) to use as storage for files.
@@ -28,15 +29,15 @@ Go to **Storage** to add a new [dataset](https://www.truenas.com/docs/scale/scal
Next, add a new user. Go to **Credentials > Local Users** and click **Add** to create a local user on the TrueNAS.
Assign a user name and password, and link the newly created FTP dataset as the user home directory.
-You can do this for every user, or create a global account for FTP (for example, *OurOrgFTPaccnt*).
+You can do this for every user or create a global account for FTP (for example, *OurOrgFTPaccnt*).
-Edit the file permissions for the new dataset. Go to **Storage** > **Usage** > **Manage Datasets**. Click on the name of the new dataset. Scroll down to **Permissions** and click the **Edit** button.
+Edit the file permissions for the new dataset. Go to **Datasets**, then click on the name of the new dataset. Scroll down to **Permissions** and click **Edit**.
![EditPermissionsUnixPermissionsEditor](/images/SCALE/22.12/EditPermissionsUnixPermissionsEditor.png "Basic Permissions Editor")
Enter or select the new user account in the **User** and **Group** fields.
Select **Apply User** and **Apply Group**.
-Select the **Read**, **Write** and **Execute** for **User**, **Group** and **Other** that you want to apply.
+Select the **Read**, **Write**, and **Execute** for **User**, **Group**, and **Other** you want to apply.
Click **Save**.
### Configuring FTP Service
@@ -50,20 +51,51 @@ Configure the options according to your environment and security considerations.
To confine FTP sessions to the home directory of a local user, select both **chroot** and **Allow Local User Login**.
Do *not* allow anonymous or root access unless it is necessary.
-For better security, enable TLS when possible (especially when exposing FTP to a WAN).
-TLS effectively makes this [FTPS](https://tools.ietf.org/html/rfc4217).
+Enable TLS when possible (especially when exposing FTP to a WAN). TLS effectively makes this [FTPS](https://tools.ietf.org/html/rfc4217) for better security.
Click **Save** and then start the FTP service.
-### Connecting with FTP
+## Configuring FTP for the FTP Group
+FTP requires a new dataset and a local user account.
+
+Go to **Storage** and add a new [dataset]({{< relref "DatasetsSCALE.md" >}}).
+
+Next, add a new user. Go to **Credentials > Local Users** and click **Add** to create a local user on the TrueNAS.
+
+Assign a user name and password, and link the newly created FTP dataset as the user home directory. Then add *ftp* to the **Auxiliary Groups** field and click **Save**.
+
+Edit the file permissions for the new dataset. Go to **Datasets**, then click on the name of the new dataset. Scroll down to **Permissions** and click **Edit**.
+
+![EditPermissionsUnixPermissionsEditor](/images/SCALE/22.12/EditPermissionsUnixPermissionsEditor.png "Basic Permissions Editor")
+
+Enter or select the new user account in the **User** and **Group** fields.
+Enable **Apply User** and **Apply Group**.
+Select the **Read**, **Write**, and **Execute** for **User**, **Group**, and **Other** you want to apply, then click **Save**.
+
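+For reference, the ownership and mode set in the **Edit Permissions** screen correspond to standard Unix `chown` and `chmod` changes on the dataset mount point. A minimal shell sketch, assuming a hypothetical *ftpuser* account and a dataset mounted at */mnt/tank/ftpdata*:
+
+```
+# Give the FTP account ownership of the dataset mount point.
+chown ftpuser:ftpuser /mnt/tank/ftpdata
+
+# Allow read/write/execute for the owner and group, read/execute for others.
+chmod 775 /mnt/tank/ftpdata
+```
+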
+### Configuring FTP Service
+
+Go to **System Settings > Services** and find **FTP**, then click edit to open the **Services > FTP** screen.
+
+![FTPBasicSettings](/images/SCALE/22.12/FTPBasicSettings.png "Services FTP Basic Settings General Options")
+
+Configure the options according to your environment and security considerations. Click **Advanced Settings** to display more options.
+
+To confine FTP sessions to the home directory of a local user, select **chroot**.
+
+Do *not* allow anonymous or root access unless it is necessary.
+Enable TLS when possible (especially when exposing FTP to a WAN). TLS effectively makes this [FTPS](https://tools.ietf.org/html/rfc4217) for better security.
+
+Click **Save**, then start the FTP service.
+
+## Connecting with FTP
Use a browser or FTP client to connect to the TrueNAS FTP share.
-The images below use [FileZilla](https://sourceforge.net/projects/filezilla/), a free option.
+The images below use [FileZilla](https://sourceforge.net/projects/filezilla/), which is free.
-The user name and password are those of the local user account on the TrueNAS.
+The user name and password are those of the local user account on the TrueNAS system.
The default directory is the same as the user home directory.
After connecting, you can create directories and upload or download files.
![FilezillaFTPConnect](/images/CORE/FilezillaFTPConnect.png "Filezilla FTP Connection")
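+
+If you prefer the command line to a graphical client, you can also test the connection with any FTP-capable tool, such as `curl`. A quick sketch; the hostname (*truenas.local*), user name (*ftpuser*), and file name are placeholders for your own values:
+
+```
+# List the contents of the user home directory over FTP (curl prompts for the password).
+curl ftp://truenas.local/ --user ftpuser
+
+# Upload a file to the share.
+curl -T ./notes.txt ftp://truenas.local/ --user ftpuser
+```
+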
-{{< taglist tag="scale" limit="10" >}}
\ No newline at end of file
+{{< taglist tag="scale" limit="10" >}}
diff --git a/content/SCALE/SCALETutorials/SystemSettings/Services/SMARTServicesSCALE.md b/content/SCALE/SCALETutorials/SystemSettings/Services/SMARTServicesSCALE.md
index f7ed971938..7131b5a61c 100644
--- a/content/SCALE/SCALETutorials/SystemSettings/Services/SMARTServicesSCALE.md
+++ b/content/SCALE/SCALETutorials/SystemSettings/Services/SMARTServicesSCALE.md
@@ -11,6 +11,10 @@ tags:
{{< toc >}}
+{{< hint info >}}
+There is a special consideration when installing TrueNAS in a virtual machine (VM), as S.M.A.R.T. services monitor actual physical devices, which are abstracted in a VM. After the installation of TrueNAS completes on the VM, go to **System Settings > Services** and click the blue toggle button on the S.M.A.R.T. service to stop the service from running. Clear the **Start Automatically** checkbox so the service does not automatically start when the system reboots.
+{{< /hint >}}
+
Use the **Services > S.M.A.R.T.** screen to configure when S.M.A.R.T. tests run and when to trigger alert warnings and send emails.
![SMARTSystemServicesStoppedSCALE](/images/SCALE/22.12/SMARTSystemServicesStoppedSCALE.png "Services S.M.A.R.T. Options")
diff --git a/content/SCALE/SCALETutorials/Virtualization/CreatingManagingVMsSCALE.md b/content/SCALE/SCALETutorials/Virtualization/CreatingManagingVMsSCALE.md
index ff5f2bd3c1..0fcc26111b 100644
--- a/content/SCALE/SCALETutorials/Virtualization/CreatingManagingVMsSCALE.md
+++ b/content/SCALE/SCALETutorials/Virtualization/CreatingManagingVMsSCALE.md
@@ -26,15 +26,16 @@ Before creating a virtual machine, you need an installer .iso or im
To create a new VM, go to **Virtualization** and click **Add** or **Add Virtual Machines** if you have not yet added a virtual machine to your system.
Configure each category of the VM according to your specifications, starting with the **Operating System**.
-![CreateVMWOpSysSCALE](/images/SCALE/22.12/CreateVMWOpSysSCALE.png "VM Add: OS")
+![AddVMOperatingSystemSCALE](/images/SCALE/AddVMOperatingSystemSCALE.png "VM Add: OS")
-Choose the **Guest Operating System** from the dropdown list. If you choose Windows, select **Enable Hyper-V Enlightenments** to implement KVM Hyper-V Enlightenments for Windows guests.
+See [Virtualization Screens]({{< relref "VirtualizationScreens.md" >}}) for more information on virtual machine screen settings.
-Enter a name for the VM, and a description to help you remember its usage. The description is optional.
+Additional notes:
-Set the **System Clock** to the option you want to use. The default is **Local**.
+Compare the recommended specifications for your guest operating system with the available host system resources when allocating virtual CPUs, cores, threads, and memory size.
-Select **UEFI** for the **Boot Method** unless you need support for older operating systems that only support BIOS booting.
+Do not allocate too much memory to a VM.
+Activating a VM with all available memory allocated to it can slow the host system or prevent other VMs from starting.
Enter the time the system waits for the VM to cleanly shut down in **Shutdown Timeout** or leave set at the default which is 90 seconds.
@@ -112,31 +113,24 @@ Click **Upload** to begin the upload process. After the upload finishes, click *
### Specifying a GPU
{{< expand "Click Here for More Information" "v" >}}
+The **VirtIO** network interface requires a guest OS that supports VirtIO paravirtualized network drivers.
{{< hint info >}}
iXsystems does not have a list of approved GPUs at this time but does have drivers and basic support for the list of [nvidia Supported Products](https://www.nvidia.com/Download/driverResults.aspx/191961/en-us/).
{{< /hint >}}
-![CreateVMWGPUsSCALE](/images/SCALE/22.12/CreateVMWGPUsSCALE.png "VM GPU")
+### Adding and Removing Devices
-This next section is optional. The **Hide from MSR** checkbox is not selected by default. Select this option if you want to enable the VM to hide the graphic processing unit (GPU) from the Microsoft Reserved Partition (MSR).
+After creating the VM, add and remove virtual devices by expanding the VM entry on the **Virtual Machines** screen and clicking device_hub **Devices**.
-The following checkbox is enabled by default. **Ensure Display Device**, when selected, permits the guest operating system to always have access to a video device. It is required for headless installations such as ubuntu server. Leave the checkbox clear for cases where you want to use a graphic processing unit (GPU) passthrough and do not want a display device added.
+![VirtualMachinesDevicesSCALE](/images/SCALE/VirtualMachinesDevicesSCALE.png "VM Devices")
-Optional: the **GPUs** dropdown list allows you to select a relevant GPU if at least one relevant GPU is present.
+Device notes:
-Click **Next**.
-{{< /expand >}}
-
-### Confirming Your Selections
-{{< expand "Click Here for More Information" "v" >}}
-![CreateVMWConfirmSCALE](/images/SCALE/22.12/CreateVMWConfirmSCALE.png "VM Summary")
-
-The **Confirm Options** screen should be reviewed carefully. This is a summary of the values you have input in the previous screens. If all information is correct, click **Save** to create the VM. If you need to make changes, click the **Back** button. Note that if you navigate away from the wizard without clicking **Save** you will lose your progress and need to start again.
-{{< /expand >}}
-
-
-See [Virtualization Screens]({{< relref "VirtualizationScreens.md" >}}) for more information on any of the fields listed in the Create Virtual Machine wizard or other virtual machine screen settings. The next step is to configure devices for the VM. This process is described in [Adding and Managing VM Devices]({{< relref "AddManageVMDevicesSCALE.md" >}}).
+* A virtual machine attempts to boot from devices according to the **Device Order**, starting with **1000**, then ascending.
+* A **CD-ROM** device allows booting a VM from a CD-ROM image like an installation CD.
+ The CD image must be available in the system storage.
## Managing a Virtual Machine
+
After creating the VM and configuring devices for it, manage the VM by expanding the entry on the **Virtual Machines** screen.
![VirtualMachinesOptionsSCALE](/images/SCALE/VMRunningOptionsSCALE.png "VM Options")
diff --git a/content/SCALE/SCALEUIReference/Apps/AppsScreensSCALE.md b/content/SCALE/SCALEUIReference/Apps/AppsScreensSCALE.md
index 844802eb7b..c2b1a3f7eb 100644
--- a/content/SCALE/SCALEUIReference/Apps/AppsScreensSCALE.md
+++ b/content/SCALE/SCALEUIReference/Apps/AppsScreensSCALE.md
@@ -11,10 +11,10 @@ tags:
{{< toc >}}
-The **Applications** screen displays with **Installed Applications** displayed by default.
+The **Applications** screen displays with **Installed Applications** by default.
-The first time time you select **Apps** on the main feature navigation panel, the **Applications** screen displays the **Choose a pool for Apps** dialog.
-Select a pool from the dropdown list and then click **Choose** to set the selected pool as the one applications use for data storage.
+The first time you select **Apps** on the main feature navigation panel, the **Applications** screen displays the **Choose a pool for Apps** dialog.
+Select a pool from the dropdown list, then click **Choose** to set that pool for application data storage.
## Applications Screen Options
@@ -24,16 +24,16 @@ The options at the top right of the **Applications** screen change with the scre
### Bulk Actions
-The **Bulk Action** option that displays at the top right of the **Installed Applications** screen allows you to select more than one, or all installed apps on your system. After selecting the apps, use the other action buttons to either **Start**, **Stop** or **Delete** the selected apps.
+The **Bulk Action** option at the top right of the **Installed Applications** screen allows you to select more than one or all installed apps on your system. After selecting the apps, use the other action buttons to either **Start**, **Stop**, or **Delete** the selected apps.
**Select All** places a checkmark in the top left corner of the widget for each installed application. Toggles to **Unselect All**.
-**Start** starts all selected apps, and displays **Success** dialog for each app after it starts without issue.
+**Start** starts all selected apps and displays the **Success** dialog for each app after it starts without issue.
**Stop** stops all selected apps and displays a **Success** dialog for each app after it stops without issue.
The **Upgrade** option allows you to select multiple apps, and if there are updates available, you can update the apps to the most recent version of the application.
### Settings
-**Settings** displays at the top right of all four **Applications** screens, but they are only functional when on the **Available Applications** screen. Setting options are:
+**Settings** displays at the top right of all four **Applications** screens and has three options.
**Choose Pool** opens the **[Choose a pool](#choose-pool-window)** window.
**Advanced Settings** opens the **[Kubernetes Settings](#kubernetes-settings-screen)** configuration screen.
@@ -41,12 +41,12 @@ The **Upgrade** option allows you to select multiple apps, and if there are upda
#### Choose Pool Window
Selecting **Choose Pool** on the **Settings** list opens a different **Choose a pool for Apps** window than the one that first displays before you add your first application.
-Use the **Settings > Choose Pool** option to change the pool applications use for storage.
+Use the **Settings > Choose Pool** option to change the pool applications use for storage.
![AppsSettingsChoosePool](/images/SCALE/22.02/AppsSettingsChoosePool.png "Apps Choose Pool Window")
-**Migrate applications to the new pool** starts the process of moving your application data from the existing pool to the new pool specified after you click **Choose**.
-Select **Migrate applications to the new pool** if you change your applications pool and want to migrate data from the existing pool to the new pool.
+**Migrate applications to the new pool** starts moving your application data from the existing pool to the new pool specified after you click **Choose**.
+Select **Migrate applications to the new pool** if you change your applications pool and want to migrate data from the existing pool to the new one.
#### Kubernetes Settings Screen
The **Advanced Settings** option opens the **Kubernetes Settings** configuration screen.
@@ -59,28 +59,29 @@ The **Advanced Settings** option opens the **Kubernetes Settings** configuration
| **Node IP** | Select the IP address for the node from the dropdown list. |
| **Route v4 Interface** | Select the network interface from the dropdown list. |
| **Route v4 Gateway** | Enter the IP address for the route v4 gateway. |
-| **Enable Container Image Updates** | Select to enable updates of the container image. |
+| **Enable Container Image Updates** | Select to enable container image updates. |
| **Enable GPU support** | Select to enable GPU support. The maximum number of apps that can use an Intel GPU is five. |
-| **Enable Integrated Loadbalancer** | Select to enable the integrated loadbalancer. The default uses servicelb but if disabled, allows using metallb and allows users to specify any IP from the local network. |
+| **Enable Integrated Loadbalancer** | Select to enable the integrated loadbalancer. The default uses servicelb. When disabled, you can use metallb and specify any IP from the local network. |
| **Enable Host Path Safety Checks** | Enabled by default. TrueNAS SCALE performs safety checks to ensure app host path volumes are secure. |
**Settings Requiring Re-Initialization**
-![AppsAdvancedSettingsKubernetesSettingsReInitialization](/images/SCALE/22.02/AppsAdvancedSettingsKubernetesSettingsReInitialization.png "Advanced Settings Kubernetes Settings 2")
+![AppsAdvancedSettingsKubernetesSettingsReInitialization](/images/SCALE/22.12/AppsAdvancedSettingsKubernetesSettingsReInitialization.png "Advanced Settings Kubernetes Settings 2")
| Setting | Description |
|---------|-------------|
| **Cluster CIDR** | Required. Enter the IP address and CIDR number for the Kubernetes cluster. |
| **Service CIDR** | Required. Enter the IP address and CIDR number for the Kubernetes service. |
| **Cluster DNS IP** | Required. Enter the IP address for the cluster DNS. |
+| **Force** | When selected, **Force** bypasses pool validation during Kubernetes reinitialization. |
{{< /expand >}}
#### Unset Pool
-The **Unset Pool** option on the **Settings** list displays a confirmation dialog. Click **UNSET** to unset the pool. When complete a **Success** dialog displays.
+The **Unset Pool** option under **Settings** displays a confirmation dialog. Click **UNSET** to unset the pool. When complete, a **Success** dialog displays.
### Refresh All
-Opens a **Refreshing** counter with status of the refresh options. When complete, the **Task Manager** displays with the status of each app refresh operation.
+Opens a **Refreshing** counter showing the refresh options status. When complete, the **Task Manager** displays the status of each app refresh operation.
### Add Catalog
**Add Catalog** at the top of the **Manage Catalogs** screen opens a warning dialog before it opens the **Add Catalog** screen.
@@ -97,8 +98,8 @@ Click **CONTINUE** to open the **Add Catalog** screen.
| **Catalog Name** | enter the name the TrueNAS uses to look up the catalog. For example, *truecharts*. |
| **Force Create** | Select to add the catalog to the system even if some trains are unhealthy. |
| **Repository** | Enter the valid git repository URL. For example, *https://github.com/truecharts/catalog*. |
-| **Preferred Trains** | The trains TrueNAS uses to retrieve available applications for the catalog. Default is **stable** (and optionally: **incubator**). |
-| **Branch** | Specify the git repository branch TrueNAS should use for the catalog. Default is **main**. |
+| **Preferred Trains** | The trains TrueNAS uses to retrieve available applications for the catalog. The default is **stable** (and optionally: **incubator**). |
+| **Branch** | Specify the git repository branch TrueNAS should use for the catalog. The default is **main**. |
{{< /expand >}}
### Pull Image
@@ -109,11 +110,11 @@ The **Pull Image** option at the top right of the **Manage Docker Images** scree
| Setting | Description |
|---------|-------------|
-| **Image Name** | Enter the name of the image to pull. Format for the name is `registry/repo/image`. |
-| **Image Tag** | Enter the tag of the image. For example, *latest*. |
+| **Image Name** | Enter the name of the image to pull. The format for the name is `registry/repo/image`. |
+| **Image Tag** | Enter the image tag. For example, *latest*. |
#### Docker Registry Authentication Settings
-These settings are optional, and only needed for private images.
+These settings are optional and only necessary for private images.
| Setting | Description |
|---------|-------------|
@@ -123,8 +124,8 @@ These settings are optional, and only needed for private images.
### Launch Docker Image
-**Launch Docker Image** opens the Docker Image wizard where you can configure third-party applications not listed on the **Available Applications** screen.
-These docker image options are derived from the [Kubernetes container options](https://kubernetes.io/docs/setup/).
+**Launch Docker Image** opens the Docker Image wizard, where you can configure third-party applications not listed on the **Available Applications** screen.
+These docker image options derive from the [Kubernetes container options](https://kubernetes.io/docs/setup/).
See [Launch Docker Image Screens]({{< relref "LaunchDockerImageScreens.md" >}}) for more information.
## Installed Applications Screen
@@ -137,18 +138,23 @@ After installing your application(s), this screen displays the application(s).
![InstalledApplicationsWithApps](/images/SCALE/22.02/InstalledApplicationsWithApps.png "Installed Applications")
-Click the edit on the application widget to open the action options dropdown list. Options are:
+Click the application name in the app widget to open the app summary screen, which displays information on the app version, ports, status, pods, deployments, statefulsets, catalog, update train, and name. The summary screen also has dropdowns that list the active container images and application events. The **Refresh Events** button updates the list with the latest events.
+
+![AppsSummaryScreen](/images/SCALE/22.12/AppsSummaryScreen.png "Apps Summary")
+
+Click the icon on the application widget to open the action options dropdown list. Options are:
* **Edit** opens the configuration form for the selected application.
* **Shell** opens the **[Choose pod](#choose-pod-window)** window before opening the **[Applications > Pod Shell](#pod-shell-screen)** screen.
* **Logs** opens the **[Choose Log](#choose-log-window)** window before opening the **[Applications > Pod Log](#pod-log-window)** screen.
* **Delete** opens a confirmation dialog.
{{< /expand >}}
-### Choose Pod Window
-The **Choose Pod** window specifies which pod or active container, and the shell commands you want to use when the **Applications > Pod Shell** screen displays.
+
+### Choose Pod Shell Window
+The **Choose Pod Shell** window lets you choose which pod or active container and shell commands to use when the **Applications > Pod Shell** screen displays.
{{< expand "Click Here for More Information" "v" >}}
-![AppChoosePodWindow](/images/SCALE/22.02/AppChoosePodWindow.png "Choose Pod Window")
+![AppChoosePodWindow](/images/SCALE/22.12/AppChoosePodWindow.png "Choose Pod Window")
| Setting | Description |
|---------|-------------|
@@ -165,7 +171,7 @@ The **Pod Shell** screen allows users to enter TrueNAS CLI commands to access in
![AppsPodShellWindow](/images/SCALE/22.02/AppsPodShellWindow.png "Applications Pod Shell")
-The following are examples of commands you can enter to access information on an active container. You can also use the **System Settings > Shell** to access the same information.
+The following are example commands to access information on an active container. You can also use the **System Settings > Shell** to access the same information.
To view container namespaces: `k3s kubectl get namespaces`.
To view pods in a namespace: `k3s kubectl get pods -n <namespace>`.
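+
+For example, assuming an application deployed in a namespace named *ix-plex* (substitute the namespace and pod names your system reports):
+
+```
+# List all namespaces, then the pods in one of them.
+k3s kubectl get namespaces
+k3s kubectl get pods -n ix-plex
+
+# Describe a pod to see its containers, status, and recent events.
+# Replace <pod-name> with a name from the previous command.
+k3s kubectl describe pod -n ix-plex <pod-name>
+```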
@@ -175,10 +181,10 @@ To get container status: `k3s kubectl describe -n
{{< /expand >}}
### Choose Log Window
-The **Logs** options opens the **Choose Log** window.
+The **Logs** option opens the **Choose Log** window.
{{< expand "Click Here for More Information" "v" >}}
-![AppsChooseLogWindow](/images/SCALE/22.02/AppsChooseLogWindow.png "Choose Log Window")
+![AppsChooseLogWindow](/images/SCALE/22.12/AppsChooseLogWindow.png "Choose Log Window")
| Setting | Description |
|---------|-------------|
@@ -188,23 +194,23 @@ The **Logs** options opens the **Choose Log** window.
{{< /expand >}}
### Pod Log Window
-The **Pod Log** shell screen displays with the information selected in the **Choose Log** window.
+The **Pod Log** shell screen displays the information selected in the **Choose Log** window.
{{< expand "Click Here for More Information" "v" >}}
![ApplicationsPodLogsScreen](/images/SCALE/22.02/ApplicationsPodLogsScreen.png "Applications Pod Logs Shell")
-Use the **Set font size** slider to increase or decrease the size of the font displayed on the screen.
+Use the **Set font size** slider to increase or decrease the font size displayed on the screen.
**Reconnect** re-establishes a connection with the application service.
**Download Logs** downloads the logs to your server.
{{< /expand >}}
### Delete Application
-The **Delete** dialog for stoppped applications includes two confirmation options, a **Confirm** option and a **Delete docker images used by the app** option.
+The **Delete** dialog for stopped applications includes two confirmation options, a **Confirm** option and a **Delete docker images used by the app** option.
![DeleteStoppedAppDialog](/images/SCALE/22.12/DeleteStoppedAppDialog.png "Delete Application")
-**Delete docker images used by the app** deletes the docker image used by the app when you delete the app. If you do not delete the image it remains on the **Manage Docker Images** list until you [deleted it](#delete-image).
+**Delete docker images used by the app** deletes the docker image the app uses when you delete the app. If you do not delete the image, it remains on the **Manage Docker Images** list until you [delete it](#delete-image).
**Confirm** activates the **Delete** button.
@@ -216,11 +222,11 @@ The **Available Applications** screen displays the widgets for all applications
The **Install** button on each application card opens the configuration wizard for that application.
-Click on the application icon or name to open an ***appname* Application Summary** window that includes information on the **Catalog**, **Categories**, **Train**, **Status** and **Versions** for that application.
+Click on the application icon or name to open an ***appname* Application Summary** window that includes information on the **Catalog**, **Categories**, **Train**, **Status**, and **Versions** for that application.
{{< /expand >}}
## Manage Catalogs
-The **Manage Catalog** screen displays the list of application catalogs installed on TrueNAS SCALE. The **Official** catalog contains all the applications listed on the **Available Applications** screen.
+The **Manage Catalog** screen displays the application catalogs installed on TrueNAS SCALE. The **Official** catalog contains all the applications listed on the **Available Applications** screen.
{{< expand "Click Here for More Information" "v" >}}
The options at the top right of the screen include the **[Refresh All](#refresh-all)** and **[Add Catalog](#add-catalog-screen)** options.
@@ -248,7 +254,7 @@ Opens a **Refreshing** counter that shows the status of the refresh operation. Y
Opens a confirmation dialog before deleting the catalog. The **Official** catalog **Delete** option is inactive. You cannot delete the official catalog.
### Catalog Summary Window
-The **Summary** option for each catalog listed on **Manage Catalogs** opens the ***Name* Catalog Summary** window where *Name* is the name of the catalog. The summary displays the catalog status, application and train, and allows you to select the train and status you want to include in the summary.
+The **Summary** option for each catalog listed on **Manage Catalogs** opens the ***Name* Catalog Summary** window where *Name* is the name of the catalog. The summary displays the catalog status, application, and train, and allows you to select the train and status you want to include in the summary.
{{< expand "Click Here for More Information" "v" >}}
**[Add Catalog](#add-catalog)** opens the **Add Catalog** screen.
@@ -261,7 +267,7 @@ The **Summary** option for each catalog listed on **Manage Catalogs** opens the
{{< /expand >}}
## Manage Docker Images
-The **Manage Docker Images** displays a list of Docker image IDs and tags on the system. The list displays **Update Available** for container images you can update.
+The **Manage Docker Images** button displays a list of Docker image IDs and tags on the system. The list shows **Update Available** for container images you can update.
{{< expand "Click Here for More Information" "v" >}}
![ApplicationsManageDockerImagesScreen](/images/SCALE/22.02/ApplicationsManageDockerImagesScreen.png "Applications Manage Docker Images")
@@ -277,12 +283,14 @@ Select **Update** to open the **Choose a tag** dialog. Select the image tag and
After updating the Docker image, the option becomes inactive until a new update becomes available.
### Delete Image
-The **Delete** dialog for images includes a pre-selected radio button for the docker image you selected to delete, a **Confirm** option, and a **Force delete** option.
+The **Delete** dialog for images includes a pre-selected radio button for the docker image you selected to delete, a **Confirm** option and a **Force delete** option.
![AppsManageDockerImageDelete](/images/SCALE/22.12/AppsManageDockerImageDelete.png "Delete Docker Image")
**Confirm** activates the **Delete** button.
-**Force delete** adds the force flag to the delete-image operation. Select to avoid issues deleting an image. Issues can occur if the same Docker image is references in two different ways, for example on the Docker hub registry and the Github container registry.
+**Force delete** adds the force flag to the delete-image operation. Select it to avoid issues deleting an image. You can encounter problems if multiple registries reference the same Docker image.
+
+For example, you can encounter issues deleting an image if both the Docker Hub registry and the GitHub container registry reference it.
{{< taglist tag="scaleapps" limit="10" >}}
diff --git a/content/SCALE/SCALEUIReference/Apps/LaunchDockerImageScreens.md b/content/SCALE/SCALEUIReference/Apps/LaunchDockerImageScreens.md
index 5b8d852de3..e6f4105f76 100644
--- a/content/SCALE/SCALEUIReference/Apps/LaunchDockerImageScreens.md
+++ b/content/SCALE/SCALEUIReference/Apps/LaunchDockerImageScreens.md
@@ -1,6 +1,6 @@
---
title: "Launch Docker Image Screens"
-description: "This article provides information on the **Launch Docker Image** wizard configuration screens and settings."
+description: "This article provides information on the **Launch Docker Image** wizard configuration settings."
weight: 25
aliases:
tags:
@@ -11,36 +11,35 @@ tags:
{{< toc >}}
-**Launch Docker Image** on the **Applications** screen opens a configuration wizard that steps through the application creation process using Docker image when selected while on the **Available Applications** tab.
+**Launch Docker Image** on the **Applications** screen opens a configuration wizard that steps through the application creation process using a Docker image when selected while on the **Available Applications** tab.
![AppsScreenHeaderSCALE](/images/SCALE/22.02/AppsScreenHeaderSCALE.png "Available Application Header Options")
-The docker image wizard includes 12 configuration screens and a **Confirm Options** screen that displays a summary of some of the setting options configured.
The **Launch Docker Image** wizard allows you to configure third-party applications using settings based on Kubernetes. You can use the wizard to configure applications not included in the **Official** catalog or to do a more advanced installation of official catalog applications.
-## Application Name Screen
-The **Application Name** screen is the first step in the **Launch Docker Image** configuration wizard.
+## Application Name
+The **Application Name** section is the first step in the **Launch Docker Image** configuration wizard.
{{< expand "Click Here for More Information" "v" >}}
![LaunchDockerImageApplicationVersion](/images/SCALE/22.12/LaunchDockerImageApplicationVersion.png "Application Name and Version")
| Setting | Description |
|---------|-------------|
-| **Application Name** | Displays **ix-Chart** as the default. Enter a name for the application you are adding. The name must have lowercase alphanumeric characters, must begin with an alphabet character and can end with an alphanumeric character. The name can contain a hyphen (-) but not as the first or last character in the name. For example, using *chia-1* but not *-chia1* or *1chia-* as a valid name. |
-| **Version** | Displays the current version for the default application. Enter the version for the application you want to install.|
+| **Application Name** | Displays **ix-Chart** as the default. Enter a name for the application you are adding. The name can only contain lowercase alphanumeric characters, must begin with a letter, and must end with an alphanumeric character. The name can contain a hyphen (-), but not as the first or last character. For example, *chia-1* is valid, but *-chia1* and *1chia-* are not (see the sketch after this table). |
+| **Version** | Displays the current version of the default application. Enter the version of the application you want to install.|
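+
+For illustration only, the following shell sketch approximates the naming rules above with a regular expression; it is not the exact validation the wizard performs:
+
+```
+# Prints "valid" for names that start with a lowercase letter, contain only
+# lowercase letters, digits, or hyphens, and end with a letter or digit.
+echo "chia-1" | grep -Eq '^[a-z]([a-z0-9-]*[a-z0-9])?$' && echo valid || echo invalid
+```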
{{< /expand >}}
-## Container Images Screen
-The **Container Images** settings specify the Docker image details. Always refer to the dockerhub page for information on what the docker container requires.
+## Container Images
+The **Container Images** settings specify the Docker image details. Always refer to the Docker Hub page for the image for information on the container requirements.
{{< expand "Click Here for More Information" "v" >}}
-Define the image tag, when the image is pulled from the remote repository, how the container is updated, and when a container automatically restarts with these settings.
+Define the image tag, when TrueNAS pulls the image from the remote repository, how the container updates, and when a container automatically restarts with these settings.
![LaunchDockerImageContainerImage](/images/SCALE/22.12/LaunchDockerImageContainerImage.png "Container Images")
| Setting | Description |
|---------|-------------|
-| **Image Repository** | Required. Enter the Docker image repository name. For example, for Plex enter *plexinc/pms-docker*.|
-| **Image Tag** | Enter the tag for the specified image. For example, for Plex enter *1.20.2.3402-0fec14d92*. |
+| **Image Repository** | Required. Enter the Docker image repository name. For example, *plexinc/pms-docker* for Plex.|
+| **Image Tag** | Enter the tag for the specified image. For example, *1.20.2.3402-0fec14d92* for Plex. |
| **Image Pull Policy** | Select the Docker image pull policy from the dropdown list. Options are **Only pull image if not present on host**, **Always pull image even if present on host**, or **Never pull image even if it's not present on host**. |
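+
+As an illustration of how the two fields combine, the same image reference can be pulled manually on any machine with the Docker CLI installed (the Plex repository and tag here are only examples):
+
+```
+# The wizard combines Image Repository and Image Tag as repository:tag.
+docker pull plexinc/pms-docker:latest
+```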
{{< /expand >}}
@@ -49,7 +48,7 @@ The **Container Entrypoint** settings specify both commands and argument options
{{< expand "Click Here for More Information" "v" >}}
Define any [commands and arguments](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) to use for the image.
These can override any existing commands stored in the image.
-Check the documentation for the application you want to install using a Docker Image for entrypoint commands or arguments you need to enter.
+Check the documentation for the application you want to install using a Docker Image for entry point commands or arguments you need to enter.
![LaunchDockerImageAddContainerEntrypoints](/images/SCALE/22.12/LaunchDockerImageAddContainerEntrypoints.png "Add Container Entrypoints")
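+
+If you are unsure what an image already defines, one way to check its built-in entrypoint and arguments before overriding them is shown below; this sketch assumes a machine with the Docker CLI and uses the Plex image only as an example:
+
+```
+# Show the ENTRYPOINT and CMD baked into the image. Values entered in the
+# Container Entrypoint settings override these defaults.
+docker pull plexinc/pms-docker:latest
+docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' plexinc/pms-docker:latest
+```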
@@ -65,7 +64,7 @@ Check the documentation for the application you want to install using a Docker I
The **Container Environment Variables** settings specify container environment variables the container/image needs.
You can also [define additional environment variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) for the container.
{{< expand "Click Here for More Information" "v" >}}
-Be sure to check the documentation for the image you are trying to deploy and add any required variables here.
+Check the documentation for the image you are trying to deploy and add any required variables here.
![LaunchDockerImageAddContainerEnvironmentVariables](/images/SCALE/22.12/LaunchDockerImageAddContainerEnvironmentVariables.png "Add Container Environmental Variables")
@@ -73,15 +72,15 @@ Be sure to check the documentation for the image you are trying to deploy and ad
|---------|-------------|
| **Configure Container Environment Variables** | Click **Add** to display a block of **Container Environment Variables**. Click again to add more blocks for environment variables. |
| **Container Environment Variables** | Container environmental variable name and value fields. |
-| **Environment Variable Name** | Enter the environment variable name. For example, if installing Pi-Hole enter **TZ** for timezone. |
-| **Environment Variable Value** | Enter the value for the variable specified in **Environment Variable Name**. For example, for Pi-Hole timezone variable, enter *AmericaNewYork*. |
+| **Environment Variable Name** | Enter the environment variable name. For example, enter **TZ** for the timezone if installing Pi-Hole. |
+| **Environment Variable Value** | Enter the value for the variable specified in **Environment Variable Name**. For example, for the Pi-Hole timezone variable, enter *America/New_York*. |
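+
+After the app deploys, you can confirm that the variables reached the container. This sketch assumes a Pi-Hole deployment in a namespace named *ix-pihole*; the namespace and pod names are examples:
+
+```
+# List the pods for the app, then print the container environment and
+# filter for the TZ variable. Replace <pod-name> with a reported pod name.
+k3s kubectl get pods -n ix-pihole
+k3s kubectl exec -n ix-pihole <pod-name> -- env | grep TZ
+```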
{{< /expand >}}
## Networking
-The **Networking** settings specify network policy, addresses, and DNS services if the container needs special networking configuration.
+The **Networking** settings specify network policy, addresses, and DNS services if the container needs a custom networking configuration.
{{< expand "Click Here for More Information" "v" >}}
See the [Docker documentation](https://docs.docker.com/network/host/) for more details on host networking.
-Users can create additional network interfaces for the container if needed or give static IP addresses and routes to new interface.
+You can create additional network interfaces for the container or give static IP addresses and routes to a new interface.
By default, containers use the DNS settings from the host system.
You can change the DNS policy and define separate nameservers and search domains.
See the Docker [DNS services documentation](https://docs.docker.com/config/containers/container-networking/#dns-services) for more details.
@@ -92,23 +91,24 @@ See the Docker [DNS services documentation](https://docs.docker.com/config/conta
|---------|-------------|
| **Add External Interfaces** | Click **Add** to display a block of interface settings. |
| **Host Interface** | Required. Select a host interface on your system from the dropdown list. |
-| **IPAM Type** | Required. Select an option from the dropdown list to specify the type for IPAM. Options are **Use DHCP** or **Use Static IP**. To add a default route, select **Add route** allow you to enter route destination IP /subnet 0.0.0.0/0. Enter the gateway (for example, *192.168.1.1*). After submitting the docker image, navigate to **Installed Applications**, locate the docker image you added, select **Edit** and change the route destination/subnet to equal 0.0.0.0 /0. |
+| **IPAM Type** | Required. Select an option from the dropdown list to specify the IPAM type. Options are **Use DHCP** or **Use Static IP**. To add a default route, select **Add route**, which allows you to enter the route destination IP/subnet **0.0.0.0/0**. Enter the gateway (for example, *192.168.1.1*). After submitting the docker image, navigate to **Installed Applications**, locate the docker image you added, select **Edit**, and change the route destination/subnet to **0.0.0.0/0**. |
![LaunchDockerImageAddDNS](/images/SCALE/22.12/LaunchDockerImageAddDNS.png "Add DNS Policy and Settings")
| Setting | Description |
|---------|-------------|
-| **DNS Policy** | Select the option from the dropdown list that specifies the policy. Default behavior is where Pod inherits the name resolution configuration from the node that the pods run on. If **None** is specified, it allows a pod to ignore DNS settings from the Kubernetes environment. Options are:
**Use Default DNS Policy where Pod inherits the name resolution configuration from the node**.
**Kubernetes internal DNS is prioritized and resolved first.** If the domain does not resolve with internal kubernetes DNS, the DNS query forwards to the upstream nameserver inherited from the node. This useful if the workload to access other services, workflows, using kubernetes internal DNS.
**For Pods running with hostNetwork and wanting to prioritize internal kubernetes DNS should make use of this policy.**
**Ignore DNS settings from the Kubernetes cluster**. |
-| **DNS Configuration** | Specify custom DNS configuration to apply to the pod. Click **Add** to dsiplay a **Nameserver** entry field. Click again to add another name server. |
+| **DNS Policy** | Select the option from the dropdown list that specifies the policy. With the default behavior, the pod inherits the name resolution configuration from the node that the pods run on. With **None**, a pod can ignore DNS settings from the Kubernetes environment. Options are:
**Use Default DNS Policy where Pod inherits the name resolution configuration from the node**.
**Kubernetes internal DNS is prioritized and resolved first.** If the domain does not resolve with internal Kubernetes DNS, the DNS query forwards to the upstream nameserver inherited from the node, which is useful if the workload needs to access other services using Kubernetes internal DNS.
**For Pods running with hostNetwork and wanting to prioritize internal Kubernetes DNS should make use of this policy.**
**Ignore DNS settings from the Kubernetes cluster**. |
+| **DNS Configuration** | Specify custom DNS configuration to apply to the pod. Click **Add** to display a **Nameserver** entry field. Click again to add another name server. |
| **Nameserver** | Enter the IP address of the name server. |
| **Searches** | Click **Add** to display a **Search Entry** field to enter the search value you want to configure. |
-| **DSN Options** | Click **Add** to display a block of **Option Entry Configuration** settings. Click again to display another block of settings if needed. |
+| **DNS Options** | Click **Add** to display a block of **Option Entry Configuration** settings. Click again to display another block of settings if needed. |
| **Option Name** | Required. Enter the option name. |
| **Option Value** | Required. Enter the value for the option name. |
+| **Provide access to node network namespace for the workload** | Select to allow the container to bind to any port. Some ports still require appropriate permissions. Unless you need it, we recommend leaving this setting disabled because app containers might try to bind to arbitrary ports like 80 or 443, which the TrueNAS UI already uses. |
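+
+To confirm how the DNS policy, nameservers, search domains, and options end up inside the container, you can inspect the pod resolver file. The namespace and pod names below are placeholders:
+
+```
+# Kubernetes writes the effective DNS configuration to /etc/resolv.conf
+# inside the pod.
+k3s kubectl exec -n <namespace> <pod-name> -- cat /etc/resolv.conf
+```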
{{< /expand >}}
-## Port Forwarding
-The **Port Forwarding** settings specify the container and node ports and the transfer protocol to use.
+## Port Forwarding
+The **Port Forwarding** settings specify the container and node ports and the transfer protocol.
{{< expand "Click Here for More Information" "v" >}}
Choose the protocol and enter port numbers for both the container and node. You can define multiple port forwards.
@@ -116,20 +116,20 @@ Choose the protocol and enter port numbers for both the container and node. You
| Setting | Description |
|---------|-------------|
-| **Configure Specify Node ports to forward to workload** | Click **Add** to display a block of **Port Forwarding Configuration** settings. |
+| **Specify Node ports to forward to workload** | Click **Add** to display a block of **Port Forwarding Configuration** settings. |
| **Container Port** | Required. Do not enter the same port number used by another system service or container. |
| **Node Port** | Required. Enter a node port number over **9000**. |
-| **Protocol** | Select the protocol to use from the dropdown list. Options are **TCP Protocol** or **UDP Protocol**. |
+| **Protocol** | Select the protocol from the dropdown list. Options are **TCP Protocol** or **UDP Protocol**. |
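+
+One way to confirm a port forward after deployment is to list the generated service and test the node port. This sketch assumes container port 80 forwarded to node port 9080 on a system reachable at *192.168.1.50*; all values are examples:
+
+```
+# The service created for the app shows the container-port:node-port mapping.
+k3s kubectl get svc -n <namespace>
+
+# Test the forwarded node port from another machine on the network.
+curl http://192.168.1.50:9080
+```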
{{< /expand >}}
## Storage
-The **Storage** settings specify the host path configuration, memory backed volumes, and storage volumes.
+The **Storage** settings specify the host path, memory-backed, and storage volumes.
{{< hint ok >}}
-Create the pool, dataset, zvol or directory for the container to use before you begin configuring the container as leaving the wizard closes it without saving.
+Exiting the wizard closes it without saving settings, so create the pool, dataset, zvol, or directory for the container to use before you begin configuring an app.
{{< /hint >}}
{{< expand "Click Here for More Information" "v" >}}
Set the Host Path volume to a dataset and directory path.
-For host path volumes, you can mount SCALE storage locations inside the container. Define the path to the system storage and the container internal path for the system storage location to appear.
+You can mount SCALE storage locations inside the container with host path volumes. Define the path to the system storage and the container internal path for the system storage location to appear.
For more details, see the [Kubernetes hostPath documentation](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath).
Users can create additional Persistent Volumes (PVs) for storage within the container.
PVs consume space from the pool chosen for application management. To do this, name each new dataset and define a path where that dataset appears inside the container.
@@ -140,22 +140,22 @@ PVs consume space from the pool chosen for application management. To do this, n
|---------|-------------|
| **Host Path Volumes** | Click **Add** to display a block of **Host Path Volume** settings. Click again to add another block of settings. |
| **Host Path** | Required. Enter or click arrow_right to the left of folder **/mnt** to browse to the location of the host path. Click on the dataset to select and display it in the **Host Path** field. |
-| **Mount Path** | Required. Enter the **/data** directory where host path mounts inside the pod. |
-| **Read Only** | Select to make the mount path inside the pod read only and prevent using the container to store data. |
+| **Mount Path** | Required. Enter the **/data** directory where the host path mounts inside the pod. |
+| **Read Only** | Select to make the mount path inside the pod read-only and prevent using the container to store data. |
![LaunchDockerImageStorageAddVolumes](/images/SCALE/22.12/LaunchDockerImageStorageAddVolumes.png "Storage Volume Settings")
| Setting | Description |
|---------|-------------|
| **Memory Backed Volumes** | Click **Add** to display a block of **Memory Backed Volume** settings. Click again to display another block of settings. |
-| **Mount Path** | Required. Enter the path where temporary path mounts inside the pod. |
+| **Mount Path** | Required. Enter the path where the temporary path mounts inside the pod. |
| **Volumes** | Click **Add** to display a block of **Volume** settings. Click again to add another block of settings. |
| **Mount Path** | Required. Enter the path where the volume mounts inside the pod. |
| **Dataset Name** | Required. Enter the name of the dataset. |
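+
+Because the wizard does not create storage for you, prepare the host path first. The sketch below uses an example pool named *tank* and then confirms the mount from inside the running container; you can also create the dataset from the **Datasets** screen instead of the CLI:
+
+```
+# Create a dataset to hold the app data before launching the wizard.
+zfs create tank/appdata
+zfs create tank/appdata/myapp
+
+# After deployment, confirm the host path is visible at the mount path
+# (here /data) inside the pod.
+k3s kubectl exec -n <namespace> <pod-name> -- ls /data
+```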
{{< /expand >}}
## Workload Details
-The **Workload Details** settings specify if containers in a pod run with TTY or STDIN enabled, allow it to enable any device on the host or configure host capabilities, and if you run the container as a user or group.
+The **Workload Details** settings specify if containers in a pod run with TTY or STDIN enabled, allow the container to access any device on the host or configure host capabilities, and specify if you run the container as a particular user or group.
{{< expand "Click Here for More Information" "v" >}}
![LaunchDockerImageAddWorkloadDetails](/images/SCALE/22.12/LaunchDockerImageAddWorkloadDetails.png "Workload Details")
@@ -163,13 +163,13 @@ The **Workload Details** settings specify if containers in a pod run with TTY or
| Setting | Description |
|---------|-------------|
| **Enable TTY** | Select to set containers in a pod to run with TTY enabled. Disabled by default. |
-| **enable STDIN** | Select to set containers in a pod to run with STDIN enabled. Disabled by default. |
-| **Privileged Mode** | Select to allow any container in a pod to enable any device on the host, but a **privileged** container is given access to all devices on the host. This allows the container nearly all the same access as processes running on the host. |
+| **Enable STDIN** | Select to set containers in a pod to run with STDIN enabled. Disabled by default. |
+| **Privileged Mode** | By default, a container cannot access any devices on the host. With Privileged Mode enabled, the container has access to all devices on the host, which allows the container nearly all the same access as processes running on the host. |
| **Capabilities** | Click **Add** to display an **Add Capability** field. Click again to add another field. |
| **Add Capability** | Enter a capability. |
| **Configure Container User and Group ID** | Select to display the **Run Container as User** and **Run Container as Group** settings to add security context (`runAsUser` and `runAsGroup` variables). |
-| **Run Container As User** | Enter a user ID (numeric value) for container. |
-| **Run Container as Group** | Enter a group ID (numeric value) for container. |
+| **Run Container As User** | Enter a numeric user ID for the container. |
+| **Run Container as Group** | Enter a numeric group ID for the container. |
{{< /expand >}}
## Scaling/Upgrade Policy
@@ -179,12 +179,12 @@ Use **Kill existing pods before creating new ones** to recreate the container or
![LaunchDockerImageScalingUpgrade](/images/SCALE/22.12/LaunchDockerImageScalingUpgrade.png "Scaling/Upgrade Policy")
Select **Create new pods and then kill the old ones** to retain your existing configuration and container until the upgrade completes before removing it.
-Select **Kill existing pods before creating new ones** to remove the exiting pod and start with a new updated pod. This is useful if your old pod was not functioning properly. For fewer issues, select **Kill existing pods before creating new ones**.
+Select **Kill existing pods before creating new ones** to remove the existing pod and start with a new, updated pod. Killing existing pods is useful if your old pod is not functioning properly, and it generally results in fewer issues.
{{< /expand >}}
## Resource Reservation and Limits
-The **Resource Reservation** screen specifies the **GPU configuration**.
+The **Resource Reservation** setting specifies the **GPU configuration**.
-The **Resource Limits** setting specifies limits you want to place on the Kubernetes pod.
+The **Resource Limits** setting specifies the limits you want to place on the Kubernetes pod.
{{< expand "Click Here for More Information" "v" >}}
![LaunchDockerImageResourcesAdd](/images/SCALE/22.12/LaunchDockerImageResourcesAdd.png "Resource Reservation and Limits")
@@ -192,13 +192,11 @@ The **Resource Limits** setting specifies limits you want to place on the Kubern
| Setting | Description |
|---------|-------------|
| **Enable Pod resource limits** | Select to enable resource limits and display the **CPU Limit** and **Memory Limit** settings. |
-| **CPU Limit** | Enter the integer values with suffix m(mill) you want to use to limit the CPU resource. For example, 1000m, 100, etc. |
+| **CPU Limit** | Enter an integer value, optionally with the suffix m (milli-CPU), to limit the CPU resource. For example, 1000m, 100, etc. |
| **Memory Limit** | Enter the number of bytes you want to limit memory to. Follow the number with the quantity suffix, like E, P, T, G, M, k or Ei, Pi, Ti, Mi, Gi, Ki. For example, 129e6, 129m, 12897484800m, 123Mi, etc. |
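+
+To confirm the limits applied to the running workload, you can read them back from the pod spec. In this example, 1000m equals one full CPU and 123Mi is roughly 129 MB; the namespace and pod names are placeholders:
+
+```
+# Print the resource limits Kubernetes applied to the first container in the pod.
+k3s kubectl get pod -n <namespace> <pod-name> -o jsonpath='{.spec.containers[0].resources.limits}'
+```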
{{< /expand >}}
+
## Portal Configuration
The **Portal Configuration** setting specifies whether to **Enable WebUI Portal (only supported in TrueNAS SCALE Bluefin)**.
-## Confirm Options
-The **Confirm Options** screen displays a summary of the image/container configuration. Click **Back** to return to previous screens to make changes and **Next** to advance back to **Confirm Options**. Click **Save** to create the image and add the application to the **Installed Applications** screen.
-
{{< taglist tag="scaledocker" limit="10" >}}
diff --git a/content/SCALE/SCALEUIReference/Credentials/LocalGroupsScreens.md b/content/SCALE/SCALEUIReference/Credentials/LocalGroupsScreens.md
index a7c6191e88..2eb36108dd 100644
--- a/content/SCALE/SCALEUIReference/Credentials/LocalGroupsScreens.md
+++ b/content/SCALE/SCALEUIReference/Credentials/LocalGroupsScreens.md
@@ -9,17 +9,14 @@ tags:
{{< toc >}}
-The **Credentials > Groups** screen displays a list of groups configured on the screen. By default, built-in groups are hidden until you make them visible.
+The **Credentials > Local Groups** screen displays a list of groups configured on the system. By default, built-in groups are hidden until you make them visible.
-![LocalGroupsSCALE](/images/SCALE/22.02/LocalGroupsSCALE.png "Local Groups Built-in Accounts")
+![GroupsListedSCALE](/images/SCALE/22.12/GroupsListedSCALE.png "Local Groups Hide Built-in Accounts")
-To see built-in groups, click the settings **Toggle Built-In Groups** icon to open the **Show Built-In Groups** dialog. Click **Show**.
-To hide the built-in groups, click the settings **Toggle Built-In Groups** icon again to open the **Hide Built-in Groups** dialog. click **Hide**.
+To see built-in groups, click the **Show Built-In Groups** toggle. The toggle turns blue and all built-in groups display. Click the **Show Built-In Groups** toggle again to show only non-built-in groups on the system.
-The **Credentials > Groups** screen displays the **No groups** screen if no groups other than built-in groups are configured on the system.
-
-![LocalGroupsNoGroups](/images/SCALE/22.02/LocalGroupsNoGroups.png "Local Groups No Groups")
+The **Credentials > Local Groups** screen displays the **No groups** screen if no groups other than built-in groups are configured on the system.
**Add** or **Add Groups** opens the **[Add Group](#add-group-screen)** configuration screen.
@@ -27,7 +24,7 @@ The **Credentials > Groups** screen displays the **No groups** screen if no grou
The expanded view of each group includes details on that group and provides the option to edit members. Click the expand_more arrow to show the group details screen.
-![LocalGroupDetailsSCALE](/images/SCALE/22.02/LocalGroupDetailsSCALE.png "Local Group Details")
+![GroupsListedExpandedSCALE](/images/SCALE/22.12/GroupsListedExpandedSCALE.png "Local Group Details")
**Members** opens the **[Update Members](#update-members-screen)** screen. **Delete** opens a delete confirmation dialog.
@@ -36,21 +33,24 @@ The expanded view of each group includes details on that group and provides the
The **Add User** and **Edit User** configuration screens display the same setting options.
Built-in users (except the **root** user) do not include the **Home Directory Permissions** settings, but all new users created, such as those for an SMB share like the **smbguest** user do.
-![AddGroupSCALE](/images/SCALE/22.02/AddGroupSCALE.png "Add Group")
+![AddGroupGIDConfigSCALE](/images/SCALE/22.12/AddGroupGIDConfigSCALE.png "Add Group")
| Setting | Description |
|---------|-------------|
| **GID** | Required. Enter a unique number for the group ID (**GID**) TrueNAS uses to identify a Unix group. Enter a number above 1000 for a group with user accounts (you cannot change the GID later). If a system service uses a group, the group ID must match the default port number for the service. |
-| **Name** | Required. Enter a name for the group. The group name cannot begin with a hyphen (-) or contain a space, tab, or any of these characters: colon (:), plus (+), ampersand (&), hash (#), percent (%), carat (^), open or close parentheses ( ), exclamation mark (!), at symbol (@), tilde (~), asterisk (*), question mark (?) greater or less than (<) (>), equal ). You can only use the dollar sign ($) as the last character in a user name. |
-| **Permit Sudo** | Select to give this group administrator permissions and the ability to use [sudo](https://www.sudo.ws/). When using sudo, a group is prompted for their account password. Leave **Permit Sudo** checkbox clear for better security. |
-| **Samba Authentication** | Select to allow Samba permissions and authentication to use this group. |
+| **Name** | Required. Enter a name for the group. The group name cannot begin with a hyphen (-) or contain a space, tab, or any of these characters: colon (:), plus (+), ampersand (&), hash (#), percent (%), carat (^), open or close parentheses ( ), exclamation mark (!), at symbol (@), tilde (~), asterisk (*), question mark (?), greater than or less than (<) (>), or equal (=). You can only use the dollar sign ($) as the last character in a group name. |
+| **Allowed sudo commands** | Enter the specific [sudo](https://www.sudo.ws/) commands members of this group are allowed to run. This grants members administrator permissions when using those commands (see the example after this table). When using sudo, a group member is prompted for their account password. |
+| **Allow all sudo commands** | Select to give this group member administrator permissions and the ability to use all [sudo](https://www.sudo.ws/) commands. When using sudo, a group member is prompted for their account password. |
+| **Allowed sudo commands with no password** | Enter the specific [sudo](https://www.sudo.ws/) commands members of this group are allowed to run without entering a password. This grants members administrator permissions when using those commands. |
+| **Allow all sudo commands with no password** | Select to give this group member administrator permissions and the ability to use all [sudo](https://www.sudo.ws/) commands with no password required. |
+| **Samba Authentication** | Select to allow this group to authenticate to and access data in [SMB]({{< relref "/SCALE/SCALETutorials/Shares/SMB/AddSMBShares.md" >}}) shares. |
| **Allow Duplicate GIDs** | Not recommended. Select to allow more than one group to have the same group ID. |
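+
+As an example of how the sudo settings behave for a group member, assume you added */usr/sbin/zfs* as an allowed command; the command path is only an illustration:
+
+```
+# List the sudo rules that apply to the logged-in account.
+sudo -l
+
+# Run one of the allowed commands. sudo prompts for the account password
+# unless the command was added under a no-password setting.
+sudo /usr/sbin/zfs list
+```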
## Update Members Screen
Use the **Update Members** screen to manage group permissions and access for large numbers of user accounts.
-![LocalGroupsUpdateMembersSCALE](/images/SCALE/22.02/LocalGroupsUpdateMembersSCALE.png "Update Members Screen")
+![GroupsManageMembersSCALE](/images/SCALE/22.12/GroupsManageMembersSCALE.png "Update Members Screen")
To add user accounts to the group, select users and then click .
Select **All Users** to move all users to the selected group, or select multiple users by holding Ctrl while clicking each entry.
diff --git a/content/SCALE/SCALEUIReference/Credentials/LocalUsersScreensSCALE.md b/content/SCALE/SCALEUIReference/Credentials/LocalUsersScreensSCALE.md
index 42ac1c6261..a5184b6ffb 100644
--- a/content/SCALE/SCALEUIReference/Credentials/LocalUsersScreensSCALE.md
+++ b/content/SCALE/SCALEUIReference/Credentials/LocalUsersScreensSCALE.md
@@ -77,19 +77,20 @@ Built-in users (except the **root** user) do not include the **Home Directory Pe
**Directory and Permissions** settings specify the user home directory and the permissions for that home directory.
{{< expand "Click Here for More Information" "v" >}}
-![AddUserDirPermAuthSCALE](/images/SCALE/22.12/AddUserDirPermAuthSCALE.png "Add User Directories, Permissions and Authentication Settings")
+![AddUserHomeDirPermSCALE](/images/SCALE/22.12/AddUserHomeDirPermSCALE.png "Add User Directories, Permissions and Authentication Settings")
| Setting | Description |
|---------|-------------|
-| **Home Directory** | Enter or browse to enter the path to the home directory for this user. If the directory exists and matches the **Username**, it is set as the home directory for the user. When the path does not end with a subdirectory matching the username, a new subdirectory is created. The full path to the user home directory displays here on the **Edit User** screen when editing this user. |
-| **Home Directory Permissions** | Select the permissions checkboxes (**Read**, **Write**, **Execute**) for each (**User**, **Group**, **Other**) to set default Unix permissions for the user home directory. Built-in users are read-only and do not see these permissions settings.|
+| **Home Directory** | Enter or browse to select the path to the home directory for this user. If the directory exists and matches the **Username**, it is set as the user home directory. When the path does not end with a subdirectory matching the username, a new subdirectory is created only if the **Create Home Directory** checkbox is selected. The full path to the user home directory displays here on the **Edit User** screen when editing this user. |
+| **Home Directory Permissions** | Select the permissions checkboxes (**Read**, **Write**, **Execute**) for each (**User**, **Group**, **Other**) to set default Unix permissions for the user home directory. Built-in users are read-only and do not see these permissions settings.|
+| **Create Home Directory** | Select to create a home directory for the user within the selected path when the home directory path does not end in the user name. |
{{< /expand >}}
### Authentication settings
**Authentication** settings specify authentication methods, the public SSH key, user administration access, and whether password authentication is enabled. They also cover the Shell options.
{{< expand "Click Here for More Information" "v" >}}
-![AddUserDirPermAuthSCALE](/images/SCALE/22.12/AddUserDirPermAuthSCALE.png "Add User Directories, Permissions and Authentication Settings")
+![AddUserHomeDirAuthSCALE](/images/SCALE/22.12/AddUserHomeDirAuthSCALE.png "Add User Directories, Permissions and Authentication Settings")
| Setting | Description |
|---------|-------------|
diff --git a/content/SCALE/SCALEUIReference/Storage/Datasets/DatasetsScreensScale.md b/content/SCALE/SCALEUIReference/Storage/Datasets/DatasetsScreensScale.md
index 317b44b0f0..2f391f7cf6 100644
--- a/content/SCALE/SCALEUIReference/Storage/Datasets/DatasetsScreensScale.md
+++ b/content/SCALE/SCALEUIReference/Storage/Datasets/DatasetsScreensScale.md
@@ -24,7 +24,7 @@ The **Datasets** screen displays **No Datasets** with a **Create Pool** button i
After creating a dataset, the left side of the screen displays a tree table that lists parent or child datasets (or zvols). The **Details for *datasetname*** area on the right side of the screen displays a set of dataset widgets.
{{< hint info >}}
-Large petabyte systems may report storage numbers inaccurately. Storage configurations with more than 9,007,199,254,740,992 bytes will round the last 4 digits.
+Large petabyte systems might report storage numbers inaccurately. Storage configurations with more than 9,007,199,254,740,992 bytes will round the last 4 digits.
For example, a system with 18,446,744,073,709,551,615 bytes reports the number as 18,446,744,073,709,552,000 bytes.
{{< /hint >}}
@@ -286,7 +286,7 @@ The **Other Options** help tune the dataset for specific data sharing protocols,
|---------|-------------|
| **ZFS Deduplication** | Select the option from the dropdown list to transparently reuse a single copy of duplicated data to save space. Options are **Inherit** to use the parent or root dataset settings. **On** to use deduplication. **Off** to not use deduplication, or **Verify** to do a byte-to-byte comparison when two blocks have the same signature to verify the block contents are identical.
Deduplication can improve storage capacity, but is RAM intensive. Compressing data is recommended before using deduplication.
Deduplicating data is a one-way process. *Deduplicated data cannot be undeduplicated!* |
| **Case Sensitivity** | Select the option from the dropdown list. **Sensitive** assumes file names are case sensitive. **Insensitive** assumes file names are not case sensitive. You cannot change case sensitivity after saving the dataset. |
-| **Share Type** | Select the option from the dropdown list to define the type of data sharing the dataset uses to optimize the dataset for that sharing protocol. Select **SMB** if using with an SMB share. Select **Generic** for all other share types. You cannot change this setting after the saving dataset. |
+| **Share Type** | Select the option from the dropdown list to define the type of data sharing the dataset uses and optimize the dataset for that sharing protocol. Select **SMB** if using the dataset with an SMB share. Select **Generic** for all other share types. Select **Apps** when creating a dataset to use with an application. If you plan to deploy container applications, the system automatically creates the **ix-applications** dataset, but this dataset is not used for application data storage. You cannot change this setting after saving the dataset. |
{{< /expand >}}
### Quota Management Settings - Advanced Options
diff --git a/content/SCALE/SCALEUIReference/Storage/Datasets/QuotaScreens.md b/content/SCALE/SCALEUIReference/Storage/Datasets/QuotaScreens.md
index 302751c783..612e137881 100644
--- a/content/SCALE/SCALEUIReference/Storage/Datasets/QuotaScreens.md
+++ b/content/SCALE/SCALEUIReference/Storage/Datasets/QuotaScreens.md
@@ -13,109 +13,94 @@ TrueNAS allows setting data or object quotas for user accounts and groups cached
## User Quotas Screen
Select **User Quotas** on the **Dataset Actions** list of options to display the **User Quotas** screen.
-The **User Quotas** screen displays the names and quota data of any user accounts cached on or connected to the system. If no users exist, the screen displays the **Add Users Quotas** button in the center of the screen.
+The **User Quotas** screen displays the names and quota data of any user accounts cached on or connected to the system. If no users exist, the screen displays **No User Quotas** in the center of the screen.
-![UserQuotasScreenNoQuotas](/images/SCALE/22.02/UserQuotasScreenNoQuotas.png "User Quotas Screen")
+![UserQuotasNoQuotasSCALE](/images/SCALE/22.12/UserQuotasNoQuotasSCALE.png "User Quotas Screen")
-The **Actions** button displays two options, **Add** which displays the **Set User Quotas** screen and **Toggle Display**.
-**Toggle Display** changes the view from filter view to a list view. Click when the screen filters out all users except those with quotas. The **Show all Users** confirmation dialog displays. Click **Show** to display the list of all users.
-If you have a number of user quotas set up, the **Actions** options include **Set Quotas (Bulk)**.
-
-![UserQuotasScreenListView](/images/SCALE/22.02/UserQuotasScreenListView.png "User Quotas List View")
+![UserQuotasDataQuotaSCALE](/images/SCALE/22.12/UserQuotasDataQuotaSCALE.png "User Quotas List View")
-Use the **Columns** button to displays options to customize the table view to add or remove information. Options are **Select All**, **ID**, **Data Quota**, **DQ Used**, **DQ % Used**, **Object Quota**, **Objects Used**, **OQ % Used**, and **Reset to Defaults**. After selecting **Select All** the option toggles to **Unselect All**.
+The **Show All Users** toggle button displays all users or hides built-in users. **Add** displays the **[Set User Quotas](#set-user-quotas-screen)** screen.
-### User Expanded View
-Click the expand_more icon to display a detailed individual user quota screen.
-
-![UserQuotasRootUserExpanded](/images/SCALE/22.02/UserQuotasRootUserExpanded.png "User Quotas Expanded View")
+If you have a number of user quotas set up, the **Actions** options include **Set Quotas (Bulk)**.
-Click the edit **Edit** button to display the **[Edit User](#edit-user-configuration-window)** window.
+Click on the name of the user to display the **[Edit User](#edit-user-configuration-window)** window.
### Edit User Configuration Window
-The **Edit User** window allows you to modify the user data quota and user object quota values for an individual user.
+The **Edit User Quota** window allows you to modify the user data quota and user object quota values for an individual user.
-![EditUserQuotaWindow](/images/SCALE/22.02/EditUserQuotaWindow.png "Edit User Quota")
+![EditUserQuotasSCALE](/images/SCALE/22.12/EditUserQuotasSCALE.png "Edit User Quota")
| Settings | Description |
|----------|-------------|
| **User** | Displays the name of the selected user. |
-| **User Data Quota (Examples: 500KiB, 500M, 2 TB)** | Enter the amount of disk space the selected user can use. Entering **0** allows the user to use all disk space. You can enter human-readable values such as 50 GiB, 500M, 2 TB, etc.). If units are not specified, the value defaults to bytes. |
+| **User Data Quota (Examples: 500KiB, 500M, 2 TB)** | Enter the amount of disk space the selected user can use. Entering **0** allows the user to use all disk space. You can enter human-readable values such as 50 GiB, 500M, 2 TB, etc. If units are not specified, the value defaults to bytes. |
| **User Object Quota** | Enter the number of objects the selected user can own. Entering **0** allows unlimited objects. |
-Click **Set Quota** to save changes or **Cancel** to close the window without saving.
+Click **Save** to save changes or click on the "X" to close the window without saving.
### Set User Quotas Screen
-To display the **Set User Quotas** screen click **Actions** or if the system does not have user quotas configured, click the **Add User Quotas** button.
+To display the **Set User Quotas** screen, click the **Add** button.
-![SetUserQuotasScreen](/images/SCALE/22.02/SetUserQuotasScreen.png "Set User Quotas")
+![AddUserQuotasSetQuotasSCALE](/images/SCALE/22.12/AddUserQuotasSetQuotasSCALE.png "Set User Quotas")
#### Set Quotas Settings
| Settings | Description |
|----------|-------------|
-| **User Data Quota (Examples: 500KiB, 500M, 2 TB)** | Enter the amount of disk space the selected user can use. Entering **0** allows the user to use all disk space. You can enter human-readable values such as 50 GiB, 500M, 2 TB, etc.). If units are not specified, the value defaults to bytes. |
+| **User Data Quota (Examples: 500KiB, 500M, 2 TB)** | Enter the amount of disk space the selected user can use. Entering **0** allows the user to use all disk space. You can enter human-readable values such as 50 GiB, 500M, 2 TB, etc. If units are not specified, the value defaults to bytes. |
| **User Object Quota** | Enter the number of objects the selected user can own. Entering **0** allows unlimited objects. |
#### Apply Quotas to Selected Users Settings
| Settings | Description |
|----------|-------------|
-| **Select Users Cached by this System** | Select the users from the dropdown list of options. |
-| **Search for Connected Users** | Click in the field to see the list of users on the system or type a user name and press Enter. A clickable list displays of found matches as you type. Click on the user to add the name. A warning dialog displays if there are not matches found. |
+| **Apply To Users** | Select the users from the dropdown list of options. |
-Click **Save** to set the quotas or **Cancel** to exit without saving.
+Click **Save** to set the quotas or click the "X" to exit without saving.
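+
+These quotas correspond to per-user ZFS quota properties, so you can also review them from the shell. The dataset *tank/home* and user *tom* below are examples:
+
+```
+# Show the data quota, space used, and object quota for one user on a dataset.
+zfs get userquota@tom,userused@tom,userobjquota@tom tank/home
+```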
## Group Quotas Screens
-Select **Group Quotas** on the **Dataset Actions** list of options to display the **Edit Group Quotas** screen.
+Select **Group Quotas** on the **Dataset Actions** list of options to display the **Group Quotas** screen.
-The **Edit Group Quotas** screen displays the names and quota data of any groups cached on or connected to the system. If no groups exist, the screen displays the **Add Groups Quotas** button in the center of the screen.
+The **Group Quotas** screen displays the names and quota data of any groups cached on or connected to the system. If no groups exist, the screen displays **No Group Quotas** in the center of the screen.
-![EditGroupQuotasNoGroups](/images/SCALE/22.02/EditGroupQuotasNoGroups.png "Group Quotas Screen")
+![GroupQuotasNoQuotaSCALE](/images/SCALE/22.12/GroupQuotasNoQuotaSCALE.png "Group Quotas Screen")
-The **Actions** button displays two options, **Add** which displays the **Set Group Quotas** screen and **Toggle Display**.
-**Toggle Display** changes the view from filter view to a list view. Click when the screen filters out all groups except those with quotas. The **Show all Groups** confirmation dialog displays. Click **Show** to display the list of all groups.
+The **Show All Groups** toggle button displays all groups or hides built-in groups. **Add** displays the **[Set Group Quotas](#set-group-quotas-screen)** screen.
-![EditGroupQuotasListView](/images/SCALE/22.02/EditGroupQuotasListView.png "Group Quotas List View")
+If you have a number of group quotas set up, the **Actions** options include **Set Quotas (Bulk)**.
-Use the **Columns** button to displays options to customize the table view to add or remove information. Options are **Select All**, **ID**, **Data Quota**, **DQ Used**, **DQ % Used**, **Object Quota**, **Objects Used**, **OQ % Used**, and **Reset to Defaults**. After selecting **Select All** the option toggles to **Unselect All**.
+Click on the name of the group to display the **[Edit Group](#edit-group-configuration-window)** window.
-### Group Expanded View
-Click the expand_more icon to display a detailed individual group quota screen.
-
-![EditGroupQuotasExpandedView](/images/SCALE/22.02/EditGroupQuotasExpandedView.png "Group Quotas Expanded View")
-
-Click the edit **Edit** button to display the **[Edit Group](#edit-group-configuration-window)** window.
+![GroupQuotasVideoQuotaSCALE](/images/SCALE/22.12/GroupQuotasVideoQuotaSCALE.png "Group Quotas List View")
### Edit Group Configuration Window
The **Edit Group** window allows you to modify the group data quota and group object quota values for an individual group.
-![EditGroupQuotaWindow](/images/SCALE/22.02/EditGroupQuotaWindow.png "Edit Qroup Quota")
+![EditGroupQuotasSCALE](/images/SCALE/22.12/EditGroupQuotasSCALE.png "Edit Group Quota")
| Settings | Description |
|----------|-------------|
| **Group** | Displays the name of the selected group(s). |
-| **Group Data Quota (Examples: 500KiB, 500M, 2 TB)** | Enter the amount of disk space the selected group can use. Entering **0** allows the group to use all disk space. You can enter human-readable values such as 50 GiB, 500M, 2 TB, etc.). If units are not specified, the value defaults to bytes. |
+| **Group Data Quota (Examples: 500KiB, 500M, 2 TB)** | Enter the amount of disk space the selected group can use. Entering **0** allows the group to use all disk space. You can enter human-readable values such as 50 GiB, 500M, 2 TB, etc. If units are not specified, the value defaults to bytes. |
| **Group Object Quota** | Enter the number of objects the selected group can own or use. Entering **0** allows unlimited objects. |
-Click **Set Quota** to save changes or **Cancel** to close the window without saving.
+Click **Save** to set the quotas or click the "X" to exit without saving.
-### Set User Quotas Screen
-To display the **Set Group Quotas** screen click **Actions** or if the system does not have group quotas configured, click the **Add Group Quotas** button.
+### Set Group Quotas Screen
+To display the **Set Group Quotas** screen, click the **Add** button.
-![SetGroupQuotasScreen](/images/SCALE/22.02/SetGroupQuotasScreen.png "Set Group Quotas")
+![AddGroupQuotasSetQuotaSCALE](/images/SCALE/22.12/AddGroupQuotasSetQuotaSCALE.png "Set Group Quotas")
#### Set Quotas Settings
| Settings | Description |
|----------|-------------|
-| **Group Data Quota (Examples: 500KiB, 500M, 2 TB)** | Enter the amount of disk space the selected group can use. Entering **0** allows the group to use all disk space. You can enter human-readable values such as 50 GiB, 500M, 2 TB, etc.). If units are not specified, the value defaults to bytes. |
+| **Group Data Quota (Examples: 500KiB, 500M, 2 TB)** | Enter the amount of disk space the selected group can use. Entering **0** allows the group to use all disk space. You can enter human-readable values such as 50 GiB, 500M, 2 TB, etc. If units are not specified, the value defaults to bytes. |
| **Group Object Quota** | Enter the number of objects the selected group can own or use. Entering **0** allows unlimited objects. |
#### Apply Quotas to Selected Groups Settings
| Settings | Description |
|----------|-------------|
-| **Select Groups Cached by this System** | Select the users from the dropdown list of options. |
-| **Search for Connected Groups** | Click in the field to see the list of groups on the system or type a group name and press Enter. A clickable list displays of found matches as you type. Click on the group to add the name. A warning dialog displays if there are no matches found. |
+| **Apply To Groups** | Select groups from the dropdown list of options. |
-Click **Save** to set the quotas or **Cancel** to exit without saving.
+Click **Save** to set the quotas or click the "X" to exit without saving.
{{< taglist tag="scalequotas" limit="10" >}}
{{< taglist tag="scaledatasets" limit="10" title="Related Dataset Articles" >}}
\ No newline at end of file
diff --git a/content/SCALE/SCALEUIReference/SystemSettings/FailoverScreen.md b/content/SCALE/SCALEUIReference/SystemSettings/FailoverScreen.md
new file mode 100644
index 0000000000..8de4b468ff
--- /dev/null
+++ b/content/SCALE/SCALEUIReference/SystemSettings/FailoverScreen.md
@@ -0,0 +1,40 @@
+---
+title: "Failover Screen"
+description: "This article provides information on the **Failover** screen settings and functions."
+weight: 45
+aliases:
+tags:
+- scaleenterprise
+- scalefailover
+---
+
+
+{{< enterprise >}}
+This article only applies to SCALE Enterprise (HA) systems.
+{{< /enterprise >}}
+
+The **System Settings > Failover** screen displays settings used on SCALE Enterprise (HA) systems to turn the failover function on or off, sync the primary and standby controllers, and allow administrator users to configure failover. The main menu option and screen only display on Enterprise (HA) systems with the correct license applied.
+
+![FailoverScreen](/images/SCALE/22.12/FailoverScreen.png "Failover Screen")
+
+| Setting | Description |
+|---------|-------------|
+| **Disable Failover** | Select to turn failover off. Leave clear to enable failover. |
+| **Default TrueNAS controller** | Select to make the current active controller the default controller when both TrueNAS controllers are online and HA is enabled. To change the default TrueNAS controller, clear this option on the default TrueNAS controller and allow the system to fail over. This process briefly interrupts system services. |
+| **Network Timeout Before Initiating Failover** | Enter a number in seconds to wait after a network failure before triggering a failover. Default is **0** which means failover occurs immediately, or after two seconds when the system is using a link aggregate. |
+| **Sync To Peer** | Initiates a sync operation that copies over the primary controller configuration to the standby controller. Opens the **[Sync To Peer](#sync-to-or-from-peer)** dialog to confirm the operation. |
+| **Sync From Peer** | Initiates a sync operation that copies over the standby controller configuration to the primary controller. |
+
+## Sync To or From Peer
+The **Sync To Peer** and **Sync From Peer** buttons each open a confirmation dialog before SCALE performs the requested operation.
+
+![FailoverSyncToPeerDialog](/images/SCALE/22.12/FailoverSyncToPeerDialog.png "Failover Sync To Peer Dialog")
+
+| Setting | Description |
+|---------|-------------|
+| **Reboot standby TrueNAS controller** | Select to cause the standby controller to reboot after the sync operation completes. |
+| **Confirm** | Select to confirm you want to perform the sync-to-peer operation. |
+| **Proceed** | Begins the sync operation. |
+
+
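+If you script controller maintenance, you might trigger the same sync operation from a SCALE shell. The sketch below is a rough example only; it assumes the `midclt` client is available and that `failover.sync_to_peer` is the middleware method behind the **Sync To Peer** button, so verify both against your release before relying on it.
+
+```python
+# Rough sketch: trigger a configuration sync to the standby controller from a
+# SCALE shell. Assumes the midclt client is available and that
+# "failover.sync_to_peer" is the middleware method behind the Sync To Peer
+# button (verify against your release first).
+import json
+import subprocess
+
+def sync_to_peer(reboot_standby: bool = False) -> None:
+    """Copy the active controller configuration to the standby controller."""
+    options = json.dumps({"reboot": reboot_standby})
+    subprocess.run(["midclt", "call", "failover.sync_to_peer", options], check=True)
+
+sync_to_peer(reboot_standby=False)
+```
+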
+{{< taglist tag="scaleEnterprise" limit="10" title="Related Enterprise Articles" >}}
\ No newline at end of file
diff --git a/content/SCALE/SCALEUIReference/SystemSettings/Services/FTPServiceScreenSCALE.md b/content/SCALE/SCALEUIReference/SystemSettings/Services/FTPServiceScreenSCALE.md
index d6fb6c140d..cd7e9c48bb 100644
--- a/content/SCALE/SCALEUIReference/SystemSettings/Services/FTPServiceScreenSCALE.md
+++ b/content/SCALE/SCALEUIReference/SystemSettings/Services/FTPServiceScreenSCALE.md
@@ -8,14 +8,13 @@ tags:
- scaleftp
- scalesftp
- scaletftp
+ - scalefiletransfer
---
-
{{< toc >}}
-
The [File Transfer Protocol (FTP)](https://tools.ietf.org/html/rfc959) is a simple option for data transfers.
-The SSH and Trivial FTP options provide secure or simple config file transfer methods respectively.
+The SSH options provide secure transfer methods for critical objects like configuration files, while the Trivial FTP options provide simple file transfer methods for non-critical files.
The **FTP** service has basic and advanced setting options.
Click the edit for **FTP** to open the **Basic Settings** configuration screen.
@@ -31,27 +30,27 @@ To configure FTP, go to **System Settings > Services** and find **FTP**, then cl
| **Port** | Enter the port the FTP service listens on. |
| **Clients** | Enter the maximum number of simultaneous clients. |
| **Connections** | Enter the maximum number of connections per IP address. **0** is unlimited. |
-| **Login Attempts** | Enter the maximum attempts before client is disconnected. Increase if users are prone to misspellings or typos. |
-| **Notransfer Timeout** | Enter the maximum number of seconds a client is allowed to spend connected, after authentication, without issuing a command which results in creating an active or passive data connection (i.e. sending/receiving a file, or receiving a directory listing). |
-| **Timeout** | Enter the maximum client idle time in seconds before disconnect. Default value is **600** seconds. |
+| **Login Attempts** | Enter the maximum attempts before the client disconnects. Increase if users are prone to misspellings or typos. |
+| **Notransfer Timeout** | Enter the maximum number of seconds a client is allowed to spend connected, after authentication, without issuing a command which results in creating an active or passive data connection (sending/receiving a file or receiving a directory listing). |
+| **Timeout** | Enter the maximum client idle time in seconds before disconnecting. The default value is **600** seconds. |
## FTP Advanced Settings
-**Advanced Settings** include the **General Options** on the **Basic Settings** configuration screen, and allow you to specify access permissions, TLS settings, bandwidth and other setting to further customize FTP access.
+**Advanced Settings** include the **General Options** on the **Basic Settings** configuration screen and allow you to specify access permissions, TLS settings, bandwidth, and other settings to customize FTP access.
### Access and TLS Setting Options
![FTPAdvancedSettingsAccess](/images/SCALE/22.12/FTPAdvancedSettingsAccess.png "Services FTP Advanced Settings Access Options")
#### Access Settings
-**Access** settings specify user login, file and directory access permissions.
+**Access** settings specify user login, file, and directory access permissions.
| Settings | Description |
|----------|-------------|
-| **Always Chroot** | Select to only allow users access their home directory if they are in the **wheel** group. This option increases security risk. To confine FTP sessions to a home directory of a local user, enable **chroot** and select **Allow Local User Login**. |
-| **Allow Root Login** | Select to allow root logins. This option increases security risk so enabling this is discouraged. Do *not* allow anonymous or root access unless it is necessary.
-For better security, enable TLS when possible (especially when exposing FTP to a WAN). TLS effectively makes this [FTPS](https://tools.ietf.org/html/rfc4217). |
-| **Allow Anonymous Login** | Select to allow anonymous FTP logins with access to the directory specified in **Path**. Selecting this displays the **Path** field. Enter or browse to the loction to populate the field. |
-| **Allow Local User Login** | Select to allow any local user to log in. By default, only members of the **ftp** group are allowed to log in. |
+| **Always Chroot** | Only allows users to access their home directory if they are in the **wheel** group. This option increases security risk. To confine FTP sessions to a local user home directory, enable **chroot** and select **Allow Local User Login**. |
+| **Allow Root Login** | Select to allow root logins. This option increases security risk, so enabling this is discouraged. Do *not* allow anonymous or root access unless it is necessary. Enable TLS when possible (especially when exposing FTP to a WAN). TLS effectively makes this [FTPS](https://tools.ietf.org/html/rfc4217) for better security. |
+| **Allow Anonymous Login** | Select to allow anonymous FTP logins with access to the directory specified in **Path**. Selecting this displays the **Path** field. Enter or browse the location to populate the field. |
+| **Allow Local User Login** | Select to allow any local user to log in. Only members of the **ftp** group may log in by default. |
| **Require IDENT Authentication** | Select to require IDENT authentication. Setting this option results in timeouts when ident (or in **Shell** `identd`) is not running on the client. |
| **File Permissions** | Select the default permissions for newly created files. |
| **Directory Permissions** | Select the default permissions for newly created directories. |
@@ -59,52 +58,51 @@ For better security, enable TLS when possible (especially when exposing FTP to a
![FTPAdvancedSettingsTLS](/images/SCALE/22.12/FTPAdvancedSettingsTLS.png "Services FTP Advanced Settings TLS Options")
#### TLS Settings
-**TLS** settings specify the authentication methods you want to apply and whether you want to encrypt the data you transfer across the Internet.
+**TLS** settings specify the authentication methods to apply and whether to encrypt the data you transfer across the Internet.
| Settings | Description |
|----------|-------------|
| **Enable TLS** | Select to allow encrypted connections. Requires a certificate (created or imported using **System > Certificates**. |
-| **Certificate** | Select the SSL certificate to use for TLS FTP connections from the dropdown list. To create a certificate, go to **System** > **Certificates**. |
+| **Certificate** | Select the SSL certificate for TLS FTP connections from the dropdown list. To create a certificate, go to **System** > **Certificates**. |
| **TLS Policy** | Select the policy from the dropdown list of options. Options are **On**, **off**, **Data**, **!Data**, **Auth**, **Ctrl**, **Ctrl + Data**, **Ctrl +!Data**, **Auth + Data** or **Auth +!Data**. Defines whether the control channel, data channel, both channels, or neither channel of an FTP session must occur over SSL/TLS. The policies are described [here](http://www.proftpd.org/docs/directives/linked/config_ref_TLSRequired.html). |
-| **TLS Allow Client Renegotiations** | Select to allow client renegotiation. This option is not recommended. Setting this option breaks several security measures. See [mod_tls](http://www.proftpd.org/docs/contrib/mod_tls.html) for details. |
-| **TLS Allow Dot Login** | If select, TrueNAS checks the user home directory for a .tlslogin file containing one or more PEM-encoded certificates. If not found, the user is prompted for password authentication. |
-| **TLS Allow Per User** | If set, allows sending a user password unencrypted. |
+| **TLS Allow Client Renegotiations** | Select to allow client renegotiation. We do not recommend this option. Setting this option breaks several security measures. See [mod_tls](http://www.proftpd.org/docs/contrib/mod_tls.html) for details. |
+| **TLS Allow Dot Login** | Select to have TrueNAS check the user home directory for a .tlslogin file containing one or more PEM-encoded certificates. If not found, the user must enter their password. |
+| **TLS Allow Per User** | Select to allow sending a user password unencrypted. |
| **TLS Common Name Required** | Select to require the common name in the certificate to match the FQDN of the host. |
-| **TLS Enable Diagnostics** | Selected to logs more verbose, which is helpful when troubleshooting a connection. |
+| **TLS Enable Diagnostics** | Select for more verbose logging, which is helpful when troubleshooting a connection. |
| **TLS Export Certificate Data** | Select to export the certificate environment variables. |
-| **TLS No Certificate Request** | Select if the client cannot connect likely because the client server is poorly handling the server certificate request. |
+| **TLS No Certificate Request** | Select if the client cannot connect, which is likely because the client is not correctly handling the server certificate request. |
| **TLS No Empty Fragments** | Not recommended. This option bypasses a security mechanism. |
| **TLS No Session Reuse Required** | This option reduces connection security. Only use it if the client does not understand reused SSL sessions. |
-| **TLS Export Standard Vars** | Selected to set several environment variables. |
+| **TLS Export Standard Vars** | Select to set several environment variables. |
| **TLS DNS Name Required** | Select to require the client DNS name to resolve to its IP address and the cert contain the same DNS name. |
| **TLS IP Address Required** | Select to require the client certificate IP address to match the client IP address. |
-### Bandwidth Settings
-**Bandwidth** settings specify the amount of space you want to allocate for local and anonymous user uploads and downloads.
-
-![FTPAdvancedSettingsBandwidth](/images/SCALE/22.12/FTPAdvancedSettingsBandwidth.png "Services FTP Advanced Settings Bandwidth Options")
-
-| Settings | Description |
-|----------|-------------|
-| **Local User Upload Bandwidth: (Examples: 500 KiB, 500M, 2 TB)** | Enter a value in KiBs or greater. A default of **0 Kib** means unlimited. If measurement is not specified it defaults to KiB. This field accepts human-readable input in KiBs or greater (M, GiB, TB, etc.). Default **0 KiB** is unlimited. |
-| **Local User Download Bandwidth** | Enter a value in KiBs or greater. A default of **0 Kib** means unlimited. If measurement is not specified it defaults to KiB. This field accepts human-readable input in KiBs or greater (M, GiB, TB, etc.). Default **0 KiB** is unlimited. |
-| **Anonymous User Upload Bandwidth** | Enter a value in KiBs or greater. A default of **0 Kib** means unlimited. If measurement is not specified it defaults to KiB. This field accepts human-readable input in KiBs or greater (M, GiB, TB, etc.). Default **0 KiB** is unlimited. |
-| **Anonymous User Download Bandwidth** | Enter a value in KiBs or greater. A default of **0 Kib** means unlimited. If measurement is not specified it defaults to KiB. This field accepts human-readable input in KiBs or greater (M, GiB, TB, etc.). Default **0 KiB** is unlimited. |
-
### Other Options
-
![FTPAdvancedSettingsOtherOptions](/images/SCALE/22.12/FTPAdvancedSettingsOtherOptions.png "Services FTP Advanced Settings Other Options")
| Settings | Description |
|----------|-------------|
| **Minimum Passive Port** | Enter a numeric value. Used by clients in PASV mode. A default of **0** means any port above 1023. |
| **Maximum Passive Port** | Enter a numeric value. Used by clients in PASV mode. A default of **0** means any port above 1023. |
-| **Enable FXP** | Select to enable the File eXchange Protocol (FXP). Not recommended as this leaves the server vulnerable to FTP bounce attacks. |
+| **Enable FXP** | Select to enable the File eXchange Protocol (FXP). We do not recommend FXP since it leaves the server vulnerable to FTP bounce attacks. |
| **Allow Transfer Resumption** | Select to allow FTP clients to resume interrupted transfers. |
-| **Perform Reverse DNS Lookups** | Select to allow performing reverse DNS lookups on client IPs. Causes long delays if reverse DNS isn't configured. |
-| **Masquerade Address** | Enter a public IP address or host name. Set if FTP clients cannot connect through a NAT device. |
+| **Perform Reverse DNS Lookups** | Select to allow performing reverse DNS lookups on client IPs. This option causes long delays if you do not configure reverse DNS. |
+| **Masquerade Address** | Enter a public IP address or host name. Use if FTP clients cannot connect through a NAT device. |
| **Display Login** | Enter a message that displays to local login users after authentication. Anonymous login users do not see this message. |
| **Auxiliary Parameters** | Used to add additional [proftpd(8)](https://linux.die.net/man/8/proftpd) parameters. |
-{{< taglist tag="scaleftp" limit="10" >}}
\ No newline at end of file
+### Bandwidth Settings
+**Bandwidth** settings specify the space you want to allocate for local and anonymous user uploads and downloads.
+
+![FTPAdvancedSettingsBandwidth](/images/SCALE/22.12/FTPAdvancedSettingsBandwidth.png "Services FTP Advanced Settings Bandwidth Options")
+
+| Settings | Description |
+|----------|-------------|
+| **Local User Upload Bandwidth: (Examples: 500 KiB, 500M, 2 TB)** | Enter a value in KiB or greater. This field accepts human-readable input (M, GiB, TB, etc.). If you do not specify a unit, the value defaults to KiB. The default **0 KiB** means unlimited. |
+| **Local User Download Bandwidth** | Enter a value in KiB or greater. This field accepts human-readable input (M, GiB, TB, etc.). If you do not specify a unit, the value defaults to KiB. The default **0 KiB** means unlimited. |
+| **Anonymous User Upload Bandwidth** | Enter a value in KiB or greater. This field accepts human-readable input (M, GiB, TB, etc.). If you do not specify a unit, the value defaults to KiB. The default **0 KiB** means unlimited. |
+| **Anonymous User Download Bandwidth** | Enter a value in KiB or greater. This field accepts human-readable input (M, GiB, TB, etc.). If you do not specify a unit, the value defaults to KiB. The default **0 KiB** means unlimited. |
+
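+As a side note, the human-readable convention these fields use can be illustrated with a short, standalone sketch. This is not the parser TrueNAS or proftpd uses, only an example of how such values reduce to bytes with KiB as the default unit.
+
+```python
+# Illustration only: normalize a human-readable bandwidth value such as
+# "500 KiB", "500M", or "2 TB" to bytes, defaulting to KiB when no unit is
+# given. This is not the parser TrueNAS or proftpd uses.
+import re
+
+UNITS = {
+    "": 1024,            # bare numbers default to KiB
+    "K": 1024, "KB": 1024, "KIB": 1024,
+    "M": 1024**2, "MB": 1024**2, "MIB": 1024**2,
+    "G": 1024**3, "GB": 1024**3, "GIB": 1024**3,
+    "T": 1024**4, "TB": 1024**4, "TIB": 1024**4,
+}
+
+def to_bytes(value: str) -> int:
+    match = re.fullmatch(r"\s*([\d.]+)\s*([A-Za-z]*)\s*", value)
+    if not match:
+        raise ValueError(f"unrecognized size: {value!r}")
+    number, unit = float(match.group(1)), match.group(2).upper()
+    if unit not in UNITS:
+        raise ValueError(f"unrecognized unit: {unit!r}")
+    return int(number * UNITS[unit])
+
+print(to_bytes("500 KiB"))  # 512000
+print(to_bytes("500M"))     # 524288000
+print(to_bytes("0"))        # 0, which the UI treats as unlimited
+```
+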
+{{< taglist tag="scaleftp" limit="10" >}}
diff --git a/content/SCALE/SCALEUIReference/Virtualization/VirtualizationScreens.md b/content/SCALE/SCALEUIReference/Virtualization/VirtualizationScreens.md
index 96572f7a6d..6bff22c8c1 100644
--- a/content/SCALE/SCALEUIReference/Virtualization/VirtualizationScreens.md
+++ b/content/SCALE/SCALEUIReference/Virtualization/VirtualizationScreens.md
@@ -13,23 +13,22 @@ tags:
The **Virtualization** option displays the **Virtual Machines** screen that displays the list of VMs configured on the TrueNAS SCALE system.
+![VirtualMachinesScreenwithVM](/images/SCALE/22.12/VirtualMachinesScreenwithVM.png "Virtual Machine Screen")
+
If there are no VMs configured on the system, the **No Virtual Machines** screen displays. This also displays if you delete all VMs on the system.
![AddVMNoVMs](/images/SCALE/22.12/AddVMNoVMs.png "No Virtual Machine Screen")
-After adding virtual machines (VMs) to the system the screen displays a list of the VMs.
+The **Add Virtual Machines** button and the **Add** button in the top right of the screen open the **[Create Virtual Machine](#create-virtual-machine-wizard-screens)** wizard configuration screens.
-![VMListedSCALE](/images/SCALE/22.12/VMListedSCALE.png "Virtual Machines Listed")
+After adding virtual machines (VMs) to the system, the screen displays a list of the VMs.
-Click on the VM name or the expand down arrow to the right of a VM to open the details screen for that VM.
+Click on the VM name or the expand down arrow to the right of a VM to open the details screen for that VM.
The **State** toggle displays and changes the state of the VM.
The **Autostart** checkbox, when selected, automatically starts the VM if the system reboots. When cleared you must manually start the VM.
## Create Virtual Machine Wizard Screens
-
-**Add Virtual Machines** and the **Add** button in the top right of the screen opens the **[Create Virtual Machine](#create-virtual-machine-wizard-screens)** wizard configuration screens.
-
The **Create Virtual Machine** configuration wizard displays all settings to set up a new virtual machine.
Use **Next** and **Back** to advance to the next or return to the previous screen to change a setting.
@@ -39,7 +38,7 @@ Use **Save** to close the wizard screens and add the new VM to the **Virtual Mac
The **Operating System** configuration screen settings specify the VM operating system type, the time it uses, its boot method, and its display type.
{{< expand "Click Here for More Information" "v" >}}
-![CreateVMWOpSysSCALE](/images/SCALE/22.12/CreateVMWOpSysSCALE.png "Operating System 1")
+![AddVMOperSys](/images/SCALE/22.12/AddVMOperSys.png "Operating System 1")
| Setting | Description |
|---------|-------------|
@@ -59,7 +58,7 @@ The **Operating System** configuration screen settings specify the VM operating
The **CPU and Memory** configuration wizard screen settings specify the number of virtual CPUs to allocate to the virtual machine, cores per virtual CPU socket, and threads per core. Also to specify the CPU mode and model, and the memory size.
{{< expand "Click Here for More Information" "v" >}}
-![CreateVMWCPUMemSCALE](/images/SCALE/22.12/CreateVMWCPUMemSCALE.png "CPU and Memory")
+![AddVMMemory](/images/SCALE/22.12/AddVMMemory.png "CPU and Memory")
| Setting | Description |
|---------|-------------|
@@ -68,17 +67,16 @@ The **CPU and Memory** configuration wizard screen settings specify the number o
| **Threads** | Required. Enter the number of threads per core. A single CPU core can have up to two threads per core. A dual core could have up to four threads. The product of vCPUs, cores, and threads must not exceed 16. |
| **Optional: CPU Set (Examples: 0-3,8-11)** | Specify the logical cores that VM is allowed to use. Better cache locality can be achieved by setting CPU set base on CPU topology. E.g. to assign cores: 0,1,2,5,9,10,11 you can write: `1-2,5,9-11` |
| **Pin vcpus** | When number of vcpus is equal to number of cpus in CPU set vcpus can be automatically pinned into CPU set. Pinning is done by mapping each vcpu into single cpu number in following the order in CPU set. This will improve CPU cache locality and can reduce possible stutter in GPU passthrough VMs. |
-| **CPU Mode** | Select the CPU mode attribute from the dropdown list to allow your guest VM CPU to be as close to the host CPU as possible. Select **Custom** to make it so a persistent guest virtual machine sees the same hardware no matter what physical machine the guest VM boots on. It is the default if the CPU mode attribute is not specified. This mode describes the CPU presented to the guest. Select **Host Model** to use this shortcut to copying the physical host machine CPU definition from the capabilities XML into the domain XML. As the CPU definition copies just before starting a domain, a different physical host machine can use the same XML while still providing the best guest VM CPU each physical host machine supports. Select **Host Passthrough** when the CPU visible to the guest VM is exactly the same as the physical host machine CPU, including elements that cause errors within libvirt. The downside of this is you cannot reproduce the guest VM environment on different hardware. |
-| **CPU Model** | Select a CPU model to emulate when **CPU Mode** is set to **Custom**. |
+| **CPU Mode** | Select the CPU mode attribute from the dropdown list to allow your guest VM CPU to be as close to the host CPU as possible. Select **Custom** to make it so a persistent guest virtual machine sees the same hardware no matter what physical machine the guest VM boots on. It is the default if the CPU mode attribute is not specified. This mode describes the CPU presented to the guest. Select **Host Model** to use this shortcut to copying the physical host machine CPU definition from the capabilities XML into the domain XML. As the CPU definition copies just before starting a domain, a different physical host machine can use the same XML while still providing the best guest VM CPU each physical host machine supports. Select **Host Passthrough** when the CPU visible to the guest VM is exactly the same as the physical host machine CPU, including elements that cause errors within libvirt. The downside of this is you cannot reproduce the guest VM environment on different hardware. |
+| **CPU Model** | Select a CPU model to emulate. |
| **Memory Size** | Allocate RAM for the VM. Minimum value is 256 MiB. This field accepts human-readable input (Ex. 50 GiB, 500M, 2 TB). If units are not specified, the value defaults to bytes. |
-| **Minimum Memory Size** | When not specified, guest system is given the fixed amount of memory listed in **Memory Size**. When **Minimum Memory Size** is specified, guest system is given memory within the range of **Minimum Memory Size** and **Memory Size** as needed. |
| **Optional: NUMA nodeset (Example: 0-1)** | Node set allows setting NUMA nodes for multi NUMA processors when CPU set was defined. Better memory locality can be achieved by setting node set based on assigned CPU set. Example: if cpus 0,1 belong to NUMA node 0, then setting nodeset to 0 will improve memory locality. |
{{< /expand >}}
### Disks Screen
The **Disks** configuration wizard screen settings specify whether to create a new zvol on an existing dataset for a disk image or use an existing zvol or file for the VM. You also specify the disk type, zvol location and size.
{{< expand "Click Here for More Information" "v" >}}
-![CreateVMWDisksSCALE](/images/SCALE/22.12/CreateVMWDisksSCALE.png "Create VM Disks")
+![AddVMDisks](/images/SCALE/22.12/AddVMDisks.png "Create VM Disks")
| Setting | Description |
|---------|-------------|
@@ -93,7 +91,7 @@ The **Disks** configuration wizard screen settings specify whether to create a n
The **Network Interface** screen settings specify the network adaptor type, mac address and the physical network interface card associated with the VM.
{{< expand "Click Here for More Information" "v" >}}
-![CreateVMWNetworkInterfaceSCALE](/images/SCALE/22.12/CreateVMWNetworkInterfaceSCALE.png "Network Interface")
+![AddVMNetwork](/images/SCALE/22.12/AddVMNetwork.png "Network Interface")
| Setting | Description |
|---------|-------------|
@@ -106,7 +104,7 @@ The **Network Interface** screen settings specify the network adaptor type, mac
The **Installation Media** screen settings specify the operation system installation media image on a dataset or upload one from the local machine.
{{< expand "Click Here for More Information" "v" >}}
-![CreateVMWInstallMediaSCALE](/images/SCALE/22.12/CreateVMWInstallMediaSCALE.png "Installation Media")
+![AddVMInstallMedia](/images/SCALE/22.12/AddVMInstallMedia.png "Installation Media")
| Setting | Description |
|---------|-------------|
@@ -119,7 +117,7 @@ The **Installation Media** screen settings specify the operation system installa
The **GPU** screen settings specify graphic processing unit (GPU) for the VM. It also provides the option to hide the VM from the Microsoft Reserved Partition (MSR) on Windows systems.
{{< expand "Click Here for More Information" "v" >}}
-![CreateVMWGPUsSCALE](/images/SCALE/22.12/CreateVMWGPUsSCALE.png "GPU Screen")
+![AddVMGPU](/images/SCALE/22.12/AddVMGPU.png "GPU Screen")
| Setting | Description |
|---------|-------------|
@@ -129,10 +127,6 @@ The **GPU** screen settings specify graphic processing unit (GPU) for the VM. It
{{< /expand >}}
### Confirm Options Screen
The **Confirm Options** screen displays the settings selected using the **Create Virtual Machine** wizard screens. It displays the number CPUs, cores, threads, the memory, name of the VM and the disk size.
-{{< expand "Click Here for More Information" "v" >}}
-
-![CreateVMWConfirmSCALE](/images/SCALE/22.12/CreateVMWConfirmSCALE.png "Confirm Screen")
-{{< /expand >}}
Click **Save** to add the VM to the **Virtual Machines** screen. Click **Back** to return to the previous screens to make changes.
## Virtual Machine Detail Screen
@@ -142,7 +136,7 @@ The details view of any VM displays the basic information on the number of virtu
![VirtualMachinesScreenwithVMDetails](/images/SCALE/22.12/VirtualMachinesScreenwithVMDetails.png "VM Details Screen")
The buttons below the details show the actions options for each VM.
-
+
| Operation | Icon | Description |
|-----------|------|-------------|
| **Start** | | Starts a VM. The toggle turns blue when the VM switches to running. Toggles to **Stop**. After clicking **Start** the **Restart**,**Power Off**, **Display** and **Serial Shell** option buttons display. |
@@ -203,6 +197,7 @@ The **Edit** screen **General Settings** specify the basic settings for the VM.
### Edit CPU and Memory Settings
The **Edit** screen **CPU and Memory** settings are the same as those in the **Create Virtual Machine** wizard screen.
{{< expand "Click Here for More Information" "v" >}}
+
![EditVMCPUandMemory](/images/SCALE/22.12/EditVMCPUandMemory.png "Virtual Machines Edit CPU and Memory")
| **Virtual CPUs** | Required. Enter the number of virtual CPUs to allocate to the virtual machine. The maximum is 16, or fewer if the host CPU limits the maximum. The VM operating system might impose operational or licensing restrictions on the number of CPUs. |
@@ -213,7 +208,6 @@ The **Edit** screen **CPU and Memory** settings are the same as those in the **C
| **CPU Mode** | Select the CPU mode attribute from the dropdown list to allow your guest VM CPU to be as close to the host CPU as possible. Select **Custom** to make it so a persistent guest virtual machine sees the same hardware no matter what physical physical machine the guest VM boots on. It is the default if the CPU mode attribute is not specified. This mode describes the CPU presented to the guest. Select **Host Model** to use this shortcut to copying the physical host machine CPU definition from the capabilities XML into the domain XML. As the CPU definition copies just before starting a domain, a different physical host machine can use the same XML while still providing the best guest VM CPU each physical host machine supports. Select **Host Passthrough** when the CPU visible to the guest VM is exactly the same as the physical host machine CPU, including elements that cause errors within libvirt. The downside of this is you cannot reproduce the guest VM environment on different hardware. |
| **CPU Model** | Select a CPU model to emulate. |
| **Memory Size** | Allocate RAM for the VM. Minimum value is 256 MiB. This field accepts human-readable input (Ex. 50 GiB, 500M, 2 TB). If units are not specified, the value defaults to bytes. |
-| **Minimum Memory Size** | If this is not specified, guest OS is given the fixed amount of memory defined in **Memory Size**. When **Minimum Memory Size** is specified, guest OS is given memory within a range between **Minimum Memory Size** and **Memory Size** as needed. |
| **Optional: NUMA nodeset (Example: 0-1)** | Node set allows setting NUMA nodes for multi NUMA processors when CPU set was defined. Better memory locality can be achieved by setting node set based on assigned CPU set. Example: if cpus 0,1 belong to NUMA node 0, then setting nodeset to 0 will improve memory locality. |
{{< /expand >}}
### Edit GPU Settings
diff --git a/content/SCALE/SCALEUIReference/Virtualization/_index.md b/content/SCALE/SCALEUIReference/Virtualization/_index.md
index 9fd7bbedb2..20f95cf09c 100644
--- a/content/SCALE/SCALEUIReference/Virtualization/_index.md
+++ b/content/SCALE/SCALEUIReference/Virtualization/_index.md
@@ -6,7 +6,7 @@ weight: 80
The Virtualization section allows users to set up Virtual Machines (VMs) to run alongside TrueNAS. Delegating processes to VMs reduces the load on the physical system, which means users can utilize additional hardware resources. Users can customize six different segments of a VM when creating one in TrueNAS SCALE.
-![VMRunningOptionsSCALE](/images/SCALE/VMRunningOptionsSCALE.png "SCALE Virtualization Screen")
+![VirtualizationSCALE](/images/SCALE/VirtualizationSCALE.png "SCALE Virtualization Screen")
{{< include file="static/includes/General/MenuNav.md.part" markdown="true" >}}
diff --git a/content/_includes/AppsVMsNoHTTPS.md b/content/_includes/AppsVMsNoHTTPS.md
new file mode 100644
index 0000000000..2d231abf35
--- /dev/null
+++ b/content/_includes/AppsVMsNoHTTPS.md
@@ -0,0 +1,4 @@
+---
+---
+
+Special consideration should be given when TrueNAS is installed in a VM, as VMs are not configured to use HTTPS. Enabling HTTPS redirect can interfere with the accessibility of some apps. To determine if HTTPS redirect is active, go to **System Settings** > **GUI** > **Settings** and locate the **Web Interface HTTP -> HTTPS Redirect** checkbox. To disable HTTPS redirects, clear this option and click **Save**, then clear the browser cache before attempting to connect to the app again.
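+
+If the redirect makes the UI or an app hard to reach, you can also toggle the same setting through the TrueNAS REST API. The sketch below is a rough example using Python `requests`; the `/api/v2.0/system/general` endpoint, the `ui_httpsredirect` field name, the hostname, and the API key shown are assumptions or placeholders to verify against the API documentation for your release.
+
+```python
+# Rough sketch: clear the HTTP -> HTTPS redirect through the TrueNAS REST API.
+# The endpoint path, the ui_httpsredirect field name, and API-key auth are
+# assumptions to verify against the API docs for your SCALE release.
+import requests
+
+NAS = "http://truenas.local"   # hypothetical hostname
+API_KEY = "1-xxxx"             # placeholder API key generated in the web UI
+
+headers = {"Authorization": f"Bearer {API_KEY}"}
+
+current = requests.get(f"{NAS}/api/v2.0/system/general", headers=headers, timeout=10)
+current.raise_for_status()
+print("Redirect currently enabled:", current.json().get("ui_httpsredirect"))
+
+# Disable the redirect so apps and VMs remain reachable over plain HTTP.
+update = requests.put(
+    f"{NAS}/api/v2.0/system/general",
+    headers=headers,
+    json={"ui_httpsredirect": False},
+    timeout=10,
+)
+update.raise_for_status()
+```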
diff --git a/content/_includes/BasicReplicationProcess.md b/content/_includes/BasicReplicationProcess.md
new file mode 100644
index 0000000000..651bff05eb
--- /dev/null
+++ b/content/_includes/BasicReplicationProcess.md
@@ -0,0 +1,28 @@
+---
+---
+
+{{< expand "Replication Task General Overview" "v" >}}
+If using a TrueNAS SCALE Bluefin system on the early release (22.12.1), you must have the [admin user correctly configured]({{< relref "ManageLocalUsersSCALE.md" >}}) with:
+
+* The **Home Directory** set to something other than **/nonexistent**
+* The admin user in the **builtin_admin** group
+* Passwordless sudo permission enabled for the admin user
+
+Also verify the SSH service settings to make sure you have **Root with Password**, **Log in as Admin with Password**, and **Allow Password Authentication** selected to enable these capabilities.
+{{< hint warning >}}
+Incorrect SSH service settings can impact the admin user's ability to establish an SSH session during replication and can require you to obtain and paste a public SSH key into the admin user settings.
+{{< /hint >}}
+
+1. Set up the storage location where you want to save replicated snapshots.
+
+2. Make sure the admin user is correctly configured.
+
+3. Create an SSH connection between the local SCALE system and the remote system for remote replication tasks. Local replication does not require an SSH connection.
+   You can do this either from **Credentials > Backup Credentials > SSH Connection** by clicking **Add**, or from the **Replication Task Wizard** using the **Generate New** option in the settings for the remote system.
+
+4. Go to **Data Protection > Replication Tasks** and click **Add** to open the **Replication Task Wizard** where you specify the settings for the replication task.
+
+   Setting options change based on the source selections. Replicating to or from a local source does not require an SSH connection.
+
+This completes the general process for all replication tasks.
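+
+After the wizard saves a task, you can also confirm it registered and check its state outside the UI. The sketch below is a rough example using Python `requests`; the `/api/v2.0/replication` endpoint, the response fields, and API-key authentication are assumptions to confirm against the API documentation for your release, and the hostname and key are placeholders.
+
+```python
+# Rough sketch: list replication tasks and their last run state over the REST
+# API. The /api/v2.0/replication endpoint, the response fields, and API-key
+# auth are assumptions to confirm for your SCALE release.
+import requests
+
+NAS = "https://truenas.local"   # hypothetical hostname
+API_KEY = "1-xxxx"              # placeholder API key
+
+resp = requests.get(
+    f"{NAS}/api/v2.0/replication",
+    headers={"Authorization": f"Bearer {API_KEY}"},
+    verify=False,               # example only; use a trusted certificate in practice
+    timeout=10,
+)
+resp.raise_for_status()
+
+for task in resp.json():
+    print(task.get("name"), task.get("state", {}).get("state"))
+```
+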
+{{< /expand >}}
\ No newline at end of file
diff --git a/content/_includes/COREMigratesList.md b/content/_includes/COREMigratesList.md
new file mode 100644
index 0000000000..ab3d8ff74c
--- /dev/null
+++ b/content/_includes/COREMigratesList.md
@@ -0,0 +1,23 @@
+---
+---
+
+Although TrueNAS attempts to keep most of your CORE configuration data when upgrading to SCALE, some CORE-specific items do not transfer.
+These are the items that don't migrate from CORE:
+
+* FreeBSD GELI encryption.
+ If you have GELI-encrypted pools on your system that you plan to import into SCALE, you must migrate your data from the GELI pool to a non-GELI encrypted pool *before* migrating to SCALE.
+* Malformed certificates.
+ TrueNAS SCALE validates the system certificates when a CORE system migrates to SCALE.
+ When a malformed certificate is found, SCALE generates a new self-signed certificate to ensure system accessibility.
+* CORE plugins and jails. Save the configuration information for your plugin and back up any stored data.
+ After completing the SCALE install, add the equivalent SCALE application using the **Apps** option.
+ If your CORE plugin is not listed as an available application in SCALE, use the **Launch Docker Image** option to add it as an application and import data from the backup into a new SCALE dataset for the application.
+* NIS data
+* System tunables
+* ZFS boot environments
+* AFP shares also do not transfer, but migrate into an SMB share with AFP compatibility enabled.
+* CORE `netcli` utility. A new CLI utility is used for the [Console Setup Menu]({{< relref "ConsoleSetupMenuSCALE.md" >}}) and other commands issued in a CLI.
+
+VM storage and its basic configuration transfer over during a migration, but you need to double-check the VM configuration and the network interface settings specifically before starting the VM.
+
+Init/shutdown scripts transfer, but can break. Review them before use.
diff --git a/content/_includes/CreateDatasetSCALE.md b/content/_includes/CreateDatasetSCALE.md
index 8f7dc3a385..8712456d23 100644
--- a/content/_includes/CreateDatasetSCALE.md
+++ b/content/_includes/CreateDatasetSCALE.md
@@ -18,6 +18,10 @@ Select either **SMB** for the **Share Type** or leave set to **Generic**, then c
You can create datasets optimized for SMB shares or with customized settings for your dataset use cases.
+If you plan to deploy container applications, the system automatically creates the **ix-applications** dataset, but it is not used for application data storage.
+If you want to store data by application, create the dataset first, then deploy your application.
+When creating a dataset for an application, select **App** as the **Share Type** setting.
+
{{< hint warning >}}
Review the **Share Type** and **Case Sensitivity** options on the configuration screen before clicking **Save**.
You cannot change these and the **Name** setting after clicking **Save**.
diff --git a/content/_includes/EnterpriseHANetworkIPs.md b/content/_includes/EnterpriseHANetworkIPs.md
new file mode 100644
index 0000000000..d7d7a6dadb
--- /dev/null
+++ b/content/_includes/EnterpriseHANetworkIPs.md
@@ -0,0 +1,12 @@
+---
+---
+
+SCALE Enterprise (HA) systems use three static IP addresses for access to the UI:
+
+* VIP to provide UI access regardless of which controller is active.
+  If your system fails over from controller 1 to controller 2 and later fails back to controller 1, you might not know which controller is active.
+* IP for controller 1. If DHCP is enabled on your network, it assigns only the controller 1 IP address. If not, you must change this to the static IP address your network administrator assigned to this controller.
+* IP for controller 2. DHCP does not assign the second controller an IP address.
+
+Have your list of network addresses, host names, and domain names ready so you can complete the network configuration without disruption or system timeouts.
+SCALE safeguards allow a default of 60 seconds to test and save changes to a network interface before reverting the changes. This prevents users from breaking their network connection in SCALE.
\ No newline at end of file
diff --git a/content/_includes/HAControllerInstallBestPracticeSCALE.md b/content/_includes/HAControllerInstallBestPracticeSCALE.md
new file mode 100644
index 0000000000..46ad824075
--- /dev/null
+++ b/content/_includes/HAControllerInstallBestPracticeSCALE.md
@@ -0,0 +1,9 @@
+---
+---
+
+For best results, we recommend executing this procedure on both controllers at the same time.
+You can simultaneously install using two USB flash drives inserted into the USB port for each controller (1 and 2) or by establishing an IPMI connection with each controller in separate browser sessions.
+
+Alternatively, install and configure controller 1 while keeping controller 2 powered off.
+When controller 1 is completely configured, power on controller 2 to install TrueNAS and reboot the controller.
+When controller 2 boots after installing, sync the system configuration from controller 1 to controller 2.
\ No newline at end of file
diff --git a/content/_includes/MigrateCOREtoSCALEWarning.md b/content/_includes/MigrateCOREtoSCALEWarning.md
new file mode 100644
index 0000000000..e0caabe7b9
--- /dev/null
+++ b/content/_includes/MigrateCOREtoSCALEWarning.md
@@ -0,0 +1,12 @@
+---
+---
+
+{{< hint danger >}}
+Migrating TrueNAS from CORE to SCALE is a one-way operation.
+Attempting to activate or roll back to a CORE boot environment can break the system.
+{{< /hint >}}
+
+{{< enterprise >}}
+High Availability systems cannot migrate from CORE to SCALE.
+Enterprise customers should contact iXsystems Support before attempting any migration.
+{{< /enterprise >}}
diff --git a/content/_includes/ReplicationConfigNewSSHConnection.md b/content/_includes/ReplicationConfigNewSSHConnection.md
new file mode 100644
index 0000000000..6050e2b8f6
--- /dev/null
+++ b/content/_includes/ReplicationConfigNewSSHConnection.md
@@ -0,0 +1,30 @@
+---
+---
+When using a TrueNAS system on a different release, like CORE or SCALE Angelfish, the remote or destination system user is always root.
+
+To configure a new SSH connection from the **Replication Task Wizard**:
+
+1. Select **Create New** on the **SSH Connection** dropdown list to open the **New SSH Connection** configuration screen.
+
+2. Enter a name for the connection.
+
+ ![NewSSHConnectionNameAndMethod](/images/SCALE/22.12/NewSSHConnectionNameAndMethod.png "New SSH Connection Name and Method")
+
+3. Select the **Setup Method** from the dropdown list. If the remote system is a TrueNAS system, select **Semi-Automatic**.
+
+4. Enter the URL to the remote TrueNAS in **TrueNAS URL**.
+
+ ![NewSSHConnectionAuthetication](/images/SCALE/22.12/NewSSHConnectionAuthetication.png "New SSH Connection Authentication")
+
+5. Enter the administration user (i.e., root or admin) that logs in to the remote system web UI in **Admin Username**.
+ Enter the password in **Admin Password**.
+
+6. Enter the administration user (i.e., root or admin) for the remote system SSH session.
+   If you clear root as the user and type any other name, the **Enable passwordless sudo for ZFS commands** option displays.
+   This option does nothing, so leave it cleared.
+
+7. Select **Generate New** from the **Private Key** dropdown list.
+
+8. (Optional) Select a cipher from the dropdown list, or enter a new value in seconds for the **Connection Timeout** if you want to change the defaults.
+
+9. Click **Save** to create a new SSH connection and populate the **SSH Connection** field in the **Replication Task Wizard**.
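+
+Before relying on the new connection for replication, it can help to confirm that the key actually opens an SSH session for the administrative user. The following is a minimal sketch using the third-party `paramiko` library; the hostname, username, and key path are placeholders, and this check is optional rather than something the wizard requires.
+
+```python
+# Minimal sketch: confirm the private key opens an SSH session for the admin
+# user. Hostname, username, and key path are placeholders; paramiko is a
+# third-party library (pip install paramiko), not part of TrueNAS.
+import paramiko
+
+client = paramiko.SSHClient()
+client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
+client.connect(
+    hostname="remote-truenas.local",   # remote system from the SSH connection
+    username="admin",                  # or root for CORE / older SCALE releases
+    key_filename="/path/to/private_key",
+    timeout=10,
+)
+
+# A harmless command that also shows zfs is reachable for replication.
+stdin, stdout, stderr = client.exec_command("zfs list -H -o name")
+print(stdout.read().decode())
+client.close()
+```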
diff --git a/content/_includes/ReplicationCreateDatasetAndAdminHomeDirSteps.md b/content/_includes/ReplicationCreateDatasetAndAdminHomeDirSteps.md
new file mode 100644
index 0000000000..f915feb188
--- /dev/null
+++ b/content/_includes/ReplicationCreateDatasetAndAdminHomeDirSteps.md
@@ -0,0 +1,35 @@
+---
+---
+
+{{< hint warning >}}
+Before you begin configuring the replication task, first verify the destination dataset you want to use to store the replication snapshots is free of existing snapshots, or that snapshots with critical data are backed up before you create the task.
+{{< /hint >}}
+
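+One way to check the destination dataset for existing snapshots from a SCALE shell is sketched below; it is a small example built around the standard `zfs` command, and the dataset name is a placeholder.
+
+```python
+# Small sketch: list any existing snapshots on the intended destination dataset
+# before creating the replication task. The dataset name is a placeholder.
+import subprocess
+
+DESTINATION = "tank/replication-target"   # hypothetical destination dataset
+
+try:
+    result = subprocess.run(
+        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", DESTINATION],
+        capture_output=True,
+        text=True,
+        check=True,
+    )
+except subprocess.CalledProcessError as err:
+    raise SystemExit(f"zfs list failed: {err.stderr.strip()}")
+
+snapshots = [line for line in result.stdout.splitlines() if line]
+if snapshots:
+    print("Back up or remove these snapshots before creating the task:")
+    print("\n".join(snapshots))
+else:
+    print("No existing snapshots on", DESTINATION)
+```
+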
+To create a replication task:
+
+1. Create the destination dataset or storage location you want to use to store the replication snapshots.
+ If using another TrueNAS SCALE system, [create a dataset]({{< relref "DatasetsSCALE.md" >}}) in one of your pools.
+
+2. Verify the admin user home directory, auxiliary groups, and sudo setting on both the local and remote destination systems.
+ Local replication does not require an SSH connection so this only applies to replication to another system.
+
+ If using a TrueNAS CORE system as the remote server, the remote user is always root.
+
+ If using a TrueNAS SCALE system on an earlier release like Angelfish, the remote user is always root.
+
+   If using an earlier TrueNAS SCALE Bluefin system (22.12.1), or if you installed SCALE as the root user and then created the admin user after initial installation, you must verify the admin user is correctly configured.
+ {{< expand "Verify Admin User Settings" "v" >}}
+
+   a. Go to **Credentials > Local Users** and click anywhere on the **admin** user row to expand it.
+      Scroll down to the **Home Directory** setting. If set to **/home/admin**, select **Create Home Directory**, then click **Save**.
+
+ ![ChangeAdminUserHomeDirectorySetting](/images/SCALE/22.12/ChangeAdminUserHomeDirectorySetting.png "Home Directory Settings Early Bluefin")
+
+      If set to **/nonexistent**, first create a dataset to use for home directories, like */tank/homedirs*. Enter this in the **Home Directory** field and make sure it is not read-only.
+
+   b. Select the sudo permission level you want the admin user to have. If you select **Allow all sudo commands with no password**, you do not need to make changes.
+      If you select **Allowed sudo commands with no password**, enter `/var/sbin/zfs` in the **Allowed sudo commands** field.
+
+ c. Click **Save**.
+ {{< /expand >}}
+
\ No newline at end of file
diff --git a/content/_includes/ReplicationIntroSCALE.md b/content/_includes/ReplicationIntroSCALE.md
new file mode 100644
index 0000000000..f1cac3f9dc
--- /dev/null
+++ b/content/_includes/ReplicationIntroSCALE.md
@@ -0,0 +1,12 @@
+---
+---
+
+The first snapshot taken for a task is a full file system snapshot. All subsequent snapshots taken for that task are incremental, capturing only the changes that occur between snapshots.
+
+Scheduling options allow users to run replication tasks daily, weekly, monthly, or on a custom schedule.
+Users also have the option to run a scheduled job on demand.
+{{< hint info >}}
+Replication tasks require a periodic snapshot task.
+Earlier releases of SCALE required you to create a periodic snapshot task before creating the replication task, but SCALE now creates this task before it runs a scheduled replication task.
+If you want to run the replication task using the **Run Now** option on the **Replication Task** widget or by selecting **Run Once** in the **Replication Task Wizard**, you must [create a periodic snapshot task]({{< relref "PeriodicSnapshotTasksSCALE.md" >}}) first.
+{{< /hint >}}
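+
+The behavior described above mirrors standard ZFS replication: the first send is a full stream, and later sends are incremental between two snapshots. The sketch below only illustrates that pattern with plain `zfs send` and `zfs recv`; SCALE runs the equivalent for you, and the pool, dataset, and snapshot names are placeholders.
+
+```python
+# Illustration of the full-then-incremental pattern SCALE uses for replication.
+# SCALE runs the equivalent of this for you; pool, dataset, and snapshot names
+# are placeholders.
+import subprocess
+
+def send_full(source_snap: str, target_dataset: str) -> None:
+    """First run: send a complete file system snapshot."""
+    send = subprocess.Popen(["zfs", "send", source_snap], stdout=subprocess.PIPE)
+    subprocess.run(["zfs", "recv", "-F", target_dataset], stdin=send.stdout, check=True)
+    send.stdout.close()
+    send.wait()
+
+def send_incremental(previous_snap: str, new_snap: str, target_dataset: str) -> None:
+    """Later runs: send only the changes between two snapshots."""
+    send = subprocess.Popen(["zfs", "send", "-i", previous_snap, new_snap], stdout=subprocess.PIPE)
+    subprocess.run(["zfs", "recv", target_dataset], stdin=send.stdout, check=True)
+    send.stdout.close()
+    send.wait()
+
+send_full("tank/data@auto-2023-01-01", "backup/data")
+send_incremental("tank/data@auto-2023-01-01", "tank/data@auto-2023-01-02", "backup/data")
+```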
\ No newline at end of file
diff --git a/content/_includes/ReplicationSSHTransferSecurity.md b/content/_includes/ReplicationSSHTransferSecurity.md
new file mode 100644
index 0000000000..89e006cf7b
--- /dev/null
+++ b/content/_includes/ReplicationSSHTransferSecurity.md
@@ -0,0 +1,13 @@
+---
+---
+
+{{< hint info >}}
+Using encryption for SSH transfer security is always recommended.
+{{< /hint >}}
+
+In situations where you use two systems within an absolutely secure network for replication, disabling encryption speeds up the transfer.
+However, the data is completely unprotected from eavesdropping.
+
+Choosing **No Encryption** for the task is less secure but faster. This method uses common port settings, but you can override these by switching to the **Advanced Replication Creation** options or by editing the task after creation.
+
+![TasksReplicationTaskSecuritySCALE](/images/SCALE/RepSecurityTaskSCALE.png "Replication Security and Task Name")
diff --git a/content/_includes/ReplicationScheduleAndRetentionSteps.md b/content/_includes/ReplicationScheduleAndRetentionSteps.md
new file mode 100644
index 0000000000..63d2ac33ad
--- /dev/null
+++ b/content/_includes/ReplicationScheduleAndRetentionSteps.md
@@ -0,0 +1,36 @@
+---
+---
+
+4. Click **Next** to display the scheduling options.
+
+5. Select the schedule and snapshot retention lifetime.
+
+   a. Select the **Replication Schedule** radio button you want to use. Select **Run Once** to set up a replication task you run one time.
+      Select **Run On a Schedule**, then select when to run the task from the **Schedule** dropdown list.
+
+ ![CreateReplicationTaskSetSchedule](/images/SCALE/22.12/CreateReplicationTaskSetSchedule.png "Set Replication Task Schedule")
+
+   b. Select the **Destination Snapshot Lifetime** radio button option you want to use.
+      This specifies how long SCALE stores copied snapshots in the destination dataset before deleting them (see the worked example after these steps).
+      **Same as Source** is selected by default. Select **Never Delete** to keep all snapshots until you delete them manually.
+      Select **Custom** to show two additional settings, then enter a number and select the unit of time from the dropdown list. For example, *2 Weeks*.
+
+6. Click **START REPLICATION**.
+ A dialog displays if this is the first snapshot taken using the destination dataset.
+   If SCALE does not find a replicated snapshot in the destination dataset to use to create an incremental snapshot, it deletes any existing snapshots found and creates a full copy to use as a basis for the future scheduled incremental snapshots for this task.
+ This operation can delete important data, so ensure you can delete any existing snapshots or back them up in another location.
+
+ ![ReplicationSnapshotConfirmationDialog](/images/SCALE/22.12/ReplicationSnapshotConfirmationDialog.png "Local Replication Task Confirmation")
+
+ Click **Confirm**, then **Continue** to add the task to the **Replication Task** widget.
+ The newly added task shows the status as **PENDING** until it runs on the schedule you set.
+
+ ![ReplicationTaskWidgetWithPendingTask](/images/SCALE/22.12/ReplicationTaskWidgetWithPendingTask.png "Replication Task in Pending State")
+
+ Select **Run Now** if you want to run the task immediately.
+
+To see a log for a task, click the task **State** to open a dialog with the log for that replication task.
+
+To see the replication snapshots, go to **Datasets**, select the destination dataset on the tree table, then select **Manage Snapshots** on the **Data Protection** widget to see the list of snapshots in that dataset. Click **Show extra columns** to add more information to the table, such as the date created, which can help you locate a specific snapshot, or enter part of or the full name in the search field to narrow the list of snapshots.
+
+![ReplicationSnapthotListInDestinationDataset](/images/SCALE/22.12/ReplicationSnapthotListInDestinationDataset.png "Snapshot List in Destination Dataset")
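+
+The **Custom** lifetime in step 5 is a rolling window: snapshots older than the chosen duration become eligible for deletion on the destination. The short sketch below works through that arithmetic; the snapshot names and dates are made up for illustration.
+
+```python
+# Worked example of a custom "2 Weeks" destination snapshot lifetime:
+# snapshots older than the window are eligible for deletion. Dates are made up.
+from datetime import datetime, timedelta
+
+LIFETIME = timedelta(weeks=2)
+now = datetime(2023, 3, 15)
+
+snapshots = {
+    "auto-2023-02-20": datetime(2023, 2, 20),
+    "auto-2023-03-05": datetime(2023, 3, 5),
+    "auto-2023-03-14": datetime(2023, 3, 14),
+}
+
+for name, created in snapshots.items():
+    expired = now - created > LIFETIME
+    print(f"{name}: {'delete' if expired else 'keep'}")
+# auto-2023-02-20: delete   (23 days old)
+# auto-2023-03-05: keep     (10 days old)
+# auto-2023-03-14: keep     (1 day old)
+```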
diff --git a/static/images/SCALE/22.12/AddGroupGIDConfigSCALE.png b/static/images/SCALE/22.12/AddGroupGIDConfigSCALE.png
new file mode 100644
index 0000000000..cc08a5312b
Binary files /dev/null and b/static/images/SCALE/22.12/AddGroupGIDConfigSCALE.png differ
diff --git a/static/images/SCALE/22.12/AddGroupQuotasSetQuotaSCALE.png b/static/images/SCALE/22.12/AddGroupQuotasSetQuotaSCALE.png
new file mode 100644
index 0000000000..00723d43ba
Binary files /dev/null and b/static/images/SCALE/22.12/AddGroupQuotasSetQuotaSCALE.png differ
diff --git a/static/images/SCALE/22.12/AddNextcloudAppNameSCALE.png b/static/images/SCALE/22.12/AddNextcloudAppNameSCALE.png
new file mode 100644
index 0000000000..90b53ce08e
Binary files /dev/null and b/static/images/SCALE/22.12/AddNextcloudAppNameSCALE.png differ
diff --git a/static/images/SCALE/22.12/AddNextcloudAvailableAppsSCALE.png b/static/images/SCALE/22.12/AddNextcloudAvailableAppsSCALE.png
new file mode 100644
index 0000000000..732b7f9681
Binary files /dev/null and b/static/images/SCALE/22.12/AddNextcloudAvailableAppsSCALE.png differ
diff --git a/static/images/SCALE/22.12/AddNextcloudConfigurationSCALE.png b/static/images/SCALE/22.12/AddNextcloudConfigurationSCALE.png
new file mode 100644
index 0000000000..a41ab1cc0e
Binary files /dev/null and b/static/images/SCALE/22.12/AddNextcloudConfigurationSCALE.png differ
diff --git a/static/images/SCALE/22.12/AddNextcloudEnvironmentSCALE.png b/static/images/SCALE/22.12/AddNextcloudEnvironmentSCALE.png
new file mode 100644
index 0000000000..69491e12f2
Binary files /dev/null and b/static/images/SCALE/22.12/AddNextcloudEnvironmentSCALE.png differ
diff --git a/static/images/SCALE/22.12/AddUserHomeDirAuthSCALE.png b/static/images/SCALE/22.12/AddUserHomeDirAuthSCALE.png
new file mode 100644
index 0000000000..f515c0ef30
Binary files /dev/null and b/static/images/SCALE/22.12/AddUserHomeDirAuthSCALE.png differ
diff --git a/static/images/SCALE/22.12/AddUserHomeDirPermSCALE.png b/static/images/SCALE/22.12/AddUserHomeDirPermSCALE.png
new file mode 100644
index 0000000000..0abbfe3fc7
Binary files /dev/null and b/static/images/SCALE/22.12/AddUserHomeDirPermSCALE.png differ
diff --git a/static/images/SCALE/22.12/AddUserQuotasSetQuotasSCALE.png b/static/images/SCALE/22.12/AddUserQuotasSetQuotasSCALE.png
new file mode 100644
index 0000000000..257e57cb5d
Binary files /dev/null and b/static/images/SCALE/22.12/AddUserQuotasSetQuotasSCALE.png differ
diff --git a/static/images/SCALE/22.12/AppsAdvancedSettingsKubernetesSettingsReInitialization.png b/static/images/SCALE/22.12/AppsAdvancedSettingsKubernetesSettingsReInitialization.png
new file mode 100644
index 0000000000..a0ceb3ef5e
Binary files /dev/null and b/static/images/SCALE/22.12/AppsAdvancedSettingsKubernetesSettingsReInitialization.png differ
diff --git a/static/images/SCALE/22.12/AppsChooseLogWindow.png b/static/images/SCALE/22.12/AppsChooseLogWindow.png
new file mode 100644
index 0000000000..e6b6d7ddfd
Binary files /dev/null and b/static/images/SCALE/22.12/AppsChooseLogWindow.png differ
diff --git a/static/images/SCALE/22.12/AppsChoosePodWindow.png b/static/images/SCALE/22.12/AppsChoosePodWindow.png
new file mode 100644
index 0000000000..9d42d76343
Binary files /dev/null and b/static/images/SCALE/22.12/AppsChoosePodWindow.png differ
diff --git a/static/images/SCALE/22.12/AppsPodShellWindow.png b/static/images/SCALE/22.12/AppsPodShellWindow.png
new file mode 100644
index 0000000000..b22eba57ef
Binary files /dev/null and b/static/images/SCALE/22.12/AppsPodShellWindow.png differ
diff --git a/static/images/SCALE/22.12/AppsSummaryScreen.png b/static/images/SCALE/22.12/AppsSummaryScreen.png
new file mode 100644
index 0000000000..0e0f3622db
Binary files /dev/null and b/static/images/SCALE/22.12/AppsSummaryScreen.png differ
diff --git a/static/images/SCALE/22.12/ChangeAdminUserHomeDirectorySetting.png b/static/images/SCALE/22.12/ChangeAdminUserHomeDirectorySetting.png
new file mode 100644
index 0000000000..1f27af8567
Binary files /dev/null and b/static/images/SCALE/22.12/ChangeAdminUserHomeDirectorySetting.png differ
diff --git a/static/images/SCALE/22.12/CreateLocalReplicationTask.png b/static/images/SCALE/22.12/CreateLocalReplicationTask.png
new file mode 100644
index 0000000000..3920398634
Binary files /dev/null and b/static/images/SCALE/22.12/CreateLocalReplicationTask.png differ
diff --git a/static/images/SCALE/22.12/CreateRemoteReplicationTask.png b/static/images/SCALE/22.12/CreateRemoteReplicationTask.png
new file mode 100644
index 0000000000..b2b125ec21
Binary files /dev/null and b/static/images/SCALE/22.12/CreateRemoteReplicationTask.png differ
diff --git a/static/images/SCALE/22.12/CreateRemoteReplicationTaskSetSudo.png b/static/images/SCALE/22.12/CreateRemoteReplicationTaskSetSudo.png
new file mode 100644
index 0000000000..4242ac8273
Binary files /dev/null and b/static/images/SCALE/22.12/CreateRemoteReplicationTaskSetSudo.png differ
diff --git a/static/images/SCALE/22.12/CreateReplicationTaskSetSchedule.png b/static/images/SCALE/22.12/CreateReplicationTaskSetSchedule.png
new file mode 100644
index 0000000000..ef910a37bb
Binary files /dev/null and b/static/images/SCALE/22.12/CreateReplicationTaskSetSchedule.png differ
diff --git a/static/images/SCALE/22.12/CreateVMWCPUMemSCALE.png b/static/images/SCALE/22.12/CreateVMWCPUMemSCALE.png
deleted file mode 100644
index 5ab6520d48..0000000000
Binary files a/static/images/SCALE/22.12/CreateVMWCPUMemSCALE.png and /dev/null differ
diff --git a/static/images/SCALE/22.12/CreateVMWConfirmSCALE.png b/static/images/SCALE/22.12/CreateVMWConfirmSCALE.png
deleted file mode 100644
index 2f1c036659..0000000000
Binary files a/static/images/SCALE/22.12/CreateVMWConfirmSCALE.png and /dev/null differ
diff --git a/static/images/SCALE/22.12/CreateVMWDisksSCALE.png b/static/images/SCALE/22.12/CreateVMWDisksSCALE.png
deleted file mode 100644
index 7097e378c3..0000000000
Binary files a/static/images/SCALE/22.12/CreateVMWDisksSCALE.png and /dev/null differ
diff --git a/static/images/SCALE/22.12/CreateVMWGPUsSCALE.png b/static/images/SCALE/22.12/CreateVMWGPUsSCALE.png
deleted file mode 100644
index de00d18fbe..0000000000
Binary files a/static/images/SCALE/22.12/CreateVMWGPUsSCALE.png and /dev/null differ
diff --git a/static/images/SCALE/22.12/CreateVMWInstallMediaSCALE.png b/static/images/SCALE/22.12/CreateVMWInstallMediaSCALE.png
deleted file mode 100644
index d73a926fc6..0000000000
Binary files a/static/images/SCALE/22.12/CreateVMWInstallMediaSCALE.png and /dev/null differ
diff --git a/static/images/SCALE/22.12/CreateVMWInstallMediaUploadSCALE.png b/static/images/SCALE/22.12/CreateVMWInstallMediaUploadSCALE.png
deleted file mode 100644
index 70b158f7d0..0000000000
Binary files a/static/images/SCALE/22.12/CreateVMWInstallMediaUploadSCALE.png and /dev/null differ
diff --git a/static/images/SCALE/22.12/CreateVMWNetworkInterfaceSCALE.png b/static/images/SCALE/22.12/CreateVMWNetworkInterfaceSCALE.png
deleted file mode 100644
index 0e6d56709c..0000000000
Binary files a/static/images/SCALE/22.12/CreateVMWNetworkInterfaceSCALE.png and /dev/null differ
diff --git a/static/images/SCALE/22.12/CreateVMWOpSysSCALE.png b/static/images/SCALE/22.12/CreateVMWOpSysSCALE.png
deleted file mode 100644
index 75893b56d3..0000000000
Binary files a/static/images/SCALE/22.12/CreateVMWOpSysSCALE.png and /dev/null differ
diff --git a/static/images/SCALE/22.12/CreatelReplicationTaskSetSchedule.png b/static/images/SCALE/22.12/CreatelReplicationTaskSetSchedule.png
new file mode 100644
index 0000000000..ef910a37bb
Binary files /dev/null and b/static/images/SCALE/22.12/CreatelReplicationTaskSetSchedule.png differ
diff --git a/static/images/SCALE/22.12/DeleteDeviceVMDiskSCALE.png b/static/images/SCALE/22.12/DeleteDeviceVMDiskSCALE.png
deleted file mode 100644
index d4165d3c14..0000000000
Binary files a/static/images/SCALE/22.12/DeleteDeviceVMDiskSCALE.png and /dev/null differ
diff --git a/static/images/SCALE/22.12/EditDeviceVMDiskSCALE.png b/static/images/SCALE/22.12/EditDeviceVMDiskSCALE.png
deleted file mode 100644
index fa0341b26d..0000000000
Binary files a/static/images/SCALE/22.12/EditDeviceVMDiskSCALE.png and /dev/null differ
diff --git a/static/images/SCALE/22.12/EditGroupQuotasSCALE.png b/static/images/SCALE/22.12/EditGroupQuotasSCALE.png
new file mode 100644
index 0000000000..44b150e64e
Binary files /dev/null and b/static/images/SCALE/22.12/EditGroupQuotasSCALE.png differ
diff --git a/static/images/SCALE/22.12/EditReplicationTaskIncludeDatasetProperties.png b/static/images/SCALE/22.12/EditReplicationTaskIncludeDatasetProperties.png
new file mode 100644
index 0000000000..2ce4de6047
Binary files /dev/null and b/static/images/SCALE/22.12/EditReplicationTaskIncludeDatasetProperties.png differ
diff --git a/static/images/SCALE/22.12/EditUserQuotasSCALE.png b/static/images/SCALE/22.12/EditUserQuotasSCALE.png
new file mode 100644
index 0000000000..a667b5d2b4
Binary files /dev/null and b/static/images/SCALE/22.12/EditUserQuotasSCALE.png differ
diff --git a/static/images/SCALE/22.12/EditVMCPUandMemory.png b/static/images/SCALE/22.12/EditVMCPUandMemory.png
index d7beb0b5d9..b741751807 100644
Binary files a/static/images/SCALE/22.12/EditVMCPUandMemory.png and b/static/images/SCALE/22.12/EditVMCPUandMemory.png differ
diff --git a/static/images/SCALE/22.12/FailoverServiceScreen.png b/static/images/SCALE/22.12/FailoverScreen.png
similarity index 100%
rename from static/images/SCALE/22.12/FailoverServiceScreen.png
rename to static/images/SCALE/22.12/FailoverScreen.png
diff --git a/static/images/SCALE/22.12/FailoverSyncToPeerDialog.png b/static/images/SCALE/22.12/FailoverSyncToPeerDialog.png
index f225ac9c95..6b9e38b48d 100644
Binary files a/static/images/SCALE/22.12/FailoverSyncToPeerDialog.png and b/static/images/SCALE/22.12/FailoverSyncToPeerDialog.png differ
diff --git a/static/images/SCALE/22.12/FirstTimeLoginInstallOpt3SCALE.png b/static/images/SCALE/22.12/FirstTimeLoginInstallOpt3SCALE.png
new file mode 100644
index 0000000000..8f9c0e090f
Binary files /dev/null and b/static/images/SCALE/22.12/FirstTimeLoginInstallOpt3SCALE.png differ
diff --git a/static/images/SCALE/22.12/GroupQuotasNoQuotaSCALE.png b/static/images/SCALE/22.12/GroupQuotasNoQuotaSCALE.png
new file mode 100644
index 0000000000..5ddf624120
Binary files /dev/null and b/static/images/SCALE/22.12/GroupQuotasNoQuotaSCALE.png differ
diff --git a/static/images/SCALE/22.12/GroupQuotasVideoQuotaSCALE.png b/static/images/SCALE/22.12/GroupQuotasVideoQuotaSCALE.png
new file mode 100644
index 0000000000..d08489114d
Binary files /dev/null and b/static/images/SCALE/22.12/GroupQuotasVideoQuotaSCALE.png differ
diff --git a/static/images/SCALE/22.12/GroupsListedExpandedSCALE.png b/static/images/SCALE/22.12/GroupsListedExpandedSCALE.png
new file mode 100644
index 0000000000..702e41cbce
Binary files /dev/null and b/static/images/SCALE/22.12/GroupsListedExpandedSCALE.png differ
diff --git a/static/images/SCALE/22.12/GroupsListedSCALE.png b/static/images/SCALE/22.12/GroupsListedSCALE.png
new file mode 100644
index 0000000000..63d1533781
Binary files /dev/null and b/static/images/SCALE/22.12/GroupsListedSCALE.png differ
diff --git a/static/images/SCALE/22.12/GroupsManageMembersSCALE.png b/static/images/SCALE/22.12/GroupsManageMembersSCALE.png
new file mode 100644
index 0000000000..581e9d6470
Binary files /dev/null and b/static/images/SCALE/22.12/GroupsManageMembersSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppAdvancedDNSSettingsSCALE.png b/static/images/SCALE/22.12/InstallNetDAppAdvancedDNSSettingsSCALE.png
new file mode 100644
index 0000000000..d1aff07346
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppAdvancedDNSSettingsSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppAvailAppSCALE.png b/static/images/SCALE/22.12/InstallNetDAppAvailAppSCALE.png
new file mode 100644
index 0000000000..ef36246ca7
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppAvailAppSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppCloudSignUpSCALE.png b/static/images/SCALE/22.12/InstallNetDAppCloudSignUpSCALE.png
new file mode 100644
index 0000000000..79689121dd
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppCloudSignUpSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppDatasetsSCALE.png b/static/images/SCALE/22.12/InstallNetDAppDatasetsSCALE.png
new file mode 100644
index 0000000000..860a9c162e
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppDatasetsSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppDeployingSCALE.png b/static/images/SCALE/22.12/InstallNetDAppDeployingSCALE.png
new file mode 100644
index 0000000000..03702fbb54
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppDeployingSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppNameSCALE.png b/static/images/SCALE/22.12/InstallNetDAppNameSCALE.png
new file mode 100644
index 0000000000..d42b6e5c50
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppNameSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppNetDAgentCropSCALE.png b/static/images/SCALE/22.12/InstallNetDAppNetDAgentCropSCALE.png
new file mode 100644
index 0000000000..07898de6da
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppNetDAgentCropSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppNetDAgentDashNoCloudSCALE.png b/static/images/SCALE/22.12/InstallNetDAppNetDAgentDashNoCloudSCALE.png
new file mode 100644
index 0000000000..cc40bae405
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppNetDAgentDashNoCloudSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppResourceSCALE.png b/static/images/SCALE/22.12/InstallNetDAppResourceSCALE.png
new file mode 100644
index 0000000000..5e7b8dc68c
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppResourceSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppRunningOptionsSCALE.png b/static/images/SCALE/22.12/InstallNetDAppRunningOptionsSCALE.png
new file mode 100644
index 0000000000..59a5086bbc
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppRunningOptionsSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppRunningSCALE.png b/static/images/SCALE/22.12/InstallNetDAppRunningSCALE.png
new file mode 100644
index 0000000000..f709bd0939
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppRunningSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppServiceConfHostPSCALE.png b/static/images/SCALE/22.12/InstallNetDAppServiceConfHostPSCALE.png
new file mode 100644
index 0000000000..be0ce757e5
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppServiceConfHostPSCALE.png differ
diff --git a/static/images/SCALE/22.12/InstallNetDAppServiceConfSCALE.png b/static/images/SCALE/22.12/InstallNetDAppServiceConfSCALE.png
new file mode 100644
index 0000000000..5103b6ccb5
Binary files /dev/null and b/static/images/SCALE/22.12/InstallNetDAppServiceConfSCALE.png differ
diff --git a/static/images/SCALE/22.12/LaunchDockerImageAddDNS.png b/static/images/SCALE/22.12/LaunchDockerImageAddDNS.png
index 24a2a06fa5..4f68a5c9cb 100644
Binary files a/static/images/SCALE/22.12/LaunchDockerImageAddDNS.png and b/static/images/SCALE/22.12/LaunchDockerImageAddDNS.png differ
diff --git a/static/images/SCALE/22.12/NewSSHConnectionAuthetication.png b/static/images/SCALE/22.12/NewSSHConnectionAuthetication.png
new file mode 100644
index 0000000000..6b4217795b
Binary files /dev/null and b/static/images/SCALE/22.12/NewSSHConnectionAuthetication.png differ
diff --git a/static/images/SCALE/22.12/NewSSHConnectionNameAndMethod.png b/static/images/SCALE/22.12/NewSSHConnectionNameAndMethod.png
new file mode 100644
index 0000000000..d2840e77a0
Binary files /dev/null and b/static/images/SCALE/22.12/NewSSHConnectionNameAndMethod.png differ
diff --git a/static/images/SCALE/22.12/RemoteReplicateSnapshotAndNameSchema.png b/static/images/SCALE/22.12/RemoteReplicateSnapshotAndNameSchema.png
new file mode 100644
index 0000000000..7e3993dba7
Binary files /dev/null and b/static/images/SCALE/22.12/RemoteReplicateSnapshotAndNameSchema.png differ
diff --git a/static/images/SCALE/22.12/ReplicationSnapshotConfirmationDialog.png b/static/images/SCALE/22.12/ReplicationSnapshotConfirmationDialog.png
new file mode 100644
index 0000000000..26e753212b
Binary files /dev/null and b/static/images/SCALE/22.12/ReplicationSnapshotConfirmationDialog.png differ
diff --git a/static/images/SCALE/22.12/ReplicationSnapthotListInDestinationDataset.png b/static/images/SCALE/22.12/ReplicationSnapthotListInDestinationDataset.png
new file mode 100644
index 0000000000..97b7cbfd6e
Binary files /dev/null and b/static/images/SCALE/22.12/ReplicationSnapthotListInDestinationDataset.png differ
diff --git a/static/images/SCALE/22.12/ReplicationTaskEncryptionOptions.png b/static/images/SCALE/22.12/ReplicationTaskEncryptionOptions.png
new file mode 100644
index 0000000000..46cadb5bb4
Binary files /dev/null and b/static/images/SCALE/22.12/ReplicationTaskEncryptionOptions.png differ
diff --git a/static/images/SCALE/22.12/ReplicationTaskWidgetWithPendingTask.png b/static/images/SCALE/22.12/ReplicationTaskWidgetWithPendingTask.png
new file mode 100644
index 0000000000..c472f51693
Binary files /dev/null and b/static/images/SCALE/22.12/ReplicationTaskWidgetWithPendingTask.png differ
diff --git a/static/images/SCALE/22.12/SystemSettingsGUISettingsSCALE.png b/static/images/SCALE/22.12/SystemSettingsGUISettingsSCALE.png
new file mode 100644
index 0000000000..34c1def9c0
Binary files /dev/null and b/static/images/SCALE/22.12/SystemSettingsGUISettingsSCALE.png differ
diff --git a/static/images/SCALE/22.12/UseSudoForZFSCommandsDialog.png b/static/images/SCALE/22.12/UseSudoForZFSCommandsDialog.png
new file mode 100644
index 0000000000..8e172315c7
Binary files /dev/null and b/static/images/SCALE/22.12/UseSudoForZFSCommandsDialog.png differ
diff --git a/static/images/SCALE/22.12/UserQuotasDataQuotaSCALE.png b/static/images/SCALE/22.12/UserQuotasDataQuotaSCALE.png
new file mode 100644
index 0000000000..b22fee1391
Binary files /dev/null and b/static/images/SCALE/22.12/UserQuotasDataQuotaSCALE.png differ
diff --git a/static/images/SCALE/22.12/UserQuotasNoQuotasSCALE.png b/static/images/SCALE/22.12/UserQuotasNoQuotasSCALE.png
new file mode 100644
index 0000000000..3d1f44bd6d
Binary files /dev/null and b/static/images/SCALE/22.12/UserQuotasNoQuotasSCALE.png differ
diff --git a/static/images/SCALE/22.12/VMAddDeviceCDROM.png b/static/images/SCALE/22.12/VMAddDeviceCDROM.png
index b0b40d4015..75772a36c5 100644
Binary files a/static/images/SCALE/22.12/VMAddDeviceCDROM.png and b/static/images/SCALE/22.12/VMAddDeviceCDROM.png differ
diff --git a/static/images/SCALE/22.12/VMAddDeviceDisk.png b/static/images/SCALE/22.12/VMAddDeviceDisk.png
index fb3e31e3a3..f3d1c353cd 100644
Binary files a/static/images/SCALE/22.12/VMAddDeviceDisk.png and b/static/images/SCALE/22.12/VMAddDeviceDisk.png differ
diff --git a/static/images/SCALE/22.12/VMAddDeviceDisplay.png b/static/images/SCALE/22.12/VMAddDeviceDisplay.png
index a1421a9795..feef4e6995 100644
Binary files a/static/images/SCALE/22.12/VMAddDeviceDisplay.png and b/static/images/SCALE/22.12/VMAddDeviceDisplay.png differ
diff --git a/static/images/SCALE/22.12/VMAddDeviceNIC.png b/static/images/SCALE/22.12/VMAddDeviceNIC.png
index 101c25a6e1..91772e28bd 100644
Binary files a/static/images/SCALE/22.12/VMAddDeviceNIC.png and b/static/images/SCALE/22.12/VMAddDeviceNIC.png differ
diff --git a/static/images/SCALE/22.12/VMAddDevicePCIpass.png b/static/images/SCALE/22.12/VMAddDevicePCIpass.png
index 2763070c12..697580ab02 100644
Binary files a/static/images/SCALE/22.12/VMAddDevicePCIpass.png and b/static/images/SCALE/22.12/VMAddDevicePCIpass.png differ
diff --git a/static/images/SCALE/22.12/VMAddDeviceRawFile.png b/static/images/SCALE/22.12/VMAddDeviceRawFile.png
index d9ddbdd775..3890e88011 100644
Binary files a/static/images/SCALE/22.12/VMAddDeviceRawFile.png and b/static/images/SCALE/22.12/VMAddDeviceRawFile.png differ
diff --git a/static/images/SCALE/22.12/VMAddDeviceUSBpass.png b/static/images/SCALE/22.12/VMAddDeviceUSBpass.png
index ba21d05318..f71d7036b8 100644
Binary files a/static/images/SCALE/22.12/VMAddDeviceUSBpass.png and b/static/images/SCALE/22.12/VMAddDeviceUSBpass.png differ
diff --git a/static/images/SCALE/22.12/VMDetailsDevicesSCALE.png b/static/images/SCALE/22.12/VMDetailsDevicesSCALE.png
deleted file mode 100644
index d94ad86d76..0000000000
Binary files a/static/images/SCALE/22.12/VMDetailsDevicesSCALE.png and /dev/null differ
diff --git a/static/images/SCALE/22.12/VMListedSCALE.png b/static/images/SCALE/22.12/VMListedSCALE.png
deleted file mode 100644
index 8b4a2b1b15..0000000000
Binary files a/static/images/SCALE/22.12/VMListedSCALE.png and /dev/null differ
diff --git a/static/images/SCALE/22.12/VMRunningOptionsSCALE.png b/static/images/SCALE/22.12/VMRunningOptionsSCALE.png
deleted file mode 100644
index 20c0038859..0000000000
Binary files a/static/images/SCALE/22.12/VMRunningOptionsSCALE.png and /dev/null differ
diff --git a/words-to-ignore.txt b/words-to-ignore.txt
index f0653c554e..dd70298639 100644
--- a/words-to-ignore.txt
+++ b/words-to-ignore.txt
@@ -1168,4 +1168,40 @@ Nvidia
Gbs
Gbps
GBs
-GBps
\ No newline at end of file
+GBps
+corezfs
+scalezfs
+pre-failover
+InstanceNotFound
+PoolDataset
+passwordless
+DatasetsSCALE
+homdirs
+PeriodicSnapshotTasksSCALE
+GetSupportSCALE
+mprutil
+plx
+eeprom
+elasticsearch
+sas
+nodePort
+emptirDirVolume
+mtime
+subdir
+homedirs
+ConsoleSetupMenuSCALE
+emptyDirVolume
+IntegrityError
+pasdb-backed
+netbiosname
+GPUs
+subquestions
+solidigm
+vrrp
+SMR
+ACPI
+nvdimms
+sudoers
+passdb-backed
+nex
+