Release 4.2.0 - Release Notes (07 July 2022)


This is our newest release (4.2.0). It includes breaking changes relative to the previous 4.0.20 release for clusters installed on a single physical storage device.

Additional instructions regarding the upgrade procedure will be published.

New Features

  1. We have added the ability to create a replicated DS on single physical storage device cluster installations.
  2. We have added installation modes to accommodate different hardware specifications. These modes are:
    • Edge tiny - 512MB RAM CONTROLLER - 512MB RAM STORAGE, 1 shared core
      • systems like RPi (less than 6 cores, less than 6GB RAM)
      • Jetson with up to 8 cores and 8GB of RAM
      • other small IoT devices of the future, etc.
    • Edge - 1GB RAM CONTROLLER - 512MB RAM STORAGE, 1 shared core
      • systems with up to 16 cores and 16 GB RAM
    • Edge performance - 2GB RAM CONTROLLER - 1GB RAM STORAGE, 2 shared cores
      • systems with up to 16 cores and 32 GB RAM
    • DC - 4GB RAM CONTROLLER - 2GB RAM STORAGE, 2 shared cores
      • DC enterprise systems (bobcats, etc.)


  3. INSTALLER - We have added extra tools for faster and more reliable physical disk initialisation.

Bug Fixes

  1. INSTALLER - Fixed an issue with the slow/fast path of reserved vdisks; all reserved vdisks now use the fast path.
  2. INSTALLER - Fixed an issue with UEFI boot sequences.
  3. API - Fixed a minor issue by hiding the system vdisks, allowing a clearer view of user resources.

In Development - targeted for an upcoming release

  1. Striping of volumes across multiple physical NVMe drives, to enable larger volume capacities and single-drive performance beyond 1 million IOPS.
  2. Revisions of supported hardware are planned, in order to accommodate faster NIC performance.
  3. Multi-LUN aggregation support (creating super LUNs, consisting of many smaller LUNs that can span physical NVMe drives for increased capacity).
  4. Physical NVMe drive hotplug removal/addition.
  5. SIM enhancements towards added VM actions.
  6. Advanced recipe marketplace management with easier UI components.
  7. The SAUS service is under development to provide automated upgrades via the API. Subsequent iterations will focus on providing similar functionality for the lower NexVisor stack.
  8. Manage the Sunlight Clusters licensing via the SIM UI.
  9. The latest distro versions are currently in testing.
  10. Windows Server 2022 support is currently under investigation.

Supported Features

For additional information on supported features, please visit the following link: Supported Features for version 4.2.0.

Compatibility and Limits Matrix

Please visit the following link for further information on compatibility and limits: Compatibility and Limits Matrix for version 4.2.0

Current list of supported Network Adapters

Please visit the following link for further information on supported NICs: supported Network adapters

Known Issues

  1. To assign the GPU device to a different VM, persistence mode must be disabled. For more information, please refer to the section "Disable NVIDIA GPU Persistence mode" in the following document: Disable NVIDIA GPU Persistence mode
  2. Occasional connection issues to the PV guest console. The suggested mitigation is to reboot the VM.
  3. The automated installation of PV drivers for MS Windows ISO images is not currently supported. An end-to-end solution is in development.
  4. Editing an existing VLAN network is not currently supported. In order to edit the VLAN, you must delete and recreate the network.
  5. It is not possible to edit the network configuration of a VM if the instance boots from a CD/ISO.
  6. For a VM instance that is booted from CD/ISO, Sunlight does not initialize the cloud init logic. Network configuration must be applied on the VM by the user.
  7. The maximum virtual disk size currently tested in the system is 2TB. Larger vdisk deployments are under test.
  8. Simultaneous resizing of multiple VM instances is not currently supported.
  9. During the upgrade phase, master and slave nodes must be shut down. Currently, the upgrade is performed manually by the Sunlight support team.
  10. It is recommended that no more than 8 instances be created in a cluster at once.
  11. Please use "SHIFT" instead of "CAPS LOCK" for capital letters when typing the login/password of an instance through the VNC console. Using "CAPS LOCK" currently results in an incorrectly entered username/password.
  12. Instances that will be moved and/or backed up in the SIM dashboard should have only one root disk. The existence of extra disks is currently not supported in this case.
  13. The "Create snapshot" action currently does not support massive parallelism; we are working to lift this limitation.
  14. There is a caveat when using all physical NICs on an AWS cluster: physical NICs 0, 2, and 3 should be used only for private networks; physical NIC 1 should not be used at all; the remaining physical NICs (4 to 15) should be used only for public access networks. In future releases this will be handled automatically via the virtupian UI.
  15. VMs that use the UEFI bootloader are not currently supported.
  16. We have noticed an issue during the installation process with NVMe drives larger than 2TB. We suggest that such drives be completely zeroed out before starting the installation.
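The zeroing suggested in item 16 can be done with standard Linux tools. The sketch below runs against a scratch file so it is safe to try; on a real system you would point `DEV` at the actual NVMe block device (e.g. `/dev/nvme0n1`), which destroys all data on it. The path and sizes here are illustrative, not part of the installer.

```shell
# Sketch: zero out a target before installation (scratch file for safety).
# On real hardware, DEV would be the NVMe device, e.g. /dev/nvme0n1 --
# writing zeros to it erases everything on the drive.
DEV=./scratch.img

# Write zeros over the whole target; conv=fsync flushes to stable storage.
dd if=/dev/zero of="$DEV" bs=1M count=16 conv=fsync status=none

# Verify the target now reads back as all zeros: strip NUL bytes and
# confirm nothing remains.
if [ "$(tr -d '\0' < "$DEV" | wc -c)" -eq 0 ]; then
    echo "zeroed"
fi
```

On large drives, `blkdiscard /dev/nvme0n1` (from util-linux) is typically much faster than `dd`, since it issues a device-level discard instead of writing zeros; note that whether discarded blocks read back as zeros depends on the drive.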

Breaking Changes

  1. Clusters installed on a single physical storage device cannot currently be upgraded. The suggested way forward is a clean installation. Otherwise, please contact the CS team by opening a support request at Support

Supported Versions

You can visit the release notes of previous versions in the archive section. Please note that versions prior to 2.4.2 are not supported.

GPL Code Patches

The modified GPL code patches used in this release are available at: GPL Code Patches Release 4.0.0