
Supported Features for version 4.8.0

Symbol Meaning
Fully Supported
Partially Supported
CE team enabled
Under development
Features Support status Notes
Install system from USB drive A single image is required for Primary and Secondary installations
Check system hardware Physical disks and 10 Gbit links are now detected and configured properly
Initialise NVMe drives on the main product The node must be rebooted in order to add and initialise a new NVMe drive. The drive can be initialised by selecting the NV-configure option at boot time
Initialise installation drive Installation drive must appear as a SCSI compatible device (e.g. SATA or SAS) or NVMe device
Install to local disk
Check Network configuration Verification of NIC connectivity and proper MTU configuration
Configure IP settings of the dashboard
Set cluster-wide shared encryption key
Configure internal channel mode
Configure external VLAN mode External 802.1Q VLAN switches are now supported
Configure dedicated Physical NIC MAC addresses for non-management traffic mode
Upgrade support
System upgrade support followed by reboot of the cluster Upgrade must currently be requested from the Sunlight support team
System upgrade support enabled by default Different parts of the platform can be independently upgraded with the help of the CE team
System upgrade support with live patching (no reboot required) A new service is currently active, allowing the automated patching of the SIM. The same procedure for the rest of the platform stacks is under development
Infrastructure Management
Overall infrastructure stats presentation on main dashboard page
Rack/Chassis/Blade visualisation - drag and drop
Chassis placement in rack visualisation tool
Visualisation of blade and chassis positioning
Fault tolerance visualisation
Core statistics presentation per compute node
Physical resources utilisation
Physical memory utilisation per node
Ability to reboot nodes from the UI
Drive utilisation and datastore membership
Storage replica repair (CLI/API)
Automated storage repair on reboot
Automated Storage replica repair
License Key management and enforcement
License system tethered to hardware
License system tethered to number of nodes
License system tethered to time period
Add/remove licences via the dashboard
Automated addition of new NexVisor nodes
Power on NexVisor nodes and auto discover them via the UI Requires physical installation and correct channel/encryption key settings
Simple install and deploy
PXE boot auto deployment of new nodes Used extensively in our internal labs, will be productised soon
Resource groups
Create resource groups that include individual cores, storage and network resources We do not impose memory assignment restrictions at the resource group level
Create fault tolerance policies for VMs running within those groups The fault tolerance policy is currently tested and supported across blades; chassis- and rack-level fault tolerance is not yet supported
Create metadata disk from the UI to facilitate restore of primary node in the event of a failure Template images are not restored during this process; an action via the UI is required in order to re-download the templates
Create resource groups that include PCI devices The PCI device must be attached to the selected NexVisor node
Visualisation of physical resources assigned to a group
Overcommit policy for virtual to physical core allocation
vCPU to pCPU pinning
vNUMA architecture enablement Physical NUMA architecture is supported at the hypervisor level, but presenting a corresponding virtual NUMA architecture to the VM is not yet enabled
Software Defined Storage
Group drives together into datastores
Assign different replication policies per datastore
Concatenate volumes together into larger volume - striping
Take storage level snapshots for fast clone and backup
Move content around between physical drives Requires assistance from support team
Achieve 1M IOPS for virtualised IO Requires guest support for best performance: multiqueue block support and a recent kernel. See the user docs for configuration requirements; a guest-side check is sketched after this section
Physical NVMe drive hotplug removal/addition
Virtual Disk Snapshotting Create/Rename/Delete a snapshot of an existing vDisk
Template Creation Create a Sunlight template based on a vDisk
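The 1M-IOPS figure above depends on multiqueue block (blk-mq) support inside the guest. The Python sketch below is one way to confirm, from within a Linux guest, that a virtual disk is actually using the multiqueue path; the device name "vda" is an assumption, so adjust it to the guest's disk naming and consult the Sunlight user docs for the actual configuration requirements.

```python
#!/usr/bin/env python3
"""Guest-side sketch: check whether a block device uses the multiqueue path."""

import os
import platform


def mq_queue_count(device: str) -> int:
    """Return the number of blk-mq hardware queues exposed for a device.

    Modern kernels expose one sub-directory per hardware queue under
    /sys/block/<device>/mq/; a missing directory means the device is not
    on the multiqueue path.
    """
    mq_dir = f"/sys/block/{device}/mq"
    if not os.path.isdir(mq_dir):
        return 0
    return len([d for d in os.listdir(mq_dir) if d.isdigit()])


if __name__ == "__main__":
    dev = "vda"  # assumption: adjust to the virtual disk name seen in the guest
    queues = mq_queue_count(dev)
    print(f"kernel: {platform.release()}")
    print(f"{dev}: {queues} blk-mq hardware queue(s)")
    if queues <= 1:
        print("Multiqueue is not active; peak IOPS figures are unlikely to be reached.")
```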
Software Defined Network
Attach to physical subnet
Attach to virtual network created between VMs running across cluster
Aggregate physical network paths to create high bandwidth channel
Network path redundancy Network availability is guaranteed in the event of losing no more than 50% of the physical network paths.
Create VLAN networks in software and attach to switch trunk interfaces
VLAN Network Fault Tolerance If one of the NICs attached to a VLAN virtual network fails, an alternative NIC automatically takes over connectivity
Allocate IP addresses via DHCP
Allocate IP addresses statically VMs automatically receive an IP from a statically assigned block of IPs. The assigned IP can be changed to a specific address as a separate step via the UI. On Windows, the network details must additionally be changed manually inside the VM after the IP is changed
Allow external DHCP service to allocate IP addresses
MAC address passthrough for physical NICs to virtual NICs Only on AWS at the moment
MAC address masking for passthrough NICs to support VM motion Only on AWS at the moment
Physical NIC cable removal, link up/down
IPv6 enable
VM instances
Fully supported list of tested instances published with each release
Deploy instances from template library
Manage template library to add/remove templates from the customer's own on-premise private template repository
Create flavors to describe resource attributes for instances
Add PCI device (GPU) to a VM One PCI device can be attached per VM. Non-validated GPUs must be explicitly added to the supported list by the CE team; check the GPU compatibility list for the models supported by default.
Add multiple volume drives to a VM (3 supported)
Add multiple volume drives to a VM, format accordingly and attach to any mount point Formatting and mounting must be handled by an administrator within the VM
Manage the extra vDisks from the UI, ephemeral vs long term standing vDisks
Add multiple virtual NIC interfaces to a VM Not supported for VMs booted from CD/ISO
Edit VM size attributes - add more vCPU, RAM, storage and network resources Adding VIFs is not supported for VMs booted from CD/ISO
Move a VM between physical nodes in a cluster (Cold migration)
Move a VM between physical nodes in a cluster (Warm migration)
Move a VM between physical nodes in a cluster (Hot migration)
VM functionality - reboot - linux
VM functionality - reboot - windows
VM functionality - upgrade linux For HVM Linux VMs, the UI does not currently show the new distro version after a distro upgrade.
VM functionality - upgrade windows
VM functionality - console access linux
VM functionality - console access windows
Reboot NexVisor
Fail over primary node (automated) When the primary node fails, the failover primary node takes over in order to restore the failed node's services
Single NV image to install A single installation image is used
AWS environment support
AWS floating IP assignment to VM instance VIF Requires Sunlight AWS dashboard
Application testing
MySQL Manual testing as documented on the Sunlight performance portal
PostgreSQL Manual testing as documented on the Sunlight performance portal
Oracle DB Manual testing as documented on the Sunlight performance portal
Hadoop Manual testing as documented on the Sunlight performance portal
External API support
Complete documentation for API support
Complete API support for all operations
CLI utility for non-UI based management The tool exists but is not yet exposed to system administrators
Openstack API support Coming soon
Sunlight Tested System limits [Note that these are not hard system limits but indicate what has been validated and is supported by Sunlight; a validation sketch follows this table]
Maximum supported number of VMs per NexVisor host (PV and HVM) 125
Maximum supported number of VMs per resource group 250
Maximum supported number of tolerated failures (blade/chassis/rack) per Resource Group FT policy 1 Failure tolerance is currently validated at the blade level
Maximum supported number of NexVisor hosts in a logical cluster 8
Maximum supported physical RAM per NexVisor 512GB
Maximum supported virtual RAM per VM 512GB
Maximum supported physical cores per NexVisor 96
Maximum supported virtual cores per VM 70
Maximum supported vDisk size 2TB
Maximum supported physical disk size 9TB
Maximum supported NVMe drives per NexVisor 16
Maximum supported SATA drives per NexVisor 8
Maximum number of vDisks per VM 3
Maximum number of vNICs per VM 8
Maximum number of physical NICs per NexVisor 4
Maximum number of physical NICs assigned to an encapsulated network 4
Maximum number of VMs that can be deployed in a batch at the same time 8
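The tested limits above lend themselves to a quick pre-deployment sanity check. The Python sketch below is purely illustrative: the limit values are copied from the table, while the dictionary keys and the shape of the plan argument are assumptions made for this example rather than part of any Sunlight tooling.

```python
"""Illustrative check of a planned deployment against the tested limits above."""

# Values taken from the "Sunlight Tested System limits" table; key names are
# assumptions made for this sketch.
TESTED_LIMITS = {
    "vms_per_host": 125,
    "vms_per_resource_group": 250,
    "hosts_per_cluster": 8,
    "vcpus_per_vm": 70,
    "vram_gb_per_vm": 512,
    "vdisks_per_vm": 3,
    "vnics_per_vm": 8,
    "vdisk_size_tb": 2,
}


def check_plan(plan: dict) -> list[str]:
    """Return a warning for every planned value that exceeds the validated envelope.

    Exceeding a value is not necessarily a hard failure (the table notes these are
    tested, not hard, limits), but it leaves the validated envelope.
    """
    return [
        f"{key}={plan[key]} exceeds tested limit {limit}"
        for key, limit in TESTED_LIMITS.items()
        if plan.get(key, 0) > limit
    ]


print(check_plan({"vms_per_host": 140, "vnics_per_vm": 4}))
# -> ['vms_per_host=140 exceeds tested limit 125']
```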

Primary Failover Timing Measurements

The table below shows how much time elapses from the moment the primary node fails until the highly available instance created on it becomes active and healthy again on the failover node; a measurement sketch follows the table.

Recovery Status on Failover node Maximum Elapsed time
Controller active on Failover node 1 minute, 22 seconds
Controller is reachable 1 minute, 24 seconds
Database METADATA synchronization 3 minutes, 35 seconds
Sunlight API becomes ready 6 minutes, 13 seconds
Highly available VM is active and healthy on Failover node (the reported time may vary according to the number and the size of the HA VMs to be migrated) 8 minutes, 12 seconds
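The elapsed times above could be reproduced by polling each recovery milestone on the failover node after inducing a primary-node failure and recording when it first reports healthy. The Python sketch below only illustrates that approach; the health-check URLs and the HTTP-200 criterion are hypothetical placeholders, not the actual Sunlight API.

```python
"""Sketch: record the elapsed time for each failover recovery milestone."""

import time
import urllib.request

# Hypothetical health endpoints on the failover node, one per milestone.
MILESTONES = {
    "controller reachable": "https://failover.example/health/controller",
    "metadata synchronised": "https://failover.example/health/metadata",
    "api ready": "https://failover.example/health/api",
    "ha vm healthy": "https://failover.example/health/ha-vm",
}


def wait_until_healthy(url: str, timeout_s: int = 900, interval_s: int = 5) -> None:
    """Poll a health URL until it answers HTTP 200 or the timeout expires."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return
        except OSError:
            pass  # node still recovering; keep polling
        time.sleep(interval_s)
    raise TimeoutError(f"{url} did not become healthy within {timeout_s}s")


# Assumes this script is started at the moment the primary-node failure is induced.
failure_time = time.monotonic()
for name, url in MILESTONES.items():
    wait_until_healthy(url)
    elapsed = time.monotonic() - failure_time
    print(f"{name}: {elapsed:.0f}s after primary failure")
```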