
Supported Features for version 2.2.0

Support status legend: Fully Supported, Partially Supported, Under development, Unsupported
Features Support status Notes
Installation
Install system from USB drive Single image required for Master and Slave installations
Initialise NVMe drives Will be possible to execute at runtime via the UI soon
Initialise installation drive Installation drive must be presented as a SCSI compatible device (e.g. SATA or SAS) or NVMe device
Install to local disk
Configure IP settings of the dashboard
Set cluster-wide shared encryption key
Configure internal channel mode
Configure external VLAN mode External 802.1Q VLAN switch compatible
Configure dedicated Physical NIC MAC addresses for non-management traffic mode
Upgrade support
System upgrade support followed by reboot of the cluster Upgrade must currently be requested from the Sunlight support team.
Self-initiated system upgrade support Coming soon
System upgrade support with live patching, no reboot required Coming soon
Infrastructure Management
Overall infrastructure stats presentation on main dashboard page
Rack/Chassis/Blade visualisation - drag and drop
Chassis placement in rack visualisation tool
Visualisation of blade and chassis positioning
Fault tolerance visualisation
Per compute node core stats presentation
Physical resources utilisation
Physical memory utilisation per node
Ability to reboot nodes from the UI
Drive utilisation and datastore membership
Storage replica repair (manual UI)
Storage replica repair (CLI/API)
Automated storage repair on reboot
Storage replica repair (automated) Coming soon
License Key management and enforcement
License system tied to hardware
License system tied to number of nodes
License system tied to time period
Add/remove licences via the dashboard
Automated addition of new NexVisor nodes
Power on NexVisor nodes and auto discover them via the UI Requires physical installation and correct channel/encryption key settings
Simple install and deploy
PXE boot auto deployment of new nodes Used extensively in our internal labs, will be productised soon
Resource groups
Create resource groups that include individual cores, storage and network resources We do not impose memory assignment restrictions on the resource group level
Create fault tolerance policies for VM running within those groups Fault tolerance policy currently tested and supported across blades. Full support for chassis and rack fault tolerance will be released soon.
Create metadata disk from the UI to facilitate restore of master node in the event of a failure During this process template images are not restored. An action via the UI is required in order to re-download the templates
Visualisation of physical resources assigned to a group
Overcommit policy for virtual to physical core allocation
vCPU to pCPU pinning
vNUMA architecture enablement Physical NUMA architecture is supported at the hypervisor level, but presenting a corresponding virtual NUMA architecture to the VM is not yet enabled
Software Defined Storage
Group NVMe drives together into datastores
Assign different replication policies per datastore
Concatenate volumes together into a larger volume - striping
Take storage-level snapshots for fast clone and backup
Move content around between physical drives Requires assistance from support team
Achieve 1M IOPS for virtualised IO Requires guest support to achieve best performance - multiqueue block support + recent kernel (see the check sketched at the end of this section). See user docs for configuration requirements.
Physical NVMe drive hotplug removal/addition
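The 1M IOPS figure above depends on multiqueue block (blk-mq) support inside the guest. As a rough self-check, the sketch below (assuming a Linux guest; the device name is only an example) counts the hardware queues a virtual disk exposes via sysfs - a single queue usually means multiqueue is not active.

```python
# Minimal sketch: count the blk-mq hardware queues a virtual block device
# exposes in a Linux guest. The device name is an example only.
from pathlib import Path

def hw_queue_count(device: str) -> int:
    """Return the number of blk-mq hardware queues exposed for `device`."""
    mq_dir = Path("/sys/block") / device / "mq"
    if not mq_dir.is_dir():
        return 0  # not a blk-mq device, or the device name is wrong
    return sum(1 for entry in mq_dir.iterdir() if entry.is_dir())

if __name__ == "__main__":
    dev = "xvda"  # example name; use the guest's actual disk (vda, nvme0n1, ...)
    print(f"{dev}: {hw_queue_count(dev)} hardware queue(s)")
```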
Software Defined Network
Attach to physical subnet
Attach to virtual network created between VMs running across cluster
Aggregate physical network paths to create high bandwidth channel
Create VLAN networks in software and attach to switch trunk interfaces
VLAN Network Fault Tolerance In case one of the available NICs attached to a VLAN virtual network fails, an alternative one will be automatically available to take over the connectivity
Allocate IP addresses via DHCP
Allocate IP addresses statically VMs get an IP automatically from a statically assigned block of IPs. The user can change the IP given to a specific one as a separate step via the UI. Windows users have an additional step of manually changing the network details inside the VM after changing the IP
Allow external DHCP service to allocate IP addresses
MAC address passthrough for physical NICs to virtual NICs Only on AWS at the moment
MAC address masking for passthrough NICs to support VM motion Only on AWS at the moment
Physical NIC cable removal, link up/down
IPv6 enablement
VM instances
Fully supported list of tested instances declared with each release
Deploy instances from template library
Manage template library to add/remove templates from customer own on-prem private template repo
Manage template library to add/remove templates from Sunlight hosted private customer template repo Self-service template upload is supported; template deletion must be requested via support ticket
Create flavours to describe resource attributes for instances
Add multiple volume drives to a VM (3 supported)
Add multiple volume drives to a VM, format accordingly and attach to any mount point Must be handled by admin within the VM; an in-guest example is sketched at the end of this section
Manage the extra vDisks from the UI (extra disks vs. long-standing vDisks) Coming soon
Add multiple virtual NIC interfaces to a VM Not supported for VMs booted from CD/ISO
Edit VM size attributes - add more vCPU, RAM, storage and network resources Adding VIFs is not supported for VMs booted from CD/ISO
Move a VM between physical nodes in a cluster (Cold migration)
Move a VM between physical nodes in a cluster (Warm migration)
Move a VM between physical nodes in a cluster (Hot migration)
VM functionality - reboot - linux
VM functionality - reboot - windows
VM functionality - upgrade linux For HVM Linux VMs, a distro upgrade is not currently reflected in the UI as the VM's new distro version.
VM functionality - upgrade windows
VM functionality - console access linux
VM functionality - console access windows
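For the extra volume drives noted above, formatting and mounting is handled by the admin inside the guest. The sketch below shows the usual in-guest steps for a Linux VM; the device path and mount point are examples only, and mkfs is destructive, so verify the device before running it.

```python
# Minimal in-guest sketch for a newly attached vDisk on a Linux VM:
# create a filesystem and attach it to a mount point. Run as root.
import subprocess
from pathlib import Path

def format_and_mount(device: str, mount_point: str) -> None:
    """Create an ext4 filesystem on `device` (destructive!) and mount it."""
    Path(mount_point).mkdir(parents=True, exist_ok=True)
    subprocess.run(["mkfs.ext4", device], check=True)        # wipes the vDisk
    subprocess.run(["mount", device, mount_point], check=True)
    # Add an /etc/fstab entry separately if the mount should survive reboots.

if __name__ == "__main__":
    # Example device path and mount point only - confirm the device first.
    format_and_mount("/dev/xvdb", "/mnt/data")
```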
NexVisor
Reboot NexVisor
Fail over master node (automated) Once the master node fails, the failover master node takes over and provides services by recovering the failed node
Single NV image to install A single installation image is used
Container and VM cluster support
Deploy VM cluster
Deploy Docker swarm clusters, fully configured We are using Ubuntu 16.04 and Docker 18.03 as our base Docker-enabled distribution
Deploy Docker to a different OS Requires external scripts (e.g. Ansible)
Deploy a different Docker version Requires external scripts (e.g. Ansible); see the sketch at the end of this section
Manage number of masters and slaves from the UI
Deploy Portainer UI to manage the cluster The currently installed Portainer version is 1.19.2, which is tested and working with the current Docker template version
Kubernetes cluster deployment support We are using a third-party Ansible deployment script
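Deploying Docker to a different OS or pinning a different Docker version relies on external scripts such as Ansible, as noted above. A minimal sketch of driving such a playbook from Python follows; the playbook name, inventory file and docker_version variable are hypothetical placeholders for whatever Docker-install playbook is actually used.

```python
# Hedged sketch: run an external (hypothetical) Ansible playbook that
# installs a specific Docker version on the target VMs.
import subprocess

def install_docker(inventory: str, version: str) -> None:
    """Run a Docker-install playbook against the hosts in `inventory`."""
    subprocess.run(
        [
            "ansible-playbook",
            "-i", inventory,                      # e.g. a hosts file listing the VMs
            "install_docker.yml",                 # hypothetical playbook name
            "--extra-vars", f"docker_version={version}",
        ],
        check=True,
    )

if __name__ == "__main__":
    install_docker("hosts.ini", "18.03")  # example inventory and Docker version
```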
AWS environment support
AWS floating IP assignment to VM instance vif Requires Sunlight AWS dashboard
Application testing
MySQL Manual testing as documented via Sunlight performance portal
PostgreSQL Manual testing as documented via Sunlight performance portal
Oracle DB Manual testing as documented via Sunlight performance portal
Hadoop Manual testing as documented via Sunlight performance portal
Fio
iperf
External API support
Full documentation for API support
Full API support for all operations
CLI utility for non-UI based management Tool exists but not exposed to system admins yet
Openstack API support Coming soon
Sunlight Tested System limits [Note that these are not hard system limits but indicate what is validated and supported by Sunlight]
Maximum supported number of VMs per NexVisor host (PV and HVM) 40
Maximum supported number of VMs per resource group 120
Maximum supported number of tolerated failures (blade/chassis/rack) per Resource Group FT policy 1 Currently we have validated failure tolerance at the blade level
Maximum supported number of NexVisor hosts in a logical cluster 8
Maximum supported physical RAM per NexVisor 512GB
Maximum supported virtual RAM per VM 180GB
Maximum supported physical cores per NexVisor 96
Maximum supported virtual cores per VM 32
Maximum supported vDisk size 2TB
Maximum supported physical disk size 2TB
Maximum supported NVMe drives per NexVisor 8
Maximum supported SATA drives per NexVisor 8
Maximum number of vDisks per VM 3
Maximum number of vNICs per VM 2
Maximum number of physical NICs per NexVisor 4
Maximum number of physical NICs assigned to an encapsulated network 4
Maximum number of VMs that can be deployed in a batch at the same time 8

Master Failover Timing Measurements

The table below shows the maximum time needed, from the moment the master node fails, for the highly available instance created on it to become active and healthy again on the failover node. A sketch of one way to measure such an interval follows the table.

Recovery Status on Failover node Maximum Elapsed time
Controller active on Failover node 1 minute, 22 seconds
Controller is reachable 1 minute, 24 seconds
Database METADATA synchronization 3 minutes, 35 seconds
Sunlight API becomes ready 6 minutes, 13 seconds
Highly available VM is active and healthy on Failover node 8 minutes, 12 seconds
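One simple way to take a measurement such as "Sunlight API becomes ready" is to poll an HTTP endpoint on the failover node and record the elapsed time until it answers. The sketch below illustrates this; the health-check URL is hypothetical and should be replaced with the actual dashboard/API address being observed.

```python
# Hedged sketch: poll an HTTP endpoint until it answers with 200 and
# report how long that took. The URL below is a hypothetical placeholder.
import time
import urllib.error
import urllib.request

def seconds_until_ready(url: str, timeout_s: float = 900.0, poll_s: float = 5.0) -> float:
    """Poll `url` until it returns HTTP 200 and report the elapsed seconds."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return time.monotonic() - start
        except (urllib.error.URLError, OSError):
            pass  # endpoint not reachable yet; keep polling
        time.sleep(poll_s)
    raise TimeoutError(f"{url} not ready within {timeout_s} seconds")

if __name__ == "__main__":
    elapsed = seconds_until_ready("https://failover-node.example/api/health")
    print(f"Ready after {elapsed:.0f} seconds")
```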