Installation |
|
|
Install system from USB drive |
|
Single image required for Master and Slave installations |
Check system hardware |
|
Physical disks and 10G links are now detected and configured properly
Initialise NVMe drives |
|
The ability to initialise NVMe drives at runtime via the UI will be available soon
Initialise installation drive |
|
The installation drive must appear as a SCSI-compatible device (e.g. SATA or SAS) or an NVMe device
Install to local disk |
|
|
Check Network configuration |
|
Verification of NIC connectivity and correct MTU configuration
Configure IP settings of the dashboard |
|
|
Set cluster-wide shared encryption key |
|
|
Configure internal channel mode |
|
|
Configure external VLAN mode |
|
External 802.1Q VLAN switches are now supported
Configure dedicated physical NIC MAC addresses for non-management traffic mode
|
|
|
|
|
Upgrade support |
|
|
System upgrade support followed by reboot of the cluster |
|
Upgrade must currently be requested from the Sunlight support team |
System upgrade support enabled by default |
|
Coming soon |
System upgrade support with live patching (no reboot required) |
|
Coming soon |
|
|
|
Infrastructure Management |
|
|
Overall infrastructure stats presented on the main dashboard page
|
|
Rack/Chassis/Blade visualisation - drag and drop |
|
|
Chassis placement in rack visualisation tool |
|
|
Visualisation of blade and chassis positioning
|
|
Fault tolerance visualisation |
|
|
Core statistics presentation per compute node |
|
|
Physical resource utilisation
|
|
Physical memory utilisation per node |
|
|
Ability to reboot nodes from the UI |
|
|
Drive utilisation and datastore membership |
|
|
Storage replica repair (manual UI) |
|
|
Storage replica repair (CLI/API) |
|
|
Automated storage repair on reboot |
|
|
Automated Storage replica repair |
|
Coming soon |
|
|
|
License Key management and enforcement |
|
|
License system tethered to hardware |
|
|
License system tethered to number of nodes |
|
|
License system tethered to time period |
|
|
Add/remove licences via the dashboard |
|
|
|
|
|
Automated addition of new NexVisor nodes
|
|
Power on NexVisor nodes and auto-discover them via the UI
|
Requires physical installation and correct channel/encryption key settings |
Simple install and deploy |
|
|
PXE boot auto deployment of new nodes |
|
Used extensively in our internal labs; it will be productised soon
|
|
|
Resource groups |
|
|
Create resource groups that include individual cores, storage and network resources |
|
We do not impose memory assignment restrictions at the resource group level
Create fault tolerance policies for VM running within those groups |
|
Fault tolerance policy currently tested and supported across blades. Full support for chassis and rack fault tolerance will be released soon |
Create metadata disk from the UI to facilitate restore of master node in the event of a failure |
|
During this process template images are not restored; an action via the UI is required to re-download the templates
Visualisation of physical resources assigned to a group |
|
|
Overcommit policy for virtual to physical core allocation |
|
|
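In general terms, a core overcommit policy caps the total vCPUs a resource group may allocate at the physical core count multiplied by the policy ratio. A minimal sketch of that arithmetic (the 4:1 ratio is an illustrative assumption, not a product default):

```python
# Illustrative overcommit arithmetic; the 4:1 ratio is an example
# policy value, not a product default.
physical_cores = 96       # maximum supported physical cores per NexVisor
overcommit_ratio = 4      # example: allow 4 vCPUs per physical core

max_vcpus = physical_cores * overcommit_ratio
print(f"{physical_cores} pCPUs at {overcommit_ratio}:1 -> {max_vcpus} vCPUs")
```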
vCPU to pCPU pinning |
|
|
vNUMA architecture enablement |
|
Physical NUMA architecture is supported at the hypervisor level, but presenting a corresponding virtual NUMA architecture to the VM is not yet enabled |
|
|
|
Software Defined Storage |
|
|
Group drives together into datastores |
|
|
Assign different replication policies per datastore |
|
|
Concatenate volumes together into a larger volume (striping)
|
|
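For intuition, striping distributes logical blocks round-robin across the member volumes one stripe unit at a time. The sketch below shows that generic arithmetic; the sizes are illustrative and this is not Sunlight's specific on-disk layout:

```python
# Generic striping arithmetic (illustrative; not Sunlight's specific
# on-disk layout). Logical blocks are distributed round-robin across
# member volumes, one stripe unit at a time.
STRIPE_UNIT = 64 * 1024            # example stripe unit: 64 KiB
BLOCK_SIZE = 4096                  # example logical block size: 4 KiB
BLOCKS_PER_UNIT = STRIPE_UNIT // BLOCK_SIZE

def locate(lba: int, n_volumes: int) -> tuple[int, int]:
    """Map a logical block address to (member volume, offset in volume)."""
    unit = lba // BLOCKS_PER_UNIT      # stripe unit containing the LBA
    volume = unit % n_volumes          # units rotate across member volumes
    row = unit // n_volumes            # complete stripes before this unit
    return volume, row * BLOCKS_PER_UNIT + lba % BLOCKS_PER_UNIT

print(locate(100, 4))                  # -> (2, 20) with the sizes above
```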
Take storage level snapshots for fast clone and backup |
|
|
Move content around between physical drives |
|
Requires assistance from the support team
Achieve 1M IOPS for virtualised IO
|
Requires guest support for best performance (multiqueue block support and a recent kernel); see the user docs for configuration requirements. A quick guest-side check is sketched below
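A minimal sketch, assuming a Linux guest: a device served by the multi-queue (blk-mq) I/O path exposes queue directories under /sys/block/<dev>/mq.

```python
# Minimal sketch, assuming a Linux guest: check whether each block
# device exposes blk-mq queue directories in sysfs, i.e. is on the
# multi-queue I/O path mentioned above.
import os

def uses_blk_mq(device: str) -> bool:
    """True if the device has at least one blk-mq hardware queue."""
    mq_path = f"/sys/block/{device}/mq"
    return os.path.isdir(mq_path) and bool(os.listdir(mq_path))

for dev in sorted(os.listdir("/sys/block")):
    print(dev, "blk-mq" if uses_blk_mq(dev) else "single-queue")
```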
Physical NVMe drive hotplug removal/addition |
|
|
|
|
|
Software Defined Network |
|
|
Attach to physical subnet |
|
|
Attach to virtual network created between VMs running across cluster |
|
|
Aggregate physical network paths to create high bandwidth channel |
|
|
Network path redundancy |
|
Network availability is guaranteed provided no more than 50% of the physical network paths are lost; e.g. with four aggregated NICs, connectivity is maintained while at least two remain up
Create VLAN networks in software and attach to switch trunk interfaces |
|
|
VLAN Network Fault Tolerance |
|
If one of the NICs attached to a VLAN virtual network fails, an alternative NIC automatically takes over connectivity
Allocate IP addresses via DHCP |
|
|
Allocate IP addresses statically |
|
VMs automatically receive an IP from a statically assigned block of IPs. The user can change the assigned IP to a specific one as a separate step via the UI. Windows users must additionally change the network details manually inside the VM after changing the IP, as sketched below
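A minimal sketch of that Windows-only step, run inside the guest; the interface name and addresses are placeholders:

```python
# Hypothetical helper for the Windows-only step described above; run
# inside the guest after changing the IP via the dashboard. Interface
# name and addresses are placeholders.
import subprocess

def set_windows_static_ip(interface, ip, netmask, gateway):
    """Apply a static IPv4 address inside a Windows guest via netsh."""
    subprocess.run(
        ["netsh", "interface", "ip", "set", "address",
         f"name={interface}", "static", ip, netmask, gateway],
        check=True,
    )

set_windows_static_ip("Ethernet", "192.0.2.25", "255.255.255.0", "192.0.2.1")
```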
Allow external DHCP service to allocate IP addresses |
|
|
MAC address passthrough for physical NICs to virtual NICs |
|
Only on AWS at the moment |
MAC address masking for passthrough NICs to support VM motion |
|
Only on AWS at the moment |
Physical NIC cable removal, link up/down |
|
|
IPv6 enablement
|
|
|
|
|
VM instances |
|
|
Fully supported list of tested instances published with each release |
|
|
Deploy instances from template library |
|
|
Manage the template library to add/remove templates from the customer's own on-premise private template repository
|
|
Manage the template library to add/remove templates from a Sunlight-hosted private customer template repository
|
Self-service template upload is supported; template deletion must be requested via a support ticket
Create flavours to describe resource attributes for instances |
|
|
Add multiple volume drives to a VM (3 supported) |
|
|
Add multiple volume drives to a VM, format them accordingly and attach them to any mount point
|
Must be handled by an administrator within the VM |
Manage the extra vDisks from the UI (temporary extra disks vs. long-standing vDisks)
|
Coming soon |
Add multiple virtual NIC interfaces to a VM |
|
Not supported for VMs booted from CD/ISO
Edit VM size attributes - add more vCPU, RAM, storage and network resources |
|
Adding VIFs is not supported for VMs booted from CD/ISO |
Move a VM between physical nodes in a cluster (Cold migration) |
|
|
Move a VM between physical nodes in a cluster (Warm migration) |
|
|
Move a VM between physical nodes in a cluster (Hot migration) |
|
|
VM functionality - reboot - Linux


VM functionality - reboot - Windows


VM functionality - upgrade - Linux

For HVM Linux VMs, the distro upgrade is not currently reflected in the UI as the new distro version of the VM
VM functionality - upgrade - Windows


VM functionality - console access - Linux


VM functionality - console access - Windows
|
|
|
|
|
NexVisor
|
|
Reboot NexVisor
|
|
Fail over master node (automated) |
|
When the master node fails, the failover master takes over in order to continue providing services and to recover the failed node
Single NV image to install |
|
Only a single installation image is used
|
|
|
Container and VM cluster support
|
|
Deploy VM cluster |
|
|
Deploy Docker swarm clusters, fully configured |
|
We use Ubuntu 16.04 with Docker 18.03 as our base Docker-enabled distribution
Deploy Docker to a different OS
|
Requires external scripts (e.g. Ansible)
Deploy a different Docker version
|
Requires external scripts (e.g. Ansible); a minimal driver sketch follows
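A minimal sketch of that external-script path, assuming Ansible is installed on a control host; the inventory file, playbook and docker_version variable are hypothetical examples:

```python
# Minimal sketch of the external-script path: drive an Ansible playbook
# from Python. The inventory file, playbook name and docker_version
# variable are hypothetical examples.
import subprocess

def deploy_docker(inventory, playbook, docker_version):
    """Run an Ansible playbook that installs a given Docker version."""
    subprocess.run(
        ["ansible-playbook", "-i", inventory, playbook,
         "-e", f"docker_version={docker_version}"],
        check=True,
    )

deploy_docker("hosts.ini", "install_docker.yml", "18.03")
```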
Manage the number of masters and slaves from the UI |
|
|
Deploy Portainer UI to manage the cluster |
|
The currently installed Portainer version is 1.19.2, which has been tested and works with the current Docker template version
Kubernetes cluster deployment support |
|
We use a third-party Ansible deployment script
|
|
|
AWS environment support |
|
|
AWS floating IP assignment to a VM instance VIF
|
Requires the Sunlight AWS dashboard
|
|
|
Application testing |
|
|
MySQL |
|
Manual testing as documented on the Sunlight performance portal |
PostgreSQL |
|
Manual testing as documented on the Sunlight performance portal |
Oracle DB |
|
Manual testing as documented on the Sunlight performance portal |
Hadoop |
|
Manual testing as documented on the Sunlight performance portal |
Fio |
|
|
iperf |
|
|
|
|
|
External API support |
|
|
Complete documentation for API support |
|
|
Complete API support for all operations |
|
|
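For illustration only, API-driven management might look like the sketch below; the base URL, endpoint and token header are assumptions, not the documented Sunlight API:

```python
# Illustration only: the base URL, endpoint and auth header below are
# assumptions, not the documented Sunlight API. Consult the published
# API documentation for the real paths and authentication scheme.
import requests

BASE_URL = "https://dashboard.example.local/api"   # placeholder address
HEADERS = {"Authorization": "Bearer <token>"}      # placeholder token

resp = requests.get(f"{BASE_URL}/instances", headers=HEADERS, timeout=30)
resp.raise_for_status()
for vm in resp.json():                             # assumed JSON list of VMs
    print(vm.get("name"), vm.get("state"))
```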
CLI utility for non-UI based management |
|
The tool exists but is not yet exposed to system administrators
OpenStack API support
|
Coming soon |
|
|
|
Sunlight Tested System limits |
|
[Note that these are not hard system limits but indicate what is validated and supported by Sunlight] |
Maximum supported number of VMs per NexVisor host (PV and HVM): 40

Maximum supported number of VMs per resource group: 120

Maximum supported number of tolerated failures (blade/chassis/rack) per Resource Group FT policy: 1
Currently, failure tolerance has been validated at the blade level
Maximum supported number of NexVisor hosts in a logical cluster: 8

Maximum supported physical RAM per NexVisor: 512GB

Maximum supported virtual RAM per VM: 180GB

Maximum supported physical cores per NexVisor: 96

Maximum supported virtual cores per VM: 32

Maximum supported vDisk size: 2TB

Maximum supported physical disk size: 2TB

Maximum supported NVMe drives per NexVisor: 8

Maximum supported SATA drives per NexVisor: 8

Maximum number of vDisks per VM: 3

Maximum number of vNICs per VM: 4

Maximum number of physical NICs per NexVisor: 4

Maximum number of physical NICs assigned to an encapsulated network: 4

Maximum number of VMs that can be deployed in a batch at the same time: 8
|