Frequently Asked Questions

Instance Creation

Q: Is there a limitation on RAM/vCPU support for a guest OS? For example, with Ubuntu 16.04 HVM, is the maximum RAM only 4096 MB?

A: No, there is no limitation from the Sunlight end; it is limited only by the maximum RAM that Ubuntu itself can support.

Q: For a Windows instance VM, can we use Windows Remote Desktop to access the VM instead of the VNC Console mentioned on the Sunlight website? In which cases is the VNC Console a must for accessing a Windows VM?

A: Yes, you can use RDP to access a Windows VM, and that is the preferred method. Only once, right after the VM is created, do you need to access it through the VNC Console to configure the IP address and enable Remote Desktop. After that, VNC is needed only if RDP is not working.

Q: Sunlight HCI can support VM snapshots. How many snapshots are supported?

A: Yes, Sunlight supports VM vdisk snapshots. More info

Q: Can Sunlight HCI support VM cloning?

A: Yes. To clone a VM on Sunlight, first make a backup of the VM; that backup then becomes available as a template from which another VM can be created.

Storage

Q: What is the purpose of DB Metadata? Can a datastore work properly without DB Metadata enabled?

A: DB Metadata stores the metadata of the entire cluster, which is needed to recreate the cluster after a failure. One use case: if the master node fails and one of the assigned slave nodes needs to come up as the next master, it requires the metadata to recreate the cluster. It therefore makes sense to keep the metadata on a datastore with two replicas, so that even if one node goes down, the metadata is still available on a disk on the other node. You can also create a datastore without enabling “DB Metadata”. Usually only one datastore is used to store metadata; otherwise duplicate copies of the metadata would occupy a lot of disk space.

Q: Besides “1 Replica” and “2 Replicas”, are there other options we can choose when there are enough nodes and datastores?

A: As of today we support a maximum of 2 replicas. More options will come in the future.

Q: How is data spread across the nodes?

A: We support mirroring (2 replicas) on a datastore, which means the data is spread across exactly two nodes. If the datastore has disk members from more than two nodes, a dynamic algorithm selects the node that holds the mirrored copy, guaranteeing that no data is lost when one node goes down. The dashboard does not currently show which nodes the algorithm selects in this scenario, but we will enhance it so users can clearly see which disks/nodes hold the redundant copies.

Q: Does Sunlight have a software-defined storage product that can work with OpenStack?

A: The Sunlight platform is a complete hyperconverged stack that manages software-defined storage, network and compute resources internally, completely independently of OpenStack. There is an upstream community project, which the team contributes to, that provides some basic driver integration allowing OpenStack to control a Sunlight cluster; however, this is not a Sunlight product and it is not supported by Sunlight. Please find this community project here

Network

Q: Virtual Network: it allows administrators to aggregate the physical network paths (A and B) in order to scale up throughput. Is this feature like “network teaming”? If so, are there any requirements on the network switch? For example, is LACP a must on the switch for this feature to work?

A: Yes, this is like NIC teaming, and the Sunlight HCI stack takes care of the implementation. No configuration is needed on the switch side.

Q: Is there a built-in internal DHCP server in Sunlight HCI?

A: Yes. When the DHCP type is "Internal", a built-in internal DHCP server is running and you can use it. However, to reach any resource outside the network (outside the cluster), you need a proper router/gateway configuration.
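As a quick way to confirm this behaviour from inside a guest, the following is a minimal sketch (the interface name eth0 and the external test address are illustrative assumptions, not Sunlight specifics):

    # Inside a VM attached to an "Internal" DHCP network:
    ip addr show eth0      # address leased by the built-in DHCP server
    ip route show          # a default route only exists if a gateway was configured
    ping -c 3 8.8.8.8      # fails unless a proper router/gateway to the outside exists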

Q: Do you have a scenario about network configuration? Under which conditions is each DHCP type recommended? To me, “Static” and “External” seem to be the same thing.

A: The three types differ as follows.

External: the user does not need to specify the subnet/mask/gateway. The VM is connected directly to the physical network and requests a DHCP IP address from the DHCP server located on the physical network, i.e. outside of Sunlight HCI.

Internal: the user needs to specify the network information. A DHCP server is created on Sunlight for this specific network, and the VM requests its address from there.

Static: the same as “Internal” in that a new network is created, but there is no DHCP server; the VM is allocated and configured with an IP address statically during the creation stage.

Q: As far as I know, if a vNIC of a VM is tagged with a VLAN in VMware, the physical switch port it connects to must be configured for that VLAN. Is it the same for Sunlight HCI VLANs, or is there nothing we need to do on the physical switch?

A: For a network with VLAN enabled on Sunlight, someone has to configure the corresponding VLAN on the physical switch port. The Sunlight stack does not talk to the switch and will not configure it automatically.
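For illustration only, a trunk-port configuration on a Cisco IOS-style switch might look like the sketch below; the interface name and VLAN ID 100 are assumptions, so consult your switch vendor's documentation for the exact syntax:

    ! Allow tagged VLAN 100 traffic on the port facing the Sunlight node
    interface GigabitEthernet1/0/1
     switchport mode trunk
     switchport trunk allowed vlan 100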

Templates/Images

Q: When creating a VM, does the image refer to the list from Image Template – Local?

A: Yes, it refers to the local repository. There are different types of repositories: a) the “Local” repository, which contains templates that have been downloaded to your local cluster; b) the “sunlight” cloud repository, which shows all the templates available on the Sunlight Cloud repository for any customer to use (to use any of these templates, you need to download them to the local repository first); c) if you create your own repository, whether on the cloud or local, and add it to the cluster, it is displayed as an additional tab.

Q: Must the template server be located on the same network as the controller?

A: The controller is connected to the Physical NIC 0 network, so the template server can be placed on that physical network. The template server could also be connected to a virtual network on Sunlight, as long as that network is routable to the controller.
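A quick way to verify routability is a connectivity check from the controller's network toward the template server. This is a minimal sketch; both addresses are made-up examples:

    # Hypothetical addresses: 10.0.0.10 = controller, 10.1.0.20 = template server
    ping -c 3 10.1.0.20       # basic reachability from the controller's network
    traceroute 10.1.0.20      # confirm a route exists between the two networks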

Q: Where does the space used for local templates come from? Is it the space of the installation drive of the Sunlight stack?

A: Yes, this is the remaining space of the installation drive.

High Availability / Fault Tolerance

Q: Is the master node failover feature enabled by default, or does it have to be configured manually?

A: Master node failover has to be configured manually. More info

Q: What resources are available in an HA setup?

A: In a setup with HA enabled, only half of the CPU cores and memory are available for use; this guarantees that resources remain reserved for the VMs to be migrated to when one node fails. For example, on a cluster of two identical nodes, the usable capacity is roughly that of a single node. There is a short disruption to a VM during failover, but it boots up on the failover node shortly afterwards.

Performance

Q: What are the requirements on the VM’s operating system in order to get the best performance?

A: Two things at the operating-system level affect performance: 1) multi-queue I/O schedulers (pretty much standard in recent kernel versions of all distros, such as Ubuntu, RHEL and CentOS) Reference; and 2) a multi-queue xen-blkfront driver implementation that takes advantage of the multi-queue I/O scheduler API (available since kernel 4.5, but not available on stock CentOS distros, which need an updated kernel) Reference.
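To check whether a guest meets these requirements, a minimal sketch follows (the device name xvda is an example; substitute your own virtual disk):

    # Kernel version: multi-queue xen-blkfront needs >= 4.5
    uname -r
    # I/O scheduler in use for the virtual disk (multi-queue schedulers
    # such as mq-deadline or none appear here on blk-mq kernels)
    cat /sys/block/xvda/queue/scheduler
    # On a blk-mq device, per-queue directories exist under mq/
    ls /sys/block/xvda/mq/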

Q: What is the purpose of the prep_vm.sh script?

A: When the VM is using a multi-queue xen-blkfront driver, each of the VM’s virtual disks has N queues (as many as the number of CPU cores). Those queues use interrupts to signal the VM that an operation has completed. When a VM first boots, all of those interrupts are handled by CPU0. What prep_vm.sh does is pin each queue’s interrupt to a different CPU, for every virtual disk of the VM. After that, instead of all the interrupts hammering CPU0, they are evenly distributed across all of the VM’s CPU cores. The interrupt distribution can be checked by running “cat /proc/interrupts | grep blkif”.
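For illustration, a minimal sketch of this kind of interrupt pinning is shown below. It is not the actual prep_vm.sh script; it assumes the queue interrupts appear with "blkif" in their names in /proc/interrupts:

    #!/bin/bash
    # Round-robin each blkif queue interrupt onto a different CPU.
    cpu=0
    ncpus=$(nproc)
    for irq in $(grep blkif /proc/interrupts | cut -d: -f1 | tr -d ' '); do
        echo "$cpu" > "/proc/irq/$irq/smp_affinity_list"   # pin this queue's IRQ
        cpu=$(( (cpu + 1) % ncpus ))
    done
    # Verify the resulting distribution across CPUs:
    cat /proc/interrupts | grep blkif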