Getting Started
The Sunlight support team is standing by to assist with any installation and configuration issues you may encounter. Requests can be submitted via the support portal.
Note
If your system shipped with the Sunlight Enterprise Software platform preinstalled, you can skip this step and navigate straight to System Configuration.
Note
To gain access to the installer images, please submit a request via our online support portal and a Sunlight representative will get in touch to provide instructions.
To begin the Sunlight Enterprise Software platform installation, please ensure that at least one 32 GB USB flash drive is available, such as the 32 GB SanDisk Ultra Fit USB 3.0 flash drive.
Once the USB image is downloaded, write it to the raw USB flash drive. The image contains its own partition structure, so it must be written directly to the raw device rather than to an existing partition. The method for writing the image to the USB flash drive varies depending on the operating system you are using.
For additional information on writing the provided images, please visit the following links:
- For Mac systems
- For Windows systems
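On Linux or macOS, the `dd` utility is the usual way to write such an image; as an illustration, the following minimal Python sketch performs the same raw copy. The image file name and device path are placeholders: double-check the device path (e.g. with `lsblk` or `diskutil list`) before writing, as the target device's contents are destroyed.

```python
import shutil

IMAGE_PATH = "sunlight-installer.img"  # hypothetical file name; use your downloaded image
DEVICE_PATH = "/dev/sdX"               # placeholder; replace with the raw USB device

# Copy the image byte-for-byte onto the raw device (not a partition),
# because the image carries its own partition table. Requires root.
with open(IMAGE_PATH, "rb") as image, open(DEVICE_PATH, "wb") as device:
    shutil.copyfileobj(image, device, length=4 * 1024 * 1024)  # 4 MiB chunks
```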
At this stage you must decide which physical blade to designate as the master node that hosts the cluster controller. In practice, any blade can be used as the master node. Please proceed to power on the 4 nodes for which the USB boot image has been downloaded.
All of the 10 Gbit LOM NICs on the blades have to be cabled together into a single logical Layer 2 Ethernet network segment. For redundancy, and to ensure the optimal performance of the High Availability algorithms, we strongly recommend using separate switches as indicated in the diagram below:
Please read the BIOS settings tutorials for the different hardware platforms to make sure the system meets the minimum requirements for running the Sunlight Enterprise Software.
Note
A keyboard must be attached for navigating through the configuration utility (for remote access, use the IPMI interface). Use the TAB key to move through the menus and input fields.
Insert the USB drive and boot up the system. The installation process is performed through the following steps:
Press 'ctrl+x' to initiate the installation process
The first step is to inspect the system hardware information, in order to ensure that the system meets the requirements for the Sunlight Enterprise Software platform installation.
Select continue to proceed to the hardware requirements check:
The results are displayed along with a message indicating whether the system meets the specified requirements. Press ok to proceed:
In the next step, select "continue" to install the Sunlight platform onto a local drive:
Select the local installation drive. This wizard is used to install the platform to the destination drive (local SATA/NVMe drive):
Please confirm your selection.
Check the installation progress bar.
Once complete, the system will prompt you to remove the USB drive and reboot the system:
Following the reboot, the installation continues with the system configuration.
System Configuration
Step 1 of 8
Press 'Continue' to check the hardware information
The initial step checks the system hardware information once again, in order to detect any hardware changes. Press ok to proceed (if the system still meets the requirements):
Step 2 of 8
The second step is the initialisation of the NVMe drives. All NVMe drives must be initialised in order to use them either for local/distributed storage (when the system is booted up and running) or as the installation drive. Please select each drive on the list and go through the initialisation wizard.
Click continue:
Choose the drive to initialise:
Select confirm once the initialisation process is complete:
Once the initialisation of each desired NVMe drive is complete, select 'Finish' to proceed.
Please ensure that you have initialised all your selected drives.
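As a cross-check before initialising, the following minimal sketch lists the NVMe controllers the kernel has detected so you can compare against the wizard's drive list. It assumes a Linux host with the standard sysfs layout:

```python
from pathlib import Path

# Each detected NVMe controller appears as /sys/class/nvme/nvme<N>,
# with its model string exposed as a sysfs attribute.
for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    print(f"{ctrl.name}: {model}")
```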
Step 3 of 8
The third step checks for network connectivity. If both NICs have network connectivity and the MTU is properly configured, the configuration process can continue:
If there is a problem with the switch connectivity, or the MTU of the ports is not configured properly, the following message appears; otherwise the installation proceeds to step 4.
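If the check fails, it can help to confirm what each interface is actually set to. This minimal sketch (assuming a Linux host and the standard sysfs layout) prints the current MTU of every network interface; the exact MTU value the installer expects is not stated here, so compare against your switch configuration:

```python
from pathlib import Path

# Each interface exposes its current MTU as a sysfs attribute.
for iface in sorted(Path("/sys/class/net").iterdir()):
    mtu = (iface / "mtu").read_text().strip()
    print(f"{iface.name}: MTU {mtu}")
```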
Step 4 of 8
Configure Virtual LANs. This option allows the configuration of the VLAN on the virtualised controller service that runs when the system boots up. Enabling this VLAN feature gives the system management network path redundancy in case of a network link failure. In other words, this feature must be enabled to provide network redundancy for the management network of the system, not for the virtual networks (VLANs) which the user can create after the installation.
Two options are available: default and custom. The default option (defvlan = 1) disables the VLAN feature, while the custom option enables the VLAN feature with the selected value.
Note
The management network VLAN fault tolerance is independent of the virtual network VLAN fault tolerance. The user can create a fault-tolerant virtual network VLAN even if the management network VLAN has not been enabled during the installation.
Please select continue to configure VLAN:
Select a custom value to serve as the VLAN ID in order to activate this feature, or keep the default VLAN ID of 1 to deactivate the VLAN feature:
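As an illustration of the rule described in this step, the following hypothetical sketch interprets the setting the same way: a value of 1 leaves the management VLAN feature disabled, while any other valid VLAN ID enables it. The function name and the upper bound of 4094 (the usual 802.1Q limit) are assumptions:

```python
def parse_vlan_setting(vlan_id: int) -> str:
    # A value of 1 (the default) leaves the management VLAN feature off.
    if vlan_id == 1:
        return "VLAN feature disabled (default)"
    # Any other valid VLAN ID enables tagging with that ID.
    # The 2-4094 range is an assumption based on the usual 802.1Q limits.
    if 2 <= vlan_id <= 4094:
        return f"VLAN feature enabled with VLAN ID {vlan_id}"
    raise ValueError("VLAN ID must be 1 (disabled) or between 2 and 4094")

print(parse_vlan_setting(1))    # -> VLAN feature disabled (default)
print(parse_vlan_setting(100))  # -> VLAN feature enabled with VLAN ID 100
```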
Step 5 of 8
Configure node mode. This option allows you to define the role (master/slave) of the node being installed. Configure chassis and blade ID. This option permits the definition of the chassis ID as well as the blade ID for the specific node being installed. The chassis ID should be a hexadecimal number (e.g. 0x2f1a), while the blade ID should be a decimal number (e.g. 1).
Please select continue to proceed:
Choose master node mode (master node installation):
Accept and proceed:
OR
Choose slave node mode (slave node installation):
Accept and proceed:
Change the default values and press TAB to submit:
Warning
The chassis ID must be a hex value in the format "0x##", e.g. 0xCC82. The blade ID must be a number between 1 and 99.
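The following sketch illustrates the formats from the warning above; the function names are hypothetical:

```python
import re

def valid_chassis_id(value: str) -> bool:
    # A hex literal such as 0xCC82 or 0x2f1a.
    return re.fullmatch(r"0x[0-9a-fA-F]+", value) is not None

def valid_blade_id(value: int) -> bool:
    # A decimal number between 1 and 99.
    return 1 <= value <= 99

assert valid_chassis_id("0xCC82") and valid_chassis_id("0x2f1a")
assert valid_blade_id(1) and not valid_blade_id(100)
```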
Step 6 of 8 (on master node only)
Configure controller network interface. This option allows the configuration of the IP address on the virtualised controller service that runs when the system boots up.
Two options are available: DHCP and static. With the static option, the IP address, netmask, default gateway and DNS settings of the controller can be specified.
Provide dynamic DHCP settings:
OR
Provide static settings:
Specify the IP address of the host, the default Gateway, the Netmask and the DNS server:
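Before typing static settings into the wizard, they can be sanity-checked with Python's standard ipaddress module, as in this minimal sketch; the sample values mirror the defaults listed at the end of this guide:

```python
import ipaddress

ip = ipaddress.ip_address("192.168.1.254")        # controller IP
network = ipaddress.ip_network("192.168.1.0/24")  # netmask 255.255.255.0
gateway = ipaddress.ip_address("192.168.1.1")     # default gateway

# The controller IP and the default gateway must sit on the same subnet.
assert ip in network and gateway in network
```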
Step 7 of 8
Configure the NV channel and the encryption key. This option allows the configuration of the NV channel, which isolates clusters on the same subnet according to the channel. The encryption key can also be configured; traffic only flows between NVs that share the same encryption key. As a result, the nodes that belong to the same cluster should be configured with the same channel ID and encryption key in order to detect each other.
Select continue to proceed to the configuration:
Enter custom values and press TAB to submit:
Warning
The channel value must be a number between 1 and 65534. The encryption key must be a string of exactly 16 characters, e.g. 0123456789abcdef
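As an illustration, this hypothetical check captures the constraints from the warning above:

```python
def valid_nv_settings(channel: int, key: str) -> bool:
    # Channel: 1-65534; encryption key: exactly 16 characters.
    return 1 <= channel <= 65534 and len(key) == 16

assert valid_nv_settings(42, "0123456789abcdef")
assert not valid_nv_settings(0, "too-short")
```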
Step 8 of 8
Exit and restart. This step will reboot the system back to the initial boot menu.
Confirm reboot option:
After the reboot, the following menu options are provided. You can choose NV-Configure in order to change the configuration settings:
Following the configuration options described above, the controller UI will boot up with the declared IP settings, or a default pre-configured IP address:
| Setting | Default value |
| --- | --- |
| IP Address | 192.168.1.254 |
| Netmask | 255.255.255.0 |
| Default Gateway | 192.168.1.1 |
| Default Username | administrator |
| Default Password | NexVisor |
Now the system is booted up and fully operational. Please configure a local method of accessing the Ethernet segment with the network settings outlined above. The cluster controller takes approximately one minute to boot up, after which a web browser can be pointed at the IP address of the controller UI configured in Step 6. The following screen is displayed upon boot up:
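Since the controller takes roughly a minute to come up, a simple poll can confirm when the UI becomes reachable. This minimal sketch assumes the default address from the table above and plain HTTP; adjust the URL and scheme to match the settings chosen in Step 6:

```python
import time
import urllib.error
import urllib.request

URL = "http://192.168.1.254/"  # default controller address; adjust to your Step 6 settings

for attempt in range(30):
    try:
        with urllib.request.urlopen(URL, timeout=5):
            print("Controller UI is up")
            break
    except (urllib.error.URLError, OSError):
        time.sleep(10)  # the controller takes roughly a minute to boot
else:
    print("Controller UI did not respond; check cabling and IP settings")
```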