All Photos are here:
https://photos.app.goo.gl/KKfH8dpxpELSHy3U7
Starting info and advice:
- I will try to do Part 5 using this idea: "I'd recommend using FreeNAS instead of Ubuntu. I've just done a test and managed to set up a FreeNAS VM with 2 GB of RAM and managed to create a volume and connect it to ESXi using iSCSI."
My lab parts:
GraspingTech's helpful guide:
Overview
My idea is to add a disk and configure iSCSI in FreeNAS, then connect my ESXi hosts to it.
In Part 4 I created a three-node cluster, but I couldn't enable DRS or HA because they require centralised storage. In this part, I'll create a storage server with FreeNAS and configure it so that my ESXi hosts can access it via iSCSI (with multipathing).
After completing the steps on the previous page, I am at a point where I have:
- Three ESXi 6.7 VMs running on VMware Fusion
- The first ESXi VM contains a pfSense firewall VM with a built-in DNS Resolver
- One vCenter Server Appliance in VMware Fusion
- The ability to access the hosts and vCenter from the Mac using domain names
- A cluster with the three ESXi 6.7 hosts added to it
For this project I need to download the FreeNAS image, named FreeNAS-11.3-U2.1.iso, and install it. I published everything I did for the installation here: 2020.04.27 – Install FreeNAS 11.3 on VMware Fusion with iSCSI Disks
Now I go to the next step…
-> Configure the Network and open the existing FreeNAS 11.3 U2.1
Open VMware Fusion, find the FreeNAS virtual machine and select it.
Click on Virtual Machine > Hard Disk (SCSI)
-> Processor and Memory
Verify the processor and memory settings.
Set 2 processor cores and 2048 MB (2 GB) of RAM.
-> Hard Disk
Verify the total size of all hard disks. If needed, add a new hard disk to reach a total of 80 GB. I have two hard disks: one of 20 GB…
…and one of 10 GB.
If needed, add a new 50 GB hard disk to bring the total to 80 GB.
-> Network Adapter
There need to be a total of 3 network adapters. If needed, create new network adapters.
Select vSphere Network for all existing network adapters.
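For reference, these VM settings end up as plain key/value lines in the virtual machine's .vmx file. This is only a sketch: the `ethernet*` keys and the vmnet number are assumptions, since the exact number depends on which custom Fusion network "vSphere Network" maps to on your Mac.

```ini
; Hypothetical excerpt of the FreeNAS VM's .vmx file after the changes above
numvcpus = "2"
memsize = "2048"
; Each of the three adapters attached to the same custom network
; (vmnet2 here is an assumption; check your own Fusion networking setup)
ethernet0.connectionType = "custom"
ethernet0.vnet = "vmnet2"
ethernet1.connectionType = "custom"
ethernet1.vnet = "vmnet2"
ethernet2.connectionType = "custom"
ethernet2.vnet = "vmnet2"
```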
-> FreeNAS Settings – Power on FreeNAS 11.3 U2.1 and configure all three networks
Power on FreeNAS.
Log in to the FreeNAS web UI using Firefox.
On the left side click Network > Interfaces, then click ADD in the upper-right corner.
Create VLANs named vlan101 and vlan102 with the details below. The procedure is the same for each.
All the networks created are shown here.
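The result can also be checked from the FreeNAS Shell. A minimal sketch, assuming the VLAN interfaces are named vlan101 and vlan102 as created above:

```shell
# Show all interfaces, including the new VLAN interfaces
ifconfig -a

# Or inspect just the VLAN interfaces; each should report its
# vlan tag and parent interface in the output
ifconfig vlan101
ifconfig vlan102
```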
Now it is possible to ping the VM from the MacBook Terminal using the hostname freenas:
murgescusilvia@Murgescus-MacBook-Pro ~ % ping freenas
PING freenas.silvique.ro (10.1.1.201): 56 data bytes
64 bytes from 10.1.1.201: icmp_seq=0 ttl=64 time=0.285 ms
64 bytes from 10.1.1.201: icmp_seq=1 ttl=64 time=0.594 ms
64 bytes from 10.1.1.201: icmp_seq=2 ttl=64 time=0.447 ms
64 bytes from 10.1.1.201: icmp_seq=3 ttl=64 time=0.532 ms
64 bytes from 10.1.1.201: icmp_seq=4 ttl=64 time=0.352 ms
64 bytes from 10.1.1.201: icmp_seq=5 ttl=64 time=0.560 ms
^C
--- freenas.silvique.ro ping statistics ---
6 packets transmitted, 6 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.285/0.462/0.594/0.112 ms
-> Verify iSCSI Port groups inside vCenter
The next thing to do is add two new VMkernel adapters to our standard virtual switches so that the hosts can communicate with the storage server over multiple paths.
Power on all the needed VMs: the three ESXi hosts, one FreeNAS, one vCenter and one pfSense (inside esxi01). To leave as much RAM as possible for vCenter, I do NOT power them all on at the same time, but in the order mentioned before.
Log in to vCenter via Firefox
- Click Host and Clusters
- Click on the first ESXi host, esxi01
- Click Configure
- Click Virtual switches
The virtual switches that appear there include ISCSI-1 and ISCSI-2, which were created during Part 5: Create a Ubuntu iSCSI Target and Configure Multipathing (the part with the major problem that was not finished).
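From an SSH session on each host, the iSCSI VMkernel adapters and their addresses can also be listed with esxcli. A sketch; the vmk names and port group labels depend on how the adapters were created:

```shell
# List all VMkernel network interfaces on the host, with their port groups
esxcli network ip interface list

# Show the IPv4 address of each VMkernel interface; the iSCSI ones
# should sit on the 10.10.1.0/24 and 10.10.2.0/24 storage networks
esxcli network ip interface ipv4 get
```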
Go to the FreeNAS web UI, open the Shell, and ping all the iSCSI VMkernel addresses:
root@freenas[~]# ping 10.10.1.11
PING 10.10.1.11 (10.10.1.11): 56 data bytes
64 bytes from 10.10.1.11: icmp_seq=0 ttl=64 time=2.170 ms
64 bytes from 10.10.1.11: icmp_seq=1 ttl=64 time=1.299 ms
64 bytes from 10.10.1.11: icmp_seq=2 ttl=64 time=0.885 ms
^C
--- 10.10.1.11 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.885/1.451/2.170/0.536 ms
root@freenas[~]# ping 10.10.2.11
PING 10.10.2.11 (10.10.2.11): 56 data bytes
64 bytes from 10.10.2.11: icmp_seq=0 ttl=64 time=1.173 ms
64 bytes from 10.10.2.11: icmp_seq=1 ttl=64 time=0.848 ms
^C
--- 10.10.2.11 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.848/1.010/1.173/0.162 ms
root@freenas[~]# ping 10.10.1.12
PING 10.10.1.12 (10.10.1.12): 56 data bytes
64 bytes from 10.10.1.12: icmp_seq=0 ttl=64 time=1.100 ms
64 bytes from 10.10.1.12: icmp_seq=1 ttl=64 time=0.665 ms
64 bytes from 10.10.1.12: icmp_seq=2 ttl=64 time=0.557 ms
64 bytes from 10.10.1.12: icmp_seq=3 ttl=64 time=0.643 ms
64 bytes from 10.10.1.12: icmp_seq=4 ttl=64 time=0.877 ms
^C
--- 10.10.1.12 ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.557/0.768/1.100/0.196 ms
root@freenas[~]# ping 10.10.2.12
PING 10.10.2.12 (10.10.2.12): 56 data bytes
64 bytes from 10.10.2.12: icmp_seq=0 ttl=64 time=0.954 ms
64 bytes from 10.10.2.12: icmp_seq=1 ttl=64 time=0.720 ms
64 bytes from 10.10.2.12: icmp_seq=2 ttl=64 time=0.713 ms
^C
--- 10.10.2.12 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.713/0.796/0.954/0.112 ms
root@freenas[~]# ping 10.10.1.13
PING 10.10.1.13 (10.10.1.13): 56 data bytes
64 bytes from 10.10.1.13: icmp_seq=0 ttl=64 time=1.090 ms
64 bytes from 10.10.1.13: icmp_seq=1 ttl=64 time=0.981 ms
64 bytes from 10.10.1.13: icmp_seq=2 ttl=64 time=0.642 ms
^C
--- 10.10.1.13 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.642/0.904/1.090/0.191 ms
root@freenas[~]# ping 10.10.2.13
PING 10.10.2.13 (10.10.2.13): 56 data bytes
64 bytes from 10.10.2.13: icmp_seq=0 ttl=64 time=0.497 ms
64 bytes from 10.10.2.13: icmp_seq=1 ttl=64 time=0.533 ms
^C
--- 10.10.2.13 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.497/0.515/0.533/0.018 ms
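The six addresses pinged above follow a simple pattern: two storage subnets (10.10.1.x and 10.10.2.x) times three ESXi hosts (.11, .12, .13). So the list can be generated and looped over instead of typing each ping by hand; a small sketch:

```shell
#!/bin/sh
# Build the list of iSCSI VMkernel addresses to check:
# two storage subnets times three ESXi hosts.
for net in 1 2; do
  for host in 11 12 13; do
    echo "10.10.${net}.${host}"
  done
done
# From the FreeNAS Shell the same loop can drive the pings directly:
# for net in 1 2; do for host in 11 12 13; do ping -c 2 "10.10.${net}.${host}"; done; done
```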
-> Configure iSCSI Target in FreeNAS
Now that the network is configured and all three ESXi hosts can communicate with the storage server, I need to configure the iSCSI target in FreeNAS.
The idea is to test VMware vSphere 6.7 and the vCenter appliance with vMotion and Storage vMotion.
A full description of the iSCSI installation for FreeNAS is in 2020.04.27 – Install FreeNAS 11.3 on VMware Fusion with iSCSI Disks.
Go to Sharing and select Block (iSCSI) to configure more options. We start in Target Global Configuration. Then click SAVE.
Click on Portals. Click ADD two times:
- Name: ISCSI-1
- IP address: 10.10.1.201, default port 3260
- Click SAVE
- Name: ISCSI-2
- IP address: 10.10.2.201, default port 3260
- Click SAVE
Click on Initiators. Click ADD. Check Allow ALL Initiators, then click SAVE.
Click on Targets. Click ADD two times:
- Target Name: iscsi-1
- Portal Group ID: 1 (ISCSI-1)
- Leave the other settings at their defaults and click SAVE.
- Target Name: iscsi-2
- Portal Group ID: 2 (ISCSI-2)
- Leave the other settings at their defaults and click SAVE.
Click Extents. Click ADD two times. Below are only the modified and new settings:
- Extent name: ISCSI-1
- Extent Type: Device
- Device: pool2/vmware-disk-01 (20.0G)
- SSD: Enable
- Click SAVE
- Extent name: ISCSI-2
- Extent Type: Device
- Device: pool2/vmware-disk-01 (20.0G)
- SSD: Enable
- Click SAVE
Click on Associated Targets. Click ADD two times:
- Target: iscsi-1
- LUN ID: 1
- Extent: ISCSI-1
- Target: iscsi-2
- LUN ID: 2
- Extent: ISCSI-2
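The same Sharing > Block (iSCSI) objects can also be created through the FreeNAS 11.3 REST API instead of clicking through the UI. This is a hedged sketch only: the root password placeholder, the object IDs and the exact field names are assumptions, so check the API documentation on your own FreeNAS before relying on it. Only the first portal/target/extent/association is shown; the second follows the same pattern.

```shell
#!/bin/sh
# Hypothetical: create portal, target, extent and association via the
# FreeNAS v2.0 REST API (password and field names are assumptions).
FREENAS="https://freenas.silvique.ro"
AUTH="root:PASSWORD"   # placeholder credentials

# Portal ISCSI-1 listening on 10.10.1.201:3260
curl -sk -u "$AUTH" -X POST "$FREENAS/api/v2.0/iscsi/portal" \
  -H "Content-Type: application/json" \
  -d '{"comment": "ISCSI-1", "listen": [{"ip": "10.10.1.201", "port": 3260}]}'

# Target iscsi-1 bound to portal group 1 and the allow-all initiator group
curl -sk -u "$AUTH" -X POST "$FREENAS/api/v2.0/iscsi/target" \
  -H "Content-Type: application/json" \
  -d '{"name": "iscsi-1", "groups": [{"portal": 1, "initiator": 1}]}'

# Device extent backed by the zvol pool2/vmware-disk-01
curl -sk -u "$AUTH" -X POST "$FREENAS/api/v2.0/iscsi/extent" \
  -H "Content-Type: application/json" \
  -d '{"name": "ISCSI-1", "type": "DISK", "disk": "zvol/pool2/vmware-disk-01"}'

# Associate target 1 with extent 1 as LUN 1
curl -sk -u "$AUTH" -X POST "$FREENAS/api/v2.0/iscsi/targetextent" \
  -H "Content-Type: application/json" \
  -d '{"target": 1, "lun_id": 1, "extent": 1}'
```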
-> Verify iSCSI Target in vSphere
vSphere is already configured from the installation in Part 5: Create a Ubuntu iSCSI Target and Configure Multipathing.
Clicking on the Devices tab, I do not yet see the thin provisioned disk I created when configuring the iSCSI target.
Open SSH sessions to esxi01, esxi02 and esxi03 and use the following commands on all of them:
[root@esxi01:~] esxcli storage core adapter rescan --all
[root@esxi01:~] esxcli storage core adapter list
HBA Name Driver Link State UID Capabilities Description
-------- --------- ---------- -------------------------------------- ------------------- ------------------------------------------------------------------------
vmhba0 pvscsi link-n/a pscsi.vmhba0 (0000:03:00.0) VMware Inc. PVSCSI SCSI Controller
vmhba1 vmkata link-n/a ide.vmhba1 (0000:00:07.1) Intel Corporation PIIX4 for 430TX/440BX/MX IDE Controller
vmhba64 vmkata link-n/a ide.vmhba64 (0000:00:07.1) Intel Corporation PIIX4 for 430TX/440BX/MX IDE Controller
vmhba65 iscsi_vmk online iqn.1998-01.com.vmware:esxi01-10a4398c Second Level Lun ID iSCSI Software Adapter
Clicking again on the Devices tab, I now see the thin provisioned disk I created when configuring the iSCSI target.
When you click on the Paths tab you should see two paths. It is said that only one of them should show Active (I/O), but I have both with Active (I/O).
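The path state can also be inspected from the ESXi shell with standard esxcli commands; a sketch (device identifiers will differ per setup):

```shell
# List all storage paths; each iSCSI LUN should show one path per
# storage network, together with its state
esxcli storage core path list

# Show the multipathing policy per device; a Round Robin path
# selection policy would explain both paths showing Active (I/O),
# since it spreads I/O across all active paths
esxcli storage nmp device list
```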
This is all…
A lot of useful photos are here:
https://photos.app.goo.gl/KKfH8dpxpELSHy3U7