2020.05.1 – Configuring a Junos Space Virtual Appliance in EVE-PRO

Note: This is not working for me in EVE-PRO! I installed Junos Space into ESXi instead and am trying to make it work there.

What I need to read and apply is here: Configuring a Junos Space Virtual Appliance as a Standalone or Primary FMPM Node

After you deploy a Junos Space Virtual Appliance on an EVE-PRO server, you must enter basic network and machine information to make your Junos Space Virtual Appliance accessible on the network. You must also add disk space to the partitions of the Junos Space Virtual Appliance.

Before you begin, ensure that you have the following information available:

  • IPv4 address and subnet mask for the node management (eth0) Ethernet interface
  • IPv4 address of the default gateway for the eth0 Ethernet interface
  • IPv4 address of the name server
  • Virtual IP (VIP) address in IPv4 and IPv6 formats
    • The IPv4 format of the VIP address is used for accessing the Junos Space Network Management Platform GUI through a Web browser. This IP address must be in the same subnet as the IP address assigned to the eth0 Ethernet interface
  • IPv4 address or URI of the NTP source to synchronize time

To configure a Junos Space Virtual Appliance:

  • At the Junos Space login prompt, type admin as your default login name and press Enter.
space-node login:admin 
Password:
  • Type abc123 as the default administrator password and press Enter. Junos Space prompts you to change your default password.
  • To change the default password, do the following: 
    • Type the default password and press Enter.
    • Type your new password and press Enter.
    • Retype your new password and press Enter.
  • Enter the new password to log in to Junos Space.
  • Type S to install the virtual appliance as a Junos Space node.
  • Configure the IP address for the eth0 interface.
    • Type 1.
    • Type the IPv4 address for the eth0 interface in dotted-decimal notation and press Enter.
Please enter new IPv4 address for interface eth0:
172.25.11.99
  • Type the subnet mask for the IPv4 address and press Enter.
Please enter new IPv4 subnet mask for interface eth0:
255.255.255.0
  • Type the IPv4 address of the default gateway for the eth0 Ethernet interface in dotted-decimal notation and press Enter.
Enter the default IPv4 gateway as a dotted-decimal IP Address:
172.25.11.254
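
Once the remaining prompts (name server, VIP address, NTP source) are completed, a quick reachability check can be done from another machine on the same management subnet. This is only a sketch using the addresses entered above; the VIP value is not shown here, and super/juniper123 is the default GUI login listed later in these notes.

# from a machine on the 172.25.11.0/24 management subnet
ping -c 4 172.25.11.99                # the node management (eth0) address should answer
# after the configuration finishes, open https://<VIP-address>/ in a browser
# and log in to the Junos Space GUI (default super/juniper123)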

2020.05.1 – Start Juniper JSE lab into EVE-PRO

The Juniper Space Essentials (JSE) lab is the lab for the JSE course. I will configure this lab on:

  • MacBook PRO
  • VMware Fusion
  • EVE-PRO

Step 1. Create the lab in EVE-PRO

Step 2. Power-on all

To be sure everything works OK, I power on one node at a time, in this order:

  • JuniperSpace1
  • CentOS
  • vMX-VCP1
  • vMX-VFP1
  • vMX-VCP2
  • vMX-VFP2

Everything powered on, but I have difficulty continuing with the installation of JuniperSpace.

I will try to install this whole lab in ESXi and see if it works.

SSH commands

Move into a directory

root@eve-ng:~# cd ..
root@eve-ng:~# cd /opt/unetlab/addons/qemu

View all existing files

root@eve-ng:/opt/unetlab/addons/qemu# ls
root@eve-ng:/opt/unetlab/addons/qemu# ls existingDirOrFileName

Create a new directory

root@eve-ng:/opt/unetlab/addons/qemu# mkdir newDirName

Rename Directory

root@eve-ng:/opt/unetlab/addons/qemu# mv oldDirName NewDirName

Rename Files:

root@eve-ng:/opt/unetlab/addons/qemu# mv oldFileName newFileName

Create qcow2 file

root@eve-ng:/opt/unetlab/addons/qemu/NeededFolder# /opt/qemu/bin/qemu-img create -f qcow2 hda.qcow2 30G    (use the size needed)

Delete files

root@eve-ng:/opt/unetlab/addons/qemu# rm myFile.txt myFile1.txt myFile2.txt

Delete a whole folder and its content

root@eve-ng:/opt/unetlab/addons/qemu# rm -rf foldername/
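
Putting these commands together, a typical sequence for preparing a new QEMU image folder in EVE looks like this (the folder name and disk size are placeholders; adjust them to the image you are adding):

# create the image folder, move into it, create an empty disk and fix permissions
root@eve-ng:~# cd /opt/unetlab/addons/qemu
root@eve-ng:/opt/unetlab/addons/qemu# mkdir newDirName
root@eve-ng:/opt/unetlab/addons/qemu# cd newDirName
root@eve-ng:/opt/unetlab/addons/qemu/newDirName# /opt/qemu/bin/qemu-img create -f qcow2 hda.qcow2 30G
root@eve-ng:/opt/unetlab/addons/qemu/newDirName# /opt/unetlab/wrappers/unl_wrapper -a fixpermissions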

Problem and solution

Problem when using ssh

murgescusilvia@Murgescus-MacBook-Pro ~ % ssh silvia@ip-address  

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 
@       WARNING: POSSIBLE DNS SPOOFING DETECTED!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 

The ECDSA host key for us01 has changed, and the key for the corresponding IP address ip-address is unchanged. 
This could either mean that DNS SPOOFING is happening or the IP address for the host and its host key have changed at the same time. 
Offending key for IP in /Users/murgescusilvia/.ssh/known_hosts:20 

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 

IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that a host key has just been changed. 

The fingerprint for the ECDSA key sent by the remote host is SHA256:5b11LsICh7VVaHkfY/HiLh6IThcZYjkkDD7Pt6dixJw. 

Please contact your system administrator. Add correct host key in /Users/murgescusilvia/.ssh/known_hosts to get rid of this message. Offending ECDSA key in /Users/murgescusilvia/.ssh/known_hosts:19 ECDSA host key for ip-address has changed and you have requested strict checking. 

Host key verification failed.

Solution to solve this problem:

murgescusilvia@Murgescus-MacBook-Pro ~ % ssh-keygen -R ip-address 

# Host ip-address found: line 19 /Users/murgescusilvia/.ssh/known_hosts updated. 
Original contents retained as /Users/murgescusilvia/.ssh/known_hosts.old
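
After removing the stale key, the next ssh attempt simply asks to confirm the new fingerprint; only accept it if the host key change was expected (for example after reinstalling the server):

murgescusilvia@Murgescus-MacBook-Pro ~ % ssh silvia@ip-address
# ssh now asks to confirm the new host key fingerprint – answer yes to store it in ~/.ssh/known_hosts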

I will add a command here every time I search for and use something new.


2020.04.30 – CentOS installation into EVE-NG

All the information I used for the installation I found here:

Things used:

  • EVE-PRO (meaning the licensed version)
  • CentOS: CentOS-8.1.1911-x86_64-dvd1.iso

Step 1. Download CentOS – downloaded from Download CentOS, Linux DVD ISO version

Step 2. Go into EVE-PRO and move into the qemu directory

root@eve-ng:~# cd /opt/unetlab/addons/qemu

Step 3. Create a new directory and move inside it

root@eve-ng:/opt/unetlab/addons/qemu# mkdir linux-centeros-8
root@eve-ng:/opt/unetlab/addons/qemu# cd linux-centeros-8
root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8#

Step 4. Upload CentOS-8.1.1911-x86_64-dvd1.iso to EVE-PRO using the FileZilla application

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# ls
CentOS-8.1.1911-x86_64-dvd1.iso

Step 5. Create the qcow2 file

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# /opt/qemu/bin/qemu-img create -f qcow2 hda.qcow2 40G
Formatting 'hda.qcow2', fmt=qcow2 size=42949672960 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# 

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# ls
CentOS-8.1.1911-x86_64-dvd1.iso  hda.qcow2
root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# 
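
Before going on, the new disk can be double-checked with qemu-img info (an optional sanity check, using the same qemu binary as above):

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# /opt/qemu/bin/qemu-img info hda.qcow2
# should report file format: qcow2 and virtual size: 40G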

Step 6. Rename the CentOS-8.1.1911-x86_64-dvd1.iso file

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# mv CentOS-8.1.1911-x86_64-dvd1.iso cdrom.iso

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# ls
cdrom.iso  hda.qcow2

Step 7. Log in to EVE-PRO, then add a new lab and add a CentOS node

The CentOS node needs to have:

  • Name: CentOS
  • CPU: 2
  • RAM (MB): 4096
  • Ethernets: 2
  • QEMU Version : 2.4
  • QEMU arch: tpl (x86_64)
  • QEMU Nic: e1000

Click SAVE

Add a switch and configure it as Type: Management (Cloud0) and click Save

Connect the CentOS node to the switch

Step 8. Now we are ready to start CentOS

Do not click anything, just wait …

Step 9. Go through the installation process

English and Continue

Verify and configure different things…

Keyboard: English

Software: Server with GUI

Installation Destination: ATA QEMU HARDDISK

After finishing click Begin Installation

During installation you need to set the root password and create a user name and password

Root Password

User Name and Password. Also select Make this user administrator

Wait for the installation to finish

Step 10. Restarting CentOS

Before restarting CentOS, rename the installation ISO so it is not booted again

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# mv cdrom.iso centos-install.iso

I do not trust the restart: I click Restart, then power the node off and power it on again (Start)

Step 11. Go through the initial setup process.

Click License Information

I accept the license agreement.

Click Finish configuration

English and Next

English (US) and Next

Privacy OFF and Next

Connect Your Online Accounts: Skip

Finish the initial setup

Step 12. Networks

Select Wired Settings

I don’t have automatic (DHCP) Ethernet on these networks, so I have to configure them manually

Open Terminal

Use the ifconfig command

I will not do this now …

Step 13. Save the configured CentOS as the base image for all new EVE-PRO labs

In EVE-PRO, open Lab Details

Use SSH

root@eve-ng:/opt/unetlab/tmp/0/085884f1-7807-492d-814f-7b588fd1892c/1# ls
cdrom.iso  hda.qcow2  l1down_1  l1up_0  mon-sock  mon2-sock  qmp-sock  wrapper.txt

root@eve-ng:/opt/unetlab/tmp/0/085884f1-7807-492d-814f-7b588fd1892c/1# /opt/qemu/bin/qemu-img commit hda.qcow2


Image committed.

Here the listing is the same:

root@eve-ng:/opt/unetlab/tmp/0/085884f1-7807-492d-814f-7b588fd1892c/1# ls
cdrom.iso  hda.qcow2  l1down_1  l1up_0  mon-sock  mon2-sock  qmp-sock  wrapper.txt
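
As far as I understand it, qemu-img commit merges the changes from the lab’s copy of the disk back into its backing file, the base hda.qcow2 under /opt/unetlab/addons/qemu/linux-centeros-8, so every new lab that uses this image starts already configured. If the lab UUID from Lab Details is not handy, the running node’s disk can also be located from the EVE CLI (a small sketch):

root@eve-ng:~# find /opt/unetlab/tmp -name hda.qcow2
# prints paths like /opt/unetlab/tmp/0/085884f1-7807-492d-814f-7b588fd1892c/1/hda.qcow2
# cd into that folder and run the qemu-img commit command shown above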

A command needed to make the image work as configured:

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# /opt/unetlab/wrappers/unl_wrapper -a fixpermissions


May 01 07:09:28 May 01 07:09:28 Online Check state: Valid

All done. CentOS is ready to go. We will now add a couple of nodes to confirm that CentOS is working.

Step 14. Verify that CentOS is working

Add a total of three CentOS nodes

Power on all of them

All 3 power on …

Step 15. Configure and Manage Network Connections using nmcli

The information I needed is here: How to Configure and Manage Network Connections using nmcli. I used the nmcli connection sub-command chapter.

  • View connection profiles
# nmcli connection show 
NAME UUID TYPE DEVICE 
virbr0 bbe539aa-7042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet virbr0
virbr0-nic bbe539aa-8042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet virbr0-nic
ens3 bbe539aa-5042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet --
ens4 bbe539aa-6042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet --
  • The nmcli connection add Command
$ nmcli connection add con-name new-ens3 ifname ens3 type ethernet ip4 192.168.1.25/24 gw4 192.168.2.1 
Connection 'new-ens3' (f0c23472-1aec-4e84-8f1b-be8a2ecbeade) successfully added.

$ nmcli connection add con-name new-ens4 ifname ens4 type ethernet ip4 172.25.11.254/24 gw4 172.25.11.254
Connection 'new-ens4' (f0c23472-1aec-5e84-8f1b-be8a2ecbeade) successfully added.
# nmcli connection show
NAME UUID TYPE DEVICE
new-ens3 bbe539aa-9042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet ens3
new-ens4 bbe539aa-1042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet ens4
virbr0 bbe539aa-7042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet virbr0
virbr0-nic bbe539aa-8042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet virbr0-nic
ens3 bbe539aa-5042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet --
ens4 bbe539aa-6042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet --
# ls /etc/sysconfig/network-scripts/ifcfg* 
/etc/sysconfig/network-scripts/ifcfg-ens3 /etc/sysconfig/network-scripts/ifcfg-new-ens3
/etc/sysconfig/network-scripts/ifcfg-ens4 /etc/sysconfig/network-scripts/ifcfg-new-ens4

I can see these files, but I will modify the connections using the nmcli command, as sketched below.
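
A minimal sketch of the kind of change I mean, assuming the connection names created above (the addresses are examples only, not my final values):

$ nmcli connection modify new-ens3 ipv4.method manual ipv4.addresses 192.168.1.25/24 ipv4.gateway 192.168.1.1
$ nmcli connection modify new-ens3 ipv4.dns 192.168.1.1
$ nmcli connection up new-ens3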

I hope all these details help me, and you, in the future.

2020.04.28 – Building a VMware vSphere Virtual Lab with VMware Fusion – Part 5: Create a FreeNAS iSCSI and Configure Multipathing – working now

Working now …

Starting info and advice:

  • I will try to do Part 5 using this idea: “I’d recommend using FreeNAS instead of Ubuntu. I’ve just done a test and managed to set up a FreeNAS VM with 2 GB of RAM and managed to create a volume and connect it to ESXi using iSCSI.”

My lab parts:

GraspingTech’s guide that helped me:

Overview

My idea is to add a disk, configure iSCSI in FreeNAS and connect my ESXi hosts to it.

In Part 4 I created a three-node cluster, but I couldn’t enable DRS or HA because it requires centralised storage. In this part, I’ll create a storage server with FreeNAS and configure it so that my ESXi hosts can access it via iSCSI (with multipathing).

After completing the steps in the previous parts, I am at a point where I have:

  • Three ESXi 6.7 VMs running on VMware Fusion
  • The first ESXi VM contains a pfSense firewall VM with a built-in DNS Resolver
  • One vCenter Server Appliance in VMware Fusion
  • I am able to access the hosts and vCenter from the Mac using domain names
  • A cluster with the three ESXi 6.7 hosts added to it

For this project I need to download the FreeNAS image, named FreeNAS-11.3-U2.1.iso, and install it. I published everything I did to install it here: 2020.04.27 – Install FreeNAS 11.3 on VMware Fusion with iSCSI Disks

Now I go to the next step ….

Step 1. Open the existing FreeNAS 11.3 U2.1 VM and configure its hardware and network

Open VMware Fusion and find the Virtual Machine FreeNAS and select it.

Clicking on Virtual Machine > Hard Disk (SCSI)

-> Processor and Memory

Verify Processor and Memory

Modify to 2 processor cores and 2048 MB (2 GB) of memory

-> Hard Disk

Verify the total size of all hard disks. If needed, add a new hard disk to reach a total of 80 GB. I have 2 hard disks, one with 20 GB

… and one with 10 GB

so, if needed, add a new 50 GB hard disk to reach the total of 80 GB

-> Network Adapter

There need to be a total of 3 network adapters. If needed, create new network adapters

Set all existing network adapters to the vSphere network

-> FreeNAS Settings looks like this

Step 2. Power on FreeNAS 11.3 U2.1 and configure all three networks

Power-on FreeNAS

Log in to the FreeNAS web UI using Firefox

On the left side click Network > Interfaces, then click ADD in the upper-right corner

Create VLANs named vlan101 and vlan102 with the details below; the settings are similar for each

All Networks created are here

Now it is possible to ping the VM from the MacBook using the hostname freenas

murgescusilvia@Murgescus-MacBook-Pro ~ % ping freenas
PING freenas.silvique.ro (10.1.1.201): 56 data bytes
64 bytes from 10.1.1.201: icmp_seq=0 ttl=64 time=0.285 ms
64 bytes from 10.1.1.201: icmp_seq=1 ttl=64 time=0.594 ms
64 bytes from 10.1.1.201: icmp_seq=2 ttl=64 time=0.447 ms
64 bytes from 10.1.1.201: icmp_seq=3 ttl=64 time=0.532 ms
64 bytes from 10.1.1.201: icmp_seq=4 ttl=64 time=0.352 ms
64 bytes from 10.1.1.201: icmp_seq=5 ttl=64 time=0.560 ms
^C
--- freenas.silvique.ro ping statistics ---
6 packets transmitted, 6 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.285/0.462/0.594/0.112 ms

Step 3. Verify iSCSI Port groups inside vCenter

The next thing to do is add two new VMkernel adapters to our standard virtual switches so that the hosts can communicate with the storage server over multiple paths.

Power on the three ESXi hosts, one FreeNAS, one vCenter and one pfSense (inside esxi01). To leave as much RAM as possible for vCenter, I do NOT power them all on at the same time, but one by one in the order mentioned before.

Login to vCenter via Firefox

  • Click Host and Clusters
  • Click on the first ESXi host, esxi01
  • Click Configure
  • Click Virtual switches

The virtual switches that appear there include ISCSI-1 and ISCSI-2, created during Part 5: Create a Ubuntu iSCSI Target and Configure Multipathing – major problem and not finished.

Go to the FreeNAS web UI, open the Shell, and ping all the iSCSI VMkernel addresses:

  • 10.10.1.11, 10.10.2.11
root@freenas[~]# ping 10.10.1.11
PING 10.10.1.11 (10.10.1.11): 56 data bytes
64 bytes from 10.10.1.11: icmp_seq=0 ttl=64 time=2.170 ms
64 bytes from 10.10.1.11: icmp_seq=1 ttl=64 time=1.299 ms
64 bytes from 10.10.1.11: icmp_seq=2 ttl=64 time=0.885 ms
^C
--- 10.10.1.11 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.885/1.451/2.170/0.536 ms

root@freenas[~]# ping 10.10.2.11
PING 10.10.2.11 (10.10.2.11): 56 data bytes
64 bytes from 10.10.2.11: icmp_seq=0 ttl=64 time=1.173 ms
64 bytes from 10.10.2.11: icmp_seq=1 ttl=64 time=0.848 ms
^C
--- 10.10.2.11 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.848/1.010/1.173/0.162 ms
  • 10.10.1.12, 10.10.2.12
root@freenas[~]# ping 10.10.1.12
PING 10.10.1.12 (10.10.1.12): 56 data bytes
64 bytes from 10.10.1.12: icmp_seq=0 ttl=64 time=1.100 ms
64 bytes from 10.10.1.12: icmp_seq=1 ttl=64 time=0.665 ms
64 bytes from 10.10.1.12: icmp_seq=2 ttl=64 time=0.557 ms
64 bytes from 10.10.1.12: icmp_seq=3 ttl=64 time=0.643 ms
64 bytes from 10.10.1.12: icmp_seq=4 ttl=64 time=0.877 ms
^C
--- 10.10.1.12 ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.557/0.768/1.100/0.196 ms


root@freenas[~]# ping 10.10.2.12
PING 10.10.2.12 (10.10.2.12): 56 data bytes
64 bytes from 10.10.2.12: icmp_seq=0 ttl=64 time=0.954 ms
64 bytes from 10.10.2.12: icmp_seq=1 ttl=64 time=0.720 ms
64 bytes from 10.10.2.12: icmp_seq=2 ttl=64 time=0.713 ms
^C
--- 10.10.2.12 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.713/0.796/0.954/0.112 ms
  • 10.10.1.13, 10.10.2.13
root@freenas[~]# ping 10.10.1.13
PING 10.10.1.13 (10.10.1.13): 56 data bytes
64 bytes from 10.10.1.13: icmp_seq=0 ttl=64 time=1.090 ms
64 bytes from 10.10.1.13: icmp_seq=1 ttl=64 time=0.981 ms
64 bytes from 10.10.1.13: icmp_seq=2 ttl=64 time=0.642 ms
^C
--- 10.10.1.13 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.642/0.904/1.090/0.191 ms


root@freenas[~]# ping 10.10.2.13
PING 10.10.2.13 (10.10.2.13): 56 data bytes
64 bytes from 10.10.2.13: icmp_seq=0 ttl=64 time=0.497 ms
64 bytes from 10.10.2.13: icmp_seq=1 ttl=64 time=0.533 ms
^C
--- 10.10.2.13 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.497/0.515/0.533/0.018 ms

Step 4. Configure the iSCSI Target in FreeNAS

Now that the network is configured and all three ESXi hosts can communicate with the storage server, I need to configure the iSCSI target in FreeNAS.

The idea is to test VMware vSphere 6.7 and the vCenter appliance with vMotion and Storage vMotion.

A full description of the iSCSI setup in FreeNAS is in 2020.04.27 – Install FreeNAS 11.3 on VMware Fusion with iSCSI Disks.

Go to Sharing and select Block (iSCSI) to configure more options. We start in Target Global Configuration. Then click SAVE

Click on Portals. Click ADD twice:

  • Name: ISCSI-1
  • IP address: 10.10.1.201, default port 3260
  • Click Save
  • Name: ISCSI-2
  • IP address: 10.10.2.201, default port 3260
  • Click Save

Click on Initiators. Click ADD. Check Allow ALL Initiators, then SAVE.

Click on Targets. Click ADD twice:

  • Target Name: iscsi-1
  • Portal Group ID: 1 (ISCSI-1)
  • Leave the other settings at their defaults and click SAVE.
  • Target Name: iscsi-2
  • Portal Group ID: 2 (ISCSI-2)
  • Leave the other settings at their defaults and click SAVE.

Click Extents. Click ADD twice. Below are only the modified and new settings:

  • Extent name: ISCSI-1
  • Extent Type: Device
  • Device: pool2/vmware-disk-01 (20.0G)
  • SSD – Enable
  • Click SAVE
  • Extent name: ISCSI-2
  • Extent Type: Device
  • Device: pool2/vmware-disk-01 (20.0G)
  • SSD – Enable
  • Click SAVE

We can see this:

Click on Associated Targets. Click ADD twice:

  • Target: iscsi-1
  • LUN ID: 1
  • Extent: ISCSI-1
  • Target: iscsi-2
  • LUN ID: 2
  • Extent: ISCSI-2

Step 5. Verify the iSCSI Target in vSphere

vSphere is already configured using the installation in Part 5: Create a Ubuntu iSCSI Target and Configure Multipathing.

Clicking on the Devices tab, I do not yet see the thin-provisioned disk I created when configuring the iSCSI target.

Open an SSH session to esxi01, esxi02 and esxi03 and use the following commands on all of them:

[root@esxi01:~] esxcli storage core adapter rescan --all
[root@esxi01:~] esxcli storage core adapter list
HBA Name  Driver     Link State  UID                                     Capabilities         Description
--------  ---------  ----------  --------------------------------------  -------------------  ------------------------------------------------------------------------
vmhba0    pvscsi     link-n/a    pscsi.vmhba0                                                 (0000:03:00.0) VMware Inc. PVSCSI SCSI Controller
vmhba1    vmkata     link-n/a    ide.vmhba1                                                   (0000:00:07.1) Intel Corporation PIIX4 for 430TX/440BX/MX IDE Controller
vmhba64   vmkata     link-n/a    ide.vmhba64                                                  (0000:00:07.1) Intel Corporation PIIX4 for 430TX/440BX/MX IDE Controller
vmhba65   iscsi_vmk  online      iqn.1998-01.com.vmware:esxi01-10a4398c  Second Level Lun ID  iSCSI Software Adapter

Clicking again on the Devices tab, I now see the thin-provisioned disk I created when configuring the iSCSI target.

When you click on the Paths tab you should see two paths. The guide says only one of them should show Active I/O, but I have both showing Active I/O. The same information can also be checked from the ESXi shell, as sketched below.
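
A quick sketch of that check (the exact device names will differ in your lab):

[root@esxi01:~] esxcli storage core path list
# each iSCSI device should be listed with two paths, one per ISCSI portal / VMkernel adapter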

This is all…

2020.04.27 – Install FreeNAS 11.3 on VMware Fusion with iSCSI Disks

Download from here: https://www.freenas.org/download/

Information that helped me learn and install is here: https://www.sysprobs.com/nas-vmware-workstation-iscsi-target

VMware makes some of the best and most user-friendly virtualization software on the market. Fusion can be installed on the Mac to virtualize the physical hardware and install multiple operating systems on top of it.

Not only server or client operating systems: we can also install network storage operating systems on VMware as virtual machines. Here I’m going to show how to install FreeNAS on VMware Fusion and configure iSCSI disks. This method gives an ideal test lab setup, with NAS as a virtual machine on a single physical computer.

What is NAS (Network-Attached Storage)?

In a physical environment, a NAS is a hardware device with hard drives that is accessible over the network. The controller has its own operating system to manage the disks and control access. Every NAS device has plenty of features and tools to make it scalable, secure and accessible.

These NAS devices support iSCSI, which can be used with VMware vCenter. But for testing purposes we can’t buy expensive physical NAS devices just to configure a high-availability cluster in VMware Fusion. Fortunately, there is free open-source NAS software that can be installed on computers or servers to build a NAS system from existing hard disks and partitions. This free storage software turns your computer’s hard disk into network-attached storage.

There are a couple of well-known free NAS packages that can be installed on a computer to build a NAS; I chose FreeNAS.

Installing FreeNAS on VMware Fusion is simple and straightforward, but setting up and configuring the iSCSI disks involves several steps. Also, remember that the steps for configuring iSCSI disks in FreeNAS shown below remain the same on any platform.

Install FreeNAS Server on VMware Fusion

1) Download the latest stable version from the official site here. As of writing this guide, the current version is FreeNAS 11.3-U2.1.

NOTE: The current version requires a minimum of 8 GB of RAM. Since I have enough resources on my MacBook PRO, I could configure a VM with 8 GB of RAM. If you do not have enough RAM, you can try with a lower amount; it may impact the performance of the VM.

2) Create a virtual machine in VMware Fusion

3) Select Install from disk or image and click Continue

4) If needed, click Use another disk or disk image, find the FreeNAS ISO file and select it. Then press Continue

5) Leave Legacy BIOS selected and click Continue

6) Click Continue Settings

7) Browse to Users > murgescusilvia > Virtual Machines. If you want, create a New Folder with a name like FreeNAS, make sure to choose a name like FreeNAS in Save As, and click Save

8) Continue with FreeNAS Settings

9) Into Processors & Memory

  • Processors: 4
  • Memory: 8 GB, meaning 8192 MB
  • Keep Advanced options unselected as default

10) Connect CD/DVD Drive is already enabled

11) Keep the default hard disk configuration of 20 GB. Remember, this hard disk will be used to install the operating system only. We can’t use this disk to create storage, disks and LUNs for sharing purposes. We need to attach another hard disk to this VM; we will discuss that later.

12) Boot the system with the first option (default)

13) Let it autoboot and wait…

14) Select the Install/Upgrade option and press OK

15) On the next screen, select the virtual hard disk to install to. You need to choose the hard drive and press the spacebar to make the selection.

16) Press Yes

17) Enter your password and make sure you will remember it for future use

18) Select ‘Boot via BIOS’ as the boot mode to keep things simple

19) Click OK

20) Choose Reboot and click OK

21) Immediately remove the loaded ISO file

At this point, we have successfully installed the latest FreeNAS on VMware Fusion, running on the MacBook PRO.

Let’s see some more settings to make it work.

Network Settings in FreeNAS VM – VMware Fusion

Once the VM has booted, you see the console screen below, which gives several options.

By default, the VM network is in NAT mode in VMware Fusion. I’m not going to explain more about VMware Fusion networking.

In NAT mode, your virtual machines and host will communicate fine even though the host and VM IP addresses look different.

If you want, you can change the network mode to ‘Bridged’ so the FreeNAS virtual machine gets an IP address from the same scheme as your host computer’s physical network.

In both cases, we need to configure a static IP for the FreeNAS storage. That is the ideal way to keep the IP unchanged for your storage device.

22) First I change the network connectivity

23) Before configuring the IP for FreeNAS, I power on the VM that provides internet connectivity, pfSense. I don’t know if I need internet for the rest of the FreeNAS configuration, but I make sure it is connected to the internet.

24) Configure FreeNAS with a static IP. Press 1 and Enter

25) I used interface em1, not em0 as in the photo below. I forgot to take a photo of my version; everything else is the same as shown

Once the IP changes, it will display the web URL on the screen.

26) Open a browser and access the URL. I use Firefox. Log in with the root user name and password you set during the installation.

You must land on the FreeNAS management page without any issues.

Add Disk and Configure iSCSI in FreeNAS 11.3 on VMware Fusion

27) Now we are ready to configure the storage system and iSCSI disks. But we do not have any more drives than the OS disk. Hence we need to add another disk. You can add a few disks if you want.

  • -> The original guide says that, luckily, VMware allows adding virtual SCSI disks to a virtual machine while it is running. We will see that this is not true for VMware Fusion.
  • -> In VMware Fusion, shut down before adding a new hard disk
  • -> Power-off FreeNAS
  • -> Add a new Hard Disk. I added another 10 GB disk for testing purposes.

28) Power on FreeNAS and make sure VMware and FreeNAS detect the new disk successfully. It should be listed under Storage > Disks.

29) Click Storage > Pools. Select ADD

30) Click CREATE POOL

31) Name the new pool pool1, select the new disk da1 and click the right arrow (->)

  • -> da1 is moved to the right. Click CREATE
  • -> Click CREATE POOL
  • -> It is successfully created

32) Now start the iSCSI service in FreeNAS. By default, it is off.  Go to ‘Services’, select and switch on the iSCSI service.

33) Go to ‘Sharing’ and select ‘Block (iSCSI)’ to configure more options. Then SAVE

34) Click on ‘Portals’ and add a new one. If it is the first time you are configuring, most probably you need to add a new portal. Click ADD

35) You can add a comment for your reference. Click on the drop-down and select the IP address of the FreeNAS VM.

36) Now click on ‘Initiators’ tab and add a new one.

37) If you do not want more restrictions, like me, then keep both set to ‘All’. Otherwise, add the client network from which you will be accessing the iSCSI storage. I left ‘All’ and applied the settings.

38) Time to add targets. Click on ‘Targets’ and ADD a new one.

39) Give a name related to the purpose so that you can recognize it later. Here, select the portal you created in the earlier step.

40) We need to create extents to add the storage. Click on ‘Extents’ and ADD a new one.

41) Give an appropriate name, and select the type as ‘File’.

  • -> Maybe it can be given any name you choose, without the extension at the end, but …
  • -> I took the name used in the guide here. Is it correct?
  • -> Browse to the mount point where you intend to store the iSCSI disk and add a file name at the end of the mount point path. This method allows hosting several LUNs on the same disk. Give the size of the extent. When a host accesses this iSCSI target, it will see the disk size you mention here. It should be less than the mount point size.

42) As the final step, add an ‘Associated Target’.

43) Make sure to select the correct names from the drop-downs and add a LUN ID. It can be any number between 0 and 256 but should be unique. Click SAVE

This is it!

With those steps, we have successfully created an iSCSI disk in FreeNAS running on VMware Fusion.

Connect and Test the iSCSI Target in FreeNAS from VMware vCenter

This is done in Building a VMware vSphere Virtual Lab with VMware Fusion Part 5

2020.04.26 – Ubuntu Server Problem: Failed to obtain SCST version information. Are the SCST modules loaded?

This page is about a problem with Ubuntu Server 18.04. The same problem, caused by the same preparation steps, appears in two different Ubuntu 18.04 installations.

It all happens when working on Building a VMware vSphere Virtual Lab with VMware Fusion – Part 5: Create a Ubuntu iSCSI Target and Configure Multipathing

Case 1. After Upgrading Ubuntu Server 16.04 to 18.04 LTS in 2020.04.11

Problem:

silvia@silvia:~$ sudo scstadmin -config /etc/scst.conf
Failed to obtain SCST version information. Are the SCST modules loaded?

I try restarting scst and running the config command again:

silvia@silvia:~$ sudo /etc/init.d/scst stop
[ ok ] Stopping scst (via systemctl): scst.service.
silvia@silvia:~$ sudo /etc/init.d/scst start
[ ok ] Starting scst (via systemctl): scst.service.
silvia@silvia:~$ sudo scstadmin -config /etc/scst.conf
Failed to obtain SCST version information. Are the SCST modules loaded?

What does dkms status show?

silvia@silvia:~$ dkms status
scst, 3.4.0, 4.15.0-96-generic, x86_64: installed

Case 2. After installing Linux (Ubuntu Server 18.04) on a Mac with VMware Fusion in 2020.04.13

At the end of the chapter Building a VMware vSphere Virtual Lab with VMware Fusion – Part 5: Create a Ubuntu iSCSI Target and Configure Multipathing there is a problem. To solve it, I applied again everything from Step 4.2, Installing SCST on Ubuntu 18.04.

The same problem appears at Step 4.6. Export the disk image as an iSCSI LUN:

silvia@silvia:~/scst-build/scst$ sudo scstadmin -config /etc/scst.conf
Failed to obtain SCST version information. Are the SCST modules loaded?

I have not found a solution to this problem yet; one check I still want to try is sketched below.
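
A diagnostic sketch (not something I have verified in this lab yet): check whether the scst kernel module is actually loaded for the running kernel, and try loading it by hand before re-running scstadmin.

silvia@silvia:~$ uname -r                 # running kernel; should match the dkms build (4.15.0-96-generic above)
silvia@silvia:~$ lsmod | grep scst        # empty output means the module is not loaded
silvia@silvia:~$ sudo modprobe scst       # try loading it manually, then re-run: sudo scstadmin -config /etc/scst.conf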

2020.04.26 – Juniper Junos Space Network Management installation into EVE-PRO

The information for the installation is from here: https://www.eve-ng.net/index.php/documentation/howtos/juniper-j-space/

Note: I have installed 2 versions and I have not found a way to make either of them work.

>>>>>>>> Chapter 1:

  • EVE Image Name: jspace-19.3R1.3
  • Downloaded Original Filename: space-19.3R1.3.qcow2
  • Version: 19.3R1.3
  • vCPUs: 2
  • vRAM: 8192
  • HDD Format: virtioa
  • Console: vnc/https
  • Interfaces: x2 virtio

Chapter 1 topic:

Step 1. Download KVM qcow2 image from Juniper.

Step 2. Using the image table, create the correct image folder; this example is for the jspace-19.3R1.3 image in the table above.

mkdir /opt/unetlab/addons/qemu/jspace-19.3R1.3/

Step 3. Upload the downloaded image to the EVE /opt/unetlab/addons/qemu/jspace-19.3R1.3 folder using for example FileZilla or WinSCP.

Step 4. From the EVE CLI, go to the newly created image folder.

cd /opt/unetlab/addons/qemu/jspace-19.3R1.3/

Step 5. Rename original filename to virtioa.qcow2

mv space-19.3R1.3.qcow2 virtioa.qcow2 

Step 6.  Fix permissions:

/opt/unetlab/wrappers/unl_wrapper -a fixpermissions


Step 7. Open a lab, add Junos Space and power-on

Step 8. Default logins:

CLI: admin/abc123
https: super/juniper123

>>>>>>>> Chapter 2:

  • EVE Image Name: jspace-20.1R1.2
  • Downloaded Original Filename: space-20.1R1.2.qcow2
  • Version: 20.1R1.2
  • vCPUs: 2
  • vRAM: 8192
  • HDD Format: virtioa
  • Console: vnc/https
  • Interfaces: x2 virtio

Chapter 2 topic:

Note: I have installed it and it doesn’t work for me. Maybe it works for you … just try it!

Step 1. Download KVM qcow2 image from Juniper.

Step 2. Using the image table, create the correct image folder; this example is for the jspace-20.1R1.2 image in the table above.

mkdir /opt/unetlab/addons/qemu/jspace-20.1R1.2/

Step 3. Upload the downloaded image to the EVE /opt/unetlab/addons/qemu/jspace-20.1R1.2 folder using for example FileZilla or WinSCP.

Step 4. From the EVE CLI, go to the newly created image folder.

cd /opt/unetlab/addons/qemu/jspace-20.1R1.2/

Step 5. Rename original filename to virtioa.qcow2

mv space-20.1R1.2.qcow2 virtioa.qcow2 

Step 6.  Fix permissions:

/opt/unetlab/wrappers/unl_wrapper -a fixpermissions


Step 7. Open a lab, add Junos Space and power-on

Step 8. Default logins:

CLI: admin/abc123
https: super/juniper123

Getting Started Guide: https://www.juniper.net/documentation/en_US/junos-space20.1/platform/topics/concept/junos-space-getting-started-fabric-architecture-overview.html

2020.04.25 – Juniper vSRX-NG installation into EVE-PRO

I used this guide for my installation: https://www.eve-ng.net/index.php/documentation/howtos/howto-add-juniper-vsrx-ng-15-x-and-later/

Versions this guide is based on:

  • Name: vsrxng-20.1R1.11
  • Download original filename: junos-vsrx3-x86-64-20.1R1.11.qcow2
  • Version: 20.1R1.11
  • VCPUS: 2
  • VRAM: 4096

Step 1. Create correct image folder

root@eve-ng:/opt/unetlab/addons/qemu# mkdir vsrxng-20.1R1.11

Step 2. Upload the downloaded image to the EVE /opt/unetlab/addons/qemu/vsrxng-20.1R1.11/ folder using for example FileZilla or WinSCP.

Step 3. From the EVE CLI, go to the newly created image folder.

root@eve-ng:/opt/unetlab/addons/qemu# cd vsrxng-20.1R1.11

root@eve-ng:/opt/unetlab/addons/qemu/vsrxng-20.1R1.11# ls
junos-vsrx3-x86-64-20.1R1.11.qcow2

Step 4. Rename original filename to virtioa.qcow2

root@eve-ng:/opt/unetlab/addons/qemu/vsrxng-20.1R1.11# mv junos-vsrx3-x86-64-20.1R1.11.qcow2 virtioa.qcow2

Step 5. Fix permissions:

root@eve-ng:/opt/unetlab/addons/qemu/vsrxng-20.1R1.11# /opt/unetlab/wrappers/unl_wrapper -a fixpermissions


Apr 25 06:51:19 Apr 25 06:51:19 Online Check state: Valid

Step 6. Create a testing lab and open it:

Maybe I should increase the RAM allocated to EVE-PRO to start all 4. Right now I can start only 3 vSRXs.

Step 7. By default the number of interfaces is 4: fxp0 and ge-0/0/0 – ge-0/0/2.

To increase the number of interfaces, change the default Ethernets setting from 4 to 10. The picture below shows interfaces up to ge-0/0/6, but the maximum is ge-0/0/8.

Note: To open vSRX with Terminal on the MacBook Pro, make sure you changed the QEMU Nic to vmxnet3

Information about vSRX and vSRX-NG:

Junos release 18.4R1 has introduced a new model of virtual SRX (referred to as “vSRX 3.0”), which will be available in addition to the existing virtual SRX model (referred to as “vSRX”), which has been available since Junos 15.1X49-D15 release.

The vSRX 3.0 has a new architecture, which has benefits for operating in virtual environments. Some enhancements are a faster boot time, smaller install image size and better agility due to no nested routing-engine VM being used anymore.

However, the original vSRX model will still be available as long as not all features which are available on vSRX have been ported to vSRX 3.0 yet.

With respect to the security features, both virtual SRX models are at feature parity. However, some platform-related features may not be at parity yet.

The below table specifies differences and similarities in features between vSRX and vSRX 3.0, so that you can decide when to best use which type of virtual SRX, based on your needs and environment.

Platform feature differences overview between vSRX and vSRX 3.0

Feature | vSRX | vSRX 3.0

Resources supported
2 vCPU / 4 GB RAM | yes | yes
5 vCPU / 8 GB RAM | yes | yes
9 vCPU / 16 GB RAM | yes | yes (*2)
17 vCPU / 32 GB RAM | yes | yes (*2)
Flexible flow session capacity scaling by adding additional vRAM | yes | yes (*3)
Multi-core scaling support (Software RSS) | no | yes (*4)
Add one additional vCPU to give the nested RE two vCPUs | yes | N/A
VMXNET3 | yes | yes
Virtio (virtio-net, vhost-net) | yes | yes
SR-IOV over Intel 82599 series | yes | yes
SR-IOV over Intel X710/XL710 series | yes | yes
SR-IOV over Mellanox ConnectX-3 and ConnectX-4 | yes | no

Hypervisors supported
VMware ESXi 5.5, 6.0, 6.5 | yes | yes
VMware ESXi 6.7 | no | yes (*4)
KVM on Ubuntu 16.04, Centos 7.1, Redhat 7.2 | yes | yes
Hyper-V | yes | yes (*2)
Nutanix | no | yes (*2)
Contrail Networking 3.x | yes | yes
Contrail Networking 5.x | no | yes (*4)
AWS | yes | yes (*6)
Azure | yes | yes (*7)
Google Cloud Platform (GCP) | no | yes (*4)

Other features
Cloud-init | yes | yes
Powermode IPSec | yes | no
vMotion / live migration | no | yes
AWS ELB and ENA using C5 instances | no | yes (*1)
Chassis Cluster | yes | yes
GTP TEID based session distribution using Software RSS | no | yes (*4)
On-Device Antivirus Scan Engine (Avira) | no | yes (*5)

Requirements
Requires Hardware Acceleration / VMX CPU flag enabled in the hypervisor | yes | no
Disk space | 16 GB | 18 GB

Notes:

  1. Supported in Junos 18.4R1 and higher
  2. Supported in Junos 19.1R1 and higher
  3. Supported in Junos 19.2R1 and higher
  4. Supported in Junos 19.3R1 and higher
  5. Supported in Junos 19.4R1 and higher
  6. vSRX model available on AWS is vSRX 3.0 from Junos 18.3 onwards (before vSRX 3.0 was generally available, it was already available on AWS).
  7. vSRX model available on Azure is vSRX 3.0 from Junos 19.1 onwards

2020.04.24 – Juniper vMX (limited) installation into EVE-PRO

I used this guide for my installation: https://www.eve-ng.net/index.php/documentation/howtos/howto-add-juniper-vmx-16-x-17-x/

This guide is based on version:

  • EVE images name, vCPUs and vRAM
    • vmxvcp-limited-20.1R1.18-domestic-VCP, 1 vCPU, 2 GB vRAM
    • vmxvfp-limited-20.1R1.18-domestic-VFP, 3 vCPUs, 4 GB vRAM
  • Downloaded Filename
    • vmx-bundle-20.1R1.11-limited.tar
  • Version
    • Junos: 20.1R1.11