Step 1. Uncompress the package in a location accessible from the MacBook Pro.
Step 2. Launch the VMware ESXi server, esxi00.silvique.ro, and log in to the server with your credentials.
Step 3. If you use Dropbox, make sure the needed files are fully synced locally:
Right-click on the ova folder inside vm-esxi/ova
Click Smart Sync > Local
Step 4. Setting Up the Network
In VMware ESXi, set up the networks for management (br-ext), the internal connection between the VMs (br-int), and the WAN ports for data:
Open the VMware ESXi web UI in Firefox
4.1. Virtual Switch Configuration
Click Networking > Virtual switches > Add standard virtual switch
1. Configure vSwitch Name: vmnic1
MTU: 1500
Uplink 1: vmnic1
Security (set all to Accept):
Promiscuous mode: Accept
MAC address changes: Accept
Forged transmits: Accept
2. Configure vSwitch Name: vmnic2
MTU: 1500
Uplink 1: vmnic2
Security (set all to Accept):
Promiscuous mode: Accept
MAC address changes: Accept
Forged transmits: Accept
3. Configure vSwitch Name: Internal.vMX
MTU: 1500
Uplink 1: delete (this internal-only switch has no uplink)
Security (set all to Accept):
Promiscuous mode: Accept
MAC address changes: Accept
Forged transmits: Accept
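If you prefer the ESXi shell to the web UI, the same switches can be created with esxcli; a minimal sketch, assuming SSH access to the host (the switch names match the steps above):

```shell
# Sketch: create the three standard vSwitches from the ESXi shell (run on esxi00 over SSH).
for sw in vmnic1 vmnic2 Internal.vMX; do
  esxcli network vswitch standard add --vswitch-name="$sw"
  # Allow promiscuous mode, MAC address changes, and forged transmits, as in the UI steps.
  esxcli network vswitch standard policy security set --vswitch-name="$sw" \
    --allow-promiscuous=true --allow-mac-change=true --allow-forged-transmits=true
done
# Attach the physical uplinks; Internal.vMX intentionally keeps no uplink.
esxcli network vswitch standard uplink add --vswitch-name=vmnic1 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vmnic2 --uplink-name=vmnic2
```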
4.2. Port Group Configuration
Click Networking > Port groups > Add port group
1. Configure Name: br-ext.vMX
VLAN ID: 0
Virtual Switch: vmnic1
Security (set all to Accept):
Promiscuous mode: Accept
MAC address changes: Accept
Forged transmits: Accept
2. Configure Name: br-int.vMX
VLAN ID: 0
Virtual Switch: Internal.vMX
Security (set all to Accept):
Promiscuous mode: Accept
MAC address changes: Accept
Forged transmits: Accept
3. Configure Name: p2p1-ge.vMX
VLAN ID: 0
Virtual Switch: vmnic2
Security (set all to Accept):
Promiscuous mode: Accept
MAC address changes: Accept
Forged transmits: Accept
4. Configure Name: p2p2-ge.vMX
VLAN ID: 0
Virtual Switch: vmnic2
Security (set all to Accept):
Promiscuous mode: Accept
MAC address changes: Accept
Forged transmits: Accept
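If you prefer the ESXi shell, the four port groups can also be created with esxcli; a sketch, assuming SSH access to the host:

```shell
# Sketch: create the port groups on their vSwitches (run in the ESXi shell).
esxcli network vswitch standard portgroup add --portgroup-name=br-ext.vMX  --vswitch-name=vmnic1
esxcli network vswitch standard portgroup add --portgroup-name=br-int.vMX  --vswitch-name=Internal.vMX
esxcli network vswitch standard portgroup add --portgroup-name=p2p1-ge.vMX --vswitch-name=vmnic2
esxcli network vswitch standard portgroup add --portgroup-name=p2p2-ge.vMX --vswitch-name=vmnic2
```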
4.3. Note: I discovered something after opening an SSH session to the vMX with the ssh root@172.25.11.3 command:
murgescusilvia@Murgescus-MacBook-Pro ~ % ssh root@172.25.11.3
Password:
Last login: Fri May 15 00:30:49 2020
--- JUNOS 20.1R1.11 Kernel 64-bit JNPR-11.0-20200219.fb120e7_buil
root@vMX:~ # cli
root@vMX> show interfaces terse | match ge-
ge-0/0/0 up up
ge-0/0/0.16386 up up
ge-0/0/1 up up
ge-0/0/1.16386 up up
ge-0/0/2 up down
ge-0/0/2.16386 up down
ge-0/0/3 up down
ge-0/0/3.16386 up down
ge-0/0/4 up down
ge-0/0/4.16386 up down
ge-0/0/5 up down
ge-0/0/5.16386 up down
ge-0/0/6 up down
ge-0/0/6.16386 up down
ge-0/0/7 up down
ge-0/0/7.16386 up down
ge-0/0/8 up down
ge-0/0/8.16386 up down
ge-0/0/9 up down
ge-0/0/9.16386 up down
Only ge-0/0/0 and ge-0/0/1 are up/up; all the other interfaces are up/down. You have to create additional port group networks to bring more of them to up/up. For example, in the VM create port groups from p2p3-ge.vMX up to a maximum of p2p8-ge.vMX.
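From the ESXi shell, a sketch for adding those extra port groups in one go (run on esxi00 over SSH; the names follow the p2pN-ge.vMX pattern used above):

```shell
# Sketch: add port groups p2p3-ge.vMX .. p2p8-ge.vMX on vSwitch vmnic2.
for i in 3 4 5 6 7 8; do
  esxcli network vswitch standard portgroup add \
    --portgroup-name="p2p${i}-ge.vMX" --vswitch-name=vmnic2
done
```

After attaching them as network adapters to the vFPC VM, the corresponding ge- interfaces came up: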
root@vMX> show interfaces terse | match ge-
ge-0/0/0 up up
ge-0/0/0.16386 up up
ge-0/0/1 up up
ge-0/0/1.16386 up up
ge-0/0/2 up up
ge-0/0/2.16386 up up
ge-0/0/3 up up
ge-0/0/3.16386 up up
ge-0/0/4 up up
ge-0/0/4.16386 up up
ge-0/0/5 up up
ge-0/0/5.16386 up up
ge-0/0/6 up up
ge-0/0/6.16386 up up
ge-0/0/7 up up
ge-0/0/7.16386 up up
ge-0/0/8 up down
ge-0/0/8.16386 up down
ge-0/0/9 up down
ge-0/0/9.16386 up down
! At the moment I do not know how to bring all ge interfaces, including ge-0/0/8 and ge-0/0/9, to up/up. I will look for a solution when I need it.
Step 5. Deploying the VCP VM
To deploy the VCP VM using .ova files:
Open the VMware ESXi web UI in Firefox
Click Virtual Machines > Create/Register VM
Select creation type: click Deploy a virtual machine from an OVF or OVA file, then Next
Select OVF and VMDK files:
Name: vMX-vVCP_20.1R1.1
File: vcp_20.1R1.11.ova
Click Next
Select storage: ESXi00.datastore1, then Next
Untick Power on automatically, then Next
Click Finish
Step 6. Deploying the FPC VM
To deploy the FPC VM using .ova files:
Open the VMware ESXi web UI in Firefox
Click Virtual Machines > Create/Register VM
Select creation type: click Deploy a virtual machine from an OVF or OVA file, then Next
Select OVF and VMDK files:
Name: vMX-vFPC_20.1R1.1
File: vfpc_20.1R1.11.ova
Click Next
Select storage: ESXi00.datastore1, then Next
Untick Power on automatically, then Next
Click Finish
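As an alternative to the web UI wizard, the two OVA deployments can be scripted with VMware's ovftool; a hedged sketch, assuming ovftool is installed on the MacBook Pro and root access to esxi00.silvique.ro:

```shell
# Sketch: deploy both OVAs from the MacBook Pro with ovftool (not from the original guide).
ovftool --noSSLVerify --datastore=datastore1 --name=vMX-vVCP_20.1R1.1 \
  vcp_20.1R1.11.ova 'vi://root@esxi00.silvique.ro/'
ovftool --noSSLVerify --datastore=datastore1 --name=vMX-vFPC_20.1R1.1 \
  vfpc_20.1R1.11.ova 'vi://root@esxi00.silvique.ro/'
```

Leave the VMs powered off (ovftool does not power on a deployed VM unless --powerOn is given), since the settings in Steps 7 and 8 must be adjusted first.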
After you have deployed the vVCP and vFPC VMs, you can modify the amount of memory, the number of vCPUs, and the number of WAN (here vmnic2) ports.
Step 7. Settings for the vVCP VM
CPU: 1
Memory: 1024 MB
Network Adapter 1: br-ext.vMX
Adapter Type: E1000
Network Adapter 2: br-int.vMX
Adapter Type: E1000
Step 8. Settings for the vFPC VM
CPU: 3
Memory: 2048 MB
Network Adapter 1: p2p1-ge.vMX
Adapter Type: VMXNET 3
Network Adapter 2: p2p2-ge.vMX
Adapter Type: VMXNET 3
Network Adapter 3: br-ext.vMX
Adapter Type: E1000
Network Adapter 4: br-int.vMX
Adapter Type: E1000
Not mandatory, but you can add more networks:
Network Adapter 5: p2p3-ge.vMX
Adapter Type: VMXNET 3
Network Adapter 6: p2p4-ge.vMX
Adapter Type: VMXNET 3
Network Adapter 7: p2p5-ge.vMX
Adapter Type: VMXNET 3
Network Adapter 8: p2p6-ge.vMX
Adapter Type: VMXNET 3
Network Adapter 9: p2p7-ge.vMX
Adapter Type: VMXNET 3
Network Adapter 10: p2p8-ge.vMX
Adapter Type: VMXNET 3
Step 9. Launching vMX on VMware
Now you are ready to launch vMX on VMware. The first basic configuration is the following:
[edit]
root@silvia# show
## Last changed: 2020-01-17 04:53:09 UTC
version 20.1R1.11;
system {
host-name vMX;
root-authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
login {
class c1only {
logical-system C1;
permissions all;
}
class c2only {
logical-system C2;
permissions all;
}
class c3only {
logical-system C3;
permissions all;
}
class c4only {
logical-system C4;
permissions all;
}
class r1only {
logical-system R1;
permissions all;
}
class r2only {
logical-system R2;
permissions all;
}
class r3only {
logical-system R3;
permissions all;
}
class r4only {
logical-system R4;
permissions all;
}
class r5only {
logical-system R5;
permissions all;
}
class r6only {
logical-system R6;
permissions all;
}
class r7only {
logical-system R7;
permissions all;
}
user class01 {
uid 2001;
class c1only;
authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
}
user class02 {
uid 2002;
class c2only;
authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
}
user class03 {
uid 2003;
class c3only;
authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
}
user class04 {
uid 2004;
class c4only;
authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
}
user junos01 {
uid 2023;
class r1only;
authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
}
user junos02 {
uid 2024;
class r2only;
authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
}
user junos03 {
uid 2223;
class r3only;
authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
}
user junos04 {
uid 2224;
class r4only;
authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
}
user junos05 {
uid 2225;
class r5only;
authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
}
user junos06 {
uid 2226;
class r6only;
authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
}
user junos07 {
uid 2227;
class r7only;
authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
}
user vMX {
full-name "Silvia Murgescu";
uid 2000;
class super-user;
authentication {
encrypted-password "your_password"; ## SECRET-DATA
}
}
}
services {
ssh {
root-login allow;
protocol-version v2;
}
}
syslog {
user * {
any emergency;
}
file messages {
any notice;
authorization info;
}
file interactive-commands {
interactive-commands any;
}
}
processes {
dhcp-service {
traceoptions {
file dhcp_logfile size 10m;
level all;
flag all;
}
}
}
}
logical-systems {
C1;
C2;
C3;
C4;
R1;
R2;
R3;
R4;
R5;
R6;
R7;
Source;
Receiver;
}
chassis {
fpc 0 {
pic 0 {
tunnel-services {
bandwidth 10g;
}
interface-type ge;
number-of-ports 8;
}
lite-mode;
}
network-services enhanced-ip;
}
interfaces {
ge-0/0/0 {
vlan-tagging;
}
ge-0/0/1 {
vlan-tagging;
}
ge-0/0/2 {
vlan-tagging;
}
ge-0/0/3 {
vlan-tagging;
}
ge-0/0/4 {
vlan-tagging;
}
ge-0/0/5 {
vlan-tagging;
}
ge-0/0/6 {
vlan-tagging;
}
ge-0/0/7 {
vlan-tagging;
}
fxp0 {
unit 0 {
description For_SSH_Connection;
family inet {
address 172.25.11.3/24;
}
}
}
}
Note: The IPs 172.25.11.1 and 172.25.11.2 did not work for opening/running the vMX from the MacBook Pro Terminal application. I tried, and it works if the IP 172.25.11.3/24 is configured.
To copy and paste a config from a text file, use load replace terminal; press Ctrl-D (^D) to exit terminal input mode and return to the configuration prompt:
[edit]
root@vMX# load replace terminal
-> paste the configuration here
CTRL-D
[edit]
root@vMX# commit
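Instead of pasting, the configuration can also be copied to the router and loaded from a file; a sketch, assuming the file name lab-config.txt (my placeholder) and the fxp0 address configured above:

```shell
# Sketch: copy the config file to the vMX, then load it from within the Junos CLI.
scp lab-config.txt root@172.25.11.3:/var/tmp/
ssh root@172.25.11.3
# Then, on the vMX:
#   cli
#   configure
#   load replace /var/tmp/lab-config.txt
#   commit
```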
NOTE: If connectivity and communication between the interfaces is needed, set VLAN ID 4095 (trunk all VLANs) on the port groups.
Below is an example: two logical systems with two different interfaces, ge-0/0/1.12 and ge-0/0/5.12, and the ping command used to verify that it works.
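The example itself appeared as a screenshot; a minimal sketch of what such a configuration might look like (vlan-id 12 matches the unit numbers, while the 10.12.12.0/24 addresses are my assumption, not from the original):

```shell
# Sketch (Junos CLI set commands): two logical systems joined over units .12.
set logical-systems R1 interfaces ge-0/0/1 unit 12 vlan-id 12
set logical-systems R1 interfaces ge-0/0/1 unit 12 family inet address 10.12.12.1/24
set logical-systems R2 interfaces ge-0/0/5 unit 12 vlan-id 12
set logical-systems R2 interfaces ge-0/0/5 unit 12 family inet address 10.12.12.2/24
commit
# From configuration mode, test the path between the two logical systems:
#   run ping 10.12.12.2 logical-system R1 count 3
```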
There are 6 licenses for 6 ESXi hosts and 2 licenses for vCenter.
A maximum of 3 ESXi hosts can be included in vCenter.
I add to vCenter only the ESXi hosts I power on and use at that moment and that need vCenter for configuration.
The license shows Usage: 4 CPUs and Capacity: 6 CPUs.
The ESXi VM esxi00.silvique.ro has an Evaluation License.
When I open Assign License it shows that assigning is possible, because Usage is 4 CPUs against a Capacity of 6 CPUs.
If I choose ESXi Licensing instead, Usage changes to 8 CPUs and it is impossible to click OK.
Important information:
The license is based on 6 CPUs of capacity, NOT on 6 ESXi VMs.
If you use ESXi hosts with 2 CPUs each, you can license a maximum of 3 hosts, for a total of 6 CPUs of capacity. If you use one ESXi host with 4 CPUs, the license accepts only one more ESXi host with 2 CPUs.
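The constraint above is just arithmetic over CPU counts; a tiny sketch:

```shell
# Sketch: the license covers a total CPU capacity, not a host count.
capacity=6
usage=0
for host_cpus in 2 2 2; do          # three 2-CPU ESXi hosts
  usage=$((usage + host_cpus))
done
echo "using $usage of $capacity licensed CPUs"
```

With one 4-CPU host, only a single additional 2-CPU host fits (4 + 2 = 6).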
Before this I added a new hard disk in VMware Fusion and then configured the storage datastore in VMware ESXi.
Now I want to install a VM and I get these errors:
– The VM configuration was rejected. Please see browser console
– Failed to create virtual machine vm. The operation is not allowed in the current state.
I found the answer, including the solution, on the internet: "Check the VMware host's state. If it is in maintenance mode you are not able to create virtual machines; keep the host in normal state."
The Security Director image: Security-Director-19.4R1.53.img
Overview
You can deploy the Junos Space Virtual Appliance *.ova file on a VMware ESXi server version 5.5, 6.0, or 6.5. Basically I have ESXi 6.7, but during the installation step maybe I can change the declared version to 6.0.
After the Junos Space Virtual Appliance is deployed, you can use the VMware vSphere client or Virtual Machine Manager (VMM) to connect to the VMware ESXi server and configure the Junos Space Virtual Appliance.
The minimum hardware requirements for deploying a Junos Space Virtual Appliance are as follows:
at least one processor core
a 64-bit quad-core processor with a clock speed of at least 2.66 GHz
Installing a Junos Space Virtual Appliance on a VMware ESXi Server
Log in to ESXi; mine is named esxi00. Go to Virtual Machine > Create/Register VM and click Deploy a virtual machine from an OVF or OVA file. Then Next.
Enter a name such as jSpace-1-20.1R1.2. I needed to install another version, so I used the name jSpace-2-19.4R1.3. Browse the MacBook and choose the space-19.4R1.3.ova file. Click Next.
Choose the datastore where jSpace will be installed
Note: I have installed a new ESXi VM in VMware Fusion with:
Datastore name: datastore1
Capacity: 532 GB
Free: 504 GB (as I installed CentOS first)
Type: VMFS6
Please untick Power on automatically; you will see why below. Click Next.
Verify that everything is correct and click Finish.
If you power on now it fails to power on; in my case some modifications had to be made before powering on.
Down in Recent Tasks you will see the task running; wait for it to complete, then continue.
Go to Virtual Machine > jSpace-1-20.1R1.2 (this is the default), or in my case
Virtual Machine > jSpace-2-19.4R1.3
Click Edit
Set CPU: 2, Memory: 8 GB (8192 MB); the default hard disk is the minimum accepted 500 GB.
You will be asked to enter the user name and password, which are admin and abc123 respectively (abc123 is also the UNIX password).
Once you have entered these, you will be asked to change the password. Choose your new password according to the local instructions; otherwise you may fail to set a proper password.
[sudo] password for admin: the_configured_password
Press Enter and continue
Choose the type of node to be installed [S/F] S
Configuring eth0:
1) Configure IPv4
2) Configure both IPv4 and IPv6
R) Redraw menu
Type 1 and continue:
Choice [1-2,R]: 1
Please enter new IPv4 address for interface eth0
172.25.11.109
Please enter new IPv4 subnet mask for interface eth0
255.255.255.0
Enter the default IPv4 gateway as a dotted-decimal IP address:
172.25.11.254
Please type the IPv4 nameserver address in dotted decimal notation:
8.8.8.8
Configure a separate interface for device management? [y/N] n
Will this Junos Space system be added to an existing cluster? [y/N] n
Web GUI configuration
Configuring IP address for web GUI:
1) Configure IPv4
R) Redraw Menu
Choice [1,R]: 1
Please enter IPv4 Address for web GUI:
172.25.11.100
Do you want to enable NAT service? [y/N] n
Add NTP Server? [y/N] y
Please type the new NTP server: 82.197.221.30
Note: For the NTP server you can also use the default IPv4 gateway, here meaning 172.25.11.254.
Please enter display name for this node: space2
Enter password for cluster maintenance mode: my_password
Re-enter password: my_password
-----
A> Apply settings
-----
Choice [ACQR]: A
An image followed here, but the full list of settings to apply is shown above.
Now you can connect to the box via SSH at its IP 172.25.11.109:
% ssh admin@172.25.11.109
admin@172.25.11.109's password: the-password
Junos Space Settings Menu
...
7) (Debug) run shell
...
Choice [1-7,AQR]: 7
[sudo] password for admin:
[root@space-000c29cb6706 ~]# ip -4 addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
inet 172.25.11.109/24 brd 172.25.11.255 scope global eth0 <--- primary IP
inet 172.25.11.100/24 brd 172.25.11.255 scope global secondary eth0:0 <--- secondary (web GUI) IP address
Now it is time to log in to the web UI.
Get inside CentOS using the password
Open Firefox application
Use https://172.25.11.100 to open Junos Space
Username: super
Password: juniper123
Change Temporary Password
Now I am going to install the Security-Director-19.4R1.53.img file.
Security Director
Testing ping in MacBook Pro Terminal
murgescusilvia@Murgescus-MacBook-Pro ~ % ping centos
PING centos.silvique.ro (10.1.1.50): 56 data bytes
64 bytes from 10.1.1.50: icmp_seq=0 ttl=64 time=0.832 ms
64 bytes from 10.1.1.50: icmp_seq=1 ttl=64 time=1.320 ms
64 bytes from 10.1.1.50: icmp_seq=2 ttl=64 time=0.684 ms
64 bytes from 10.1.1.50: icmp_seq=3 ttl=64 time=0.705 ms
64 bytes from 10.1.1.50: icmp_seq=4 ttl=64 time=0.461 ms
^C
--- centos.silvique.ro ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.461/0.800/1.320/0.286 ms
murgescusilvia@Murgescus-MacBook-Pro ~ %
-> Copy the file Security-Director-19.4R1.53.img from the MacBook Pro to CentOS using the Terminal on the MacBook Pro
It is not possible to use the user name Silvia to upload the Security-Director-19.4R1.53.img file to CentOS:
[root@CentOS /]# sudo rm Security-Director-19.4R1.53.img
[root@CentOS /]# ls
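A sketch of the copy step from the MacBook Pro side (the target directory /var/tmp and the root account are my assumptions, not from the original):

```shell
# Sketch: upload the Security Director image to the CentOS host as root.
scp Security-Director-19.4R1.53.img root@centos.silvique.ro:/var/tmp/
```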
Ready to use jSpace to deploy Security Director.
Inside CentOS open Firefox, then open jSpace at the web IP, https://172.25.11.100
The web user is super, with the configured password
Go to Administration -> Applications -> the + button (Add Application)
Select Upload via HTTP and upload Security-Director-19.4R1.53.img
Click on the Job ID to view details > OK
Once it appears, click Install, then OK
Application Management Job Information: Please log out and log in again after the installation of the new application has completed successfully. Click on the Job ID to view details. > OK
It will take a while for the application to be installed. I log out and will not log in again right away; I am taking a break to be sure it is installed by the time I return.
When it is finished you will see the other new applications:
Application Visibility – new
Version 19.4,
Release R1,
Build 53,
Server Group Platform
Log Director – new
Version 19.4,
Release R1,
Build 53,
Server Group Platform
Network Management Platform – existed already
Version 19.4,
Release R1,
Build 3,
Server Group Platform
NSM Migration
Version 19.4,
Release R1,
Build 53,
Server Group Platform
Security Director – new
Version 19.4,
Release R1,
Build 53,
Server Group Platform
Security Director Login and Reporting – new
Version 19.4,
Release R1,
Build 53,
Server Group Platform
In Administration > Licenses:
License Type: Trial
SKU Mode: Trial-license
Total License Days: 60
Remaining Days: 60
And here we are! We have installed both the Space platform and Security Director. Last but not least, I need to recap the usernames we have configured so far, to avoid any confusion:
1) admin user: set for the Linux shell; the default password during installation is abc123.
2) maintenance user: we also set a password for this, but it is used for special operations. There is no default password; it must be set.
3) super user: used for the web UI; the initial default password is juniper123.
I will try to do Part 5 using this idea: I'd recommend using FreeNAS instead of Ubuntu. I've just done a test and managed to set up a FreeNAS VM with 2 GB of RAM, create a volume, and connect it to ESXi using iSCSI.
My idea is to add a disk, configure iSCSI in FreeNAS, and connect my ESXi hosts to it.
In Part 4 I created a three-node cluster, but I couldn't enable DRS or HA because they require centralized storage. In this part, I'll create a storage server with FreeNAS and configure it so that my ESXi hosts can access it via iSCSI (with multipathing).
After completing the steps on the previous page, I am at a point where I have:
Three ESXi 6.7 VMs running on VMware Fusion
The first ESXi VM contains a pfSense firewall VM with built in DNS Resolver
One vCenter Server Appliance in VMware Fusion
I am able to access the hosts and vCenter from the Mac using domain names
Step 1. Configure the network and open the existing FreeNAS 11.3 U2.1
Open VMware Fusion, find the virtual machine FreeNAS, and select it.
Click Virtual Machine > Hard Disk (SCSI)
-> Processor and Memory
Verify processor and memory
Modify to 2 processor cores and 2048 MB (2 GB)
-> Hard Disk
Verify the total GB across all hard disks. If needed, add a new hard disk to reach a total of 80 GB. I have two hard disks, one with 20 GB...
... and one with 10 GB
If needed, add a new hard disk of 50 GB to reach a total of 80 GB
-> Network Adapter
You need a total of 3 network adapters. If needed, create new network adapters
Tag vSphere for all existing network adapters
-> The FreeNAS settings look like this
Step 2. Power on FreeNAS 11.3 U2.1 and configure all three networks
Power on FreeNAS
Log in via Firefox
On the left side click Network > Interfaces, then click ADD at the top right
Create VLANs named vlan101 and vlan102 with the details below; the procedure is the same for each
All the networks created are listed here
Now it is possible to ping the VM from the MacBook using the hostname freenas:
murgescusilvia@Murgescus-MacBook-Pro ~ % ping freenas
PING freenas.silvique.ro (10.1.1.201): 56 data bytes
64 bytes from 10.1.1.201: icmp_seq=0 ttl=64 time=0.285 ms
64 bytes from 10.1.1.201: icmp_seq=1 ttl=64 time=0.594 ms
64 bytes from 10.1.1.201: icmp_seq=2 ttl=64 time=0.447 ms
64 bytes from 10.1.1.201: icmp_seq=3 ttl=64 time=0.532 ms
64 bytes from 10.1.1.201: icmp_seq=4 ttl=64 time=0.352 ms
64 bytes from 10.1.1.201: icmp_seq=5 ttl=64 time=0.560 ms
^C
--- freenas.silvique.ro ping statistics ---
6 packets transmitted, 6 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.285/0.462/0.594/0.112 ms
Step 3. Verify iSCSI port groups inside vCenter
The next thing to do is add two new VMkernel adapters to our standard virtual switches so that the hosts can communicate with the storage server over multiple paths.
Power on the needed VMs: three ESXi hosts, one FreeNAS, one vCenter, and one pfSense (inside esxi01). To leave as much RAM as possible for vCenter I do NOT power them all on at the same time, but in the order mentioned before.
Go to the FreeNAS web UI, then > Shell, and ping all the iSCSI addresses:
10.10.1.11, 10.10.2.11
root@freenas[~]# ping 10.10.1.11
PING 10.10.1.11 (10.10.1.11): 56 data bytes
64 bytes from 10.10.1.11: icmp_seq=0 ttl=64 time=2.170 ms
64 bytes from 10.10.1.11: icmp_seq=1 ttl=64 time=1.299 ms
64 bytes from 10.10.1.11: icmp_seq=2 ttl=64 time=0.885 ms
^C
--- 10.10.1.11 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.885/1.451/2.170/0.536 ms
root@freenas[~]# ping 10.10.2.11
PING 10.10.2.11 (10.10.2.11): 56 data bytes
64 bytes from 10.10.2.11: icmp_seq=0 ttl=64 time=1.173 ms
64 bytes from 10.10.2.11: icmp_seq=1 ttl=64 time=0.848 ms
^C
--- 10.10.2.11 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.848/1.010/1.173/0.162 ms
10.10.1.12, 10.10.2.12
root@freenas[~]# ping 10.10.1.12
PING 10.10.1.12 (10.10.1.12): 56 data bytes
64 bytes from 10.10.1.12: icmp_seq=0 ttl=64 time=1.100 ms
64 bytes from 10.10.1.12: icmp_seq=1 ttl=64 time=0.665 ms
64 bytes from 10.10.1.12: icmp_seq=2 ttl=64 time=0.557 ms
64 bytes from 10.10.1.12: icmp_seq=3 ttl=64 time=0.643 ms
64 bytes from 10.10.1.12: icmp_seq=4 ttl=64 time=0.877 ms
^C
--- 10.10.1.12 ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.557/0.768/1.100/0.196 ms
root@freenas[~]# ping 10.10.2.12
PING 10.10.2.12 (10.10.2.12): 56 data bytes
64 bytes from 10.10.2.12: icmp_seq=0 ttl=64 time=0.954 ms
64 bytes from 10.10.2.12: icmp_seq=1 ttl=64 time=0.720 ms
64 bytes from 10.10.2.12: icmp_seq=2 ttl=64 time=0.713 ms
^C
--- 10.10.2.12 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.713/0.796/0.954/0.112 ms
10.10.1.13, 10.10.2.13
root@freenas[~]# ping 10.10.1.13
PING 10.10.1.13 (10.10.1.13): 56 data bytes
64 bytes from 10.10.1.13: icmp_seq=0 ttl=64 time=1.090 ms
64 bytes from 10.10.1.13: icmp_seq=1 ttl=64 time=0.981 ms
64 bytes from 10.10.1.13: icmp_seq=2 ttl=64 time=0.642 ms
^C
--- 10.10.1.13 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.642/0.904/1.090/0.191 ms
root@freenas[~]# ping 10.10.2.13
PING 10.10.2.13 (10.10.2.13): 56 data bytes
64 bytes from 10.10.2.13: icmp_seq=0 ttl=64 time=0.497 ms
64 bytes from 10.10.2.13: icmp_seq=1 ttl=64 time=0.533 ms
^C
--- 10.10.2.13 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.497/0.515/0.533/0.018 ms
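The ping checks above can be condensed into one sweep; a sketch (the -W 1 one-second timeout assumes Linux-style ping options; FreeNAS's BSD ping flags differ slightly):

```shell
# Sketch: ping every iSCSI path once and report reachability.
check_paths() {
  for ip in "$@"; do
    if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
      echo "$ip reachable"
    else
      echo "$ip unreachable"
    fi
  done
}
check_paths 10.10.1.11 10.10.2.11 10.10.1.12 10.10.2.12 10.10.1.13 10.10.2.13
```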
Step 4. Configure the iSCSI target in FreeNAS
Now that the network is configured and all three ESXi hosts can communicate with the storage server, I need to configure the iSCSI target in FreeNAS.
The idea is to test VMware vSphere 6.7 and the vCenter appliance, moving on to vMotion and Storage vMotion.
I need the vSphere Web Client to install the Juniper vQFX server. I did not find a guide about this, so I will use some ideas from the Installing Juniper vMX online guide, which works with the vSphere Web Client. I need to have all the material prepared in order to understand that page.