2020.05.2 – Connecting EVE Lab to a physical device

Some helpful ideas can be found in the EVE Professional Cookbook.

A virtual lab that includes CentOS is configured in EVE-PRO. EVE-PRO runs as a virtual machine in VMware Fusion, and VMware Fusion runs on a MacBook Pro. The idea is to connect the virtual CentOS to the Internet.

Open VMware Fusion > Click on EVE-PRO virtual machine > Click on Settings…

Create, or select if already created, 3 Network Adapters:

  • Network Adapter > Connect to vSphere network
  • Network Adapter 2 > Connect to vSphere network
  • Network Adapter 3 > Connect to vSphere network

In the EVE-PRO lab create a new Network of type Cloud1 and connect it to CentOS. Do not forget to configure the correct IP in CentOS. Details of how I did it are here: 2020.04.30 – CentOS installation into EVE-NG.
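A quick way to confirm from inside CentOS that the Internet really is reachable (the interface name ens4 is only an example from my own setup; use whatever name your node shows):

# show the address on the interface connected to Cloud1 (ens4 is an assumed name)
ip addr show ens4
# test reachability by IP and by name
ping -c 3 8.8.8.8
ping -c 3 www.eve-ng.net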

The Internet is working now

I am happy!!!

2020.05.1 – Configuring a Junos Space Virtual Appliance in EVE-PRO

Note: Not working for me! I will install Junos Space into ESXi and try to make it work there.

What I need to read and apply are here: Configuring a Junos Space Virtual Appliance as a Standalone or Primary FMPM Node

After you deploy a Junos Space Virtual Appliance on an EVE-PRO server, you must enter basic network and machine information to make your Junos Space Virtual Appliance accessible on the network. You must also add disk space to the partitions of the Junos Space Virtual Appliance.

Before you begin, ensure that you have the following information available:

  • IPv4 address and subnet mask for the node management (eth0) Ethernet interface
  • IPv4 address of the default gateway for the eth0 Ethernet interface
  • IPv4 address of the name server
  • Virtual IP (VIP) address in IPv4 and IPv6 formats
    • The IPv4 format of the VIP address is used for accessing the Junos Space Network Management Platform GUI through a Web browser. This IP address must be in the same subnet as the IP address assigned to the eth0 Ethernet interface
  • IPv4 address or URI of the NTP source to synchronize time

To configure a Junos Space Virtual Appliance:

  • At the Junos Space login prompt, type admin as your default login name and press Enter.
space-node login:admin 
Password:
  • Type abc123 as the default administrator password and press Enter. Junos Space prompts you to change your default password.
  • To change the default password, do the following: 
    • Type the default password and press Enter.
    • Type your new password and press Enter.
    • Retype your new password and press Enter.
  • Enter the new password to log in to Junos Space.
  • Type S to install the virtual appliance as a Junos Space node.
  • Configure the IP address for the eth0 interface.
    • Type 1.
    • Type the IPv4 address for eth0 interface in dotted-decimal notation and press Enter.
Please enter new IPv4 address for interface eth0:
172.25.11.99
  • Type the subnet mask for the IPv4 address and press Enter.
Please enter new IPv4 subnet mask for interface eth0:
255.255.255.0
  • Type the IPv4 address of the default gateway for the eth0 Ethernet interface in dotted-decimal notation and press Enter.
Enter the default IPv4 gateway as a dotted-decimal IP Address:
172.25.11.254
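Once the wizard finishes and eth0 is up, you can at least confirm basic reachability from another node or host in the same lab subnet (172.25.11.99 is the eth0 address entered above):

# run from a host that can reach the eth0 subnet
ping -c 3 172.25.11.99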

2020.05.1 – Start Juniper JSE lab into EVE-PRO

The Juniper Space Essentials (JSE) lab is the laboratory for the JSE course. I will configure this lab on:

  • MacBook PRO
  • VMware Fusion
  • EVE-PRO

Step 1. Create the lab in EVE-PRO

Step 2. Power-on all

To be sure it works OK, I power on the nodes one at a time in this order:

  • JuniperSpace1
  • CentOS
  • vMX-VCP1
  • vMX-VFP1
  • vMX-VCP2
  • vMX-VFP2

All nodes powered on, but I have difficulty continuing with the installation of JuniperSpace.

I will try to install this whole lab in ESXi and see if it works.

2020.04.30 – CentOS installation into EVE-NG

All the information I used for the installation I found here:

Things used:

  • EVE-PRO (meaning the licensed version)
  • CentOS CentOS-8.1.1911-x86_64-dvd1.iso

Step 1. Download CentOS – downloaded from the Download CentOS page (Linux DVD ISO)

Step 2. Go into EVE-PRO and move into the qemu directory

root@eve-ng:~# cd /opt/unetlab/addons/qemu

Step 3. Create a new directory and move inside it

root@eve-ng:/opt/unetlab/addons/qemu# mkdir linux-centeros-8
root@eve-ng:/opt/unetlab/addons/qemu# cd linux-centeros-8
root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8#

Step 4. Upload CentOS-8.1.1911-x86_64-dvd1.iso to EVE-PRO using the FileZilla application

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# ls
CentOS-8.1.1911-x86_64-dvd1.iso
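If you prefer the command line to FileZilla, the same upload can be done with scp from the Mac; the local path and the EVE-PRO address below are placeholders, not values from my lab:

# run on the MacBook; replace <eve-pro-ip> with the address of your EVE-PRO VM
scp ~/Downloads/CentOS-8.1.1911-x86_64-dvd1.iso root@<eve-pro-ip>:/opt/unetlab/addons/qemu/linux-centeros-8/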

Step 5. Create the qcow2 file

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# /opt/qemu/bin/qemu-img create -f qcow2 hda.qcow2 40G
Formatting 'hda.qcow2', fmt=qcow2 size=42949672960 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# 

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# ls
CentOS-8.1.1911-x86_64-dvd1.iso  hda.qcow2
root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# 
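If you want to double-check the new disk before going on, qemu-img can show its format and virtual size (same binary as above):

# should report file format qcow2 and virtual size 40G
/opt/qemu/bin/qemu-img info hda.qcow2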

Step 6. Rename the CentOS-8.1.1911-x86_64-dvd1.iso file

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# mv CentOS-8.1.1911-x86_64-dvd1.iso cdrom.iso

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# ls
cdrom.iso  hda.qcow2

Step 7. Log in to EVE-PRO, then add a new lab and add a CentOS node

The CentOS node needs to have:

  • Name: CentOS
  • CPU: 2
  • RAM (MB): 4096
  • Ethernets: 2
  • QEMU Version : 2.4
  • QEMU arch: tpl (x86_64)
  • QEMU Nic: e1000

Click SAVE

Add a switch and configure it as Type: Management (Cloud0) and click Save

Connect the CentOS node to it

Step 8. Now ready to start CentOS

Do not click anything, just wait …

Step 9. Go through the installation process

English and Continue

Verify and configure different things…

Keyboard: English

Software: Server with GUI

Installation Destination: ATA QEMU HARDDISK

After finishing click Begin Installation

During installation you need to set the root password and a user name and password

Root Password

User Name and Password. Also select Make this user admin

Wait for the installation to finish

Step 10. Restarting CentOS

Before restarting CentOS you need to rename the installation ISO so that the node does not boot from it again

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# mv cdrom.iso centos-install.iso

I do not trust the restart. I click Restart, then power off and then power on (Start) the node

Step 11. Go through the first-boot setup process.

Click License Information

I accept the license agreement.

Click Finish configuration

English and Next

English (US) and Next

Privacy OFF and Next

Connect Your Online Accounts: Skip

Finish the first-boot setup

Step 12. Networks

Select Wired Settings

I don’t have automatic (DHCP) addressing on these networks, so I have to configure them manually

Open Terminal

Use the ifconfig command
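For example, just to see which interfaces exist before touching anything (on CentOS 8 ifconfig is only available if the net-tools package is installed, so ip is the safer choice):

# list all interfaces and any addresses already assigned
ip addr show
# current routing table, to see whether a default route exists
ip route show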

I will not do this now …

Step 13. Save the configured CentOS for all new EVE-PRO labs

In EVE-PRO open Lab Details

Use SSH

root@eve-ng:/opt/unetlab/tmp/0/085884f1-7807-492d-814f-7b588fd1892c/1# ls
cdrom.iso  hda.qcow2  l1down_1  l1up_0  mon-sock  mon2-sock  qmp-sock  wrapper.txt

root@eve-ng:/opt/unetlab/tmp/0/085884f1-7807-492d-814f-7b588fd1892c/1# /opt/qemu/bin/qemu-img commit hda.qcow2


Image committed.

The directory listing here is the same:

root@eve-ng:/opt/unetlab/tmp/0/085884f1-7807-492d-814f-7b588fd1892c/1# ls
cdrom.iso  hda.qcow2  l1down_1  l1up_0  mon-sock  mon2-sock  qmp-sock  wrapper.txt
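As I understand it, the hda.qcow2 under /opt/unetlab/tmp/... is a copy-on-write overlay whose backing file is the template disk in /opt/unetlab/addons/qemu/linux-centeros-8, and qemu-img commit merges the changes made in the lab back into that base image. You can check the relationship yourself (a sketch, using the same paths as above):

# for a lab node disk this should show a "backing file" pointing at the template folder
/opt/qemu/bin/qemu-img info hda.qcow2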

A command needed to make it work as configured:

root@eve-ng:/opt/unetlab/addons/qemu/linux-centeros-8# /opt/unetlab/wrappers/unl_wrapper -a fixpermissions


May 01 07:09:28 May 01 07:09:28 Online Check state: Valid

All done. CentOS is ready to go. We will now add a couple of nodes to confirm that CentOS is working.

Step 14. Verify that CentOS is working

Add a total of three CentOS nodes

Power-on all

All 3 can power-on …

Step 15. Configure and Manage Network Connections using nmcli

The information I needed I found here: How to Configure and Manage Network Connections using nmcli. I use the nmcli connection sub-command chapter.

  • View connection profiles
# nmcli connection show 
NAME UUID TYPE DEVICE 
virbr0 bbe539aa-7042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet virbr0
virbr0-nic bbe539aa-8042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet virbr0-nic
ens3 bbe539aa-5042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet --
ens4 bbe539aa-6042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet --
  • The nmcli connection add Command
$ nmcli connection add con-name new-ens3 ifname ens3 type ethernet ip4 192.168.1.25/24 gw4 192.168.2.1 
Connection 'new-ens3' (f0c23472-1aec-4e84-8f1b-be8a2ecbeade) successfully added.

$ nmcli connection add con-name new-ens4 ifname ens4 type ethernet ip4 172.25.11.254/24 gw4 172.25.11.254
Connection 'new-ens4' (f0c23472-1aec-5e84-8f1b-be8a2ecbeade) successfully added.
# nmcli connection show
NAME UUID TYPE DEVICE
new-ens3 bbe539aa-9042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet ens3
new-ens4 bbe539aa-1042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet ens4
virbr0 bbe539aa-7042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet virbr0
virbr0-nic bbe539aa-8042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet virbr0-nic
ens3 bbe539aa-5042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet --
ens4 bbe539aa-6042-4d28-a0e6-2a4d4f5dd744 802-3-ethernet --
# ls /etc/sysconfig/network-scripts/ifcfg* 
/etc/sysconfig/network-scripts/ifcfg-ens3 /etc/sysconfig/network-scripts/ifcfg-new-ens3
/etc/sysconfig/network-scripts/ifcfg-ens4 /etc/sysconfig/network-scripts/ifcfg-new-ens4

I can see these files, but I modify the settings using the nmcli command
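A minimal sketch of such a change with nmcli, assuming the new-ens3 profile created above; the address, gateway and DNS server here are just example values:

# switch the profile to a static address (example values, adjust to your lab)
nmcli connection modify new-ens3 ipv4.method manual ipv4.addresses 192.168.1.25/24 ipv4.gateway 192.168.1.1 ipv4.dns 8.8.8.8
# re-activate the profile so the change takes effect
nmcli connection up new-ens3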

I hope all these details help me and you in the future.

2020.04.26 – Juniper Junos Space Network Management installation into EVE-PRO

The information for the installation is from here: https://www.eve-ng.net/index.php/documentation/howtos/juniper-j-space/

Note: I have installed 2 versions and have not found a way to make either of them work.

>>>>>>>> Chapter 1:

  • EVE Image Name: jspace-19.3R1.3
  • Downloaded Original Filename: space-19.3R1.3.qcow2
  • Version: 19.3R1.3
  • vCPUs: 2
  • vRAM: 8192
  • HDD Format: virtioa
  • Console: vnc/https
  • Interfaces: x2 virtio

Chapter 1 topic:

Step 1. Download KVM qcow2 image from Juniper.

Step 2. Using our image table, create the correct image folder; this example is for the image jspace-19.3R1.3 in the table above.

mkdir /opt/unetlab/addons/qemu/jspace-19.3R1.3/

Step 3. Upload the downloaded image to the EVE /opt/unetlab/addons/qemu/jspace-19.3R1.3 folder using for example FileZilla or WinSCP.

Step 4. From the EVE cli, go to newly created image folder.

cd /opt/unetlab/addons/qemu/jspace-19.3R1.3/

Step 5. Rename original filename to virtioa.qcow2

mv space-19.3R1.3.qcow2 virtioa.qcow2 

Step 6.  Fix permissions:

/opt/unetlab/wrappers/unl_wrapper -a fixpermissions


Step 7. Open a lab, add Junos Space and power-on

Step 8. Default logins:

CLI: admin/abc123
https: super/juniper123

>>>>>>>> Chapter 2:

  • EVE Image Name: jspace-20.1R1.2
  • Downloaded Original Filename: space-20.1R1.2.qcow2
  • Version: 20.1R1.2
  • vCPUs: 2
  • vRAM: 8192
  • HDD Format: virtioa
  • Console: vnc/https
  • Interfaces: x2 virtio

Chapter 2 topic:

Note: I have installed it and it doesn’t work for me. Maybe it works for you … just try it!

Step 1. Download KVM qcow2 image from Juniper.

Step 2. Using our image table, create the correct image folder; this example is for the image jspace-20.1R1.2 in the table above.

mkdir /opt/unetlab/addons/qemu/jspace-20.1R1.2/

Step 3. Upload the downloaded image to the EVE /opt/unetlab/addons/qemu/jspace-20.1R1.2 folder using for example FileZilla or WinSCP.

Step 4. From the EVE cli, go to newly created image folder.

cd /opt/unetlab/addons/qemu/jspace-20.1R1.2/

Step 5. Rename original filename to virtioa.qcow2

mv space-20.1R1.2.qcow2 virtioa.qcow2 

Step 6.  Fix permissions:

/opt/unetlab/wrappers/unl_wrapper -a fixpermissions


Step 7. Open a lab, add Junos Space and power-on

Step 8. Default logins:

CLI: admin/abc123
https: super/juniper123

Getting Started Guide: https://www.juniper.net/documentation/en_US/junos-space20.1/platform/topics/concept/junos-space-getting-started-fabric-architecture-overview.html

2020.04.25 – Juniper vSRX-NG installation into EVE-PRO

I used this guide for my installation: https://www.eve-ng.net/index.php/documentation/howtos/howto-add-juniper-vsrx-ng-15-x-and-later/

Versions this guide is based on:

  • Name: vsrxng-20.1R1.11
  • Download original filename: junos-vsrx3-x86-64-20.1R1.11.qcow2
  • Version: 20.1R1.11
  • VCPUS: 2
  • VRAM: 4096

Step 1. Create correct image folder

root@eve-ng:/opt/unetlab/addons/qemu# mkdir vsrxng-20.1R1.11

Step 2. Upload the downloaded image to the EVE /opt/unetlab/addons/qemu/vsrxng-20.1R1.11/ folder using for example FileZilla or WinSCP.

Step 3. From the EVE cli, go to newly created image folder.

root@eve-ng:/opt/unetlab/addons/qemu# cd vsrxng-20.1R1.11

root@eve-ng:/opt/unetlab/addons/qemu/vsrxng-20.1R1.11# ls
junos-vsrx3-x86-64-20.1R1.11.qcow2

Step 4. Rename original filename to virtioa.qcow2

root@eve-ng:/opt/unetlab/addons/qemu/vsrxng-20.1R1.11# mv junos-vsrx3-x86-64-20.1R1.11.qcow2 virtioa.qcow2

Step 5. Fix permissions:

root@eve-ng:/opt/unetlab/addons/qemu/vsrxng-20.1R1.11# /opt/unetlab/wrappers/unl_wrapper -a fixpermissions


Apr 25 06:51:19 Apr 25 06:51:19 Online Check state: Valid

Step 6. Create a testing lab and open:

Maybe I should increase the RAM assigned to EVE-PRO to open all 4. Right now I can open only 3 vSRXs.
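A quick way to see how much memory the EVE-PRO VM actually has left before deciding to grow it (run on the EVE-PRO CLI):

# total, used and free memory in human-readable units
free -h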

Step 7. By default the number of interfaces is 4: fxp0 and ge-0/0/0 – ge-0/0/2.

To increase the number of interfaces, change the default Ethernets setting from 4 to 10. The picture below shows only up to ge-0/0/6, but the maximum goes up to ge-0/0/8.

Note: To open vSRX with Terminal on a MacBook Pro, make sure you configure/change the QEMU Nic to vmxnet3

Information about vSRX and vSRX-NG:

Junos release 18.4R1 has introduced a new model of virtual SRX (referred to as “vSRX 3.0”), which will be available in addition to the existing virtual SRX model (referred to as “vSRX”), which has been available since Junos 15.1X49-D15 release.

The vSRX 3.0 has a new architecture, which has benefits for operating in virtual environments. Some enhancements are a faster boot time, smaller install image size and better agility due to no nested routing-engine VM being used anymore.

However, the original vSRX model will still be available as long as not all features which are available on vSRX have been ported to vSRX 3.0 yet.

With respect to the security features, both virtual SRX models are at feature parity. However, some platform-related features may not be at parity yet.

The table below specifies differences and similarities in features between vSRX and vSRX 3.0, so that you can decide when it is best to use which type of virtual SRX, based on your needs and environment.

Platform feature differences overview between vSRX and vSRX 3.0

Feature | vSRX | vSRX 3.0
Resources supported | |
2 vCPU / 4 GB RAM | yes | yes
5 vCPU / 8 GB RAM | yes | yes
9 vCPU / 16 GB RAM | yes | yes (*2)
17 vCPU / 32 GB RAM | yes | yes (*2)
Flexible flow session capacity scaling by adding additional vRAM | yes | yes (*3)
Multi-core scaling support (Software RSS) | no | yes (*4)
Add one additional vCPU to give the nested RE two vCPU’s | yes | N/A
VMXNET3 | yes | yes
Virtio (virtio-net, vhost-net) | yes | yes
SR-IOV over Intel 82599 series | yes | yes
SR-IOV over Intel X710/XL710 series | yes | yes
SR-IOV over Mellanox ConnectX-3 and ConnectX-4 | yes | no
Hypervisors supported | |
VMware ESXi 5.5, 6.0, 6.5 | yes | yes
VMware ESXi 6.7 | no | yes (*4)
KVM on Ubuntu 16.04, Centos 7.1, Redhat 7.2 | yes | yes
Hyper-V | yes | yes (*2)
Nutanix | no | yes (*2)
Contrail Networking 3.x | yes | yes
Contrail Networking 5.x | no | yes (*4)
AWS | yes | yes (*6)
Azure | yes | yes (*7)
Google Cloud Platform (GCP) | no | yes (*4)
Other features | |
Cloud-init | yes | yes
Powermode IPSec | yes | no
vMotion / live migration | no | yes
AWS ELB and ENA using C5 instances | no | yes (*1)
Chassis Cluster | yes | yes
GTP TEID based session distribution using Software RSS | no | yes (*4)
On-Device Antivirus Scan Engine (Avira) | no | yes (*5)
Requirements | |
Requires Hardware Acceleration / VMX CPU flag enabled in the hypervisor | yes | no
Disk space | 16 GB | 18 GB

Notes:

  1. Supported in Junos 18.4R1 and higher
  2. Supported in Junos 19.1R1 and higher
  3. Supported in Junos 19.2R1 and higher
  4. Supported in Junos 19.3R1 and higher
  5. Supported in Junos 19.4R1 and higher
  6. vSRX model available on AWS is vSRX 3.0 from Junos 18.3 onwards (before vSRX 3.0 was generally available, it was already available on AWS).
  7. vSRX model available on Azure is vSRX 3.0 from Junos 19.1 onwards

2020.04.24 – Juniper vMX (limited) installation into EVE-PRO

I used this guide for my installation: https://www.eve-ng.net/index.php/documentation/howtos/howto-add-juniper-vmx-16-x-17-x/

This guide is based on version:

  • EVE images name, vCPUs and vRAM
    • vmxvcp-limited-20.1R1.18-domestic-VCP, 1 vCPU, 2 GB vRAM
    • vmxvfp-limited-20.1R1.18-domestic-VFP, 3 vCPUs, 4 GB vRAM
  • Downloaded Filename
    • vmx-bundle-20.1R1.11-limited.tar
  • Version
    • Junos: 20.1R1.11

2020.04.23 – Juniper vRR installation into EVE-PRO

Inspiration from the EVE guide: https://www.eve-ng.net/index.php/documentation/howtos/howto-add-juniper-vrr/ where it is done with version 17.4R1.16.

  • Note: I have installed it three times.
    • First time with vRR 19.2R1.8 (vrr-bundle-kvm-19.2R1.8.tar) – working OK!
    • Second time with vRR 20.1R1.11 (vrr-bundle-kvm-20.1R1.11.tar) – did not work!
    • Third time with vRR 19.4R1.10 (vrr-bundle-kvm-19.4R1.10.tar) – working OK!

This blog post is about the third installation, which went exactly like the first one.


2020.04.18 – Install EVE-NG: THE Network Emulator!

What I found and used as help to download and install EVE-NG is here: https://thenetworkberg.com/eve-ng-first-time-configuration/

Note: I had problems and I’ve installed it 2 times in 2 different versions: -NG and -PRO. I will add updates here as a Note in each Step, to remember in case I need to install it again.

You need to have something similar to this:

  • MacBook Pro (15-inch, 2016):
    • macOS Catalina version 10.15.3
    • Processor 2.9 GHz Quad-Core Intel
    • Memory 16 GB 2133 MHz LPDDR3 RAM
  • VMWare Fusion 11.5.3 installed

EVE-NG Community Cookbook: https://www.eve-ng.net/index.php/documentation/community-cookbook/
