
Commit 736ac10

Author: Bryan Fu (committed)

RTD infrasim-compute update - get started
1 parent 0df6eac commit 736ac10

File tree: 5 files changed (+176 / -127 lines)


docs/configuration.rst

Lines changed: 91 additions & 5 deletions
@@ -6,7 +6,95 @@ Configuration
 Virtual Server Configuration file
 ------------------------------------------------
 
-**Forrest - Under construction**
+There is one central virtual server configuration file, /etc/infrasim/infrasim.yml (`source code <https://github.com/InfraSIM/infrasim-compute/blob/master/template/infrasim.yml>`_). All adjustable parameters are defined in this file, and it is the one place to customize or adjust the virtual server node. For simplicity, not all options are explicitly listed in it; the example configuration file /etc/infrasim.full.yml.example (`source code <https://github.com/InfraSIM/infrasim-compute/blob/master/etc/infrasim.full.yml.example>`_) lists all supported parameters and their definitions. By referring to the example file, you can modify the real infrasim.yml and then restart the infrasim-main service so the new properties take effect.
+
+Here is the full content of the example configuration file; every key-value pair in it can be added to or modified in your in-use infrasim.yml::
+
+    # Unique identifier
+    name: node-1
+
+    # Node type is mandatory
+    type: quanta_d51
+
+    compute:
+      kvm_enabled: true
+      numa_control: true
+      cpu:
+        model: host
+        features: +vmx
+        quantities: 8
+      memory:
+        size: 4096
+      storage_backend:
+        -
+          controller:
+            type: ahci
+            max_drive_per_controller: 6
+            drives:
+              -
+                model: SATADOM
+                serial: HUSMM142
+                bootindex: 1
+                # To boot esxi, please set ignore_msrs to Y
+                # sudo -i
+                # echo 1 > /sys/module/kvm/parameters/ignore_msrs
+                # cat /sys/module/kvm/parameters/ignore_msrs
+                file: chassis/node1/esxi6u2-1.qcow2
+              -
+                vendor: Hitachi
+                model: HUSMM0SSD
+                serial: 0SV3XMUA
+                # To set rotation to 1 (SSD), need some customization
+                # on qemu
+                # rotation: 1
+                # Use RAM-disk to accelerate IO
+                file: /dev/ram0
+              -
+                vendor: Samsung
+                model: SM162521
+                serial: S0351X2B
+                # Create your disk image first
+                # e.g. qemu-img create -f qcow2 sda.img 2G
+                file: chassis/node1/sda.img
+              -
+                vendor: Samsung
+                model: SM162521
+                serial: S0351X3B
+                file: chassis/node1/sdb.img
+              -
+                vendor: Samsung
+                model: SM162521
+                serial: S0451X2B
+                file: chassis/node1/sdc.img
+      networks:
+        -
+          network_mode: bridge
+          network_name: br0
+          device: vmxnet3
+        -
+          network_mode: bridge
+          network_name: br0
+          device: vmxnet3
+      ipmi:
+        interface: bt
+        host: 127.0.0.1
+      smbios: chassis/node1/quanta_d51_smbios.bin
+    bmc:
+      interface: br0
+      username: admin
+      password: admin
+      emu_file: chassis/node1/quanta_d51.emu
+
+    # Renamed from telnet_listen_port to ipmi_console_port, extracted from bmc
+    ipmi_console_port: 9000
+
+    # Used by ipmi_sim and qemu
+    bmc_connection_port: 9100
+
+    # Used by socat and qemu
+    serial_port: 9003
+
 
 Networking
 ------------------------------------------------
@@ -43,12 +131,10 @@ Networking
 .. image:: _static/networking_bridge_multiple.PNG
     :align: center
 
-Virtual Power Distribution Unit
+Virtual Power Distribution Unit - Robert - Under construction
 ------------------------------------------------
 
 Current Virtual PDU implementation only supports running the entire virtual infrastructure on VMware ESXi because it only supports simulating chassis power control through the VMware SDK.
 
 .. image:: _static/networkwithoutrackhd.png
-    :align: center
-
-**Robert - Under construction**
+    :align: center
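
A note on applying changes to this file: the configuration section above says new properties take effect after the infrasim-main service is restarted. A minimal sketch of that workflow, assuming the /etc/infrasim/infrasim.yml path introduced in this commit and using only the stop/start commands documented in get_start.rst, could look like::

    # Keep a backup of the working configuration before editing
    sudo cp /etc/infrasim/infrasim.yml /etc/infrasim/infrasim.yml.bak

    # Change any key-value pair listed in /etc/infrasim.full.yml.example,
    # for example a drive serial number or a backing image path
    sudo vi /etc/infrasim/infrasim.yml

    # Restart the service so the new properties take effect
    sudo infrasim-main stop
    sudo infrasim-main start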

docs/get_start.rst

Lines changed: 46 additions & 116 deletions
@@ -6,150 +6,80 @@ This chapter describes how to access virtual server, virtual PDU and virtual inf
 Virtual Server
 ------------------------------------------------
 
-**Forrest - Under construction**
+Command interfaces
+~~~~~~~~~~~~~~~~~~~~~
 
-#. Check Virtual Machine Status after deployment
+#. Initialization (you only need to do this once)::
 
-    * Check vBMC status using ipmitool on host machine.::
+    sudo infrasim-init
 
-        #sudo apt-get install ipmitool
-        #ipmitool -I lanplus -U admin -P admin -H <vm ip address> sdr list
+#. Start the InfraSIM service::
 
+    sudo infrasim-main start
 
-    You can get the command result like the following ::
+#. Check status and version number::
 
-        Pwr Unit Status | Not Readable | ns
-        IPMI Watchdog | Not Readable | ns
-        FP NMI Diag Int | Not Readable | ns
-        SMI TimeOut | Not Readable | ns
-        System Event Log | Not Readable | ns
-        System Event | Not Readable | ns
-        Button | Not Readable | ns
-        PCH Therm Trip | Not Readable | ns
-        BMC Board TEMP | 41 degrees C | ok
-        Front Panel Temp | 30 degrees C | ok
-        Board Inlet TEMP | 29 degrees C | ok
-        Sys Fan 2 | 7138 RPM | ok
-        Sys Fan 1 | 7310 RPM | ok
-        Sys Fan 3 | 7138 RPM | ok
-        PS1 Status | Not Readable | ns
-        PS1 Power In | 44 Watts | ok
-        PS1 Temperature | 35 degrees C | ok
-        P1 Status | Not Readable | ns
-        P1 Therm Margin | disabled | ns
-        P1 Therm Ctrl % | 0 percent | ok
-        P1 ERR2 TimeOut | Not Readable | ns
-        CATERR | Not Readable | ns
-        MSID Misatch | Not Readable | ns
-        CPU Missing | Not Readable | ns
-        P1 VRD Hot | Not Readable | ns
-        Mem P1 Thrm Mrgn | -46 degrees C | ok
-        Mem P1 Thrm Trip | Not Readable | ns
-        BB +12.0V | 11.67 Volts | ok
-        BB +5.0V | 0.03 Volts | ok
-        BB +3.3V | 3.24 Volts | ok
-        BB +5.0V STBY | 4.92 Volts | ok
-        BB +3.3V AUX | 3.25 Volts | ok
-        BB P1 Vcc | 0.94 Volts | ok
-        BB +1.5V P1 MEM | 1.50 Volts | ok
-        BB +3V Vbat | 3.21 Volts | ok
-        BB P1 Vccp | 1.03 Volts | ok
-        BB P1 VccUSA | 0.91 Volts | ok
-        BB +1.05V PCH | 1.04 Volts | ok
-        BB +1.05V AUX | 1.03 Volts | ok
-        BB +12.0V V1 | 11.73 Volts | ok
-        BB +1.5V AUX | 1.47 Volts | ok
-        HSBP Temp | 35 degrees C | ok
-        HDD 0 Status | Not Readable | ns
-        HDD 1 Status | Not Readable | ns
-        HDD 2 Status | Not Readable | ns
-        HDD 3 Status | Not Readable | ns
-
-
-    * Check virtual monitor output through VNC
-    In vagrant file, we port-forwarding guest VNC default port 5901 to host 15901. So, you can access VNC service via host 15901 port. You can see the virtual monitor is already running and listing boot devices of virtual node. Through this booting devices, you can deploy hypervisor or operating system into virtual compute node.
-
-    .. image:: _static/vnc.png
-
-
-Virtual Power Distribution Unit
-------------------------------------------------
-
-Current Virtual PDU implementation only supports running entire virutal infrastructure on VMWare ESXi because it only supports functionality of simulating power control chassis through VMWare SDK.
-
-**Robert - Under construction**
-
-
-Setup InfraSIM Virtual infrastructure on ESXi
-------------------------------------------------
+    sudo infrasim-main status
+    sudo infrasim-main version
 
-**Mark - Under construction**
+#. Stop the InfraSIM service::
 
-Below diagram shows virtual infrastructure deployed on one ESXi instance. Please follow the step by step manual in the following sections to setup this environment.
+    sudo infrasim-main stop
 
-.. image:: _static/onesxi.png
-    :height: 400
-    :align: center
 
-You can quickly deploy a virtual rack system including: 2x Dell_R630 and 1x vPDU inside VMWare ESXi.
+Interface to access virtual server
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. image:: _static/vrack.png
+#. Server graphic UI
+   VNC viewer through port **5901**. You can also port-forward the guest VNC default port 5901 to a host port (for example, 15901) and access the VNC service via that host port instead. You can see that the virtual monitor is already running and listing the boot devices of the virtual node. Through these boot devices, you can deploy a hypervisor or an operating system onto the virtual compute node.
 
-**Prerequisite**
-
-#. Git clone `tools <https://github.com/InfraSIM/tools.git>`_ repository
-#. You can access your VMWare ESXi server through network with username and password certified, and with all VMs cleared in this ESXi.
-#. Download DEll_R630 and vPDU OVA file under "tools/vrack_builder" directory. You can also build your own DELL_R630/vPDU file by refering `here <how_tos.html#build-vnode-and-vpdu>`_
-#. You can run this script on Ubuntu Linux distributions.(version > 12.04)
-
-**Install necessary Softwares**
-
-#. Install VMWare Python SDK ::
-
-    # sudo pip install pyvmomi
-
-#. Install VMWare ovftool file
-
-    * Download the VMWare OVF bundle, version 4.1.0 for Linux. Go to https://my.vmware.com/group/vmware/details?productId=491&downloadGroup=OVFTOOL410 (4.1.0 version, for Linux).
-    * Install the OVF tool::
-
-        # sudo bash VMware-ovftool-4.1.0-2459827-lin.x86_64.bundle
-
-**Deploy the virtual rack**
+   .. image:: _static/vnc.png
 
-#. Deploy the virtual rack::
+#. Virtual BMC
 
-    # cd tools/vrack_builder
-    # ./vrack_builder -u <esxi_username> -p <esxi_password> -h <esxi_ip>
+   * Install ipmitool on the host machine::
 
+        sudo apt-get install ipmitool
 
-#. Check virtual rack status
+     IPMI over LAN::
 
-    If the virtual rack deployed successfully, you will got the message::
+        ipmitool -I lanplus -U admin -P admin -H <IP address> sdr list
 
-    "2 Dell R630, 1 vPDU deployed finished on ESXi"
+     .. note:: <IP address> is the address of the NIC assigned for BMC access in the YAML configuration file
 
-#. vPDU port mapping
+     IPMI over the internal path (vKCS), which requires an OS and ipmitool deployed inside the virtual server::
 
-    Two Dell R630 vms will be mapped to vPDU port 1.1 and 1.2 respectively, the vPDU password is "123456" as default.
+        ipmitool sdr list
 
+     You can get a command result like the following::
 
+        Pwr Unit Status | Not Readable | ns
+        IPMI Watchdog | Not Readable | ns
+        FP NMI Diag Int | Not Readable | ns
+        SMI TimeOut | Not Readable | ns
+        System Event Log | Not Readable | ns
+        System Event | Not Readable | ns
+        ...
 
 
 
-Play with InfraSIM -- ?? ??
---------------------------------------------
+Virtual Power Distribution Unit - Robert - Under construction
+------------------------------------------------
 
-After virtual node, or a virtual rack is deployed, you can start to play with InfraSIM, either develop or validate your software on top of it.
+Current Virtual PDU implementation only supports running the entire virtual infrastructure on VMware ESXi because it only supports simulating chassis power control through the VMware SDK.
 
-#. Chassis management and hardware failure simulation. If the software application you're working on has logic designed to deal with server enclosures, for example, discovering, cataloging and monitoring every server node and related chassis, below commands are able to manipulate all chassis properties and generating hardware failures through virtual BMC module:
 
-    Please play with InfraSIM IPMI_SIM data by accessing `How to access vBMC data <userguide.html#access-vbmc-data>`_
+Set up a mini InfraSIM virtual infrastructure on ESXi
+---------------------------------------------------------
 
+Combining a virtual server, virtual PDU and virtual switch, you can quickly deploy a small virtual infrastructure system. Here's an example that explicitly lists how to set up one small virtual rack with 2 Dell R630 nodes and 1 virtual PDU on VMware ESXi:
 
-#. Virtual PDU functionality are able to setup and simulate one power distribution network so that software developers don't have to pile up those physical PDUs, do cabling among server nodes, etc.
-    Please access `vPDU Node and Control <userguide.html#vpdu-deployment-and-control>`_ Section 3,4,5,6 for more information.
+#. Get the ESXi environment prepared by following this `instruction <how_to.html#how-to-install-vmware-esxi-on-physical-server>`_
+#. Spin up 3 virtual machines with satisfying `resources <installation.html#resource-requirement>`_ - 2 for hosting the virtual Dell servers and the other one for the virtual PDU
+#. Configure the virtual switch to compose the desired data network topology. `Virtual server networking <configuration.html#networking>`_ only covers virtual node networking; for configuring the "outside network", you need to refer to the VMware ESXi manual in order to properly connect these 3 virtual machines together.
+#. Install the InfraSIM application by following the `installation guide <installation.html#installation>`_
+#. Modify the YAML configuration file as specified in `Configuration file <configuration.html#virtual-server-configuration-file>`_
+#. Kick off all InfraSIM `services <get_start.html#command-interfaces>`_
+#. Done, enjoy this virtual stack!
 
-#. Operating system and hypervisor installation. All these software could be easily deployed on top of these simulated server nodes.
-    InfraSIM supports using different booting device, optical disk, hard disk drive, network device to boot into and install many operating systems and hypervisors. Then software developer could start developing and validating their application without noticing they're working with virtual hardware.
-
+.. image:: _static/vrack.png
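
Taken together, the commands added to this file make up a complete first-run session. A condensed sketch, assuming a default install with the admin/admin credentials from the sample configuration and a placeholder BMC address, might look like::

    # One-time initialization, then start the services and check them
    sudo infrasim-init
    sudo infrasim-main start
    sudo infrasim-main status

    # Query the virtual BMC over LAN; chassis and SEL queries are standard
    # ipmitool subcommands and should work against the same interface as "sdr list"
    ipmitool -I lanplus -U admin -P admin -H <BMC IP> sdr list
    ipmitool -I lanplus -U admin -P admin -H <BMC IP> chassis power status
    ipmitool -I lanplus -U admin -P admin -H <BMC IP> sel list

    # Optional: forward the guest VNC port 5901 to a local port over SSH
    # (user and host are placeholders), then point a VNC viewer at :15901
    ssh -L 15901:localhost:5901 <user>@<infrasim host>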

docs/installation.rst

Lines changed: 21 additions & 5 deletions
@@ -46,16 +46,32 @@ Ubuntu Linux 64-bit - 16.04 is recommended
 Virtual Server
 ------------------------------------------------
 
-**Mark - Under construction**
+#. Ensure sources.list integrity, then install the dependencies::
 
+    sudo apt-get update
+    sudo apt-get install python-pip libpython-dev libssl-dev
 
-Virtual Power Distribution Unit
-------------------------------------------------
+#. Upgrade pip and install setuptools::
+
+    sudo pip install --upgrade pip
+    sudo pip install setuptools
+
+#. Select either one of the ways below to install infrasim:
 
-Current Virtual PDU implementation only supports running entire virtual infrastructure on VMWare ESXi because it only supports functionality of simulating power control chassis through VMWare SDK.
+   * install infrasim from source code::
 
-**Robert - Under construction**
+        git clone https://github.com/InfraSIM/infrasim-compute.git
+        cd infrasim-compute
+        sudo pip install -r requirements.txt
 
+        sudo python setup.py install
 
+   * install infrasim from the python library::
 
+        sudo pip install infrasim-compute
+
+
+Virtual Power Distribution Unit - **Robert - Under construction**
+------------------------------------------------
 
+Current Virtual PDU implementation only supports running the entire virtual infrastructure on VMware ESXi because it only supports simulating chassis power control through the VMware SDK.
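
Whichever installation path is used, it is worth confirming that the package and its command-line entry points actually landed on the system before moving on to get_start.rst. A small check, assuming the pip-based install above, might be::

    # Show the installed Python package and its version
    pip show infrasim-compute

    # Confirm the commands documented in get_start.rst are on PATH
    which infrasim-init infrasim-main
    sudo infrasim-main version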

docs/user_guide.rst

Lines changed: 17 additions & 0 deletions
@@ -22,6 +22,23 @@ Many functionalities described in this chapter, such as vRackSystem for InfraSIM
 #. `Test InfraSIM <userguide.html#puffer-infrasim-test>`_
 
 
+Play with InfraSIM -- ?? ??
+--------------------------------------------
+
+After a virtual node or a virtual rack is deployed, you can start to play with InfraSIM, and either develop or validate your software on top of it.
+
+#. Chassis management and hardware failure simulation. If the software application you're working on has logic designed to deal with server enclosures, for example discovering, cataloging and monitoring every server node and its related chassis, the commands below can manipulate all chassis properties and generate hardware failures through the virtual BMC module:
+
+   Please play with InfraSIM IPMI_SIM data by accessing `How to access vBMC data <userguide.html#access-vbmc-data>`_
+
+
+#. Virtual PDU functionality can set up and simulate a power distribution network so that software developers don't have to pile up physical PDUs, do cabling among server nodes, and so on.
+   Please access `vPDU Node and Control <userguide.html#vpdu-deployment-and-control>`_ Sections 3, 4, 5 and 6 for more information.
+
+#. Operating system and hypervisor installation. All such software can easily be deployed on top of these simulated server nodes.
+   InfraSIM supports using different boot devices (optical disk, hard disk drive, network device) to boot into and install many operating systems and hypervisors. Software developers can then develop and validate their applications without noticing they're working with virtual hardware.
+
+
 Access vBMC Data
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
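
For the chassis management item above, ordinary ipmitool chassis commands issued against the virtual BMC are one way to exercise power control and inspect the simulated sensors. A brief sketch, assuming the admin/admin credentials from the sample configuration and a placeholder BMC address::

    # Power-cycle the virtual node and watch its state through the vBMC
    ipmitool -I lanplus -U admin -P admin -H <BMC IP> chassis power status
    ipmitool -I lanplus -U admin -P admin -H <BMC IP> chassis power off
    ipmitool -I lanplus -U admin -P admin -H <BMC IP> chassis power on

    # Read the sensor and event data exposed by the simulated BMC
    ipmitool -I lanplus -U admin -P admin -H <BMC IP> sensor list
    ipmitool -I lanplus -U admin -P admin -H <BMC IP> sel list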

docs/why_infrasim.rst

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ Why InfraSIM?
 
 InfraSIM provides an effective, economical way to simulate a bare-metal infrastructure, which engineering teams can leverage to achieve:
 
-* Costing saving by simulating a scaled infrastructure with limited hardware materials
+* Cost saving by simulating a scaled infrastructure with limited hardware materials
 * Less dependency on hardware material which is in short supply
 * Increase automation level and eventually increase development and testing efficiency
 * Increase test coverage by leveraging InfraSIM error injection functionality
