This repository was archived by the owner on Jun 6, 2024. It is now read-only.

Commit a76a65a: Smart Edge Open 22.02 release

1 parent abcb165, commit a76a65a

38 files changed: +1337 -70 lines

components/security/application-security-using-sgx.md

Lines changed: 10 additions & 3 deletions
@@ -35,15 +35,22 @@ Copyright (c) 2021 Intel Corporation
 10. DEK Cluster is up along with the K8s Intel device plugin installed.
 11. The graphenized app is deployed on the Kubernetes node using the Intel device plugin, which exposes SGX memory to the pods/jobs running on it.
 
-<img src="images/ESP-SGX-workflow.png" style="zoom:100%;" />
+![ESP based SGX deployment flow](images/ESP-SGX-workflow.png)
+
+*Figure - ESP based SGX deployment flow*
 
 ## Architecture
 
-<img src="images/SGX-integration-with-DEK-design.png" style="zoom:100%;" />
+![SGX integration into DEK setup](images/SGX-integration-with-DEK-design.png)
+
+*Figure - SGX integration into DEK setup*
 
 ### Sequential deployment workflow showing flow of credentials/certificates
 
-<img src="images/SGX-provisioning-seq-diagram.png" style="zoom:100%;" />
+![SGX provisioning sequence diagram](images/SGX-provisioning-seq-diagram.png)
+
+*Figure - SGX provisioning sequence diagram*
 
 ## How To
 ### Enable Intel SGX in BIOS

components/security/platform-attestation-using-isecl.md

Lines changed: 8 additions & 3 deletions
@@ -88,12 +88,15 @@ Push model - TAs will push trust reports to HVS via message queue(Using NATS pub
 9. Cluster is up and the Trust Agent is deployed; it reads the PCR registers for the Secure Boot key values (a.k.a. measurements). The TA passes these values to the HVS deployed on the cloud instance.
 10. Depending on the result of comparing the Secure Boot values, HVS provides a trust report of successful or failed attestation. This report is used by higher orchestration layers for taking further action. IHUB (Integration Hub) also uses the Kube API server on the edge cluster to label the node based on whether the cluster is successfully attested or not.
 
-<img src="images/Secure_boot_workflow.png" style="zoom:150%;" />
+![Secure boot - Workflow](images/Secure_boot_workflow.png)
 
+*Figure - Secure boot - Workflow*
 
 ### Design/Architecture
 
-<img src="images/Isecl-platform-attestation-m1.png" style="zoom:150%;" />
+![ISecL platform attestation - Workflow](images/Isecl-platform-attestation-m1.png)
+
+*Figure - ISecL platform attestation - Workflow*
 
 1. Get the certificate hash from the CMS service. It is used while bootstrapping the trust agent and ihub.
 2. Get the bearer/auth token from AAS using curl. It is used while bootstrapping the trust agent and ihub.
@@ -115,7 +118,9 @@ Push model - TAs will push trust reports to HVS via message queue(Using NATS pub
 ### Sequence diagram for certificate management
 The following diagram shows how the certificates of AAS and CMS are used by the TA and IHUB components during deployment.
 
-<img src="images/IseclControlerCluster.png" style="zoom:150%;" />
+![ISecL end-to-end deployment flow](images/IseclControlerCluster.png)
+
+*Figure - ISecL end-to-end deployment flow*
 
 ## How To

components/storage/rook-ceph.md

Lines changed: 280 additions & 0 deletions
@@ -0,0 +1,280 @@
```text
SPDX-License-Identifier: Apache-2.0
Copyright (c) 2019-2021 Intel Corporation
```
# Rook-Ceph

- [Rook-Ceph](#rook-ceph)
  - [Overview](#overview)
    - [Overview of Ceph](#overview-of-ceph)
    - [Overview of Rook operator and CSI](#overview-of-rook-operator-and-csi)
  - [Rook-Ceph configuration and usage](#rook-ceph-configuration-and-usage)
    - [Configuration](#configuration)
      - [Pre-requisites](#pre-requisites)
      - [Ceph Configuration](#ceph-configuration)
      - [Ceph Resource Restraint](#ceph-resource-restraint)
    - [Verify the Ceph OSD creation on the node](#verify-the-ceph-osd-creation-on-the-node)
    - [Usage](#usage)
      - [PVC VM application](#pvc-vm-application)
  - [Limitations](#limitations)
  - [Reference](#reference)

## Overview
Edge applications based on Kubernetes* need to store persistent data. Kubernetes provides persistent volume provisioning based on a local disk directly attached to a single node.
This solution has several shortcomings:

1. Dynamic provisioning is not supported. Dynamic provisioning automatically provisions storage when it is requested by users.
2. No data reliability. Only one copy of user data is stored on the local node, which creates a single point of failure.

Ceph storage is introduced to resolve these problems. To integrate Ceph storage into the Kubernetes cluster, we propose Rook-Ceph as the storage orchestration solution.

### Overview of Ceph
Ceph is an open-source, highly scalable, distributed storage solution for block storage, shared filesystems, and object storage, with years of production deployments. It was born in 2003 as an outcome of Sage Weil's doctoral dissertation and was released in 2006 under the LGPL license.

Ceph is a prime example of distributed storage: a distributed storage system is composed of many server nodes interconnected through a network that provide storage services externally as a whole.
The benefits of distributed storage are:

- Large capacity and easy expansion.
- High reliability. There is no single point of failure; data security and business continuity are guaranteed.
- High performance. Data is shared and concurrent access is enabled.
- Data management and additional features. The storage cluster can be managed and configured easily, and it provides functions such as data encryption/decryption and data deduplication.

Ceph consists of several components:

- MON (Ceph Monitors) are responsible for forming cluster quorums. All the cluster nodes report to MON and share information about every change in their state.
- OSD (Ceph Object Store Devices) are responsible for storing objects and providing access to them over the network.
- MGR (Ceph Manager) provides additional monitoring and interfaces to external management systems.
- RADOS (Reliable Autonomic Distributed Object Store) is the core of the Ceph cluster. RADOS ensures that stored data always remains consistent through data replication, failure detection, and recovery, among others.
- LibRADOS is the library used to gain access to RADOS. With support for several programming languages, LibRADOS provides a native interface for RADOS as well as a base for other high-level services, such as RBD, RGW, and CephFS.
- RBD (RADOS Block Device), now known as the Ceph block device, provides persistent block storage, which is thin-provisioned, resizable, and stores data striped over multiple OSDs.
- RGW (RADOS Gateway) is an interface that provides an object storage service. It uses libRGW (the RGW library) and libRADOS to establish connections between applications and the Ceph object storage. RGW provides RESTful APIs that are compatible with Amazon* S3 and OpenStack* Swift.
- CephFS is the Ceph Filesystem, which provides a POSIX-compliant filesystem. CephFS uses the Ceph cluster to store user data.
- MDS (Ceph Metadata Server) keeps track of the file hierarchy and stores metadata only for CephFS.

### Overview of Rook operator and CSI
The Rook operator is an open-source cloud-native storage orchestrator that transforms storage software into self-managing, self-scaling, and self-healing storage services.

Rook-Ceph is the Rook operator for Ceph. It starts and monitors Ceph daemon pods, such as MONs, OSDs, MGR, and others, and it monitors these daemons to ensure the cluster is healthy. Ceph MONs are started or failed over when necessary, and other adjustments are made as the cluster grows or shrinks.

Just like native Ceph, Rook-Ceph provides block, filesystem, and object storage for applications.

- Ceph CSI (Container Storage Interface) is a standard for exposing arbitrary block and file storage systems to containerized workloads on container orchestration systems like Kubernetes. Ceph CSI is integrated with Rook and enables two scenarios:
  - RBD (block storage): This driver is optimized for RWO pod access, where only one pod may access the storage.
  - CephFS (filesystem): This driver allows RWX, with one or more pods accessing the same storage (see the sketch after this list).
- For object storage, Rook supports the creation of new buckets and access to existing buckets via two custom resources: Object Bucket Claim (OBC) and Object Bucket (OB). Applications can access the objects via RGW.
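The usage example later in this document exercises the RBD path through the `rook-ceph-block` storage class. For the CephFS RWX scenario mentioned in the list above, a shared volume claim would look roughly like the following sketch; the `rook-cephfs` storage class name is an assumption taken from common upstream Rook examples and is not necessarily created by this deployment.

```yaml
# Illustrative only: an RWX PersistentVolumeClaim backed by CephFS.
# The storage class name "rook-cephfs" is an assumption; check
# `kubectl get storageclass` for the names actually present.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: rook-cephfs
  accessModes:
    - ReadWriteMany   # RWX: several pods may mount the volume at once
  resources:
    requests:
      storage: 5Gi
```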
## Rook-Ceph configuration and usage

To deploy the Rook-Ceph operator and Ceph on the Intel® Smart Edge Open cluster, `rook_ceph.enabled: True` must be set in `inventory/default/group_vars/all/20-enhanced.yml`; it is disabled by default. This installs and deploys the Rook operator and the Ceph daemons; the images for the Rook operator and the Ceph daemons with the CSI plugins are downloaded from the public Docker repository.
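As a minimal sketch, the corresponding entry in `inventory/default/group_vars/all/20-enhanced.yml` could look as follows; only the `enabled` flag comes from the text above, and further keys follow the flavor example shown later in this document.

```yaml
# inventory/default/group_vars/all/20-enhanced.yml (sketch)
rook_ceph:
  enabled: True
```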
### Configuration

The Rook-Ceph operator provides the CephCluster custom resource, which lets the user configure Ceph parameters.
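For orientation only, a trimmed-down CephCluster resource is sketched below. It is not the manifest rendered by the Smart Edge Open role; it merely illustrates how the Ansible variables from the next sections (`mon_count`, `osds_per_device`) map onto upstream Rook CephCluster fields, under the assumption of a single-node setup.

```yaml
# Illustrative CephCluster sketch; the actual manifest is generated by the
# rook_ceph Ansible role and may differ.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v16   # pin to the Ceph release matching your Rook version
  dataDirHostPath: /var/lib/rook
  mon:
    count: 1                       # corresponds to mon_count
  storage:
    useAllNodes: true
    useAllDevices: true            # raw drives are discovered and consumed
    config:
      osdsPerDevice: "1"           # corresponds to osds_per_device
```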
#### Pre-requisites
Currently, the deployment searches for any available raw drive on the node and consumes it for Ceph. If there are two drives, one with the OS and a second drive with partitions, the second drive's partitions will be wiped and the drive consumed by Ceph. The deployment will also wipe an existing Ceph OSD and create a new Ceph OSD on the second drive. Note that creating partitions on the drive after deploying Ceph has to be avoided.
If the user wants to prepare the drive manually, the steps to reset the drive to a usable state are described in https://rook.io/docs/rook/v1.7/ceph-teardown.html#zapping-devices; a sketch of those steps follows below.
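A condensed sketch of the zapping procedure from the Rook v1.7 teardown guide linked above is shown here; `/dev/sdX` is a placeholder for the drive to reset, and the authoritative steps for your environment (for example the extra LVM cleanup for existing OSDs) are in that guide.

```bash
# Sketch only, adapted from the Rook v1.7 teardown guide: reset a raw
# drive so Ceph can consume it. Replace /dev/sdX with the target drive;
# this destroys all data on it.
DISK="/dev/sdX"
sgdisk --zap-all "$DISK"                                        # wipe GPT/MBR partition tables
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync   # clear leftover Ceph metadata
partprobe "$DISK"                                               # make the kernel re-read the drive
```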
#### Ceph Configuration
Specify the Ceph configuration, here using the pwek-upf-apps flavor as an example, in `deployments/pwek-upf-apps/all.yml`:

```yaml
rook_ceph:
  enabled: True
  mon_count: 1
  host_name: "{{ hostvars[groups['controller_group'][0]]['ansible_nodename'] }}"
  osds_per_device: "1"
  replica_pool_size: 1
```
#### Ceph Resource Restraint
Since the Rook-Ceph daemons are deployed on the Edge Node and their resource consumption may affect the operation of edge applications, default resource restraints for the Ceph daemons are defined in `roles/kubernetes/rook_ceph/defaults/main.yml`:

```yaml
_rook_ceph_limits:
  api:
    requests:
      cpu: "500m"
      memory: "512Mi"
    limits:
      cpu: "500m"
      memory: "512Mi"
  mgr:
    requests:
      cpu: "500m"
      memory: "512Mi"
    limits:
      cpu: "500m"
      memory: "512Mi"
  mon:
    requests:
      cpu: "500m"
      memory: "512Mi"
    limits:
      cpu: "500m"
      memory: "512Mi"
  osd:
    requests:
      cpu: "500m"
      memory: "4Gi"
    limits:
      cpu: "500m"
      memory: "4Gi"
```
### Verify the Ceph OSD creation on the node
The user can verify the Ceph creation on the node by running:
```bash
kubectl get pods -n rook-ceph
NAME                                                       READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-provisioner-689686b44-rwzpc               6/6     Running     0          40m
csi-cephfsplugin-zv7hk                                     3/3     Running     0          40m
csi-rbdplugin-fn56f                                        3/3     Running     0          40m
csi-rbdplugin-provisioner-5775fb866b-dpph4                 6/6     Running     0          40m
rook-ceph-crashcollector-silpixa00401187-f98464b49-4pvk4   1/1     Running     0          40m
rook-ceph-mgr-a-7cb77f9c6c-lx7tl                           1/1     Running     0          40m
rook-ceph-mon-a-6d647c9546-bpp9r                           1/1     Running     0          40m
rook-ceph-operator-54655cf4cd-kjbm8                        1/1     Running     0          40m
rook-ceph-osd-0-7b45dfcc77-5nc95                           1/1     Running     0          40m
rook-ceph-osd-prepare-silpixa00401187--1-5rs2b             0/1     Completed   0          40m
rook-ceph-tools-54474cfc96-xrvzw                           1/1     Running     0          40m
```
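Beyond checking the pods, the overall cluster health can be queried through the `rook-ceph-tools` toolbox deployment visible in the listing above; these are standard Rook/Ceph checks rather than steps defined by this deployment.

```bash
# Query Ceph health via the toolbox (assumes the rook-ceph-tools pod
# shown above is Running).
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd tree
```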
### Usage
After the Rook operator and Ceph daemons are deployed, the following example shows a VM application using a PVC (Ceph block storage) provisioned by Ceph-CSI.
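Before creating the PVC, it can be useful to confirm that the block storage class referenced in the manifest below exists; this is a generic kubectl check, not a step mandated by the deployment.

```bash
# Confirm the storage class used by sample-pvc.yaml is present.
kubectl get storageclass rook-ceph-block
```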
#### PVC VM application

Create a PVC using the following yaml file.
1. The PVC yaml file sample-pvc.yaml contains:
```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-reg-dv
spec:
  source:
    registry:
      url: "docker://tedezed/ubuntu-container-disk:20.0"
  pvc:
    storageClassName: rook-ceph-block
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```

2. Apply the PVC:
```bash
kubectl apply -f sample-pvc.yaml
```

3. Check that the PVC exists - it will take some time to import the image:
```bash
kubectl get pvc
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
ubuntu-reg-dv           Bound    pvc-86050fce-e695-4586-b800-7bc477d9eb03   10Gi       RWO            rook-ceph-block   9s
ubuntu-reg-dv-scratch   Bound    pvc-702e606b-3c0e-4508-b925-412aee318414   10Gi       RWO            rook-ceph-block   9s
# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
importer-ubuntu-reg-dv   1/1     Running   0          54s
# kubectl get pvc   # ubuntu-reg-dv-scratch goes away after it completes the image upload
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
ubuntu-reg-dv   Bound    pvc-86050fce-e695-4586-b800-7bc477d9eb03   10Gi       RWO            rook-ceph-block   9s
```

4. Create a yaml file (sample-vm.yaml) for deploying the VM:
```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: testvmceph
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvmceph
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
            - disk:
                bus: virtio
              name: containervolume
            - disk:
                bus: virtio
              name: cloudinitvolume
          interfaces:
            - name: default
              bridge: {}
        resources:
          requests:
            memory: 4096M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containervolume
          persistentVolumeClaim:
            claimName: ubuntu-reg-dv
        - name: cloudinitvolume
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              chpasswd:
                list: |
                  debian:debian
                  root:toor
                expire: False
```

5. Deploy the VM:
```bash
kubectl apply -f sample-vm.yaml
```

6. Check that the VM is running:
```bash
# kubectl get vm
testvmceph   3s    Starting   False
# kubectl get vmi
testvmceph   45s   Running    10.245.225.34   silpixa00400489   True
```

7. Log in to the VM (root/toor) and create a file:
```bash
# kubectl virt console testvmceph
# touch myfileishere.txt
```

8. Remove the VM:
```bash
# kubectl delete vm testvmceph
```

9. Re-deploy the VM:
```bash
# kubectl apply -f sample-vm.yaml
```

10. Log in to the VM (root/toor) and check that the file still exists:
```bash
# kubectl virt console testvmceph
# ls
```
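To tear the example down completely, both the VM and the DataVolume/PVC can be removed; a minimal sketch using the files from the steps above:

```bash
# Remove the example VM and its backing DataVolume/PVC (sketch).
kubectl delete -f sample-vm.yaml
kubectl delete -f sample-pvc.yaml
```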
## Limitations
The Rook-Ceph Ansible role in the Smart Edge Open cluster currently supports only single-node, single-disk deployment for Ceph. By default, therefore, there is no need to set `replica_pool_size` > 1.

## Reference
For further details:
- Rook GitHub: https://github.com/rook/rook
- Rook Introduction: https://01.org/kubernetes/blogs/tingjie/2020/introduction-cloud-native-storage-orchestrator-rook
- Kubernetes CSI: https://kubernetes-csi.github.io/docs/

components/telemetry/images/OpenDek_Telemetry.svg

Lines changed: 4 additions & 0 deletions
