
Commit b5fec9b

Merge pull request #93 from MonashDeepNeuron/dev
Dev
2 parents 77317f3 + f3b5f6d commit b5fec9b

13 files changed: +83 / -62 lines changed


src/SUMMARY.md

Lines changed: 2 additions & 11 deletions
@@ -41,8 +41,7 @@
 
  - [M3 & SLURM](./m3-slurm/m3-slurm.md)
 
- - [Batch Processing vs. Cloud Computing](./m3-slurm/batch-cloud.md)
- - [Parallel & Distributed Computing](./m3-slurm/parallel-distributed.md)
+ - [What is HPC really?](./m3-slurm/hpc-intro.md)
  - [M3 Login - SSH & Strudel](./m3-slurm/login.md)
  - [Intro to SLURM](./m3-slurm/slurm_intro.md)
  - [M3 Interface & Usage](./m3-slurm/m3-interface.md)

@@ -51,21 +50,13 @@
 
  - [Intro to Parallel Computing](./intro-to-parallel-comp/intro-to-parallel-comp.md)
 
+ - [Parallel Algorithms](./intro-to-parallel-comp/parallel-algos.md)
  - [OpenMP: Multithreading](./intro-to-parallel-comp/multithreading.md)
  - [Synchronisation Issues](./intro-to-parallel-comp/synchronisation.md)
  - [Dead & Live Locks](./intro-to-parallel-comp/locks.md)
  - [MPI: Message Passing](./intro-to-parallel-comp/message-passing.md)
  - [Challenges](./intro-to-parallel-comp/challenges.md)
 
- - [Parallellisation of Algorithms](./parallel-algos/parallel-algos.md)
-
- - [Parallel Search](./parallel-algos/parallel-search.md)
- - [Parallel Sort](./parallel-algos/parallel-sort.md)
- - [Other Parallel Algorithms](./parallel-algos/other-parallel-algos.md)
- - [Machine Learning & HPC](./parallel-algos/machine-learning-and-hpc.md)
- - [Optimisation Algorithms](./parallel-algos/optim-algos.md)
- - [Challenges](./parallel-algos/challenges.md)
-
  - [Apache Spark](./apache-spark/apache-spark.md)
  - [Installation & Cluster Set-up](./apache-spark/set-up.md)
  - [Internal Architecture](./apache-spark/internals.md)
Lines changed: 5 additions & 3 deletions
@@ -1,7 +1,9 @@
  # Parallel Computing
 
- In this chapter, we will discuss the abstraction of parallel computing. To facilitate our exploration, we will employ a API within the C Programming Language: OpenMP. This tool will serve as a means to concretely illustrate the underlying language-independent theory.
+ As introduced in chapter 5, parallel computing is all about running instructions simultaneously on multiple computers rather than doing it all sequentially/serially on the same computer. This is relatively straightforward if we have multiple, completely independent tasks that don't need to share resources or data, i.e. inter-query parallelism.
 
- **Parallel computing is about executing the instructions of the program simultaneously.**
+ ![query-parallelism](./imgs/query-parallelism.png)
 
- One of the core values of computing is the breaking down of a big problem into smaller easier to solve problems, or at least smaller problems. In some cases, the steps required to solve the problem can be executed simultaneously (in parallel) rather than sequentially (in order).
+ In this context, you can consider a query to be a job that carries out a series of steps on a particular dataset in order to achieve something, e.g. a SORT query on a table. It's fairly straightforward to execute multiple queries at the same time using a parallel/distributed system, but what if we want to parallelise and speed up the individual operations within a query?
+
+ This is where things like synchronisation, data/workload distribution and aggregation need to be considered. In this chapter, we will provide some theoretical context before learning how to implement parallelism using OpenMP & MPI.
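
To make inter-query parallelism concrete, here is a minimal sketch in C using OpenMP (introduced properly later in this chapter). The two "queries", their simulated datasets and the build command (`gcc -fopenmp queries.c`) are made-up placeholders; the point is simply that two fully independent jobs can run on separate threads with no coordination at all.

```c
// queries.c - a hypothetical sketch of inter-query parallelism.
// Two independent "queries" run concurrently; no synchronisation is needed.
#include <stdio.h>
#include <omp.h>

// Stand-in query 1: count the multiples of 7 in a simulated table.
static long count_matching_rows(void)
{
    long count = 0;
    for (long i = 0; i < 50000000; i++)
        if (i % 7 == 0)
            count++;
    return count;
}

// Stand-in query 2: average a column of a different, unrelated dataset.
static double average_column_value(void)
{
    double sum = 0.0;
    for (long i = 1; i <= 50000000; i++)
        sum += (double)(i % 100);
    return sum / 50000000.0;
}

int main(void)
{
    long count;
    double avg;

    // Inter-query parallelism: each section is handed to a different thread.
    #pragma omp parallel sections
    {
        #pragma omp section
        count = count_matching_rows();

        #pragma omp section
        avg = average_column_value();
    }

    printf("query 1 -> %ld rows, query 2 -> average %.2f\n", count, avg);
    return 0;
}
```

Parallelising *within* one of those queries, e.g. splitting the counting loop itself across threads, is the harder, intra-query problem that the rest of this chapter works towards.
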
Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
# Parallel Algorithms

You can think of all parallel algorithms as having a serial portion and a parallel portion, i.e. local and global phases.

![serial-vs-parallel](./imgs/serial-parallel-parts.png)

> This applies both to local parallel computing between CPU cores with shared RAM and to distributed computing between multiple HPC nodes with a distributed memory architecture. The only difference between the two is the additional complexity involved in managing data sharing & sending instructions across a network.

Let's go through this with a simple example. To calculate the sum of all numbers from 1 to N serially, you would execute the following pseudocode:

```
function sumOfNumbers(N):
    result = 0

    for x from 1 to N:
        result += x

    return result
```

To do this in parallel (assuming you have M processors/nodes), you would do something like this:

```
function parallelSum(N):
    globalResult = 0
    partition_size = N // M

    for node from 1 to M:
        start = (node - 1) * partition_size + 1
        end = node * partition_size
        if node == M:
            end = N    # the last node also picks up any remainder

        # sumOfRange runs the serial sum loop from above, but only over [start, end]
        localResult = sendJobToNode(node, sumOfRange(start, end))
        globalResult += localResult

    return globalResult
```

This is how one of the simplest parallel algorithms, **parallel sum**, works. All lines of code besides the `sendJobToNode` function call are executed serially on the master node/thread. This is all illustrated in the diagram below.

![parallel-sum](./imgs/parallel-sum-diagram.png)

Besides the difference between serial & parallel regions, another important concept to note here is **partitioning**, a.k.a. chunking. Often, when you're parallelising your serial algorithm, you will have to define local, parallel tasks that will execute on different parts of your dataset simultaneously in order to achieve a speedup. The local task can be anything from a sum operation, as in this case, to a local/serial sort, or even something as complex as training a CNN model on a particular batch of images.
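
As a preview of how the pseudocode above might map onto real code, here is a small sketch of parallel sum using MPI (covered properly later in this chapter). The value of `N` and the build/run commands (`mpicc`, `mpirun -np 4`) are example assumptions; the structure (partition, local serial sum, global aggregation) mirrors the pseudocode.

```c
// parallel_sum.c - a sketch of parallel sum with MPI.
// Example build/run (assuming an MPI installation):
//   mpicc parallel_sum.c -o parallel_sum && mpirun -np 4 ./parallel_sum
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    const long long N = 1000000;            // example problem size: sum 1..N
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process's id (0 .. size-1)
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // number of processes, i.e. M

    // Partitioning: each rank takes its own slice of 1..N,
    // with the last rank picking up any remainder.
    long long partition_size = N / size;
    long long start = rank * partition_size + 1;
    long long end = (rank == size - 1) ? N : (rank + 1) * partition_size;

    // Local (serial) phase: sum this rank's slice.
    long long localResult = 0;
    for (long long x = start; x <= end; x++)
        localResult += x;

    // Global phase: aggregate every rank's partial sum onto rank 0.
    long long globalResult = 0;
    MPI_Reduce(&localResult, &globalResult, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum(1..%lld) = %lld\n", N, globalResult);

    MPI_Finalize();
    return 0;
}
```

The same pattern appears in shared-memory OpenMP code as a `parallel for` loop with a `reduction(+: ...)` clause; only the mechanics of distributing the work differ.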

src/m3-slurm/batch-cloud.md

Lines changed: 0 additions & 29 deletions
This file was deleted.

src/m3-slurm/challenges.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ Let this run fully. Check the output of the script to make sure it ran correctly
  ## Challenge 7
 
  Edit your submission script so that you get a gpu node, and run the script using the gpu.
- > Hint: Use the m3h partition
+ > Hint: Use the m3g partition
 
  ## Challenge 8
 
Lines changed: 35 additions & 18 deletions
@@ -1,10 +1,42 @@
- # Parallel & Distributed Computing
+ # What is HPC really?
+
+ You are all likely familiar with the definition of High Performance Computing. Here is one from IBM:
+
+ > High-performance computing (HPC) is technology that uses clusters of powerful processors that work in parallel to process massive multi-dimensional data sets, also known as big data, and solve complex problems at extremely high speeds. HPC solves some of today’s most complex computing problems in real time.
+
+ But the term HPC is not really used much outside the scientific research community. A lot of cloud systems involve a similar scale of hardware, parallel & distributed computing, similar computational workloads, data processing capacity and low-latency/high-throughput capability to HPC clusters. *So what exactly is the difference between a cloud system and an HPC cluster?*
+
+ At the end of the day this comes down to semantics, but a key difference is that an HPC cluster implies a system primarily used for **batch processing**, whereas a cloud system would involve **interactive processing**.
+
+ ### Batch Processing vs. Cloud Computing
+
+ The vast majority of computer systems, and nearly 100% of the ones the average person uses, are cloud-based interactive systems. Due to the nature of use cases specific to researchers, batch processing is a much more suitable choice for them.
+
+ __Batch Processing:__
+ - Jobs (code scripts) submitted are executed at a later time.
+ - User can't interact (or only limited interaction).
+ - Performance measure is **throughput**.
+ - Snapshot of output is used for debugging.
+
+ ![batch-image](./imgs/batch-processing.jpeg)
+
+ __Interactive Processing:__
+ - Jobs submitted are executed immediately.
+ - User can interact.
+ - Performance measure is **response time**.
+ - Interactive debugging.
+
+ ![interactive-image](./imgs/interactive-processing.png)
+
+ ## Parallel & Distributed Computing
 
  Nearly all modern computer systems utilise parallel computing to speed up the execution of algorithms. To see how this works in practice look at the diagram below.
 
  ![parallel vs. distributed](imgs/parallel-distributed.png)
 
- As you can see, in a scenario where a program (job) takes 3 seconds and 3 independent jobs have to be executed by a system, doing it serially in a single computer takes a total of 9 seconds. But doing it simultaneously across 3 computers will only take 3 seconds thus achieving a 3x speedup through parallel computing.
+ As you can see, in a scenario where a program (job) takes 3 seconds and 3 independent jobs have to be executed by a system, doing it serially on a single processor (computer) takes a total of 9 seconds. But doing it simultaneously across 3 processors will only take 3 seconds, thus achieving a 3x speedup through parallel computing. This parallel computing is performed locally in a **multi-processing** system with more than 1 CPU core (processor).
+
+ ![multi-processing](imgs/Multiprocessor-System.png)
 
  This is the fundamental principle that High Performance Computing is based on. The trouble (or fun) is when your tasks have dependencies on each other which is gonna be the case for the vast majority of algorithms. That's when things like synchronisation issues, data sharing and all of that comes into play - which we'll explore in later chapters.
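
To see that 3-jobs-in-3-seconds arithmetic in practice, here is a minimal OpenMP sketch, assuming a compiler with OpenMP support (e.g. `gcc -fopenmp speedup.c`); each simulated job simply sleeps for 3 seconds. Run with `OMP_NUM_THREADS=1` it takes roughly 9 seconds, while with `OMP_NUM_THREADS=3` it takes roughly 3.

```c
// speedup.c - simulating 3 independent 3-second jobs on 1 vs. 3 processors.
// Build: gcc -fopenmp speedup.c -o speedup
// Run:   OMP_NUM_THREADS=1 ./speedup   (about 9 s)
//        OMP_NUM_THREADS=3 ./speedup   (about 3 s)
#include <stdio.h>
#include <unistd.h>
#include <omp.h>

// A stand-in for a program that takes roughly 3 seconds of work.
static void job(int id)
{
    sleep(3);
    printf("job %d finished on thread %d\n", id, omp_get_thread_num());
}

int main(void)
{
    double start = omp_get_wtime();

    // The three jobs are completely independent, so each iteration
    // can be handed to a different thread (processor).
    #pragma omp parallel for
    for (int i = 0; i < 3; i++)
        job(i);

    printf("elapsed: %.1f seconds\n", omp_get_wtime() - start);
    return 0;
}
```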

@@ -52,19 +84,4 @@ And finally, not everything needs to be done on a parallel or distributed system
  ### Advantages of serial computing:
  - **More simple** to design & implement algorithms. Parallel algorithms can get quite complex, especially when dealing with more complicated instructions with dependencies.
  - **Less overhead** involved in managing a parallel & distributed job. No need to manage data sharing between threads, processes, nodes, etc...
- - **No synchronisation issues** & headaches involved in concurrent computing. Don't have to deal with race conditions, deadlocks, livelocks, etc...
-
- ## Parallel Scalability
- The speed up achieved from parallelism is dictated by your algorithm. Notably the serial parts of your algorithm can not be sped up by increasing the number of processors. The diagram below looks at the benefits we can achieve from writing parallel code as the number of processes increases.
-
- ![amdahl](./imgs/parallel_scalability.jpg)
-
- Amdahl's Law, formulated by computer architect Gene Amdahl in 1967, is a principle used to analyze the potential speedup of parallel computing. It states that the speedup of a program from parallelization is limited by the proportion of the program that must be executed serially. In other words, it helps to determine the maximum performance improvement that can be achieved by using parallel processing.
-
- The implications of Amdahl's Law for HPC is very significant:
-
- - **Limitation of Speedup:** Amdahl's Law highlights that even with an increase in the number of processors (parallelization), the overall speedup is limited by the sequential portion of the code. Thus, if a significant portion of the code is inherently serial, the potential speedup achievable through parallelization is restricted.
- - **Importance of Identifying Serial Sections:** In HPC, it's crucial to identify the sections of code that are inherently serial and cannot be parallelized. Optimizing these sections can lead to better overall performance. Conversely, focusing solely on parallelizing code without addressing these serial bottlenecks can result in suboptimal speedup.
- - **Efficiency vs. Scalability:** Amdahl's Law emphasizes the importance of balancing efficiency and scalability in parallel computing. While increasing the number of processors can improve performance to a certain extent, beyond a certain point, diminishing returns occur due to the overhead of synchronization, communication, and managing parallel tasks.
- - **Architectural Considerations:** HPC system architects must consider Amdahl's Law when designing hardware and software architectures. Designing systems that minimize the impact of serial portions of code and optimize parallel execution can lead to better overall performance.
- - **Algorithm Selection:** When choosing algorithms for HPC applications, it's essential to consider their parallelizability. Algorithms that can be efficiently parallelized are more suitable for HPC environments, as they can leverage the potential for speedup provided by parallel computing resources more effectively.
+ - **No synchronisation issues** & headaches involved in concurrent computing. Don't have to deal with race conditions, deadlocks, livelocks, etc...
