URL encode links in Markdown files
There are a ton of broken markdown links in the READMEs throughout this
repo because the files and directories they refer to have spaces in
their names but these aren't HTML encoded in their links. This commit
just replaces spaces with %20 to fix these.
Jake Romer committed Mar 25, 2017
1 parent ef1c33b commit 3d03374
Showing 21 changed files with 49 additions and 49 deletions.
8 changes: 4 additions & 4 deletions AVL Tree/README.markdown
@@ -1,14 +1,14 @@
# AVL Tree

-An AVL tree is a self-balancing form of a [binary search tree](../Binary Search Tree/), in which the height of subtrees differ at most by only 1.
+An AVL tree is a self-balancing form of a [binary search tree](../Binary%20Search%20Tree/), in which the height of subtrees differ at most by only 1.

A binary tree is *balanced* when its left and right subtrees contain roughly the same number of nodes. That is what makes searching the tree really fast. But if a binary search tree is unbalanced, searching can become really slow.

This is an example of an unbalanced tree:

![Unbalanced tree](Images/Unbalanced.png)

-All the children are in the left branch and none are in the right. This is essentially the same as a [linked list](../Linked List/). As a result, searching takes **O(n)** time instead of the much faster **O(log n)** that you'd expect from a binary search tree.
+All the children are in the left branch and none are in the right. This is essentially the same as a [linked list](../Linked%20List/). As a result, searching takes **O(n)** time instead of the much faster **O(log n)** that you'd expect from a binary search tree.

A balanced version of that tree would look like this:

@@ -78,14 +78,14 @@ Insertion never needs more than 2 rotations. Removal might require up to __log(n

Most of the code in [AVLTree.swift](AVLTree.swift) is just regular [binary search tree](../Binary Search Tree/) stuff. You'll find this in any implementation of a binary search tree. For example, searching the tree is exactly the same. The only things that an AVL tree does slightly differently are inserting and deleting the nodes.

> **Note:** If you're a bit fuzzy on the regular operations of a binary search tree, I suggest you [catch up on those first](../Binary Search Tree/). It will make the rest of the AVL tree easier to understand.
> **Note:** If you're a bit fuzzy on the regular operations of a binary search tree, I suggest you [catch up on those first](../Binary%20Search%20Tree/). It will make the rest of the AVL tree easier to understand.
The interesting bits are in the `balance()` method which is called after inserting or deleting a node.
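As an illustration of what `balance()` must detect, here is a minimal sketch of computing a balance factor. The toy `Node` type and function names are assumptions for this sketch, not code from the repo's `AVLTree.swift`:

```swift
// Minimal illustrative node type (assumed; not the repo's implementation).
final class Node {
    var value: Int
    var left: Node?
    var right: Node?
    init(_ value: Int, left: Node? = nil, right: Node? = nil) {
        self.value = value
        self.left = left
        self.right = right
    }
}

// Height of a subtree: an empty subtree has height 0.
func height(_ node: Node?) -> Int {
    guard let node = node else { return 0 }
    return 1 + max(height(node.left), height(node.right))
}

// An AVL tree is balanced when every node's balance factor is -1, 0, or +1.
func balanceFactor(_ node: Node) -> Int {
    return height(node.left) - height(node.right)
}
```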

## See also

[AVL tree on Wikipedia](https://en.wikipedia.org/wiki/AVL_tree)

AVL tree was the first self-balancing binary tree. These days, the [red-black tree](../Red-Black Tree/) seems to be more popular.
AVL tree was the first self-balancing binary tree. These days, the [red-black tree](../Red-Black%20Tree/) seems to be more popular.

*Written for Swift Algorithm Club by Mike Taghavi and Matthijs Hollemans*
2 changes: 1 addition & 1 deletion Big-O Notation.markdown
@@ -19,6 +19,6 @@ Big-O | Name | Description

Often you don't need math to figure out what the Big-O of an algorithm is but you can simply use your intuition. If your code uses a single loop that looks at all **n** elements of your input, the algorithm is **O(n)**. If the code has two nested loops, it is **O(n^2)**. Three nested loops gives **O(n^3)**, and so on.

-Note that Big-O notation is an estimate and is only really useful for large values of **n**. For example, the worst-case running time for the [insertion sort](Insertion Sort/) algorithm is **O(n^2)**. In theory that is worse than the running time for [merge sort](Merge Sort/), which is **O(n log n)**. But for small amounts of data, insertion sort is actually faster, especially if the array is partially sorted already!
+Note that Big-O notation is an estimate and is only really useful for large values of **n**. For example, the worst-case running time for the [insertion sort](Insertion%20Sort/) algorithm is **O(n^2)**. In theory that is worse than the running time for [merge sort](Merge Sort/), which is **O(n log n)**. But for small amounts of data, insertion sort is actually faster, especially if the array is partially sorted already!

If you find this confusing, don't let this Big-O stuff bother you too much. It's mostly useful when comparing two algorithms to figure out which one is better. But in the end you still want to test in practice which one really is the best. And if the amount of data is relatively small, then even a slow algorithm will be fast enough for practical use.
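The loop-counting intuition in the text above can be made concrete. These example functions are mine, not from the README:

```swift
// O(n): a single loop that looks at all n elements of the input once.
func sum(_ a: [Int]) -> Int {
    var total = 0
    for x in a { total += x }
    return total
}

// O(n^2): two nested loops over the same input.
func countEqualPairs(_ a: [Int]) -> Int {
    var pairs = 0
    for i in 0..<a.count {
        for j in (i + 1)..<a.count where a[i] == a[j] {
            pairs += 1
        }
    }
    return pairs
}
```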
10 changes: 5 additions & 5 deletions Binary Search Tree/README.markdown
@@ -1,6 +1,6 @@
# Binary Search Tree (BST)

-A binary search tree is a special kind of [binary tree](../Binary Tree/) (a tree in which each node has at most two children) that performs insertions and deletions such that the tree is always sorted.
+A binary search tree is a special kind of [binary tree](../Binary%20Tree/) (a tree in which each node has at most two children) that performs insertions and deletions such that the tree is always sorted.

If you don't know what a tree is or what it is for, then [read this first](../Tree/).

@@ -49,7 +49,7 @@ If we were looking for the value `5` in the example, it would go as follows:

![Searching the tree](Images/Searching.png)

-Thanks to the structure of the tree, searching is really fast. It runs in **O(h)** time. If you have a well-balanced tree with a million nodes, it only takes about 20 steps to find anything in this tree. (The idea is very similar to [binary search](../Binary Search) in an array.)
+Thanks to the structure of the tree, searching is really fast. It runs in **O(h)** time. If you have a well-balanced tree with a million nodes, it only takes about 20 steps to find anything in this tree. (The idea is very similar to [binary search](../Binary%20Search) in an array.)
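The 20-step figure follows from the height of a balanced tree being roughly log2(n). A quick sanity check (the function name is mine, not the repo's):

```swift
import Foundation

// A well-balanced BST with n nodes has height about log2(n), so a
// search needs roughly that many steps. For a million nodes: ~20.
func searchSteps(forNodeCount n: Double) -> Int {
    return Int(ceil(log2(n)))
}
```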

## Traversing the tree

@@ -535,7 +535,7 @@ The code for `successor()` works the exact same way but mirrored:

Both these methods run in **O(h)** time.

-> **Note:** There is a cool variation called a ["threaded" binary tree](../Threaded Binary Tree) where "unused" left and right pointers are repurposed to make direct links between predecessor and successor nodes. Very clever!
+> **Note:** There is a cool variation called a ["threaded" binary tree](../Threaded%20Binary%20Tree) where "unused" left and right pointers are repurposed to make direct links between predecessor and successor nodes. Very clever!
### Is the search tree valid?

@@ -713,11 +713,11 @@ The root node is in the middle; a dot means there is no child at that position.

A binary search tree is *balanced* when its left and right subtrees contain roughly the same number of nodes. In that case, the height of the tree is *log(n)*, where *n* is the number of nodes. That's the ideal situation.

-However, if one branch is significantly longer than the other, searching becomes very slow. We end up checking way more values than we'd ideally have to. In the worst case, the height of the tree can become *n*. Such a tree acts more like a [linked list](../Linked List/) than a binary search tree, with performance degrading to **O(n)**. Not good!
+However, if one branch is significantly longer than the other, searching becomes very slow. We end up checking way more values than we'd ideally have to. In the worst case, the height of the tree can become *n*. Such a tree acts more like a [linked list](../Linked%20List/) than a binary search tree, with performance degrading to **O(n)**. Not good!

One way to make the binary search tree balanced is to insert the nodes in a totally random order. On average that should balance out the tree quite nicely. But it doesn't guarantee success, nor is it always practical.

-The other solution is to use a *self-balancing* binary tree. This type of data structure adjusts the tree to keep it balanced after you insert or delete nodes. See [AVL tree](../AVL Tree) and [red-black tree](../Red-Black Tree) for examples.
+The other solution is to use a *self-balancing* binary tree. This type of data structure adjusts the tree to keep it balanced after you insert or delete nodes. See [AVL tree](../AVL%20Tree) and [red-black tree](../Red-Black%20Tree) for examples.

## See also

2 changes: 1 addition & 1 deletion Binary Search/README.markdown
@@ -12,7 +12,7 @@ let numbers = [11, 59, 3, 2, 53, 17, 31, 7, 19, 67, 47, 13, 37, 61, 29, 43, 5, 4
numbers.indexOf(43) // returns 15
```

-The built-in `indexOf()` function performs a [linear search](../Linear Search/). In code that looks something like this:
+The built-in `indexOf()` function performs a [linear search](../Linear%20Search/). In code that looks something like this:

```swift
func linearSearch<T: Equatable>(_ a: [T], _ key: T) -> Int? {
    // Check each element in turn until the key is found; O(n) time.
    for i in a.indices {
        if a[i] == key { return i }
    }
    return nil
}
```
2 changes: 1 addition & 1 deletion Binary Tree/README.markdown
@@ -8,7 +8,7 @@ The child nodes are usually called the *left* child and the *right* child. If a

Often nodes will have a link back to their parent but this is not strictly necessary.

-Binary trees are often used as [binary search trees](../Binary Search Tree/). In that case, the nodes must be in a specific order (smaller values on the left, larger values on the right). But this is not a requirement for all binary trees.
+Binary trees are often used as [binary search trees](../Binary%20Search%20Tree/). In that case, the nodes must be in a specific order (smaller values on the left, larger values on the right). But this is not a requirement for all binary trees.

For example, here is a binary tree that represents a sequence of arithmetical operations, `(5 * (a - 10)) + (-4 * (3 / b))`:

2 changes: 1 addition & 1 deletion Bloom Filter/README.markdown
@@ -18,7 +18,7 @@ An advantage of the Bloom Filter over a hash table is that the former maintains
## Inserting objects into the set

-A Bloom Filter is essentially a fixed-length [bit vector](../Bit Set/), an array of bits. When we insert objects, we set some of these bits to `1`, and when we query for objects we check if certain bits are `0` or `1`. Both operations use hash functions.
+A Bloom Filter is essentially a fixed-length [bit vector](../Bit%20Set/), an array of bits. When we insert objects, we set some of these bits to `1`, and when we query for objects we check if certain bits are `0` or `1`. Both operations use hash functions.

To insert an element in the filter, the element is hashed with several different hash functions. Each hash function returns a value that we map to an index in the array. We then set the bits at these indices to `1` or true.
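The insert and query mechanics described above can be sketched as follows. This is an illustrative toy filter, not the repo's implementation; the type and method names are assumptions:

```swift
// Toy Bloom filter: a fixed-length bit array plus k hash functions.
struct BloomFilter {
    private var bits: [Bool]
    private let hashes: [(String) -> Int]

    init(size: Int, hashes: [(String) -> Int]) {
        self.bits = Array(repeating: false, count: size)
        self.hashes = hashes
    }

    // Map an arbitrary hash value to a valid, non-negative bit index.
    private func index(for element: String, using h: (String) -> Int) -> Int {
        return ((h(element) % bits.count) + bits.count) % bits.count
    }

    // Insert: set the bit at every hash index to true.
    mutating func insert(_ element: String) {
        for h in hashes {
            bits[index(for: element, using: h)] = true
        }
    }

    // Query: the element is only *possibly* present if all its bits are set.
    func mightContain(_ element: String) -> Bool {
        return hashes.allSatisfy { bits[index(for: element, using: $0)] }
    }
}
```

Note that `mightContain` can return false positives (some other inserts may have set the same bits), but never false negatives.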

2 changes: 1 addition & 1 deletion Bounded Priority Queue/README.markdown
@@ -1,6 +1,6 @@
# Bounded Priority queue

-A bounded priority queue is similar to a regular [priority queue](../Priority Queue/), except that there is a fixed upper bound on the number of elements that can be stored. When a new element is added to the queue while the queue is at capacity, the element with the highest priority value is ejected from the queue.
+A bounded priority queue is similar to a regular [priority queue](../Priority%20Queue/), except that there is a fixed upper bound on the number of elements that can be stored. When a new element is added to the queue while the queue is at capacity, the element with the highest priority value is ejected from the queue.
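A minimal sketch of that ejection behavior, using a sorted array as backing store. The repo's implementation may use a different internal structure; all names here are illustrative:

```swift
// Illustrative array-backed bounded min-priority queue.
struct BoundedPriorityQueue<T> {
    private(set) var elements: [(value: T, priority: Int)] = []
    let maxCount: Int

    init(maxCount: Int) { self.maxCount = maxCount }

    mutating func enqueue(_ value: T, priority: Int) {
        elements.append((value, priority))
        elements.sort { $0.priority < $1.priority }  // lowest priority value first
        if elements.count > maxCount {
            // At capacity: eject the element with the highest priority value.
            elements.removeLast()
        }
    }
}
```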

## Example

2 changes: 1 addition & 1 deletion Boyer-Moore/README.markdown
@@ -24,7 +24,7 @@ animals.indexOf(pattern: "🐮")

> **Note:** The index of the cow is 6, not 3 as you might expect, because the string uses more storage per character for emoji. The actual value of the `String.Index` is not so important, just that it points at the right character in the string.
-The [brute-force approach](../Brute-Force String Search/) works OK, but it's not very efficient, especially on large chunks of text. As it turns out, you don't need to look at _every_ character from the source string -- you can often skip ahead multiple characters.
+The [brute-force approach](../Brute-Force%20String%20Search/) works OK, but it's not very efficient, especially on large chunks of text. As it turns out, you don't need to look at _every_ character from the source string -- you can often skip ahead multiple characters.

The skip-ahead algorithm is called [Boyer-Moore](https://en.wikipedia.org/wiki/Boyer–Moore_string_search_algorithm) and it has been around for a long time. It is considered the benchmark for all string search algorithms.
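The skipping works by precomputing, for each character of the pattern, how far the search may safely jump on a mismatch. A condensed sketch of that table, using a simplified bad-character rule only, not the repo's full Boyer-Moore implementation:

```swift
// Simplified bad-character skip table: for each character in the pattern,
// how far the search may safely jump when that character causes a mismatch.
func skipTable(for pattern: [Character]) -> [Character: Int] {
    var skips: [Character: Int] = [:]
    for (i, ch) in pattern.enumerated() {
        skips[ch] = max(1, pattern.count - i - 1)
    }
    return skips
}

// Characters that don't occur in the pattern allow the maximum jump:
// the full pattern length.
func skip(for ch: Character, in table: [Character: Int], patternLength: Int) -> Int {
    return table[ch] ?? patternLength
}
```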

4 changes: 2 additions & 2 deletions Breadth-First Search/README.markdown
@@ -148,7 +148,7 @@ This will output: `["a", "b", "c", "d", "e", "f", "g", "h"]`

Breadth-first search can be used to solve many problems. A small selection:

-* Computing the [shortest path](../Shortest Path/) between a source node and each of the other nodes (only for unweighted graphs).
-* Calculating the [minimum spanning tree](../Minimum Spanning Tree (Unweighted)/) on an unweighted graph.
+* Computing the [shortest path](../Shortest%20Path/) between a source node and each of the other nodes (only for unweighted graphs).
+* Calculating the [minimum spanning tree](../Minimum%20Spanning%20Tree%20(Unweighted)/) on an unweighted graph.

*Written by [Chris Pilcher](https://github.com/chris-pilcher) and Matthijs Hollemans*
4 changes: 2 additions & 2 deletions Count Occurrences/README.markdown
@@ -2,9 +2,9 @@

Goal: Count how often a certain value appears in an array.

-The obvious way to do this is with a [linear search](../Linear Search/) from the beginning of the array until the end, keeping count of how often you come across the value. That is an **O(n)** algorithm.
+The obvious way to do this is with a [linear search](../Linear%20Search/) from the beginning of the array until the end, keeping count of how often you come across the value. That is an **O(n)** algorithm.

-However, if the array is sorted you can do it much faster, in **O(log n)** time, by using a modification of [binary search](../Binary Search/).
+However, if the array is sorted you can do it much faster, in **O(log n)** time, by using a modification of [binary search](../Binary%20Search/).

Let's say we have the following array:

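The binary-search modification mentioned above boils down to finding the left and right boundaries of the run of equal values; their difference is the count. A condensed sketch with assumed naming, not the repo's exact code:

```swift
// Count occurrences of `key` in a sorted array in O(log n):
// find the first index whose value is >= key and the first index
// whose value is > key; the difference is the number of occurrences.
func countOccurrences(of key: Int, in a: [Int]) -> Int {
    func boundary(_ target: Int, strict: Bool) -> Int {
        var low = 0, high = a.count
        while low < high {
            let mid = low + (high - low) / 2
            if a[mid] < target || (strict && a[mid] == target) {
                low = mid + 1
            } else {
                high = mid
            }
        }
        return low
    }
    return boundary(key, strict: true) - boundary(key, strict: false)
}
```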
2 changes: 1 addition & 1 deletion Hash Set/README.markdown
@@ -196,7 +196,7 @@ difference2.allElements() // [5, 6]

If you look at the [documentation](http://swiftdoc.org/v2.1/type/Set/) for Swift's own `Set`, you'll notice it has tons more functionality. An obvious extension would be to make `HashSet` conform to `SequenceType` so that you can iterate it with a `for`...`in` loop.

-Another thing you could do is replace the `Dictionary` with an actual [hash table](../Hash Table), but one that just stores the keys and doesn't associate them with anything. So you wouldn't need the `Bool` values anymore.
+Another thing you could do is replace the `Dictionary` with an actual [hash table](../Hash%20Table), but one that just stores the keys and doesn't associate them with anything. So you wouldn't need the `Bool` values anymore.

If you often need to look up whether an element belongs to a set and perform unions, then the [union-find](../Union-Find/) data structure may be more suitable. It uses a tree structure instead of a dictionary to make the find and union operations very efficient.

2 changes: 1 addition & 1 deletion How to Contribute.markdown
@@ -6,7 +6,7 @@ Want to help out with the Swift Algorithm Club? Great!

Take a look at the [list](README.markdown). Any algorithms or data structures that don't have a link yet are up for grabs.

-Algorithms in the [Under construction](Under Construction.markdown) area are being worked on. Suggestions and feedback is welcome!
+Algorithms in the [Under construction](Under%20Construction.markdown) area are being worked on. Suggestions and feedback is welcome!

New algorithms and data structures are always welcome (even if they aren't on the list).

2 changes: 1 addition & 1 deletion Ordered Array/README.markdown
@@ -83,7 +83,7 @@ a // [-2, -1, 1, 3, 4, 5, 7, 9, 10]

The array's contents will always be sorted from low to high, no matter what.

-Unfortunately, the current `findInsertionPoint()` function is a bit slow. In the worst case, it needs to scan through the entire array. We can speed this up by using a [binary search](../Binary Search) to find the insertion point.
+Unfortunately, the current `findInsertionPoint()` function is a bit slow. In the worst case, it needs to scan through the entire array. We can speed this up by using a [binary search](../Binary%20Search) to find the insertion point.

Here is the new version:

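The "new version" referred to above is, in spirit, a binary search for the insertion index. A sketch, not necessarily the repo's exact code:

```swift
// Find the index at which `value` should be inserted to keep `a`
// sorted, using binary search: O(log n) instead of a linear scan.
func findInsertionPoint(_ a: [Int], _ value: Int) -> Int {
    var startIndex = 0
    var endIndex = a.count
    while startIndex < endIndex {
        let midIndex = startIndex + (endIndex - startIndex) / 2
        if a[midIndex] < value {
            startIndex = midIndex + 1
        } else {
            endIndex = midIndex
        }
    }
    return startIndex
}
```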
2 changes: 1 addition & 1 deletion Ordered Set/README.markdown
@@ -115,7 +115,7 @@ The next function is `indexOf()`, which takes in an object of type `T` and retur
}
```

-> **Note:** If you are not familiar with the concept of binary search, we have an [article that explains all about it](../Binary Search).
+> **Note:** If you are not familiar with the concept of binary search, we have an [article that explains all about it](../Binary%20Search).

However, there is an important issue to deal with here. Recall that two objects can be unequal yet still have the same "value" for the purposes of comparing them. Since a set can contain multiple items with the same value, it is important to check that the binary search has landed on the correct item.

6 changes: 3 additions & 3 deletions Selection Sort/README.markdown
@@ -6,7 +6,7 @@ You are given an array of numbers and need to put them in the right order. The s

[ ...sorted numbers... | ...unsorted numbers... ]

-This is similar to [insertion sort](../Insertion Sort/), but the difference is in how new numbers are added to the sorted portion.
+This is similar to [insertion sort](../Insertion%20Sort/), but the difference is in how new numbers are added to the sorted portion.

It works as follows:

@@ -108,9 +108,9 @@ The source file [SelectionSort.swift](SelectionSort.swift) has a version of this

## Performance

-Selection sort is easy to understand but it performs quite badly, **O(n^2)**. It's worse than [insertion sort](../Insertion Sort/) but better than [bubble sort](../Bubble Sort/). The killer is finding the lowest element in the rest of the array. This takes up a lot of time, especially since the inner loop will be performed over and over.
+Selection sort is easy to understand but it performs quite badly, **O(n^2)**. It's worse than [insertion sort](../Insertion%20Sort/) but better than [bubble sort](../Bubble Sort/). The killer is finding the lowest element in the rest of the array. This takes up a lot of time, especially since the inner loop will be performed over and over.

-[Heap sort](../Heap Sort/) uses the same principle as selection sort but has a really fast method for finding the minimum value in the rest of the array. Its performance is **O(n log n)**.
+[Heap sort](../Heap%20Sort/) uses the same principle as selection sort but has a really fast method for finding the minimum value in the rest of the array. Its performance is **O(n log n)**.
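For reference, a compact version of the algorithm under discussion, close in spirit to the repo's SelectionSort.swift though details may differ:

```swift
// Selection sort: repeatedly find the lowest element in the unsorted
// portion and swap it to the front. O(n^2) comparisons.
func selectionSort(_ array: [Int]) -> [Int] {
    guard array.count > 1 else { return array }
    var a = array
    for x in 0..<a.count - 1 {
        // Find the index of the lowest element in a[x...].
        var lowest = x
        for y in (x + 1)..<a.count where a[y] < a[lowest] {
            lowest = y
        }
        if x != lowest {
            a.swapAt(x, lowest)
        }
    }
    return a
}
```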

## See also

2 changes: 1 addition & 1 deletion Shell Sort/README.markdown
@@ -39,7 +39,7 @@ As you can see, each sublist contains only every 4th item from the original arra

We now call `insertionSort()` once on each sublist.

-This particular version of [insertion sort](../Insertion Sort/) sorts from the back to the front. Each item in the sublist is compared against the others. If they're in the wrong order, the value is swapped and travels all the way down until we reach the start of the sublist.
+This particular version of [insertion sort](../Insertion%20Sort/) sorts from the back to the front. Each item in the sublist is compared against the others. If they're in the wrong order, the value is swapped and travels all the way down until we reach the start of the sublist.

So for sublist 0, we swap `4` with `72`, then swap `4` with `64`. After sorting, this sublist looks like:

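The gapped pass just described can be sketched as an insertion sort that steps through the array `gap` elements at a time. This is illustrative; the parameter names are assumptions:

```swift
// Insertion sort over one gapped sublist: the elements at indices
// start, start+gap, start+2*gap, ... are sorted in place.
func insertionSort(_ a: inout [Int], start: Int, gap: Int) {
    var i = start + gap
    while i < a.count {
        let temp = a[i]
        var j = i
        // Walk the value down the sublist until it is in order.
        while j >= start + gap && a[j - gap] > temp {
            a[j] = a[j - gap]
            j -= gap
        }
        a[j] = temp
        i += gap
    }
}
```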