| Remainder after division ([Modulo](#modulo)) | % |
| Powers | \*\* |
| Assignment | =, $\leftarrow$, := |
#### Data type comparison operations
| Less than or equal to $\leqslant$ | <= |
| Greater than $>$ | > |
| Greater than or equal to $\geqslant$ | >= |
| Equal to $\equiv$ | == |
| Not equal to $\neq$ | != |
### Data structures
Also known as Abstract Data Types or ADTs, excluding the [array](#array).
#### Array
##### Cons method
!!! warning "Rare content"

    Just noting that I have never seen this used before.

The List ADT typically includes a special method called "cons", short for construct, with the following [signature specification](#type-signature):

```signature
cons: item × List → List
```

The behaviour of this operation is the inverse of the first and rest operations above. In other words, for any list `l`, if we "cons" the head of `l` with the rest of `l`, we get `l`. (Formally, for all non-empty lists `l`, `cons(first(l), rest(l)) = l`.)
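A minimal sketch of the `cons`, `first`, and `rest` operations, assuming Python tuples as an illustrative list representation:

```python
def cons(item, lst):
    """construct: prepend item to the front of a list."""
    return (item,) + lst

def first(lst):
    """Return the head (first element) of a non-empty list."""
    return lst[0]

def rest(lst):
    """Return everything after the head."""
    return lst[1:]

l = ("a", "b", "c")
# cons is the inverse of first/rest: cons(first(l), rest(l)) == l
assert cons(first(l), rest(l)) == l
```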

#### Associative array

Associative array methods:

- `.add()` or `associativeArray[key] = xyz`
- `.remove()` or `del associativeArray[key]`
- `.change()` or `associativeArray[key] = xyz`
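Python's built-in `dict` is an associative array; the key and values below are illustrative:

```python
phone_book = {}

phone_book["alice"] = "555-0100"   # .add(): insert a key/value pair
phone_book["alice"] = "555-0199"   # .change(): assigning again overwrites
del phone_book["alice"]            # .remove(): delete the key

assert "alice" not in phone_book
```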

#### Hash table

A hash table is an implementation of an [associative array](#associative-array) that uses a `(key, bucket)` layout, where the bucket (or slot) is accessed by hashing the key to jump straight to the end value. This is usually preferred over other types of arrays.
When the [hash table](#hash-table) has one hash per item in the array

#### Set

A set is similar to a [hash table](#hash-table), but there are no values, just keys.
Sets have no order and no duplicate items. They are like a pool of keys. They also have a constant lookup time $O(1)$. This means a set of size 3 and a set of size 1 million take the same time to check whether an item is present. This is useful for things like checking the existence of things in a database.
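A quick Python illustration (the element values are arbitrary):

```python
# Sets: no order, no duplicates, O(1) average membership checks
small = {1, 2, 3}
big = set(range(1_000_000))

# Duplicates are discarded on construction
assert {1, 1, 2} == {1, 2}

# Membership tests take (roughly) the same time regardless of size
assert 2 in small
assert 999_999 in big
```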

#### Queue

Queues are a list of values sorted by entry time: a FIFO (first in first out) system. The first value in is the first value out. New values are added at the end of the line via `enqueue`. Common methods include:
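A FIFO queue can be sketched with Python's `collections.deque` (the variable names are illustrative; `append`/`popleft` play the roles of enqueue/dequeue):

```python
from collections import deque

q = deque()
q.append("first")    # enqueue: new values join the end of the line
q.append("second")
assert q.popleft() == "first"   # dequeue: the first value in is the first out
assert q.popleft() == "second"
```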

A list of adjacent nodes:

$$
\begin{align}
A &= \{B,C\} \newline
B &= \{A\} \newline
C &= \{A\}
\end{align}
$$
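The same adjacency list can be written as a Python dict of neighbour sets:

```python
# Adjacency list: each node maps to the set of its neighbours
graph = {
    "A": {"B", "C"},
    "B": {"A"},
    "C": {"A"},
}
assert "B" in graph["A"]
```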
"The basic idea of breadth-first search is to fan out to as many vertices as possible before penetrating deep into a graph." "A more cautious and balanced approach."

Main points:

- Uses a [queue](computer-science.md#queue) to keep track of nodes to visit soon
- Uses an array/[set](computer-science.md#set) called `seen` to mark visited vertices
- If the graph is connected, BFS will visit all the nodes
- A BFS tree will show the shortest path from `A` to `B` for any unweighted graph

**Algorithm**:

1. Add initial node to `queue` and mark in `seen`
    1. Then add its neighbours to the `queue`
2. Remove the next element from the `queue` and call it `current`.
3. Get all neighbours of the current node **which are not yet marked in `seen`.**
    1. Store all these neighbours into the `queue` and mark them all in `seen`.
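The steps above can be sketched in Python (the graph layout, a dict of neighbour lists, is an assumption):

```python
from collections import deque

def bfs(graph, start):
    seen = {start}              # mark the initial node in `seen`
    queue = deque([start])      # add the initial node to `queue`
    order = []
    while queue:                # repeat until the queue is empty
        current = queue.popleft()    # remove the next element
        order.append(current)
        for neighbour in graph[current]:
            if neighbour not in seen:    # only unseen neighbours
                seen.add(neighbour)
                queue.append(neighbour)
    return order

graph = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
# If the graph is connected, BFS visits every node
assert set(bfs(graph, "A")) == {"A", "B", "C"}
```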
**Uses of BFS**
- Check connectivity
- Bucket-fill tool from Microsoft Paint
- Finding shortest path (when modified; pure BFS will not do this)
- [Diameter](#graph-diameter) of a graph
    - The diameter of a graph is the longest shortest path between any two nodes in a graph. Using BFS in a loop and finding the shortest path starting from every node in the graph, keep a record of the longest shortest path found so far.
- [Cycle](#cycle) detection
- [Bipartite graph](#bipartite-graphs) detection using BFS

#### Waveform

[BFS Breadth First Search](#bfs-breadth-first-search) can also be considered a waveform due to its nature
<img src="images/WAVEg.gif" alt="WAVEg" width="200">
"The basic idea of depth-first search is to penetrate as deeply as possible into a graph before fanning out to other vertices." "You must be brave and go forward quickly."

Main points:

- Uses a [`stack`](computer-science.md#stack) for storing vertices
- We do not check whether a node has been seen when storing neighbours in the `stack`; instead we perform this check when retrieving the node from it.
- Builds a [spanning tree](#spanning-tree) if the graph is [connected](#connected-graph)
- Creates longer branches than BFS
**Algorithm**:
1. We add the initial node to `stack`.
    1. And then push its neighbours onto the `stack`
2. Pop an element from the `stack` and call it `current`.
3. If the `current` node is `seen` then skip it (going to step 6).
4. Otherwise mark the `current` node as `seen`.
5. Get all neighbours of the `current` node and push them onto the `stack`.
6. Repeat from step 2 until the `stack` becomes empty.
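The steps above can be sketched in Python (graph layout, a dict of neighbour lists, is an assumption):

```python
def dfs(graph, start):
    seen = set()
    stack = [start]             # add the initial node to the stack
    order = []
    while stack:                # repeat until the stack is empty
        current = stack.pop()   # pop an element and call it `current`
        if current in seen:     # skip nodes we have already seen
            continue
        seen.add(current)       # otherwise mark it as seen
        order.append(current)
        stack.extend(graph[current])   # push all neighbours onto the stack
    return order

graph = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
assert set(dfs(graph, "A")) == {"A", "B", "C"}
```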
Gif example of DFS:
**Uses for DFS**
- Detecting [cycles](#cycle) in a graph
- [Topological sorting](#topological-sort)
- Path Finding between initial state and target state in a graph
- Checking if a graph is [bipartite](#bipartite-graphs)
### Tree traversal
How to search trees:
- Any tree, but normally a [Binary tree](#binary-tree)
- You can use
    - [DFS Depth First Search](#dfs-depth-first-search)
**Algorithm**:
- Begin at **any** vertex
- Select the cheapest edge emanating from any vertex you've visited so far. If the edge forms a cycle, discard it and select the next cheapest.
- Draw an edge to that node and consider it visited
- Repeat until all vertices have been selected.
Prim's algorithm for finding an [MST](graphs.md#minimum-spanning-tree) is **proved** to produce a correct result
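The steps above can be sketched in Python, assuming a weighted adjacency list of the form `node -> [(weight, neighbour)]`:

```python
import heapq

def prim(graph, start):
    visited = {start}                   # begin at any vertex
    edges = [(w, start, n) for w, n in graph[start]]
    heapq.heapify(edges)                # cheapest available edge first
    mst = []
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)  # cheapest edge from the visited set
        if v in visited:
            continue                    # discard edges that would form a cycle
        visited.add(v)                  # draw the edge and mark the node visited
        mst.append((u, v, w))
        for w2, n in graph[v]:
            if n not in visited:
                heapq.heappush(edges, (w2, v, n))
    return mst

graph = {
    "A": [(1, "B"), (4, "C")],
    "B": [(1, "A"), (2, "C")],
    "C": [(4, "A"), (2, "B")],
}
mst = prim(graph, "A")
assert sum(w for _, _, w in mst) == 3   # edges A-B (1) and B-C (2)
```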
### Kruskal's algorithm
!!! note

    Just for fun. Not in the study design.

Finds a minimum spanning forest by connecting the shortest edges in the graph. Works like Prim's, but orders the list of edges from shortest to longest and builds a tree that way.
## Shortest Path algorithms
Quickest way to get from $A$ to $B$. Usually by lowest total weight.
### Dijkstra's algorithm
Pronounced *Dike-stra*.
Greedily finds the shortest path via a **[priority queue](computer-science.md#priority-queue)**.
- Although Dijkstra's does store information as it builds a solution, it is not [Dynamic Programming](algorithms.md#dynamic-programming), because it does not explicitly or fully solve any discrete sub-problems of the original input. This is debated, but for the sake of VCE just remember it as [greedy](algorithms.md#greedy)
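A sketch of the algorithm with Python's `heapq` as the priority queue, assuming a weighted adjacency list `node -> [(weight, neighbour)]`:

```python
import heapq

def dijkstra(graph, start):
    dist = {start: 0}
    pq = [(0, start)]               # priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)    # greedily take the closest node
        if d > dist.get(u, float("inf")):
            continue                # stale queue entry, skip it
        for w, v in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w     # found a shorter path to v
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {
    "A": [(1, "B"), (4, "C")],
    "B": [(2, "C")],
    "C": [],
}
assert dijkstra(graph, "A") == {"A": 0, "B": 1, "C": 3}
```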
### Bellman-Ford algorithm

Steps:

1. Initialise graph
2. [Relax](#relaxation) edges repeatedly
3. Check for negative-weight cycles
4. Output shortest path as a distance and predecessor list (depending on setup)
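The steps can be sketched in Python, assuming the edges are given as `(u, v, weight)` tuples:

```python
def bellman_ford(nodes, edges, start):
    dist = {n: float("inf") for n in nodes}   # step 1: initialise graph
    pred = {n: None for n in nodes}
    dist[start] = 0
    for _ in range(len(nodes) - 1):           # step 2: relax edges V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
    for u, v, w in edges:                     # step 3: negative-cycle check
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle")
    return dist, pred                         # step 4: distances and predecessors

nodes = ["A", "B", "C"]
edges = [("A", "B", 4), ("A", "C", 2), ("C", "B", -1)]
dist, pred = bellman_ford(nodes, edges, "A")
assert dist == {"A": 0, "B": 1, "C": 2}
assert pred["B"] == "C"
```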
Unlike Dijkstra's Algorithm, the Bellman-Ford Algorithm does not use a [priority queue](computer-science.md#priority-queue) to process the edges.
In Step 2 ([Relaxation](#relaxation)) the nested for-loop processes all edges in the graph $V-1$ times. Bellman-Ford is not a Greedy Algorithm; it uses Brute Force to build the solution incrementally, possibly going over each vertex several times. If a vertex has a lot of incoming edges, its distance and predecessor may be updated many times.
Pseudocode
- Assumes there are no negative cycles, so this needs to be checked after
- Returns a [Distance Matrix](#distance-matrix) of shortest path weights.
- A [Dynamic Programming](algorithms.md#dynamic-programming) algorithm
- Generates the transitive closure of a graph, if a graph is constructed from the distance matrix output.
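The bullets above (a cubic-time dynamic program that returns a distance matrix) can be sketched in Python in the Floyd-Warshall style; this is a minimal sketch, assuming a weight-matrix input with `float("inf")` for missing edges:

```python
INF = float("inf")

def all_pairs_shortest_paths(dist):
    n = len(dist)
    d = [row[:] for row in dist]    # copy the input distance matrix
    for k in range(n):              # allow node k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d                        # matrix of shortest path weights

dist = [
    [0, 3, INF],
    [3, 0, 1],
    [INF, 1, 0],
]
result = all_pairs_shortest_paths(dist)
assert result[0][2] == 4   # node 0 -> node 2 via node 1
```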