
Commit e974fbc

finished
1 parent 85be9a7 commit e974fbc

3 files changed: +71, -73 lines changed

docs/computer-science.md

Lines changed: 22 additions & 16 deletions
@@ -4,13 +4,9 @@ icon: material/state-machine
 
 # Computer Science
 
-!!! danger
-
-This is a work in progress. Some information may be incorrect or outdated
-
 !!! note
 
-Lots of this content is not applicable to the study design. Consider it extra-curricular or supplimental
+Some of this content is not applicable to the study design. Consider it extra-curricular or supplimental
 
 ## Variables
 
@@ -39,18 +35,18 @@ A single action
 PRINT "abc"
 ```
 
-#### Fundamental data types:
+#### Fundamental operations:
 
 | Operation | Symbol |
 | ---------------------------------- | ------------------- |
 | Addition | + |
 | Subtraction | - |
 | Multiplication | \* |
 | Division | / |
-| Whole number division (`int(x/y)`) | // |
-| Remainder after division (#Modulo) | % |
+| Whole number division (`floor(x/y)`) | // |
+| Remainder after division ([Modulo](#modulo)) | % |
 | Powers | \*\* |
-| Assign values | =, $\leftarrow$, := |
+| Assignment | =, $\leftarrow$, := |
 
 #### Data type comparison operations
 
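As a quick illustration of the whole-number division, remainder and power operators in the table above, a minimal Python sketch (the values are arbitrary):

```python
x, y = 17, 5

print(x / y)   # 3.4      - ordinary division
print(x // y)  # 3        - whole number division, i.e. floor(x/y)
print(x % y)   # 2        - remainder after division (modulo)
print(x ** y)  # 1419857  - powers

total = x + y  # assignment with "=", the pseudocode arrow form is total <- x + y
```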
@@ -60,7 +56,7 @@ PRINT "abc"
 | Less than or equal to $\leqslant$ | <= |
 | Greater than $>$ | > |
 | Greater than or equal to $\geqslant$ | >= |
-| Equal to $\equiv$ | \== |
+| Equal to $\equiv$ | == |
 | Not equal to $\neq$ | != |
 
 
@@ -120,7 +116,7 @@ Can be represented differently in different languages
 
 ### Data structures
 
-Also known as Abstract Data Types or ADTs
+Also known as Abstract Data Types or ADTs. Excluding the [array](#array)
 
 #### Array
 
@@ -137,13 +133,17 @@ An array that supports multiple types of variables and is able to undergo severa
 
 ##### Cons method
 
+!!! warning "Rare content"
+
+Just noting that I have never seen this used before
+
 The List ADT standardly includes a special method called "cons", short for construct, with the following [signature specification](#type-signature):
 
 ```signature
 cons: item × List → List
 ```
 
-The behaviour of this operation is the inverse of the first and rest operations above. In other words, for any list l, if we "cons" the head of l with the rest of l, we get l. (Formally, for all non-empty lists l, cons(first(l), rest(l)) = l.)
+The behaviour of this operation is the inverse of the first and rest operations above. In other words, for any list l, if we "cons" the head of l with the rest of l, we get l. (Formally, for all non-empty lists l, `cons(first(l), rest(l)) = l`.)
 
 #### Associative array
 
@@ -155,20 +155,26 @@ associcativeArray[key]
 
 Associative array methods:
 
-- `.add()`
-- `.remove()`
-- `.change()`
+- `.add()` or `associcativeArray[key] = xyz`
+- `.remove()` or `del associcativeArray[key]`
+- `.change()` or `associcativeArray[key] = xyz`
 
 #### Hash table
 
-A Hash Table is a type of [associative array](#associative-array) that uses a `(key,bucket)` layout. Where the bucket (or slot) is accessed via the key using hashes to instantly traverse to the end value. Which is usually preferred over other types of arrays.
+A Hash Table is an implimentation of an [associative array](#associative-array) that uses a `(key,bucket)` layout. Where the bucket (or slot) is accessed via the key using hashes to instantly traverse to the end value. Which is usually preferred over other types of arrays.
 
 ![Pasted image 20220209103939.png](images/Pasted image 20220209103939.png)
 
 ##### Perfect Hash function
 
 When the [hash table](#hash-table) has one hash per item in the array
 
+#### Set
+
+A set is simmilar to a [hash table](#hash-table), but there are no values. It's just keys.
+
+Sets have no order and no duplicate items. They are like a pool of keys. They also have a constant lookup time $O(1)$. This means you could have a set of size 3 and size 1 million and checking if an item was in either set would take the same time for both of them. This is useful for things like checking the existance of things in a database.
+
 #### Queue
 
 Queues are a list of values sorted by entry time. A FIFO (first in first out) system. The first value in is first value out. New values are added at then end of the line via (`enqueue`). Common methods include:

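As a rough illustration of the associative array, set and queue behaviour described above, a minimal Python sketch (Python's built-in `dict` and `set` are hash-based, and `collections.deque` stands in for the queue):

```python
from collections import deque

# Associative array / hash table: add, change and remove by key
ages = {}
ages["alice"] = 21      # add
ages["alice"] = 22      # change
del ages["alice"]       # remove

# Set: unordered, no duplicates, constant-time membership checks
seen = {"B", "C"}
seen.add("B")           # duplicate, the set is unchanged
print("C" in seen)      # True - O(1) lookup regardless of set size

# Queue: FIFO, enqueue at the end, dequeue from the front
queue = deque()
queue.append("first")   # enqueue
queue.append("second")
print(queue.popleft())  # "first" - dequeue
```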
docs/graphs.md

Lines changed: 49 additions & 57 deletions
@@ -4,10 +4,6 @@ icon: material/graph
 
 # Graphs
 
-!!! danger
-
-This is a work in progress. Some information may be incorrect or outdated
-
 ## Terminology
 
 ### Nodes
@@ -625,8 +621,13 @@ The direction can be read from **the left of the matrix** (left side of rows) **
 
 !!! note
 
-- Undirected graphs are diagonaly symetrical, where directed are not
-- All adjacency matricies are empty on the diagonal
+- Undirected graphs are diagonaly symetrical with respect to the rows and collums of an adjacency matrix. Directed graphs are not
+- For graphs with no [loops](#loops) (e.g. [simple graphs](#simple-graph)) All adjacency matricies are empty on the diagonal.
+An example of a graph with loops:
+
+$
+\displaystyle {\begin{pmatrix}2&1&0&0&1&0\newline1&0&1&0&1&0\newline0&1&0&1&0&0\newline0&0&1&0&1&1\newline1&1&0&1&0&0\newline0&0&0&1&0&0\end{pmatrix}}
+$. ![looped graph with adjacency matrix](images/looped-adj.png){width=200px}
 
 <img src="images/Pasted image 20220222165941.png" alt="Pasted image 20220222165941">
 
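A small Python sketch of the symmetry and diagonal properties noted above, using a hypothetical 3-node undirected graph with a single loop (the loop is counted twice on the diagonal, matching the convention of the matrix above):

```python
# Adjacency matrix for an undirected graph on nodes 0, 1, 2
# with edges 0-1, 1-2 and a loop on node 0.
adj = [
    [2, 1, 0],  # row 0: loop at (0,0) counted twice, edge to node 1
    [1, 0, 1],  # row 1
    [0, 1, 0],  # row 2
]

n = len(adj)

# Undirected => the matrix equals its transpose (diagonally symmetrical)
assert all(adj[i][j] == adj[j][i] for i in range(n) for j in range(n))

# The diagonal is only non-zero where a node has a loop
print([adj[i][i] for i in range(n)])  # [2, 0, 0]
```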
@@ -661,11 +662,11 @@ The shortest distances from $A$ to $B$ for each pair of nodes in a graph. Used i
 A list of adjacent nodes:
 
 $$
-\begin{aligned}
-A = \{B,C\} \\
-B = \{A\} \\
-C = \{A\}
-\end{aligned}
+\begin{align}
+A &= \{B,C\} \newline
+B &= \{A\} \newline
+C &= \{A\}
+\end{align}
 $$
 
 
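The same adjacency list can be sketched in Python as a dictionary mapping each node to the set of its neighbours:

```python
# Adjacency list for the undirected graph with edges A-B and A-C shown above
adjacency = {
    "A": {"B", "C"},
    "B": {"A"},
    "C": {"A"},
}

print(adjacency["A"])         # neighbours of A: {'B', 'C'}
print("C" in adjacency["A"])  # True - A and C are adjacent
```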
@@ -677,17 +678,18 @@ $$
 
 "The basic idea of breadth-first search is to fan out to as many vertices as possible before penetrating deep into a graph. "A more cautious and balanced approach."
 
-
+Main points:
 
 - **To see if a node is connected to another**
 - Uses a [queue](computer-science.md#queue) to keep track of nodes to visit soon
-- Uses an array/set `seen` to mark visited vertices
+- Uses an array/[set](computer-science.md#set) called `seen` to mark visited vertices
 - If the graph is connected, BFS will will visit all the nodes
 - A BFS tree will show the shortest path from `A` to `B` for any unweighted graph
 
 **Algorithm**:
 
 1. Add initial node to `queue` and mark in `seen`
+1. Then add it's neighbours to the queue
 2. Remove the next element from the `queue` and call it `current`.
 3. Get all neighbours of the current node **which are not yet marked in `seen`.**
 1. Store all these neighbours into the `queue` and mark them all in `seen`.
@@ -704,16 +706,14 @@ SAC example:
 **Uses of BFS**
 
 - Check connectivity
-- Bucket fill tool in
+- Bucket-fill tool from Microsoft paint
 - Finding shortest path (when modified. pure BFS will not do this)
 - [Diameter](#graph-diameter) of a graph
 - The diameter of a graph is the longest shortest path between any two nodes in a graph. Using BFS in a loop and finding the shortest path starting from every node in the graph, keep record of the longest shortest path found so far.
-- Cycle detection
+- [Cycle](#cycle) detection
 - [Bipartite graph](#bipartite-graphs) detection using BFS
 
-#### Waveform
-
-[BFS Breadth First Search](#bfs-breadth-first-search) can be considered a waveform due to its nature
+[BFS Breadth First Search](#bfs-breadth-first-search) can also be considered a waveform due to its nature
 
 <img src="images/WAVEg.gif" alt="WAVEg" width="200">
 
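A minimal Python sketch of the BFS procedure described above, using a queue and a `seen` set (the adjacency-list format and node names are arbitrary):

```python
from collections import deque

def bfs(graph, start):
    """Visit every node reachable from start, fanning out level by level.

    graph: adjacency list, e.g. {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}}
    """
    queue = deque([start])      # nodes to visit soon
    seen = {start}              # mark the initial node in seen
    order = []

    while queue:
        current = queue.popleft()            # remove the next element
        order.append(current)
        for neighbour in graph[current]:     # neighbours not yet marked in seen
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)      # store them into the queue
    return order

print(bfs({"A": {"B", "C"}, "B": {"A"}, "C": {"A"}}, "A"))
```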
@@ -723,22 +723,22 @@ SAC example:
 
 "The basic idea of depth-first search is to penetrate as deeply as possible into a graph before fanning out to other vertices. "You must be brave and go forward quickly."
 
-
+Main points:
 
 - **To see if a node is connected to another**
-- Uses a `stack` for storing vertices
+- Uses a [`stack`](computer-science.md#stack) for storing vertices
 - We do not check whether node was seen when storing neighbours in the `stack` - instead we perform this checking when retrieving the node from it.
 - Builds a [spanning tree](#spanning-tree) if the graph is [connected](#connected-graph)
 - Creates longer branches than BFS
-- Unsuitable for searching shortest paths for unweighted graphs.
 
 **Algorithm**:
 
 1. We add the initial node to `stack`.
-2. Remove the next element from the `stack` and call it `current`.
-3. If the `current` node was `seen` then skip it (going to step 6).
+1. And then push it's neighbours onto the stack
+2. Pop an element from the `stack` and call it `current`.
+3. If the `current` node is `seen` then skip it (going to step 6).
 4. Otherwise mark the current node as `seen`
-5. Get all neighbours of the `current` node and add all them to `stack`.
+5. Get all neighbours of the `current` node and push them onto the `stack`.
 6. Repeat from the step 2 until the `stack` becomes empty.
 
 Gif example of DFS:
@@ -751,24 +751,16 @@ Practice example:
 
 **Uses for DFS**
 
-- Detecting cycles in a graph
-- Topological sorting
+- Detecting [cycles](#cycle) in a graph
+- [Topological sorting](#topological-sort)
 - Path Finding between initial state and target state in a graph
-- Finding strongly connected components
+- Finding [strongly connected](#strongly-connected-digraph) components
 - Checking if a graph is [bipartite](#bipartite-graphs)
 
-### Exhaustive search
-
-Also known as a [brute force](algorithms.md#brute-force) algorithm
-
-[Hamiltonian Circuit](#hamiltonian-circuit) example
-1. Find all Hamiltonian circuits
-2. Find length of of each circuit
-3. Select the circuit with minimal total weight
-
 ### Tree traversal
 
 How to search trees:
+
 - Any tree, but normally a [Binary tree](#binary-tree)
 - You can use
 - [DFS Depth First Search](#dfs-depth-first-search)
844836

845837

846838
Algorithm
847-
- Begin at any vertex (**Can be any**)
848-
- Select the cheapest edge emanating from the vertex
849-
- Look at edges coming from the vertices selected so far. Select the cheapest edge. If the edge forms a circuit discard it and select the next cheapest.
839+
840+
- Begin at **any** vertex
841+
- Select the cheapest edge emanating from any vertex you've visited so far. If the edge forms a cycle discard it and select the next cheapest.
842+
- Draw an edge to that node and consider it visited
850843
- Repeat until all vertices have been selected.
851-
- Double check by repeating process with a different starting vertex.
852844

853845
Example:
854846

@@ -873,27 +865,25 @@ Or
 
 <img src="images/Pasted image 20220308183646.png" alt="Pasted image 20220308183646">
 
-Prim's algorithm for finding an [MST](graphs.md#minimum-spanning-tree) is **guaranteed** to produce a correct result
+Prim's algorithm for finding an [MST](graphs.md#minimum-spanning-tree) is **proved** to produce a correct result
 
 ### Kruskal's algorithm
 
-Finds minimum spanning forest by connecting the shortest edges in the graph.
+!!! note
+
+Just for fun. Not in the study design
+
+Finds minimum spanning forest by connecting the shortest edges in the graph. Works like prims, but orders the list of edges from shortest to longest and builds a tree that way
 
 ![Wiki example of kruskals|#invert](images/KruskalDemo.gif)
 
 ## Shortest Path algorithms
 
 Quickest way to get from $A$ to $B$. Usually by shortest weight.
 
-Scenarios:
-- Worst case graph:
-- A [complete graph](#complete-graphs). As it has the most edges.
-- Worst case brute force
-- A [complete graph](#complete-graphs) will have the most permutations of paths. It will have: $\sum_{k=0}^{(n-2)} {n\choose k} \cdot k!$
-
 ### Dijkstra's algorithm
 
-Pronounced *Dikestra*.
+Pronounced *Dike-stra*.
 Finds the shortest **greedy** path via a **[priority queue](computer-science.md#priority-queue)**.
 
 - Although dijkstras does store information as it is building a solution, it is not [Dynamic Programming](algorithms.md#dynamic-programming) because it does not explicitly or fully solve any discrete sub-problems of the original input. This is debated, but for the sake of VCE just remember it as [greedy](algorithms.md#greedy)
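A minimal Python sketch of the greedy, priority-queue-driven idea behind Dijkstra's algorithm, assuming non-negative edge weights and an arbitrary dictionary-of-dictionaries graph format (`heapq` stands in for the priority queue):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path weights from source to every reachable node.

    graph: {node: {neighbour: edge_weight, ...}, ...} with non-negative weights
    """
    dist = {source: 0}
    pq = [(0, source)]                    # priority queue of (distance, node)

    while pq:
        d, node = heapq.heappop(pq)       # greedily take the closest node
        if d > dist.get(node, float("inf")):
            continue                      # stale entry, a shorter path was found
        for neighbour, weight in graph[node].items():
            new_dist = d + weight
            if new_dist < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_dist
                heapq.heappush(pq, (new_dist, neighbour))
    return dist

graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1}, "C": {}}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3}
```

The `d > dist` guard skips stale priority-queue entries, which is what lets this sketch get by without a decrease-key operation.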
@@ -1003,10 +993,12 @@ See [wiki example](https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm#:~:text=
 
 ### Belman-Ford algorithm
 
-Step 1: initialise graph
-Step 2: [relax](#relaxation) edges repeatedly
-Step 3: check for negative-weight cycles
-Output shortest path as a distance and predecessor list (depending on setup)
+Steps:
+
+1. Initialise graph
+2. [Relax](#relaxation) edges repeatedly
+3. Check for negative-weight cycles
+4. Output shortest path as a distance and predecessor list (depending on setup)
 
 ![GraphFromWikiBellmanFord](images/Bellman–Ford_algorithm_example.gif)
 
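A rough Python sketch of the four steps above, assuming the graph is given as a list of weighted edges (node names and helper variables are arbitrary):

```python
def bellman_ford(nodes, edges, source):
    """edges: list of (u, v, weight) tuples for a directed, weighted graph."""
    # Step 1: initialise graph
    dist = {node: float("inf") for node in nodes}
    pred = {node: None for node in nodes}
    dist[source] = 0

    # Step 2: relax every edge, repeated |V| - 1 times
    for _ in range(len(nodes) - 1):
        for u, v, weight in edges:
            if dist[u] + weight < dist[v]:
                dist[v] = dist[u] + weight
                pred[v] = u

    # Step 3: check for negative-weight cycles
    for u, v, weight in edges:
        if dist[u] + weight < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")

    # Step 4: output distances and predecessor list
    return dist, pred

nodes = ["A", "B", "C"]
edges = [("A", "B", 4), ("A", "C", 2), ("C", "B", -1)]
print(bellman_ford(nodes, edges, "A"))  # B ends up at distance 1 via C
```

Relaxing the full edge list $|V|-1$ times, with no priority queue, is the brute-force aspect noted in the next hunk.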
@@ -1016,9 +1008,9 @@ Output shortest path as a distance and predecessor list (depending on setup)
 [Online runner](https://algorithms.discrete.ma.tum.de/graph-algorithms/spp-bellman-ford/index_en.html)
 
 
-Unlike Dijkstra's Algorithm the Bellman-Ford Algorithm does not use a [priority queue](computer-science.md#priority-queue) to process the edges.
+Unlike Dijkstra's Algorithm, the Bellman-Ford Algorithm does not use a [priority queue](computer-science.md#priority-queue) to process the edges.
 
-In Step 2 ([Relaxation](#relaxation)) the nested for loop process all edges in the graph for each vertex $(V-1)$. Bellman-Ford is not a Greedy Algorithm, it uses Brute Force to build the solution incrementally, possibly going over each vertex several times. If a vertex has a lot of incoming edges it is updating the vertex's distance and predecessor many times.
+In Step 2 ([Relaxation](#relaxation)) the nested for-loop processes all edges in the graph for each vertex $(V-1)$. Bellman-Ford is not a Greedy Algorithm, it uses Brute Force to build the solution incrementally, possibly going over each vertex several times. If a vertex has a lot of incoming edges it is updating the vertex's distance and predecessor many times.
 
 Pseudocode
 
@@ -1075,8 +1067,8 @@ $$O (|V|^3)$$
 
 - Assumes there are no negative cycles, so this needs to be checked after
 - Returns a [Distance Matrix](#distance-matrix) of shortest path weights.
-- Dynamic program
-- Generates the transitive closure of a graph, if a graph is constructed from the distance matrix output.
+- Uses [dynamic programming](algorithms.md#dynamic-programming)
+- Generates the [transitive closure](#transitive-closure) of a graph, if a graph is constructed from the distance matrix output.
 
 ```js
 Algorithm Floyd-Warshall

docs/images/looped-adj.png

235 KB
