`_pages/dat450/assignment2.md` (5 additions & 5 deletions)
@@ -10,7 +10,7 @@ nav_order: 4
# DAT450/DIT247: Programming Assignment 2: Transformer language models
In this assignment, we extend the models we investigated in the previous assignment in two different ways:
-- In the previous assignment, we used a model that takes a fixed number of previous words into account. Now, we will use a model capable of considering a variable number of previous words: a *recurrent neural network*. (Optionally, you can also investigate *Transformers*.)
+- We will now use a *Transformer* instead of the recurrent neural network we used previously.
- In this assignment, we will also use our language model to generate texts.
### Pedagogical purposes of this assignment
@@ -19,13 +19,13 @@ In this assignment, we extend the models we investigated in the previous assignm
### Requirements
-Please submit your solution in [Canvas](https://chalmers.instructure.com/courses/XX/assignments/YY). **Submission deadline**: November XX.
+Please submit your solution in [Canvas](https://chalmers.instructure.com/courses/XX/assignments/YY). **Submission deadline**: November 17.
Submit a XX
## Step 0: Preliminaries
-Make sure you have access to your solution for Programming Assignment 1 since you will reuse some parts.
+Make sure you have access to your solution for Programming Assignment 1 since you will reuse the training loop. (Optionally, use HuggingFace's `Trainer` instead.)
Copy the skeleton from SOMEWHERE.
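If you take the optional `Trainer` route mentioned above, the overall shape is: a model whose `forward` returns a loss when labels are provided, tokenized train/validation datasets, and a `TrainingArguments` object. The snippet below is only a rough sketch with assumed names (`model`, `train_dataset`, and `val_dataset` are placeholders for your own objects), not part of the assignment skeleton:

```
# Rough sketch of the optional HuggingFace Trainer route (not part of the skeleton).
# Assumes `model` returns the language-modeling loss when labels are passed in,
# and that `train_dataset` / `val_dataset` yield tokenized examples.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="lm_checkpoints",        # where checkpoints and logs are written
    per_device_train_batch_size=32,
    num_train_epochs=3,
    learning_rate=5e-4,
    logging_steps=100,
)

trainer = Trainer(
    model=model,                        # your Transformer language model
    args=training_args,
    train_dataset=train_dataset,        # placeholder names for your tokenized splits
    eval_dataset=val_dataset,
)
trainer.train()
```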
@@ -84,7 +84,7 @@ The figure below shows what we will have to implement.
Continuing to work in `forward`, now compute query, key, and value representations; don't forget the normalizers after the query and key representations.
-Now, we need to reshape the query, key, and value tensors so that the individual attention heads are stored separately. Assume your tensors have the shape \( (b, m, d) \), where \( b \) is the batch size, \( m \) the text length, and \( d \) the hidden layer size. We now need to reshape and transpose so that we get \( (b, n_h, m, d_h) \) where \( n_h \) is the number of attention heads and \( d_h \) the attention head dimensionality. Your code could be something like the following (apply this to queries, keys, and values):
+Now, we need to reshape the query, key, and value tensors so that the individual attention heads are stored separately. Assume your tensors have the shape $$ (b, m, d) $$, where $$ b $$ is the batch size, $$ m $$ the text length, and $$ d $$ the hidden layer size. We now need to reshape and transpose so that we get $$ (b, n_h, m, d_h) $$, where $$ n_h $$ is the number of attention heads and $$ d_h $$ the attention head dimensionality. Your code could be something like the following (apply this to queries, keys, and values):
```
q = q.view(b, m, n_h, d_h).transpose(1, 2)
```
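Putting the steps above together, the beginning of `forward` could look roughly like the sketch below. The attribute names (`q_proj`, `q_norm`, and so on) and the choice of `nn.LayerNorm` as the normalizer are assumptions for illustration; use whatever names and normalization the skeleton actually defines.

```
# Hedged sketch of the projection, normalization, and head-splitting steps described above.
# Layer names and the use of nn.LayerNorm are assumptions, not the skeleton's actual API.
import torch.nn as nn

class SelfAttentionSketch(nn.Module):
    def __init__(self, d_model, n_heads):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_h = n_heads
        self.d_h = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Normalizers applied to the query and key representations
        # (the skeleton might use nn.RMSNorm instead).
        self.q_norm = nn.LayerNorm(d_model)
        self.k_norm = nn.LayerNorm(d_model)

    def forward(self, x):
        b, m, _ = x.shape
        # Query, key, and value representations, with normalizers after q and k.
        q = self.q_norm(self.q_proj(x))
        k = self.k_norm(self.k_proj(x))
        v = self.v_proj(x)
        # Reshape and transpose (b, m, d) -> (b, n_h, m, d_h): one slice per attention head.
        q = q.view(b, m, self.n_h, self.d_h).transpose(1, 2)
        k = k.view(b, m, self.n_h, self.d_h).transpose(1, 2)
        v = v.view(b, m, self.n_h, self.d_h).transpose(1, 2)
        # The attention computation (next step) continues from here.
        return q, k, v
```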
@@ -103,7 +103,7 @@ We will explain the exact computations in the hint below, but conveniently enoug
In that case, the <a href="https://docs.pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html">documentation of the PyTorch implementation</a> includes a piece of code that can give you some inspiration and that you can simplify somewhat.
-Assuming your query, key, and value tensors are called \(q\), \(k\), and \(v\), then the computations you should carry out are the following. First, we compute the <em>attention pre-activations</em>, which are compute by multiplying query and key representations, and scaling:
+Assuming your query, key, and value tensors are called $$q$$, $$k$$, and $$v$$, the computations you should carry out are the following. First, we compute the <em>attention pre-activations</em>, which are computed by multiplying the query and key representations and scaling:
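Presumably this is the familiar scaled dot-product form $$ \frac{q k^\top}{\sqrt{d_h}} $$. As a point of reference, a direct (unoptimized) sketch of the full causal attention computation in plain PyTorch could look like the following; the function name and the purely causal mask are assumptions for illustration, so adapt them to what the assignment actually asks for:

```
# Rough reference sketch of causal scaled dot-product attention; q, k, v have
# shape (b, n_h, m, d_h). The function name is made up for illustration.
import math
import torch
import torch.nn.functional as F

def causal_attention_sketch(q, k, v):
    d_h = q.size(-1)
    m = q.size(-2)
    # Attention pre-activations: compare every query with every key, scaled by sqrt(d_h).
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_h)            # (b, n_h, m, m)
    # Causal mask: position i may only attend to positions j <= i.
    causal_mask = torch.triu(
        torch.ones(m, m, dtype=torch.bool, device=q.device), diagonal=1
    )
    scores = scores.masked_fill(causal_mask, float("-inf"))
    attn_weights = F.softmax(scores, dim=-1)                     # attention distributions
    return attn_weights @ v                                      # (b, n_h, m, d_h)
```

In practice, a single call to `torch.nn.functional.scaled_dot_product_attention(q, k, v, is_causal=True)` computes the same thing more efficiently.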