Please note that this page neither hosts nor makes available any of the listed files. None of them can be downloaded from here.
| Filename | Size |
| --- | --- |
| 001. Chapter 1. Python Environment Setup.mp4 | 90.23MB |
| 002. Chapter 2. Tokenizing text.mp4 | 99.09MB |
| 003. Chapter 2. Converting tokens into token IDs.mp4 | 39.96MB |
| 004. Chapter 2. Adding special context tokens.mp4 | 34.81MB |
| 005. Chapter 2. Byte pair encoding.mp4 | 69.70MB |
| 006. Chapter 2. Data sampling with a sliding window.mp4 | 91.88MB |
| 007. Chapter 2. Creating token embeddings.mp4 | 32.81MB |
| 008. Chapter 2. Encoding word positions.mp4 | 49.18MB |
| 009. Chapter 3. A simple self-attention mechanism without trainable weights Part 1.mp4 | 173.89MB |
| 010. Chapter 3. A simple self-attention mechanism without trainable weights Part 2.mp4 | 54.97MB |
| 011. Chapter 3. Computing the attention weights step by step.mp4 | 63.51MB |
| 012. Chapter 3. Implementing a compact self-attention Python class.mp4 | 33.62MB |
| 013. Chapter 3. Applying a causal attention mask.mp4 | 56.36MB |
| 014. Chapter 3. Masking additional attention weights with dropout.mp4 | 16.80MB |
| 015. Chapter 3. Implementing a compact causal self-attention class.mp4 | 41.52MB |
| 016. Chapter 3. Stacking multiple single-head attention layers.mp4 | 45.54MB |
| 017. Chapter 3. Implementing multi-head attention with weight splits.mp4 | 127.05MB |
| 018. Chapter 4. Coding an LLM architecture.mp4 | 62.12MB |
| 019. Chapter 4. Normalizing activations with layer normalization.mp4 | 84.01MB |
| 020. Chapter 4. Implementing a feed forward network with GELU activations.mp4 | 102.08MB |
| 021. Chapter 4. Adding shortcut connections.mp4 | 44.29MB |
| 022. Chapter 4. Connecting attention and linear layers in a transformer block.mp4 | 64.11MB |
| 023. Chapter 4. Coding the GPT model.mp4 | 66.96MB |
| 024. Chapter 4. Generating text.mp4 | 65.74MB |
| 025. Chapter 5. Using GPT to generate text.mp4 | 71.59MB |
| 026. Chapter 5. Calculating the text generation loss cross entropy and perplexity.mp4 | 97.57MB |
| 027. Chapter 5. Calculating the training and validation set losses.mp4 | 94.71MB |
| 028. Chapter 5. Training an LLM.mp4 | 138.82MB |
| 029. Chapter 5. Decoding strategies to control randomness.mp4 | 20.07MB |
| 030. Chapter 5. Temperature scaling.mp4 | 42.17MB |
| 031. Chapter 5. Top-k sampling.mp4 | 26.26MB |
| 032. Chapter 5. Modifying the text generation function.mp4 | 33.45MB |
| 033. Chapter 5. Loading and saving model weights in PyTorch.mp4 | 22.00MB |
| 034. Chapter 5. Loading pretrained weights from OpenAI.mp4 | 106.61MB |
| 035. Chapter 6. Preparing the dataset.mp4 | 103.83MB |
| 036. Chapter 6. Creating data loaders.mp4 | 54.34MB |
| 037. Chapter 6. Initializing a model with pretrained weights.mp4 | 42.27MB |
| 038. Chapter 6. Adding a classification head.mp4 | 73.66MB |
| 039. Chapter 6. Calculating the classification loss and accuracy.mp4 | 64.48MB |
| 040. Chapter 6. Fine-tuning the model on supervised data.mp4 | 162.72MB |
| 041. Chapter 6. Using the LLM as a spam classifier.mp4 | 35.89MB |
| 042. Chapter 7. Preparing a dataset for supervised instruction fine-tuning.mp4 | 47.20MB |
| 043. Chapter 7. Organizing data into training batches.mp4 | 79.82MB |
| 044. Chapter 7. Creating data loaders for an instruction dataset.mp4 | 32.29MB |
| 045. Chapter 7. Loading a pretrained LLM.mp4 | 24.68MB |
| 046. Chapter 7. Fine-tuning the LLM on instruction data.mp4 | 98.16MB |
| 047. Chapter 7. Extracting and saving responses.mp4 | 42.30MB |
| 048. Chapter 7. Evaluating the fine-tuned LLM.mp4 | 102.12MB |
| Bonus Resources.txt | 70B |
| Get Bonus Downloads Here.url | 180B |