|
105 | 105 | " train=True,\n", |
106 | 106 | ")\n", |
107 | 107 | "\n", |
| 108 | + "\n", |
108 | 109 | "for features, target in data_set:\n", |
| 110 | + " # print the features and targets here\n", |
109 | 111 | " pass" |
110 | 112 | ] |
111 | 113 | }, |
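The loop in the cell above iterates over the dataset one sample at a time. A minimal sketch of what filling it in might look like, using a stand-in `TensorDataset` of random values rather than the actual `PenguinDataset`:

```python
import torch
from torch.utils.data import TensorDataset

# Stand-in dataset: 5 samples with 4 features each, integer targets 0..4.
data_set = TensorDataset(torch.randn(5, 4), torch.arange(5))

# Indexing or iterating over a Dataset yields one (features, target) pair.
for features, target in data_set:
    print(features.shape, target.item())
```

Each iteration yields a single sample, which is why the `DataLoader` wrapper introduced later is needed for mini-batched training.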
|
124 | 126 | "source": [ |
125 | 127 | "### Task 4: Applying transforms to the data\n", |
126 | 128 | "\n", |
127 | | - "A common way of transforming inputs to neural networks is to apply a series of transforms using ``torchvision.transforms.Compose``. The ``Compose`` object takes a list of callable objects and applies them to the incoming data.\n", |
| 129 | + "A common way of transforming inputs to neural networks is to apply a series of transforms using ``torchvision.transforms.Compose``. The [``Compose``](https://pytorch.org/vision/stable/generated/torchvision.transforms.Compose.html) object takes a list of callable objects (e.g., functions) and applies them to the incoming data.\n",
128 | 130 | "\n", |
129 | 131 | "These transforms can be very useful for mapping between file paths and tensors of images, etc.\n", |
130 | 132 | "\n", |
|
141 | 143 | "outputs": [], |
142 | 144 | "source": [ |
143 | 145 | "from torchvision.transforms import Compose\n", |
| 146 | + "# import some useful functions here, see https://pytorch.org/docs/stable/torch.html\n", |
| 147 | + "# where `tensor` and `eye` are used for constructing tensors,\n", |
| 148 | + "# and float32 (rather than Python's default float64) matches PyTorch's default dtype\n",
| 149 | + "from torch import tensor, eye, float32\n",
144 | 150 | "\n", |
145 | | - "# Apply the transforms we need to the PenguinDataset to get out inputs\n", |
| 151 | + "# Apply the transforms we need to the PenguinDataset to get our inputs and\n",
146 | 152 | "# targets as Tensors." |
147 | 153 | ] |
148 | 154 | }, |
|
154 | 160 | "\n", |
155 | 161 | "- Once we have created a ``Dataset`` object, we wrap it in a ``DataLoader``.\n", |
156 | 162 | " - The ``DataLoader`` object allows us to put our inputs and targets in mini-batches, which makes for more efficient training.\n", |
157 | | - " - Note: rather than supplying one input-target pair to the model at a time, we supply \"mini-batches\" of these data at once.\n", |
| 163 | + " - Note: rather than supplying one input-target pair to the model at a time, we supply \"mini-batches\" of these data at once (typically a small power of 2, like 16 or 32).\n", |
158 | 164 | " - The number of items we supply at once is called the batch size.\n", |
159 | 165 | " - The ``DataLoader`` can also randomly shuffle the data each epoch (when training).\n", |
160 | 166 | " - It allows us to load different mini-batches in parallel, which can be very useful for larger datasets and images that can't all fit in memory at once.\n", |
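The bullet points above can be sketched as follows, using a stand-in `TensorDataset` in place of the `PenguinDataset` (the shapes and batch size are illustrative assumptions):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset: 100 samples, 4 features each, 3 classes.
data_set = TensorDataset(torch.randn(100, 4), torch.randint(0, 3, (100,)))

# batch_size sets how many input-target pairs each mini-batch holds;
# shuffle=True reorders the data every epoch (use for training only).
loader = DataLoader(data_set, batch_size=16, shuffle=True)

for batch_features, batch_targets in loader:
    print(batch_features.shape)  # full batches are torch.Size([16, 4])
    break
```

Note that with 100 samples and a batch size of 16, the final mini-batch holds only 4 samples; pass `drop_last=True` to the `DataLoader` if you want full batches only.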
|