Commit ef9bddd

Various fixes to Inferentia lab

1 parent: da74db3

4 files changed, +10 -10 lines

website/docs/aiml/inferentia/compile.md (2 additions & 2 deletions)

````diff
@@ -77,14 +77,14 @@ $ kubectl logs -l app.kubernetes.io/instance=karpenter -n kube-system -f | jq
 }
 ```
 
-The Pod should be scheduled on the node provisioned by Karpenter. Check if the Pod is in it's ready state:
+The Pod should be scheduled on the node provisioned by Karpenter. Check if the Pod is in its ready state:
 
 ```bash timeout=600
 $ kubectl -n aiml wait --for=condition=Ready --timeout=10m pod/compiler
 ```
 
 :::warning
-This command can take up to 10 min.
+This command can take up to 10 minutes.
 :::
 
 Next, copy the code for compiling a model on to the Pod and run it:
````
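The `kubectl wait` command in this hunk blocks until the Pod reports the `Ready` condition or the timeout elapses. As a rough standalone illustration of those poll-until-ready semantics (not part of the workshop; `wait_for` and the marker file are hypothetical names, and no cluster is needed):

```shell
# Hypothetical helper mimicking `kubectl wait` semantics: retry a predicate
# until it succeeds or a timeout (in seconds) elapses.
wait_for() {
  timeout=$1; shift
  start=$(date +%s)
  until "$@"; do
    if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
      echo "timed out after ${timeout}s" >&2
      return 1
    fi
    sleep 1
  done
}

# Illustrative use: wait up to 5 seconds for a marker file to appear.
touch /tmp/pod-ready.marker
wait_for 5 test -f /tmp/pod-ready.marker && echo "condition met"
# prints "condition met"
```

The real command's `--timeout=10m` plays the same role as the first argument here: the caller decides how long a non-`Ready` state is tolerated before giving up.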

website/docs/aiml/inferentia/inference.md (3 additions & 3 deletions)

````diff
@@ -15,7 +15,7 @@ $ echo $AIML_DL_INF_IMAGE
 
 This is a different image than we used for training and has been optimized for inference.
 
-Now we can deploy a Pod for inference. This is the the manifest file for running the inference Pod:
+Now we can deploy a Pod for inference. This is the manifest file for running the inference Pod:
 
 ::yaml{file="manifests/modules/aiml/inferentia/inference/inference.yaml" paths="spec.nodeSelector,spec.containers.0.resources.limits"}
 
@@ -63,7 +63,7 @@ $ kubectl logs -l app.kubernetes.io/instance=karpenter -n kube-system -f | jq
 ...
 ```
 
-The inference Pod should be scheduled on the node provisioned by Karpenter. Check if the Pod is in it's ready state:
+The inference Pod should be scheduled on the node provisioned by Karpenter. Check if the Pod is in its ready state:
 
 :::note
 It can take up to 12 minutes to provision the node, add it to the EKS cluster, and start the pod.
@@ -96,7 +96,7 @@ This output shows the capacity this node has:
 }
 ```
 
-We can see that this node as a `aws.amazon.com/neuron` of 1. Karpenter provisioned this node for us as that's how many neuron the Pod requested.
+We can see that this node has an `aws.amazon.com/neuron` of 1. Karpenter provisioned this node for us as that's how many Neuron cores the Pod requested.
 
 ### Run inference
 
````
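The last hunk corrects the description of the node's `aws.amazon.com/neuron` capacity. As a standalone sketch of reading that value out of a node description (using a stubbed JSON snippet with illustrative `cpu` and `memory` values, since no live cluster or `kubectl get node` output is assumed here):

```shell
# Stub of the "capacity" block a node description might contain; in the lab
# this would come from something like `kubectl get node <name> -o json`.
node_capacity='{
  "aws.amazon.com/neuron": "1",
  "cpu": "4",
  "memory": "7910096Ki"
}'

# Extract the Neuron device count with sed (jq would work equally well).
neuron=$(printf '%s\n' "$node_capacity" \
  | sed -n 's/.*"aws\.amazon\.com\/neuron": "\([0-9]*\)".*/\1/p')
echo "neuron devices: $neuron"
# prints "neuron devices: 1"
```

A value of `1` here matches the Pod's resource limit, which is what Karpenter used to size the provisioned node.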

website/docs/aiml/inferentia/karpenter.md (2 additions & 2 deletions)

````diff
@@ -9,7 +9,7 @@ In this section we will configure Karpenter to allow the creation of Inferentia
 You can learn more about Karpenter in the [Karpenter module](../../fundamentals/compute/karpenter/index.md) that's provided in this workshop.
 :::
 
-Karpenter has been installed in our EKS cluster, and runs as a deployment:
+Karpenter has been installed in our EKS cluster, and runs as a Deployment:
 
 ```bash
 $ kubectl get deployment -n kube-system
@@ -32,4 +32,4 @@ $ kubectl kustomize ~/environment/eks-workshop/modules/aiml/inferentia/nodepool
 | envsubst | kubectl apply -f-
 ```
 
-Now the NodePool is ready for the creation for our training and inference Pods.
+Now the NodePool is ready for the creation of our training and inference Pods.
````
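The `kubectl kustomize ... | envsubst | kubectl apply` pipeline shown in this file renders the NodePool manifest by substituting environment variables into placeholders before applying it. A minimal sketch of just that substitution step, using `sed` in place of `envsubst` and an illustrative template fragment and variable value (the label and value below are stand-ins, not taken from the workshop manifest):

```shell
# Illustrative template fragment; ${EKS_CLUSTER_NAME} is the kind of
# placeholder the real pipeline fills in with envsubst.
EKS_CLUSTER_NAME=eks-workshop
template='metadata:
  labels:
    cluster: ${EKS_CLUSTER_NAME}'

# Replace the placeholder with the variable value, mimicking what
# envsubst does for this one variable.
rendered=$(printf '%s\n' "$template" \
  | sed "s/\${EKS_CLUSTER_NAME}/$EKS_CLUSTER_NAME/g")
echo "$rendered"
# last line prints "    cluster: eks-workshop"
```

Piping the rendered manifest straight into `kubectl apply -f-` then creates the NodePool without ever writing the substituted file to disk.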

website/docs/aiml/inferentia/wrapup.md (3 additions & 3 deletions)

````diff
@@ -1,11 +1,11 @@
 ---
 title: "Real world implementation"
-sidebar_position: 40
+sidebar_position: 50
 ---
 
-In the previous sections we've seen how we can use Amazon EKS to build models for AWS Inferentia and deploy models on EKS using Inferentia nodes. In both these examples we've executed Python code inside our containers from our command-line. In a real world scenario we do not want to run these commands manually, but rather have the container execute the commands.
+In the previous sections we've seen how we can use Amazon EKS to train models for AWS Inferentia and deploy models on EKS using Inferentia nodes. In both these examples we've executed Python code inside our containers from our command line. In a real world scenario we do not want to run these commands manually, but rather have the container execute the commands.
 
-For building the model we would want use the DLC container as our base image and add our Python code to it. We would then store this container image in our container repository like Amazon ECR. We would use a Kubernetes Job to run this container image on EKS and store the generated model to S3.
+For training the model we would want to use the DLC container as our base image and add our Python code to it. We would then store this container image in our container repository like Amazon ECR. We would use a Kubernetes Job to run this container image on EKS and store the generated model to S3.
 
 ![Build Model](./assets/CreateModel.webp)
 
````
