Instructions/Exercises/01-get-started-azure-openai.md (16 additions, 20 deletions)
````diff
@@ -5,7 +5,7 @@ lab:
 
 # Get started with Azure OpenAI service
 
-Azure OpenAI Service brings the generative AI models developed by OpenAI to the Azure platform, enabling you to develop powerful AI solutions that benefit from the security, scalability, and integration of services provided by the Azure cloud platform. In this exercise, you'll learn how to get started with Azure OpenAI by provisioning the service as an Azure resource and using Azure OpenAI Studio to deploy and explore generative AI models.
+Azure OpenAI Service brings the generative AI models developed by OpenAI to the Azure platform, enabling you to develop powerful AI solutions that benefit from the security, scalability, and integration of services provided by the Azure cloud platform. In this exercise, you'll learn how to get started with Azure OpenAI by provisioning the service as an Azure resource and using Azure AI Studio to deploy and explore generative AI models.
 
 In the scenario for this exercise, you will perform the role of a software developer who has been tasked to implement an AI agent that can use generative AI to help a marketing organization improve its effectiveness at reaching customers and advertising new products. The techniques used in the exercise can be applied to any scenario where an organization wants to use generative AI models to help employees be more effective and productive.
 
````
````diff
@@ -39,37 +39,33 @@ If you don't already have one, provision an Azure OpenAI resource in your Azure
 
 ## Deploy a model
 
-Azure OpenAI service provides a web-based portal named **Azure OpenAI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model.
+Azure provides a web-based portal named **Azure AI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Studio to deploy a model.
 
-> **Note**: As you use Azure OpenAI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise.
+> **Note**: As you use Azure AI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise.
 
-1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab.
-
-    After the new tab opens, you can close any banner notifications for new preview services that are displayed at the top of the Azure OpenAI Studio page.
-
-1. In Azure OpenAI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
+1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Studio**.
+1. In Azure AI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
     - **Deployment name**: *A unique name of your choice*
     - **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)*
-    - **Model version**: Auto-update to default
+    - **Model version**: *Use default version*
     - **Deployment type**: Standard
     - **Tokens per minute rate limit**: 5K\*
     - **Content filter**: Default
-    - **Enable dynamic quota**: Enabled
+    - **Enable dynamic quota**: Disabled
 
     > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
 
 ## Use the Chat playground
 
-Now that you've deployed a model, you can use it to generate responses based on natural language prompts. The *Chat* playground in Azure OpenAI Studio provides a chatbot interface for GPT 3.5 and higher models.
+Now that you've deployed a model, you can use it to generate responses based on natural language prompts. The *Chat* playground in Azure AI Studio provides a chatbot interface for GPT 3.5 and higher models.
 
 > **Note:** The *Chat* playground uses the *ChatCompletions* API rather than the older *Completions* API that is used by the *Completions* playground. The Completions playground is provided for compatibility with older models.
 
-1. In the **Playground** section, select the **Chat** page. The **Chat** playground page consists of three main panels (which may be arranged right-to-left horizontally, or top-to-bottom vertically depending on your screen resolution):
-    - **Setup** - used to set the context for the model's responses.
+1. In the **Playground** section, select the **Chat** page. The **Chat** playground page consists of a row of buttons and two main panels (which may be arranged right-to-left horizontally, or top-to-bottom vertically depending on your screen resolution):
+    - **Configuration** - used to select your deployment, define system message, and set parameters for interacting with your deployment.
     - **Chat session** - used to submit chat messages and view responses.
-    - **Configuration** - used to configure settings for the model deployment.
-1. In the **Configuration** panel, ensure that your gpt-35-turbo-16k model deployment is selected.
-1. In the **Setup** panel, review the default **System message**, which should be *You are an AI assistant that helps people find information.* The system message is included in prompts submitted to the model, and provides context for the model's responses; setting expectations about how an AI agent based on the model should interact with the user.
+1. Under **Deployments**, ensure that your gpt-35-turbo-16k model deployment is selected.
+1. Review the default **System message**, which should be *You are an AI assistant that helps people find information.* The system message is included in prompts submitted to the model, and provides context for the model's responses; setting expectations about how an AI agent based on the model should interact with the user.
 1. In the **Chat session** panel, enter the user query `How can I use generative AI to help me market a new product?`
 
 > **Note**: You may receive a response that the API deployment is not yet ready. If so, wait for a few minutes and try again.
````
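
The note in the hunk above mentions the *ChatCompletions* API, which takes a list of role-tagged messages rather than a single prompt string. As a rough illustration of what the playground does behind the scenes (a sketch, not part of the diff), a minimal call with the Python `openai` package (v1+) might look like the following; the deployment name, API version, and environment variable names are assumptions:

```python
import os

from openai import AzureOpenAI  # pip install openai

# Assumed environment variables; the lab's own configuration may use different names.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-35-turbo-16k",  # the *deployment* name you chose, not the base model name
    messages=[
        # The system message plays the role of the System message box in the playground.
        {"role": "system", "content": "You are an AI assistant that helps people find information."},
        # The user message is the prompt typed into the Chat session panel.
        {"role": "user", "content": "How can I use generative AI to help me market a new product?"},
    ],
)

print(response.choices[0].message.content)
```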
````diff
@@ -84,7 +80,7 @@ Now that you've deployed a model, you can use it to generate responses based on
 
 So far, you've engaged in a chat conversation with your model based on the default system message. You can customize the system setup to have more control over the kinds of responses generated by your model.
 
-1. In the **Setup** panel, under **Use a system message template**, select the **Marketing Writing Assistant** template and confirm that you want to update the system message.
+1. In the main toolbar, select the **Prompt samples**, and use the **Marketing Writing Assistant** prompt template.
 1. Review the new system message, which describes how an AI agent should use the model to respond.
 1. In the **Chat session** panel, enter the user query `Create an advertisement for a new scrubbing brush`.
 1. Review the response, which should include advertising copy for a scrubbing brush. The copy may be quite extensive and creative.
````
````diff
@@ -96,7 +92,7 @@ So far, you've engaged in a chat conversation with your model based on the defau
 
 The response should now be more useful, but to have even more control over the output from the model, you can provide one or more *few-shot* examples on which responses should be based.
 
-1. In the **Setup** panel, under **Examples**, select **Add**. Then type the following message and response in the designated boxes:
+1. Under the **System message** text box, expand the dropdown for **Add section** and select **Examples**. Then type the following message and response in the designated boxes:
 
     **User**:
 
````
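
The *few-shot* examples added in this step correspond, in ChatCompletions terms, to user/assistant message pairs placed ahead of the real query. A minimal sketch, reusing the `client` and deployment assumptions from the earlier snippet and with hypothetical example text:

```python
# Each playground "Example" becomes a user/assistant pair that precedes the actual request.
messages = [
    {"role": "system", "content": "You are a marketing writing assistant."},  # illustrative system message
    # Hypothetical few-shot example; use whatever you typed into the Examples boxes.
    {"role": "user", "content": "Write an advertisement for a new mop."},
    {"role": "assistant", "content": "Meet the SparkleMop: cleaner floors in half the time!"},
    # The actual request from the exercise.
    {"role": "user", "content": "Create an advertisement for a new scrubbing brush"},
]

response = client.chat.completions.create(model="gpt-35-turbo-16k", messages=messages)
print(response.choices[0].message.content)
```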
````diff
@@ -139,7 +135,7 @@ You've explored how the system message, examples, and prompts can help refine th
 
 ## Deploy your model to a web app
 
-Now that you've explored some of the capabilities of a generative AI model in the Azure OpenAI Studio playground, you can deploy an Azure web app to provide a basic AI agent interface through which users can chat with the model.
+Now that you've explored some of the capabilities of a generative AI model in the Azure AI Studio playground, you can deploy an Azure web app to provide a basic AI agent interface through which users can chat with the model.
 
 1. At the top right of the **Chat** playground page, in the **Deploy to** menu, select **A new web app**.
 1. In the **Deploy to a web app** dialog box, create a new web app with the following settings:
````
````diff
@@ -162,7 +158,7 @@ Now that you've explored some of the capabilities of a generative AI model in th
 
 > **Note**: You deployed the *model* to a web app, but this deployment doesn't include the system settings and parameters you set in the playground; so the response may not reflect the examples you specified in the playground. In a real scenario, you would add logic to your application to modify the prompt so that it includes the appropriate contextual data for the kinds of response you want to generate. This kind of customization is beyond the scope of this introductory-level exercise, but you can learn about prompt engineering techniques and Azure OpenAI APIs in other exercises and product documentation.
 
-1. When you have finished experimenting with your model in the web app, close the web app tab in your browser to return to Azure OpenAI Studio.
+1. When you have finished experimenting with your model in the web app, close the web app tab in your browser to return to Azure AI Studio.
````
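
The note in this hunk points out that the generated web app does not carry over the system message, examples, or parameters you set in the playground; the application has to assemble that context itself. The sketch below illustrates the kind of logic the note alludes to, using a hypothetical helper and illustrative values rather than the generated web app's actual code:

```python
# Hypothetical helper a web app backend could use to wrap each incoming user message
# with the system message, examples, and parameters that were tuned in the playground.
SYSTEM_MESSAGE = "You are a marketing writing assistant."  # illustrative
EXAMPLES = [
    {"role": "user", "content": "Write an advertisement for a new mop."},
    {"role": "assistant", "content": "Meet the SparkleMop: cleaner floors in half the time!"},
]

def answer(client, deployment, user_input):
    messages = [{"role": "system", "content": SYSTEM_MESSAGE}, *EXAMPLES,
                {"role": "user", "content": user_input}]
    response = client.chat.completions.create(
        model=deployment,
        messages=messages,
        temperature=0.7,  # illustrative values standing in for the playground parameters
        max_tokens=800,
    )
    return response.choices[0].message.content
```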
Instructions/Exercises/02-natural-language-azure-openai.md (8 additions, 6 deletions)
````diff
@@ -39,17 +39,19 @@ If you don't already have one, provision an Azure OpenAI resource in your Azure
 
 ## Deploy a model
 
-Azure OpenAI provides a web-based portal named **Azure OpenAI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model.
+Azure provides a web-based portal named **Azure AI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Studio to deploy a model.
 
-1. On the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab.
-2. In Azure OpenAI Studio, on the **Deployments** page, view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
+> **Note**: As you use Azure AI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise.
+
+1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Studio**.
+1. In Azure AI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
     - **Deployment name**: *A unique name of your choice*
     - **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)*
-    - **Model version**: Auto-update to default
+    - **Model version**: *Use default version*
    - **Deployment type**: Standard
     - **Tokens per minute rate limit**: 5K\*
     - **Content filter**: Default
-    - **Enable dynamic quota**: Enabled
+    - **Enable dynamic quota**: Disabled
 
     > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
 
````
````diff
@@ -95,7 +97,7 @@ Applications for both C# and Python have been provided. Both apps feature the sa
 
 4. Update the configuration values to include:
     - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
-    - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure OpenAI Studio).
+    - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure AI studio).
````
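
For the Python version of the app, the three configuration values listed above typically end up in a local configuration file that the code loads at startup. A sketch assuming a `.env` file and the `python-dotenv` package; the variable names are placeholders and may differ from the lab's starter code:

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

# Hypothetical .env contents (the starter code's actual names may differ):
#   AZURE_OAI_ENDPOINT=https://<your-resource>.openai.azure.com/
#   AZURE_OAI_KEY=<a key from the Keys and Endpoint page>
#   AZURE_OAI_DEPLOYMENT=<the deployment name you chose>

load_dotenv()
azure_oai_endpoint = os.getenv("AZURE_OAI_ENDPOINT")
azure_oai_key = os.getenv("AZURE_OAI_KEY")
azure_oai_deployment = os.getenv("AZURE_OAI_DEPLOYMENT")
```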
Instructions/Exercises/03-prompt-engineering.md (17 additions, 16 deletions)
````diff
@@ -39,30 +39,31 @@ If you don't already have one, provision an Azure OpenAI resource in your Azure
 
 ## Deploy a model
 
-Azure OpenAI provides a web-based portal named **Azure OpenAI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure OpenAI Studio to deploy a model.
+Azure provides a web-based portal named **Azure AI Studio**, that you can use to deploy, manage, and explore models. You'll start your exploration of Azure OpenAI by using Azure AI Studio to deploy a model.
 
-1. On the **Overview** page for your Azure OpenAI resource, use the **Go to Azure OpenAI Studio** button to open Azure OpenAI Studio in a new browser tab.
-2. In Azure OpenAI Studio, on the **Deployments** page, view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
+> **Note**: As you use Azure AI Studio, message boxes suggesting tasks for you to perform may be displayed. You can close these and follow the steps in this exercise.
+
+1. In the Azure portal, on the **Overview** page for your Azure OpenAI resource, scroll down to the **Get Started** section and select the button to go to **AI Studio**.
+1. In Azure AI Studio, in the pane on the left, select the **Deployments** page and view your existing model deployments. If you don't already have one, create a new deployment of the **gpt-35-turbo-16k** model with the following settings:
     - **Deployment name**: *A unique name of your choice*
     - **Model**: gpt-35-turbo-16k *(if the 16k model isn't available, choose gpt-35-turbo)*
-    - **Model version**: Auto-update to default
+    - **Model version**: *Use default version*
    - **Deployment type**: Standard
     - **Tokens per minute rate limit**: 5K\*
     - **Content filter**: Default
-    - **Enable dynamic quota**: Enabled
+    - **Enable dynamic quota**: Disabled
 
     > \* A rate limit of 5,000 tokens per minute is more than adequate to complete this exercise while leaving capacity for other people using the same subscription.
 
 ## Explore prompt engineering techniques
 
 Let's start by exploring some prompt engineering techniques in the Chat playground.
 
-1. In **Azure OpenAI Studio** at `https://oai.azure.com`, in the **Playground** section, select the **Chat** page. The **Chat** playground page consists of three main sections:
-    - **Setup** - used to set the context for the model's responses.
+1. In the **Playground** section, select the **Chat** page. The **Chat** playground page consists of a row of buttons and two main panels (which may be arranged right-to-left horizontally, or top-to-bottom vertically depending on your screen resolution):
+    - **Configuration** - used to select your deployment, define system message, and set parameters for interacting with your deployment.
     - **Chat session** - used to submit chat messages and view responses.
-    - **Configuration** - used to configure settings for the model deployment.
-2. In the **Configuration** section, ensure that your model deployment is selected.
-3. In the **Setup** area, select the default system message template to set the context for the chat session. The default system message is *You are an AI assistant that helps people find information*.
+2. Under **Deployments**, ensure that your gpt-35-turbo-16k model deployment is selected.
+1. Review the default **System message**, which should be *You are an AI assistant that helps people find information.*
 4. In the **Chat session**, submit the following query:
 
     ```prompt
````
````diff
@@ -79,9 +80,9 @@ Let's start by exploring some prompt engineering techniques in the Chat playgrou
 
 The response provides a description of the article. However, suppose you want a more specific format for article categorization.
 
-5. In the **Setup** section change the system message to `You are a news aggregator that categorizes news articles.`
+5. In the **Configuration** section change the system message to `You are a news aggregator that categorizes news articles.`
 
-6. Under the new system message, in the **Examples** section, select the **Add** button. Then add the following example.
+6. Under the new system message, select the **Add section** button, and choose **Examples**. Then add the following example.
 
     **User:**
 
````
````diff
@@ -126,7 +127,7 @@ Let's start by exploring some prompt engineering techniques in the Chat playgrou
     Entertainment
     ```
 
-8. Use the **Apply changes** button at the top of the **Setup** section to update the system message.
+8. Use the **Apply changes** button at the top of the **Configuration** section to save your changes.
 
 9. In the **Chat session** section, resubmit the following prompt:
 
````
````diff
@@ -144,7 +145,7 @@ Let's start by exploring some prompt engineering techniques in the Chat playgrou
 
 The combination of a more specific system message and some examples of expected queries and responses results in a consistent format for the results.
 
-10. In the **Setup** section, change the system message back to the default template, which should be `You are an AI assistant that helps people find information.` with no examples. Then apply the changes.
+10. Change the system message back to the default template, which should be `You are an AI assistant that helps people find information.` with no examples. Then apply the changes.
 
 11. In the **Chat session** section, submit the following prompt:
 
````
````diff
@@ -209,7 +210,7 @@ Applications for both C# and Python have been provided, and both apps feature th
 
 4. Update the configuration values to include:
     - The **endpoint** and a **key** from the Azure OpenAI resource you created (available on the **Keys and Endpoint** page for your Azure OpenAI resource in the Azure portal)
-    - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure OpenAI Studio).
+    - The **deployment name** you specified for your model deployment (available in the **Deployments** page in Azure AI Studio).
 5. Save the configuration file.
 
 ## Add code to use the Azure OpenAI service
````
````diff
@@ -300,7 +301,7 @@ Now you're ready to use the Azure OpenAI SDK to consume your deployed model.
 
 Now that your app has been configured, run it to send your request to your model and observe the response. You'll notice the only difference between the different options is the content of the prompt, all other parameters (such as token count and temperature) remain the same for each request.
 
-1. In the folder of your preferred language, open `system.txt` in Visual Studio Code. For each of the interations, you'll enter the **System message** in this file and save it. Each iteration will pause first for you to change the system message.
+1. In the folder of your preferred language, open `system.txt` in Visual Studio Code. For each of the interactions, you'll enter the **System message** in this file and save it. Each iteration will pause first for you to change the system message.
 
 1. In the interactive terminal pane, ensure the folder context is the folder for your preferred language. Then enter the following command to run the application.
````
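
The corrected step describes a loop in which the system message is re-read from `system.txt` before each request while the other request parameters stay fixed. A rough sketch of that pattern in Python (not the lab's actual source), reusing the earlier client and deployment assumptions:

```python
# Re-read the system message from system.txt before each request; keep the
# generation parameters fixed across iterations, as the exercise describes.
def get_response(client, deployment, user_prompt):
    with open("system.txt", encoding="utf-8") as f:
        system_message = f.read().strip()

    response = client.chat.completions.create(
        model=deployment,
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0.7,  # illustrative fixed values
        max_tokens=800,
    )
    return response.choices[0].message.content
```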