Exploring the ChatGPT API with Python | MLExpert - Get Things Done with AI Bootcamp
Exploring the ChatGPT API with Python
Exploring the ChatGPT API with Python: Multiple Completions, Max Tokens, ...
OpenAI recently introduced[1] the ChatGPT API along with the Whisper API, which has
already been integrated into some popular apps such as Snapchat and Instacart. The new
ChatGPT API features a new model called gpt-3.5-turbo that has replaced the
previously most commonly used model, text-davinci-003.
This new model is the exact same model used in the ChatGPT web app. Also, it is about
10x cheaper! The OpenAI team recommends the new model as a replacement for all
previously used models.
This part will utilize the official Python library provided by OpenAI, which includes a new
component called ChatCompletion specifically designed for ChatGPT. Let's get started!
Note: In this tutorial, we will be using Jupyter Notebook to run the code. If you prefer to
follow along, you can access the notebook here: open the notebook
Get API Key
To use the API, you will need to register for an account on the OpenAI platform and
generate an API key from this page: https://platform.openai.com/account/api-keys
Setup
You'll need the openai[2] library, version 0.27.0. We'll also use the tiktoken[3] library to
count the number of tokens in a given text:
pip install -qqq openai==0.27.0
pip install -qqq tiktoken==0.3.0
Next, let's add imports and set the API key:
import openai
import tiktoken
from IPython.display import display, Markdown
openai.api_key = "YOUR API KEY"
Call the API
To start with the API, we'll need a prompt. Let's reuse the prompt from the prompt
engineering guide:
prompt = """
Write me a plan of how to invest 10,000 USD with the goal of making maximum profit in 1 year.
Give specific investments and allocation percentages.
"""
Next, we'll make the API call:
result = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You're an expert personal finance advisor."},
        {"role": "user", "content": prompt}
    ]
)
result
The model parameter specifies the version of the GPT model to use, in this case
gpt-3.5-turbo. The messages parameter is a list of two dictionaries representing the
conversation between the user and the model. The first dictionary has a system role and
provides a statement to set the context for the conversation. The second dictionary has a
user role and contains the prompt.
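Since every call in this tutorial uses the same system message, building the messages list can be factored into a small helper. This is a sketch of our own; build_messages is not part of the openai library:

```python
def build_messages(prompt, system="You're an expert personal finance advisor."):
    """Build the messages list expected by ChatCompletion.create."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": prompt},
    ]
```

The call above then becomes openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=build_messages(prompt)).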
The API response is in JSON format and contains information regarding the response
generated by ChatGPT:
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "\n\nBefore creating an investment plan, it\u2019s important to note ...",
        "role": "assistant"
      }
    }
  ],
  "created": 1678135181,
  "id": "chatcmpl-6rBuPqnXPFtfdLNOcnkvAoldGykQ3",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 397,
    "prompt_tokens": 54,
    "total_tokens": 451
  }
}
Let's have a look at two key-value pairs:

* choices: an array that contains an object with the response message, index, and finish_reason
* usage: an object that provides the number of tokens used to generate the response
In this case, the choices array contains only one object, which represents the response
message generated by ChatGPT. The message contains a recommended investment plan
and some advice for the user. The usage object indicates the number of tokens used to
generate the response, which can be useful to track usage and costs.
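Since billing is per token, the usage object translates directly into a cost estimate. The sketch below hard-codes the $0.002 per 1,000 tokens that OpenAI quoted for gpt-3.5-turbo at launch; treat that constant as an assumption and verify it against the current pricing page:

```python
PRICE_PER_1K_TOKENS = 0.002  # USD; gpt-3.5-turbo launch pricing (an assumption - verify)

def estimate_cost(usage):
    """Rough USD cost of one API call, given its usage object."""
    return usage["total_tokens"] / 1000 * PRICE_PER_1K_TOKENS

# the usage object from the response above: 451 total tokens, roughly $0.0009
estimate_cost({"completion_tokens": 397, "prompt_tokens": 54, "total_tokens": 451})
```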
We will use a helper function to extract the message content and display it in Markdown
format:
def show_choice(choice):
    display(Markdown(choice["message"]["content"]))

show_choice(result["choices"][0])
[ChatGPT Response]
Conversations
The messages parameter is designed to contain the whole conversation between the
user and ChatGPT. It's important to note that the API does not have the ability to record
or remember previous conversations.
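Keeping history is therefore the client's job: after each call, append the assistant's reply and the next user turn before calling again. A minimal sketch (the helper name is ours, not part of the library):

```python
def extend_conversation(messages, result, next_user_message):
    """Return a new history with the assistant's reply and the next user turn appended."""
    reply = result["choices"][0]["message"]["content"]
    return messages + [
        {"role": "assistant", "content": reply},
        {"role": "user", "content": next_user_message},
    ]
```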
To create a conversation with ChatGPT, we can use the result object from the previous
API call and add the generated response to the messages list. Then, we can make
another API call using this extended messages list to generate a tweet about the
personalized finance plan:
answer = result["choices"][0]["message"]["content"]

format_prompt = "Write the plan as a tweet"

result = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You're an expert personal finance advisor."},
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": format_prompt},
    ]
)

show_choice(result["choices"][0])
[ChatGPT Response]
Investing 60% in stocks, 20% in alternative investments, 10-20% in cash reserves, and
remaining 10-20% in bonds can maximize profit in 1 year for $10K portfolio. However,
always do your research and seek professional advice. #personalfinance #investing
#portfolioallocation
API Options
The API has a wide array of options[5]. We'll have a look at limiting the number of tokens,
getting multiple completions, and adjusting the temperature.
Maximum Tokens
ChatGPT's pricing is token-based, meaning the cost is determined by the number of
tokens you use.
There is a maximum token limit of 4096 per API request. Both input and output
tokens count toward this limit and toward your bill.
To control the maximum number of tokens generated by the model, the max_tokens
parameter can be specified:
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You're an expert personal finance advisor."},
        {"role": "user", "content": prompt}
    ],
    max_tokens=256
)

response
{
  "choices": [
    {
      "finish_reason": "length",
      "index": 0,
      "message": {
        "content": "\n\nAs an AI language model, I cannot provide investment advice or ...",
        "role": "assistant"
      }
    }
  ],
  "created": 1678135229,
  "id": "chatcmpl-6rBvBDH2qjSFhv0lwPqB1az7dBTU",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 256,
    "prompt_tokens": 54,
    "total_tokens": 310
  }
}
Note that the finish_reason is now length and completion_tokens is equal to the
max_tokens value of 256.
If you want to know how many tokens your prompts contain, you can use the "tiktoken"
library to create this function:
def num_tokens_from_messages(messages):
    """
    Returns the number of tokens used by a list of messages.
    Code from: https://github.com/openai/openai-cookbook
    """
    encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
    num_tokens = 0
    for message in messages:
        num_tokens += 4  # every message follows <im_start>{role/name}\n{content}<im_end>\n
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":  # if there's a name, the role is omitted
                num_tokens += -1  # role is always required and always 1 token
    num_tokens += 2  # every reply is primed with <im_start>assistant
    return num_tokens
The num_tokens_from_messages function takes a list of messages and returns the
number of tokens that would be used by ChatGPT. Let's check the last messages:
messages = [
    {"role": "system", "content": "You're an expert personal finance advisor."},
    {"role": "user", "content": prompt}
]

num_tokens_from_messages(messages)
54
The number 54 is the exact same value returned by the usage.prompt_tokens attribute
of the API response. This attribute indicates the number of tokens used by the prompt
provided in the API request.
Generate Multiple Responses
To generate multiple completions for a prompt using the ChatGPT API, you can set the n
parameter to specify the number of completions you want:
result = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You're an expert personal finance advisor."},
        {"role": "user", "content": prompt}
    ],
    n=3  # number of completions to generate
)

for i, choice in enumerate(result["choices"]):
    print(f"Response {i + 1}:")
    show_choice(choice)
    print("\n")
[ChatGPT Responses]
Temperature
You can adjust the level of randomness and creativity in the responses generated by
ChatGPT using the temperature parameter. The temperature ranges from 0 to 2, with a
default value of 1. A lower temperature will result in more focused and predictable
responses, while a higher temperature will result in more diverse and unpredictable
responses:
result = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You're an expert personal finance advisor."},
        {"role": "user", "content": prompt}
    ],
    temperature=0
)

show_choice(result["choices"][0])
[ChatGPT Response]
result = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You're an expert personal finance advisor."},
        {"role": "user", "content": prompt}
    ],
    temperature=1
)

show_choice(result["choices"][0])
[ChatGPT Response]
result = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You're an expert personal finance advisor."},
        {"role": "user", "content": prompt}
    ],
    temperature=1.5  # a higher setting
)

show_choice(result["choices"][0])
[ChatGPT Response]
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You're an expert personal finance advisor."},
        {"role": "user", "content": prompt}
    ],
    temperature=2  # the maximum setting
)

show_choice(response["choices"][0])
[ChatGPT Response]
Gradually increasing the temperature parameter can lead to more random and
nonsensical completions, while finding the right balance of values can result in more
focused and relevant responses.
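To compare settings side by side, you can build one request per temperature value and send them in a loop. A sketch; the helper and the specific values below are our own, not from the original examples:

```python
def completion_request(prompt, temperature):
    """Assemble the keyword arguments for one ChatCompletion.create call."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You're an expert personal finance advisor."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }

prompt = "How should I invest 10,000 USD?"  # any prompt works here

# one request per setting, from fully deterministic to maximally random
requests = [completion_request(prompt, t) for t in (0, 0.7, 1.4, 2)]
```

Each entry can then be sent with openai.ChatCompletion.create(**request) and the outputs compared with show_choice.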
Conclusion
In conclusion, we have explored various features and functionalities of OpenAI's ChatGPT
API. We have seen how to control the number of tokens, get multiple completions, adjust
the temperature, and extract message content using helper functions.
Compared to the web app, the ChatGPT API offers significantly more customization
options. One particularly interesting area to explore is creating and modifying
conversations, which can potentially result in better response generation.
References

1. ChatGPT and Whisper APIs
2. OpenAI Python Library
3. Tiktoken Library
4. ChatGPT API Tutorial
5. ChatGPT API Reference