OpenAI API: From the Basics to a Chatbot Implementation with GPT-4o Mini & Streamlit.

From absolute beginner to chatbot builder (with GPT-4o Mini).

Kris Ograbek · Follow
Published in Towards AI · 15 min read

Image by the author with the help of Canva & DALL-E.
ChatGPT is a powerful tool.

But it lacks one feature that is crucial for 95% of companies.

Personalization.

Today, we have thousands of no-code AI tools. Most of them use OpenAI and Anthropic models (GPT-4o, Claude 3.5 Sonnet) behind the scenes. No-code AI tools offer better customization than ChatGPT.

But the highest level of customization comes from writing your own code!

And that's why I'm writing this article.

I want to teach you how to use OpenAI's latest large language models: GPT-4o (smarter) and GPT-4o Mini (faster and cheaper) in Python.

After reading the article, you will:

+ Know how to use OpenAI models in Python through the OpenAI API,
+ Learn the crucial Large Language Model parameters,
+ Learn how to build your first AI Chatbot powered by GPT-4o Mini.

Are you ready to build?
Part 1: Preparing the project.

Step 1: Prepare the Python environment.

Note: I described in detail several ways to prepare the Python environment and to create and load the OpenAI API key in this article:

How to Start Your First AI Project with Python and OpenAI API.
The step-by-step guide for beginners (with the full setup).

If this is your first time doing this, read the article above as a reference for this step.

I assume you know how (or used my reference article) to:

1. Create the project folder.
2. Prepare the Python environment.
3. Create the OpenAI API key.
4. Load the OpenAI API key.

I'll use a Python virtual environment, and I'll use a .env file to store environment variables.

So, my .env file looks like this:

OPENAI_API_KEY=sk-proj-...

OPENAI_API_KEY is the actual secret key you created on the OpenAI platform.
Step 2: Install the required packages.

For the project, we'll need:

+ openai: the library for using OpenAI models through API calls.
+ streamlit: for creating a user interface for the chatbot.
+ python-dotenv: for loading secret variables from the .env file.
+ tiktoken: for counting tokens.

So, let's install them with pip:

pip install openai streamlit python-dotenv tiktoken

Awesome! Now we're ready to run the Python code.
Part 2: Getting responses from the OpenAI API.

Step 1: Loading environment variables.

For the script, we'll need to load the OPENAI_API_KEY variable from the .env file. We load it using load_dotenv() from the python-dotenv library:

from dotenv import load_dotenv

load_dotenv()

Step 2: Initializing the OpenAI client.

We need an instance of the OpenAI class:

from openai import OpenAI

client = OpenAI()

Step 3: Calling the GPT-4o Mini model.

Let's pass our first prompt to GPT-4o Mini:

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": "What is the capital of Poland?"}
    ]
)

print(completion.choices[0].message.content)

# Prints: The capital of Poland is Warsaw.
Let's see what we did here:

+ Called the client.chat.completions.create() function.
+ Passed GPT-4o Mini to the model parameter.
+ Used the messages parameter to pass our prompt.
+ Displayed the response by digging into the completion object.

Here's an example of the entire completion object:
{
  "id": "chatcmpl-...",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "message": {
        "content": "The capital of Poland is Warsaw.",
        "role": "assistant",
        "function_call": null,
        "tool_calls": null
      }
    }
  ],
  "created": 1724142103,
  "model": "gpt-4o-mini-2024-07-18",
  "object": "chat.completion",
  "service_tier": null,
  "system_fingerprint": "fp_...",
  "usage": {
    "completion_tokens": ...,
    "prompt_tokens": 14,
    "total_tokens": ...
  }
}

To get the model's response, we need to dig into the completion.choices[0].message.content key.
Part 3: Describing the OpenAI API's Parameters.

Parameter 1: Message Roles.

Let's talk about the major parameters in the OpenAI API.

As you noticed, the messages parameter is an array of objects. Each object consists of 2 key/value pairs:

Role - defines the "author" of the message. We've got 3 roles:

1. User - it's you.
2. Assistant - it's the AI model.
3. System - it's the message that an AI model remembers throughout the entire conversation.

Content - it's the actual message.

Here's the image to picture that:
messages = [
    {"role": "system",
     "content": "You are an assistant."},
    {"role": "user",
     "content": "tell me a joke"},
    {"role": "assistant",
     "content": "Why did the chicken..."},
]

The system message sets the behavior of the assistant; the user and assistant messages form the chat; the assistant always remembers the system prompt. Image by the author.
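In code, a running conversation is just this list growing over time: each turn appends a user message and then the assistant's reply, while the system message stays at index 0. A minimal sketch (the add_turn helper is my own illustration, not part of the OpenAI library):

```python
def add_turn(messages: list, user_text: str, assistant_text: str) -> list:
    """Append one user/assistant exchange to the conversation history."""
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "assistant", "content": assistant_text})
    return messages

# The system message always stays first in the list.
history = [{"role": "system", "content": "You are an assistant."}]
add_turn(history, "tell me a joke", "Why did the chicken...")

print([m["role"] for m in history])  # ['system', 'user', 'assistant']
```

Passing the whole history to the messages parameter on every call is exactly how the chatbot at the end of this article keeps its "memory".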
Let's focus on the system prompt.

System Message.

The system message sets the behavior of the AI model (assistant).

AI models keep this message always "on top". Even during long conversations, assistants remember the system prompt very well. It's like whispering the same message in their ear all the time.

Here are examples of how you can use the system prompt:

+ Specify the output format.
+ Define the assistant's personality.
+ Set context for the conversation.
+ Define constraints and limitations.
+ Provide instructions on how to respond.

Let me show you an example of using various system messages (but the same user prompt):
system_messages = [
    "You are a helpful assistant.",  # default
    "You answer every user query with 'Just google it!'",
    "Act as a drunk Italian who speaks pretty bad English.",
    "Act as Steven A. Smith. You've got very controversial opinions on anything.",
    "Act as a teenage Bieber groupie who steers every conversation into saying h...",
]

prompt = "Give me a synonym to 'smart'"

for system_message in system_messages:
    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": prompt},
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    )
    chat_message = response.choices[0].message.content
    print(f"Using system message: {system_message}")
    print(f"Response: {chat_message}\n")
Using system message: You are a helpful assistant.
Response: A synonym for "smart" is "intelligent."

Using system message: You answer every user query with 'Just google it!'
Response: Just google it!

Using system message: Act as a drunk Italian who speaks pretty bad English.
Response: Ah, you want a word, huh? Okay, okay... how about "clever"? Like, you...

Using system message: Act as Steven A. Smith. You've got very controversial opini...
Response: Oh, come on now! You can do better than that! Smart? Really? Are we go...

Using system message: Act as a teenage Bieber groupie who steers every conversat...
Response: Oh my gosh, "intelligent" is a great synonym! But, you know what's rea...
The same user prompt + various system prompts = various responses.

Question: What's your favorite response? Mine is Steven A. Smith 😂
Parameter 2: Tokens.

A token is a chunk of text that Large Language Models read or generate.

Here's the key information about tokens:

+ A token is the smallest unit of text that AI models process.
+ Tokens don't have a defined length. Some are only 1 character long, others can be longer words.
+ Tokens can be words, sub-words, punctuation marks, or special symbols.
+ As a rule of thumb, a token corresponds to 3/4 of a word. So 100 tokens is roughly 75 words.

So let me show you how to count tokens.
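The rule of thumb above is easy to turn into code. Keep in mind this is only a rough heuristic, not a real tokenizer; exact counts come from an actual encoding:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: 100 tokens ~ 75 words, i.e. ~4/3 tokens per word."""
    words = len(text.split())
    return round(words * 100 / 75)

print(estimate_tokens("OpenAI models read text as tokens, not words"))  # 8 words -> 11
```

Estimates like this are handy for quick cost ballparks before you reach for a tokenizer library.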
Let's ask GPT-4o Mini to describe my country, Poland:

from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"

completion = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "user", "content": "Describe Poland in 3 sentences"}
    ]
)

response = completion.choices[0].message.content
print(response)
Here's the response (yours might be different):

Poland is a Central European country known for its rich history, vibrant culture, and resilient spirit. It boasts a diverse landscape that ranges from the sandy beaches of the Baltic Sea to the rugged peaks of the Carpathian Mountains. Major cities like Warsaw, Krakow, and Gdansk blend historical architecture with modern amenities, reflecting Poland's dynamic evolution over the centuries.
And now, let's count!

First, we'll count the words in Python:

words = len(response.split())
characters = len(response)

print(f"The response has {words} words and {characters} characters")

# Prints: The response has 59 words and 393 characters.

Awesome! Our short description has 59 words and 393 characters.
And now I'll show you 2 methods to count tokens:

1. Using an online tokenizer (the Tiktokenizer app).
2. Using the tiktoken library in Python.

Let's start with the first one.

Counting tokens in the Tiktokenizer app.

I'll take the description of Poland and paste it into the app. Here are the results:
Encoding: o200k_base. Token count: 74.

The tokens of our simple description. Image from the Tiktokenizer app.
Important: Various LLMs use various token encodings. When you count tokens for GPT-4o or GPT-4o Mini, ensure you use the "o200k_base" encoding.

The description takes 74 tokens (or 59 words).

Please notice that most words use single tokens (the words highlighted with a single color). As a result, we have 15 more tokens than words.

Let's do some simple math:

59 (words) / 74 (tokens) ≈ 0.8

For our description, we get roughly 80 words for 100 tokens. That's close to 75 (our rule of thumb).

Counting tokens in Python.
To count tokens in code, we'll use the tiktoken library. Here's how:

import tiktoken

enc = tiktoken.encoding_for_model("gpt-4o-mini")
tokens = enc.encode(response)

print(f"The response has {len(tokens)} tokens.")

# Prints: The response has 74 tokens.
Both counting methods return the same number of tokens.

Do they return the exact same tokens?

import tiktoken

enc = tiktoken.get_encoding("o200k_base")
print(enc.encode(response))

# Prints: [7651, 427, 382, 261, 13399, 11836, 4931, 5542, 395, 1617,
#  10358, 5678, 11, 35180, 9674, 11, 326, ...]
What am I comparing here?

Let's look at the image from the tokenizer app. Below the text, it prints a list of numbers (tokens).

It's because every token has a number to represent it.

It starts with: [7651, 427, 382, ...]. And so on.

And my little code snippet also returns a list of numbers (tokens). As you can see, they are identical.

Awesome! This section grew much larger than I expected.

Let's move to the LLM parameters.
Parameter 3: Large Language Model Parameters.

I want to show you 3 parameters:

1. Temperature - to regulate the model's reasoning and creativity.
2. Seed - to reproduce responses (even the creative ones).
3. Max tokens - to limit the number of returned tokens.
Temperature

The temperature in LLMs is the trade-off between reasoning and creativity.

+ Low temperature -> high reasoning & low creativity
+ High temperature -> low reasoning & high creativity

Low Temperature (close to 0):

+ Decreases the chance of hallucinations.
+ The model's output is less random and creative.
+ The model's output is more predictable and focused.
+ The model tends to choose the most likely words and phrases.

High Temperature (close to 2):

+ Increases randomness and creativity in the output.
+ The model is more likely to choose less probable words and phrases.
+ Leads to more diverse, unexpected, and sometimes nonsensical responses.

What's the optimal temperature?

The optimal temperature doesn't exist. It depends on the tasks and use cases. So here are some examples.

Use low temperature for:

+ Coding.
+ Translations.
+ Generating factual content.
+ Answering specific questions.

Use high temperature for:

+ Creative writing.
+ Brainstorming ideas.
+ Generating diverse responses for chatbots.

Here's an image to visualize my description:
Temperature in Large Language Models: low temperature gives high reasoning, low creativity, and predictable output; high temperature gives low reasoning, high creativity, and diverse output, with the use cases and examples described above. Image by the author.
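Under the hood, temperature rescales the model's next-token scores (logits) before sampling: dividing by a small temperature sharpens the probability distribution toward the top token, while a large temperature flattens it. A toy illustration of that effect (my own sketch, not OpenAI's actual sampling code):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into probabilities, scaled by the temperature."""
    scaled = [x / temperature for x in logits]
    exps = [math.exp(x) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # made-up scores for 3 candidate tokens

# Low temperature: the most likely token dominates (near-deterministic).
print([round(p, 3) for p in softmax_with_temperature(logits, 0.2)])

# High temperature: probabilities even out, so sampling gets more random.
print([round(p, 3) for p in softmax_with_temperature(logits, 2.0)])
```

With temperature 0.2, the top token gets over 99% of the probability mass; with temperature 2.0, it drops to about half, so the other candidates get sampled much more often.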
Let's see the temperature in action.

To set the model's temperature, we use the (surprise!) temperature parameter.

Here's how:

from openai import OpenAI

client = OpenAI()

products_completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": "Give me 10 product name ideas for an eco-friendly sportswear for basketball players."}
    ],
    temperature=0.0  # set the temperature here
)

pr_response = products_completion.choices[0].message.content
print(pr_response)
For our LLM, we have the following task:

"Give me 10 product name ideas for eco-friendly sportswear for basketball players."

So it's a creative task, meaning we should use a high temperature. But I set it to 0 to show you the not-so-creative results.
Here's the response from the above code:

Sure! Here are 10 product name ideas for eco-friendly sportswear designed for ba...

1. EcoHoop Gear
2. GreenCourt Apparel
3. Sustainable Slamwear
4. Rebound EcoFit
5. Nature's Net Sportswear
6. EarthBounce Collection
7. Conscious Courtwear
8. PlanetPlay Performance
9. EcoDribble Designs
10. BioBasket Threads

Feel free to mix and match or modify these names to better suit your brand visio...
The results look OK. But here's the problem.

GPT-4o Mini followed the most probable (and the least random) path.

And if I send the same prompt again, I will get identical results! (I won't do it here, but you'll see it in this notebook.)

For temperature = 0, LLMs become deterministic.

It means that for the same prompt, you get the same response every time you run the code. Or at least you should... I've run the code several times and sometimes I got different results.

Let's set the high temperature now:
products_creative = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Give me 10 product name ideas for an eco-fr..."}
    ],
    temperature=1.6  # High temperature
)

response_creative = products_creative.choices[0].message.content
print(response_creative)

And check the results (yours will be different):
Sure! Here are 10 product name ideas for eco-friendly sportswear designed for ba...

1. Green Dunk Gear
2. EcoHoop Essentials
3. Sustainable Swish
4. Rebound Threads
5. BioSource Apparel
6. EcoLight Performance
7. Conscious Courtwear
8. Earthwise Athlete
9. ...
10. EcoDribble Sportswear

Feel free to mix and match or modify them to better suit your brand vision!
We see some similarities.

But these results are more random and more creative.

And if you run the code again, you'll get different results.

I'm sure you now understand how the temperature in LLMs works.

Sadly, I didn't find any simple examples to show the benefits of a low temperature.

Good examples could be asking GPT-4o Mini to:

+ Perform detailed analysis.
+ Write Python code for complex exercises.
+ Translate longer text into other languages.

OK, let's move to the seed parameter.
Seed.

How do you combine creativity with reproducible results?

For high temperatures, we get various results even when we use the same prompt. It's because the "randomness" of the model is high.

But in AI, randomness isn't fully random.

What does it mean? Even for higher temperatures, you can reproduce identical results.

You need to add a constant number to the seed parameter.

Let's test it for the product ideas:
products_creative = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Give me 10 product name ideas for an eco-fr..."}
    ],
    temperature=1.6,  # High temperature
    seed=42  # Seed parameter here
)

response_creative = products_creative.choices[0].message.content
print(response_creative)
Sure! Here are 10 product name ideas for eco-friendly sportswear designed specif...

1. GreenHoop Gear
2. EcoBounce Apparel
3. Sustainable Slamwear
4. Rebound EcoFit
5. PlanetPlay Performance
6. HoopHarmony Wear
7. NatureDribble Collection
8. EarthCourt Sportswear
9. BioBounce Basketball Gear
10. ConsciousCourt Clothing

Feel free to mix and match elements to find the perfect name for your brand!
I got 10 random and creative suggestions.

But when I run the same code (with the same seed value), I expect the same results.

And when I change the seed value, I expect different results.

So use the seed parameter when you want high creativity while keeping the results reproducible.
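To keep these settings in one place, you can collect them in a small helper that builds the keyword arguments for client.chat.completions.create(). The helper name and the 0.0/1.6 values are my own choices based on the examples above, not an official API:

```python
def sampling_kwargs(creative, seed=None):
    """Build sampling-related kwargs for client.chat.completions.create()."""
    kwargs = {"temperature": 1.6 if creative else 0.0}
    if seed is not None:
        # Same seed + same prompt -> (mostly) reproducible responses.
        kwargs["seed"] = seed
    return kwargs

print(sampling_kwargs(creative=True, seed=42))
# {'temperature': 1.6, 'seed': 42}
```

Then a call becomes client.chat.completions.create(model="gpt-4o-mini", messages=messages, **sampling_kwargs(True, 42)), and the sampling policy lives in one function instead of being scattered across every call.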
Max Tokens

The max_tokens parameter limits the number of tokens in the LLM response.

As I already said, it's important to manage the token usage. This parameter is an easy way to control the response length.

But here's an issue with max_tokens: the parameter cuts off the response when it reaches the limit.
Let me show you an example.
I'll ask GPT-4o Mini to write a poem twice (without and with a token limit). I'll use the same seed, so I expect the same poem.

First, let's write a poem without a token limit.

from openai import OpenAI

client = OpenAI()

prompt_poem = "Write a 2-verse poem about a friendly baby fox."

full_completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt_poem}],
    seed=21  # set the seed
)

full_poem = full_completion.choices[0].message.content
print(full_poem)
GPT-4o Mini creates the poem:

In the hush of dawn, where the wildflowers sway,
A baby fox frolics, chasing dreams in the day.
With fur like the sunset, and eyes bright and gleam,
He dances through meadows, as if in a dream.

His playful paw prints leave whispers of joy,
A leap of pure laughter, this curious boy.
He greets every creature, with a twitch of his tail,
In a world full of wonder, where friendships prevail.
But what if I use the max_tokens parameter?

short_completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt_poem}],
    seed=21,  # use the same seed
    max_tokens=50  # set max tokens
)

short_poem = short_completion.choices[0].message.content
print(short_poem)

We get the following poem:

In the hush of dawn, where the wildflowers sway,
A baby fox frolics, chasing dreams in the day.
With fur like the sunset, and eyes bright and gleam,
He dances through meadows, as if in
So the model starts generating the same poem...

But when it reaches the token limit, it stops:

Encoding: gpt-4o. Token count: 50.

The response has exactly 50 tokens. Image by the author, from the Tiktokenizer app.

This "hard stop" leads to incomplete responses.

So be aware of this max_tokens feature.
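You can also detect this hard stop in code: the API sets finish_reason to "length" when max_tokens cut the response off, and to "stop" when the model finished on its own. A minimal check (the fake object below only mimics the shape of a real completion, so the snippet runs without an API call):

```python
from types import SimpleNamespace

def was_truncated(completion) -> bool:
    """True when the response hit the max_tokens limit."""
    return completion.choices[0].finish_reason == "length"

# Stand-in object with the same shape as an OpenAI completion:
fake = SimpleNamespace(choices=[SimpleNamespace(finish_reason="length")])
print(was_truncated(fake))  # True
```

In a real application, you could use this check to retry with a higher limit or to warn the user that the answer is incomplete.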
Parameter 4: Streaming.

I have shown you how to make standard API calls to get GPT-4o Mini responses.

But they are different from the responses we get from ChatGPT:

There's no streaming.

We first wait for the entire response. Then, we see it on the screen.

And for longer responses, it takes several seconds. It's not bad (because GPT-4o Mini is blazing fast), but it's easy to apply streaming.

Let me show you:

from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "How to make a pizza?"}
    ],
    stream=True  # set this parameter
)

for chunk in stream:
    token = chunk.choices[0].delta.content
    if token is not None:
        print(token, end="")
(It's a long response, so I'll skip it.)

We only need 2 changes:

1. Set the stream parameter to True.
2. Display the stream in a for loop.

We'll use streaming in our Chatbot!
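If you want the full text as well as the live display, the loop generalizes to a small helper that joins the streamed token deltas. This is my own sketch; the fake chunks below only mimic the shape of the real stream so it runs without an API call:

```python
from types import SimpleNamespace

def collect_stream(stream) -> str:
    """Join the token deltas of a streamed completion into one string."""
    parts = []
    for chunk in stream:
        token = chunk.choices[0].delta.content
        if token is not None:  # the final chunk carries no content
            parts.append(token)
    return "".join(parts)

# Fake chunks shaped like the real streaming response:
def fake_chunk(text):
    return SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content=text))]
    )

chunks = [fake_chunk("Hello"), fake_chunk(", world!"), fake_chunk(None)]
print(collect_stream(chunks))  # Hello, world!
```

The chatbot below does exactly this, except it also updates the UI after every token.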
Part 4: Building the Chatbot.

It's time for the "fun" part :)

We'll create a simple chatbot with Streamlit and the OpenAI API.

It'll work like this:

Example usage of the AI Chatbot. Image by the author.

We'll use Streamlit to create the chatbot UI (User Interface).

First, let's create a file for the Python script. You can call it chatbot.py or similar.

Then, use the following code:
import streamlit as st
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()
client = OpenAI()

st.title("My GPT-4o Mini Chatbot 🤖")

# Initialize messages in the session state
if "messages" not in st.session_state:
    st.session_state.messages = []

# Display messages
for message in st.session_state["messages"]:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if user_prompt := st.chat_input("Your Prompt:"):
    st.session_state.messages.append({"role": "user", "content": user_prompt})
    with st.chat_message("user"):
        st.markdown(user_prompt)

    with st.chat_message("assistant"):
        chatbot_msg = st.empty()
        full_response = ""
        stream = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": msg["role"], "content": msg["content"]}
                for msg in st.session_state["messages"]
            ],
            stream=True,
        )
        for chunk in stream:
            token = chunk.choices[0].delta.content
            if token is not None:
                full_response = full_response + token
                chatbot_msg.markdown(full_response)
        chatbot_msg.markdown(full_response)
    st.session_state.messages.append({"role": "assistant", "content": full_response})
So it's less than 50 lines of code!

Note: I described in detail how to build the Chatbot UI with Streamlit. Read this article if you want the code breakdown:

Build Your Local AI Chatbot with Ollama, Llama 3, and Streamlit: A Beginner Tutorial.
Chat with Llama 3 8B on your laptop.

Lastly, run the chatbot by typing in your terminal:

streamlit run chatbot.py

Head to http://localhost:8501/ to see the following:

The Chatbot before usage. Image by the author.
Amazing!

You can now interact with your own Chatbot!

Conclusions

Congrats! You went through the entire article.

(And learned a bunch!)

You now know:

+ How to use GPT-4o and GPT-4o Mini with the OpenAI API.
+ What tokens are and how LLMs read and generate them.
+ The message roles and the importance of the system prompt.
+ The meaning and practical applications of the LLM temperature.
+ How to reproduce responses using the seed parameter.
+ The pros and cons of using the max tokens option.
+ How to stream the model's responses.

And you know how to create your own GPT-4o-powered AI Chatbot! Have fun with it!
Keep learning, my friend!

My name is Kris. I help tech people become AI Engineers.

On Medium, I share my experience and lessons that are valuable for aspiring AI Engineers. Make sure to subscribe.

Don't miss out on AI Engineering articles & tutorials by Kris! Get them right into your inbox!
References:

1. Python Notebook with the OpenAI Parameters.
2. Python script with the AI Chatbot with Streamlit.
3. OpenAI API documentation.
4. Streamlit documentation.
Written by Kris Ograbek, writer for Towards AI.

I'm an AI Engineer who loves creating custom AI tools & teaching others how to do the same! LLM Specialist | AI Consultant | Mentor.
tulll,
in ograbon Al Advances 4 es opr in Alves
If startedlearning Al Engineering Run Llama 3 on your laptop: An
in 2024, here's what [would do. Introduction to Ollama for...
“The exact path! would choose, Running open-source LLMs has never been
+ ans wuK @9 + Ag 55 @ ct
Kes oarabek Al Aarances 1b Kes Onebakin Al Acvances
Build Your Local AlChatbot with Howto Start Your First Al Project
llama, Llama 3, and Streamlit: A... with Python and OpenAl API.
Chat with Llama 3 8 on your laptop. ‘The step-by-step guide for beginners with
the fullsetup).
+ Awe x0 © + 2s was et ct
Recommended from Medium
zoumara kta in Toads DeaScionee (© Horie con & Be Lee © in Generative A
Al Agents—From Concepts to ‘The Top 6 Al Tools That I Simply
Practical mplementation in Python Can't Live Without
This illchange the way youthink about AL__These Tools Have Given Mean Unair
and its capabiltes Advantage+ nee won @0
"Natural Language Processing
CChat@PT prompts:
a
gE
image by Author
& Hosam Sheth in Towarae
How I Stay Up to Date With the
Latest Al Trends [2024]
Stay ahead of the Al game with a concise list
of resources.
+ nas won es
© Gisreato wor
‘Top 5 Al Agent Platforms You
Should Know
Al agents ate autonomous programs
esigned to perceive their environment ana
+ ne W502 eo
+ Apt 19K @20 ct
AlRegulation
Predictive Modeling w/
Python
4 rs opel in AlAances
How to Start Your First Al Project
with Python and OpenAl API.
“The step-by-step guide for beginners with
the full setup).
+ 2 Wao et
© Peco satin
x How | Created 144 Fully
‘Automated YouTube Videos Per...
With heise of automated content creation
tootsie Dubdup and Canva, there's been a
+ tet 559 eTHelp Status About Carers Press bg Privacy Terms Texttosposch Teams