OpenAI API crash course

Learn how to build with "ChatGPT" for fun and profit
About me
Grew up in Rodos, Greece
Software Engineering at GU & Chalmers
Working with embedded systems
Teaching
DIT113, DAT265, Thesis supervision
C++ for professionals, Coursera
Open source projects
https://platis.solutions
https://github.com/platisd
Email: dimitris@platis.solutions
platisd/openai-pr-description
Automatically generate PR descriptions, focus on why not what
Chat completion API
phonix
Generate subtitles, more accurate than YouTube, LinkedIn etc
Speech to text API
sycophant
Write opinionated articles based on the latest news on a topic
Chat completion API, Image generation API
https://robots.army
About this workshop
Explore OpenAI APIs
Chat completion
Function calling
Creating the "perfect" prompt
Reducing costs
"ChatGPT API" or correctly: OpenAI API
OpenAI API is a collection of APIs
APIs offer access to various Large Language Models (LLMs)
LLM: A program trained to understand and generate human language
ChatGPT is a web service using the Chat completion API
Uses gpt-3.5-turbo (free tier) or gpt-4 (paid tier)
OpenAI API endpoints
Chat completion
Given a series of messages, generate a response
Function calling: Choose which function to call
Image generation
Given a text description generate an image
Speech to text
Given an audio file and a prompt generate a transcript
Fine tuning
Train a model using input and output examples
ChatGPT aside, do you use any tools based on OpenAI's models?
Getting started with OpenAI API
Install Python library
pip install --user openai
Get your API key
Login at platform.openai.com
Go to API keys
Create new secret key
(Optional) Create environment variable OPENAI_API_KEY with key
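On Linux/macOS the optional environment variable step might look like this (the key value below is a placeholder, not a real key):

```shell
# Add to ~/.bashrc or ~/.zshrc to persist across sessions
export OPENAI_API_KEY="sk-..."
```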
Ensure successful installation
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
models = openai.Model.list()["data"]
for model in models:
print(model["id"])
gpt-4
gpt-4-0314
curie-search-query
babbage-search-document
text-search-babbage-doc-001
babbage
...
Chat completion API
Given a series of messages, create a response. Important parameters:
model : Which LLM to use, balance cost, speed and performance
gpt-3.5-turbo : Cheaper & faster
gpt-4 : Better performance, larger context window, fewer hallucinations
temperature : "Creativity" of the model's response
Lower values result in more deterministic responses
Allowed value range: [0.0, 2.0]
messages : A list of messages that represent the "conversation"
Each message needs a role and content properties
messages contains the conversation the model uses to generate a response.
messages = [
{"role": "system", "content": "You are an assistant that talks like a 15 yo"},
{"role": "user", "content": "Should I use goto statements?"},
{"role": "assistant", "content": "No bro, that's bad practice duh "},
{"role": "user", "content": user_input},
]
The first 3 elements of the list are the "context"
The last is the user's input we want the model to respond to
system role: High level instructions for the conversation
assistant role: The model's "ideal" (or previous) response
user role: The user's input
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
user_input = input("Enter your programming question: ")
messages = [
{"role": "system", "content": "You are an assistant that talks like a 15 yo"},
{"role": "user", "content": "Should I use goto statements?"},
{"role": "assistant", "content": "No bro, that's bad practice duh "},
{"role": "user", "content": user_input},
]
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo", messages=messages, temperature=0.6
)
print(response.choices[0].message.content)
Enter your programming question: Is OOP good?
OMG, yes! Object-oriented programming is like, the bomb dot
com! It helps you organize your code and makes it easier to
understand and maintain. Plus, you can create cool objects and
stuff. So yeah, OOP is pretty rad!
system prompt
{"role": "system", "content": "You are an assistant that talks like a 15 yo"},
Changing content to You are a 15 year old programmer wouldn't
necessarily work as intended.
system prompts that clearly illustrate the context work better:
You are a programmer and talk like a 15 yo
You are a programmer and 15 years old
Yes, Object-Oriented Programming (OOP) is widely considered
to be a good programming paradigm. It promotes code
organization, reusability, and modularity.
temperature values
Higher values = More "creative" responses
Lower values = Less varied responses
Higher values = Lower probability of following "instructions"
gpt-4 is nonetheless better at following instructions
messages = [
{"role": "system", "content": "You are a helpful assistant"},
{
"role": "user",
"content": "Create a JSON object using months of the"
+ " year as keys and days of each month as values",
},
]
temperature=0.6
Consistently:
{
"January": 31,
"February": 28,
"March": 31,
"April": 30,
"May": 31,
"June": 30,
"July": 31,
"August": 31,
"September": 30,
"October": 31,
"November": 30,
"December": 31
}
temperature=1.9
Consistently not what we intended. One possible response:
```json
{
"January": 31,
"February": 28,
"March": 31,
"April": 30,
"May": 31,
"June": 30,
"July": 31,
"August": 31,
"September": 30,
"October": 31,
"November": 30,
"December": 31
}
```
Note that February has 28 days by default, in line with the regular Gregorian calendar ordering.
Is there anything else I can help you with?
Function calling - Let the model choose
def get_chat(messages=None, model="gpt-4", temp=0.2, functions=None):
response = openai.ChatCompletion.create(
model=model,
messages=messages,
temperature=temp,
functions=functions,
)
Pass functions to the ChatCompletion API
gpt-4 gives better results than gpt-3.5-turbo with "complex" prompts
Keep temperature rather low for higher focus, less hallucinations
functions parameter
[
{
"name": "call_staff",
"description": "Call a member of staff to the table",
"parameters": {"type": "object", "properties": {}},
}
]
Provide a list of functions to the model, it will choose which one to
call based on the context and the function descriptions.
functions = [
{
"name": "place_order",
"description": "Place an order for a pizza",
"parameters": {
"type": "object",
"properties": {
"name": { "type": "string", "description": "The name of the pizza, e.g. Pepperoni", },
"size": {
"type": "string",
"enum": ["small", "medium", "large"],
"description": "The size of the pizza. Always ask for clarification if not specified.",
},
"take_away": {
"type": "boolean",
"description": "Whether the pizza is taken away. Assume false if not specified.",
},
},
"required": ["name", "size", "take_away"],
},
},
]
name : The name of the function to call
description : A description of the function, helps the model choose
parameters/type : Always "object" for now
parameters/properties : Empty if no function parameters
<param>/type : "string" , "boolean" , "integer"
<param>/enum : Optional list of allowed values
<param>/description : Description of parameter, helps the model
parameters/required : List of required parameters
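The schemas only describe the functions to the model; the actual implementations live in your own program, named exactly as in the schemas. A minimal sketch (the message texts below are made up for illustration):

```python
def call_staff():
    # Matches the "call_staff" schema: takes no parameters
    message = "A member of staff is on their way to your table."
    print(message)
    return message

def place_order(name, size, take_away):
    # Matches the "place_order" schema: name, size, take_away
    location = "to take away" if take_away else "to eat in"
    message = f"Placing order for {name} pizza, {size}, {location}"
    print(message)
    return message
```

The parameter names must match the schema's properties exactly, since the model's arguments are later unpacked into the call as keyword arguments.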
system role prompt
messages = [
{
"role": "system",
"content": "Don't make assumptions about what values "
+ "to put into functions. Ask for clarification if you need to.",
},
{
"role": "system",
"content": "Only use the functions you have been provided with.",
},
]
Avoid "hallucinations" and stick to the provided functions.
The model will happily make things up!
API response
{
"role": "assistant",
"content": null,
"function_call": {
"name": "place_order",
"arguments": "{\n\"name\": \"Margherita\",\n\"size\": \"medium\",\n\"take_away\": false\n}"
}
}
If the model decides a function call needs to be made, the response
will contain a function_call property and content will be null .
arguments is a string, not a JSON object
def get_chat(messages=None, model="gpt-4", temp=0.2, functions=None):
response = openai.ChatCompletion.create(
model=model,
messages=messages,
temperature=temp,
functions=functions,
)
message = response.choices[0].message
if "function_call" in message:
f = globals()[message["function_call"]["name"]]
params = json.loads(message["function_call"]["arguments"])
f(**params)
else:
print(message["content"])
return "function_call" in message
Remember to append the user's input to the messages list
so that the model can use previous input as context.
function_called = False
print("How may I help you?")
while not function_called:
user_input = input()
messages.append({"role": "user", "content": user_input})
function_called = get_chat(messages=messages, functions=functions)
Does not take much code to create a "chatbot" that calls functions
So it shouldn't take that long to develop it, right?
>> How may I help you?
<< I'd like to order some pizza please
>> Of course, I'd be happy to assist you with that. Could you
please specify the name of the pizza you'd like to order and the
size you prefer? Also, would this be for take away or are you
dining in?
<< I'd like a large one
>> Sure, could you please specify the type of pizza you would
like to order?
<< A Margherita please
>> Placing order for Margherita pizza, large to eat in
Conclusion
Chat completions API for summarizing, extending etc
More extensive than what we covered here
Can tightly integrate it with your system via function calling
gpt-3.5-turbo is often good enough and faster but can fall short
Use the Python library to get started quickly
Build your own wrapper around it if needed
Use OpenAI API Playground to experiment
Full "pizza bot" example: plat.is/openai-pizza
Prompting
The model is a black box, we cannot be certain how it will respond
In practice, the previous simplified approach won't be enough
Prompting is evolving into its own "craft"
Hype around "prompt engineering" by "influencers"
Approach iteratively, follow best practices, it's not magic
Be specific - Outcome
Write a program that receives a video file and turns it into a GIF.
Write a program in Python that receives a video file as a
command line argument. Use the imageio library to turn it into a
GIF.
Be specific - Length, format and style
Summarize the following text: {text}
Summarize the following text in 3 sentences. Use your own words
and do not plagiarize: {text}
Be specific - Delimit the input
Summarize the following text: {text}
Summarize the following text: ```{text}```
Be specific - Output format
Summarize the following text: {text}
Summarize the following text as a JSON object where the key is
`summary` and the value is the summary: ```{text}```
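Even with an explicit format request, the model may wrap the JSON in prose or code fences (as the temperature example earlier showed), so a defensive parser helps. A sketch using only the standard library:

```python
import json
import re

def extract_json(response_text):
    """Pull the first JSON object out of a model response that may
    contain surrounding prose or ```json fences."""
    match = re.search(r"\{.*\}", response_text, re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in response")
    return json.loads(match.group(0))
```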
Nudge the model
Given the diff of the pull request, focus on why the change is
needed.
`git diff`: ```{diff}```
This PR is needed because it will
Give examples - Start simple
In the following police report, provide a list of the names of the
people arrested.
Names:
Give examples - Simple didn't work? Provide more!
Provide a list of the names of the people arrested.
Report 1: Yesterday John Doe along with his friend Jane Doe were
arrested for trespassing. Officer Barbrady was the arresting officer.
Names 1: John Doe, Jane Doe
Report 2: This morning Bob Smith was arrested for speeding.
Witnesses including Robert Johnson and Mary Young saw the
incident.
Names 2: Bob Smith
Report 3: {report}
Names 3:
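Few-shot prompts like the one above can also be assembled programmatically, which keeps the examples and the final query in a consistent format. A sketch (the helper name is made up):

```python
def build_few_shot_prompt(examples, new_report):
    """Format (report, names) example pairs into a few-shot prompt,
    ending with the report we actually want the model to process."""
    lines = ["Provide a list of the names of the people arrested.", ""]
    for i, (report, names) in enumerate(examples, start=1):
        lines.append(f"Report {i}: {report}")
        lines.append(f"Names {i}: {names}")
        lines.append("")
    n = len(examples) + 1
    lines.append(f"Report {n}: {new_report}")
    lines.append(f"Names {n}:")
    return "\n".join(lines)
```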
Provide steps for complex tasks
Write a program that receives a video file and turns it into a GIF.
1. Use Python
2. Use the imageio library
3. Read the command line argument `--video`
4. Check if the file exists
5. Read the file
6. Feed the file to `imageio`
7. Parse the `--output` command line argument
8. Write the GIF to the provided output path
Break down complex prompts and chain them
Write a short poem inspired by the characters in the following text
and generate 3 keywords for the poem: {text}
Write a short poem inspired by the characters in the following
text: {text}
Generate three keywords for the following poem: {poem}
Break down large prompts and chain them
Summarize the following texts into a single concise text:
Text 1: {potentially_long_text1}
Text 2: {potentially_long_text2}
Text 3: {potentially_long_text3}
Summarize the following text: {potentially_long_text1}
Summarize the following text: {potentially_long_text2}
Summarize the following text: {potentially_long_text3}
Summarize the following texts into a single concise text:
{summary1} {summary2} {summary3}
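This summarize-then-combine pattern boils down to plain prompt construction; the sketch below separates the prompts from the API call so it can be read on its own (get_completion stands in for whatever function sends a prompt to the model):

```python
def summarize_prompt(text):
    # One "map" step: summarize an individual text, delimited with backticks
    return f"Summarize the following text: ```{text}```"

def combine_prompt(summaries):
    # The "reduce" step: merge the partial summaries into one
    joined = " ".join(summaries)
    return f"Summarize the following texts into a single concise text: {joined}"

def chain_summaries(texts, get_completion):
    """Summarize each text separately, then summarize the summaries."""
    summaries = [get_completion(summarize_prompt(t)) for t in texts]
    return get_completion(combine_prompt(summaries))
```

Each intermediate prompt stays well below the model's context limit, which is the whole point of chaining.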
Conclusion
Don't believe the hype, ignore the influencers
Prompting is an iterative process, sometimes slow
Be specific, provide examples, break down complex prompts etc
Your prompts may need to be modified once you switch models
gpt-3.5-turbo to gpt-4 is not always "backwards compatible"
ChatGPT Prompt Engineering for Developers by deeplearning.ai
Best practices for prompt engineering with OpenAI API
Reducing costs
Using the OpenAI API is not free
Costs are based on the number of tokens used
1 token ≈ 4 characters of English text (on average)
Using other languages is more expensive
Costs pile up during development but explode in production
Use the cheapest model that works for your use case
Start with gpt-3.5-turbo and see if you can go even cheaper
Speech-to-text API and Image generation API are expensive
Investigate whether you can run Whisper locally
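The 4-characters-per-token heuristic allows a rough back-of-the-envelope cost estimate (the default price below is a placeholder, not a current rate; check OpenAI's pricing page):

```python
def estimate_cost(text, usd_per_1k_tokens=0.002):
    """Rough cost estimate: ~4 English characters per token on average.
    The default price is illustrative only."""
    estimated_tokens = len(text) / 4
    return estimated_tokens / 1000 * usd_per_1k_tokens
```

For exact counts, the tiktoken library tokenizes text the same way the models do.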
Fine tuning
Fine tuning is a way to "teach" the model to react to specific input
If your prompts contain a lot of examples, it might make sense
Fine tuning is expensive but so are large prompts
Paying extra for both fine tuning and calling the fine tuned model
Don't fine tune until you get really good results with prompting
Takeaways
LLMs will change the world for the better (not too dramatically)
Learning how to use them, not only as a developer, will be a must
Incorporating them in systems can provide additional value
Sentiment analysis, summarizing, transforming etc
Many more (fun) challenges when integrating an LLM in a system
Moderation, evaluation, reliability, ethics etc
Prompt engineering is an iterative process, potentially slow
1 de 42

Recomendados

clicks2conversations.pdf por
clicks2conversations.pdfclicks2conversations.pdf
clicks2conversations.pdfMarie-Alice Blete
196 visualizações95 slides
ProgFund_Lecture_4_Functions_and_Modules-1.pdf por
ProgFund_Lecture_4_Functions_and_Modules-1.pdfProgFund_Lecture_4_Functions_and_Modules-1.pdf
ProgFund_Lecture_4_Functions_and_Modules-1.pdflailoesakhan
2 visualizações43 slides
Building Services With gRPC, Docker and Go por
Building Services With gRPC, Docker and GoBuilding Services With gRPC, Docker and Go
Building Services With gRPC, Docker and GoMartin Kess
1.4K visualizações61 slides
python-online&offline-training-in-kphb-hyderabad (1) (1).pdf por
python-online&offline-training-in-kphb-hyderabad (1) (1).pdfpython-online&offline-training-in-kphb-hyderabad (1) (1).pdf
python-online&offline-training-in-kphb-hyderabad (1) (1).pdfKosmikTech1
33 visualizações208 slides
Python fundamentals por
Python fundamentalsPython fundamentals
Python fundamentalsnatnaelmamuye
92 visualizações69 slides
Python component in mule por
Python component in mulePython component in mule
Python component in muleRamakrishna kapa
950 visualizações13 slides

Mais conteúdo relacionado

Similar a OpenAI API crash course

Python for scientific computing por
Python for scientific computingPython for scientific computing
Python for scientific computingGo Asgard
1.3K visualizações20 slides
Be The API - VMware UserCon 2016 por
Be The API - VMware UserCon 2016Be The API - VMware UserCon 2016
Be The API - VMware UserCon 2016Matthew Broberg
321 visualizações60 slides
OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin & Leanne La... por
OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin &  Leanne La...OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin &  Leanne La...
OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin & Leanne La...NETWAYS
6 visualizações37 slides
Introduction to Basics of Python por
Introduction to Basics of PythonIntroduction to Basics of Python
Introduction to Basics of PythonElewayte
142 visualizações14 slides
Pemrograman Python untuk Pemula por
Pemrograman Python untuk PemulaPemrograman Python untuk Pemula
Pemrograman Python untuk PemulaOon Arfiandwi
8.6K visualizações57 slides
MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo... por
MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo...MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo...
MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo...MongoDB
268 visualizações38 slides

Similar a OpenAI API crash course(20)

Python for scientific computing por Go Asgard
Python for scientific computingPython for scientific computing
Python for scientific computing
Go Asgard1.3K visualizações
Be The API - VMware UserCon 2016 por Matthew Broberg
Be The API - VMware UserCon 2016Be The API - VMware UserCon 2016
Be The API - VMware UserCon 2016
Matthew Broberg321 visualizações
OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin & Leanne La... por NETWAYS
OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin &  Leanne La...OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin &  Leanne La...
OSMC 2023 | Experiments with OpenSearch and AI by Jochen Kressin & Leanne La...
NETWAYS6 visualizações
Introduction to Basics of Python por Elewayte
Introduction to Basics of PythonIntroduction to Basics of Python
Introduction to Basics of Python
Elewayte142 visualizações
Pemrograman Python untuk Pemula por Oon Arfiandwi
Pemrograman Python untuk PemulaPemrograman Python untuk Pemula
Pemrograman Python untuk Pemula
Oon Arfiandwi8.6K visualizações
MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo... por MongoDB
MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo...MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo...
MongoDB World 2018: Tutorial - Free the DBA: Building Chat Bots to Triage, Mo...
MongoDB268 visualizações
How to Build a Site Using Nick por Rob Gietema
How to Build a Site Using NickHow to Build a Site Using Nick
How to Build a Site Using Nick
Rob Gietema12 visualizações
lecture 2.pptx por Anonymous9etQKwW
lecture 2.pptxlecture 2.pptx
lecture 2.pptx
Anonymous9etQKwW11 visualizações
Mp24: The Bachelor, a facebook game por Montreal Python
Mp24: The Bachelor, a facebook gameMp24: The Bachelor, a facebook game
Mp24: The Bachelor, a facebook game
Montreal Python1.2K visualizações
Python (3).pdf por samiwaris2
Python (3).pdfPython (3).pdf
Python (3).pdf
samiwaris223 visualizações
Visual Basic 6.0 por Palitha Baddegama
Visual Basic 6.0Visual Basic 6.0
Visual Basic 6.0
Palitha Baddegama6.4K visualizações
Help with Pyhon Programming Homework por Helpmeinhomework
Help with Pyhon Programming HomeworkHelp with Pyhon Programming Homework
Help with Pyhon Programming Homework
Helpmeinhomework70 visualizações
class 12th computer science project Employee Management System In Python por AbhishekKumarMorla
 class 12th computer science project Employee Management System In Python class 12th computer science project Employee Management System In Python
class 12th computer science project Employee Management System In Python
AbhishekKumarMorla27.4K visualizações
Test Failed, Then... por Toru Furukawa
Test Failed, Then...Test Failed, Then...
Test Failed, Then...
Toru Furukawa2.7K visualizações
Python Novice to Ninja por Al Sayed Gamal
Python Novice to NinjaPython Novice to Ninja
Python Novice to Ninja
Al Sayed Gamal1.9K visualizações
web programming UNIT VIII python by Bhavsingh Maloth por Bhavsingh Maloth
web programming UNIT VIII python by Bhavsingh Malothweb programming UNIT VIII python by Bhavsingh Maloth
web programming UNIT VIII python by Bhavsingh Maloth
Bhavsingh Maloth863 visualizações
Lecture 0 - CS50's Introduction to Programming with Python.pdf por SrinivasPonugupaty1
Lecture 0 - CS50's Introduction to Programming with Python.pdfLecture 0 - CS50's Introduction to Programming with Python.pdf
Lecture 0 - CS50's Introduction to Programming with Python.pdf
SrinivasPonugupaty19 visualizações
Controller Testing: You're Doing It Wrong por johnnygroundwork
Controller Testing: You're Doing It WrongController Testing: You're Doing It Wrong
Controller Testing: You're Doing It Wrong
johnnygroundwork1K visualizações
Python programming msc(cs) por KALAISELVI P
Python programming msc(cs)Python programming msc(cs)
Python programming msc(cs)
KALAISELVI P453 visualizações
Python: an introduction for PHP webdevelopers por Glenn De Backer
Python: an introduction for PHP webdevelopersPython: an introduction for PHP webdevelopers
Python: an introduction for PHP webdevelopers
Glenn De Backer1.5K visualizações

Mais de Dimitrios Platis

Builder pattern in C++.pdf por
Builder pattern in C++.pdfBuilder pattern in C++.pdf
Builder pattern in C++.pdfDimitrios Platis
100 visualizações13 slides
Interprocess communication with C++.pdf por
Interprocess communication with C++.pdfInterprocess communication with C++.pdf
Interprocess communication with C++.pdfDimitrios Platis
215 visualizações19 slides
Lambda expressions in C++ por
Lambda expressions in C++Lambda expressions in C++
Lambda expressions in C++Dimitrios Platis
43 visualizações25 slides
Writing SOLID C++ [gbgcpp meetup @ Zenseact] por
Writing SOLID C++ [gbgcpp meetup @ Zenseact]Writing SOLID C++ [gbgcpp meetup @ Zenseact]
Writing SOLID C++ [gbgcpp meetup @ Zenseact]Dimitrios Platis
143 visualizações37 slides
Introduction to CMake por
Introduction to CMakeIntroduction to CMake
Introduction to CMakeDimitrios Platis
144 visualizações31 slides
Pointer to implementation idiom por
Pointer to implementation idiomPointer to implementation idiom
Pointer to implementation idiomDimitrios Platis
127 visualizações17 slides

Mais de Dimitrios Platis(10)

Builder pattern in C++.pdf por Dimitrios Platis
Builder pattern in C++.pdfBuilder pattern in C++.pdf
Builder pattern in C++.pdf
Dimitrios Platis100 visualizações
Interprocess communication with C++.pdf por Dimitrios Platis
Interprocess communication with C++.pdfInterprocess communication with C++.pdf
Interprocess communication with C++.pdf
Dimitrios Platis215 visualizações
Lambda expressions in C++ por Dimitrios Platis
Lambda expressions in C++Lambda expressions in C++
Lambda expressions in C++
Dimitrios Platis43 visualizações
Writing SOLID C++ [gbgcpp meetup @ Zenseact] por Dimitrios Platis
Writing SOLID C++ [gbgcpp meetup @ Zenseact]Writing SOLID C++ [gbgcpp meetup @ Zenseact]
Writing SOLID C++ [gbgcpp meetup @ Zenseact]
Dimitrios Platis143 visualizações
Introduction to CMake por Dimitrios Platis
Introduction to CMakeIntroduction to CMake
Introduction to CMake
Dimitrios Platis144 visualizações
Pointer to implementation idiom por Dimitrios Platis
Pointer to implementation idiomPointer to implementation idiom
Pointer to implementation idiom
Dimitrios Platis127 visualizações
Afry software safety ISO26262 (Embedded @ Gothenburg Meetup) por Dimitrios Platis
Afry software safety ISO26262 (Embedded @ Gothenburg Meetup)Afry software safety ISO26262 (Embedded @ Gothenburg Meetup)
Afry software safety ISO26262 (Embedded @ Gothenburg Meetup)
Dimitrios Platis162 visualizações
How to create your own Linux distribution (embedded-gothenburg) por Dimitrios Platis
How to create your own Linux distribution (embedded-gothenburg)How to create your own Linux distribution (embedded-gothenburg)
How to create your own Linux distribution (embedded-gothenburg)
Dimitrios Platis111 visualizações
[grcpp] Refactoring for testability c++ por Dimitrios Platis
[grcpp] Refactoring for testability c++[grcpp] Refactoring for testability c++
[grcpp] Refactoring for testability c++
Dimitrios Platis172 visualizações
Refactoring for testability c++ por Dimitrios Platis
Refactoring for testability c++Refactoring for testability c++
Refactoring for testability c++
Dimitrios Platis151 visualizações

Último

AI + Memoori = AIM por
AI + Memoori = AIMAI + Memoori = AIM
AI + Memoori = AIMMemoori
15 visualizações9 slides
Telenity Solutions Brief por
Telenity Solutions BriefTelenity Solutions Brief
Telenity Solutions BriefMustafa Kuğu
14 visualizações10 slides
"Node.js vs workers — A comparison of two JavaScript runtimes", James M Snell por
"Node.js vs workers — A comparison of two JavaScript runtimes", James M Snell"Node.js vs workers — A comparison of two JavaScript runtimes", James M Snell
"Node.js vs workers — A comparison of two JavaScript runtimes", James M SnellFwdays
14 visualizações30 slides
KubeConNA23 Recap.pdf por
KubeConNA23 Recap.pdfKubeConNA23 Recap.pdf
KubeConNA23 Recap.pdfMichaelOLeary82
28 visualizações27 slides
Redefining the book supply chain: A glimpse into the future - Tech Forum 2023 por
Redefining the book supply chain: A glimpse into the future - Tech Forum 2023Redefining the book supply chain: A glimpse into the future - Tech Forum 2023
Redefining the book supply chain: A glimpse into the future - Tech Forum 2023BookNet Canada
46 visualizações19 slides
Netmera Presentation.pdf por
Netmera Presentation.pdfNetmera Presentation.pdf
Netmera Presentation.pdfMustafa Kuğu
22 visualizações50 slides

Último(20)

AI + Memoori = AIM por Memoori
AI + Memoori = AIMAI + Memoori = AIM
AI + Memoori = AIM
Memoori15 visualizações
Telenity Solutions Brief por Mustafa Kuğu
Telenity Solutions BriefTelenity Solutions Brief
Telenity Solutions Brief
Mustafa Kuğu14 visualizações
"Node.js vs workers — A comparison of two JavaScript runtimes", James M Snell por Fwdays
"Node.js vs workers — A comparison of two JavaScript runtimes", James M Snell"Node.js vs workers — A comparison of two JavaScript runtimes", James M Snell
"Node.js vs workers — A comparison of two JavaScript runtimes", James M Snell
Fwdays14 visualizações
KubeConNA23 Recap.pdf por MichaelOLeary82
KubeConNA23 Recap.pdfKubeConNA23 Recap.pdf
KubeConNA23 Recap.pdf
MichaelOLeary8228 visualizações
Redefining the book supply chain: A glimpse into the future - Tech Forum 2023 por BookNet Canada
Redefining the book supply chain: A glimpse into the future - Tech Forum 2023Redefining the book supply chain: A glimpse into the future - Tech Forum 2023
Redefining the book supply chain: A glimpse into the future - Tech Forum 2023
BookNet Canada46 visualizações
Netmera Presentation.pdf por Mustafa Kuğu
Netmera Presentation.pdfNetmera Presentation.pdf
Netmera Presentation.pdf
Mustafa Kuğu22 visualizações
Discover Aura Workshop (12.5.23).pdf por Neo4j
Discover Aura Workshop (12.5.23).pdfDiscover Aura Workshop (12.5.23).pdf
Discover Aura Workshop (12.5.23).pdf
Neo4j20 visualizações
Evaluation of Quality of Experience of ABR Schemes in Gaming Stream por Alpen-Adria-Universität
Evaluation of Quality of Experience of ABR Schemes in Gaming StreamEvaluation of Quality of Experience of ABR Schemes in Gaming Stream
Evaluation of Quality of Experience of ABR Schemes in Gaming Stream
Alpen-Adria-Universität44 visualizações
"Node.js Development in 2024: trends and tools", Nikita Galkin por Fwdays
"Node.js Development in 2024: trends and tools", Nikita Galkin "Node.js Development in 2024: trends and tools", Nikita Galkin
"Node.js Development in 2024: trends and tools", Nikita Galkin
Fwdays37 visualizações
The Power of Heat Decarbonisation Plans in the Built Environment por IES VE
The Power of Heat Decarbonisation Plans in the Built EnvironmentThe Power of Heat Decarbonisation Plans in the Built Environment
The Power of Heat Decarbonisation Plans in the Built Environment
IES VE85 visualizações
Deep Tech and the Amplified Organisation: Core Concepts por Holonomics
Deep Tech and the Amplified Organisation: Core ConceptsDeep Tech and the Amplified Organisation: Core Concepts
Deep Tech and the Amplified Organisation: Core Concepts
Holonomics17 visualizações
Measurecamp Brussels - Synthetic data.pdf por Human37
Measurecamp Brussels - Synthetic data.pdfMeasurecamp Brussels - Synthetic data.pdf
Measurecamp Brussels - Synthetic data.pdf
Human37 27 visualizações
Future of AR - Facebook Presentation por Rob McCarty
Future of AR - Facebook PresentationFuture of AR - Facebook Presentation
Future of AR - Facebook Presentation
Rob McCarty66 visualizações
Inawisdom IDP por PhilipBasford
Inawisdom IDPInawisdom IDP
Inawisdom IDP
PhilipBasford17 visualizações
Generative AI: Shifting the AI Landscape por Deakin University
Generative AI: Shifting the AI LandscapeGenerative AI: Shifting the AI Landscape
Generative AI: Shifting the AI Landscape
Deakin University78 visualizações
Choosing the Right Flutter App Development Company por Ficode Technologies
Choosing the Right Flutter App Development CompanyChoosing the Right Flutter App Development Company
Choosing the Right Flutter App Development Company
Ficode Technologies13 visualizações
Bronack Skills - Risk Management and SRE v1.0 12-3-2023.pdf por ThomasBronack
Bronack Skills - Risk Management and SRE v1.0 12-3-2023.pdfBronack Skills - Risk Management and SRE v1.0 12-3-2023.pdf
Bronack Skills - Risk Management and SRE v1.0 12-3-2023.pdf
ThomasBronack31 visualizações
Cocktail of Environments. How to Mix Test and Development Environments and St... por Aleksandr Tarasov
Cocktail of Environments. How to Mix Test and Development Environments and St...Cocktail of Environments. How to Mix Test and Development Environments and St...
Cocktail of Environments. How to Mix Test and Development Environments and St...
Aleksandr Tarasov26 visualizações
"Package management in monorepos", Zoltan Kochan por Fwdays
"Package management in monorepos", Zoltan Kochan"Package management in monorepos", Zoltan Kochan
"Package management in monorepos", Zoltan Kochan
Fwdays37 visualizações

OpenAI API crash course

  • 1. OpenAI API crash course Learn how to build with "ChatGPT" for fun and profit
  • 2. About me Grew up in Rodos, Greece Software Engineering at GU & Chalmers Working with embedded systems Teaching DIT113, DAT265, Thesis supervision C++ for professionals, Coursera Open source projects https://platis.solutions https://github.com/platisd Email: dimitris@platis.solutions
  • 3. platisd/openai-pr-description Automatically generate PR descriptions, focus on why not what Chat completion API phonix Generate subtitles, more accurate than YouTube, LinkedIn etc Speech to text API sycophant Write opinionated articles based on the latest news on a topic Chat completion API, Image generation API https://robots.army
  • 4. About this workshop Explore OpenAI APIs Chat completion Function calling Creating the "perfect" prompt Reducing costs
  • 5. "ChatGPT API" or correctly: OpenAI API OpenAI API is a collection of APIs APIs offer access to various Large Language Models (LLMs) LLM: Program trained to understand human language ChatGPT is a web service using the Chat completion API Uses gpt-3.5-turbo (free tier) or gpt-4.0 (paid tier)
  • 6. OpenAI API endpoints Chat completion Given a series of messages, generate a response Function calling: Choose which function to call Image generation Given a text description generate an image Speech to text Given an audio file and a prompt generate a transcript Fine tuning Train a model using input and output examples
  • 7. ChatGPT aside, do you use any tools based on OpenAI's models?
  • 8. Getting started with OpenAI API Install Python library pip install --user openai Get your API key Login at platform.openai.com Go to API keys Create new secret key (Optional) Create environment variable OPENAI_API_KEY with key
  • 9. Ensure successful installation import os import openai openai.api_key = os.getenv("OPENAI_API_KEY") models = openai.Model.list()["data"] for model in models: print(model["id"]) gpt-4 gpt-4-0314 curie-search-query babbage-search-document text-search-babbage-doc-001 babbage ...
  • 10. Chat completion API Given a series of messages, create a response. Important parameters: model : Which LLM to use, balance cost, speed and performance gpt-3.5-turbo : Cheaper & faster gpt-4.0 : Better performance, larger input, less hallucinations temperature : "Creativity" of the model's response Lower values result in more deterministic responses Allowed value range: [0.0, 2.0] messages : A list of messages that represent the "conversation" Each message needs a role and content properties
  • 11. messages holds the conversation the model uses to provide a response. messages = [ {"role": "system", "content": "You are an assistant that talks like a 15 yo"}, {"role": "user", "content": "Should I use goto statements?"}, {"role": "assistant", "content": "No bro, that's bad practice duh "}, {"role": "user", "content": user_input}, ] The first 3 elements of the list are the "context" The last is the user's input we want the model to respond to system role: High level instructions for the conversation assistant role: The model's "ideal" (or previous) response user role: The user's input
  • 12. import os import openai openai.api_key = os.getenv("OPENAI_API_KEY") user_input = input("Enter your programming question: ") messages = [ {"role": "system", "content": "You are an assistant that talks like a 15 yo"}, {"role": "user", "content": "Should I use goto statements?"}, {"role": "assistant", "content": "No bro, that's bad practice duh "}, {"role": "user", "content": user_input}, ] response = openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=messages, temperature=0.6 ) print(response.choices[0].message.content)
  • 13. Enter your programming question: Is OOP good? "OMG, yes! Object-oriented programming is like, the bomb dot com! It helps you organize your code and makes it easier to understand and maintain. Plus, you can create cool objects and stuff. So yeah, OOP is pretty rad!"
  • 14. system prompt {"role": "system", "content": "You are an assistant that talks like a 15 yo"}, Changing content to You are a 15 year old programmer wouldn't necessarily work as intended. System prompts that clearly illustrate the context work better: You are a programmer and talk like a 15 yo vs. You are a programmer and 15 years old "Yes, Object-Oriented Programming (OOP) is widely considered to be a good programming paradigm. It promotes code organization, reusability, and modularity."
  • 15. temperature values Higher values = More "creative" responses Lower values = Less varied responses Higher values = Lower probability to follow "instructions" gpt-4 is nonetheless better at following instructions messages = [ {"role": "system", "content": "You are a helpful assistant"}, { "role": "user", "content": "Create a JSON object using months of the" + " year as keys and days of each month as values", }, ]
  • 16. temperature=0.6 Consistently: { "January": 31, "February": 28, "March": 31, "April": 30, "May": 31, "June": 30, "July": 31, "August": 31, "September": 30, "October": 31, "November": 30, "December": 31 }
  • 17. temperature=1.9 Consistently not what we intended. One possible response: ```json { "January": 31, "February": 28, "March": 31, "April": 30, "May": 31, "June": 30, "July": 31, "August": 31, "September": 30, "October": 31, "November": 30, "December": 31 } ``` Note that February has 28 days by default, in line with the regular Gregorian calendar ordering. Is there anything else I can help you with?
  • 18. Function calling - Let the model choose def get_chat(messages=None, model="gpt-4", temp=0.2, functions=None): response = openai.ChatCompletion.create( model=model, messages=messages, temperature=temp, functions=functions, ) Pass functions to the ChatCompletion API gpt-4 gives better results than gpt-3.5-turbo with "complex" prompts Keep temperature rather low for higher focus, fewer hallucinations
  • 19. functions parameter [ { "name": "call_staff", "description": "Call a member of staff to the table", "parameters": {"type": "object", "properties": {}}, } ] Provide a list of functions to the model, it will choose which one to call based on the context and the function descriptions.
  • 20. functions = [ { "name": "place_order", "description": "Place an order for a pizza", "parameters": { "type": "object", "properties": { "name": { "type": "string", "description": "The name of the pizza, e.g. Pepperoni", }, "size": { "type": "string", "enum": ["small", "medium", "large"], "description": "The size of the pizza. Always ask for clarification if not specified.", }, "take_away": { "type": "boolean", "description": "Whether the pizza is taken away. Assume false if not specified.", }, }, "required": ["name", "size", "take_away"], }, }, ]
  • 21. name : The name of the function to call description : A description of the function, helps the model choose parameters/type : Always "object" for now parameters/properties : Empty if no function parameters <param>/type : "string" , "boolean" , "int" <param>/enum : Optional list of allowed values <param>/description : Description of parameter, helps the model parameters/required : List of required parameters
  • 22. system role prompt messages = [ { "role": "system", "content": "Don't make assumptions about what values " + "to put into functions. Ask for clarification if you need to.", }, { "role": "system", "content": "Only use the functions you have been provided with.", }, ] Avoid "hallucinations" and stick to the provided functions. The model will happily make things up!
  • 23. API response { "role": "assistant", "content": null, "function_call": { "name": "place_order", "arguments": "{\n\"name\": \"Margherita\",\n\"size\": \"medium\",\n\"take_away\": false\n}" } } If the model has decided a function call needs to be made, the response will contain a function_call property and content will be null . arguments is a string, not a JSON object
  • 24. def get_chat(messages=None, model="gpt-4", temp=0.2, functions=None): response = openai.ChatCompletion.create( model=model, messages=messages, temperature=temp, functions=functions, ) message = response.choices[0].message if "function_call" in message: f = globals()[message["function_call"]["name"]] params = json.loads(message["function_call"]["arguments"]) f(**params) else: print(message["content"]) return "function_call" in message
  • 25. Remember to append the user's input to the messages list so that the model can use previous input as context. function_called = False print("How may I help you?") while not function_called: user_input = input() messages.append({"role": "user", "content": user_input}) function_called = get_chat(messages=messages, functions=functions) Does not take much code to create a "chatbot" that calls functions So it shouldn't take that long to develop it, right?
  • 26. >> How may I help you? << I'd like to order some pizza please >> Of course, I'd be happy to assist you with that. Could you please specify the name of the pizza you'd like to order and the size you prefer? Also, would this be for take away or are you dining in? << I'd like a large one >> Sure, could you please specify the type of pizza you would like to order? << A Margherita please >> Placing order for Margherita pizza, large to eat in
  • 27. Conclusion Chat completions API for summarizing, extending etc More extensive than what we covered here Can tightly integrate it with your system via function calling gpt-3.5-turbo is often good enough and faster but can fall short Use the Python library to get started quickly Build your own wrapper around it if needed Use OpenAI API Playground to experiment Full "pizza bot" example: plat.is/openai-pizza
  • 28. Prompting The model is a black box, we cannot be certain how it will respond In practice the previous, simplified, approach won't be enough Prompting is evolving into its own "craft" Hype around "prompt engineering" by "influencers" Approach iteratively, follow best practices, it's not magic
  • 29. Be specific - Outcome Write a program that receives a video file and turns it into a GIF. Write a program in Python that receives a video file as a command line argument. Use the imageio library to turn it into a GIF.
  • 30. Be specific - Length, format and style Summarize the following text: {text} Summarize the following text in 3 sentences. Use your own words and do not plagiarize: {text}
  • 31. Be specific - Delimit the input Summarize the following text: {text} Summarize the following text: ```{text}```
  • 32. Be specific - Output format Summarize the following text: {text} Summarize the following text as a JSON object where the key is `summary` and the value is the summary: ```{text}```
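Asking for a fixed output format pays off on the consuming side: the reply can be parsed programmatically. A minimal sketch (the `summary` key matches the prompt above; the code-fence handling is an assumption, since models sometimes wrap JSON in a ```json fence even when not asked to):

```python
import json

def parse_summary(reply: str) -> str:
    """Extract the summary from a model reply requested as {"summary": ...}."""
    text = reply.strip()
    if text.startswith("```"):
        # Drop the opening ```json line and the closing ``` line
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(text)["summary"]

print(parse_summary('{"summary": "The text explains tokens."}'))
```

If `json.loads` raises, that is a signal to tighten the prompt (or lower `temperature`) rather than to patch the parser.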
  • 33. Nudge the model Given the diff of the pull request, focus on why the change is needed. `git diff`: ```{diff}``` This PR is needed because it will
  • 34. Give examples - Start simple In the following police report, provide a list of the names of the people arrested. Names:
  • 35. Give examples - Simple didn't work? Provide more! Provide a list of the names of the people arrested. Report 1: Yesterday John Doe along with his friend Jane Doe were arrested for trespassing. Officer Barbrady was the arresting officer. Names 1: John Doe, Jane Doe Report 2: This morning Bob Smith was arrested for speeding. Witnesses including Robert Johnson and Mary Young saw the incident. Names 2: Bob Smith Report 3: {report} Names 3:
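Few-shot prompts like the one above are just string assembly, so they are easy to generate from a list of examples. A sketch (the `few_shot_prompt` helper and its layout are illustrative, mirroring the Report/Names pattern of this slide):

```python
def few_shot_prompt(examples, new_report):
    """Build a few-shot prompt: numbered report/names pairs, then the new report."""
    lines = ["Provide a list of the names of the people arrested.", ""]
    for i, (report, names) in enumerate(examples, start=1):
        lines.append(f"Report {i}: {report}")
        lines.append(f"Names {i}: {names}")
        lines.append("")
    n = len(examples) + 1
    lines.append(f"Report {n}: {new_report}")
    lines.append(f"Names {n}:")  # trailing label nudges the model to complete it
    return "\n".join(lines)

examples = [("Yesterday John Doe was arrested for trespassing.", "John Doe")]
print(few_shot_prompt(examples, "This morning Bob Smith was arrested."))
```

Ending the prompt with the empty `Names N:` label is the same nudging trick as the "This PR is needed because it will" slide.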
  • 36. Provide steps for complex tasks Write a program that receives a video file and turns it into a GIF. 1. Use Python 2. Use the imageio library 3. Read the command line argument `--video` 4. Check if the file exists 5. Read the file 6. Feed the file to `imageio` 7. Parse the `--output` command line argument 8. Write the GIF to the provided output path
  • 37. Break down complex prompts and chain them Write a short poem inspired by the characters in the following text and generate 3 keywords for the poem: {text} Write a short poem inspired by the characters in the following text: {text} Generate three keywords for the following poem: {poem}
  • 38. Break down large prompts and chain them Summarize the following texts into a single concise text: Text 1: {potentially_long_text1} Text 2: {potentially_long_text2} Text 3: {potentially_long_text3} Summarize the following text: {potentially_long_text1} Summarize the following text: {potentially_long_text2} Summarize the following text: {potentially_long_text3} Summarize the following texts into a single concise text: {summary1} {summary2} {summary3}
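The chaining above is a map-reduce: summarize each long text separately, then summarize the summaries. A sketch of the control flow; `summarize()` is stubbed here and would wrap a Chat completion call in practice:

```python
def summarize(text: str) -> str:
    # Stub standing in for a Chat completion call with a
    # "Summarize the following text: ..." prompt.
    return text[:40]

def summarize_all(texts):
    # Map: summarize each long text individually, keeping every prompt small
    partials = [summarize(t) for t in texts]
    # Reduce: combine the short summaries in one final prompt
    combined = "Summarize the following texts into a single concise text:\n"
    combined += "\n".join(partials)
    return summarize(combined)
```

Besides staying under the model's context limit, this keeps each call focused on a single task, which tends to improve the individual summaries.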
  • 39. Conclusion Don't believe the hype, ignore the influencers Prompting is an iterative process, sometimes slow Be specific, provide examples, break down complex prompts etc Your prompts may need to be modified once you switch models gpt-3.5-turbo to gpt-4 is not always "backwards compatible" ChatGPT Prompt Engineering for Developers by deeplearning.ai Best practices for prompt engineering with OpenAI API
  • 40. Reducing costs Using the OpenAI API is not free Costs are based on the number of tokens used 1 token ≈ 4 characters in English (on average) Using other languages is more expensive Costs pile up during development but explode in production Use the cheapest model that works for your use case Start with gpt-3.5-turbo and see if you can go even cheaper Speech-to-text API and Image generation API are expensive Investigate whether you can run Whisper locally
  • 41. Fine tuning Fine tuning is a way to "teach" the model to react to specific input If your prompts contain a lot of examples, it might make sense Fine tuning is expensive but so are large prompts Paying extra for both fine tuning and calling the fine tuned model Don't fine tune until you get really good results with prompting
  • 42. Takeaways LLMs will change the world for the better (not too dramatically) Learning how to use them, not only as a developer, will be a must Incorporating them in systems can provide additional value Sentiment analysis, summarizing, transforming etc Many more (fun) challenges when integrating an LLM in a system Moderation, evaluation, reliability, ethics etc Prompt engineering is an iterative process, potentially slow