OpenAI API for Beginners
ChatGPT is impressive, but the real power is in the API. Build AI into your own applications.
This guide gets you from zero to working code in 30 minutes.
What You’ll Need
- Python 3.7+ (or JavaScript/Node.js)
- OpenAI account
- Text editor
- ~$1 for testing
Step 1: Get Your API Key
- Go to platform.openai.com
- Sign up or log in
- Click “API Keys” in sidebar
- Create new secret key
- Copy and save it securely
Never commit your API key to version control.
Step 2: Install the Library
Python:

```shell
pip install openai
```

Node.js:

```shell
npm install openai
```
Step 3: Your First API Call
Python:

```python
from openai import OpenAI

client = OpenAI(api_key="your-api-key-here")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Explain APIs in one sentence"}
    ]
)

print(response.choices[0].message.content)
```
Output: “An API (Application Programming Interface) is a set of rules and protocols that allows different software applications to communicate and share data with each other.”
That’s it. You’ve used the API.
Understanding the Basics
The Messages Array
Every API call uses a messages array:
```python
messages=[
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "Paris"},
    {"role": "user", "content": "What's the population?"}
]
```
Roles:
- system: instructions for how the AI should behave
- user: messages from the human
- assistant: the AI's responses
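A small helper can keep these roles straight when building conversations programmatically. The `build_messages` function below is a hypothetical convenience for this guide, not part of the openai library:

```python
def build_messages(system_prompt, *turns):
    """Build a messages array: a system prompt followed by
    alternating user/assistant turns (starting with user)."""
    messages = [{"role": "system", "content": system_prompt}]
    roles = ["user", "assistant"]
    for i, content in enumerate(turns):
        messages.append({"role": roles[i % 2], "content": content})
    return messages

msgs = build_messages(
    "You are a helpful assistant",
    "What's the capital of France?",
    "Paris",
    "What's the population?",
)
```

The resulting list can be passed directly as the `messages` argument.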
Models
| Model | Best For | Cost |
|---|---|---|
| gpt-3.5-turbo | General use, cheap | $0.0005/1K in |
| gpt-4-turbo | Complex tasks | $0.01/1K in |
| gpt-4o | Latest, multimodal | $0.005/1K in |
Start with gpt-3.5-turbo and upgrade if the output quality isn't sufficient.
Key Parameters
```python
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=0.7,      # Creativity (0-2)
    max_tokens=500,       # Response length limit
    top_p=1,              # Alternative to temperature
    frequency_penalty=0,  # Reduce repetition
    presence_penalty=0    # Encourage new topics
)
```
Temperature:
- 0: Nearly deterministic; the same input gives (almost) the same output
- 0.7: Balanced (good default)
- 1.5+: Creative but potentially random
Practical Examples
Example 1: Summarizer
```python
def summarize(text, max_words=100):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": f"Summarize text in {max_words} words or less."
            },
            {
                "role": "user",
                "content": text
            }
        ],
        temperature=0.3  # Lower for consistency
    )
    return response.choices[0].message.content

# Usage
article = "Your long article text here..."
summary = summarize(article)
print(summary)
```
Example 2: Email Responder
```python
def draft_email_response(original_email, tone="professional"):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": f"Draft a {tone} email response. Be concise."
            },
            {
                "role": "user",
                "content": f"Reply to this email:\n\n{original_email}"
            }
        ],
        temperature=0.7
    )
    return response.choices[0].message.content

# Usage
email = "Hi, I'm interested in your product but have questions about pricing..."
response = draft_email_response(email, tone="friendly")
print(response)
```
Example 3: Code Explainer
```python
def explain_code(code, language="Python"):
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # Better for code
        messages=[
            {
                "role": "system",
                "content": "Explain code clearly. Use simple language."
            },
            {
                "role": "user",
                "content": f"Explain this {language} code:\n\n{code}"
            }
        ],
        temperature=0.3
    )
    return response.choices[0].message.content
```
Example 4: Simple Chatbot
```python
conversation_history = [
    {"role": "system", "content": "You are a helpful assistant for a tech company."}
]

def chat(user_message):
    conversation_history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=conversation_history
    )
    assistant_message = response.choices[0].message.content
    conversation_history.append({"role": "assistant", "content": assistant_message})
    return assistant_message

# Interactive loop
while True:
    user_input = input("You: ")
    if user_input.lower() == 'quit':
        break
    print(f"Bot: {chat(user_input)}")
```
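One caveat: `conversation_history` grows forever, and every call sends the whole list, so long chats cost more and eventually exceed the model's context window. A common fix is to trim old turns while always keeping the system prompt. The `trim_history` helper below is a sketch of that idea, not an official API feature:

```python
def trim_history(history, max_turns=10):
    """Keep the system prompt plus only the most recent turns.
    max_turns counts user/assistant messages, not the system prompt."""
    system = [m for m in history if m["role"] == "system"]
    turns = [m for m in history if m["role"] != "system"]
    return system + turns[-max_turns:]

# Simulate a long conversation
history = [{"role": "system", "content": "You are helpful."}]
for i in range(30):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, max_turns=10)  # system prompt + last 10 turns
```

Call it before each API request (`messages=trim_history(conversation_history)`) to cap cost per call.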
Handling Errors
```python
import time

from openai import OpenAI, RateLimitError, APIError

def safe_api_call(messages, max_retries=3):
    # Assumes `client` was created earlier
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=messages
            )
            return response.choices[0].message.content
        except RateLimitError:
            wait_time = 2 ** attempt  # Exponential backoff
            print(f"Rate limited. Waiting {wait_time}s...")
            time.sleep(wait_time)
        except APIError as e:
            print(f"API error: {e}")
            raise
    raise Exception("Max retries exceeded")
```
Cost Management
Track Usage
```python
response = client.chat.completions.create(...)

# After each call
print(f"Tokens used: {response.usage.total_tokens}")
print(f"Prompt tokens: {response.usage.prompt_tokens}")
print(f"Completion tokens: {response.usage.completion_tokens}")
```
Estimate Costs
```python
def estimate_cost(prompt_tokens, completion_tokens, model="gpt-3.5-turbo"):
    # Prices in dollars per 1K tokens
    prices = {
        "gpt-3.5-turbo": {"input": 0.0005, "output": 0.0015},
        "gpt-4-turbo": {"input": 0.01, "output": 0.03},
    }
    input_cost = (prompt_tokens / 1000) * prices[model]["input"]
    output_cost = (completion_tokens / 1000) * prices[model]["output"]
    return input_cost + output_cost
```
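If you want a running total across many calls rather than a per-call estimate, the same pricing math can be wrapped in a small tracker. This `CostTracker` class is a hypothetical helper using the per-1K-token prices from the table above:

```python
class CostTracker:
    """Accumulate estimated spend across calls (sketch, not an official tool)."""
    PRICES = {  # dollars per 1K tokens, from the model table above
        "gpt-3.5-turbo": {"input": 0.0005, "output": 0.0015},
        "gpt-4-turbo": {"input": 0.01, "output": 0.03},
    }

    def __init__(self):
        self.total = 0.0

    def record(self, prompt_tokens, completion_tokens, model="gpt-3.5-turbo"):
        p = self.PRICES[model]
        cost = (prompt_tokens / 1000) * p["input"] \
             + (completion_tokens / 1000) * p["output"]
        self.total += cost
        return cost

tracker = CostTracker()
tracker.record(1200, 300)                # one gpt-3.5-turbo call
tracker.record(500, 500, "gpt-4-turbo")  # one gpt-4-turbo call
```

Feed it `response.usage.prompt_tokens` and `response.usage.completion_tokens` after each call.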
Set Budget Limits
- Go to platform.openai.com
- Settings → Billing
- Set monthly budget limit
- Enable email alerts
Best Practices
1. Use System Prompts
```python
# Bad
messages=[{"role": "user", "content": "Write a poem about cats"}]

# Better
messages=[
    {"role": "system", "content": "You are a creative poet who writes short, modern poetry."},
    {"role": "user", "content": "Write a poem about cats"}
]
```
2. Be Specific
```python
# Vague
"Write about AI"

# Specific
"Write a 200-word explanation of how neural networks work, suitable for a high school student."
```
3. Manage Token Length
```python
# Limit response length
max_tokens=500

# Truncate long inputs
if len(input_text) > 10000:
    input_text = input_text[:10000] + "..."
```
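Truncating by characters is crude because the API bills by tokens. English text averages roughly 4 characters per token, so you can estimate counts before sending; for exact counts, OpenAI's tiktoken library tokenizes locally. A rough sketch, assuming the ~4 chars/token heuristic:

```python
def rough_token_count(text):
    # Heuristic only: English text averages ~4 characters per token.
    return max(1, len(text) // 4)

def truncate_to_tokens(text, max_tokens):
    """Trim text so its rough token estimate fits within max_tokens."""
    max_chars = max_tokens * 4
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + "..."

long_text = "word " * 5000  # 25,000 characters, roughly 6,250 tokens
short = truncate_to_tokens(long_text, 2000)
```

The heuristic over- or under-counts for code and non-English text; use tiktoken when the estimate matters.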
4. Store API Keys Securely
```python
import os

# Store in an environment variable:
# export OPENAI_API_KEY=your-key
client = OpenAI()  # Automatically reads OPENAI_API_KEY from the environment
```

Or use a .env file:

```
OPENAI_API_KEY=your-key
```

```python
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI()
```
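If the variable is missing, the client only fails when you make your first call. A small check at startup gives a clearer error sooner; this `get_api_key` helper and its message are my own sketch, not part of the library:

```python
import os

def get_api_key():
    """Read the API key from the environment, failing fast if absent."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Export it or add it to a .env file."
        )
    return key

# Demo only: real code should rely on a genuinely exported key
os.environ.setdefault("OPENAI_API_KEY", "sk-demo-placeholder")
key = get_api_key()
```

Pass the result explicitly with `OpenAI(api_key=get_api_key())` if you prefer the check to be visible.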
Common Use Cases
Content Generation
- Blog posts
- Product descriptions
- Social media
- Documentation
Data Processing
- Summarization
- Entity extraction
- Sentiment analysis
- Classification
Customer Service
- Chatbots
- Email triage
- FAQ answers
- Ticket routing
Developer Tools
- Code explanation
- Bug detection
- Documentation generation
- Code review
Next Steps
After this tutorial:
- Build something small - Start with a single-purpose tool
- Experiment with prompts - Small changes make big differences
- Add streaming - Better UX for chatbots
- Explore function calling - Let AI use tools
- Try fine-tuning - Custom models for specific tasks
Resources
- OpenAI documentation: platform.openai.com/docs
- OpenAI cookbook: github.com/openai/openai-cookbook
- Community: community.openai.com
The Bottom Line
The OpenAI API is easier than you think. The hardest part is deciding what to build.
Start simple:
- Get API key
- Run the examples
- Modify for your use case
- Iterate
Most useful AI applications are simpler than you’d expect. The API handles the complexity - you just need to ask the right questions.
Frequently Asked Questions
How much does it cost?
GPT-3.5-turbo costs $0.0005/1K input tokens and $0.0015/1K output tokens. GPT-4-turbo costs $0.01/1K input and $0.03/1K output. For most small projects, expect $1-10/month. You get $5 in free credit to start.
Do I need to be a programming expert?
Not at all. If you can write basic Python, you can use the API. The simplest implementation is about 10 lines of code. Libraries handle the complexity.
What can I build?
Chatbots, content generators, code assistants, data analyzers, email responders, translation tools, summarizers - anything involving text generation, understanding, or transformation.