Calling LLM APIs
LLM API Intro: Plug Model Capabilities into Your App
What you're probably confused about right now
"Isn't knowing how to write prompts enough?"
In a real product, you need code that calls the model automatically -- and handles failures, retries, and logging.
One-line definition
Calling an LLM API is a full engineering workflow: auth, request, parse, error handling, and retry.
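The workflow above can be sketched as small functions. This is a minimal illustration, not a real provider's API: the payload shape, model name, and `choices` structure mirror common chat-completion APIs but are assumptions here -- check your provider's docs for the real ones.

```python
import json
import os

def build_request(prompt, model="example-model"):
    """Steps 1-2: auth + request construction.

    Header names and body fields are placeholders modeled on common
    chat-completion APIs, not a fixed standard.
    """
    api_key = os.getenv("API_KEY", "test-key")  # auth: key from the environment
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

def parse_response(raw):
    """Step 3: parse -- pull the answer text out of the raw JSON."""
    data = json.loads(raw)
    return data["choices"][0]["message"]["content"]
```

Error handling and retry (steps 4-5) wrap around these two pieces; keeping them as separate functions makes each step testable on its own.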
Real-life analogy
Ordering food delivery isn't just picking dishes. You also handle payment, waiting, following up, and dealing with wrong orders.
Minimal working example
import os

API_KEY = os.getenv("API_KEY")
if not API_KEY:
    raise ValueError("missing key")
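The example above only checks the key; a real call also needs retries and logging. One possible retry helper, sketched so that the actual API call is injected as a function (which lets you test the retry logic without a network -- `call`, `retries`, and `delay` are illustrative names, not a library API):

```python
import time

def with_retry(call, prompt, retries=2, delay=1.0):
    """Run call(prompt), retrying up to `retries` times on any exception.

    `call` is any function that sends the prompt to an LLM API and
    returns the response text.
    """
    attempts = retries + 1
    for attempt in range(1, attempts + 1):
        try:
            return call(prompt)
        except Exception as exc:
            print(f"attempt {attempt} failed: {exc}")  # basic logging
            if attempt == attempts:
                raise  # out of retries: surface the error to the caller
            time.sleep(delay)  # brief pause before retrying
```

In production you would typically use exponential backoff and a real logger instead of `print`, but the control flow is the same.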
Quick quiz (5 min)
- Implement ask_llm(prompt).
- Add timeout and try/except.
- Retry twice on failure and log each attempt.
Quiz answer guide & grading criteria
- Answer direction: write runnable code that covers the core requirements and edge cases from the prompt.
- Criterion 1 (Correctness): Main flow produces correct results, key branches execute.
- Criterion 2 (Readability): Clear variable names, no excessive nesting.
- Criterion 3 (Robustness): Basic protection against null values, type errors, or unexpected input.
Take-home task
Build a "Q&A interface wrapper" with unified error codes and response format.
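One possible shape for such a wrapper is sketched below. The envelope fields (`status`, `error_code`, `data`) and the error codes are illustrative choices, not a required standard -- the point is that success and failure share one response format.

```python
def wrap_response(status, data=None, error_code=None, message=""):
    """Unified response envelope: same dict shape for success and failure."""
    return {
        "status": status,          # "ok" or "error"
        "error_code": error_code,  # e.g. "TIMEOUT", "EMPTY", "UNKNOWN"
        "message": message,
        "data": data,
    }

def ask_llm_safe(call, prompt):
    """Wrap an LLM call so callers always get the same dict shape.

    `call` is the underlying function that sends the prompt and
    returns response text (injected so this is testable offline).
    """
    try:
        text = call(prompt)
    except TimeoutError:
        return wrap_response("error", error_code="TIMEOUT", message="request timed out")
    except Exception as exc:
        return wrap_response("error", error_code="UNKNOWN", message=str(exc))
    if not text:
        # a response arrived but carried no content -- still a failure
        return wrap_response("error", error_code="EMPTY", message="empty response")
    return wrap_response("ok", data=text)
```

Callers then branch on `status` and `error_code` instead of catching exceptions themselves, which keeps error handling in one place.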
Acceptance criteria
You can independently:
- Run a minimal API call end-to-end
- Handle timeout/auth/empty response errors
- Add basic observability logging
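For the last criterion, "basic observability" can be as simple as logging latency and outcome around each call. A minimal sketch using the standard-library logging module (logger name and log fields are illustrative):

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("llm_client")

def logged_call(call, prompt):
    """Log latency and outcome around an LLM call."""
    start = time.monotonic()
    try:
        result = call(prompt)
        log.info("llm_call ok latency=%.3fs prompt_len=%d",
                 time.monotonic() - start, len(prompt))
        return result
    except Exception:
        # log.exception records the traceback, then re-raise for the caller
        log.exception("llm_call failed latency=%.3fs", time.monotonic() - start)
        raise
```

Even this much gives you enough to answer "how slow are calls?" and "how often do they fail?" when debugging a live product.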
Common errors & debugging steps (beginner edition)
- Can't read the error: start from the last line -- find the error type (TypeError, NameError, etc.), then trace back to the line in your code.
- Not sure about a variable's value: throw in a temporary print(var, type(var)) at key points to verify data looks right.
- Changed code but nothing happened: make sure the file is saved, you're running the right file, and your terminal is in the correct venv.
Common misconceptions
- Misconception: if you got a response, it's a success.
- Reality: a returned payload can still be empty, truncated, or malformed -- validate the content before treating the call as a success.
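A small validator makes this concrete. The payload shape below (a dict with a "choices" list) mirrors common chat-completion APIs but is an assumption for illustration, not a fixed standard:

```python
def validate_llm_response(payload):
    """Return the answer text only if the payload actually contains one."""
    if not isinstance(payload, dict):
        raise ValueError("payload is not a dict")
    choices = payload.get("choices") or []
    if not choices:
        raise ValueError("no choices in response")
    text = (choices[0].get("message") or {}).get("content", "")
    if not text.strip():
        raise ValueError("empty content")
    return text
```

A "successful" HTTP response that fails any of these checks should be treated as an error, exactly like a timeout or an auth failure.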