
OpenAI API - Chat Completions and Function Calling

Author: Venkata Sudhakar

The OpenAI API is the most widely used interface for integrating Large Language Models into applications. At its core, the Chat Completions endpoint accepts a list of messages (a conversation history) and returns the model's next response. Each message has a role (system, user, or assistant) and content. The system message sets the behaviour and persona of the model. The user messages are the human's turns. The assistant messages are the model's previous turns, which you include to maintain conversation context.

Function calling (also called tool calling in newer API versions) is one of the most powerful features of the OpenAI API. It lets you define a set of functions as JSON schemas, and the model decides when to call one based on the user's message. Instead of generating a text answer, the model returns a structured JSON object with the function name and arguments. Your code then executes the function with those arguments and feeds the result back to the model, which uses it to generate the final answer. This is the foundation of how AI agents interact with external systems.

The examples below show three core patterns: a basic chat completion, a streaming response, and function calling to retrieve live data.


It gives the following output,

Answer: CDC captures database changes in real time from the transaction log, eliminating
the latency of batch windows and capturing deletes that timestamp-based ETL misses.
Tokens used: 87

Follow-up answer: Debezium supports MySQL, PostgreSQL, MongoDB, Oracle, SQL Server,
DB2, and several others through a plugin architecture on top of Apache Kafka Connect.

It gives the following streamed output (tokens print as they arrive),

Streaming response: 1. Blue-Green Deployment - run two identical environments, switch
traffic from old to new via load balancer after validation.
2. Strangler Fig Pattern - incrementally replace legacy features by routing individual
endpoints to a new service while keeping others on the legacy system.
3. Expand-Contract - add the new schema element, migrate data, then remove the old
element across separate deployments to maintain backward compatibility.

It gives the following output,

Model called: get_migration_status({"job_id": "MIG-1042"})
Final Answer: Migration job MIG-1042 is currently IN PROGRESS. So far 2,450,000 out
of 5,000,000 rows have been migrated (49% complete), with an estimated 45 minutes
remaining until completion.

Key OpenAI API parameters to know:

temperature - Controls randomness. 0 is deterministic (the same input always gives the same output), which is useful for data extraction and structured tasks; 1.0 is more creative and varied, good for brainstorming and writing.

max_tokens - Caps the response length. Set this to prevent runaway long responses and to control costs.

top_p - An alternative to temperature for controlling diversity; usually leave it at the default 1.0 when steering with temperature.

tool_choice - Set to "auto" to let the model decide whether to call a tool, "required" to force a tool call, or name a specific function to always call that function.
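These parameters all go into the same `create` call. A sketch of a deterministic, tool-forcing configuration (model name illustrative; `get_migration_status` stands in for whatever function you have defined):

```python
# Keyword arguments for client.chat.completions.create(messages=..., **request).
request = {
    "model": "gpt-4o-mini",
    "temperature": 0,      # deterministic: same input, same output
    "max_tokens": 300,     # cap response length and cost
    "top_p": 1.0,          # leave at default when steering with temperature
    "tool_choice": {       # force one specific function instead of "auto"
        "type": "function",
        "function": {"name": "get_migration_status"},
    },
}
```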
