
OpenAI GPT Functions for Advanced AI Applications

In the rapidly evolving landscape of artificial intelligence (AI), one trend is capturing the attention of technology enthusiasts around the world: the rise of Generative Pretrained Transformers (GPT). In particular, the latest versions, gpt-3.5-turbo-0613 and gpt-4-0613, have pushed the boundaries of AI applications. However, to harness their full potential, it is crucial to understand how GPT functions operate.

GPT Functions: The AI Engines

GPT functions form the backbone of AI model-user interactions. During an API call, you can propose functions to the GPT models. The models intelligently output a JSON object containing the necessary parameters to activate these functions. It's noteworthy that the Chat Completions API does not directly implement the function; instead, it constructs a JSON output that you can employ to initiate the function in your programming environment.
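To make this concrete, here is a sketch of the shape of an assistant message that requests a function call; the function name and arguments are illustrative, not part of any real API:

```javascript
// Illustrative shape of an assistant message that requests a function call.
// Note that "arguments" is a *stringified* JSON object, not a parsed one.
const assistantMessage = {
  role: "assistant",
  content: null,
  function_call: {
    name: "fetch_news_headline",
    arguments: '{"topic": "technology", "count": 3}',
  },
};

// Your code, not the API, is responsible for parsing and running the function:
const args = JSON.parse(assistantMessage.function_call.arguments);
console.log(args.topic); // "technology"
```

The key point is the last step: the API hands you the parsed intent, and invoking the actual function happens entirely in your own environment.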

The distinguishing features of the gpt-3.5-turbo-0613 and gpt-4-0613 models lie in their advanced ability to determine when a function call is required based on the given input. They generate a JSON response in harmony with the function signature, paving the way for efficient and accurate results. Nonetheless, we strongly recommend incorporating user confirmation steps before taking actions that could have real-world implications, such as publishing a post, processing a transaction, or initiating a transfer.

The integration of functions happens under the hood, using a syntax that the model has been specifically trained to recognize. This means that function definitions count against the model's context limit and are billed as input tokens. If you run into context constraints, consider reducing the number of functions or shortening the descriptions provided for function parameters.

Unleashing the Power of GPT Functions

Implementing GPT functions allows you to extract structured data from the model with increased reliability. Let's explore how you can capitalize on this:

  1. Developing Responsive Chatbots: You can engineer chatbots capable of responding to inquiries by interacting with external APIs. For instance, functions such as fetch_news_headline(topic: string, count: int) or get_local_time(city: string, country: string) can be defined.

  2. Transforming Natural Language into API Calls: You can convert user questions like "What are the trending topics today?" into API calls such as fetch_trending_topics(count: int) and connect with your internal API.

  3. Deriving Structured Data from Text: Defining functions like analyze_sentiment(text: string) or perform_text_analysis(text: string) enables you to extract meaningful insights from unstructured text.
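As a sketch, the hypothetical analyze_sentiment function from the last item could be declared for the functions parameter as a JSON Schema object like this:

```javascript
// Hypothetical schema for analyze_sentiment(text: string), written the way it
// would appear inside the `functions` array of a chat completion request.
const analyzeSentimentSpec = {
  name: "analyze_sentiment",
  description: "Classify the sentiment of a piece of text",
  parameters: {
    type: "object",
    properties: {
      text: {
        type: "string",
        description: "The text to analyze",
      },
    },
    required: ["text"],
  },
};

console.log(analyzeSentimentSpec.name); // "analyze_sentiment"
```

The description fields matter: the model relies on them to decide when the function applies, so keep them accurate and concise (they are billed as input tokens).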

The potential applications are infinite and only limited by your imagination!

Step-by-Step Implementation of GPT Functions

The implementation of GPT functions follows a basic sequence of steps:

  1. Call the model with the user query and a set of functions defined in the functions parameter.
  2. The model decides whether to call a function; if it does, the output will be a stringified JSON object compliant with your custom schema.
  3. Parse the string into JSON in your code, and invoke your function using the provided arguments if they exist.
  4. Call the model again by appending the function response as a new message, and let the model summarize the results back to the user.
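Steps 2 and 3 above can be sketched as follows; the dispatch table and the fetch_news_headline stub are assumptions made for illustration, standing in for your real implementations:

```javascript
// Map of function names (as declared to the model) to local implementations.
const availableFunctions = {
  // Hypothetical stub; in practice this would call a real news API.
  fetch_news_headline: ({ topic, count = 5 }) =>
    JSON.stringify({
      topic,
      headlines: [`Sample headline about ${topic}`].slice(0, count),
    }),
};

// Step 3: parse the stringified JSON arguments and dispatch to your function.
function handleFunctionCall(message) {
  const { name, arguments: rawArgs } = message.function_call;
  const args = JSON.parse(rawArgs); // stringified JSON -> plain object
  return availableFunctions[name](args);
}

// Step 2 produces a message like this one:
const result = handleFunctionCall({
  role: "assistant",
  content: null,
  function_call: {
    name: "fetch_news_headline",
    arguments: '{"topic": "technology"}',
  },
});
console.log(result);
```

The returned string is what you would append as a "function" role message in step 4, letting the model summarize it for the user.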

For an in-depth understanding, let's walk through an example. Let's assume that we have a function fetch_news_headline which, in a real-world scenario, could be your internal API or an external API.

In the code snippet, we outline how to interact with the model, inputting a user query and defining the accessible functions. The model then determines whether it needs to call a function.

If it does, your code executes the function and sends the result back to the model. The model then generates a user-facing message based on the function's response.

This functionality allows for a versatile interaction pattern. For instance, if you ask the model to "Fetch the latest news, book a cab for tonight, and set a reminder for a meeting tomorrow" while providing the corresponding functions, it may sequentially call these functions and then generate a user-facing message summarizing the results.

If you want to compel the model to generate a user-facing message, you can do so by setting function_call: "none" (this is also the default behavior when no functions are provided).

Understanding and utilizing GPT functions can unlock the vast potential of these advanced AI models, facilitating more sophisticated and interactive applications.

Practical Implementation: A JavaScript Code Example

To make the concept of GPT functions more tangible, let's walk through a JavaScript (Node.js) code example. Consider a scenario where we have a function fetchNewsHeadline, which, in production, could connect to your internal or external API:

const fetch = require("node-fetch");
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

// Placeholder endpoint; in production this would be your internal or external news API.
async function fetchNewsHeadline(topic, count = 5) {
  const response = await fetch(
    `https://example.com/api/headlines?topic=${encodeURIComponent(topic)}&count=${count}`
  );
  const headlines = await response.json();
  return JSON.stringify(headlines);
}

async function runConversation() {
  // Step 1: call the model with the user query and the available functions.
  const initialResponse = await openai.createChatCompletion({
    model: "gpt-3.5-turbo-0613",
    messages: [
      {
        role: "user",
        content: "What are the top news headlines about technology?",
      },
    ],
    functions: [
      {
        name: "fetchNewsHeadline",
        description: "Fetch the top news headlines for a given topic",
        parameters: {
          type: "object",
          properties: {
            topic: {
              type: "string",
              description: "The topic to fetch headlines for",
            },
            count: {
              type: "integer",
              description: "The number of headlines to fetch",
            },
          },
          required: ["topic"],
        },
      },
    ],
    function_call: "auto",
  });

  const initialMessage = initialResponse.data.choices[0].message;

  // Step 2 and 3: if the model requested a function call, parse the
  // stringified JSON arguments and invoke the function ourselves.
  if (initialMessage.function_call) {
    const functionName = initialMessage.function_call.name;
    const functionArgs = JSON.parse(initialMessage.function_call.arguments);
    const functionResponse = await fetchNewsHeadline(
      functionArgs.topic,
      functionArgs.count
    );

    // Step 4: send the function result back so the model can summarize it.
    const secondResponse = await openai.createChatCompletion({
      model: "gpt-3.5-turbo-0613",
      messages: [
        {
          role: "user",
          content: "What are the top news headlines about technology?",
        },
        initialMessage,
        { role: "function", name: functionName, content: functionResponse },
      ],
    });

    console.log(secondResponse.data.choices[0].message.content);
  }
}

runConversation();

In this example, we initiate the interaction with the GPT model by asking it for the top news headlines about technology. If the model decides to call the fetchNewsHeadline function, it will use the arguments specified in the function call to get the relevant news headlines. The response from the function is then sent back to the model, which can generate a user-facing message summarizing the headlines.

Advanced Usage and Considerations

As observed in the JavaScript example, GPT functions provide a versatile means for enabling back-and-forth interactions with the GPT models. For instance, given an intricate command like "Fetch the latest tech news, schedule a meeting for tomorrow, and remind me to check the stock market," and assuming corresponding functions are provided, the model might opt to call these functions one after the other. Only after completing these tasks might it compile a user-oriented message summarizing the actions taken.

In your application, the function_call field offers a degree of control over how the GPT model interacts with the function calls. This parameter can be set to several different states depending on your needs.

  • If set to none, the model will not call any function, and it will generate a direct response to the user. This is the default mode when no functions are provided.

  • On the other hand, if function_call is set to auto, the model can decide whether to call a function or respond directly to the user. This flexible mode is the default setting when one or more functions are defined.

  • For more targeted control, you can instruct the model to call a specific function by setting function_call as an object with the name of the function, such as {"name": "my_function"}. In this case, the model will call the specified function, regardless of the user query.
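In request terms, the three settings above look like this (the function name is illustrative):

```javascript
// The three possible values of the function_call field in a
// chat completion request.
const neverCall = { function_call: "none" }; // always respond directly to the user
const modelDecides = { function_call: "auto" }; // default when functions are provided
const forceCall = { function_call: { name: "fetch_news_headline" } }; // always call this function

console.log(forceCall.function_call.name); // "fetch_news_headline"
```

Forcing a specific function is useful when your application already knows a query must be routed through a particular tool, regardless of how the user phrased it.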

This field provides valuable flexibility in shaping the AI's interactions. By adjusting the function_call parameter, you can fine-tune the balance between automated function calling and direct user responses, tailoring your application to best meet its users' needs.

The integration of GPT functions in your applications paves the way for dynamic, context-aware interactions. Chatbots, virtual assistants, content management systems, data analysis software — the possibilities are only bounded by creativity!

A Word on Safeguards

While the power and flexibility of GPT functions are impressive, it's crucial to include safeguards in your applications. Actions that may impact the real world — such as posting content online, making transactions, or sending emails — should require user confirmation before being executed. Ensuring this additional layer of validation guards against unintentional actions and enhances the trustworthiness of the AI system.


Understanding GPT functions is a vital step in leveraging the power of advanced AI models like gpt-3.5-turbo-0613 and gpt-4-0613. These models, when armed with the right functions, can transform natural language queries into structured API calls, create sophisticated chatbots, and extract valuable data from text — heralding a new era of AI-enabled applications.

By diving into the intricacies of GPT functions, we can unlock the vast potential of these AI models and create solutions that bring us closer to the future of seamless human-AI interactions.
