Connect Node to OpenAI
Learn how to connect a Node application to OpenAI using the openai npm package. Specifically, we will use the GPT-3.5 Turbo model to generate text using the Chat Completions API.
Table of Contents 📖
- What is OpenAI?
- Account Setup and API Key Generation
- Payment and Tokens
- Project Setup and Library Installation
- Querying the GPT-3.5 Turbo Model
What is OpenAI?
OpenAI is an artificial intelligence research organization that provides services such as ChatGPT, Codex, and the OpenAI API. To connect a Node application to OpenAI, we need to use the OpenAI API. This API allows developers to interact with OpenAI by providing input to a specified model. For example, we can provide text to DALL·E 3 (an image model) to generate an image, an image to GPT-4 Turbo (a multimodal language model) to generate a text response, or an MP3 file to Whisper (an audio model) to generate a transcription.
Account Setup and API Key Generation
To use the OpenAI API, you need an OpenAI account. You can either sign up or log in using the following URLs.
Login - https://platform.openai.com/login
Signup - https://platform.openai.com/signup
Next, you need to generate an API Key. You can generate an API Key using the following URL.
https://platform.openai.com/api-keys
An API Key is used to identify the user, or project, calling the API. API Keys should be kept secret as they are tied to your account's usage and billing; if someone steals your API Key, you will be billed for their usage. When the API Key is generated, make sure to store it somewhere safe, as you won't be able to view it again. If you lose it, you will need to generate a new one.
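A common approach is to expose the key to your application as an environment variable instead of hard-coding it, which is what the code later in this article expects. For example, in a Unix-like shell (the value below is just a placeholder):
export OPENAI_API_KEY="your-api-key-here"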
Payment and Tokens
When you create an account, your account tier is set to free. This means you have 5 dollars of credit to spend on the OpenAI API before you have to switch to a paid account. If you don't use the 5 dollars within 3 months, the credit expires. OpenAI bills API usage in tokens. Tokens are chunks of text (roughly four characters, or about three-quarters of an English word) and are divided into input and output tokens. Different models provide different capabilities and hence different price points per token. All of these details are available at the following URL: https://openai.com/pricing. Here are some examples (a rough cost calculation follows the list):
gpt-4 - $10 / 1M Input Tokens : $30 / 1M Output Tokens
gpt-3.5-turbo-0125 - $0.50 / 1M Input Tokens : $1.50 / 1M Output Tokens
gpt-3.5-turbo-instruct - $1.50 / 1M Input Tokens : $2.00 / 1M Output Tokens
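As a rough worked example using the gpt-3.5-turbo prices above and the token counts from the sample response later in this article, a request with 56 input tokens and 12 output tokens costs a small fraction of a cent:
// Rough cost estimate for a single request, based on the prices listed above.
const inputPricePerToken = 0.50 / 1_000_000;   // $0.50 per 1M input tokens
const outputPricePerToken = 1.50 / 1_000_000;  // $1.50 per 1M output tokens
const promptTokens = 56;       // input tokens from the sample response below
const completionTokens = 12;   // output tokens from the sample response below
const cost = promptTokens * inputPricePerToken + completionTokens * outputPricePerToken;
console.log(cost); // ≈ 0.000046 dollars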
You can set a budget and quota limits for your account. If you exceed your quota, calls to the API will return an error similar to the following:
RateLimitError: 429 You exceeded your current quota, please check your plan and billing details.
For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.
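If you want to handle this case in code, the openai client throws typed error classes that you can check for. Here is a minimal sketch, assuming v4 of the openai package (the prompt is just a placeholder):
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

try {
  await openai.chat.completions.create({
    messages: [{ "role": "user", "content": "Hello!" }],
    model: "gpt-3.5-turbo",
  });
} catch (error) {
  if (error instanceof OpenAI.RateLimitError) {
    // 429 - quota or rate limit exceeded
    console.error(error.status, error.message);
  } else {
    throw error;
  }
}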
Project Setup and Library Installation
Now let's set up our Node project. First, let's initialize an empty directory as an ES6 (ES module) project using npm init es6 -y.
npm init es6 -y
OpenAI provides a JavaScript client for their API, called openai. It can be installed from npm.
npm i openai
Now let's set the entry point of our project to our src/index.js file in package.json and also create a simple start script.
"main": "./src/index.js",
...
...
"scripts": {
"start": "node ."
}
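After these edits, your package.json should look roughly like the following (the name, version, and dependency version are placeholders and will differ in your project; the important parts are "type": "module", "main", and the start script). Create the src/index.js file now, as the code in the next section goes there.
{
  "name": "connect-node-to-openai",
  "version": "1.0.0",
  "type": "module",
  "main": "./src/index.js",
  "scripts": {
    "start": "node ."
  },
  "dependencies": {
    "openai": "^4.0.0"
  }
}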
Querying the GPT-3.5 Turbo Model
Now let's write some code to interact with the OpenAI API. First, we will import the openai library and create a client object from it.
import OpenAI from 'openai';
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});
The API Key here is the one we generated earlier. Note that it is best practice to supply secrets such as API Keys through environment variables (for example from a .env file); you should not place them directly in the code. Now we can query the OpenAI API with the openai object. There are many different models and APIs to choose from; for this demonstration we will use GPT-3.5 Turbo, a language model.
async function main() {
  // Send a short multi-turn conversation to the Chat Completions API.
  const completionResp = await openai.chat.completions.create({
    messages: [
      { "role": "system", "content": "You are helpful but sassy about it!" },
      { "role": "user", "content": "Who won the world series in 2020?" },
      { "role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020." },
      { "role": "user", "content": "Where was it played?" }
    ],
    model: "gpt-3.5-turbo",
  });
  return completionResp;
}

main()
  .then((data) => console.log(data))
  .catch((error) => console.error(error));
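If you don't want to export OPENAI_API_KEY in your shell every time, one option is to load it from a .env file. This is a minimal sketch assuming you install the third-party dotenv package (npm i dotenv) and create a .env file in the project root containing OPENAI_API_KEY=<your key>:
import 'dotenv/config'; // loads the variables from .env into process.env
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});
Remember to add .env to your .gitignore so the key is never committed.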
Language models are trained to understand natural language, code, and images, and to output text in response. The Chat Completions API is used to generate text based on a list of messages. Here, we are supplying a conversation between a user and GPT-3.5 Turbo.
- messages - A list of messages comprising the conversation so far. The system role sets the behavior/personality of the assistant. The user is the one asking questions and the assistant is the model responding.
- model - The ID of the model to use. GPT-3.5 Turbo is an improved version of GPT-3.5.
Running the application logs a Chat Completion object similar to the following:
{
  "id": "chatcmpl-9GsQfjTH0Ya8k99AML10NWdPrdAzz",
  "object": "chat.completion",
  "created": 1713809501,
  "model": "gpt-3.5-turbo-0125",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "It was played at Globe Life Field in Arlington, Texas."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 56,
    "completion_tokens": 12,
    "total_tokens": 68
  },
  "system_fingerprint": "fp_c2295e73ad"
}
Here are some important properties of the Chat Completions object:
- choices - A list of chat completion choices. Each choice contains a message object that holds the model's reply (see the snippet after this list).
- usage - Usage statistics for the request, i.e. how many tokens were used.
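In most cases you only care about the assistant's reply text, which lives in the first choice. For example, you could replace the then handler above with something like the following:
// Log just the assistant's reply from the Chat Completion object.
main()
  .then((data) => console.log(data.choices[0].message.content))
  .catch((error) => console.error(error));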