ChatOpenAI
OpenAI is an artificial intelligence (AI) research laboratory.
This guide will help you get started with ChatOpenAI chat models. For detailed documentation of all ChatOpenAI features and configurations, head to the API reference.
Overview
Integration details
Class | Package | Local | Serializable | PY support |
---|---|---|---|---|
ChatOpenAI | @langchain/openai | ❌ | ✅ | ✅ |
Model features
See the links in the table headers below for guides on how to use specific features.
Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|
✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ |
Setup
To access OpenAI chat models, you'll need to create an OpenAI account, get an API key, and install the @langchain/openai integration package.
Credentials
Head to OpenAI's website to sign up for OpenAI and generate an API key. Once you've done this, set the OPENAI_API_KEY environment variable:
export OPENAI_API_KEY="your-api-key"
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_API_KEY="your-api-key"
Installation
The LangChain ChatOpenAI
integration lives in the @langchain/openai
package:
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Instantiation
Now we can instantiate our model object and generate chat completions:
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
// other params...
});
Invocation
const aiMsg = await llm.invoke([
[
"system",
"You are a helpful assistant that translates English to French. Translate the user sentence.",
],
["human", "I love programming."],
]);
aiMsg;
AIMessage {
"id": "chatcmpl-9rB4GvhlRb0x3hxupLBQYOKKmTxvV",
"content": "J'adore la programmation.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 31,
"totalTokens": 39
},
"finish_reason": "stop"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 31,
"output_tokens": 8,
"total_tokens": 39
}
}
console.log(aiMsg.content);
J'adore la programmation.
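The features table above also lists token-level streaming. As a minimal sketch using the llm instance from above, you can consume partial chunks with .stream:

```typescript
const stream = await llm.stream("Write a one-line haiku about programming.");

for await (const chunk of stream) {
  // Each chunk is an AIMessageChunk carrying a partial piece of the content.
  console.log(chunk.content);
}
```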
Chaining
We can chain our model with a prompt template like so:
import { ChatPromptTemplate } from "@langchain/core/prompts";
const prompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant that translates {input_language} to {output_language}.",
],
["human", "{input}"],
]);
const chain = prompt.pipe(llm);
await chain.invoke({
input_language: "English",
output_language: "German",
input: "I love programming.",
});
AIMessage {
"id": "chatcmpl-9rB4JD9rVBLzTuMee9AabulowEH0d",
"content": "Ich liebe das Programmieren.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 6,
"promptTokens": 26,
"totalTokens": 32
},
"finish_reason": "stop"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 26,
"output_tokens": 6,
"total_tokens": 32
}
}
Custom URLs
You can customize the base URL the SDK sends requests to by passing a
configuration
parameter like this:
import { ChatOpenAI } from "@langchain/openai";
const llmWithCustomURL = new ChatOpenAI({
temperature: 0.9,
configuration: {
baseURL: "https://your_custom_url.com",
},
});
await llmWithCustomURL.invoke("Hi there!");
You can also pass any other ClientOptions parameters accepted by the official OpenAI SDK in the same configuration object.
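For example, a minimal sketch setting a request timeout and a default header (the header name and value here are hypothetical, for illustration only):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llmWithClientOptions = new ChatOpenAI({
  model: "gpt-4o",
  configuration: {
    timeout: 10000, // abort requests that take longer than 10 seconds
    defaultHeaders: {
      "X-My-Proxy-Token": "some-token", // hypothetical header
    },
  },
});

await llmWithClientOptions.invoke("Hi there!");
```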
If you are hosting on Azure OpenAI, see the dedicated page instead.
Calling fine-tuned models
You can call fine-tuned OpenAI models by passing the corresponding model name as the model parameter.
This generally takes the form of ft:{OPENAI_MODEL_NAME}:{ORG_NAME}::{MODEL_ID}. For example:
import { ChatOpenAI } from "@langchain/openai";
const fineTunedLlm = new ChatOpenAI({
temperature: 0.9,
model: "ft:gpt-3.5-turbo-0613:{ORG_NAME}::{MODEL_ID}",
});
await fineTunedLlm.invoke("Hi there!");
Generation metadata
If you need additional information like logprobs or token usage, these
will be returned directly in the .invoke
response within the
response_metadata
field on the message.
Requires @langchain/core
version >=0.1.48.
import { ChatOpenAI } from "@langchain/openai";
// See https://cookbook.openai.com/examples/using_logprobs for details
const llmWithLogprobs = new ChatOpenAI({
logprobs: true,
// topLogprobs: 5,
});
const responseMessageWithLogprobs = await llmWithLogprobs.invoke("Hi there!");
console.dir(responseMessageWithLogprobs.response_metadata.logprobs, {
depth: null,
});
{
content: [
{
token: 'Hello',
logprob: -0.0005151443,
bytes: [ 72, 101, 108, 108, 111 ],
top_logprobs: []
},
{
token: '!',
logprob: -0.00004334534,
bytes: [ 33 ],
top_logprobs: []
},
{
token: ' How',
logprob: -0.000035477897,
bytes: [ 32, 72, 111, 119 ],
top_logprobs: []
},
{
token: ' can',
logprob: -0.0006658526,
bytes: [ 32, 99, 97, 110 ],
top_logprobs: []
},
{
token: ' I',
logprob: -0.0000010280384,
bytes: [ 32, 73 ],
top_logprobs: []
},
{
token: ' assist',
logprob: -0.10124119,
bytes: [
32, 97, 115,
115, 105, 115,
116
],
top_logprobs: []
},
{
token: ' you',
logprob: -5.5122365e-7,
bytes: [ 32, 121, 111, 117 ],
top_logprobs: []
},
{
token: ' today',
logprob: -0.000052643223,
bytes: [ 32, 116, 111, 100, 97, 121 ],
top_logprobs: []
},
{
token: '?',
logprob: -0.000012352386,
bytes: [ 63 ],
top_logprobs: []
}
]
}
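To also see the most likely alternative tokens at each position, you can set the topLogprobs field (shown commented out above). A minimal sketch:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llmWithTopLogprobs = new ChatOpenAI({
  logprobs: true,
  topLogprobs: 3, // request the 3 most likely tokens at each position
});

const msg = await llmWithTopLogprobs.invoke("Hi there!");

// Each entry's top_logprobs array should now contain up to 3 candidates.
console.dir(msg.response_metadata.logprobs.content[0].top_logprobs, {
  depth: null,
});
```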
Tool calling
Tool calling with OpenAI models works similarly to other models; a minimal example follows the list below. Additionally, the following guides have some information especially relevant to OpenAI:
- How to: disable parallel tool calling
- How to: force a tool call
- How to: bind model-specific tool formats to a model
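As a minimal sketch (the add tool and its schema are hypothetical, written in OpenAI's function-tool format), binding a tool and inspecting the resulting tool calls looks like this:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });

// A hypothetical tool defined in OpenAI's function-tool format.
const addTool = {
  type: "function" as const,
  function: {
    name: "add",
    description: "Add two numbers together.",
    parameters: {
      type: "object",
      properties: {
        a: { type: "number", description: "The first number" },
        b: { type: "number", description: "The second number" },
      },
      required: ["a", "b"],
    },
  },
};

const llmWithTools = llm.bindTools([addTool]);

const response = await llmWithTools.invoke("What is 2 + 3?");

// Any tool invocations the model requests appear on the tool_calls field.
console.log(response.tool_calls);
```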
API reference
For detailed documentation of all ChatOpenAI features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html
Related
- Chat model conceptual guide
- Chat model how-to guides