# OpenAI components
The `@gensx/openai` package provides OpenAI API-compatible components for GenSX.
## Installation
To install the package, run the following command:
```bash
npm install @gensx/openai
```

## Supported components
| Component | Description |
|---|---|
| `OpenAIProvider` | OpenAI provider that handles configuration and authentication for child components |
| `ChatCompletion` | Component for making chat completions with OpenAI’s models (gpt-4o, gpt-4o-mini, etc.) |
## Reference
### `<OpenAIProvider />`
The OpenAIProvider component initializes and provides an OpenAI client instance to all child components. Any components that use OpenAI’s API need to be wrapped in an OpenAIProvider.
```tsx
import { OpenAIProvider } from "@gensx/openai";

<OpenAIProvider
  apiKey="your-api-key" // Your OpenAI API key
  organization="org-id" // Optional: Your OpenAI organization ID
  baseURL="https://api.openai.com/v1" // Optional: API base URL
/>;
```

By configuring the `baseURL`, you can also use the OpenAIProvider with other OpenAI-compatible APIs like x.AI and Groq.

```tsx
<OpenAIProvider
  apiKey="your-api-key" // Your Groq API key
  baseURL="https://api.groq.com/openai/v1"
/>
```

#### Props
The OpenAIProvider accepts all configuration options from the OpenAI Node.js client library including:
- `apiKey` (required): Your OpenAI API key
- `organization`: Optional organization ID
- `baseURL`: Optional API base URL
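In practice you would usually read the key from the environment rather than hard-coding it. The sketch below assumes the key is exposed via an `OPENAI_API_KEY` environment variable; that variable name is a common convention, not something the package requires.

```tsx
import { OpenAIProvider } from "@gensx/openai";

// Minimal sketch: pull the API key from an environment variable instead of
// hard-coding it. OPENAI_API_KEY is an assumed variable name.
<OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
  {/* child components that call the OpenAI API go here */}
</OpenAIProvider>;
```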
### `<ChatCompletion />`
The ChatCompletion component creates chat completions using OpenAI’s chat models. It must be used within an OpenAIProvider.
```tsx
import { ChatCompletion } from "@gensx/openai";

<ChatCompletion
  model="gpt-4o"
  messages={[
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What's a programmable tree?" },
  ]}
  temperature={0.7}
  stream={true}
/>;
```

#### Props
The ChatCompletion component accepts all parameters from OpenAI’s chat completion API including:
- `model` (required): ID of the model to use (e.g., `"gpt-4o"`, `"gpt-4o-mini"`)
- `messages` (required): Array of messages in the conversation
- `temperature`: Sampling temperature (0-2)
- `stream`: Whether to stream the response
- `maxTokens`: Maximum number of tokens to generate
- `responseFormat`: Format of the response (example: `{ "type": "json_object" }`)
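Putting the two components together, a minimal sketch looks like the following. The model choice and prompt are illustrative, and the API key is again assumed to come from an `OPENAI_API_KEY` environment variable.

```tsx
import { ChatCompletion, OpenAIProvider } from "@gensx/openai";

// Minimal sketch: ChatCompletion runs inside the OpenAIProvider that
// supplies its client configuration.
<OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
  <ChatCompletion
    model="gpt-4o-mini"
    messages={[{ role: "user", content: "Say hello in one sentence." }]}
    temperature={0.7}
  />
</OpenAIProvider>;
```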