Calling the Model
Now that we've instantiated a new model object from the OpenAI class and verified that
it is functional, it's time to pass in prompts!
We’ll start by creating an asynchronous function named promptFunc() inside our server.js file and testing it:
const promptFunc = async (input) => {
  try {
    const res = await model.invoke(input);
    return res;
  } catch (err) {
    console.error(err);
    throw err;
  }
};

// Test
promptFunc("How do you capitalize all characters of a string in JavaScript?").then(
  (res) => console.log(res)
);
Inside the promptFunc() function, we use a try/catch statement to handle any errors that might arise when calling the model.
Within the try block, we create a new variable, res, that holds the value returned by the model's .invoke() method, to which we've passed the test question, "How do you capitalize all characters of a string in JavaScript?" Because .invoke() returns a promise, we await it before returning the result.
When we run the script using node server.js, it may take a moment, but the result
should be the answer to the question as well as an example!
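Because promptFunc() returns a promise, another way to run this one-off test is to wrap the call in an async IIFE and await the result; a small sketch:
// Alternative test: await the result inside an async IIFE
(async () => {
  const answer = await promptFunc(
    "How do you capitalize all characters of a string in JavaScript?"
  );
  console.log(answer);
})();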
Before moving on, remove the test call to promptFunc() from server.js.
What if a user of our application wanted to ask a different coding question? Instead of having the user open server.js and edit the question themselves, we need a way to capture their input and make a call based on that input. To do this, we'll need a POST route to help us out!
POST Route for User Input
We want to use our API POST route to interact with the user. Let's begin by setting up the express and body-parser modules in our project: we require them as dependencies, create an Express app, and register the JSON-parsing middleware. At the beginning of server.js, we add the following:
const express = require('express');
const bodyParser = require('body-parser');
const app = express();
const port = 3000;
// Middleware to parse JSON requests
app.use(bodyParser.json());
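As a side note, Express 4.16 and later ships with a built-in JSON parser, so the same middleware can be registered without body-parser if you prefer:
// Built-in alternative to body-parser (Express 4.16+)
app.use(express.json());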
Next, we define a POST route that will handle the user interaction. For inspiration, we refer back to the ChatGPT code we generated earlier and derive the following:
// Endpoint to handle request
app.post('/ask', async (req, res) => {
  try {
    const userQuestion = req.body.question;
    if (!userQuestion) {
      return res.status(400).json({ error: 'Please provide a question in the request body.' });
    }
    const result = await promptFunc(userQuestion);
    res.json({ result });
  } catch (error) {
    console.error('Error:', error.message);
    res.status(500).json({ error: 'Internal Server Error' });
  }
});
At the end of the file, we also add the following, which tells Express to listen for requests on the specified port:
// Start the server
app.listen(port, () => {
  console.log(`Server is running on http://localhost:${port}`);
});
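If you ever need the port to be configurable (when deploying, for example), a common variation is to read it from the environment and fall back to 3000; we'll stick with the hard-coded value in this walkthrough:
// Optional variation: let a PORT environment variable override the default
const port = process.env.PORT || 3000;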
Up to this point, your code should look like the following:
const { OpenAI } = require("@langchain/openai");
require('dotenv').config();
const express = require('express');
const bodyParser = require('body-parser');
const app = express();
const port = 3000;
// Middleware to parse JSON requests
app.use(bodyParser.json());
const model = new OpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  temperature: 0,
  model: 'gpt-3.5-turbo'
});
const promptFunc = async (input) => {
  try {
    const res = await model.invoke(input);
    return res;
  } catch (err) {
    console.error(err);
    throw err;
  }
};

// Endpoint to handle request
app.post('/ask', async (req, res) => {
  try {
    const userQuestion = req.body.question;
    if (!userQuestion) {
      return res.status(400).json({ error: 'Please provide a question in the request body.' });
    }
    const result = await promptFunc(userQuestion);
    console.log(result);
    res.json({ result });
  } catch (error) {
    console.error('Error:', error.message);
    res.status(500).json({ error: 'Internal Server Error' });
  }
});
// Start the server
app.listen(port, () => {
  console.log(`Server is running on http://localhost:${port}`);
});
Now when we use node server.js to run our application, we are presented with the
message "Server is running on http://localhost:3000". Use Insomnia to verify that
the POST route at http://localhost:3000/ask is working as expected.
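If you'd rather test from code than from Insomnia, a quick check with the built-in fetch (available globally in Node 18 and later, as well as in browsers) might look like this; the question text is just a placeholder:
// Send a JSON body with a "question" property to the /ask endpoint
fetch('http://localhost:3000/ask', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ question: 'How do you capitalize all characters of a string in JavaScript?' })
})
  .then((response) => response.json())
  .then((data) => console.log(data.result))
  .catch((err) => console.error(err));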
Moving forward, we'll learn more about LangChain, including how to use prompt
templates and output parsers to make our AI-powered summary generator application
more expandable and user-friendly!
Feel free to use the LangChain JavaScript documentation to learn more about
using LangChain specifically with JavaScript!