Summary
This article walks through a small AI chat agent built with async functions. The agent uses `fetch` to call an OpenAI-compatible chat completions endpoint: the `loader` function extracts the `text` query parameter from the request URL, and `runCompletion` sends that text to the `gpt-3.5-turbo` model and returns the reply. The example shows how little code a basic AI chat agent needs, and the structure is easy to follow.
async function runCompletion(messages: string) {
  // Call the OpenAI-compatible chat completions endpoint with the user's text.
  const response = await fetch(
    "https://api.openai-proxy.com/v1/chat/completions",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: "Bearer " + process.env.OPENAI_API_KEY,
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: messages }],
      }),
    }
  ).then((res) => res.json());
  // Return the assistant's reply from the first choice.
  return response.choices[0].message.content;
}

export async function loader({ request }: { request: Request }) {
  // Extract the "text" query parameter from the request URL and hand it to the model.
  const url = new URL(request.url);
  const text = url.searchParams.get("text")!;
  return runCompletion(text);
}