response = Map();
//
//Reassign the inbound message from the user, which we'll send to ChatGPT
question = message;
//
//
// You need to add your OWN OpenAI token below. Replace the xxxxxxxxxxxxxxxx with your own token
token = "Bearer xxxxxxxxxxxxxxxx";
//
//
header = Map();
header.put("Authorization", token);
header.put("Content-Type", "application/json");
//
//
//Set the params which we'll send to ChatGPT
//Increasing the temperature increases response creativity; decreasing it has the opposite effect
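//The stop sequences tell the model to stop generating as soon as it starts a new " Human:" or " AI:" turn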
params = { "model": "text-davinci-003", "prompt": question, "temperature": 0.4, "top_p": 1,
"max_tokens": 256, "frequency_penalty": 0, "presence_penalty": 0, "stop": { " Human:", " AI:"} };
//
//
// Make the POST request to the OpenAI completions endpoint
fetchCompletions = invokeurl
[
url: "https://api.openai.com/v1/completions"
type: POST
parameters: params.toString()
headers: header
detailed: true
];
//
//
info "Fetch completions: " + fetchCompletions;
//
//
//Check we got a good response from ChatGPT
if (fetchCompletions.get("responseCode") == 200) {
// Grab the answer from ChatGPT - choices is a list, so take the first completion's text
answer = fetchCompletions.get("responseText").get("choices").get(0).get("text");
//
//
//Put the answer into the response map which is what we'll return to the user
response.put("text", answer);
}
else if (fetchCompletions.get("responseCode") == 429)
{
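// A 429 response means we've hit the OpenAI rate limit or quota, so send a friendly fallback message instead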
response = { "text": "Oops! I can't help with this. Try asking something else :wink:" };
}
//
//
// Trigger the response back to the user.
return response;
Yes, but it is not the current ChatGPT version that everyone is raving about. It's the GPT-3 Davinci model (text-davinci-003), which is excellent but not on the same level as GPT-4, which is still in research mode and does not yet have an API connection.
The quality of the answers generally depends on how you phrase the questions and how much context you give, but it's still not on the same level as GPT-4.
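Since the answer quality depends so much on phrasing and context, one option is to frame the incoming message as a short Human/AI exchange before building params, which is what the " Human:" and " AI:" stop sequences in the script are set up for. A minimal sketch, assuming you keep text-davinci-003 and the same params map (the preamble wording is just an example, not part of the original script):
//Optional: give the model some context and frame the question as a Human/AI exchange
//The preamble text below is only an example - adjust it to your own bot's persona
preamble = "The following is a conversation between a Human and a helpful AI assistant.";
question = preamble + " Human: " + message + " AI:";
With this framing, the stop sequences cut the completion off as soon as the model starts a new Human or AI turn, so you get a single answer back rather than an invented conversation.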