[AI Chat Highlights] Parameters for Calling LLM Model with Python
This is a log of a simple request I sent to the Dolphin 72B model on Venice.ai to learn how to use Venice from my Python programs:
Prompt
What are the best parameters to send to the API for summarization?
import requests

url = "https://api.venice.ai/api/v1/chat/completions"

payload = {
    "model": "llama-3.3-70b",
    "venice_parameters": {"include_venice_system_prompt": False},
    "messages": [
        {
            "role": "system",
            "content": "SYSTEM_PRMPT"
        }
    ],
    "temperature": 0.3
}
headers = {
    "Authorization": "Bearer VENICE_API",
    "Content-Type": "application/json"
}

response = requests.post(url, json=payload, headers=headers)
print(response.text)
Response (by Dolphin 72B)
The best parameters for the summarization API are:

- model: "llama-3.3-70b" - the model used for summarization.
- temperature: 0.3 - controls the randomness of the output. A lower temperature produces a more focused output, while a higher temperature produces a more diverse output.
- venice_parameters.include_venice_system_prompt: False - controls whether Venice's default system prompt is included in the request. For summarization, this should be set to False.
- messages: the list of messages sent to the API. The first message should be a system message that defines the task the API should perform; the remaining messages should be user messages containing the text to summarize.
Here is an example of how to call the summarization API with these parameters:
import requests

url = "https://api.venice.ai/api/v1/chat/completions"

payload = {
    "model": "llama-3.3-70b",
    "temperature": 0.3,
    "venice_parameters": {"include_venice_system_prompt": False},
    "messages": [
        {
            "role": "system",
            "content": "Summarize the following text:"
        },
        {
            "role": "user",
            "content": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed malesuada, augue non malesuada efficitur, felis sapien feugiat justo, vel aliquet nisi eros sit amet ligula. Nam ut libero at lectus gravida sollicitudin sed in urna. Nullam ac libero in urna sagittis commodo. Sed accumsan, odio ac tincidunt tempus, leo ligula tincidunt purus, ac consectetur velit mi nec eros. Nullam suscipit, quam vel scelerisque pretium, lacus ex ultricies nisi, a luctus dolor purus in neque. Sed nec pretium metus, eu tincidunt orci. Aenean nec magna arcu. Nullam eu magna massa. Sed euismod, enim vel laoreet venenatis, augue tellus tristique nunc, vel commodo nisi lacus non augue. Donec nec arcu magna. Sed auctor, arcu nec auctor fringilla, mauris justo sollicitudin nisi, vel vestibulum eros nisi vitae nisi."
        }
    ]
}
headers = {
    "Authorization": "Bearer VENICE_API",
    "Content-Type": "application/json"
}

response = requests.post(url, json=payload, headers=headers)
print(response.text)
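Printing `response.text` dumps the whole JSON body. Assuming Venice's chat-completions endpoint follows the usual OpenAI-style response schema (a `choices` list whose first entry holds the assistant message), the summary text itself can be pulled out like this; this is a sketch, not confirmed against the log above:

```python
def extract_summary(response_json: dict) -> str:
    """Return the assistant's text from an OpenAI-style chat-completions response.

    Assumes the response has the shape:
    {"choices": [{"message": {"role": "assistant", "content": "..."}}]}
    """
    return response_json["choices"][0]["message"]["content"]


# Demonstrated with a mocked response body (no network call, hypothetical content):
mock = {"choices": [{"message": {"role": "assistant", "content": "A short summary."}}]}
print(extract_summary(mock))  # -> A short summary.
```

In a real script you would call `extract_summary(response.json())` instead of printing the raw text, ideally after checking `response.status_code` first.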
This was only a one-prompt, one-response conversation. It's not up to date, but I thought it would be useful to have on the blockchain.
Thanks for reading.
Posted Using INLEO