The Animus API provides a variety of AI-driven capabilities, from generating text with our advanced large language models to creating customized content schedules. Whether you are facilitating seamless conversation through predictive responses or building out a comprehensive content calendar, our functionality can support a wide range of feature requirements.
https://api.vivian.animusai.co
Receives a series of messages as input to generate a contextually relevant chat response using a large language model (LLM).
Authorization
The Authorization header is used to authenticate with the API using your API key. The value is of the format Bearer YOUR_KEY_HERE.
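For example, an authenticated request might carry the following headers. This is a minimal sketch in Python: the key value is a placeholder, and the Content-Type header is an assumption based on the JSON request body shown below.

# Sketch: headers for an authenticated Animus API request.
# "YOUR_KEY_HERE" is a placeholder for your actual API key.
# Content-Type is assumed here because the request body is JSON.
headers = {
    "Authorization": "Bearer YOUR_KEY_HERE",
    "Content-Type": "application/json",
}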
messages
A chronological list of messages that compose the current conversation.
temperature
Adjusts randomness in the response generation, with lower values yielding more predictable responses.
top_p
Restricts sampling to the smallest set of tokens whose cumulative probability exceeds this threshold (nucleus sampling), influencing diversity.
n
Number of alternate responses to generate.
max_tokens
Caps the number of tokens in the generated response. There is no fixed default, so model-specific limits apply.
stop
A set of strings which, when generated, signal the model to cease response generation.
presence_penalty
Adjusts the likelihood of tokens based on whether they already appear in the text; positive values discourage repetition.
frequency_penalty
Penalizes tokens in proportion to how often they have already appeared in the text, encouraging diversity.
best_of
Generates several completions server-side and returns the best. The definition of "best" depends on model and settings.
top_k
Limits consideration at each step to the k most likely tokens; smaller values make output more predictable, larger values allow more variety.
repetition_penalty
Modifies the likelihood of repeating tokens based on their previous occurrences, counteracting the model's tendency to repeat itself.
min_p
Sets a minimum probability threshold for tokens to be considered for generation, further filtering the possible outputs.
length_penalty
Adjusts the impact of sequence length on selection, encouraging shorter or longer responses.
Example request body:
{
  "messages": [
    {
      "content": "You are having a conversation with a friend",
      "role": "system"
    },
    {
      "content": "Hey there!",
      "role": "user"
    }
  ],
  "n": 1,
  "max_tokens": 150,
  "stop": [
    "<|im_end|>"
  ],
  "presence_penalty": 1,
  "frequency_penalty": 1,
  "top_k": 40,
  "repetition_penalty": 1
}
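As an illustration, the example body above could be sent with Python's requests library. This is a hedged sketch, not a definitive integration: only the base URL is documented on this page, so the "/chat/completions" path and the timeout value are assumptions made for the example.

# Minimal sketch of posting the example request body to the Animus API.
# Assumptions: the "/chat/completions" path is illustrative only, and
# "YOUR_KEY_HERE" is a placeholder for a real API key.
import requests

BASE_URL = "https://api.vivian.animusai.co"
API_KEY = "YOUR_KEY_HERE"

payload = {
    "messages": [
        {"content": "You are having a conversation with a friend", "role": "system"},
        {"content": "Hey there!", "role": "user"},
    ],
    "n": 1,
    "max_tokens": 150,
    "stop": ["<|im_end|>"],
    "presence_penalty": 1,
    "frequency_penalty": 1,
    "top_k": 40,
    "repetition_penalty": 1,
}

response = requests.post(
    f"{BASE_URL}/chat/completions",  # assumed path; check the endpoint reference
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())

Sampling parameters such as temperature, top_p, or min_p could be added to the same payload to tune how varied the generated responses are.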