
Animus API

The Animus API provides a variety of AI-driven capabilities, from generating text with our advanced large language models to creating customized content schedules. Whether it's facilitating seamless conversation through predictive responses or building out a comprehensive content calendar, its functionality can support a wide range of feature requirements.

BASE URL
https://api.vivian.animusai.co

Authentication

Sign in to view and manage your API credentials

Chat

ENDPOINTS
POST /chat/completions

Create Chat Response

Receives a series of messages as input to generate a contextually relevant chat response using a large language model (LLM).

Protected by API Key

Headers

Authorization

required, string

The Authorization header is used to authenticate with the API using your API key. Value is of the format Bearer YOUR_KEY_HERE.
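
For example, assuming Python and the requests library, the header might be set like this (a minimal sketch; YOUR_KEY_HERE is a placeholder for your actual key):

import requests

API_KEY = "YOUR_KEY_HERE"  # placeholder; substitute your actual Animus API key

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# These headers accompany every request, e.g.:
# requests.post("https://api.vivian.animusai.co/chat/completions", headers=headers, json=body)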

Request Body

messages

required, array, default: [{"content":"You are having a conversation with a friend","role":"system"},{"content":"Hey there!","role":"user"}]

A chronological list of messages that compose the current conversation.

temperature

optional, number, default: 1

Adjusts randomness in the response generation, with lower values yielding more predictable responses.

top_p

optional, number, default: 1

Restricts sampling to the smallest set of tokens whose cumulative probability exceeds this threshold (nucleus sampling), influencing diversity.

n

optional, integer, default: 1

Number of alternate responses to generate.

max_tokens

optional, integer, default: 150

Caps the number of tokens in the generated response.

stop

optional, array of strings, default: ["<|im_end|>"]

A set of strings which, when generated, signal the model to cease response generation.

presence_penalty

optional, number, default: 1

Adjusts the likelihood of new tokens based on whether they already appear in the text; positive values discourage repetition.

frequency_penalty

optional, number, default: 1

Penalizes tokens in proportion to how often they have already appeared in the generated text, encouraging diversity.

best_of

optional, integer, default: 1

Generates several completions server-side and returns the best. The definition of "best" depends on model and settings.

top_k

optional, integer, default: 40

Restricts sampling to the k most likely tokens at each step; lower values make output more focused, higher values more varied.

repetition_penalty

optional, number, default: 1

Modifies the likelihood of repeating tokens based on their previous occurrence, counteracting the model's tendency to repeat itself.

min_p

optional, number, default: 0

Sets a minimum probability threshold for tokens to be considered for generation, further filtering the possible outputs.

length_penalty

optional, number, default: 1

Adjusts the impact of sequence length on selection, encouraging shorter or longer responses.
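
The optional sampling parameters above can be combined in a single request body. The sketch below (Python; every value is illustrative only, not a recommended setting) shows several parameters that do not appear in the example body further down:

# Illustrative request body; all values here are examples, not recommendations.
body = {
    "messages": [
        {"role": "system", "content": "You are having a conversation with a friend"},
        {"role": "user", "content": "Hey there!"},
    ],
    "temperature": 0.7,   # below 1 => more predictable responses
    "top_p": 0.9,         # nucleus-sampling threshold
    "min_p": 0.05,        # drop tokens below this probability
    "best_of": 2,         # generate two completions server-side, return the best
    "length_penalty": 1,  # neutral preference for response length
    "max_tokens": 150,
}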

POST /chat/completions

EXAMPLE BODY
{
  "messages": [
    {
      "content": "You are having a conversation with a friend",
      "role": "system"
    },
    {
      "content": "Hey there!",
      "role": "user"
    }
  ],
  "n": 1,
  "max_tokens": 150,
  "stop": [
    "<|im_end|>"
  ],
  "presence_penalty": 1,
  "frequency_penalty": 1,
  "top_k": 40,
  "repetition_penalty": 1
}
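
Putting it all together, a complete call might look like the sketch below (Python with the requests library). The response is assumed here to follow the common chat-completions shape with a choices array; adjust the parsing if the actual response schema differs.

import requests

API_KEY = "YOUR_KEY_HERE"  # placeholder for your Animus API key

url = "https://api.vivian.animusai.co/chat/completions"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
body = {
    "messages": [
        {"role": "system", "content": "You are having a conversation with a friend"},
        {"role": "user", "content": "Hey there!"},
    ],
    "n": 1,
    "max_tokens": 150,
    "stop": ["<|im_end|>"],
    "presence_penalty": 1,
    "frequency_penalty": 1,
    "top_k": 40,
    "repetition_penalty": 1,
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
data = response.json()

# Assumes an OpenAI-style response containing a "choices" array; consult the
# actual response schema if it differs.
print(data["choices"][0]["message"]["content"])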