mistral-large

text-generation
Uploaded: 17.03.2024

Inference Input Arguments

{
  "messages": <List[Dict]>,
  "temperature": <Optional[float]>,
  "max_tokens": <Optional[int]>,
  "stream": <Optional[bool]>
}
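A minimal sketch of building a request body with this shape. Note that `temperature` is treated as a float here, since sampling temperatures are conventionally fractional values; the exact endpoint and authentication are platform-specific and not shown.

```python
import json

# Request payload following the input-argument schema above.
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."},
    ],
    "temperature": 0.7,  # optional: sampling temperature (fractional, so float)
    "max_tokens": 256,   # optional: cap on the number of generated tokens
    "stream": False,     # optional: set True to receive incremental chunks
}

# Serialize to JSON for the HTTP request body.
body = json.dumps(payload)
print(body)
```

Omitting the three optional fields leaves the request valid; only `messages` is required.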

Inference Output Format

{
  "model": "mistral-large",
  "created": <int>,
  "id": <int>,
  "choices": <List[str]>,
  "object": "chat.completion",
  "usage": <Dict>
}
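A sketch of parsing a response with this shape. The field values below are invented for illustration and are not real API output; the generated text is read from the `choices` list.

```python
import json

# Hypothetical response body matching the output format above
# (all values are made up for demonstration purposes).
raw = """
{
  "model": "mistral-large",
  "created": 1710672000,
  "id": 42,
  "choices": ["Hello! How can I help you today?"],
  "object": "chat.completion",
  "usage": {"prompt_tokens": 9, "completion_tokens": 8, "total_tokens": 17}
}
"""

response = json.loads(raw)

# The generated text is the first entry in "choices";
# "usage" reports token accounting for billing/limits.
reply = response["choices"][0]
print(reply)                              # -> Hello! How can I help you today?
print(response["usage"]["total_tokens"])  # -> 17
```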