
Let's Play With LLMs!


LLM Playground


System Prompt

Options

Select Model:


Temperature

Temperature controls the randomness of the model's responses: a low setting yields more predictable and consistent results, while a high setting allows for greater creativity and variability in the outputs.
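
To make this concrete, here is a minimal sketch of how temperature scaling typically works inside a sampler; the logits and tiny vocabulary are invented for illustration.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from temperature-scaled logits."""
    # Dividing logits by a low temperature sharpens the distribution
    # (more predictable); a high temperature flattens it (more varied).
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# Hypothetical logits over a four-token vocabulary, for illustration only.
logits = [2.0, 1.0, 0.5, 0.1]
print(sample_with_temperature(logits, temperature=0.2))  # almost always index 0
print(sample_with_temperature(logits, temperature=1.5))  # noticeably more varied
```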

Maximum Tokens

Max tokens sets the limit on the number of tokens (subword units, each roughly a short word or word fragment) the model can generate in a single response, controlling the length of the output.
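
As a sketch of what this cap does, a generation loop simply stops after that many tokens; `next_token` here is a hypothetical stand-in for the model's sampler, not a real API.

```python
def generate(prompt_tokens, next_token, max_tokens=16, eos_id=0):
    """Generate until the model emits an end-of-sequence token
    or the max-token cap is reached, whichever comes first."""
    tokens = list(prompt_tokens)
    output = []
    for _ in range(max_tokens):   # hard cap on response length
        tok = next_token(tokens)  # hypothetical call into the model
        if tok == eos_id:         # model decided to stop early
            break
        output.append(tok)
        tokens.append(tok)
    return output
```

Note that the cap counts tokens, not sentences, so a response can be cut off mid-sentence when the limit is lower than what the model would naturally produce.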

Top P

Top_p (nucleus sampling) limits the model to the smallest set of most likely words whose combined probability reaches p, keeping the responses varied yet sensible.
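
Here is a minimal sketch of nucleus (top_p) sampling, assuming a plain probability vector over the vocabulary; the example distribution is invented.

```python
import numpy as np

def top_p_sample(probs, top_p=0.9):
    """Sample from the smallest set of tokens whose total probability >= top_p."""
    probs = np.asarray(probs, dtype=np.float64)
    order = np.argsort(probs)[::-1]                  # most likely tokens first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1  # size of the nucleus
    nucleus = order[:cutoff]
    renormed = probs[nucleus] / probs[nucleus].sum() # renormalize inside nucleus
    return int(np.random.choice(nucleus, p=renormed))

# Illustrative distribution: with top_p=0.9 the least likely token is excluded.
print(top_p_sample([0.5, 0.3, 0.15, 0.05], top_p=0.9))
```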
Streaming Behavior (each option is sketched after the list):


Classic Typewriter

Instant Reveal

Segmented Delivery
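
These three options correspond to three client-side rendering styles. Below is a hypothetical sketch of each, assuming the full response text has already arrived; real streaming would render chunks as they come off the wire.

```python
import sys
import time

def classic_typewriter(text, delay=0.02):
    """Reveal one character at a time, like a typewriter."""
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(delay)
    print()

def instant_reveal(text):
    """Show the whole response at once."""
    print(text)

def segmented_delivery(text, delay=0.15):
    """Reveal the response in word-sized segments."""
    for word in text.split():
        sys.stdout.write(word + " ")
        sys.stdout.flush()
        time.sleep(delay)
    print()
```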