Context Window


The maximum number of tokens a model can process in a single request, counting both the input (prompt) and the generated output. For example, GPT-4o has a 128K-token context window, Claude 3.5 Sonnet has 200K, and Gemini 1.5 Pro supports up to 1M.
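Because the window covers input and output together, a request only fits if the prompt tokens plus the tokens reserved for the reply stay under the limit. A minimal sketch of that budget check, using a rough ~4-characters-per-token heuristic (an assumption for illustration; real APIs count with model-specific tokenizers):

```python
# Approximate published context-window sizes, in tokens.
CONTEXT_WINDOWS = {
    "gpt-4o": 128_000,
    "claude-3.5-sonnet": 200_000,
    "gemini-1.5-pro": 1_000_000,
}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (heuristic)."""
    return max(1, len(text) // 4)

def fits_context(model: str, prompt: str, max_output_tokens: int) -> bool:
    """True if prompt tokens + reserved output tokens fit the window."""
    window = CONTEXT_WINDOWS[model]
    return estimate_tokens(prompt) + max_output_tokens <= window

print(fits_context("gpt-4o", "Summarize this report. " * 100, 4_096))
```

The same prompt can fit one model and overflow another, which is why long-document workflows often pick a model by window size first.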
