
Recursive Language Models (RLMs) aim to break the usual trade-off between context length, accuracy, and cost in large language models. Instead of forcing a model to read a giant prompt in one pass, an RLM treats the prompt as an external environment: the model writes code to inspect pieces of it, and recursively calls itself (or a smaller model) on the fragments it selects. Because only the relevant fragments ever enter a single context window, the model can resolve long-range dependencies in text without a proportional increase in compute. Treating the prompt as an environment also gives the model agency over how it interacts with its input, making long-context processing more flexible and adaptive than a single fixed-length read.
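To make the control flow concrete, here is a minimal sketch of the recursive pattern described above. This is not the authors' implementation: `base_lm`, `rlm`, and the chunking threshold are all illustrative assumptions, and the "leaf model" is a toy stand-in for a real model call.

```python
def base_lm(question: str, context: str) -> str:
    """Hypothetical leaf model: only ever sees a small context.
    Toy behavior: return the context lines that mention the query."""
    hits = [ln for ln in context.splitlines() if question in ln]
    return "; ".join(hits)

def rlm(question: str, prompt: str, max_chunk: int = 200) -> str:
    """Recursive call: if the prompt fits in one window, answer directly;
    otherwise treat it as an environment, split it, recurse, and merge."""
    if len(prompt) <= max_chunk:
        return base_lm(question, prompt)
    mid = len(prompt) // 2
    # Split on a line boundary so no record is cut in half.
    cut = prompt.rfind("\n", 0, mid)
    cut = cut if cut != -1 else mid
    left = rlm(question, prompt[:cut], max_chunk)
    right = rlm(question, prompt[cut + 1:], max_chunk)
    return "; ".join(part for part in (left, right) if part)

# Usage: a "long" prompt that never enters one context window whole.
doc = "\n".join(
    f"record {i}: value {'TARGET' if i == 7 else i}" for i in range(50)
)
print(rlm("TARGET", doc))  # surfaces only the matching record line
```

The key design point the sketch illustrates is that the full prompt is never passed to a single model call; each leaf invocation sees at most `max_chunk` characters, and the recursion tree does the aggregation.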