Large Language Models are Zero-Shot Reasoners (May 2022)

improves the zero-shot performance of LLMs by adding “Let’s think step by step” before each answer
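
A minimal sketch of the paper's two-stage prompt (reason, then extract the answer). The trigger phrases are from the paper; the OpenAI client, model name, and function name are illustrative assumptions, not part of the original work:

```python
# Minimal sketch of two-stage zero-shot chain-of-thought prompting.
# The trigger phrases come from the paper; the OpenAI client and
# model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def zero_shot_cot(question: str, model: str = "gpt-4o-mini") -> str:
    # Stage 1: elicit step-by-step reasoning with the trigger phrase.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": reasoning_prompt}],
    ).choices[0].message.content

    # Stage 2: extract the final answer from the generated reasoning.
    answer_prompt = f"{reasoning_prompt}\n{reasoning}\nTherefore, the answer is"
    return client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": answer_prompt}],
    ).choices[0].message.content
```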
improves encoder/decoder alignment with an attention mechanism
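
A minimal NumPy sketch of additive (Bahdanau-style) alignment scoring, assuming that is the attention variant meant here; the weight matrices, shapes, and names are illustrative:

```python
# Minimal sketch of additive attention for aligning a decoder state
# with encoder states; all weights and shapes are illustrative.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(decoder_state, encoder_states, W_dec, W_enc, v):
    # Score each source position against the current decoder state.
    scores = np.array([v @ np.tanh(W_dec @ decoder_state + W_enc @ h)
                       for h in encoder_states])
    weights = softmax(scores)        # alignment distribution over source positions
    return weights @ encoder_states  # context vector: weighted sum of encoder states
```

The context vector is fed to the decoder at each step, letting it focus on the most relevant source positions instead of a single fixed encoding.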
a prompting and fine-tuning method that enables LLMs to engage in a "thinking" process before generating responses
a comprehensive evaluation of o1-preview across many tasks and domains.
LLMs can both help and hinder learning outcomes
a paper showing that a model must see a concept exponentially more often in training to achieve linear improvements in performance
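
Stated as a rough relationship (a hedged paraphrase of that log-linear finding, with illustrative constants a and b): linear gains on the left require exponential growth in frequency on the right.

```latex
% A paraphrase of the log-linear relationship; a and b are illustrative.
\text{performance} \approx a \cdot \log(\text{concept frequency}) + b
```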
a data visualization that uses squares on a 2D grid to represent proportions
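
This matches what is commonly called a waffle chart; a minimal matplotlib sketch, where the 10x10 grid, the function name, and the coloring are illustrative choices:

```python
# Minimal waffle-chart sketch with matplotlib; grid size, function
# name, and category coloring are illustrative choices.
import matplotlib.pyplot as plt
import numpy as np

def waffle(proportions, rows=10, cols=10):
    total = rows * cols
    # Give each category a number of cells proportional to its share.
    counts = [round(p * total) for p in proportions]
    cells = np.repeat(np.arange(len(proportions)), counts)[:total]
    cells = np.pad(cells, (0, total - len(cells)), constant_values=-1)
    plt.matshow(cells.reshape(rows, cols))
    plt.axis("off")
    plt.show()

# Example: waffle([0.45, 0.30, 0.25]) fills 100 squares split ~45/30/25.
```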
a method of computing a token representation that includes the context of surrounding tokens.
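
A minimal sketch of such contextual (as opposed to static) token embeddings using Hugging Face transformers; the BERT checkpoint is an illustrative choice:

```python
# Minimal sketch of contextual token embeddings with Hugging Face
# transformers; the BERT checkpoint is an illustrative choice.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def contextual_embeddings(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # One vector per token, each conditioned on the whole sentence, so the
    # same word gets different representations in different contexts.
    return outputs.last_hidden_state[0]
```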