This is the rendered collection of notes by me, Lex Toumbourou.
You can find the source on the GitHub project.
The notes are collected using my interpretation of the Zettelkasten method.
- Few-Shot Knowledge-Distillation
  Routes LLM tasks to cheaper or more powerful models based on task novelty.
- Large Language Models are Zero-Shot Reasoners (May 2022)
  improves zero-shot prompt performance of LLMs by adding "Let's think step by step" before each answer.
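The technique can be sketched in a few lines. The trigger phrase itself is from the paper; the Q/A template, the `build_zero_shot_cot_prompt` helper, and the `complete` placeholder are my own illustrative assumptions, not the authors' code.

```python
# Minimal sketch of zero-shot chain-of-thought prompting.
# The key idea is simply appending a reasoning trigger to the question
# before sending the prompt to an LLM.
COT_TRIGGER = "Let's think step by step."


def build_zero_shot_cot_prompt(question: str) -> str:
    """Append the reasoning trigger so the model explains before answering."""
    return f"Q: {question}\nA: {COT_TRIGGER}"


def complete(prompt: str) -> str:
    """Placeholder for a call to an actual LLM completion API."""
    raise NotImplementedError


prompt = build_zero_shot_cot_prompt("If I have 3 apples and buy 2 more, how many do I have?")
print(prompt)
```

In the paper this is a two-stage process: the model first generates reasoning from the trigger, then a second prompt extracts the final answer from that reasoning.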
- Neural Machine Translation by Jointly Learning to Align and Translate (Sep 2014)
  improves Encoder/Decoder alignment with an Attention Mechanism.
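A toy sketch of the additive attention idea from that paper: score each encoder state against the decoder state, softmax the scores into alignment weights, and take a weighted sum as the context vector. The scalar parameters `w_dec`, `w_enc`, and `v` stand in for the learned matrices and vector in the real model, and the example values are arbitrary.

```python
import math


def additive_attention(decoder_state, encoder_states, w_dec, w_enc, v):
    """Bahdanau-style additive attention over toy scalar states.

    score_i = v * tanh(w_dec * decoder_state + w_enc * h_i), softmaxed
    into alignment weights, then combined into a context value.
    """
    scores = [v * math.tanh(w_dec * decoder_state + w_enc * h) for h in encoder_states]
    exp_scores = [math.exp(s) for s in scores]
    total = sum(exp_scores)
    weights = [e / total for e in exp_scores]  # alignment distribution, sums to 1
    context = sum(w * h for w, h in zip(weights, encoder_states))  # weighted sum
    return weights, context


weights, context = additive_attention(0.5, [1.0, -1.0, 0.3], 0.7, 0.9, 1.2)
print(weights, context)
```

The alignment weights are what let the decoder "attend" to different source positions at each step, rather than compressing the whole source into one fixed vector.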
- Thinking LLMs: General Instruction Following with Thought Generation (Oct 2024)
  a prompting and fine-tuning method that enables LLMs to engage in a "thinking" process before generating responses.
- Evaluation of OpenAI o1: Opportunities and Challenges of AGI
  a comprehensive evaluation of o1-preview across many tasks and domains.
- AI Meets the Classroom: When Does ChatGPT Harm Learning?
  LLMs can both help and hinder learning outcomes.
- No 'Zero-Shot' Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance
  a paper showing that a model needs to see a concept exponentially more often during pretraining to achieve only linear improvements in performance.