📄️ 🟢 Chain of Thought Prompting
Chain of Thought (CoT) prompting (@wei2022chain) is a recently developed prompting method that encourages the %%LLM|LLM%% to explain its reasoning before giving a final answer.
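For intuition, here is a minimal sketch (not from the original text) of how a CoT prompt is usually assembled: a few-shot exemplar whose answer spells out the intermediate reasoning is placed before the new question, so the model imitates that reasoning. The exemplar wording is illustrative.

```python
# Minimal Chain-of-Thought sketch: an exemplar whose answer shows its
# reasoning, followed by the question we actually want answered.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the reasoning exemplar so the model explains its steps."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

print(build_cot_prompt(
    "The cafeteria had 23 apples. They used 20 to make lunch and bought 6 more. "
    "How many apples do they have?"
))
```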
📄️ 🟢 Zero Shot Chain of Thought
Zero Shot Chain of Thought (Zero-shot-CoT) prompting (@kojima2022large) is a follow-up to %%CoT|CoT prompting%% that elicits step-by-step reasoning by simply appending a phrase like "Let's think step by step." to the prompt, with no exemplars needed.
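As a minimal sketch, Zero-shot-CoT amounts to appending a reasoning trigger to the question; the question text here is only an example.

```python
def build_zero_shot_cot_prompt(question: str) -> str:
    """Append the Zero-shot-CoT trigger phrase so the model reasons step by step."""
    return f"Q: {question}\nA: Let's think step by step."

print(build_zero_shot_cot_prompt(
    "I bought 10 apples, gave 2 to the neighbor and 2 to the repairman, "
    "then bought 5 more and ate 1. How many apples do I have?"
))
```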
📄️ 🟡 Self-Consistency
Self-consistency (@wang2022selfconsistency) is an approach that simply asks a model the same prompt multiple times and takes the majority result as the final answer. It is a follow-up to %%CoT|CoT prompting%%, and is more powerful when used in conjunction with it.
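The voting step can be sketched as below. The `sample_answer` function is a hypothetical stand-in for one stochastic LLM call (temperature above zero); here it returns canned answers so the majority-vote logic is runnable on its own.

```python
import random
from collections import Counter

def sample_answer(prompt: str) -> str:
    """Stand-in for one stochastic LLM call; replace with a real API call.
    Returns only the final answer extracted from the sampled reasoning path."""
    return random.choice(["9", "9", "8"])

def self_consistency(prompt: str, n_samples: int = 5) -> str:
    """Ask the same prompt several times and keep the majority answer."""
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("Q: ...some CoT prompt...\nA: Let's think step by step."))
```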
📄️ 🟡 Generated Knowledge
The idea behind the generated knowledge approach (@liu2021generated) is to ask the %%LLM|LLM%% to generate potentially useful information about a given question/prompt before generating a final response.
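A two-stage sketch of that idea follows, assuming a hypothetical `query_model` placeholder for whichever LLM API is in use: the first call produces background knowledge, the second call answers with that knowledge prepended.

```python
def query_model(prompt: str) -> str:
    """Hypothetical placeholder for any LLM completion call."""
    raise NotImplementedError("connect this to your LLM API")

def generated_knowledge_answer(question: str) -> str:
    """Stage 1: generate relevant facts. Stage 2: answer using those facts."""
    knowledge = query_model(f"Generate some knowledge relevant to this question:\n{question}")
    return query_model(f"{knowledge}\n\nUsing the information above, answer:\n{question}")
```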
📄️ 🟡 Least to Most Prompting
Least to Most prompting (LtM) (@zhou2022leasttomost) takes %%CoT prompting|CoT prompting%% a step further by first breaking a problem into subproblems and then solving each one in turn. It is a technique inspired by real-world educational strategies for children.
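A rough sketch of the two phases, again with a hypothetical `query_model` placeholder: the model first decomposes the problem, then each subproblem is solved in order, with earlier answers appended to the context for later ones.

```python
def query_model(prompt: str) -> str:
    """Hypothetical placeholder for any LLM completion call."""
    raise NotImplementedError("connect this to your LLM API")

def least_to_most(question: str) -> str:
    """Decompose the problem, then solve each subproblem sequentially."""
    decomposition = query_model(
        f"Break this problem into simpler subproblems, one per line:\n{question}"
    )
    context = question
    answer = ""
    for subproblem in decomposition.splitlines():
        if not subproblem.strip():
            continue
        answer = query_model(f"{context}\n\nSolve this subproblem: {subproblem}")
        context += f"\n{subproblem}\nAnswer: {answer}"
    return answer
```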
📄️ 🟡 Dealing With Long Form Content
Dealing with long-form content can be difficult, as models have limited context lengths. Let's learn some strategies for effectively handling it.
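One common strategy, offered here only as an assumed illustration rather than the chapter's specific recommendation, is to chunk the document so each piece fits in the context window, summarize the chunks, and then work from the combined summaries. `query_model` is again a hypothetical placeholder.

```python
def query_model(prompt: str) -> str:
    """Hypothetical placeholder for any LLM completion call."""
    raise NotImplementedError("connect this to your LLM API")

def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Naive chunking by character count; real pipelines split on
    paragraph or token boundaries instead."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_long_document(document: str) -> str:
    """Summarize each chunk, then merge the per-chunk summaries."""
    summaries = [query_model(f"Summarize:\n{chunk}") for chunk in chunk_text(document)]
    return query_model("Combine these summaries into one coherent summary:\n" + "\n".join(summaries))
```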
📄️ 🟡 Revisiting Roles
Role prompts can do more than set tone or style; in newer models they can also provide an accuracy boost on reasoning tasks.
📄️ 🟢 What's in a Prompt?
When crafting prompts for large language models (LLMs), there are several factors to consider. The format and label space both play crucial roles in the effectiveness of the prompt.
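To make format and label space concrete, here is an illustrative pair of prompts for the same sentiment task (the review text and labels are made up for the example): the scaffolding differs and so does the label vocabulary, and either choice can change how well the model performs.

```python
review = "The battery died after two days."

# Format A: question/answer scaffolding with the label space {positive, negative}.
prompt_a = (
    f"Review: {review}\n"
    "Sentiment (positive or negative):"
)

# Format B: bare instruction with the label space {good, bad}.
prompt_b = (
    "Decide whether the following review is good or bad.\n"
    f"Review: {review}\n"
    "Label:"
)

print(prompt_a)
print(prompt_b)
```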