LLaMA

LLaMA stands for Large Language Model Meta AI.

Llama 2

Llama 2 is a collection of pretrained and fine-tuned LLMs with 7, 13, and 70 billion parameters (a 34B variant was trained but not released).
Llama 2-Chat is a fine-tuned version of Llama 2 optimized for dialogue use cases.
The models were trained on 2 trillion tokens using the standard transformer architecture.
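
For reference, a minimal sketch of running the 7B chat variant with the Hugging Face transformers library. The meta-llama checkpoints are gated on the Hub, so this assumes access has already been requested and granted:

# Minimal sketch: load and query Llama 2-Chat via Hugging Face transformers.
# Assumes access to the gated meta-llama repositories and a local login
# (huggingface-cli login).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # 13B/70B follow the same naming scheme
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What is Llama 2?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))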

See also

Alpaca is a training recipe based on the LLaMA 7B model that uses the Self-Instruct method of instruction tuning to acquire capabilities comparable to those of the GPT-3.5-series text-davinci-003 model at modest cost.
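
A minimal sketch of the instruction/input/output prompt format Alpaca fine-tunes on. The templates below follow the stanford_alpaca release; treat the exact wording as approximate:

# Sketch of the Alpaca-style prompt format used for instruction tuning.
# Each training example is a dict with "instruction", "input" (optional
# context, may be empty), and "output" fields, as in the 52K
# Self-Instruct-generated dataset.

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_example(example: dict) -> str:
    # Render one example into the full training string: prompt plus target output.
    if example.get("input"):
        prompt = PROMPT_WITH_INPUT.format(**example)
    else:
        prompt = PROMPT_NO_INPUT.format(instruction=example["instruction"])
    return prompt + example["output"]

print(format_example({
    "instruction": "Give three tips for staying healthy.",
    "input": "",
    "output": "1. Eat a balanced diet. 2. Exercise regularly. 3. Sleep well.",
}))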
