LLaMA Distilled: How GPT-4 Fuels Fine-Tuning Ingenuity

LLaMA models sparked this new practice of “data distillation”: generate training data with GPT-4, then fine-tune on it. The ingenuity of this partly non-academic community is just amazing. Even with no access to the models and little compute, they keep marching on!
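Roughly, the recipe is: prompt GPT-4 for responses to a set of instructions, collect the instruction/response pairs, then fine-tune a LLaMA checkpoint on them. Here is a minimal sketch of the data-collection half, assuming the OpenAI Python client; the seed instructions and output file name are just illustrative.

```python
# Sketch of "data distillation": ask GPT-4 for answers to seed instructions
# and save instruction/response pairs for later fine-tuning of a LLaMA model
# (e.g. with Hugging Face transformers + PEFT/LoRA).
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

seed_instructions = [
    "Explain the difference between a list and a tuple in Python.",
    "Summarize the plot of Moby-Dick in two sentences.",
]

with open("distilled_data.jsonl", "w") as f:
    for instruction in seed_instructions:
        # GPT-4 acts as the "teacher" producing the target response.
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": instruction}],
        )
        answer = response.choices[0].message.content
        # Store pairs in the Alpaca-style format most instruction-tuning
        # scripts expect.
        f.write(json.dumps({"instruction": instruction, "output": answer}) + "\n")
```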

https://agi-sphere.com/llama-models/