Stanford's Center for Research on Foundation Models announced last week that its researchers had built a low-cost imitation of OpenAI's GPT API for roughly $600. The team fine-tuned Meta's LLaMA 7B large language model (LLM) on instruction-following demonstrations generated with OpenAI's GPT API, producing Alpaca AI, a model that exhibits many behaviors similar to OpenAI's text-davinci-003, part of the GPT-3.5 family. The researchers spent less than $500 on OpenAI's API to generate the training data and less than $100 on compute to fine-tune LLaMA. Despite the far lower cost, Alpaca's outputs are typically shorter than text-davinci-003's, and in the team's evaluation it performs comparably well.
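The recipe described above has two stages: use OpenAI's API to generate instruction/response demonstrations, then fine-tune LLaMA 7B on them. The sketch below illustrates the general idea, not Stanford's actual code or settings: the prompt, hyperparameters, tiny sample size, and file paths are illustrative assumptions, it uses the pre-1.0 `openai` client together with Hugging Face `transformers`/`datasets`, and the LLaMA weights themselves must be obtained separately from Meta.

```python
import json
import openai                                  # pre-1.0 openai-python client (assumption)
from datasets import Dataset
from transformers import (DataCollatorForLanguageModeling, LlamaForCausalLM,
                          LlamaTokenizer, Trainer, TrainingArguments)

openai.api_key = "YOUR_OPENAI_KEY"             # placeholder

# --- Stage 1: generate instruction-following demonstrations with text-davinci-003 ---
PROMPT = (
    "Write one new task instruction and a high-quality response.\n"
    "Format:\nInstruction: <task>\nResponse: <answer>\n"
)

def generate_demo() -> dict:
    """Ask text-davinci-003 for one instruction/response pair (self-instruct style)."""
    resp = openai.Completion.create(
        model="text-davinci-003", prompt=PROMPT, max_tokens=512, temperature=1.0
    )
    text = resp["choices"][0]["text"]
    instruction, _, response = text.partition("Response:")
    return {"instruction": instruction.replace("Instruction:", "").strip(),
            "output": response.strip()}

demos = [generate_demo() for _ in range(10)]   # Alpaca used ~52K examples
with open("demos.json", "w") as f:
    json.dump(demos, f)

# --- Stage 2: supervised fine-tuning of LLaMA 7B on the demonstrations ---
llama_path = "path/to/llama-7b"                # hypothetical path to Meta-provided weights
tokenizer = LlamaTokenizer.from_pretrained(llama_path)
tokenizer.pad_token = tokenizer.eos_token      # LLaMA defines no pad token by default
model = LlamaForCausalLM.from_pretrained(llama_path)

def tokenize(example):
    # Assumed prompt template; format the pair as a single training sequence.
    text = (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}")
    return tokenizer(text, truncation=True, max_length=512)

ds = Dataset.from_list(demos)
ds = ds.map(tokenize, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="alpaca-sketch", per_device_train_batch_size=1,
                           num_train_epochs=3, learning_rate=2e-5),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice a run like this would need tens of thousands of demonstrations, multiple GPUs, and mixed-precision training to stay within the reported budget; the point of the sketch is only how cheap the overall pipeline is, since the expensive pretraining has already been done in the base LLaMA model.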
As several machine learning enthusiasts have noted, the release of Alpaca AI from Stanford's Center for Research on Foundation Models fell at the beginning of what some have called "the most eventful week AI has ever seen." Even as big firms like OpenAI continue to make significant advances, the release of Alpaca AI is a promising development for researchers and smaller enterprises that want to approximate the GPT API's results without breaking the bank.
Stanford's researchers have demonstrated that much of the behavior of OpenAI's GPT API, previously seen as out of reach for anyone without cutting-edge resources, can be approximated on a small budget. Alpaca's success shows that smaller organizations can also make meaningful advances in AI. While Alpaca still has its limitations, its creation is a promising development for the field and for future research.