Apple has released eight small AI language models designed to run on-device.



In the world of AI, so-called "small language models" have been growing in popularity recently because they can run on a local device rather than requiring data-center-grade computers in the cloud. On Wednesday, Apple introduced a set of open-source AI language models called OpenELM that are small enough to run directly on a smartphone. They're mostly proof-of-concept research models for now, but they could form the basis of future on-device AI offerings from Apple.

Apple's new AI models, collectively named OpenELM for "Open-source Efficient Language Models," are currently available on Hugging Face under the Apple Sample Code License. Because the license carries some restrictions, it may not fit the commonly accepted definition of "open source," but the source code for OpenELM is available.

On Tuesday, we covered Microsoft's Phi-3 models, which aim to achieve something similar: a useful level of language understanding and processing performance in small AI models that can run locally. Phi-3-mini has 3.8 billion parameters, but some of Apple's OpenELM models are much smaller, ranging from 270 million to 3 billion parameters across eight distinct models.

In comparison, Meta's Llama 3 family includes 70 billion parameters in the largest model released yet (with a 400 billion version on the way), and OpenAI's GPT-3 from 2020 shipped with 175 billion parameters. Parameter count serves as a rough measure of AI model capacity and complexity, but recent research has focused on making small AI language models as capable as large models were just a few years ago.

OpenELM's eight models come in two flavors: four "pretrained" (essentially raw, next-token-prediction versions of the models) and four instruction-tuned (fine-tuned to follow instructions, which makes them better suited for building AI assistants and chatbots).
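Since there are four sizes, each in a pretrained and an instruction-tuned flavor, the full set of eight variants can be enumerated programmatically. A minimal sketch follows; the `apple/OpenELM-*` repository names are assumptions used for illustration, not verified Hugging Face listings:

```python
# Sketch: enumerating the eight OpenELM variants by size and flavor.
# The "apple/OpenELM-*" repo names below are assumptions for illustration.
sizes = ["270M", "450M", "1_1B", "3B"]          # four published parameter sizes
flavors = ["", "-Instruct"]                      # pretrained vs. instruction-tuned
models = [f"apple/OpenELM-{size}{flavor}" for size in sizes for flavor in flavors]

for name in models:
    print(name)  # eight model identifiers in total
```

A downstream script could pick one of these identifiers to load with an inference library, using the instruction-tuned flavor for chat-style use.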

OpenELM features a 2,048-token maximum context window. The models were trained on publicly available datasets: RefinedWeb, a deduplicated version of PILE, a subset of RedPajama, and a subset of Dolma v1.6, which Apple says totals about 1.8 trillion tokens of data. Tokens are fragmented representations of data used by AI language models for processing.

Apple says its approach with OpenELM includes a "layer-wise scaling strategy" that reportedly allocates parameters more efficiently across layers, not only saving computational resources but also improving the model's performance when trained on fewer tokens. According to a white paper released by Apple, this strategy enabled OpenELM to achieve a 2.36 percent improvement in accuracy over Allen AI's OLMo 1B (another small language model) while requiring half as many pre-training tokens.
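The intuition behind layer-wise scaling can be sketched in a few lines: instead of giving every transformer layer the same width, widths are varied with depth so the parameter budget is spent where it helps most. The linear formula below is an assumption for illustration and not Apple's published parameterization:

```python
# Toy sketch of a layer-wise scaling strategy. Instead of a uniform
# width for every transformer layer, widths grow linearly with depth,
# allocating parameters non-uniformly. This linear schedule is an
# assumption for illustration, not Apple's actual formula.

def layer_widths(n_layers: int, base: int, alpha_min: float, alpha_max: float) -> list[int]:
    """Return one width per layer, interpolating from alpha_min*base to alpha_max*base."""
    widths = []
    for i in range(n_layers):
        scale = alpha_min + (alpha_max - alpha_min) * i / (n_layers - 1)
        widths.append(int(base * scale))
    return widths

uniform = [512] * 8                       # conventional: same width everywhere
scaled = layer_widths(8, 512, 0.5, 1.5)   # layer-wise: narrow early, wide late
print(scaled)
```

Both allocations use roughly the same total parameter budget, but the scaled version spends less capacity on early layers and more on later ones.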

A table comparing OpenELM to other small AI language models in a similar class, taken from Apple's OpenELM research paper.


Apple also released the code for CoreNet, the library it used to train OpenELM, and included reproducible training recipes that allow the weights (neural network files) to be replicated, which is so far unusual for a major tech company. As Apple puts it in its OpenELM paper abstract, transparency is a key goal for the company: "The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks."

By releasing source code, model weights, and training materials, Apple says it aims to "empower and strengthen the open research community." However, it also warns that because the models were trained on publicly available datasets, "there is a potential for these models to produce outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts."

Although Apple has yet to integrate this new wave of AI language-model capabilities into its consumer devices, the upcoming iOS 18 update (expected to be revealed at WWDC in June) is rumored to include new AI features that use on-device processing to ensure user privacy, though the company may potentially partner with Google or OpenAI to handle more complex off-device AI processing and give Siri a long-overdue boost.

