Pocket AI models could unlock a new era of computing

When ChatGPT launched in November 2022, it was only accessible via the cloud because the model behind it was downright huge.

Today I’m running a similarly capable AI program on a MacBook Air, and it doesn’t even get hot. The shrinkage shows how quickly researchers are refining AI models to make them leaner and more efficient. It also shows that scaling to ever-larger sizes is not the only way to make machines significantly smarter.

The model that now infuses my laptop with ChatGPT-like wit and wisdom is called Phi-3-mini. It is part of a family of smaller AI models recently released by Microsoft researchers. Although it’s compact enough to run on a smartphone, I tested it by running it on a laptop and accessing it from an iPhone through an app that provides a chat interface similar to the official ChatGPT app.
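A local setup along these lines can be sketched with an open-model runner such as Ollama, which distributes Phi-3-mini under the tag `phi3`. This is one way to do it, not necessarily the exact tooling used here, and the prompt text is just an illustration:

```shell
# Download the Phi-3-mini weights (Ollama's "phi3" tag) to the local machine.
ollama pull phi3

# Chat with the model entirely offline; no cloud API is involved.
ollama run phi3 "Why can small language models run well on a laptop?"

# Ollama also exposes a local HTTP API, which is what chat-interface
# apps on the same network can point at instead of a cloud endpoint.
curl http://localhost:11434/api/generate \
  -d '{"model": "phi3", "prompt": "Hello", "stream": false}'
```

Once the weights are pulled, everything runs on-device, which is exactly the responsiveness and privacy upside the article describes.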

In a paper describing the Phi-3 family of models, Microsoft researchers say that the model I used compares favorably to GPT-3.5, the OpenAI model behind the first release of ChatGPT. That claim is based on its performance on several standard AI benchmarks designed to measure common sense and reasoning. In my own testing, it certainly seems just as capable.

Will Knight via Microsoft

Microsoft announced a new “multimodal” Phi-3 model capable of handling audio, video, and text at its annual developer conference, Build, this week. That came just days after OpenAI and Google touted radical new AI assistants built on multimodal models accessed through the cloud.

Microsoft’s Lilliputian family of AI models suggests that it’s increasingly possible to build all kinds of useful AI applications that don’t rely on the cloud. That could open up new use cases by allowing those applications to be more responsive and more private. (Offline algorithms are a key piece of Recall, the feature Microsoft announced that uses artificial intelligence to make everything you did on your PC searchable.)

But the Phi family also reveals something about the nature of modern AI, and perhaps how it can be improved. Sébastien Bubeck, a Microsoft researcher involved in the project, tells me the models were built to test whether being more selective about what an AI system is trained on could provide a way to fine-tune its abilities.

The large language models that power chatbots and other services, like OpenAI’s GPT-4 or Google’s Gemini, are typically trained on huge amounts of text pulled from books, websites, and almost any other accessible source. Although the practice has raised legal questions, OpenAI and others have found that increasing the amount of text fed to these models, and the amount of computing power used to train them, can unlock new capabilities.
