This slipped past me 16 hours ago – well, giving 4 keynote talks in the past 2 days at 4 different locations might have something to do with it.
ANYWAYS!
Apple has introduced a new family of open-source large language models called OpenELM (Open-source Efficient Language Models). Ranging from roughly 270M to 3B parameters, these models are small enough to run locally on devices rather than relying on cloud-based server processing. This fits Apple’s broader strategy of building advanced AI capabilities directly into its devices, with benefits for both privacy and latency.
I guess Apple took a very good look at where things are heading and built around that. Maybe a year from now, cloud-based generative AI will be a thing of the past.
What’s your point of view on this? Any experts around who would like to enlighten me? I am genuinely curious!