In my internal talks about the use cases of generative AI, I never tire of predicting that within the next six months, each of us will have a personally trained AI assistant running locally on our own hardware.
This development has far-reaching consequences for how companies offer services, as they may have to brace for bot-guided customer interactions on a massive scale.
My prediction stems from observing what happened last year with generative AI images and technologies like Stable Diffusion: we watched everything around image and video generation shift from cloud-based services to local devices. It’s not rocket science to extrapolate from there to generative text and its respective models.
Perhaps I was even too conservative; we may not have to wait a full six months. The exponential speed at which open-source communities build on top of these models promises a rapid, radical shift in how we interact with each other, with businesses, and with AI-run systems.
It’s time to rethink everything, folks.