# │ai-agents-mentals
Yeah, I've seen them. I've been following Open Interpreter and a lot of other projects for a long time. What can I say about all of this? It's still too early to deliver what they want to deliver. What will become a natural-language computer is still only taking shape. Many software solutions will eventually be realized in hardware, as has already started to happen with Transformer projects: there may even be devices oriented toward storing and accessing vector data, with hardware controllers for fast similarity search, including embedding.

For now we can emulate such a hypothetical computer (processing + storage) by programmatically connecting different blocks such as a vector DB, an LLM, Stable Diffusion, and text-to-speech, and try to deliver a solution built entirely in natural language from the bottom up: provide a minimum of operations and let the system be extended through natural language. Only this way will we be able to bring natural-language UX to the application level.

Moreover, complex tasks will require the interaction of multiple agents. Extracting more value from LLMs and the surrounding data will require tens of thousands of tokens per second. It would be impossible to keep such systems in the cloud for every person on the planet, with hundreds of agents per task per person. We are again heading toward gradual decentralization, just as happened in the days of mainframes (the 20th century's clouds) and then PCs.

As for the Rabbit R1: looking at such huge tectonic shifts in the field of AI solutions, it is very naive to think that Rabbit and similar products are the iPhone moment in AI (as some people, on emotion, believe).
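The "processing + storage" emulation above can be sketched in a few lines. This is a toy, not a real implementation: `embed` here is a hypothetical stand-in (a character-frequency vector) for an actual embedding model, and `VectorStore` stands in for the vector-DB block with brute-force cosine similarity search. All names are illustrative assumptions, not APIs from any of the projects mentioned.

```python
import math

def embed(text):
    # Toy embedding: normalized character-frequency vector.
    # A real system would call an embedding model here instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Vectors are already unit-length, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    """In-memory stand-in for the hypothetical vector-storage block."""
    def __init__(self):
        self.items = []  # list of (text, embedding) pairs

    def add(self, text):
        self.items.append((text, embed(text)))

    def search(self, query, k=1):
        # Brute-force nearest-neighbor search; hardware controllers
        # would accelerate exactly this step.
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = VectorStore()
store.add("turn on the lights")
store.add("play some music")
print(store.search("lights on"))  # -> ['turn on the lights']
```

Swapping the toy `embed` for a real model and the list for a proper index is what turns this sketch into the software emulation of the hypothetical machine; an LLM block would then consume the retrieved text.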