# ai-agents-mentals
Some interesting parallels between digital computers and Natural Language Computers
```
Natural Language Computer | Digital Computer
--------------------------+----------------------------------
Base components
LLM                       | CPU
LLM Cache                 | CPU Cache
Context                   | RAM
Prompt                    | Code block
Inference                 | Execute code block in CPU

Functions                 | Interrupts
Functions Schema          | Interrupt vector table

VectorDB                  | Disk storage
Data Indexing             | File system indexing
RAG                       | Disk API for Retrieving/Querying

Applications / Services
AI Agent                  | Application / Service
AI Agent State            | Application state (e.g. TERM)
Send/Reply methods        | Inter-process communication (IPC)
Templates? (early idea)   | Executable formats like ELF, PE

Virtual Context           | Swap file / virtual memory (MemGPT)
```
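The Functions ↔ Interrupts row can be sketched in code: the function schema the model sees plays the role of the interrupt vector table, and a dispatcher plays the role of the CPU jumping to a handler. The handler names and call shape below are illustrative, not any particular agent framework's API.

```python
# Toy sketch of Functions <-> Interrupts: a dispatch table maps a
# function name (the "interrupt number") to its handler, and
# dispatch() is the jump through the vector table.

def get_time(payload):
    # Stub handler; a real one would query a clock service.
    return {"time": "12:00"}

def search(payload):
    # Stub handler; a real one would hit a search backend.
    return {"results": [payload["query"]]}

# "Interrupt vector table": function name -> handler
VECTOR_TABLE = {
    "get_time": get_time,
    "search": search,
}

def dispatch(call):
    """Route a model-emitted function call to its handler."""
    handler = VECTOR_TABLE[call["name"]]
    return handler(call.get("arguments", {}))

print(dispatch({"name": "search", "arguments": {"query": "llm os"}}))
# -> {'results': ['llm os']}
```

The model only ever sees the schema (names and argument shapes); the runtime owns the table and performs the actual jump, just as a CPU owns its vector table.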
Sounds like Karpathy
Transformers were initially designed for language translation. I believe the LLM is a new type of kernel that interacts with other models. There are areas where you can increase performance by fine-tuning the model on a more precise data format: LLMs do not always need to output English, they could output any encoding.
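A minimal sketch of the "any encoding" point: instead of prose, the model emits a compact machine-readable format that the runtime parses directly. `llm_complete` is a hypothetical model call, stubbed here so the sketch runs offline.

```python
import json

def llm_complete(prompt):
    # Hypothetical model call, stubbed for illustration. A model
    # fine-tuned on a precise data format could emit JSON (or any
    # other encoding) directly instead of English prose.
    return '{"intent": "set_alarm", "hour": 7, "minute": 30}'

raw = llm_complete("Wake me at 7:30")  # prompt text is illustrative
action = json.loads(raw)               # parse the non-English output
print(action["intent"], action["hour"])
# -> set_alarm 7
```

The win is that the consumer of the output is a program, not a person, so the "language" can be whatever encoding is cheapest to parse and densest in tokens.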
The context window is probably the processor registers: it is integrated into the model (directly connected to the compute unit), just like registers are. The VectorDB is the second memory layer. So far there is no equivalent of RAM for LLMs; it will probably be a solution similar to a VectorDB, but with semantic addressing of the data within that memory.
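The "semantic addressing" idea can be sketched as a memory where a read is keyed by an embedding vector rather than a numeric address, returning the nearest stored entry. The 3-d vectors below are hand-made for illustration; a real system would use a learned embedding model.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy semantically addressed memory: (embedding, payload) pairs.
MEMORY = [
    ((1.0, 0.0, 0.1), "user prefers dark mode"),
    ((0.0, 1.0, 0.1), "meeting moved to 3pm"),
]

def read(query_vec):
    """'Semantic addressing': fetch the entry closest to the query."""
    return max(MEMORY, key=lambda kv: cosine(kv[0], query_vec))[1]

print(read((0.9, 0.1, 0.0)))  # -> user prefers dark mode
```

Unlike RAM, two nearby queries can resolve to the same cell, which is exactly the property a context-refill mechanism would want.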