# │ai-agents-mentals


01/28/2024, 10:23 PM
| Natural Language Computer | Digital Computer              |
|---------------------------|-------------------------------|
| LLM                       | CPU ALU / compute unit        |
| LLM cache                 | CPU cache                     |
| Context                   | Processor registers           |
| Prompt                    | Code block                    |
| Inference                 | Executing a code block on the CPU |
The context window is probably the analogue of the processor registers: it is integrated into the model and directly connected to the compute unit, just as registers are wired into the CPU. A VectorDB acts as a second memory layer. So far there is no RAM equivalent for LLMs; it will probably be something similar to a VectorDB, but with semantic addressing of the data within that memory.
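The idea of "semantic addressing" can be sketched in a few lines. This is a toy illustration, not a real VectorDB: the `embed` function is a stand-in bag-of-words hack (an assumption for demonstration; a real system would use a learned embedding model), and `SemanticMemory` just does a brute-force nearest-neighbor lookup by cosine similarity. The point is the interface: unlike RAM, you read by *meaning*, not by numeric address.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. A real system would
    # call an embedding model here (assumption for illustration).
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class SemanticMemory:
    """Content-addressable store: read by meaning, not by address."""

    def __init__(self):
        self.items = []  # list of (vector, payload) pairs

    def write(self, text: str) -> None:
        self.items.append((embed(text), text))

    def read(self, query: str) -> str:
        # "Semantic addressing": return the stored item whose
        # vector is closest to the query vector.
        q = embed(query)
        return max(self.items, key=lambda item: cosine(item[0], q))[1]


mem = SemanticMemory()
mem.write("the user prefers dark mode")
mem.write("the build failed because of a missing header")
print(mem.read("the build broke"))
# → the build failed because of a missing header
```

A production VectorDB replaces the brute-force `max` scan with an approximate-nearest-neighbor index so lookups stay fast at scale, but the addressing model is the same.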