https://turingpi.com logo
#│ai-agents-mentals
Title
# │ai-agents-mentals
j

jufik

02/04/2024, 2:06 PM
For prompt injection considering your LLM is not fine tuned, best I've found yet is a mix of Embedding (to "cache" previous attacks), Fined tuned BERT and forbidding some langages (if you allow chinese, you're pretty much fucked) on the user prompt side. On the reponse side, content similarity with initial prompt. That all needs to be tweaked , but I guess the value here is on the chain rather than prompts themselves...