Defending Against LLM Attacks

  • Treat APIs given to LLMs as publicly accessible. Users can effectively call any API the LLM can reach, so enforce authentication and access control in the API itself rather than relying on the model to act as a gatekeeper (see the sketch after this list).

  • Don't feed LLMs sensitive data. Anything the model receives, whether in training data, prompts, or retrieved context, may later be exposed to an attacker.

  • Don't rely on prompting to block attacks. System prompts that instruct the model to refuse malicious requests can be bypassed with jailbreak prompts.
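
To make the first point concrete, the following is a minimal Python sketch of a tool-call handler in which the API behind an LLM "tool" performs its own authentication and authorization checks, exactly as it would for a direct, unauthenticated caller. The names (Session, delete_account, handle_tool_call) and the permission model are hypothetical and only illustrate the principle; this is not a specific library's API.

```python
from dataclasses import dataclass


@dataclass
class Session:
    """Server-side session established outside the LLM conversation."""
    user_id: str
    permissions: frozenset[str] = frozenset()


def delete_account(session: Session, target_user_id: str) -> str:
    """Backend function exposed to the LLM as a tool.

    The checks run here, on the server, regardless of what the prompt or the
    model says -- the same checks a publicly accessible endpoint would need.
    """
    if "account:delete" not in session.permissions:
        raise PermissionError("caller is not allowed to delete accounts")
    if target_user_id != session.user_id:
        raise PermissionError("callers may only delete their own account")
    # ... perform the deletion against the real data store here ...
    return f"account {target_user_id} deleted"


def handle_tool_call(session: Session, name: str, args: dict) -> str:
    """Dispatch a tool call requested by the LLM.

    The model only supplies the tool name and arguments; identity and
    permissions come from the authenticated session, never from the prompt.
    """
    tools = {"delete_account": delete_account}
    if name not in tools:
        raise ValueError(f"unknown tool: {name}")
    return tools[name](session, **args)


if __name__ == "__main__":
    alice = Session(user_id="alice", permissions=frozenset({"account:delete"}))
    print(handle_tool_call(alice, "delete_account", {"target_user_id": "alice"}))
    try:
        # Even if a prompt injection convinces the model to request this call,
        # the server-side check rejects it.
        handle_tool_call(alice, "delete_account", {"target_user_id": "bob"})
    except PermissionError as exc:
        print("rejected:", exc)
```

The key design choice is that the identity and permission data never pass through the model: the LLM chooses which tool to call, but the server decides whether the call is allowed.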
