2 Matching Annotations
  1. May 2023
  2. greshake.github.io
    Prompt Injections are bad, mkay?
    1. kael 14 May 2023
      in Public

    Tags

    • security
    • llm
    • cito:cites=doi:10.48550/arXiv.2302.12173
    • prompt injection
    • wikipedia:en=Large_language_model
    • wikipedia:en=Prompt_engineering

    Annotators

    • kael

    URL

    greshake.github.io/
  3. Apr 2023
  4. arxiv.org
    More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models
    1. kael 17 Apr 2023
      in Public

    Tags

    • security
    • llm
    • doi:10.48550/arXiv.2302.12173
    • prompt injection
    • wikipedia:en=Large_language_model
    • wikipedia:en=Prompt_engineering

    Annotators

    • kael

    URL

    arxiv.org/abs/2302.12173
Hypothes.is