2 Matching Annotations
May 2023
greshake.github.io
Prompt Injections are bad, mkay?
kael
14 May 2023, in Public
Tags: llm, prompt injection, security, wikipedia:en=Large_language_model, wikipedia:en=Prompt_engineering, cito:cites=doi:10.48550/arXiv.2302.12173
URL: greshake.github.io/
Apr 2023
arxiv.org
More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models
kael
17 Apr 2023, in Public
Tags: llm, security, prompt injection, wikipedia:en=Large_language_model, wikipedia:en=Prompt_engineering, doi:10.48550/arXiv.2302.12173
URL: arxiv.org/abs/2302.12173