4 Matching Annotations
May 2023
ai4comm.media.mit.edu
MIT MAS.S68!
kael, 26 May 2023, in Public
Tags: nlp, llm, wikipedia:en=Natural_language_processing, wikipedia:en=Large_language_model
URL: ai4comm.media.mit.edu/
greshake.github.io
Prompt Injections are bad, mkay?
kael, 14 May 2023, in Public
Tags: llm, prompt injection, security, wikipedia:en=Large_language_model, wikipedia:en=Prompt_engineering, cito:cites=doi:10.48550/arXiv.2302.12173
URL: greshake.github.io/
Apr 2023
arxiv.org
Eight Things to Know about Large Language Models
kael, 17 Apr 2023, in Public
Tags: llm, wikipedia:en=Large_language_model, doi:10.48550/arXiv.2304.00612
URL: arxiv.org/abs/2304.00612
arxiv.org
More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models
kael, 17 Apr 2023, in Public
Tags: llm, security, prompt injection, wikipedia:en=Large_language_model, wikipedia:en=Prompt_engineering, doi:10.48550/arXiv.2302.12173
URL: arxiv.org/abs/2302.12173