
Prompt Engineering

Last edit: Aug 27, 2022, 6:34 PM by Multicore

Prompt Engineering is the practice of designing the inputs to an ML system (often a language model) to get it to produce a particular output.
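As a minimal illustrative sketch of the idea (the function and the sentiment-classification examples below are hypothetical, not drawn from any post listed on this page): the same task can be posed to a language model in many ways, and prompt engineering is largely about choosing the instructions and worked examples that steer the model toward the desired output.

```python
def build_few_shot_prompt(task_description, examples, query):
    """Assemble a few-shot prompt: instructions, worked examples, then the query.

    The trailing "Output:" cue invites the model to complete the pattern
    established by the examples.
    """
    lines = [task_description, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)


# Build a prompt for a toy sentiment-classification task.
prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as Positive or Negative.",
    [("I loved this film.", "Positive"),
     ("The service was terrible.", "Negative")],
    "What a wonderful surprise!",
)
print(prompt)
```

The resulting string would be sent to a model as-is; varying the task description or the choice of examples is the basic lever prompt engineering pulls.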

The Waluigi Effect (mega-post)

Cleo Nardo, Mar 3, 2023, 3:22 AM
634 points
188 comments, 16 min read, LW link

Want to predict/explain/control the output of GPT-4? Then learn about the world, not about transformers.

Cleo Nardo, Mar 16, 2023, 3:08 AM
107 points
26 comments, 5 min read, LW link

Testing PaLM prompts on GPT3

Yitz, Apr 6, 2022, 5:21 AM
103 points
14 comments, 8 min read, LW link

MetaAI: less is less for alignment.

Cleo Nardo, Jun 13, 2023, 2:08 PM
71 points
17 comments, 5 min read, LW link

Extrapolating from Five Words

Gordon Seidoh Worley, Nov 15, 2023, 11:21 PM
40 points
11 comments, 2 min read, LW link

Chess as a case study in hidden capabilities in ChatGPT

AdamYedidia, Aug 19, 2023, 6:35 AM
47 points
32 comments, 6 min read, LW link

What DALL-E 2 can and cannot do

Swimmer963 (Miranda Dixon-Luinenburg), May 1, 2022, 11:51 PM
353 points
303 comments, 9 min read, LW link

Using GPT-3 to augment human intelligence

Henrik Karlsson, Aug 10, 2022, 3:54 PM
52 points
8 comments, 18 min read, LW link
(escapingflatland.substack.com)

Trying out Prompt Engineering on TruthfulQA

Megan Kinniment, Jul 23, 2022, 2:04 AM
10 points
0 comments, 8 min read, LW link

Remarks 1–18 on GPT (compressed)

Cleo Nardo, Mar 20, 2023, 10:27 PM
145 points
35 comments, 31 min read, LW link

The case for becoming a black-box investigator of language models

Buck, May 6, 2022, 2:35 PM
126 points
20 comments, 3 min read, LW link

Tutor-GPT & Pedagogical Reasoning

courtlandleer, Jun 5, 2023, 5:53 PM
26 points
3 comments, 4 min read, LW link

Using ideologically-charged language to get gpt-3.5-turbo to disobey its system prompt: a demo

Milan W, Aug 24, 2024, 12:13 AM
3 points
0 comments, 6 min read, LW link

Using PICT against PastaGPT Jailbreaking

Quentin FEUILLADE--MONTIXI, Feb 9, 2023, 4:30 AM
26 points
0 comments, 9 min read, LW link

Stop posting prompt injections on Twitter and calling it “misalignment”

lc, Feb 19, 2023, 2:21 AM
144 points
9 comments, 1 min read, LW link

[Question] Are nested jailbreaks inevitable?

judson, Mar 17, 2023, 5:43 PM
1 point
0 comments, 1 min read, LW link

Bimodal AI Beliefs

Adam Train, Feb 14, 2025, 6:45 AM
6 points
1 comment, 4 min read, LW link

Please Understand

samhealy, Apr 1, 2024, 12:33 PM
29 points
11 comments, 6 min read, LW link

Does Claude Prioritize Some Prompt Input Channels Over Others?

keltan, Dec 29, 2024, 1:21 AM
9 points
2 comments, 5 min read, LW link

Revealing alignment faking with a single prompt

Florian_Dietz, Jan 29, 2025, 9:01 PM
9 points
5 comments, 4 min read, LW link

Hutter-Prize for Prompts

rokosbasilisk, Mar 24, 2023, 9:26 PM
5 points
10 comments, 1 min read, LW link

You can use GPT-4 to create prompt injections against GPT-4

WitchBOT, Apr 6, 2023, 8:39 PM
87 points
7 comments, 2 min read, LW link

LW is probably not the place for “I asked this LLM (x) and here’s what it said!”, but where is?

lillybaeum, Apr 12, 2023, 10:12 AM
21 points
3 comments, 1 min read, LW link

Readability is mostly a waste of characters

vlad.proex, Apr 21, 2023, 10:05 PM
21 points
7 comments, 3 min read, LW link

LLM keys—A Proposal of a Solution to Prompt Injection Attacks

Peter Hroššo, Dec 7, 2023, 5:36 PM
1 point
2 comments, 1 min read, LW link

DELBERTing as an Adversarial Strategy

Matthew_Opitz, May 12, 2023, 8:09 PM
8 points
3 comments, 5 min read, LW link

$300 for the best sci-fi prompt

RomanS, May 17, 2023, 4:23 AM
40 points
30 comments, 2 min read, LW link