Prompt Engineering

Last edit: 27 Aug 2022 18:34 UTC by Multicore

Prompt Engineering is the practice of designing the inputs given to an ML system (often a language model) in order to get it to produce a particular desired output.
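For illustration, here is a minimal, self-contained sketch of the idea in Python: the same classification task posed zero-shot versus few-shot. The helper names and example reviews are hypothetical, and the call to an actual model is deliberately left out, since the point is only how the input is structured.

```python
# Minimal sketch of prompt engineering: the same task, two prompt designs.
# Both helpers are hypothetical illustrations; they only build the input
# string that would be sent to a language model.

def zero_shot_prompt(review: str) -> str:
    """Bare instruction: the model must infer the output format itself."""
    return f"Classify the sentiment of this review as positive or negative: {review}"

def few_shot_prompt(review: str) -> str:
    """Worked examples pin down the task and the exact output format,
    which usually makes completions more consistent and easier to parse."""
    return (
        "Review: The battery died after two days.\n"
        "Sentiment: negative\n\n"
        "Review: Exactly what I hoped for; works great.\n"
        "Sentiment: positive\n\n"
        f"Review: {review}\n"
        "Sentiment:"
    )

if __name__ == "__main__":
    print(few_shot_prompt("Shipping was slow, but the product itself is fine."))
```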

The Waluigi Effect (mega-post)

Cleo Nardo, 3 Mar 2023 3:22 UTC
628 points
187 comments, 16 min read, LW link

Want to predict/explain/control the output of GPT-4? Then learn about the world, not about transformers.

Cleo Nardo, 16 Mar 2023 3:08 UTC
105 points
26 comments, 5 min read, LW link

Remarks 1–18 on GPT (compressed)

Cleo Nardo, 20 Mar 2023 22:27 UTC
146 points
35 comments, 31 min read, LW link

Extrapolating from Five Words

Gordon Seidoh Worley, 15 Nov 2023 23:21 UTC
40 points
11 comments, 2 min read, LW link

MetaAI: less is less for alignment.

Cleo Nardo, 13 Jun 2023 14:08 UTC
68 points
17 comments, 5 min read, LW link

Chess as a case study in hidden capabilities in ChatGPT

AdamYedidia, 19 Aug 2023 6:35 UTC
47 points
32 comments, 6 min read, LW link

What DALL-E 2 can and cannot do

Swimmer963 (Miranda Dixon-Luinenburg), 1 May 2022 23:51 UTC
353 points
303 comments, 9 min read, LW link

Using GPT-3 to augment human intelligence

Henrik Karlsson, 10 Aug 2022 15:54 UTC
52 points
8 comments, 18 min read, LW link
(escapingflatland.substack.com)

Trying out Prompt Engineering on TruthfulQA

Megan Kinniment, 23 Jul 2022 2:04 UTC
10 points
0 comments, 8 min read, LW link

The case for becoming a black-box investigator of language models

Buck, 6 May 2022 14:35 UTC
126 points
20 comments, 3 min read, LW link

Testing PaLM prompts on GPT3

Yitz, 6 Apr 2022 5:21 UTC
103 points
14 comments, 8 min read, LW link

Using PICT against PastaGPT Jailbreaking

Quentin FEUILLADE--MONTIXI, 9 Feb 2023 4:30 UTC
17 points
0 comments, 9 min read, LW link

Stop posting prompt injections on Twitter and calling it “misalignment”

lc, 19 Feb 2023 2:21 UTC
144 points
9 comments, 1 min read, LW link

Hello, Elua.

Tamsin Leake, 23 Feb 2023 5:19 UTC
38 points
18 comments, 4 min read, LW link
(carado.moe)

Hutter-Prize for Prompts

rokosbasilisk, 24 Mar 2023 21:26 UTC
5 points
10 comments, 1 min read, LW link

Please Understand

samhealy, 1 Apr 2024 12:33 UTC
29 points
11 comments, 6 min read, LW link

[Question] Are nested jailbreaks inevitable?

judson, 17 Mar 2023 17:43 UTC
1 point
0 comments, 1 min read, LW link

Using ideologically-charged language to get gpt-3.5-turbo to disobey it's system prompt: a demo

Milan W, 24 Aug 2024 0:13 UTC
2 points
0 comments, 6 min read, LW link

You can use GPT-4 to create prompt injections against GPT-4

WitchBOT, 6 Apr 2023 20:39 UTC
87 points
7 comments, 2 min read, LW link

LW is probably not the place for “I asked this LLM (x) and here’s what it said!”, but where is?

lillybaeum, 12 Apr 2023 10:12 UTC
21 points
3 comments, 1 min read, LW link

Readability is mostly a waste of characters

vlad.proex, 21 Apr 2023 22:05 UTC
21 points
7 comments, 3 min read, LW link

LLM keys—A Proposal of a Solution to Prompt Injection Attacks

Peter Hroššo, 7 Dec 2023 17:36 UTC
1 point
2 comments, 1 min read, LW link

DELBERTing as an Adversarial Strategy

Matthew_Opitz, 12 May 2023 20:09 UTC
8 points
3 comments, 5 min read, LW link

$300 for the best sci-fi prompt

RomanS, 17 May 2023 4:23 UTC
40 points
30 comments, 2 min read, LW link

Tutor-GPT & Pedagogical Reasoning

courtlandleer, 5 Jun 2023 17:53 UTC
26 points
3 comments, 4 min read, LW link