A Copywriter Walks Into ChatGPT: Lessons from Testing Prompt Frameworks

Last year, I started getting the Superhuman AI newsletter (along with, let’s be honest, way too many others) to stay on top of how fast AI keeps evolving. It quickly became one of my favorite morning rituals — coffee in hand, scrolling through each email, half in awe and half overwhelmed by how much there is to learn and how quickly things change.

AI isn’t just for tech people anymore. And whether creatives like it or not, it’s quickly becoming a part of how we think, write, and create.

A few days ago, the Superhuman newsletter mentioned a free AI Workplace Proficiency Course, and I figured, why not? It looked like it covered things I already knew, but a little refresher never hurts.

Good choice, Daniele.

Turns out, I didn’t know everything. The most useful section for me was “How to Write Prompts.” Sure, anyone can pull up an LLM like ChatGPT and type a question to get an answer (always fact-check, though!), but understanding prompt frameworks helps direct the bot in a more focused way. The result? Smarter, sharper responses. And since I love experimenting with different frameworks (see: every other blog post I’ve written), I decided to put my newfound knowledge to the test: Could prompt frameworks actually change the quality of answers I get from ChatGPT when I’m chasing the same result?

Round 1: No Framework

I started with a totally unstructured, stream-of-consciousness question.
Picture this: I’m sitting in a privacy booth at work (because the open-office concept is so loud — AMIRITE?) and there’s a Carex brand Day-Light lamp glowing in the corner. It’s supposed to help with the winter blues. Cue inspiration.

The question: “There’s a Carex brand day-light lamp in the room where I’m working. How can this light help beat the winter blues?”

The answer:

Takeaway:
If you’re looking for a quick overview of how this lamp supports people with seasonal affective disorder, this type of simple prompt works great. It even got specific about the brand while still giving helpful general info.

Round 2: Using Frameworks

Next, I wanted to see how prompt frameworks could make my query more useful — especially from a copywriter’s perspective. Let’s say I’m brainstorming a pitch idea for The New York Times Wirecutter about this lamp.

Here’s how I used three frameworks I learned from the Superhuman AI course (with a quick Python sketch of the shared template idea after the last one):

RTF (Role–Task–Format)

Prompt:

  • Role: You are a copywriter.
  • Task: Create a writing pitch to send to the editor of The New York Times Wirecutter about the Carex Day-Light Lamp.
  • Format: Keep it under 500 words.

TAG (Task–Action–Goal)

Prompt:

  • Task: I have to create a writing pitch for the editor of The New York Times Wirecutter about the Carex Day-Light Lamp.
  • Action: Give me three quick pitches that are each 200 words or less.
  • Goal: Pitch a unique idea that catches the editor’s eye and leads to a freelance assignment.

RACEF (Role–Action–Context–Example–Format)

Prompt:

  • Role: You are a copywriter.
  • Action: Write a new, unique pitch to send to the editor of The New York Times Wirecutter about the Carex Day-Light Lamp.
  • Context: Wirecutter combines independent research with sometimes over-the-top testing to help readers make confident buying decisions.
  • Example: Avoid repeating existing Wirecutter content — here’s their archive on seasonal affective disorder lamps.
  • Format: Keep it under 200 words.
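
A quick aside for anyone who likes to tinker: all three frameworks are really just templates with labeled slots, so they can be assembled programmatically. Here’s a minimal Python sketch of that idea; the function name and the dictionary are my own invention for illustration, not something from the course.

    # Minimal sketch: a prompt framework is an ordered set of labeled slots,
    # so a prompt can be built by joining "Label: value" lines.
    def build_prompt(slots: dict[str, str]) -> str:
        """Assemble a framework's labeled slots into one prompt string."""
        return "\n".join(f"{label}: {text}" for label, text in slots.items())

    rtf = {
        "Role": "You are a copywriter.",
        "Task": ("Create a writing pitch to send to the editor of "
                 "The New York Times Wirecutter about the Carex Day-Light Lamp."),
        "Format": "Keep it under 500 words.",
    }

    print(build_prompt(rtf))
    # Role: You are a copywriter.
    # Task: Create a writing pitch to send to the editor of ...
    # Format: Keep it under 500 words.

Swapping in TAG or RACEF is just a different dictionary with different labels.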

Takeaways + Next Steps

If I were doing this for real, I’d probably start with RACEF, since it’s the most detailed and best suited for editorial work, especially when pitching. (Story ideas usually need plenty of upfront research before they ever hit an editor’s inbox.)

Of course, if the first answer misses the mark or veers into weird territory, you can always refine it — ask follow-ups, feed it URLs or files, and keep narrowing the focus.
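
(And if you ever wanted to run that refine-and-narrow loop outside the chat window, here’s a rough sketch using OpenAI’s Python SDK. The model name and both prompts are placeholders I made up for illustration, and you’d need an OPENAI_API_KEY set in your environment.)

    # Rough sketch of iterative refinement with OpenAI's Python SDK (openai >= 1.0).
    # Model name and prompt text are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment
    messages = [{"role": "user",
                 "content": "Pitch the Carex Day-Light Lamp to Wirecutter."}]

    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    messages.append({"role": "assistant",
                     "content": reply.choices[0].message.content})

    # Follow up in the same conversation to narrow the focus.
    messages.append({"role": "user",
                     "content": "Shorter, please, and lead with the testing angle."})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(reply.choices[0].message.content)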

Next, I’d vet Wirecutter’s site to make sure this story hasn’t already been covered and confirm it fits their voice and format. (Fun fact: most of their posts use “the royal we.”)

And if the results are almost there but not quite? That’s where the human touch comes in: EDIT, EDIT, EDIT. Clean up the errors, humanize the tone, and make it sound like you.

If anything, this experiment reminded me that curiosity pays off — even when you think you already know the basics.

Now, if you’ll excuse me, I’m off to add one of those lamps to my Christmas list.

Oh, and here's my certificate: