Welcome to the second issue of Bold Prompts: the weekly newsletter that sharpens your AI skills, one clever prompt at a time.
Every Tuesday, I’ll send you an advanced prompt plucked straight from a real-world application. Think of these emails as mini-courses in prompt engineering.
Why is this useful?
Prompt engineering is the language we use to interact with AI models — and those interactions allow us to achieve an increasing number of tasks, such as:
Code generation
Data analysis
Content creation
Designing AI agents
Using AI to automate/accelerate these tasks gives you a competitive edge. Not only will you augment your current skills, but you’ll also unlock new ones.
The key? Writing high-quality prompts — and that’s what we do here.
So let’s dive into today’s prompt, inspired by a stroll across a small park here in Paris.
I’d just left home after spending three days on HuggingChat trying out Llama-3-70b, the latest open-weight model released by Meta. It blew my mind for one reason:
Llama-3-70b, available for free on HuggingChat🤗, is better at following instructions than the $20-per-month ChatGPT-4.
It’s not just the instruction following though.
Llama-3-70b’s output itself is on par with GPT-4’s, despite the massive difference in model size: 70 billion parameters versus an estimated ~1.7 trillion.
What’s the trick?
It’s many tricks, and one of them is the training data — both size and quality matter.
“Llama 2 was trained on 2 trillion tokens, Llama 3 was bumped to 15T training dataset, including a lot of attention that went to quality, 4X more code tokens, and 5% non-[English] tokens over 30 languages. (5% is fairly low w.r.t. non-[English]:[English] mix, so certainly this is a mostly English model, but it’s quite nice that it is > 0).”
— Andrej Karpathy, AI researcher who worked at both Tesla and OpenAI.
As I gazed at my favorite tree in the park, I wondered what the release of Llama-3-70b meant for the AI scene. My mind started racing. More capabilities. Super cheap inferences. Fine-tunable models. Faster output. Lots of prompts to try.
And that’s when you crossed my mind. I realized I hadn’t prepared a fresh prompt for this week’s post.
“Okay,” I told myself, taking a deep breath. “Big Brain Time!”
“Big Brain Time” is a phrase I stole from my friend PF. Whenever he needs to go into creative problem-solving mode, he whispers these words to himself.
He’d then lean back and start mumbling possible solutions and semi-random ideas. That would go on for a few minutes before he’d re-emerge with an “Alright, I’ve got something!”
It’s like a magic spell — and it works.
There are many ways to apply “Big Brain Time!” to a given problem, and one of them is to package it inside a clear structure, like a prompt.
Today we’ll write such a prompt and it will:
Help you define your problem.
Generate possible solutions ranging from “grounded” to “crazy.”
Break down your favorite solutions into actionable steps you can apply right away.
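To give you a feel for the shape of it, here is a minimal Python sketch of how a prompt with those three stages might be assembled. The wording and the `big_brain_prompt` helper are illustrative assumptions, not the actual prompt from this issue:

```python
# Illustrative sketch only -- the real prompt in this issue differs.
# The three stages mirror the list above: define, diverge, break down.

def big_brain_prompt(problem: str) -> str:
    """Assemble a three-stage brainstorming prompt (hypothetical wording)."""
    return f"""You are in Big Brain Time: a focused, creative problem-solving mode.

Stage 1 - Define: Restate the problem below in one precise sentence,
listing its constraints and what success looks like.
Problem: {problem}

Stage 2 - Diverge: Propose five solutions on a spectrum from
"grounded" (safe, proven) to "crazy" (unconventional, high-risk).

Stage 3 - Break down: For the two most promising solutions,
list 3-5 actionable steps I can start on today."""

# Example usage with a hypothetical problem statement:
print(big_brain_prompt("My newsletter's open rate is dropping."))
```

The point of the structure is that each stage constrains the next: a sharp problem statement keeps the “crazy” ideas relevant, and the final stage forces the model past brainstorming into concrete steps.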
Ready? Let’s dive in: