The Art and Science of Crafting Effective Prompts for LLMs

Babar M Bhatti
15 min read · Apr 3, 2023

Prompts are the key that unlocks value from large language models (LLMs). Writing good prompts for LLMs falls somewhere between science and art, hence the term prompt engineering (also known as prompt design or in-context learning).

You may have come across many prompt-writing cheat sheets on social media, and prompt engineering is an active research area with plenty of papers on arxiv.org. With all this information available, I debated whether another article on prompting was needed. Then, in a recent webinar on Generative AI, I was asked: why care about prompt engineering? It may not seem so, but creating quality prompts is non-trivial. There is a lot more to prompt engineering than the uninitiated might expect.

I figured this post could shed light on prompts and also help me bring my scattered notes and bookmarks about prompting onto one page.

Key topics and questions that this article should help with:

  • Why do we need prompt engineering?
  • Prompt engineering/design vs. prompt tuning
  • Prompts for common scenarios, e.g. text completion, classification, summarization
  • What are the guidelines for effective prompts? What have we learned from researchers and successful prompt writers?
  • Planning a prompt: have a clear use case that plays to the strengths of LLMs. You should also think about the risk of incorrect answers, harm (e.g. bias), or unsafe…
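As a taste of the common scenarios listed above, a zero-shot classification prompt can be as simple as a template with the input text and candidate labels slotted in. This is a minimal, illustrative sketch; the template wording and function name are my own, not from the article:

```python
def build_classification_prompt(text: str, labels: list[str]) -> str:
    """Build a zero-shot classification prompt to send to an LLM.

    The instruction, label list, and input are combined into one string;
    the trailing 'Category:' cue nudges the model to answer with a label.
    """
    label_list = ", ".join(labels)
    return (
        f"Classify the following text into one of these categories: {label_list}.\n"
        "Respond with only the category name.\n\n"
        f"Text: {text}\n"
        "Category:"
    )

# Example: sentiment classification of a product review (hypothetical input).
prompt = build_classification_prompt(
    "The battery died after two days of light use.",
    ["positive", "negative", "neutral"],
)
print(prompt)
```

The resulting string is what you would pass to the model as the user message; later sections on guidelines explain why explicit instructions and constrained output formats like this tend to work better than open-ended requests.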

Babar M Bhatti

AI, Machine Learning for Executives, Data Science, Product Management. Co-Founder Dallas-AI.org. Speaker, Author. Former Co-founder @MutualMind