
Chris Barber, a Staff Software Engineer at Harper and a recognized voice in the JavaScript and AI communities, recently outlined six essential methods he employs for crafting effective prompts for artificial intelligence models. His advice, shared on social media, emphasizes clarity, structure, and iterative refinement to optimize AI outputs. These strategies align closely with broader industry best practices for interacting with large language models (LLMs).
"Prompt writing methods I sometimes use: 1. Use separators between different sections of the prompt (===, +++, <xml> tags) 2. Be very specific about what you want the output to contain 3. Be very specific about desired output format 4. Give examples (e.g. 3x input:output pairs) 5. Instead of saying what not to do, say what to do instead ('make it concise' instead of 'don't make it long' or 'instead of making it long, make it concise') 6. Break complex prompts into sub-steps," Barber stated in his tweet. These methods aim to guide AI models toward more precise and relevant responses.
Barber's first tip, using separators, helps delineate the different sections of a prompt, a technique widely recommended by AI developers to improve model comprehension. This structural approach keeps instructions, context, and examples clearly distinguished, reducing the chance the model conflates them. Guidance from OpenAI and Google Cloud likewise stresses clear, specific instructions: supply detailed context, state the desired outcome, and spell out the expected format.
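The separator technique can be sketched as a small prompt-builder. The section labels, delimiter strings, and task text below are illustrative choices, not Barber's exact prompts:

```python
def build_prompt(instructions: str, context: str, examples: str) -> str:
    """Assemble a prompt whose sections are fenced off with === separators,
    so the model can tell instructions, context, and examples apart."""
    return (
        "=== INSTRUCTIONS ===\n"
        f"{instructions}\n"
        "=== CONTEXT ===\n"
        f"{context}\n"
        "=== EXAMPLES ===\n"
        f"{examples}\n"
    )


def build_prompt_xml(instructions: str, context: str) -> str:
    """XML-style tags serve the same purpose and are equally easy to parse."""
    return (
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<context>\n{context}\n</context>"
    )


prompt = build_prompt(
    "Summarize the article in two sentences.",
    "Chris Barber shared six prompt-writing methods on social media.",
    "input: long article -> output: two-sentence summary",
)
```

Either delimiter style works; what matters is that the boundaries between sections are unambiguous and used consistently throughout the prompt.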
The recommendation to be very specific about both output content and format is a cornerstone of effective prompt engineering. Providing explicit examples, such as input:output pairs, further reinforces the desired behavior and style, allowing the model to learn patterns and generate consistent results. This "few-shot prompting" technique is a well-established method for enhancing the accuracy and relevance of AI-generated content.
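A few-shot prompt with three input:output pairs, as Barber suggests, might look like the following sketch. The task (sentiment labeling) and the example pairs are invented for illustration:

```python
# Three illustrative input:output pairs; the model infers both the task
# and the exact output format from them.
FEW_SHOT_PAIRS = [
    ("The update fixed every crash I had.", "positive"),
    ("The app deletes my drafts constantly.", "negative"),
    ("It launches, I suppose.", "neutral"),
]


def few_shot_prompt(new_input: str) -> str:
    """Show the model the example pairs, then the new input, ending at
    'output:' so the model's completion lands in the demonstrated format."""
    shots = "\n".join(f"input: {i}\noutput: {o}" for i, o in FEW_SHOT_PAIRS)
    return f"{shots}\ninput: {new_input}\noutput:"


p = few_shot_prompt("Great docs, terrible search.")
```

Ending the prompt at `output:` nudges the model to continue the established pattern rather than respond in free-form prose.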
Barber also advises focusing on positive instructions ("say what to do instead") rather than negative constraints. This approach minimizes ambiguity and guides the AI directly towards the desired action, a practice echoed in various prompt engineering guides that suggest using clear, actionable language. Breaking down complex prompts into smaller, manageable sub-steps is another critical strategy, enabling the AI to process information sequentially and tackle intricate tasks more effectively.
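Tips 5 and 6 can be combined in one sketch: phrase instructions positively and enumerate sub-steps explicitly. The debugging task and the step wording here are assumptions chosen for illustration:

```python
# Illustrative sub-steps for a complex task, stated as positive actions
# ("restate", "list", "propose") rather than prohibitions.
SUB_STEPS = [
    "Read the bug report and restate the problem in one sentence.",
    "List the components most likely involved.",
    "Propose a fix for each component, in order of likelihood.",
]


def stepwise_prompt(task: str) -> str:
    """Positive phrasing ('make it concise' rather than 'don't make it
    long') plus numbered sub-steps the model can work through in order."""
    steps = "\n".join(f"{n}. {s}" for n, s in enumerate(SUB_STEPS, 1))
    return f"{task} Make your answer concise.\nWork through these steps:\n{steps}"


p = stepwise_prompt("Debug this crash.")
```

Numbering the sub-steps gives the model a sequence to follow and makes it easy to check whether each step was actually addressed in the response.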
While prompt engineering remains vital for harnessing AI's potential, the field is continuously evolving. Discussions around "context engineering" highlight the increasing importance of managing the entire informational state provided to LLMs, particularly for multi-turn interactions. Despite some debate about the long-term future of "prompt engineer" as a dedicated role, the underlying skills of crafting precise and effective AI instructions, as championed by Barber, continue to be indispensable for maximizing the utility of advanced AI systems.