PromptWizard
PromptWizard is a self-evolving framework that automates prompt optimization by iteratively refining instructions and in-context examples using feedback from LLMs. It jointly optimizes prompts and examples, incorporates expert reasoning, and adapts dynamically to diverse tasks—delivering high-quality prompts in minutes instead of months.
Creating effective prompts for large language models (LLMs) is crucial but remains a slow, expertise-heavy process, often requiring months of iteration. As models evolve and new tasks emerge, manual prompt engineering becomes increasingly unsustainable. The key challenge is making prompt optimization faster, more scalable, and adaptable across diverse tasks.

PromptWizard: Automating Prompt Optimization for Any Task, Any Model
Three key insights behind PromptWizard (PW):
– Feedback-driven refinement: At its core, PW leverages an iterative feedback loop where the LLM generates, critiques, and refines its own prompts and examples. This continuous improvement mechanism steers successive iterations toward more effective prompts and examples.
– Joint optimization and synthesis of diverse examples: PW generates synthetic examples that are not only robust and diverse but also task-aware. By optimizing prompts and examples together, it ensures they work in tandem to address specific task requirements effectively.
– Self-generated chain-of-thought (CoT) steps: Incorporating CoT reasoning improves the problem-solving capabilities of the model. By using selected few-shot examples, PW generates a detailed reasoning chain for each example, facilitating nuanced and step-by-step problem-solving approaches.
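The feedback-driven loop in the first bullet can be sketched as a simple mutate-critique-keep-best search. This is a minimal illustration, not PromptWizard's actual implementation: the `score` function and `MUTATIONS` list below are toy stand-ins for the LLM critic and the LLM-generated refinements, and all names are hypothetical.

```python
import random

def score(prompt: str) -> float:
    """Toy critic: in PW an LLM critiques candidates; here we just
    reward instructions that mention reasoning cues (case-insensitive)."""
    p = prompt.lower()
    keywords = ("step by step", "examples", "reason")
    return sum(k in p for k in keywords)

# Toy stand-ins for LLM-proposed refinements of the instruction.
MUTATIONS = [
    " Think step by step.",
    " Use the given examples.",
    " Reason carefully before answering.",
]

def refine(seed_prompt: str, rounds: int = 5, rng_seed: int = 0) -> str:
    """Generate -> critique -> refine loop: propose a variant each round
    and keep it only if the critic scores it higher than the incumbent."""
    rng = random.Random(rng_seed)
    best, best_score = seed_prompt, score(seed_prompt)
    for _ in range(rounds):
        candidate = best + rng.choice(MUTATIONS)  # "generate"
        candidate_score = score(candidate)        # "critique"
        if candidate_score > best_score:          # "refine": keep improvements
            best, best_score = candidate, candidate_score
    return best
```

A real system would replace `score` and `MUTATIONS` with model calls and would refine the in-context examples jointly with the instruction, as the second bullet describes; the keep-only-improvements structure of the loop is the point of the sketch.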
