Optimizing Instructions and Demonstrations for Multi-Stage Language Model Programs
This paper addresses the challenge of optimizing prompts for multi-stage language model (LM) programs, which are sophisticated pipelines composed of modular LM calls.
The goal is to maximize a downstream metric without requiring access to module-level labels or gradients. The authors propose a novel optimizer called MIPRO (Multi-prompt Instruction PRoposal Optimizer), which employs several strategies to jointly optimize free-form instructions and few-shot demonstrations for each module within the pipeline.
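To make this concrete, below is a minimal sketch of how such a multi-stage program might be optimized with DSPy, where MIPRO is exposed as `MIPROv2` in current releases. The two-stage pipeline, the `search` stub, the metric, the model name, and the tiny training set are all illustrative assumptions rather than the paper's exact setup, and API details may vary across DSPy versions.

```python
import dspy
from dspy.teleprompt import MIPROv2

# Configure the language model shared by all pipeline modules
# (model name is an illustrative assumption).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

def search(query: str) -> str:
    """Hypothetical retrieval helper; swap in a real search backend."""
    return f"Passages retrieved for: {query}"

class MultiHopQA(dspy.Module):
    """A two-stage LM program: generate a search query, then answer."""

    def __init__(self):
        super().__init__()
        self.generate_query = dspy.ChainOfThought("question -> search_query")
        self.generate_answer = dspy.ChainOfThought("question, context -> answer")

    def forward(self, question):
        # Stage 1: produce a search query (no intermediate labels needed).
        query = self.generate_query(question=question).search_query
        # Stage 2: answer using the retrieved context.
        context = search(query)
        return self.generate_answer(question=question, context=context)

# Downstream metric scored only on the final answer.
def exact_match(example, pred, trace=None):
    return example.answer.strip().lower() == pred.answer.strip().lower()

# Tiny illustrative training set; only end-to-end labels are supplied.
trainset = [
    dspy.Example(question="Who wrote 'Dune'?", answer="Frank Herbert").with_inputs("question"),
    dspy.Example(question="What is the capital of France?", answer="Paris").with_inputs("question"),
]

# MIPRO proposes candidate instructions and few-shot demonstrations for
# each module, then searches for the best-scoring combination.
optimizer = MIPROv2(metric=exact_match, auto="light")
optimized_program = optimizer.compile(MultiHopQA(), trainset=trainset)
```

Note that only the final answer is labeled here: the intermediate query-generation module receives credit implicitly through the end-to-end metric, which is precisely the setting the paper targets.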