Optimizing Instructions and Demonstrations for Multi-Stage Language Model Programs

This paper addresses the challenge of optimizing prompts for multi-stage language model (LM) programs, which are sophisticated pipelines composed of modular LM calls.

The goal is to maximize a downstream metric without requiring access to module-level labels or gradients. The authors propose a novel optimizer called MIPRO, which employs several strategies to optimize the free-form instructions and few-shot demonstrations for each module in the pipeline.
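To make the setup concrete, below is a minimal, hypothetical sketch of the problem: a two-module LM program whose per-module instructions and demonstrations are chosen by scoring only the program's final output on a downstream metric. The `call_lm`, `run_program`, and `optimize` names, the prompt format, and the simple random search are illustrative assumptions, not the paper's actual MIPRO procedure.

```python
import random

# Hypothetical LM wrapper: in practice this would call a real model API.
# The prompt format (instruction + few-shot demos + new input) is illustrative.
def call_lm(instruction: str, demos: list[tuple[str, str]], user_input: str) -> str:
    prompt = instruction + "\n\n"
    for demo_in, demo_out in demos:
        prompt += f"Input: {demo_in}\nOutput: {demo_out}\n\n"
    prompt += f"Input: {user_input}\nOutput:"
    return ""  # placeholder completion; replace with a real LM call

# A two-module program: one module writes a search query, the next writes the answer.
def run_program(config: dict, question: str) -> str:
    q_cfg, a_cfg = config["gen_query"], config["answer"]
    query = call_lm(q_cfg["instruction"], q_cfg["demos"], question)
    return call_lm(a_cfg["instruction"], a_cfg["demos"], f"{question}\nQuery: {query}")

# Downstream metric on the final output only -- no per-module labels are needed.
def exact_match(pred: str, gold: str) -> float:
    return float(pred.strip().lower() == gold.strip().lower())

def optimize(candidates: dict, trainset: list[tuple[str, str]], num_trials: int = 20) -> dict:
    """Pick an (instruction, demo set) pair per module by scoring full runs of the program."""
    best_score, best_config = -1.0, {}
    for _ in range(num_trials):
        config = {
            module: {
                "instruction": random.choice(opts["instructions"]),
                "demos": random.choice(opts["demo_sets"]),
            }
            for module, opts in candidates.items()
        }
        score = sum(exact_match(run_program(config, q), a) for q, a in trainset) / len(trainset)
        if score > best_score:
            best_score, best_config = score, config
    return best_config
```

The key property this sketch illustrates is that only the program's final answer is scored, so prompts for every module can be improved without intermediate supervision or gradients; MIPRO replaces the naive random search above with the proposal and search strategies described in the paper.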
