Prompting

What Is a Soft Prompt?

A learned sequence of embedding vectors prepended to an LLM input to adapt behavior without rewriting readable instructions.

A soft prompt is a learned vector prompt that steers an LLM by prepending trainable embeddings instead of readable instruction text. It is a prompt-family adaptation method used in training and evaluation pipelines, especially when full fine-tuning is too expensive or hard to govern. In production, soft prompts appear as prompt-parameter versions tied to a base model, dataset, and eval cohort. FutureAGI treats them as model-adjacent prompt artifacts that must be compared against hard prompts using task, adherence, grounding, cost, and latency metrics.
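Mechanically, the "trainable embeddings" are just an extra matrix of vectors concatenated in front of the token embeddings before the model's forward pass. A minimal NumPy sketch of that shape arithmetic (sizes and names are illustrative, not a FutureAGI API):

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, d_model = 1000, 64
embedding_table = rng.normal(size=(vocab_size, d_model))  # frozen base-model embeddings

# 20 trainable soft-prompt vectors: the only parameters prompt tuning updates
soft_prompt = rng.normal(scale=0.02, size=(20, d_model))

token_ids = np.array([5, 42, 7])              # hard-prompt / user-input token sequence
token_embeds = embedding_table[token_ids]     # shape (3, d_model)

# The model consumes the soft prompt followed by the real input embeddings
model_input = np.concatenate([soft_prompt, token_embeds], axis=0)
print(model_input.shape)  # (23, 64): 20 soft tokens + 3 text tokens
```

Because these 20 vectors never map back to vocabulary tokens, there is no readable instruction to inspect, which is exactly why the governance burden shifts to evaluation.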

Why It Matters in Production LLM and Agent Systems

Soft prompts fail quietly because there is no plain-language instruction to inspect when behavior changes. Unlike a hard prompt, a soft prompt can improve one benchmark while hiding a learned shortcut that only appears on edge cases. The common production failures are overfitting to a narrow eval cohort, drifting after a base-model upgrade, and degrading grounded answers because the learned vectors bias the model toward a task pattern rather than the retrieved evidence.

Developers feel this as confusing regressions: the visible system prompt did not change, but TaskCompletion dropped for billing questions or Groundedness fell on long-context answers. SREs see rollout symptoms such as higher eval-fail-rate-by-prompt-version, new retry clusters, p99 latency changes, or unexplained cost movement after the adapter artifact changes. Product teams see inconsistent user outcomes across cohorts. Compliance reviewers care because a learned prompt cannot be approved by reading it; it has to be approved by measured behavior.

The risk is larger in 2026 agent pipelines. A single user request may pass through a planner, retriever, tool selector, and final-answer generator. If one step uses a soft prompt, the failure can propagate into later tool calls. A hidden planning bias may pick the wrong tool, then the final response looks polished enough to hide the original mistake.

How FutureAGI Handles Soft Prompts

Soft prompting is model-adaptation work rather than a standalone FutureAGI surface, so FutureAGI handles it through adjacent workflow surfaces: fi.prompt.Prompt for prompt artifacts and versions, agent-opt optimizers such as ProTeGi and GEPA for candidate search, and fi.evals metrics such as PromptAdherence, TaskCompletion, and Groundedness for regression gates. FutureAGI’s approach is to treat the soft prompt as a versioned artifact that earns promotion only through measured behavior on the same dataset and trace cohorts as the baseline.

Example: a support agent uses a hard prompt plus retrieval context, but answer quality stalls on 600 labeled tickets. The team trains a 20-token soft prompt against that dataset while keeping the base model fixed. They register the candidate as soft-prompt=v4, run the same eval cohort, and compare it with the best hard prompt. TaskCompletion measures whether the support job finished, PromptAdherence catches instruction drift, and Groundedness checks whether the answer stayed tied to retrieved policy text.
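The training step in that example keeps every base-model weight frozen and updates only the soft-prompt vectors. A toy NumPy sketch of that mechanic, with a stand-in linear "model" in place of the real LLM forward pass (all names and numbers are illustrative):

```python
import numpy as np

d_model, n_soft = 8, 4
W = np.ones(d_model)                       # frozen stand-in "base model": fixed linear readout
soft_prompt = np.zeros((n_soft, d_model))  # trainable soft-prompt parameters
target, lr = 3.0, 0.05                     # toy training objective and learning rate

for _ in range(200):
    pooled = soft_prompt.mean(axis=0)      # stand-in for the frozen model's forward pass
    pred = float(pooled @ W)
    # gradient of (pred - target)^2 flows only into soft_prompt; W is never updated
    grad_pooled = 2.0 * (pred - target) * W
    soft_prompt -= lr * np.tile(grad_pooled / n_soft, (n_soft, 1))

print(round(float(soft_prompt.mean(axis=0) @ W), 2))  # → 3.0
```

The point of the sketch is the freeze boundary: only `soft_prompt` changes, so the artifact you register as soft-prompt=v4 is just this small matrix plus its training provenance.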

The engineer then looks at operational signals: eval-fail-rate-by-cohort, latency p99, token-cost-per-trace, and visible prompt size through llm.token_count.prompt. Unlike LoRA, which adds adapter weights to model layers, the soft prompt is evaluated as a prompt release: promote it only if it beats the hard-prompt baseline without increasing groundedness failures, and roll it back if a model upgrade changes the score distribution.

How to Measure or Detect It

Measure a soft prompt by comparing behavior across versions, not by inspecting the embedding values:

  • PromptAdherence: checks whether outputs still follow the intended instructions despite the learned prompt being unreadable.
  • TaskCompletion: measures whether the target workflow succeeds more often than the hard-prompt baseline.
  • Groundedness: catches learned prompts that improve style while weakening evidence support.
  • Trace split: group eval-fail-rate, latency p99, and token-cost-per-trace by base model, soft-prompt version, cohort, and route.
  • User proxy: watch thumbs-down rate, escalation rate, refund rate, or manual correction rate after rollout.
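The trace-split bullet above amounts to a plain grouping over eval records. A sketch with illustrative field names (in practice these records come from your trace store):

```python
from collections import defaultdict

# Illustrative eval records keyed by prompt version and cohort
records = [
    {"prompt_version": "soft-v4", "cohort": "billing", "passed": True},
    {"prompt_version": "soft-v4", "cohort": "billing", "passed": False},
    {"prompt_version": "soft-v4", "cohort": "long-context", "passed": False},
    {"prompt_version": "hard-v9", "cohort": "billing", "passed": True},
]

totals = defaultdict(lambda: [0, 0])  # (prompt_version, cohort) -> [fails, total]
for r in records:
    key = (r["prompt_version"], r["cohort"])
    totals[key][0] += 0 if r["passed"] else 1
    totals[key][1] += 1

fail_rate = {k: fails / total for k, (fails, total) in totals.items()}
print(fail_rate[("soft-v4", "billing")])  # → 0.5
```

The same grouping applied to latency p99 and token-cost-per-trace gives the per-version, per-cohort view the comparison depends on.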

Minimal Python:

from fi.evals import PromptAdherence

# rendered_prompt and model_output come from the eval run under test
evaluator = PromptAdherence()
result = evaluator.evaluate(
    input=rendered_prompt,
    output=model_output,
)
print(result.score)  # compare against the hard-prompt baseline score

Use the evaluator as a release gate. A soft prompt should not ship if it only improves aggregate score while worsening a regulated cohort, a long-context cohort, or a tool-calling path.
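That gate can be expressed as a small check over per-cohort scores. A sketch, where the metric names, cohort names, and thresholds are assumptions rather than FutureAGI defaults:

```python
CRITICAL_COHORTS = ("regulated", "long-context", "tool-calling")

def should_ship(candidate: dict, baseline: dict) -> bool:
    """Promote a soft prompt only if it beats the baseline overall
    without regressing any critical cohort."""
    if candidate["aggregate_score"] <= baseline["aggregate_score"]:
        return False
    for cohort in CRITICAL_COHORTS:
        if candidate["cohort_scores"][cohort] < baseline["cohort_scores"][cohort]:
            return False  # aggregate gain hides a cohort regression
    return True

baseline = {"aggregate_score": 0.81,
            "cohort_scores": {"regulated": 0.78, "long-context": 0.74, "tool-calling": 0.80}}
candidate = {"aggregate_score": 0.85,
             "cohort_scores": {"regulated": 0.71, "long-context": 0.76, "tool-calling": 0.82}}

print(should_ship(candidate, baseline))  # → False: the regulated cohort regressed
```

The candidate here wins on aggregate score but still fails the gate, which is exactly the failure mode the aggregate-only comparison would miss.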

Common Mistakes

  • Treating soft prompts as readable policy. They are learned vectors, so policy approval must come from eval evidence and trace review.
  • Training against one base model, then switching providers without rerunning the full regression cohort.
  • Comparing the soft prompt to a weak hard prompt. Keep a strong text baseline before claiming the learned version is better.
  • Ignoring cohort splits. Aggregate gains can hide failures for long-context, regulated, or multilingual cases.
  • Shipping without storing prompt version, dataset version, base model, and optimizer settings in the release record.
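The release record in the last bullet can be as small as a frozen dataclass. A sketch with illustrative field and version names:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class SoftPromptRelease:
    prompt_version: str       # e.g. "soft-prompt=v4"
    dataset_version: str      # labeled eval set the prompt was trained against
    base_model: str           # base model the embeddings were trained for
    optimizer_settings: dict  # training config used to produce the candidate

release = SoftPromptRelease(
    prompt_version="soft-prompt=v4",
    dataset_version="tickets-2026-01",
    base_model="base-llm-v2",
    optimizer_settings={"soft_tokens": 20, "lr": 3e-4},
)
print(asdict(release)["prompt_version"])  # → soft-prompt=v4
```

Pinning the base model in the record is what makes the "rerun the regression cohort after a provider switch" rule enforceable.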

Frequently Asked Questions

What is a soft prompt?

A soft prompt is a learned vector prompt that steers an LLM through embeddings rather than readable words. It is usually trained for a task and evaluated against a fixed dataset.

How is a soft prompt different from a hard prompt?

A hard prompt is text a human can read and edit. A soft prompt is a learned embedding sequence, so it can adapt behavior but must be versioned and evaluated because its instructions are not directly inspectable.

How do you measure a soft prompt?

FutureAGI measures soft-prompt changes with evaluators such as PromptAdherence, TaskCompletion, and Groundedness, then compares eval-fail-rate, latency, and cost by prompt version.