Prompting 101

Course

https://learn.deeplearning.ai/courses/chatgpt-prompt-eng/lesson/1/introduction

Contents

  1. Best practices

  2. Common use cases

Types of LLMs

  1. Base
  • Predicts the next word based on its text training data
  2. Instruction tuned
  • Follows instructions; fine-tuned on instruction examples.

  • Trained on inputs and outputs

  • Uses RLHF - Reinforcement Learning from Human Feedback

    • Helpful, Honest, Harmless
  • Recommended for most practical use cases

  • When using one, think of it as giving instructions to another person

    • Tailor the quantity of information and context to the kind of response expected
  • Be clear and specific (a helper sketch for trying these prompts follows this list)
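
The sketches in these notes assume the OpenAI Python SDK (v1.x) and a small helper similar in spirit to the one the course notebooks define; the model name is a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def get_completion(prompt, model="gpt-4o-mini", temperature=0):
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0 = most deterministic
    )
    return response.choices[0].message.content
```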

Guidelines for prompting - Principles and Tactics

  1. Write clear and specific instructions
  • Clear doesn’t mean short - longer prompts often provide more clarity and context, leading to better outputs.

Tactics (a combined sketch follows this list):

  1. Use delimiters to clearly indicate distinct parts of the input

    1. Delimiters also help with avoiding prompt injection
  2. Ask for a structured output

    1. Provide an output format where feasible
  3. Ask the model to check whether conditions are satisfied

    1. Conditional prompt (if...else)

    2. Unstated assumptions can yield wrong outcomes

    3. Let the model exit early if the conditions aren’t met

  4. Few-shot prompting

    1. Providing successful examples of (part of) tasks to be performed - “What does success look like?”
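
A minimal combined sketch of tactics 1-4, using the `get_completion` helper assumed earlier; the tea instructions, JSON field name, and few-shot dialogue are invented for illustration:

```python
# Tactics 1-3: delimiters, structured output, early exit on a condition.
text = """Boil some water. Put a tea bag in a cup, pour the hot water
over it, wait a few minutes, then take the bag out and enjoy."""

prompt = f"""
You will be given text delimited by triple hashes.
If it contains a sequence of instructions, rewrite them as JSON:
{{"steps": ["Step 1 ...", "Step 2 ..."]}}
If the text contains no instructions, reply with exactly {{"steps": []}}.

###{text}###
"""
print(get_completion(prompt))

# Tactic 4: few-shot - demonstrate what success looks like, then ask
# the model to continue in the same style.
few_shot = """
Your task is to answer in a consistent style.

<child>: Teach me about patience.
<grandparent>: The river that carves the deepest valley flows from a
modest spring.

<child>: Teach me about resilience.
"""
print(get_completion(few_shot))
```
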
  2. Give the model time to ‘think’
  • Complex tasks need more computation. Instruct the model to spend more time reasoning before it answers

Tactics (a sketch follows this list):

  1. Specify steps required to complete task

    1. Ask for output in specific format
  2. Instruct the model to work out its own solution before rushing to a conclusion

    1. Ask it to do its own work first, then compare and evaluate - “Do not decide if the solution is correct until you’ve done the problem yourself”.
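
A sketch of tactic 2, again with the assumed helper; the arithmetic problem is invented (the correct total is $17, so the student’s $27 is deliberately wrong):

```python
prompt = """
Determine whether the student's solution below is correct.
First work out your own solution to the problem, step by step.
Only then compare it to the student's solution and evaluate it.
Do not decide if the student's solution is correct until you have
done the problem yourself.

Problem: A school buys 3 boxes of markers at $4 per box plus a $5
delivery fee. What is the total cost?

Student's solution: Total = 3 * 4 + 5 * 3 = $27

Use this format:
Your solution: <your step-by-step work>
Verdict: correct or incorrect
"""
print(get_completion(prompt))
```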

Model limitations

  1. Hallucinations - making statements that sound plausible but aren’t true

    1. A known weakness of current models

Iterative prompt development

  1. The first prompt for a problem rarely produces the desired result

  2. Iterate and get closer to the desired result

    1. Refine with a batch of examples
  3. Be precise and clear

  4. Giving the model a role and a task can help (see the sketch below)
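
A role is usually set via a system message, with the task in the user message; a minimal sketch reusing the client assumed earlier (model name is a placeholder):

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The role shapes tone and perspective for the whole exchange.
        {"role": "system",
         "content": "You are a patient technical tutor who explains "
                    "concepts with short, concrete examples."},
        # The task is the actual request.
        {"role": "user",
         "content": "Explain what tokenization means for an LLM."},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```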

Common use cases

  1. Summarizing text (a combined sketch of several use cases follows this list)

    1. Stating the purpose of the summary gives more context and generates better results

    2. Limit by sentences/words.

      1. The model doesn’t always adhere to the provided limit

      2. Character limits rarely work because the model counts tokens, not characters

  2. Inferring

    1. Making sense of sentiment - whether something is positive or negative

    2. LLMs are good at extracting information from an information source

    3. “Zero-shot learning” - the model performs the task without being given any examples

  3. Transforming (e.g., translation, tone adjustment, format conversion)

  4. Expanding

    1. Temperature

      1. Lower temperature (e.g., 0) gives more reliable, predictable output

      2. Higher temperature yields more variety (randomness, creativity)
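
A combined sketch of the summarizing and inferring use cases plus the temperature parameter, with the assumed helper; the review text is invented:

```python
review = "The lamp arrived quickly, but one of the bulbs was shattered."

# Summarizing: state the purpose and cap the length. The model may
# overshoot the word limit; character limits are even less reliable
# because the model counts tokens, not characters.
summary_prompt = f"""
Summarize the review below, delimited by triple hashes, in at most
15 words, focusing on anything relevant to the shipping department.

###{review}###
"""
print(get_completion(summary_prompt))

# Inferring: zero-shot sentiment, no examples provided. Temperature 0
# keeps the label deterministic; a higher value suits creative tasks
# such as expanding bullet points into a full reply.
sentiment_prompt = f"""
What is the sentiment of the review below, delimited by triple hashes?
Answer with a single word: "positive" or "negative".

###{review}###
"""
print(get_completion(sentiment_prompt, temperature=0))
```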

Self notes

  1. Working backwards from the expected result may help in coming up with proper requirements