
Introduces what large language models are, how they are trained at a conceptual level, the roles that different prompt components play (system, user, assistant), and a taxonomy of common failure modes. Concrete examples help students reason about model behavior.
What is an LLM? — high-level architecture, training data concepts, and what "knowledge" means for a model.
Prompt anatomy — break down real prompts line-by-line and map to expected behaviors.
Roles & instruction hierarchy — practice with system vs user instructions and how they change outputs.
Failure taxonomy — examples of hallucinations, incorrect facts, verbosity, and unsafe responses.
Quick mitigation patterns — temperature, top-p, constraints, and formatting.
Reproducibility & logging — how to log prompts and outputs for repeatable testing.
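The roles item above can be sketched in code. This is a minimal illustration assuming an OpenAI-style chat message format (a list of `{"role": ..., "content": ...}` dicts); the exact payload shape varies by provider, so treat the field names as assumptions rather than a fixed standard.

```python
# Sketch of the system/user role structure used by many chat-completion APIs.
# The dict shape is an assumption (OpenAI-style); adapt it to your provider.

def build_messages(system_rule: str, user_request: str) -> list[dict]:
    """Most instruction hierarchies rank system instructions above user
    instructions, so durable rules (tone, format, safety) belong in the
    system message, while the task itself goes in the user message."""
    return [
        {"role": "system", "content": system_rule},
        {"role": "user", "content": user_request},
    ]

messages = build_messages(
    "You are a concise tutor. Answer in at most two sentences.",
    "Explain what a transformer is.",
)
```

A useful classroom exercise is to hold the user message fixed and vary only the system message, then compare how the outputs change.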
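Temperature and top-p from the mitigation-patterns item can be demystified with a toy sampler over a hand-made distribution. This is a teaching sketch, not any provider's implementation: the `logits` dict and function name are invented for illustration, but the math (temperature rescaling, then nucleus truncation) matches how these knobs are commonly defined.

```python
import math
import random

def sample_next_token(logits: dict[str, float],
                      temperature: float = 1.0,
                      top_p: float = 1.0,
                      seed: int | None = None) -> str:
    """Toy next-token sampler illustrating temperature and top-p (nucleus)
    sampling over a small hand-written logit table."""
    # Temperature rescales logits: lower values sharpen the distribution,
    # making the most likely token dominate; higher values flatten it.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}

    # Top-p keeps the smallest set of tokens whose cumulative probability
    # reaches top_p, discarding the unlikely tail.
    kept, mass = [], 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break

    # Sample from the renormalized kept set.
    total = sum(p for _, p in kept)
    r = random.Random(seed).random() * total
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]

# With a low temperature and a tight nucleus, the top token wins every time:
token = sample_next_token({"cat": 5.0, "dog": 1.0}, temperature=0.5, top_p=0.5)
# token == "cat"
```

Running this with `temperature=2.0, top_p=1.0` and no seed shows "dog" appearing occasionally, which is a concrete way for students to see why lowering temperature makes outputs more repeatable.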
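For the reproducibility and logging item, one simple pattern is appending each prompt/output pair as a JSON line. The function and field names below are illustrative, not a standard; the point is that logging the sampling parameters alongside the text is what makes a test repeatable.

```python
import json
import time

def log_interaction(path: str, prompt: str, output: str, params: dict) -> None:
    """Append one prompt/response record to a JSONL file.
    Field names are illustrative; record whatever your tests need,
    but always include the sampling parameters (model, temperature, top_p)
    so a result can be reproduced or compared later."""
    record = {
        "timestamp": time.time(),
        "prompt": prompt,
        "output": output,
        "params": params,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

Because each line is an independent JSON object, logs from many runs can be concatenated and filtered with ordinary command-line tools.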
Activities
Lab: Given 8 real-world tasks (summarize, rewrite, classify, generate questions), craft a single-turn prompt for each, capture 3 outputs per prompt, and write a comparative analysis explaining why the best prompt wins.
📦 Deliverable
A short report containing the prompt library (the 3 best prompts), sample outputs, and a one-page reflection on limitations.
Playground or API sandbox, short primer articles on transformers (non-technical), curated blog posts on prompt patterns, example prompt bank.
Basic computer literacy; no prior ML experience required.
Students learn the foundation of how to reliably instruct AI — the skill behind automated tutors, summarizers, and classroom helpers.
APPLY TODAY FOR THE 2025/2026 ACADEMIC SESSION.