
Guiding models to reliable multi-step answers.
Methods for eliciting stepwise reasoning from LLMs, comparing direct answers with chain-of-thought approaches, and combining models with external deterministic checkers. The focus throughout is reliability and verification.
Decomposition strategies: task splitting and subtask framing (first sketch below).
Chain-of-thought prompting: examples, pitfalls, and self-consistency techniques (second sketch below).
Verification patterns: assert-check, secondary-model-check, and deterministic validators (third sketch below).
Hybrid pipelines: LLM + calculator / LLM + regex / LLM + reference data (fourth sketch below).
Error analysis: categorizing errors and choosing remediation strategies (fifth sketch below).
Reporting & transparency: how to show parents what was checked and why.
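A minimal sketch of subtask framing, assuming a hypothetical `call_model(prompt)` helper that stands in for whatever LLM API the notebook uses; both prompt templates are illustrative, not prescribed:

```python
# Subtask framing: ask the model to plan before it solves.
# `call_model` and both templates are placeholders, not a fixed API.

def call_model(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM API call.")

DECOMPOSE_PROMPT = (
    "Break the following problem into numbered subtasks. "
    "Do not solve them yet.\n\nProblem: {problem}"
)
SOLVE_PROMPT = (
    "Solve only subtask {i}.\nProblem: {problem}\n"
    "Plan:\n{plan}\nResults so far:\n{results}"
)

def solve_by_decomposition(problem: str, n_subtasks: int = 3) -> list[str]:
    plan = call_model(DECOMPOSE_PROMPT.format(problem=problem))
    results: list[str] = []
    for i in range(1, n_subtasks + 1):
        results.append(call_model(SOLVE_PROMPT.format(
            i=i, problem=problem, plan=plan, results="\n".join(results))))
    return results
```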
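For self-consistency, one standard pattern is to sample several chain-of-thought completions at nonzero temperature and majority-vote on the extracted final answer. A sketch under the same `call_model` assumption:

```python
import re
from collections import Counter

def call_model(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM API call "
                              "sampled at temperature > 0.")

COT_PROMPT = "Think step by step, then end with 'Answer: <number>'.\n\n{q}"

def extract_answer(text: str) -> str | None:
    match = re.search(r"Answer:\s*(-?\d+(?:\.\d+)?)", text)
    return match.group(1) if match else None

def self_consistent_answer(question: str, n_samples: int = 5) -> str | None:
    # Sample several reasoning chains; keep the most common final answer.
    answers = [extract_answer(call_model(COT_PROMPT.format(q=question)))
               for _ in range(n_samples)]
    votes = Counter(a for a in answers if a is not None)
    return votes.most_common(1)[0][0] if votes else None
```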
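Verification can start as small as re-deriving a claimed quantity deterministically and asserting agreement. `validate_sum` below is an illustrative validator; the same pattern extends to secondary-model checks by swapping the recomputation for a second model call:

```python
def validate_sum(claimed: float, numbers: list[float], tol: float = 1e-9) -> bool:
    # Deterministic validator: recompute the quantity, never trust the claim.
    return abs(sum(numbers) - claimed) <= tol

assert validate_sum(6.0, [1.0, 2.0, 3.0])      # correct claim passes
assert not validate_sum(7.0, [1.0, 2.0, 3.0])  # wrong claim is caught
```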
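A hybrid-pipeline sketch: the prompt asks the model to wrap arithmetic in `<< >>` markers (a convention assumed here, not a library feature), a regex extracts each expression, and Python evaluates it instead of trusting the model's arithmetic. The character whitelist keeps the `eval` restricted, but treat this as a teaching sketch, not hardened code:

```python
import re

EXPR_RE = re.compile(r"<<(.+?)>>")      # model is asked to mark math like <<17*24>>
SAFE_CHARS = set("0123456789+-*/(). ")  # whitelist keeps eval restricted

def evaluate_marked_math(model_output: str) -> list[float]:
    """Recompute every <<expression>> in Python instead of trusting the LLM."""
    results = []
    for expr in EXPR_RE.findall(model_output):
        if not set(expr) <= SAFE_CHARS:
            raise ValueError(f"Unexpected characters in {expr!r}")
        results.append(eval(expr))  # charset-restricted; sketch only
    return results

print(evaluate_marked_math("17 * 24 = <<17 * 24>>, so the total is 408."))
# -> [408]
```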
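For error analysis, a coarse first pass is to bucket every graded item and count the buckets; the category names below are placeholders to refine per task type:

```python
from collections import Counter

def categorize(expected: str, predicted: str | None) -> str:
    # Coarse buckets; refine these per task type during the module.
    if predicted is None:
        return "no parseable answer"
    return "correct" if predicted == expected else "wrong value"

graded = [("12", "12"), ("7", None), ("30", "28")]
print(Counter(categorize(e, p) for e, p in graded))
# Counter({'correct': 1, 'no parseable answer': 1, 'wrong value': 1})
```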
Activities
Build prompts that solve multi-step arithmetic/logic problems with verifiable steps, then produce a comparison report (accuracy and failure cases) with visualizations of error types; a starter harness is sketched below.
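A possible starting point for the comparison report, again assuming a hypothetical `call_model`; the substring match is deliberately crude and should be tightened per task:

```python
from collections import Counter

def call_model(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM API call.")

PROMPTS = {
    "direct": "Answer with only the final number.\n{q}",
    "cot": "Think step by step, then end with 'Answer: <number>'.\n{q}",
}

def run_harness(items: list[tuple[str, str]]) -> dict[str, float]:
    # items: (question, gold answer) pairs; returns accuracy per prompting mode.
    hits: Counter[str] = Counter()
    for question, gold in items:
        for mode, template in PROMPTS.items():
            reply = call_model(template.format(q=question))
            hits[mode] += gold in reply  # crude substring match
    return {mode: hits[mode] / len(items) for mode in PROMPTS}
```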
📦 Deliverable
Notebook or script showing inputs, outputs, verification steps, and final correctness metrics.
Resources
Datasets of reasoning tasks, a simple test harness, and articles on chain-of-thought prompting.
Prerequisites
Modules 1–2 recommended.
Ensures student-produced answers are transparent and verifiable, which is directly useful for homework checking and tutoring scenarios.