Agentic AI for Analytics
Program-level details: See program/curriculum.md
| Credits: 2 | Term: Spring 2027 (Weeks 5-8, concurrent with BADM 558 second half) | Instructor: TBD |
Status: Draft. No instructor assigned; tentative, pending review.
Course Vision
Students move beyond prompt engineering to build AI-powered analytics systems. The course covers LLM fundamentals, Retrieval-Augmented Generation (RAG) pipelines, and agentic AI patterns — equipping graduates to build internal AI tools, not just use them. By course end, students can design and implement RAG systems and agent-based workflows for real analytics problems.
Learning Outcomes (L-C-E Framework)
Literacy:
- L1: Explain how large language models work (transformer architecture, token prediction, context windows)
- L2: Understand RAG architecture (retrieval, embedding, generation) and when to use it vs. fine-tuning
- L3: Recognize AI governance frameworks (NIST AI RMF) and ethical issues in AI deployment
Competency:
- C1: Build a RAG pipeline using LangChain, vector databases, and embedding models
- C2: Implement agentic AI patterns (function calling, tool use) for analytics workflows
- C3: Apply prompt engineering techniques for data analysis, code generation, and report writing
- C4: Audit AI outputs for accuracy, bias, and ethical concerns
Expertise:
- E1: Design AI-augmented analytics workflows combining RAG, agents, and human judgment
- E2: Evaluate trade-offs between RAG, fine-tuning, and prompt engineering for a given problem
- E3: Build production-ready AI systems with appropriate governance and documentation
Week-by-Week Breakdown
| Week | Topic | Activities | Assessment |
|---|---|---|---|
| 1 | LLM fundamentals + prompt engineering | Transformer architecture overview, prompt patterns for analytics (chain-of-thought, few-shot), AI for SQL/Python generation | Assignment 1: Prompt engineering exercises · Milestone 1: Problem statement + data source |
| 2 | RAG implementation | LangChain fundamentals, document chunking strategies, embedding models (OpenAI Embeddings API), vector databases (ChromaDB/Pinecone), retrieval evaluation | Assignment 2: RAG pipeline lab · Milestone 2: RAG prototype + retrieval evaluation |
| 3 | Agentic AI patterns + governance | Function calling, tool use, multi-agent intro, orchestration patterns, NIST AI RMF overview, AI governance basics | Assignment 3: Agentic AI lab · Milestone 3: Agent integration + governance plan |
| 4 | Capstone project + ethics | Build RAG or agent-based analytics workflow, ethics case study, responsible AI checklist | Final deliverable: Agentic capstone + oral defense |
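The Week 1 prompt patterns (few-shot, chain-of-thought) can be previewed with a short sketch. The schema and examples below are hypothetical teaching fixtures, not part of the course materials:

```python
# Illustrative few-shot prompt builder for text-to-SQL (a Week 1 pattern).
# The table names and example pairs are made up for demonstration.
EXAMPLES = [
    ("Total revenue by region",
     "SELECT region, SUM(revenue) FROM sales GROUP BY region;"),
    ("Top 5 customers by order count",
     "SELECT customer_id, COUNT(*) AS n FROM orders GROUP BY customer_id ORDER BY n DESC LIMIT 5;"),
]

def build_few_shot_prompt(question: str) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, then the new question."""
    parts = ["Translate the analytics question into SQL.\n"]
    for q, sql in EXAMPLES:
        parts.append(f"Question: {q}\nSQL: {sql}\n")
    parts.append(f"Question: {question}\nSQL:")
    return "\n".join(parts)

prompt = build_few_shot_prompt("Average order value per month")
print(prompt)
```

In class, the same prompt string would be sent to an LLM API; the point of the exercise is how the worked examples steer the model's output format.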
Assessments (1 individual project — 4-week course)
Weekly Assignments (30% of grade)
- Assignment 1 (Week 1): Prompt engineering exercises — chain-of-thought, few-shot patterns for SQL/Python generation
- Assignment 2 (Week 2): RAG pipeline lab — build a retrieval chain, evaluate retrieval accuracy on test questions
- Assignment 3 (Week 3): Agentic AI lab — implement function calling / tool use patterns
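The retrieval-accuracy evaluation in Assignment 2 can be as simple as a hit-rate metric over a set of test questions. A minimal sketch, using hypothetical question and chunk IDs:

```python
# Hit-rate@k sketch for evaluating retrieval accuracy on test questions.
# `results` maps each test question to its ranked list of retrieved chunk IDs;
# `gold` maps each question to the chunk that actually answers it (toy data).
def hit_rate_at_k(results: dict, gold: dict, k: int = 3) -> float:
    """Fraction of questions whose gold chunk appears in the top-k retrieved chunks."""
    hits = sum(1 for q, ranked in results.items() if gold[q] in ranked[:k])
    return hits / len(results)

results = {"q1": ["c4", "c1", "c9"], "q2": ["c7", "c2", "c5"], "q3": ["c3", "c8", "c6"]}
gold = {"q1": "c1", "q2": "c9", "q3": "c3"}
print(hit_rate_at_k(results, gold, k=3))  # 2 of 3 questions hit
```

Students would report this metric at several values of k and use it to justify chunking and embedding choices in Milestone 2.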
Project Milestones (25% of grade)
Progressive deliverables toward the final project, submitted individually:
- Milestone 1 (Week 1): Problem statement + data source selection
- Milestone 2 (Week 2): RAG prototype with retrieval evaluation (10+ test questions, chunking/embedding rationale)
- Milestone 3 (Week 3): Agent integration + governance plan (model card draft, risk assessment outline)
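The retrieval step behind the Milestone 2 prototype reduces to ranking chunks by embedding similarity. In the course stack this is LangChain with ChromaDB and OpenAI embeddings; the sketch below substitutes tiny hand-made vectors so the ranking logic itself is visible:

```python
import math

# Framework-free sketch of the retrieval step in a RAG prototype. The chunk
# names and 3-dimensional "embeddings" are invented for illustration only.
CHUNKS = {
    "pricing policy": [0.9, 0.1, 0.0],
    "refund process": [0.1, 0.8, 0.1],
    "shipping times": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(CHUNKS, key=lambda c: cosine(query_vec, CHUNKS[c]), reverse=True)
    return ranked[:k]

print(retrieve([0.2, 0.9, 0.0]))  # → ['refund process']
```

A real pipeline replaces the hand-made vectors with model-generated embeddings and the dictionary with a vector database, but the nearest-neighbor ranking is unchanged.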
Final Project Deliverable — Agentic Capstone (35% of grade)
- Task: Design and implement an agent-based analytics workflow that incorporates RAG for a business problem
- Deliverables:
  - Agent system using function calling / tool use patterns
  - RAG pipeline integrated into the agent workflow (builds on Milestone 2)
  - Integration with at least one external tool (database, API, or analytics library)
  - AI governance documentation (model card, usage guidelines, risk assessment)
  - Ethics checklist: bias audit, limitation documentation
  - Oral defense: 10-min presentation + Q&A (included in final project grade)
  - GitHub repo with code + documentation
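The function calling / tool use pattern at the heart of the capstone reduces to a dispatch step: the model names a tool and supplies arguments, and the agent code invokes it. A minimal sketch with a hard-coded tool call standing in for a real LLM response (the tool name and data are hypothetical):

```python
# Minimal tool-dispatch sketch for an agent workflow. In a real agent the
# `tool_call` dict comes from an LLM's function-calling response; here it is
# hard-coded so the dispatch pattern itself is clear.
def query_database(sql: str) -> list:
    """Stand-in for a real database tool; returns a canned result."""
    return [{"region": "EMEA", "revenue": 1200}]

TOOLS = {"query_database": query_database}

def dispatch(tool_call: dict):
    """Look up the requested tool and invoke it with the model-supplied arguments."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

call = {"name": "query_database",
        "arguments": {"sql": "SELECT region, SUM(revenue) FROM sales GROUP BY region"}}
print(dispatch(call))  # → [{'region': 'EMEA', 'revenue': 1200}]
```

Production agents add argument validation, error handling, and a loop that feeds tool results back to the model, which is where the rubric's "handles edge cases" criterion applies.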
Studio Participation (10% of grade)
- Weekly live studio sessions (1 hour each):
  - Week 1: Prompt engineering workshop — live experimentation with LLM patterns
  - Week 2: RAG architecture review — peer critique of RAG prototypes
  - Week 3: Agentic AI patterns — live building of function-calling agents
  - Week 4: Capstone presentations + oral defense Q&A
Assessment Summary
| Component | Weight | Notes |
|---|---|---|
| Weekly assignments | 30% | 3 assignments (Weeks 1-3), individual |
| Project milestones | 25% | 3 progressive deliverables (Weeks 1-3), individual |
| Final project (Agentic Capstone) | 35% | Week 4, individual, includes oral defense |
| Studio participation | 10% | Weekly attendance + live exercises |
No traditional exam; assessment is project-based with an AI systems focus.
AI Usage Levels (AIAS)
| Assessment | AIAS Level | AI Permitted |
|---|---|---|
| Weekly Assignments | 4 | AI is the subject — students build, evaluate, and critique AI systems |
| Project Milestones | 4 | AI is the subject — students design and prototype AI pipelines |
| Final Project (Agentic Capstone) | 4 | AI is the subject — students design and implement agent-based workflows |
| Oral Defense (in Final Project) | 0 | No AI |
| Studio Participation | 3 | AI as collaborator — full integration for hands-on AI experimentation |
Rubric (5 dimensions)
| Dimension | Excellent (A) | Proficient (B) | Developing (C) |
|---|---|---|---|
| RAG Implementation | Well-architected pipeline, strong retrieval accuracy, justified chunking/embedding choices | Functional pipeline, adequate accuracy | Basic implementation, poor retrieval quality |
| Agentic Design | Sophisticated agent patterns, effective tool use, handles edge cases | Functional agent, basic tool integration | Minimal agent functionality |
| AI Governance | Comprehensive model card, risk assessment, NIST alignment | Adequate governance documentation | Minimal or missing governance |
| Code Quality | Production-ready, well-documented, tested | Functional with minor issues | Buggy or undocumented; AI-generated code used without review |
| Oral Defense | Explains architecture clearly, handles questions confidently, articulates trade-offs | Adequate explanation, answers most questions | Cannot explain choices or struggles with Q&A |
Competitive Agent Exercise Ideas
These in-class activities use agent-vs-agent competition to teach prompting, fine-tuning, and AI safety concepts through gameplay. Inspired by Manzoor (2026) at Cornell (haggleforme.computer).
Exercise A: Procurement Negotiation Arena
- Students write prompt strategies for AI buyer/seller agents negotiating supplier contracts
- Agents compete in round-robin tournaments; leaderboard tracks surplus
- Pedagogical progression: Simple prompting → adversarial prompting → persona prompts → jailbreak attempts
- Debrief: Students discover AI safety/ethics concerns organically through gameplay
- AIAS Level 4: AI is the subject of analysis
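The surplus leaderboard for the round-robin tournament is straightforward to score. A sketch with invented strategy names and match results (real results would come from the agent matches):

```python
from collections import defaultdict

# Round-robin leaderboard sketch for Exercise A. Each pairing of strategies
# yields a (surplus_a, surplus_b) result; these numbers are hypothetical.
RESULTS = {
    ("anchor_high", "tit_for_tat"): (12, 8),
    ("anchor_high", "concede_late"): (5, 15),
    ("tit_for_tat", "concede_late"): (10, 10),
}

def leaderboard(results):
    """Total each strategy's surplus across all round-robin pairings, best first."""
    totals = defaultdict(int)
    for (a, b), (surplus_a, surplus_b) in results.items():
        totals[a] += surplus_a
        totals[b] += surplus_b
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(leaderboard(RESULTS))  # → [('concede_late', 25), ('tit_for_tat', 18), ('anchor_high', 17)]
```

Displaying the leaderboard live between rounds is what drives the pedagogical progression: students revise their prompt strategies to climb it.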
Exercise B: Analyst vs. Auditor
- One student’s agent generates an analytics report (with intentional methodology choices)
- Another student’s agent audits it (finds errors, biases, missing context)
- Competitive scoring: analysts earn points for convincing reports; auditors earn points for legitimate catches
- Connects to: Collier & Powell (2026) shift from “technical creators to AI auditors”
Exercise C: Fine-Tuning Showdown (Week 3)
- Teams provide 20-50 training examples to shape a negotiation persona
- Instructor fine-tunes models overnight; fine-tuned agents compete next studio
- Students learn: data quality matters more than data quantity; subtle training data corruption changes behavior
- Debrief: Introduce Betley et al. (2025) “90 Wolf Facts” on how benign-looking data can corrupt models
Implementation note: These exercises can be built with AI coding agents using a SPEC.md — see design/faculty_resources.md for the workflow. The instructor doesn’t need web development skills.
Technology Stack
- AI Tools: Claude (API + web), ChatGPT (Plus or API), GitHub Copilot
- RAG Framework: LangChain (required)
- Vector Databases: ChromaDB (primary), Pinecone (alternative)
- Embeddings: OpenAI Embeddings API
- Environment: Jupyter Notebooks with AI integration
- Governance: NIST AI RMF reference framework
Prerequisites
- Completion of at least 2 core MSBAi courses (familiarity with analytics workflow)
| Course Sequence: ← BADM 558 — Big Data Infrastructure | Next: Quantum Computing for Optimization → |