Last updated: March 29, 2026


Faculty Resources: Assessment Design & Tool Orientation

Audience: MSBAi faculty and course designers. Practical techniques for designing AI-aware assessments, building course materials with AI agents, and orienting students to the standard toolkit.


1. AI-Resistance Techniques for Written Assessments

Our primary synthesis-forcing mechanism is the oral defense (20-30% weight). But weekly assignments (30-40% of course grade) don’t have an oral component — they need their own AI-resistance. These techniques come from Manzoor (2026), whose open-laptop, AI-encouraged exams at Cornell have not been “one-shotted” by AI since 2023.

1.1 Exploit Sequence Model Weaknesses

LLMs generate what is plausible, not what is correct. Design questions that bait models into pattern-completing instead of reasoning:

| Technique | How It Works | Example |
| --- | --- | --- |
| Plant plausible sequences | Use numbers that form an obvious pattern (0.2, 0.4, 0.6…) as threshold values. LLMs see the pattern and assume it matters — they may continue the sequence rather than compute the actual answer. | “Given thresholds A and B in {0.2, 0.4, 0.6}, compute net profit for each configuration.” The sequence is just a parameter list, but the model treats it as meaningful. |
| Exploit symbolic precision | LLMs struggle with > vs. >=, < vs. <=, and strict vs. non-strict inequalities — small symbolic differences that humans handle easily. | “If risk at t=60 is > A, abandon the call” — students who copy-paste to AI often get edge cases wrong because the model conflates > and >=. |
| Context-induced priors | Business narratives with named entities (companies, countries, technologies) activate the model’s training priors. The model “knows” things about Orange (the French telecom) and fills in details that may contradict the problem’s actual parameters. | Name the company after a real firm and use realistic-sounding but fictional constraints. The model will hallucinate domain knowledge that conflicts with the problem statement. |
| Multi-step arithmetic with business framing | Wrap straightforward calculations in enough business context that the model must parse carefully. Models often get the reasoning structure right but introduce arithmetic errors. | “A call that ends in a sale earns $100; each second costs $1. Calls last 60–120 seconds. Compute profit for 9 threshold configurations.” Solvable by hand, but models make systematic errors. |
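The last two rows combine naturally into a worked answer key. A minimal sketch, with a toy three-call dataset and fixed risk scores so the arithmetic is checkable by hand (these parameters are illustrative, not Manzoor's actual exam values):

```python
from itertools import product

# Toy parameters (illustrative): a call that ends in a sale earns $100,
# each second costs $1. Risk is checked at t=60 against A and t=90 against B.
REVENUE, COST_PER_SEC = 100, 1
calls = [
    # (risk_at_60, risk_at_90, duration_sec, ends_in_sale)
    (0.30, 0.50, 110, True),
    (0.45, 0.45, 120, False),
    (0.15, 0.70, 90, True),
]

def profit(A, B):
    """Abandon if risk at t=60 is strictly > A, or at t=90 strictly > B."""
    total = 0
    for r60, r90, dur, sale in calls:
        if r60 > A:              # note the strict >; r60 == A keeps the call
            total -= 60 * COST_PER_SEC
        elif r90 > B:
            total -= 90 * COST_PER_SEC
        else:
            total += (REVENUE if sale else 0) - dur * COST_PER_SEC
    return total

# The 9 configurations from the "plausible sequence" {0.2, 0.4, 0.6}:
for A, B in product([0.2, 0.4, 0.6], repeat=2):
    print(f"A={A}, B={B}: profit {profit(A, B)}")
```

Note where the traps land: the strict inequality in `r60 > A` is exactly the edge case a model is likely to conflate with `>=`, and the threshold grid tempts pattern completion even though it is just a parameter list.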

1.2 Hand-Written Synthesis (In-Person Equivalent)

Manzoor requires hand-written submissions — students can use AI during the exam but must synthesize and write the answer by hand. This prevents copy-paste.

Our online equivalent: The oral defense serves this function. For assignments without an oral component, the techniques in Section 1.1 and the stress-testing exercise below are the main defense.

1.3 The “Attack Your Assessments” Exercise

Before finalizing any assessment, faculty should stress-test it with AI (see Assessment Strategy, Section 8):

  1. Copy your assignment prompt into ChatGPT/Claude — can AI produce a passing answer?
  2. If yes, identify where AI gets it right — those parts test recall, not reasoning
  3. Redesign the vulnerable parts using the techniques above
  4. Repeat each semester — AI capabilities change rapidly
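The loop above can be scripted so the stress test is repeatable each semester. A minimal sketch, where `stub_complete` stands in for a real chat-API call and the prompt and grader are illustrative, not from Manzoor's materials:

```python
def attack_assessment(prompt, complete, grader):
    """Run the assignment prompt through a model and grade the answer."""
    answer = complete(prompt)
    return {"answer": answer, "ai_passes": grader(answer)}

# Stub standing in for a real API call. It pattern-completes the planted
# sequence instead of reasoning, as Section 1.1 predicts weaker models will.
def stub_complete(prompt):
    return "The next threshold is 0.8."

result = attack_assessment(
    "Thresholds tested so far: 0.2, 0.4, 0.6. The cost table says the "
    "profit-maximizing untested threshold is 0.5. Which threshold "
    "should be tested next?",
    stub_complete,
    grader=lambda answer: "0.5" in answer,
)
print(result["ai_passes"])  # False: the planted sequence baited the stub
```

Swap `stub_complete` for a wrapper around whichever model you are testing; if `ai_passes` comes back True, step 2 applies and the parts the model gets right need redesign.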

New addition (from Manzoor): When testing, pay attention to whether AI gets the right answer for the wrong reason. Sequence models generate plausible explanations after deciding the answer — the explanation sounds convincing even when the answer is incorrect. Look for:

Source: Manzoor (2026) — AI Innovation in Teaching Workshop


2. Building Course Materials with AI Coding Agents

Faculty don’t need to be web developers to build interactive course tools. Manzoor built haggleforme.computer (a full-stack negotiation simulation) entirely with AI coding agents — “I do not know modern JavaScript.”

2.1 The SPEC.md Workflow

  1. Write a specification document (SPEC.md) describing what you want:
    • Product summary (what does it do?)
    • Scope (features to include/exclude)
    • Technical choices (suggest a stack or let the agent decide)
    • Example interactions (what does the student see?)
  2. Give the spec to a coding agent:
    • Claude Code — terminal-based, reads the spec and builds the project
    • OpenAI Codex — similar capability, different model
    • v0.dev — UI-focused generation
  3. Iterate on the result — describe what to change in natural language
  4. Deploy — agents can also help with deployment (Vercel, Cloudflare Pages, etc.)
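A skeletal SPEC.md following the four bullets in step 1 (the product and every detail below are hypothetical, shown only to illustrate the shape of the document):

```markdown
# SPEC.md — Threshold Profit Explorer

## Product summary
A single-page web app where students adjust two risk thresholds (A, B)
with sliders and see per-configuration call-center profit recomputed live.

## Scope
- In: slider inputs, a 3x3 profit table, a plain-language explanation panel
- Out: accounts, persistence, grading, mobile layout

## Technical choices
- Plain HTML/JS preferred; agent may pick a lightweight framework if justified

## Example interaction
Student sets A = 0.4, B = 0.6; the table highlights that cell and shows
which calls were abandoned at t=60 vs. t=90.
```

The spec deliberately leaves implementation details to the agent; the scope section is what keeps iteration (step 3) from sprawling.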

2.2 What Faculty Can Build This Way

| Tool Type | Example | Course Fit |
| --- | --- | --- |
| Interactive simulations | Negotiation arena, market simulation | Agentic AI, capstone |
| Data exploration apps | Upload CSV → auto-generate EDA with AI narration | BDI 513, FIN 550 |
| SQL sandboxes | Browser-based SQL practice with instant feedback | BADM 554 |
| Dashboard builders | Guided Power BI exercise with AI hints | BADM 557 |
| RAG demos | Upload documents → ask questions → see retrieval | Agentic AI |
| Assessment tools | Custom auto-graders for specific assignment types | Any course |
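As one concrete case from the table, the core of an auto-grader for SQL practice can be sketched with the standard library's sqlite3 module (the schema, queries, and feedback format here are illustrative, not a program-official tool):

```python
import sqlite3

def grade_sql(db, student_sql, reference_sql):
    """Run both queries against the same database and compare result sets."""
    got = db.execute(student_sql).fetchall()
    want = db.execute(reference_sql).fetchall()
    if sorted(got) == sorted(want):  # order-insensitive comparison
        return "correct"
    return f"incorrect: expected {len(want)} rows, got {len(got)}"

# Tiny in-memory practice database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE calls (id INTEGER, duration INTEGER, sale INTEGER)")
db.executemany("INSERT INTO calls VALUES (?, ?, ?)",
               [(1, 110, 1), (2, 120, 0), (3, 90, 1)])

print(grade_sql(db,
    "SELECT id FROM calls WHERE sale = 1",    # student's answer
    "SELECT id FROM calls WHERE sale = 1"))   # instructor's reference
```

A coding agent given this sketch plus a SPEC.md can wrap it in a browser UI; the instant-feedback loop is just this comparison run on each submission.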

2.3 Adapting Demos in Live Studios

Manzoor’s strongest teaching innovation: adapting demos in real-time during lectures using coding agents. For our Studio Sessions (60 min, weekly, live):

Source: Manzoor (2026). GitHub repo with SPEC.md examples: github.com/emaadmanzoor/NBA6925-2026


3. Tool & Data Source Orientation for Faculty

All MSBAi courses share a standard toolkit. Faculty must be fluent in this stack and orient students during Week 1. Full details: program/tools.md.

3.1 Standard Toolkit Summary

| Layer | Tool | Student Cost | What Faculty Should Know |
| --- | --- | --- | --- |
| IDE | VS Code | Free | Primary environment. All assignments are .ipynb in GitHub repos. |
| AI Coding | GitHub Copilot Pro | Free (1 yr) | Inline completions → Chat → Agent Mode progression. Students get full Pro. |
| Notebooks | Google Colab | Free | Browser fallback and cloud GPU. VS Code extension connects both workflows. |
| AI Research | Google Gemini Pro | Free (1 yr) | Deep Research for literature review; NotebookLM for studying materials. |
| AI General | Claude / ChatGPT | Free tiers | No vendor lock-in. Show at least two platforms when demoing. |
| BI | Power BI Desktop | Free (academic) | Primary BI tool program-wide (not Tableau). |
| Version Control | GitHub | Free | All projects in public repos. Commit history used for contribution assessment. |

3.2 Data Sources Available to Students

| Source | Access | Cost | Used In | What’s Available |
| --- | --- | --- | --- | --- |
| WRDS | Program license | Program-paid | FIN 550, others | Compustat (firm financials), CRSP (stock returns), institutional-grade financial data |
| AWS Free Tier | Student signup | Free (12 mo) | BADM 554, 558 | Cloud databases (RDS, DynamoDB), S3 storage, EC2 compute |
| Public APIs | Open | Free | All courses | yfinance, SEC EDGAR, Census Bureau, BLS, FRED, World Bank |
| Kaggle | Account | Free | All courses | 50,000+ datasets, competitions, notebooks |
| Google BigQuery | Sandbox | Free (1 TB/mo) | BADM 558 | Public datasets: GitHub, Stack Overflow, Wikipedia, weather, patents |
| UCI ML Repository | Open | Free | FIN 550, BADM 576 | Classic ML benchmark datasets |
| Hugging Face | Account | Free | Agentic AI, BADM 576 | Pre-trained models, datasets, model cards |

3.3 Faculty Week 1 Checklist

Before your first studio session, verify:


4. Key References

| Reference | What It Offers |
| --- | --- |
| Assessment Strategy | Full assessment model, AIAS levels, oral defense rubric, AI Attribution Log template |
| program/tools.md | Complete tool stack with install instructions and student onboarding checklist |
| Design Principles | 13 guiding principles including cognitive friction, three-layer content model |
| Manzoor (2026) | Cornell AI teaching: AI-resistance techniques, agent-coded demos, live adaptation |
| Cohort Model | Studio session format, Analytics Conversations, scheduling |

Created for MSBAi Faculty Development — Gies College of Business