Last updated: March 29, 2026

MSBAi Assessment Strategy

Purpose: Normative assessment policies for the MSBAi program – what faculty must follow when designing course assessments.

Program-level details: See program/curriculum.md. Research background: See reference/ASSESSMENT_RESEARCH.md. Student-facing rationale: See Why We Ask You to Show Your Thinking.


1. Assessment Philosophy

  1. Validity First – Assessments must generate trustworthy evidence of learning, not just evidence of AI-assisted output quality (Furze, 2026).
  2. Transparency Over Surveillance – Clear AI usage policies per assignment; trust-based with accountability. Students know the AIAS level and rationale for every assessment.
  3. Process Over Product – Document the learning journey, and weight revision and iteration. Rubrics reward reasoning quality over the surface fluency of AI-assisted deliverables (Vendrell & Johnston, 2026, P7).
  4. Authentic Application – Design for reality: assessments reflect real-world AI-augmented workflows, not artificial “AI-proof” constraints (Furze, 2026).
  5. Assessment as Process – Build evidence chains over time (weekly assignments → milestones → deliverable → defense), not single high-stakes moments. Multiple modes: written, oral, practical, collaborative. This pipeline mirrors the structure of a DJ’s buildup: each stage loads the brain’s reward system so the next resolution actually registers. The anticipatory phase — not the payoff — determines how intensely learning lands (Salimpoor et al., 2011; Machulla, 2026).
  6. Cognitive Friction by Design – Preserve the productive struggle essential for deep learning. Students formulate hypotheses, construct arguments, or analyze data independently before consulting AI. AI extends thinking; it doesn’t replace it. The neuroscience is concrete: dopamine neurons fire on prediction errors (surprise), not on predicted rewards — when outcomes match expectations exactly, the brain’s teaching signal is zero (Schultz et al., 1997). Frictionless AI delivery eliminates the uncertainty and effort that make learning neurologically meaningful. Pre-AI phases are not punishment; they are the scenic route that makes the destination worth reaching (Machulla, 2026; Vendrell & Johnston, 2026, P1/P8).
  7. Low-Stakes Iteration with Peer Review – Projects follow a draft → peer feedback → revision cycle. Students learn as much from reviewing others’ work as from receiving feedback. Early submissions are low-stakes checkpoints (formative), not high-stakes deadlines (summative). Peer review is structured with rubrics and trained in the first studio session of each course. Each iteration closes a small gap between intention and outcome — the IKEA effect shows that labor leads to love only when it leads to completion (Norton et al., 2012). Multiple small completions build cumulative ownership of the final deliverable.

2. Standard Assessment Model

Every course follows this structure. Faculty choose specific assignment types (cases, labs, discussions, exercises) based on course content.

8-week, 4-credit courses

| Component | Weight | Timing | Description |
|-----------|--------|--------|-------------|
| Weekly assignments | 30-40% | Weeks 1-8 | Practice exercises, case analyses, discussions, labs, peer reviews |
| Project milestones | 20-30% | Weeks 1-7 | Proposal, drafts, peer review — scaffolded steps toward final project |
| Final project deliverable | 15-20% | Week 8 | Team of 3 (max); integrates skills from weekly assignments |
| Oral defense | 20-25% | Week 8 | Individual accountability for team work |
| Studio participation | 5-10% | Weeks 1-8 | Weekly live sessions |

4-week, 2-credit courses

Same structure, compressed. Individual project only (insufficient time for team formation); the oral defense is still required.

| Component | Weight | Timing | Description |
|-----------|--------|--------|-------------|
| Weekly assignments | 25-35% | Weeks 1-4 | Labs, exercises, readings |
| Project milestones | 20-30% | Weeks 1-3 | Progressive deliverables toward final |
| Final project deliverable | 25-35% | Week 4 | Individual; includes oral defense |
| Studio participation | 5-10% | Weeks 1-4 | Weekly live sessions |
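
Because the ranges overlap, not every combination of in-range weights sums to 100%. A minimal sanity-check sketch in Python, assuming a hypothetical validation helper (component names and ranges come from the tables above; nothing here is official MSBAi tooling):

```python
# Illustrative syllabus check: confirm a chosen grade allocation
# (a) stays within the program's allowed range per component and
# (b) sums to exactly 100%.

ALLOWED_RANGES_8_WEEK = {
    "weekly_assignments": (30, 40),
    "project_milestones": (20, 30),
    "final_deliverable": (15, 20),
    "oral_defense": (20, 25),
    "studio_participation": (5, 10),
}

def validate_allocation(allocation: dict[str, int],
                        allowed: dict[str, tuple[int, int]]) -> list[str]:
    """Return a list of problems; an empty list means the allocation is valid."""
    problems = []
    for component, (lo, hi) in allowed.items():
        weight = allocation.get(component)
        if weight is None:
            problems.append(f"missing component: {component}")
        elif not lo <= weight <= hi:
            problems.append(f"{component}={weight}% outside {lo}-{hi}%")
    total = sum(allocation.values())
    if total != 100:
        problems.append(f"weights sum to {total}%, not 100%")
    return problems

# Example: a valid 8-week allocation (35 + 25 + 15 + 20 + 5 = 100).
print(validate_allocation(
    {"weekly_assignments": 35, "project_milestones": 25,
     "final_deliverable": 15, "oral_defense": 20,
     "studio_participation": 5},
    ALLOWED_RANGES_8_WEEK,
))  # -> []
```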

Key principles


3. AI-Aware Assessment Framework

MSBAi uses three complementary frameworks to structure AI-appropriate assessment.

3.1 AI Assessment Scale (AIAS)

Adapted from Perkins, Furze, Roe, and MacVaugh (2024). The published AIAS uses Levels 1-5; MSBAi renumbers to 0-4 (Level 0 = no AI permitted).

| Level | AI Usage | Example |
|-------|----------|---------|
| 0 | No AI permitted | Oral defenses, quizzes, proctored assessments |
| 1 | AI for brainstorming only | Idea generation, not content creation |
| 2 | AI for drafting with human revision | Code assistance, debugging, first drafts – with attribution |
| 3 | AI as collaborative tool | Full integration with disclosure; AI for code generation, narrative refinement |
| 4 | AI as subject of analysis | Build, critique, and evaluate AI systems |

Every assessment component in every course syllabus is annotated with its AIAS level. See individual course pages for per-assignment levels.
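
For courses that track these annotations programmatically, a minimal sketch of one possible representation, assuming a hypothetical `AIAS` enum and syllabus mapping (the assignment names are invented for illustration):

```python
from enum import IntEnum

class AIAS(IntEnum):
    """MSBAi's 0-4 adaptation of the AI Assessment Scale."""
    NO_AI = 0          # oral defenses, quizzes, proctored assessments
    IDEATION = 1       # brainstorming only, not content creation
    DRAFTING = 2       # AI drafts with human revision and attribution
    COLLABORATION = 3  # full integration with disclosure
    SUBJECT = 4        # AI itself is built, critiqued, evaluated

# Hypothetical per-assignment annotations for one course syllabus.
syllabus = {
    "Week 2 case analysis": AIAS.DRAFTING,
    "Week 5 model critique": AIAS.SUBJECT,
    "Week 8 oral defense": AIAS.NO_AI,
}

for assignment, level in syllabus.items():
    print(f"{assignment}: AIAS {level.value} ({level.name})")
```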

3.2 FACT Framework

From Frontiers in Education research on environmental data science (Frontiers):

| Component | AI Usage | Purpose |
|-----------|----------|---------|
| Fundamental Skills | No AI | Build foundation before advanced concepts |
| Applied Projects | AI-assisted | Real-world problem-solving with AI tools |
| Conceptual Understanding | No AI | Paper-based exam for independent comprehension |
| Thinking (Critical) | AI + Human | Assess and integrate AI outputs |

3.3 Process-Product Model

From faculty training research (Frontiers). Evaluate both:

  1. Final Product – Traditional deliverable quality
  2. Process Documentation – Prompt development, human-AI interaction quality, critical evaluation of AI outputs, revision decisions and rationale

3.4 Pre-AI / AI-Mediated / Post-AI Sequencing

Beyond setting an AIAS level per assignment, faculty should design the sequence of engagement within activities. This prevents cognitive offloading while preserving AI’s value as a thinking partner. The neuroscience basis: the brain’s dopamine system is most engaged under uncertainty — when the outcome is genuinely unknown, not when rewards arrive on schedule (Schultz et al., 1997; Fiorillo et al., 2003). The pre-AI phase creates this uncertainty (will my hypothesis hold?); the AI-mediated phase introduces surprise (did AI find something I missed?); the post-AI phase closes the gap through reflection (what did I actually learn?). This is the “dopamine gap” — the space between expecting and receiving — and it is where motivation, competence, and meaning are built (Machulla, 2026).

| Phase | Student Activity | Purpose |
|-------|------------------|---------|
| Pre-AI (AI-free) | Formulate hypothesis, draft analysis plan, identify assumptions, construct initial argument | Preserves cognitive friction; builds independent reasoning before AI exposure |
| AI-Mediated | Use AI to extend analysis, generate alternatives, challenge assumptions, debug code, explore counterarguments | Positions AI as thinking partner; student directs the inquiry |
| Post-AI (reflection) | Evaluate what AI added vs. missed, compare AI output to own reasoning, document modifications, identify limitations | Builds evaluative judgment and metacognitive awareness |
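
A minimal sketch of how a three-phase activity could be encoded for course-design tooling, assuming a hypothetical `PhasedActivity` dataclass (the schema and the example activity are illustrative only, not a defined MSBAi format):

```python
from dataclasses import dataclass, field

@dataclass
class PhasedActivity:
    """One assignment expressed as a pre-AI / AI-mediated / post-AI sequence."""
    title: str
    pre_ai: list[str] = field(default_factory=list)       # AI-free work, submitted first
    ai_mediated: list[str] = field(default_factory=list)  # AI as thinking partner
    post_ai: list[str] = field(default_factory=list)      # reflect on what AI added/missed

# Hypothetical example for a forecasting case study.
activity = PhasedActivity(
    title="Demand forecasting case",
    pre_ai=["State hypothesis", "Draft analysis plan", "List assumptions"],
    ai_mediated=["Ask AI to challenge assumptions", "Use AI to debug model code"],
    post_ai=["Compare AI's approach to own plan", "Document what AI missed"],
)
print(f"{activity.title}: {len(activity.pre_ai)} AI-free steps come first")
```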

Implementation examples:

This sequencing is supported by Kosmyna et al. (2025), who found that students who engaged independently before consulting an LLM produced significantly stronger outputs than those who used AI from the start. The METR randomized trial (2025) adds a cautionary data point: experienced developers using AI were 19% slower on complex tasks yet believed they had been 20% faster — the frictionless AI experience creates a subjective sense of productivity that diverges from measurable outcomes.

Sources: Vendrell & Johnston (2026), Principles P1 and P8; Furze (2026), “design for reality” principle; Machulla (2026), dopamine prediction error and the “scenic route” framing; METR (2025), AI productivity perception gap.

3.5 AI Declaration Requirements

All major projects require students to document each AI interaction: the tool used, the task, a prompt summary, an output summary, and how the output was modified or validated.

See Appendix A for the AI Attribution Log template.


4. Program-Level Portfolio Structure

For semester-by-semester course assignments, see program/curriculum.md.

| Program Stage | Artifacts | Competencies Demonstrated |
|---------------|-----------|---------------------------|
| Foundation courses | SQL projects, data visualization, reflection | Database, visualization basics |
| Analytics courses | ML models, business case analyses | Predictive modeling, business application |
| Advanced courses | Team project deliverables, peer reviews | Collaboration, communication |
| Capstone | Capstone + comprehensive reflection | Integration, professional readiness |

5. Synchronous Assessment Components

Even in async-first programs, synchronous touchpoints are required:

  1. Studio Sessions (weekly, 60 min — hands-on project work; participation tracked)
  2. Analytics Conversations (bi-weekly, 60 min — case discussions, guest speakers)
  3. Mid-term Check-ins (15-min instructor conversation)
  4. Project Presentations (live via Zoom, recorded backup)
  5. Capstone Defense (mandatory synchronous, panel format)

6. Oral Defense Requirements

Research strongly supports oral components for verifying understanding in AI-enabled environments.

Implementation (ACTIVE – all course syllabi):

| Course Component | Oral Weight | Format |
|------------------|-------------|--------|
| Studio Sessions | 10% (participation) | Live Q&A during sessions |
| 8-Week Course Projects | 20-30% of project grade | 10-15 min team presentation + 5 min Q&A |
| 4-Week Course Projects | Included in final project | 10-min individual presentation + Q&A |
| Capstone | 25-35% of capstone grade (min 20%) | Faculty determines format; suggested 15-20 min presentation + 10 min Q&A |

Capstone oral defense notes:

See Appendix C for the standardized oral defense rubric.


7. Team Assessment Guidelines

Cross-reference: DESIGN_PRINCIPLES.md Constraints 7 (team projects required) and 8 (oral defense weights).

Team Project Policy:

Individual Accountability Within Teams:

Peer Evaluation Framework:

MSBAi Peer Assessment Types

| Type | Description | When to Use |
|------|-------------|-------------|
| Code Review | Evaluate peer code quality and documentation | Technical courses |
| Analysis Critique | Assess methodology and conclusions | Statistics/ML courses |
| Presentation Feedback | Evaluate communication effectiveness | Capstone, storytelling |
| Team Contribution | Rate collaboration and reliability | Group projects |
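
One common way to operationalize individual accountability within a team grade is to scale it by a normalized peer-contribution rating. A minimal sketch under that assumption (the 1-5 scale, midpoint normalization, and multiplier cap are illustrative choices, not MSBAi policy):

```python
def individual_grade(team_grade: float, peer_ratings: list[float],
                     cap: float = 1.05) -> float:
    """Scale a team grade by this member's average peer rating (1-5 scale),
    normalized so the scale midpoint (3.0) leaves the grade unchanged.
    The multiplier is capped to avoid inflating grades."""
    avg = sum(peer_ratings) / len(peer_ratings)
    multiplier = min(avg / 3.0, cap)
    return round(team_grade * multiplier, 1)

# A member rated 4.0, 5.0, and 4.5 on a team that earned 88%:
print(individual_grade(88.0, [4.0, 5.0, 4.5]))  # capped: 88 * 1.05 = 92.4
```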

Low-Stakes Iteration Model

Every multi-week project should follow a draft → feedback → revision cycle:

| Stage | Timing | Stakes | Feedback Source |
|-------|--------|--------|-----------------|
| Draft checkpoint | Mid-project (e.g., Week 2 of a 3-week project) | Low — formative only, or ≤5% of project grade | Peer review + instructor spot-check |
| Peer review | 2-3 days after draft submission | Part of studio participation grade | Structured rubric (same dimensions as final rubric, simplified) |
| Revision + final | Project deadline | Full weight (summative) | Instructor grading on final deliverable |

Implementation requirements:

What students gain from reviewing:


8. Risk Mitigation

Faculty Assessment Validation (“Attack Your Assessments”)

Before finalizing course assessments, faculty should conduct an AI stress test (Furze, 2026):

  1. Attempt your own assessments with AI — Have a confident AI user (faculty member, TA, or instructional designer) complete each major assessment using current AI tools from a student’s perspective
  2. Identify vulnerability points — Which parts can AI complete without genuine understanding? Where does the assessment truly require human reasoning?
  3. Redesign where needed — Strengthen vulnerable assessments by adding pre-AI phases, requiring process documentation, or shifting weight toward oral defense
  4. Repeat each semester — AI capabilities change rapidly; what was AI-resistant in Fall 2026 may not be by Spring 2027

This exercise should be part of the faculty orientation process (presentations/faculty-orientation/) and repeated annually.


Appendix A: AI Attribution Log Template

## AI Contribution Log

### Project: [Name]
### Date: [Date]
### Student: [Name]

| Date | AI Tool | Task | Prompt Summary | Output Summary | How I Modified/Validated |
|------|---------|------|----------------|----------------|--------------------------|
| | | | | | |
| | | | | | |

### Reflection on AI Use
- What tasks did AI help with most effectively?
- Where did AI outputs require significant modification?
- What would I do differently next time?

Appendix B: AIAS Level Reference

| Level | AI Permitted | Example Assignment |
|-------|--------------|--------------------|
| 0 | None | Certification quiz |
| 1 | Ideation only | Brainstorm features (document what AI suggested) |
| 2 | With attribution | Standard project work |
| 3 | As collaborator | Advanced analysis with AI partnership |
| 4 | As subject | Agentic AI course projects |

Appendix C: Oral Defense Rubric

| Criterion | Excellent (A) | Proficient (B) | Developing (C) |
|-----------|---------------|----------------|----------------|
| Clarity of Explanation | Explains concepts clearly to non-expert | Clear with minor gaps | Confusing or unclear |
| Technical Depth | Demonstrates deep understanding | Shows solid understanding | Surface-level knowledge |
| Response to Questions | Handles unexpected questions confidently | Answers most questions adequately | Struggles with questions |
| Methodology Justification | Explains why decisions were made | Describes what was done | Cannot explain choices |
| AI Usage Awareness | Articulates when/how AI helped vs. didn’t | Acknowledges AI use | Unclear on AI role |

Appendix D: Cross-Course Ethics Integration

| Course | Ethics Focus | Case Study Topic |
|--------|--------------|------------------|
| 554 | Data Privacy | Cambridge Analytica, GDPR compliance |
| 513 | Visualization Integrity | Misleading COVID charts, election misinformation |
| 550 | Algorithmic Fairness | Biased lending algorithms, credit scoring |
| 557 | Surveillance & BI | Employee monitoring, predictive policing |
| 558 | Cloud Security | Data breaches, sovereignty, vendor lock-in |
| 576 | Model Accountability | Healthcare AI failures, autonomous vehicles |

Sources (Academic)

  1. Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS). Journal of University Teaching and Learning Practice, 21(6). doi:10.53761/q3azde36
  2. Vendrell, M., & Johnston, S.-K. (2026). Scaffolding Critical Thinking with Generative AI: Design Principles for Integrating Large Language Models in Higher Education. Computers and Education: Artificial Intelligence. doi:10.1016/j.caeai.2026.100572
  3. Furze, L. (2026). What Curriculum Leaders Need to Know About AI in 2026. Blog post.
  4. Kosmyna, N., et al. (2025). Study finding that students who engaged independently before consulting an LLM produced significantly stronger outputs (cited in Vendrell & Johnston, 2026).
  5. Frontiers in Education: FACT Assessment Framework.
  6. Frontiers in Education: Process-Product model from faculty training workshops.
  7. British Journal of Educational Technology: GenAI impact on authentic assessment.
  8. PMC: Student perspectives on competency-based portfolios.
  9. Schultz, W., Dayan, P., & Montague, P. R. (1997). A Neural Substrate of Prediction and Reward. Science, 275(5306), 1593–1599. doi:10.1126/science.275.5306.1593
  10. Fiorillo, C. D., Tobler, P. N., & Schultz, W. (2003). Discrete Coding of Reward Probability and Uncertainty by Dopamine Neurons. Science, 299(5614), 1898–1902. doi:10.1126/science.1077349
  11. Salimpoor, V. N., et al. (2011). Anatomically Distinct Dopamine Release During Anticipation and Experience of Peak Emotion to Music. Nature Neuroscience, 14, 257–262. doi:10.1038/nn.2726
  12. Norton, M. I., Mochon, D., & Ariely, D. (2012). The IKEA Effect: When Labor Leads to Love. Journal of Consumer Psychology, 22(3), 453–460.
  13. Machulla, P. (2026). The Dopamine Gap. Medium.
  14. METR (2025). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. Blog post.

For full source list including university best practices and program examples, see reference/ASSESSMENT_RESEARCH.md.


Document created for MSBAi Program Development - Gies College of Business