MSBAi Assessment Strategy
Purpose: Normative assessment policies for the MSBAi program – what faculty must follow when designing course assessments.
Program-level details: See program/curriculum.md. Research background: See reference/ASSESSMENT_RESEARCH.md. Student-facing rationale: See "Why We Ask You to Show Your Thinking."
1. Assessment Philosophy
- Validity First – Assessments generate trustworthy evidence of learning, not merely evidence that an AI-assisted deliverable is polished (Furze, 2026).
- Transparency Over Surveillance – Clear AI usage policies per assignment; trust-based with accountability. Students know the AIAS level and rationale for every assessment.
- Process Over Product – Document learning journey, weight revision and iteration. Rubrics reward reasoning quality over surface fluency of AI-assisted deliverables (Vendrell & Johnston, 2026, P7).
- Authentic Application – Design for reality: assessments reflect real-world AI-augmented workflows, not artificial “AI-proof” constraints (Furze, 2026).
- Assessment as Process – Build evidence chains over time (weekly assignments → milestones → deliverable → defense), not single high-stakes moments. Multiple modes: written, oral, practical, collaborative. This pipeline mirrors the structure of a DJ’s buildup: each stage loads the brain’s reward system so the next resolution actually registers. The anticipatory phase — not the payoff — determines how intensely learning lands (Salimpoor et al., 2011; Machulla, 2026).
- Cognitive Friction by Design – Preserve the productive struggle essential for deep learning. Students formulate hypotheses, construct arguments, or analyze data independently before consulting AI. AI extends thinking; it doesn’t replace it. The neuroscience is concrete: dopamine neurons fire on prediction errors (surprise), not on predicted rewards — when outcomes match expectations exactly, the brain’s teaching signal is zero (Schultz et al., 1997). Frictionless AI delivery eliminates the uncertainty and effort that make learning neurologically meaningful. Pre-AI phases are not punishment; they are the scenic route that makes the destination worth reaching (Machulla, 2026; Vendrell & Johnston, 2026, P1/P8).
- Low-Stakes Iteration with Peer Review – Projects follow a draft → peer feedback → revision cycle. Students learn as much from reviewing others’ work as from receiving feedback. Early submissions are low-stakes checkpoints (formative), not high-stakes deadlines (summative). Peer review is structured with rubrics and trained in the first studio session of each course. Each iteration closes a small gap between intention and outcome — the IKEA effect shows that labor leads to love only when it leads to completion (Norton et al., 2012). Multiple small completions build cumulative ownership of the final deliverable.
2. Standard Assessment Model
Every course follows this structure. Faculty choose specific assignment types (cases, labs, discussions, exercises) based on course content.
8-week, 4-credit courses
| Component | Weight | Timing | Description |
|---|---|---|---|
| Weekly assignments | 30-40% | Weeks 1-8 | Practice exercises, case analyses, discussions, labs, peer reviews |
| Project milestones | 20-30% | Weeks 1-7 | Proposal, drafts, peer review — scaffolded steps toward final project |
| Final project deliverable | 15-20% | Week 8 | Team of 3 (max); integrates skills from weekly assignments |
| Oral defense | 20-25% | Week 8 | Individual accountability for team work |
| Studio participation | 5-10% | Weeks 1-8 | Weekly live sessions |
4-week, 2-credit courses
Same structure compressed. Individual project only (insufficient time for team formation). Oral defense still required.
| Component | Weight | Timing | Description |
|---|---|---|---|
| Weekly assignments | 25-35% | Weeks 1-4 | Labs, exercises, readings |
| Project milestones | 20-30% | Weeks 1-3 | Progressive deliverables toward final |
| Final project deliverable | 25-35% | Week 4 | Individual; includes oral defense |
| Studio participation | 5-10% | Weeks 1-4 | Weekly live sessions |
Key principles
- One major project per course, not 2-3. Depth over breadth.
- Weekly assignments build skills needed for the project — they are not filler.
- Project milestones threaded throughout all weeks, ramping up toward the end.
- Team size: 3 max (8-week courses). Individual only (4-week courses).
- Oral defense required in every course (see Section 5).
- Faculty choose assignment types based on content: cases, labs, discussions, exercises, peer reviews.
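As a concrete illustration of the standard model, the sketch below shows one way an 8-week syllabus might allocate weights within the ranges above and confirm that the total is exactly 100%. The specific percentages are hypothetical; faculty choose their own values within the published ranges.

```python
# Hypothetical weight allocation for an 8-week, 4-credit course.
# The percentages below are illustrative only; any allocation inside the
# program ranges that sums to exactly 100% is acceptable.

RANGES_8_WEEK = {
    "weekly_assignments": (30, 40),
    "project_milestones": (20, 30),
    "final_deliverable": (15, 20),
    "oral_defense": (20, 25),
    "studio_participation": (5, 10),
}

# One possible allocation for an illustrative course.
chosen = {
    "weekly_assignments": 30,
    "project_milestones": 25,
    "final_deliverable": 15,
    "oral_defense": 22,
    "studio_participation": 8,
}

def validate(weights: dict, ranges: dict) -> None:
    """Check each component sits inside its range and the total is 100%."""
    for component, weight in weights.items():
        low, high = ranges[component]
        assert low <= weight <= high, f"{component}: {weight}% outside {low}-{high}%"
    total = sum(weights.values())
    assert total == 100, f"Weights sum to {total}%, expected 100%"

validate(chosen, RANGES_8_WEEK)
print("Valid allocation:", chosen)
```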
3. AI-Aware Assessment Framework
MSBAi uses three complementary frameworks to structure AI-appropriate assessment.
3.1 AI Assessment Scale (AIAS)
Adapted from Perkins, Furze, Roe, & MacVaugh (2024). Published AIAS uses Levels 1-5; MSBAi adapts to 0-4 (Level 0 = no AI).
| Level | AI Usage | Example |
|---|---|---|
| 0 | No AI permitted | Oral defenses, quizzes, proctored assessments |
| 1 | AI for brainstorming only | Idea generation, not content creation |
| 2 | AI for drafting with human revision | Code assistance, debugging, first drafts – with attribution |
| 3 | AI as collaborative tool | Full integration with disclosure; AI for code generation, narrative refinement |
| 4 | AI as subject of analysis | Build, critique, and evaluate AI systems |
Every assessment component in every course syllabus is annotated with its AIAS level. See individual course pages for per-assignment levels.
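One lightweight way to keep these annotations auditable is to store per-assignment AIAS levels in a structured file next to the syllabus. The sketch below assumes a hypothetical course and assignment names; it illustrates the annotation idea, not a mandated schema.

```python
# Hypothetical per-assignment AIAS annotations for one course syllabus,
# using the MSBAi 0-4 adaptation described in Section 3.1.

AIAS_LEVELS = {
    0: "No AI permitted",
    1: "AI for brainstorming only",
    2: "AI for drafting with human revision",
    3: "AI as collaborative tool",
    4: "AI as subject of analysis",
}

# Assignment names and levels are placeholders for illustration.
syllabus_annotations = {
    "Week 2 lab: baseline model": 0,
    "Week 4 case analysis": 2,
    "Final project deliverable": 3,
    "Oral defense": 0,
}

for assignment, level in syllabus_annotations.items():
    print(f"{assignment}: AIAS {level} ({AIAS_LEVELS[level]})")
```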
3.2 FACT Framework
From Frontiers in Education research on environmental data science (Frontiers):
| Component | AI Usage | Purpose |
|---|---|---|
| Fundamental Skills | No AI | Build foundation before advanced concepts |
| Applied Projects | AI-assisted | Real-world problem-solving with AI tools |
| Conceptual Understanding | No AI | Paper-based exam for independent comprehension |
| Thinking (Critical) | AI + Human | Assess and integrate AI outputs |
3.3 Process-Product Model
From faculty training research (Frontiers). Evaluate both:
- Final Product – Traditional deliverable quality
- Process Documentation – prompt development, human-AI interaction quality, critical evaluation of AI outputs, and revision decisions with rationale
3.4 Pre-AI / AI-Mediated / Post-AI Sequencing
Beyond setting an AIAS level per assignment, faculty should design the sequence of engagement within activities. This prevents cognitive offloading while preserving AI’s value as a thinking partner. The neuroscience basis: the brain’s dopamine system is most engaged under uncertainty — when the outcome is genuinely unknown, not when rewards arrive on schedule (Schultz et al., 1997; Fiorillo et al., 2003). The pre-AI phase creates this uncertainty (will my hypothesis hold?); the AI-mediated phase introduces surprise (did AI find something I missed?); the post-AI phase closes the gap through reflection (what did I actually learn?). This is the “dopamine gap” — the space between expecting and receiving — and it is where motivation, competence, and meaning are built (Machulla, 2026).
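For readers who want the prediction-error claim in formal terms, the temporal-difference formulation associated with Schultz, Dayan, and Montague (1997) is sketched below for context; it is background, not program policy.

```latex
% Reward prediction error in temporal-difference form:
% delta_t is the teaching signal attributed to dopamine neurons.
\delta_t = r_t + \gamma \, V(s_{t+1}) - V(s_t)
% When the outcome r_t plus the discounted value of the next state matches the
% expectation V(s_t), the error is zero and no learning signal is generated.
```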
| Phase | Student Activity | Purpose |
|---|---|---|
| Pre-AI (AI-free) | Formulate hypothesis, draft analysis plan, identify assumptions, construct initial argument | Preserves cognitive friction; builds independent reasoning before AI exposure |
| AI-Mediated | Use AI to extend analysis, generate alternatives, challenge assumptions, debug code, explore counterarguments | Positions AI as thinking partner; student directs the inquiry |
| Post-AI (reflection) | Evaluate what AI added vs. missed, compare AI output to own reasoning, document modifications, identify limitations | Builds evaluative judgment and metacognitive awareness |
Implementation examples:
- FIN 550 lab: Students build a baseline model by hand (pre-AI), then use Copilot to optimize hyperparameters and explore feature engineering (AI-mediated), then write a reflection comparing their intuition to AI suggestions (post-AI)
- BDI 513 case: Students draft their own data story narrative (pre-AI), ask AI to suggest alternative framings or identify gaps (AI-mediated), then defend their final narrative choice in studio (post-AI)
- BADM 557 project milestone: Students design their BI dashboard wireframe independently (pre-AI), use AI to generate DAX formulas and suggest visualizations (AI-mediated), then critique which AI suggestions they rejected and why (post-AI)
This sequencing is supported by Kosmyna et al. (2025), who found that students who engaged independently before consulting an LLM produced significantly stronger outputs than those who used AI from the start. The METR randomized trial (2025) adds a cautionary data point: experienced developers using AI were 19% slower on complex tasks yet believed they had been 20% faster — the frictionless AI experience creates a subjective sense of productivity that diverges from measurable outcomes.
Sources: Vendrell & Johnston (2026), Principles P1 and P8; Furze (2026), “design for reality” principle; Machulla (2026), dopamine prediction error and the “scenic route” framing; METR (2025), AI productivity perception gap.
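To make the three-phase flow concrete in code, the sketch below follows the spirit of the FIN 550 lab example above. The dataset, model, and parameter grid are hypothetical, and the grid search merely stands in for whatever alternatives an AI assistant might suggest; it illustrates the sequencing, not a prescribed lab design.

```python
# Minimal sketch of a pre-AI / AI-mediated / post-AI lab flow (hypothetical).
# Pre-AI: the student commits to a baseline before opening any AI tool.
# AI-mediated: AI-suggested alternatives (represented by a parameter search).
# Post-AI: the student compares results and writes the reflection.

from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=15.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pre-AI phase: hand-chosen baseline, documented before any AI assistance.
baseline = Ridge(alpha=1.0).fit(X_train, y_train)
baseline_score = r2_score(y_test, baseline.predict(X_test))

# AI-mediated phase: explore regularization strengths the student did not
# start with (a stand-in for Copilot-style hyperparameter suggestions).
search = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0, 100.0]}, cv=5)
search.fit(X_train, y_train)
assisted_score = r2_score(y_test, search.best_estimator_.predict(X_test))

# Post-AI phase: the quantitative comparison that seeds the written reflection.
print(f"Baseline R^2 (my choice):       {baseline_score:.3f}")
print(f"Explored R^2 (suggested grid):  {assisted_score:.3f}")
print(f"Best alpha found by the search: {search.best_params_['alpha']}")
```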
3.5 AI Declaration Requirements
All major projects require students to document:
- Which AI tools were used
- What prompts were employed
- What limitations were encountered
- How human judgment modified AI outputs
See Appendix A for the AI Attribution Log template.
4. Program-Level Portfolio Structure
For semester-by-semester course assignments, see program/curriculum.md.
| Program Stage | Artifacts | Competencies Demonstrated |
|---|---|---|
| Foundation courses | SQL projects, data visualization, reflection | Database, visualization basics |
| Analytics courses | ML models, business case analyses | Predictive modeling, business application |
| Advanced courses | Team project deliverables, peer reviews | Collaboration, communication |
| Capstone | Capstone + comprehensive reflection | Integration, professional readiness |
5. Synchronous Assessment Components
Even in async-first programs, synchronous touchpoints are required:
- Studio Sessions (weekly, 60 min — hands-on project work; participation tracked)
- Analytics Conversations (bi-weekly, 60 min — case discussions, guest speakers)
- Mid-term Check-ins (15-min instructor conversation)
- Project Presentations (live via Zoom, recorded backup)
- Capstone Defense (mandatory synchronous, panel format)
6. Oral Defense Requirements
Research strongly supports oral components for verifying understanding in AI-enabled environments.
Implementation (ACTIVE – all course syllabi):
| Course Component | Oral Weight | Format |
|---|---|---|
| Studio Sessions | 10% (participation) | Live Q&A during sessions |
| 8-Week Course Projects | 20-30% of project grade | 10-15 min team presentation + 5 min Q&A |
| 4-Week Course Projects | Included in final project | 10-min individual presentation + Q&A |
| Capstone | 25-35% of capstone grade (min 20%) | Faculty determines format; suggested 15-20 min presentation + 10 min Q&A |
Capstone oral defense notes:
- Faculty determine length and format within the 25-35% range
- Panel may include client sponsor for client projects
- Each student must answer questions individually, regardless of team/individual format
- Career pivoters should be assessed on their ability to articulate their analytical value proposition
- See courses/capstone.md for full capstone guidelines
See Appendix C for the standardized oral defense rubric.
7. Team Assessment Guidelines
Cross-reference: DESIGN_PRINCIPLES.md Constraints 7 (team projects required) and 8 (oral defense weights).
Team Project Policy:
- Every 8-week course includes at least one team project (typically the final project)
- Teams of 3 students (2 or 4 in exceptional circumstances), assigned by the instructor to balance skill sets
- 4-week courses are individual projects only (insufficient time for team formation)
Individual Accountability Within Teams:
- Peer evaluation required at project completion (5-10% of team project grade)
- Each team member must be able to answer questions on any part of the project during oral defense
- Git commit history reviewed to assess individual contributions
Peer Evaluation Framework:
- Anonymous peer evaluation using standardized rubric
- Dimensions: contribution quality, reliability, communication, collaboration
- Instructor reviews peer evaluations for outliers and adjusts individual grades if needed
- Students are trained on giving constructive feedback in the first studio session
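One possible way for instructors to operationalize the outlier review is sketched below. The rating scale, threshold, and scores are hypothetical, and the final grade adjustment remains an instructor decision; the code only surfaces candidates for review.

```python
# Hypothetical peer-evaluation summary: each student rates teammates 1-5 on the
# four dimensions above. The sketch averages ratings received per student and
# flags anyone whose average diverges from the team mean for instructor review.

from statistics import mean

DIMENSIONS = ["contribution", "reliability", "communication", "collaboration"]

# ratings[rater][ratee] = {dimension: score}; names and scores are illustrative.
ratings = {
    "A": {"B": {"contribution": 5, "reliability": 4, "communication": 5, "collaboration": 5},
          "C": {"contribution": 2, "reliability": 3, "communication": 3, "collaboration": 2}},
    "B": {"A": {"contribution": 4, "reliability": 5, "communication": 4, "collaboration": 5},
          "C": {"contribution": 3, "reliability": 2, "communication": 3, "collaboration": 3}},
    "C": {"A": {"contribution": 5, "reliability": 5, "communication": 4, "collaboration": 4},
          "B": {"contribution": 4, "reliability": 4, "communication": 5, "collaboration": 4}},
}

# Collect every score each student received, then average.
received = {}
for rater, peers in ratings.items():
    for ratee, scores in peers.items():
        received.setdefault(ratee, []).extend(scores[d] for d in DIMENSIONS)
averages = {student: mean(scores) for student, scores in received.items()}

team_mean = mean(averages.values())
THRESHOLD = 1.0  # flag if a student's average is more than 1 point from the team mean
for student, avg in sorted(averages.items()):
    flag = "REVIEW" if abs(avg - team_mean) > THRESHOLD else "ok"
    print(f"{student}: peer average {avg:.2f} (team mean {team_mean:.2f}) -> {flag}")
```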
MSBAi Peer Assessment Types
| Type | Description | When to Use |
|---|---|---|
| Code Review | Evaluate peer code quality and documentation | Technical courses |
| Analysis Critique | Assess methodology and conclusions | Statistics/ML courses |
| Presentation Feedback | Evaluate communication effectiveness | Capstone, storytelling |
| Team Contribution | Rate collaboration and reliability | Group projects |
Low-Stakes Iteration Model
Every multi-week project should follow a draft → feedback → revision cycle:
| Stage | Timing | Stakes | Feedback Source |
|---|---|---|---|
| Draft checkpoint | Mid-project (e.g., Week 2 of a 3-week project) | Low — formative only, or ≤5% of project grade | Peer review + instructor spot-check |
| Peer review | 2-3 days after draft submission | Part of studio participation grade | Structured rubric (same dimensions as final rubric, simplified) |
| Revision + final | Project deadline | Full weight (summative) | Instructor grading on final deliverable |
Implementation requirements:
- 8-week courses: At least 1 project must include a peer-reviewed draft stage before final submission
- 4-week courses: Draft checkpoints encouraged but not required (compressed timeline)
- Capstone: Part 1 (portfolio) uses Week 2 peer workshop; Part 2 (project) uses Week 7 dry run
- Peer review training: First studio session of each course includes a 15-minute peer review calibration exercise (students review a sample artifact together, discuss scoring, align expectations)
- Peer review rubric: Use a simplified version of the project’s final rubric (3 dimensions instead of 5, same language)
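As one illustration of the simplified-rubric requirement, the sketch below derives a three-dimension peer-review rubric from a hypothetical five-dimension final rubric, reusing the same descriptor language. The dimension names and descriptors are placeholders, not mandated wording.

```python
# Hypothetical final project rubric (5 dimensions) and the simplified
# 3-dimension peer-review rubric derived from it. Reusing the descriptors
# keeps peer feedback calibrated to the final grading criteria.

FINAL_RUBRIC = {
    "analytical_rigor": "Methodology is sound and assumptions are stated",
    "use_of_evidence": "Claims are supported by the data presented",
    "communication": "Findings are explained clearly for a business audience",
    "ai_usage_awareness": "AI contributions are attributed and critically evaluated",
    "reproducibility": "Code and steps can be rerun from the submission",
}

PEER_REVIEW_DIMENSIONS = ["analytical_rigor", "use_of_evidence", "communication"]
peer_rubric = {dim: FINAL_RUBRIC[dim] for dim in PEER_REVIEW_DIMENSIONS}

for dim, descriptor in peer_rubric.items():
    print(f"{dim}: {descriptor}")
```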
What students gain from reviewing:
- Exposure to different approaches to the same problem
- Calibration of their own work quality against peers
- Practice giving constructive technical feedback — a workplace skill
8. Risk Mitigation
- AI Over-Reliance: Oral defense verifies understanding; process documentation reveals AI dependency; AIAS levels set appropriate use boundaries; pre-AI/post-AI sequencing (Section 3.4) ensures students build independent reasoning before consulting AI
- Assessment Gaming: Multiple modalities (written, code, oral, peer); progressive project complexity; individual capstone with live defense
- Cognitive Offloading: Pre-AI phases in every activity preserve productive struggle; rubrics explicitly reward reasoning quality over output polish; AI Attribution Log makes thinking process visible. The risk is neurological, not just pedagogical: AI tools that deliver instant answers without uncertainty create the “infinite scroll” effect — engagement without completion signals, dopamine gaps that never resolve (Machulla, 2026). Pre-AI phases are the “page break” that gives the brain a stopping cue.
- Student Resistance to Process Documentation: Explain career rationale; provide templates (Appendix A); grade leniently in the first term and increase rigor over time
- Faculty Capacity for Oral Defenses: One major project per course limits oral defense load; use studio session time; TA support for scheduling
Faculty Assessment Validation (“Attack Your Assessments”)
Before finalizing course assessments, faculty should conduct an AI stress test (Furze, 2026):
- Attempt your own assessments with AI — Have a confident AI user (faculty member, TA, or instructional designer) complete each major assessment using current AI tools from a student’s perspective
- Identify vulnerability points — Which parts can AI complete without genuine understanding? Where does the assessment truly require human reasoning?
- Redesign where needed — Strengthen vulnerable assessments by adding pre-AI phases, requiring process documentation, or shifting weight toward oral defense
- Repeat each semester — AI capabilities change rapidly; what was AI-resistant in Fall 2026 may not be by Spring 2027
This exercise should be part of the faculty orientation process (presentations/faculty-orientation/) and repeated annually.
Appendix A: AI Attribution Log Template
## AI Contribution Log
### Project: [Name]
### Date: [Date]
### Student: [Name]
| Date | AI Tool | Task | Prompt Summary | Output Summary | How I Modified/Validated |
|------|---------|------|----------------|----------------|--------------------------|
| | | | | | |
| | | | | | |
### Reflection on AI Use
- What tasks did AI help with most effectively?
- Where did AI outputs require significant modification?
- What would I do differently next time?
Appendix B: AIAS Level Reference
| Level | AI Permitted | Example Assignment |
|---|---|---|
| 0 | None | Certification quiz |
| 1 | Ideation only | Brainstorm features (document what AI suggested) |
| 2 | With attribution | Standard project work |
| 3 | As collaborator | Advanced analysis with AI partnership |
| 4 | As subject | Agentic AI course projects |
Appendix C: Oral Defense Rubric
| Criterion | Excellent (A) | Proficient (B) | Developing (C) |
|---|---|---|---|
| Clarity of Explanation | Explains concepts clearly to non-expert | Clear with minor gaps | Confusing or unclear |
| Technical Depth | Demonstrates deep understanding | Shows solid understanding | Surface-level knowledge |
| Response to Questions | Handles unexpected questions confidently | Answers most questions adequately | Struggles with questions |
| Methodology Justification | Explains why decisions were made | Describes what was done | Cannot explain choices |
| AI Usage Awareness | Articulates when/how AI helped vs. didn’t | Acknowledges AI use | Unclear on AI role |
Appendix D: Cross-Course Ethics Integration
| Course | Ethics Focus | Case Study Topic |
|---|---|---|
| 554 | Data Privacy | Cambridge Analytica, GDPR compliance |
| 513 | Visualization Integrity | Misleading COVID charts, election misinformation |
| 550 | Algorithmic Fairness | Biased lending algorithms, credit scoring |
| 557 | Surveillance & BI | Employee monitoring, predictive policing |
| 558 | Cloud Security | Data breaches, sovereignty, vendor lock-in |
| 576 | Model Accountability | Healthcare AI failures, autonomous vehicles |
Sources (Academic)
- Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS). Journal of University Teaching and Learning Practice, 21(06). doi:10.53761/q3azde36
- Vendrell, M. & Johnston, S.-K. (2026). Scaffolding Critical Thinking with Generative AI: Design Principles for Integrating Large Language Models in Higher Education. Computers and Education: Artificial Intelligence. doi:10.1016/j.caeai.2026.100572 — Summary
- Furze, L. (2026). What Curriculum Leaders Need to Know About AI in 2026. Blog post — Summary
- Kosmyna, N., et al. (2025). Study finding that students who engaged independently before consulting an LLM produced significantly stronger outputs (cited in Vendrell & Johnston, 2026).
- Frontiers: FACT Assessment Framework
- Frontiers: Process-Product model from faculty training workshops
- British Journal of Educational Technology: GenAI impact on authentic assessment
- PMC: Student perspectives on competency-based portfolios
- Schultz, W., Dayan, P., & Montague, P.R. (1997). A Neural Substrate of Prediction and Reward. Science, 275(5306), 1593–1599. doi:10.1126/science.275.5306.1593
- Fiorillo, C.D., Tobler, P.N., & Schultz, W. (2003). Discrete Coding of Reward Probability and Uncertainty by Dopamine Neurons. Science, 299(5614), 1898–1902. doi:10.1126/science.1077349
- Salimpoor, V.N. et al. (2011). Anatomically Distinct Dopamine Release During Anticipation and Experience of Peak Emotion to Music. Nature Neuroscience, 14, 257–262. doi:10.1038/nn.2726
- Norton, M.I., Mochon, D., & Ariely, D. (2012). The IKEA Effect: When Labor Leads to Love. Journal of Consumer Psychology, 22(3), 453–460. HBS
- Machulla, P. (2026). The Dopamine Gap. Medium. — Summary
- METR (2025). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. Blog
For full source list including university best practices and program examples, see reference/ASSESSMENT_RESEARCH.md.
Document created for MSBAi Program Development - Gies College of Business