
MSBAi Assessment Strategy

Program-level details: See program/CURRICULUM.md

Version: 1.0
Last Updated: February 2026
Purpose: Comprehensive assessment strategy synthesizing research best practices, AI-aware frameworks, and curriculum coherence analysis for the MSBAi online program


Executive Summary

This document merges assessment research, online program best practices, and curriculum review findings into a unified assessment strategy for MSBAi. It addresses:

  1. Why traditional exams fail in AI-enabled online environments
  2. What works instead (project-based, portfolio, competency, oral defense)
  3. How to assess when students can use AI (FACT framework, AIAS levels, process-product model)
  4. Curriculum coherence (scaffolding analysis, competitive positioning)
  5. Implementation roadmap with priorities, owners, and deadlines

Key Findings:


Part 1: Research & Best Practices

1.1 Why Traditional Exams Are at Risk

The AI Cheating Challenge

Traditional written exams and take-home assignments face unprecedented challenges in online environments:

Detection Impossibility

Emerging Cheating Technologies

Surveillance Limitations

Key Insight for MSBAi

“The seismic shift brought about by generative AI is challenging the fundamental pillars of written coursework assessment in higher education.” (Scientect)


1.2 Project-Based Assessment Best Practices

Leading Program Examples

University of Chicago - MS Applied Data Science

Cornell Johnson MSBA

USC Marshall MSBA

Columbia MSBA

Design Principles for Project-Based Assessment

  1. Industry Partnership: Connect students with real organizations and real problems
  2. Team Composition: Groups of 3-4 students with faculty mentor supervision
  3. Dual Accountability: Sponsor expectations + academic learning objectives
  4. Portfolio Integration: Projects become career assets, not just grades
  5. Presentation Component: Final delivery includes live presentation/defense

| Component | Weight | Purpose |
|---|---|---|
| Process Documentation | 20% | Evidence of journey, not just outcome |
| Technical Deliverables | 30% | Code, analysis, models |
| Written Report | 20% | Communication and synthesis |
| Oral Presentation/Defense | 30% | Authenticity verification |
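
To make the weighting arithmetic concrete, here is a minimal Python sketch; the component keys and the 0-100 score scale are illustrative assumptions, not program policy:

```python
# Minimal sketch of the weighting model above; component names and the
# 0-100 score scale are illustrative assumptions.

WEIGHTS = {
    "process_documentation": 0.20,
    "technical_deliverables": 0.30,
    "written_report": 0.20,
    "oral_defense": 0.30,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%

def project_grade(scores: dict[str, float]) -> float:
    """Weighted average of component scores, each on a 0-100 scale."""
    missing = set(WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing component scores: {missing}")
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Example: strong deliverables but a weak oral defense still caps the grade,
# which is the point of weighting authenticity verification at 30%.
print(project_grade({
    "process_documentation": 90,
    "technical_deliverables": 85,
    "written_report": 88,
    "oral_defense": 70,
}))  # 82.1
```

Keeping the weights in one place makes it easy to verify they still sum to 100% whenever a rubric is revised.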

1.3 Portfolio-Based Assessment Models

Framework: Competency-Based Portfolios

Portfolio assessment works well in graduate programs because students can demonstrate mastery through accumulated evidence rather than single-point assessments (PMC).

Key Components:

  1. Artifact Collection: Students curate work samples demonstrating each competency
  2. Reflection: Written analysis connecting artifacts to learning outcomes
  3. Progression Evidence: Show growth over time, not just final state
  4. External Validation: Industry feedback on portfolio quality

Cleveland Clinic Medical School Model

MPA Program Model (NASPAA)

Portfolio Structure for MSBAi

| Category | Artifacts | Assessment Focus |
|---|---|---|
| Technical Skills | Code repos, analysis notebooks, dashboards | Competency demonstration |
| Communication | Reports, presentations, visualizations | Professional communication |
| Problem-Solving | Case analyses, project decisions | Critical thinking process |
| Collaboration | Team reflections, peer feedback | Professional behavior |
| Growth | Version comparisons, revision history | Learning journey |

1.4 Competency-Based and Mastery-Based Assessment

Core Principles

Competency-based education (CBE) focuses on demonstrating skill mastery rather than seat time (VerifyEd).

Key Features:

Programs Leading in CBE

Capella University FlexPath

South College CBE

Implementation for MSBAi

Competency Categories:

  1. Data Management (Database, ETL, Data Quality)
  2. Statistical Analysis (Inference, Regression, Time Series)
  3. Machine Learning (Classification, Clustering, Prediction)
  4. Visualization & Communication (Dashboards, Reports, Presentations)
  5. Business Application (Problem Framing, ROI Analysis, Recommendations)
  6. Professional Practice (Ethics, Collaboration, Project Management)

Mastery Levels:


1.5 Assessing Learning When Students Can Use AI Tools

The FACT Assessment Framework

From Frontiers in Education research on environmental data science education (Frontiers):

| Component | AI Usage | Purpose |
|---|---|---|
| Fundamental Skills | No AI | Build foundation before advanced concepts |
| Applied Projects | AI-assisted | Real-world problem-solving with AI tools |
| Conceptual Understanding | No AI | Paper-based exam for independent comprehension |
| Thinking (Critical) | AI + Human | Assess and integrate AI outputs |

Pedagogical Rationale:

AI Assessment Scale (AIAS)

Framework for defining permitted AI integration levels in each assignment:

| Level | AI Usage | Example |
|---|---|---|
| Level 0 | No AI permitted | In-class exams, oral defenses |
| Level 1 | AI for brainstorming only | Idea generation, not content |
| Level 2 | AI for drafting with human revision | First draft assistance |
| Level 3 | AI as collaborative tool | Full integration with disclosure |
| Level 4 | AI as subject of analysis | Critique and compare AI outputs |

Process-Product Assessment Model

From faculty training workshops (Frontiers):

Evaluate both:

  1. Final Product - Traditional deliverable quality
  2. Process Documentation:
    • Prompt development and refinement
    • Human-AI interaction quality
    • Critical evaluation of AI outputs
    • Revision decisions and rationale

AI Declaration Requirements

Require students to document:


1.6 Peer Review and Collaborative Assessment

Research-Backed Benefits

MIT research shows that learners who provided more peer feedback also earned better grades (MIT Open Learning).

Students with high engagement in peer assessment showed:

Four Pillars Framework

From higher education research (Springer):

| Pillar | Focus |
|---|---|
| Veracity | Assessment design integrity |
| Validity | Implementation accuracy |
| Volume | Sufficient feedback quantity |
| Literacy | Student skill in giving/receiving feedback |

Implementation Best Practices

Transparent Rubrics

Technology Platforms

Structured Reflection

Accessibility Considerations

Peer Assessment Types for MSBAi

| Type | Description | When to Use |
|---|---|---|
| Code Review | Evaluate peer code quality and documentation | Technical courses |
| Analysis Critique | Assess methodology and conclusions | Statistics/ML courses |
| Presentation Feedback | Evaluate communication effectiveness | Capstone, storytelling |
| Team Contribution | Rate collaboration and reliability | Group projects |

1.7 Authentic Assessment Mirroring Real Workplace Tasks

Duke CTL Framework

Six concrete approaches from Duke’s Center for Teaching and Learning (Duke CTL):

  1. Performance Tasks and Projects
    • Build prototypes, policy memos, public-facing deliverables
    • Real stakeholder audiences when possible
  2. Case Studies and Simulations
    • Context-rich problems with incomplete information
    • Require justification of decisions and trade-offs
  3. Guided Investigations
    • Deep exploration with presentations or extended writing
    • Student-directed inquiry within framework
  4. Oral Defenses
    • Defend choices, trade-offs, and revisions live
    • Cannot be AI-generated in real-time
  5. Process-Centered Work
    • Value drafts, logs, notebooks alongside final products
    • Document decision-making journey
  6. Digital Portfolios
    • Cumulative evidence of growth
    • Annotated with standards-aligned rubrics

AI-Resistant Authentic Assessment Strategies

Anchor in Local Context:

Emphasize Process Over Product:

Social-Experiential Learning:

Interview/Oral Exam Implementation

From University of Dayton research (U Dayton):

Time Efficiency:

Best Practices:

  1. Clear communication of expectations
  2. Example videos showing what to expect
  3. Office hours practice sessions
  4. Challenging but fair questions aligned to learning objectives
  5. Question variation across students
  6. Standardized scoring rubric

Proposed Assessment Weighting Model:


Part 2: Assessment Framework for MSBAi

2.1 Assessment Philosophy

Principle 1: Transparency Over Surveillance

Principle 2: Process Over Product

Principle 3: Authentic Application

Principle 4: Multiple Assessment Modes

2.2 Course-Level Assessment Design

For Technical Courses (Database, BI, ML):

| Component | Weight | AI Policy |
|---|---|---|
| Labs/Exercises | 20% | No AI (foundation building) |
| Projects | 40% | AI-assisted with disclosure |
| Technical Quiz (oral or proctored) | 20% | No AI |
| Peer Code Review | 10% | N/A |
| Portfolio Documentation | 10% | AI for editing only |

For Communication Courses (Data Storytelling):

| Component | Weight | AI Policy |
|---|---|---|
| Presentations (recorded + live) | 30% | AI for preparation, not delivery |
| Written Analysis | 25% | AI-assisted with disclosure |
| Peer Feedback | 15% | N/A |
| Revision Portfolio | 20% | Show before/after with reflection |
| Final Oral Defense | 10% | No AI |

For Capstone:

| Component | Weight | AI Policy |
|---|---|---|
| Sponsor Deliverables | 30% | Industry-standard (AI permitted) |
| Process Documentation | 20% | AI for editing only |
| Final Presentation | 25% | No AI during delivery |
| Oral Defense/Q&A | 15% | No AI |
| Peer Evaluation | 10% | N/A |

2.3 Program-Level Portfolio Structure

For semester-by-semester course assignments, see program/CURRICULUM.md. The portfolio artifacts below align to the program’s progressive competency development.

| Program Stage | Artifacts | Competencies Demonstrated |
|---|---|---|
| Foundation courses | SQL projects, data visualization, reflection | Database, visualization basics |
| Analytics courses | ML models, business case analyses | Predictive modeling, business application |
| Advanced courses | Team project deliverables, peer reviews | Collaboration, communication |
| Capstone | Capstone + comprehensive reflection | Integration, professional readiness |

2.4 Synchronous Assessment Components

Even in async-first programs, include synchronous touchpoints:

  1. Monthly Webinar Discussions (participation tracked)
  2. Mid-term Check-ins (15-min instructor conversation)
  3. Project Presentations (live via Zoom, recorded backup)
  4. Capstone Defense (mandatory synchronous, panel format)

2.5 Technology Stack for Assessment

| Tool | Purpose |
|---|---|
| Canvas | Assignment submission, rubrics, peer review |
| GitHub | Code portfolios, version history |
| Zoom | Oral defenses, presentations |
| Gradescope | Code autograding with manual review |
| Peergrade | Structured peer assessment |
| Portfolio Platform (custom or Portfolium) | Cumulative evidence |

2.6 Online Assessment: The Challenge

Traditional exams in online asynchronous programs face three critical vulnerabilities:

  1. AI Completion Risk: Students can use AI to complete traditional assignments without learning
  2. Identity Verification: Proctoring is expensive, intrusive, and often circumventable
  3. Authenticity Gap: Exam performance doesn’t demonstrate workplace-ready skills

MSBAi’s Current Approach: Project-based assessment with no traditional exams – this is well-aligned with research.

2.7 FACT Framework Applied to MSBAi

| Component | Description | Weight (Recommended) | MSBAi Implementation |
|---|---|---|---|
| Fundamental | Basic knowledge demonstration | 10-15% | Quizzes, DataCamp modules |
| Applied | Hands-on skill application | 40-50% | Projects (70-90% in MSBAi) |
| Conceptual | Understanding “why” and “when” | 15-20% | Case write-ups, reflections |
| Thinking | Novel problem-solving | 20-25% | Open-ended project components |

Assessment: MSBAi over-weights Applied (good for skill-building) but may under-weight Conceptual. Recommend adding “design rationale” sections to project rubrics.

2.8 AIAS Applied to MSBAi

| Level | AI Permitted | When to Use | MSBAi Mapping |
|---|---|---|---|
| 0 | No AI | Testing foundational knowledge | Proctored certification exams (if any) |
| 1 | AI for ideation only | Early exploration | Weeks 1-2 of projects |
| 2 | AI with attribution | Standard projects | Most project work |
| 3 | AI as collaborator | Advanced applications | Final projects, capstone |
| 4 | AI as subject of analysis | GenAI course | Generative AI for Analytics |

Recommendation: Explicitly label each assignment with its AIAS level. This sets clear expectations and teaches appropriate tool use.
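
As a sketch of what explicit labeling could look like in syllabus tooling, the Python below is hypothetical (the enum and assignment names are invented for illustration, not an existing MSBAi system):

```python
# Hypothetical sketch: machine-readable AIAS labels for assignments, so the
# permitted-AI policy can be rendered consistently on every assignment page.

from enum import IntEnum

class AIAS(IntEnum):
    NO_AI = 0
    IDEATION_ONLY = 1
    WITH_ATTRIBUTION = 2
    COLLABORATOR = 3
    AI_AS_SUBJECT = 4

# Example assignment-to-level mapping (names are invented for illustration)
ASSIGNMENTS = {
    "Week 1 SQL Lab": AIAS.NO_AI,
    "Project 1 Proposal": AIAS.IDEATION_ONLY,
    "Project 1 Final Report": AIAS.WITH_ATTRIBUTION,
    "Capstone Deliverable": AIAS.COLLABORATOR,
    "GenAI Critique Essay": AIAS.AI_AS_SUBJECT,
}

def syllabus_badge(name: str) -> str:
    """Render the AIAS label that would appear next to the assignment."""
    level = ASSIGNMENTS[name]
    pretty = level.name.replace("_", " ").title()
    return f"{name}: AIAS Level {int(level)} ({pretty})"

for assignment in ASSIGNMENTS:
    print(syllabus_badge(assignment))
```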

2.9 Process-Product Model Applied to MSBAi

| Dimension | What to Evaluate | Implementation |
|---|---|---|
| Product | Final deliverable quality | Current rubrics |
| Process | How the student approached the problem | Require “methodology log” |
| Iteration | How the student refined their work | Version history on GitHub |
| Reflection | What the student learned | Post-project reflection essays |
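
For the Iteration row, version history can be turned into gradeable evidence automatically. A minimal sketch, assuming the submission is a local git clone and `git` is on the PATH:

```python
# Minimal sketch: summarize a project's git history as iteration evidence.
# Assumes the submission is a local git clone and git is on the PATH.

import subprocess

def iteration_log(repo_path: str, max_commits: int = 20) -> list[str]:
    """Return up to max_commits recent commit dates/subjects, oldest first."""
    result = subprocess.run(
        [
            "git", "-C", repo_path, "log", "--reverse",
            f"--max-count={max_commits}",
            "--pretty=format:%ad %s", "--date=short",
        ],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    for line in iteration_log("."):
        print(line)  # e.g. "2026-02-03 Refactor feature engineering after EDA"
```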

Critical Addition for MSBAi: Require a “Process Documentation” section in every major project:

2.10 Oral Defense Component

Research strongly supports oral components for online programs:

“Oral defenses remain the most reliable assessment method for verifying understanding in AI-enabled environments.”

Implementation for MSBAi (ACTIVE — implemented in all course syllabi):

| Course Component | Oral Weight | Format |
|---|---|---|
| Studio Sessions | 10% (participation) | Live Q&A during sessions |
| 8-Week Course Projects | 20-30% of project grade | 10-15 min team presentation + 5 min Q&A |
| 4-Week Course Projects | Included in final project | 10 min individual presentation + Q&A |
| Capstone | 40-50% of capstone grade | 20 min panel presentation + 10 min defense Q&A |

Weighting Recommendation: Shift to 60% written/code deliverables + 40% oral demonstration for major projects. This:

2.11 Team Assessment Guidelines

Implementation for MSBAi (ACTIVE — team projects in all 8-week courses):

Team Project Policy:

Individual Accountability Within Teams:

Peer Evaluation Framework:

Note: AIAS level annotations per assignment are a future enhancement — flag for faculty development workshops before Fall 2026 launch, but do not implement in this pass.


Part 3: Curriculum Coherence Findings

3.1 Current Research Frameworks

HCAIF Framework (Human-Centered AI in Finance/Education)

Modern AI-first curricula follow a five-phase integration model:

| Phase | Description | MSBAi Alignment |
|---|---|---|
| Preparation | Students engage with AI before class sessions | Async-first design enables this |
| Personalized Learning | AI adapts to individual learner needs | Implicit; could be more explicit |
| Classroom Engagement | Active AI-assisted problem-solving | Studio sessions + project work |
| Summative Assessment | AI-aware evaluation methods | Needs attribution requirements |
| Continuous Monitoring | Track skill development over time | Progressive project complexity |

Recommendation: Make personalized learning explicit by encouraging students to use AI tutoring for weak areas identified in formative assessments.

AI Jockey Model (Yale, 2024-2025)

The “AI Jockey” paradigm treats AI as a tool to be directed skillfully, not a replacement for thinking:

“Students become AI jockeys – steering, evaluating, and refining AI outputs rather than passively consuming them.”

MSBAi Alignment: Strong – the curriculum already emphasizes AI as “productivity accelerator” and requires students to evaluate AI outputs critically. The Generative AI for Analytics elective specifically teaches this skill.

SAIL Framework (Stanford AI Learning)

Stanford’s framework emphasizes three pillars:

  1. Situational Awareness: Know when AI helps vs. when manual work is better
  2. Attribution Discipline: Document AI contributions transparently
  3. Iterative Refinement: Improve AI outputs through prompt engineering

MSBAi Gap: Attribution discipline is mentioned but not enforced. Recommend adding explicit AI attribution requirements to all project rubrics.

70/5 Rule (Industry Research, 2025)

Research suggests:

MSBAi Alignment: The curriculum addresses both tiers: core courses build AI literacy (70% need), while the GenAI elective + capstone develop expertise (5% need).

3.2 Topic Scaffolding & Sequencing Analysis

For the complete course sequence with credit hours and semester assignments, see program/CURRICULUM.md.

Research-Based Sequencing Principles

Principle 1: SQL + Python Together First

Research from CMU and MIT curricula shows:

“Teaching SQL and Python together in the first course creates immediate transferable skills and prevents tool siloing.”

MSBAi Status: BADM 554 does this well – SQL fundamentals + Python (pandas, sqlalchemy) in weeks 1-6.
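
As an illustration of that pairing (the table and data below are invented; SQLite keeps the sketch self-contained), the same question can be answered in SQL and in pandas side by side:

```python
# Illustrative sketch of the SQL-plus-Python pattern: one aggregation done
# both in SQL and in pandas. Table and data are invented for the example.

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///:memory:")

orders = pd.DataFrame({
    "region": ["East", "East", "West", "West"],
    "revenue": [100, 150, 200, 50],
})
orders.to_sql("orders", engine, index=False)

# SQL version: aggregate inside the database
sql_result = pd.read_sql(
    "SELECT region, SUM(revenue) AS total FROM orders GROUP BY region",
    engine,
)

# pandas version: the same aggregation in Python
pandas_result = (
    orders.groupby("region", as_index=False)["revenue"].sum()
    .rename(columns={"revenue": "total"})
)

print(sql_result)     # East 250, West 250
print(pandas_result)  # identical totals: the skills transfer rather than silo
```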

Principle 2: Statistics Before ML

The optimal sequence is:

Data Manipulation -> Descriptive Stats -> Visualization -> Regression -> Classification -> Unsupervised

MSBAi Status: Well-designed:

Principle 3: Visualization Early

Tufte-informed curricula place visualization early (within first 3-4 weeks) because:

MSBAi Status: BDI 513 begins Week 5 alongside BADM 554, introducing visualization once data foundations are established.

Principle 4: Spiral Curriculum (Bruner)

Topics should be revisited at increasing depth across courses:

| Topic | First Exposure | Deeper Treatment | Advanced Application |
|---|---|---|---|
| SQL | 554 (wks 1-3) | 558 (Spark SQL) | 576 (feature engineering) |
| Regression | 550 (wks 3-4) | 557 (business cases) | 576 (regularization) |
| Visualization | 513 (wks 1-4) | 557 (dashboards) | 576 (model interpretation) |
| Classification | 550 (wks 5-6) | 557 (BI decisions) | 576 (ensemble methods) |
| Clustering | 557 (wk 6) | 576 (advanced) | Capstone (application) |

MSBAi Status: The curriculum naturally implements spiral learning through cross-course convergence.

Principle 5: 8-Week Compression Strategy

Research on compressed course formats suggests:

| 16-Week Element | 8-Week Adaptation | Risk Mitigation |
|---|---|---|
| 2 midterms + final | 3 progressive projects | Continuous feedback |
| Weekly problem sets | Every-other-week labs | AI-assisted practice |
| 50-min lectures | 15-20 min videos + studio | Active learning focus |
| Office hours (random) | Scheduled studio sessions | Guaranteed access |

MSBAi Status: The design follows these principles. Project-based assessment naturally fits compression.

Sequencing Recommendations

Minor Adjustment 1: Earlier Cloud Exposure

Currently, cloud infrastructure (558) comes later in the sequence. Consider:

Impact: Smoother transition; students comfortable with cloud before deep-dive.

Minor Adjustment 2: Earlier Text/NLP

Currently, text analysis appears only in BADM 576. Consider:

Impact: Students see NLP applications in business context earlier.

No Changes Needed: ML Prerequisites

The current prerequisite chain is optimal:

554 (SQL/Python) -> 550 (Stats/ML basics) -> 558 (Infrastructure) -> 576 (Advanced ML)

This follows the “data engineering -> analysis -> infrastructure -> science” progression used by MIT and Berkeley.

3.3 Curriculum Coherence: Strengths & Gaps

Strengths

| Dimension | Assessment | Evidence |
|---|---|---|
| AI-First Integration | Strong | AI tools in every course; dedicated GenAI elective |
| Python/Jupyter Backbone | Excellent | Universal across all courses |
| Project-Based Learning | Excellent | 2-3 major projects per course, no exams |
| L-C-E Progression | Well-implemented | Clear literacy -> competency -> expertise across program |
| Cross-Course Convergence | Thoughtful | Visualization, regression, classification clusters |
| Studio Sessions | Differentiating | Live project-focused sessions rare in competitors |

Gaps & Improvement Opportunities

Gap 1: AI Attribution Requirements (CRITICAL)

Issue: No explicit requirement to document AI tool usage in projects.

Risk: Students may over-rely on AI without developing independent skills; faculty can’t assess true competency.

Recommendation: Add to all project rubrics:

AI ATTRIBUTION REQUIREMENT (5% of project grade)
- Document all AI tools used (ChatGPT, Claude, Copilot, etc.)
- For each AI use, describe: prompt given, output received, how you modified/validated
- Include "AI Contribution Log" as appendix to all major projects
- Failure to attribute is an academic integrity violation
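
Enforcement can be partially automated. The sketch below is a hypothetical grader-side check (the file layout and section title are assumptions, not an official MSBAi tool) that a submission actually includes the log:

```python
# Hypothetical helper: verify a markdown submission contains the required
# "AI Contribution Log" appendix. Section title and file layout are
# assumptions for illustration.

import sys
from pathlib import Path

REQUIRED_SECTION = "AI Contribution Log"

def has_attribution_log(report_path: str) -> bool:
    """True if the report mentions the required appendix heading."""
    text = Path(report_path).read_text(encoding="utf-8")
    return REQUIRED_SECTION.lower() in text.lower()

if __name__ == "__main__":
    path = sys.argv[1]  # e.g. python check_attribution.py report.md
    status = "OK" if has_attribution_log(path) else "MISSING"
    print(f"{status}: {path} ({REQUIRED_SECTION})")
```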

Gap 2: Process Documentation (IMPORTANT)

Issue: Rubrics focus primarily on deliverables, not learning process.

Risk: Students submit polished AI-generated work without demonstrating understanding.

Recommendation: Add “Methodology & Process” rubric dimension (10-15% weight):

| Criterion | Excellent | Proficient | Developing |
|---|---|---|---|
| Approach Documentation | Clear explanation of methodology choices | Describes main steps | Minimal process description |
| Iteration Evidence | Shows multiple attempts, refinements | Some iteration visible | Single-pass submission |
| AI Usage Transparency | Detailed AI contribution log | Basic AI attribution | Missing or vague |
| Self-Reflection | Insightful analysis of what worked/didn’t | Some reflection | No reflection |

Gap 3: Oral Defense Component (IMPORTANT)

Issue: Limited live assessment beyond studio participation.

Recommendation: Add oral defense to major projects:

| Project Type | Oral Component | Format |
|---|---|---|
| Course Projects (1-2) | 15% of grade | 8-10 min video + 5 min live Q&A |
| Capstone | 40% of grade | 20 min presentation + 15 min defense |

Gap 4: Ethics Integration (MINOR)

Issue: Responsible AI mentioned but not consistently embedded.

Recommendation: Add ethics checkpoint to each course:

Each course includes 1 case study or reflection on ethical dimensions (Week 7 or 8).

Gap 5: Peer Learning Structure (ENHANCEMENT)

Issue: Peer review mentioned but not structured.

Recommendation: Formalize peer learning:

| Component | Implementation | Weight |
|---|---|---|
| Code Review | Each student reviews 2 peer projects per course | 5% of grade |
| Study Pods | Assign 4-person study groups in Week 1 | Encouraged, not graded |
| Peer Feedback on Presentations | Structured rubric during studio sessions | Formative only |

3.4 Competitive Positioning Assessment

Market Position Analysis

| Factor | MSBAi Position | Competitor Benchmark | Advantage |
|---|---|---|---|
| AI Integration | Every course | MIT: High; others: Moderate | MSBAi leads |
| Price Point | 20% below peer average | UT Austin ~$58K, UCLA ~$67K | Affordability |
| Modular Format | 8-week courses | Most: 15-16 weeks | Flexible pacing |
| Live Components | Weekly studio sessions | Most async-only | Engagement |
| Project Focus | No exams, 100% projects | Most have exams | Authentic assessment |
| Python Backbone | Universal | Most mixed (R + Python) | Career-ready |
| Cloud Integration | AWS throughout | Most optional | Industry-relevant |

Recommendations for Enhanced Competitiveness

  1. “AI-First” Branding: Emphasize every graduate has documented AI competencies
  2. Process Portfolio: Graduates show not just projects but learning journey
  3. Industry Certification Alignment: Map courses to AWS, Power BI, DataCamp certifications
  4. Employer Advisory Board: Regular input on curriculum relevance

Part 4: Implementation Recommendations

Priority 1: Assessment Framework Updates (Critical)

Priority 2: Rubric Enhancements (Important)

Priority 3: Content Enhancements (Enhancement)

Priority 4: Documentation (Ongoing)

Risk Mitigation

Risk 1: AI Over-Reliance

Risk 2: Assessment Gaming

Risk 3: Student Resistance to Process Documentation

Risk 4: Faculty Capacity for Oral Defenses

Conclusion

The MSBAi curriculum is well-designed for an AI-first world with strong foundations in:

Critical enhancements needed:

  1. AI Attribution Requirements – Add to all rubrics
  2. Process Documentation – Evaluate how students work, not just outputs
  3. Oral Defense Component – Verify understanding, especially for capstone

With these additions, MSBAi will be:


Appendices

Appendix A: AI Attribution Log Template

## AI Contribution Log

### Project: [Name]
### Date: [Date]
### Student: [Name]

| Date | AI Tool | Task | Prompt Summary | Output Summary | How I Modified/Validated |
|------|---------|------|----------------|----------------|--------------------------|
| | | | | | |
| | | | | | |

### Reflection on AI Use
- What tasks did AI help with most effectively?
- Where did AI outputs require significant modification?
- What would I do differently next time?

Appendix B: AIAS Level Reference

| Level | AI Permitted | Example Assignment |
|---|---|---|
| 0 | None | Certification quiz |
| 1 | Ideation only | Brainstorm features (document what AI suggested) |
| 2 | With attribution | Standard project work |
| 3 | As collaborator | Advanced analysis with AI partnership |
| 4 | As subject | GenAI course projects |

Appendix C: Oral Defense Rubric

| Criterion | Excellent (A) | Proficient (B) | Developing (C) |
|---|---|---|---|
| Clarity of Explanation | Explains concepts clearly to non-expert | Clear with minor gaps | Confusing or unclear |
| Technical Depth | Demonstrates deep understanding | Shows solid understanding | Surface-level knowledge |
| Response to Questions | Handles unexpected questions confidently | Answers most questions adequately | Struggles with questions |
| Methodology Justification | Explains why decisions were made | Describes what was done | Cannot explain choices |
| AI Usage Awareness | Articulates when/how AI helped vs. didn’t | Acknowledges AI use | Unclear on AI role |

Appendix D: Cross-Course Ethics Integration

| Course | Ethics Focus | Case Study Topic |
|---|---|---|
| 554 | Data Privacy | Cambridge Analytica, GDPR compliance |
| 513 | Visualization Integrity | Misleading COVID charts, election misinformation |
| 550 | Algorithmic Fairness | Biased lending algorithms, credit scoring |
| 557 | Surveillance & BI | Employee monitoring, predictive policing |
| 558 | Cloud Security | Data breaches, sovereignty, vendor lock-in |
| 576 | Model Accountability | Healthcare AI failures, autonomous vehicles |

Sources

Academic Research

University Best Practices

Program Examples

Industry and Policy


Document created for MSBAi Program Development, Gies College of Business
Last Updated: February 2026
Informed by: Assessment research and curriculum review