MSBAi Assessment Strategy
Program-level details: See program/CURRICULUM.md
Version: 1.0 | Last Updated: February 2026
Purpose: Comprehensive assessment strategy synthesizing research best practices, AI-aware frameworks, and curriculum coherence analysis for the MSBAi online program
Executive Summary
This document merges assessment research, online program best practices, and curriculum review findings into a unified assessment strategy for MSBAi. It addresses:
- Why traditional exams fail in AI-enabled online environments
- What works instead (project-based, portfolio, competency, oral defense)
- How to assess when students can use AI (FACT framework, AIAS levels, process-product model)
- Curriculum coherence (scaffolding analysis, competitive positioning)
- Implementation roadmap with priorities, owners, and deadlines
Key Findings:
- MSBAi’s project-based approach aligns with research best practices
- Critical gap: Need explicit AI attribution and process documentation requirements
- Enhancement opportunity: Add oral defense components to major projects and capstone
- Current course sequencing (554 -> 513 -> 550 -> 557 -> 558 -> 576) is pedagogically sound
- MSBAi’s AI-first approach positions it well against competitor programs
Part 1: Research & Best Practices
1.1 Why Traditional Exams Are at Risk
The AI Cheating Challenge
Traditional written exams and take-home assignments face unprecedented challenges in online environments:
Detection Impossibility
- Experts agree it is impossible to be completely certain whether a student used AI – the technology is too sophisticated (Inside Higher Ed)
- Research shows markers (graders) cannot reliably distinguish AI-assisted submissions from unassisted ones, regardless of how authentic the assessment is (British Journal of Educational Technology)
- AI detection tools produce high false positive/negative rates and disproportionately harm non-native English speakers
Emerging Cheating Technologies
- Tools like Cluely and Parakeet AI provide real-time AI assistance invisible to proctoring
- Meta Ray-Ban glasses with built-in AI can communicate silently via in-lens display (Washington Post)
- Screen-sharing workarounds defeat lockdown browsers
Surveillance Limitations
- Enforcement tools prioritize surveillance over trust and reward compliance over learning
- Create inequitable outcomes for diverse student populations
- Damage instructor-student relationships essential for online learning success
Key Insight for MSBAi
“The seismic shift brought about by generative AI is challenging the fundamental pillars of written coursework assessment in higher education.” (Scientect)
1.2 Project-Based Assessment Best Practices
Leading Program Examples
University of Chicago - MS Applied Data Science
- Teams of 4 students work with real business sponsors
- Guided by instructor and subject matter expert
- Clear expectations from sponsor + learning objectives from instructors
- (UChicago DSI)
Cornell Johnson MSBA
- Coursework shaped around real datasets, case simulations, and team-based projects
- Designed for immediate workplace application
- (Cornell Johnson)
USC Marshall MSBA
- Case competitions and cross-functional teams
- Projects with industry professional mentorship
- Portfolio building from day one
- (USC Marshall)
Columbia MSBA
- Capstone provides intense consulting engagement with clients
- Real-world business problems using real data sets
- (Columbia DSI)
Design Principles for Project-Based Assessment
- Industry Partnership: Connect students with real organizations and real problems
- Team Composition: Groups of 3-4 students with faculty mentor supervision
- Dual Accountability: Sponsor expectations + academic learning objectives
- Portfolio Integration: Projects become career assets, not just grades
- Presentation Component: Final delivery includes live presentation/defense
Recommended Project Assessment Components
| Component | Weight | Purpose |
|---|---|---|
| Process Documentation | 20% | Evidence of journey, not just outcome |
| Technical Deliverables | 30% | Code, analysis, models |
| Written Report | 20% | Communication and synthesis |
| Oral Presentation/Defense | 30% | Authenticity verification |
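As a concrete illustration of how these weights combine, the sketch below computes a weighted project grade from per-component scores. The weights mirror the table above; the function and example scores are hypothetical and not part of any existing MSBAi tooling.

```python
# Illustrative sketch: weighted project grade using the component weights above.
# Component names and weights mirror the table; everything else is hypothetical.

PROJECT_WEIGHTS = {
    "process_documentation": 0.20,
    "technical_deliverables": 0.30,
    "written_report": 0.20,
    "oral_defense": 0.30,
}

def weighted_project_grade(scores: dict[str, float]) -> float:
    """Combine per-component scores (0-100) using the recommended weights."""
    missing = set(PROJECT_WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"Missing scores for: {sorted(missing)}")
    return sum(PROJECT_WEIGHTS[c] * scores[c] for c in PROJECT_WEIGHTS)

# Example: a student strong on deliverables but weaker on the oral defense.
print(weighted_project_grade({
    "process_documentation": 90,
    "technical_deliverables": 85,
    "written_report": 88,
    "oral_defense": 70,
}))  # 82.1
```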
1.3 Portfolio-Based Assessment Models
Framework: Competency-Based Portfolios
Portfolio assessment works well in graduate programs because students can demonstrate mastery through accumulated evidence rather than single-point assessments (PMC).
Key Components:
- Artifact Collection: Students curate work samples demonstrating each competency
- Reflection: Written analysis connecting artifacts to learning outcomes
- Progression Evidence: Show growth over time, not just final state
- External Validation: Industry feedback on portfolio quality
Cleveland Clinic Medical School Model
- No letter grades or class ranks
- Students receive qualitative assessments
- Nine broad-based competencies tracked through portfolio
- Feedback from faculty and peers used as evidence
- (PubMed)
MPA Program Model (NASPAA)
- Competency-based portfolios serve both pedagogical and assessment functions
- Students demonstrate core public administration competencies
- Artifacts aligned to program learning outcomes
- (NASPAA)
Portfolio Structure for MSBAi
| Category | Artifacts | Assessment Focus |
|---|---|---|
| Technical Skills | Code repos, analysis notebooks, dashboards | Competency demonstration |
| Communication | Reports, presentations, visualizations | Professional communication |
| Problem-Solving | Case analyses, project decisions | Critical thinking process |
| Collaboration | Team reflections, peer feedback | Professional behavior |
| Growth | Version comparisons, revision history | Learning journey |
1.4 Competency-Based and Mastery-Based Assessment
Core Principles
Competency-based education (CBE) focuses on demonstrating skill mastery rather than seat time (VerifyEd).
Key Features:
- Students progress when they demonstrate mastery
- Assessment tied to specific, observable competencies
- Multiple opportunities to demonstrate mastery
- Transparent criteria and rubrics
Programs Leading in CBE
Capella University FlexPath
- Self-paced progression through bachelor’s, master’s, and doctoral programs
- Programs in business, healthcare, IT, and psychology
- Mastery of specific skills and knowledge
- (MyDegreeGuide)
South College CBE
- Competency units earned through exams, portfolios, or capstone projects
- Designed for self-starters and working professionals
- Students demonstrate mastery of required content at their own pace
- (South College)
Implementation for MSBAi
Competency Categories:
- Data Management (Database, ETL, Data Quality)
- Statistical Analysis (Inference, Regression, Time Series)
- Machine Learning (Classification, Clustering, Prediction)
- Visualization & Communication (Dashboards, Reports, Presentations)
- Business Application (Problem Framing, ROI Analysis, Recommendations)
- Professional Practice (Ethics, Collaboration, Project Management)
Mastery Levels:
- Level 1: Foundational (can execute with guidance)
- Level 2: Proficient (can execute independently)
- Level 3: Advanced (can teach and innovate)
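One minimal way to operationalize these categories and mastery levels is a per-student competency record. The sketch below is an assumed data structure for illustration only: the category names and levels come from the lists above, while the class and method names are hypothetical.

```python
# Hypothetical competency-tracking sketch using the categories and levels above.
from dataclasses import dataclass, field
from enum import IntEnum

class Mastery(IntEnum):
    FOUNDATIONAL = 1   # can execute with guidance
    PROFICIENT = 2     # can execute independently
    ADVANCED = 3       # can teach and innovate

COMPETENCY_CATEGORIES = [
    "Data Management",
    "Statistical Analysis",
    "Machine Learning",
    "Visualization & Communication",
    "Business Application",
    "Professional Practice",
]

@dataclass
class CompetencyRecord:
    student: str
    levels: dict[str, Mastery] = field(default_factory=dict)

    def record(self, category: str, level: Mastery) -> None:
        if category not in COMPETENCY_CATEGORIES:
            raise ValueError(f"Unknown competency: {category}")
        # Mastery only moves upward; repeated attempts keep the best evidence.
        self.levels[category] = max(self.levels.get(category, Mastery.FOUNDATIONAL), level)

    def gaps(self, target: Mastery = Mastery.PROFICIENT) -> list[str]:
        """Categories where the student has not yet demonstrated the target level."""
        return [c for c in COMPETENCY_CATEGORIES if self.levels.get(c, 0) < target]

rec = CompetencyRecord("student-001")
rec.record("Data Management", Mastery.PROFICIENT)
print(rec.gaps())  # every category except Data Management
```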
1.5 Assessing Learning When Students Can Use AI Tools
The FACT Assessment Framework
From Frontiers in Education research on environmental data science education (Frontiers):
| Component | AI Usage | Purpose |
|---|---|---|
| Fundamental Skills | No AI | Build foundation before advanced concepts |
| Applied Projects | AI-assisted | Real-world problem-solving with AI tools |
| Conceptual Understanding | No AI | Paper-based exam for independent comprehension |
| Thinking (Critical) | AI + Human | Assess and integrate AI outputs |
Pedagogical Rationale:
- Addresses “cognitive paradox of AI in education”
- Sequences instruction from foundational (no AI) to AI-assisted application
- Returns to independent validation before completion
- Aligns with cognitive load theory
AI Assessment Scale (AIAS)
Framework for defining permitted AI integration levels in each assignment:
| Level | AI Usage | Example |
|---|---|---|
| Level 0 | No AI permitted | In-class exams, oral defenses |
| Level 1 | AI for brainstorming only | Idea generation, not content |
| Level 2 | AI for drafting with human revision | First draft assistance |
| Level 3 | AI as collaborative tool | Full integration with disclosure |
| Level 4 | AI as subject of analysis | Critique and compare AI outputs |
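To make AIAS levels explicit and machine-readable in syllabi and rubrics, each assignment could carry a level tag. The enum below is a hypothetical encoding of the table above, not an existing MSBAi schema.

```python
# Hypothetical encoding of the AIAS levels above for tagging assignments.
from enum import IntEnum

class AIAS(IntEnum):
    NO_AI = 0                # in-class exams, oral defenses
    BRAINSTORM_ONLY = 1      # idea generation, not content
    DRAFT_WITH_REVISION = 2  # first-draft assistance, human revision required
    COLLABORATIVE = 3        # full integration with disclosure
    AI_AS_SUBJECT = 4        # critique and compare AI outputs

# Example: tagging a course's assignments so expectations are explicit up front.
assignments = {
    "Week 2 SQL lab": AIAS.NO_AI,
    "Mid-course dashboard project": AIAS.DRAFT_WITH_REVISION,
    "Final team project": AIAS.COLLABORATIVE,
}

for name, level in assignments.items():
    print(f"{name}: AIAS Level {int(level)} ({level.name})")
```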
Process-Product Assessment Model
From faculty training workshops (Frontiers):
Evaluate both:
- Final Product: traditional deliverable quality
- Process Documentation:
  - Prompt development and refinement
  - Human-AI interaction quality
  - Critical evaluation of AI outputs
  - Revision decisions and rationale
AI Declaration Requirements
Require students to document:
- Which AI tools were used
- What prompts were employed
- What limitations were encountered
- How human judgment modified AI outputs
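These declaration items map directly onto the AI Contribution Log template in Appendix A. The sketch below shows one hypothetical structured form such a declaration could take; the field names are illustrative only.

```python
# Hypothetical structured form of the AI declaration; fields mirror the list above
# and the Appendix A log template. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class AIDeclarationEntry:
    tool: str                 # which AI tool was used
    prompt_summary: str       # what prompts were employed
    limitations: str          # what limitations were encountered
    human_modification: str   # how human judgment modified the output

entry = AIDeclarationEntry(
    tool="Claude",
    prompt_summary="Asked for a first-pass pandas groupby for monthly revenue",
    limitations="Suggested column names that did not exist in the dataset",
    human_modification="Corrected column names and added a missing-data check",
)
print(entry)
```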
1.6 Peer Review and Collaborative Assessment
Research-Backed Benefits
MIT research shows learners who provided more peer feedback received better grades (MIT Open Learning).
Students with high engagement in peer assessment showed:
- Superior instructional design abilities
- More sophisticated cognitive structures
- More positive emotional/behavioral engagement
- Higher-quality cognitive engagement
Four Pillars Framework
From higher education research (Springer):
| Pillar | Focus |
|---|---|
| Veracity | Assessment design integrity |
| Validity | Implementation accuracy |
| Volume | Sufficient feedback quantity |
| Literacy | Student skill in giving/receiving feedback |
Implementation Best Practices
Transparent Rubrics
- Provide detailed criteria for peer evaluation
- Train students to apply rubric consistently
- Include workshops or interactive activities
Technology Platforms
- Canvas Workshop activity for automated distribution
- Anonymous evaluations to reduce bias
- Moodle, Google Drive, Microsoft Teams integration
Structured Reflection
- Post-feedback discussion boards
- Virtual reflection sessions
- Clarifying questions and insight sharing
Accessibility Considerations
- Flexible alternatives for students with limited technology
- Technical support sessions
- Collaboration with IT for device lending
Peer Assessment Types for MSBAi
| Type | Description | When to Use |
|---|---|---|
| Code Review | Evaluate peer code quality and documentation | Technical courses |
| Analysis Critique | Assess methodology and conclusions | Statistics/ML courses |
| Presentation Feedback | Evaluate communication effectiveness | Capstone, storytelling |
| Team Contribution | Rate collaboration and reliability | Group projects |
1.7 Authentic Assessment Mirroring Real Workplace Tasks
Duke CTL Framework
Six concrete approaches from Duke’s Center for Teaching and Learning (Duke CTL):
- Performance Tasks and Projects
  - Build prototypes, policy memos, public-facing deliverables
  - Real stakeholder audiences when possible
- Case Studies and Simulations
  - Context-rich problems with incomplete information
  - Require justification of decisions and trade-offs
- Guided Investigations
  - Deep exploration with presentations or extended writing
  - Student-directed inquiry within a framework
- Oral Defenses
  - Defend choices, trade-offs, and revisions live
  - Cannot be AI-generated in real time
- Process-Centered Work
  - Value drafts, logs, notebooks alongside final products
  - Document the decision-making journey
- Digital Portfolios
  - Cumulative evidence of growth
  - Annotated with standards-aligned rubrics
AI-Resistant Authentic Assessment Strategies
Anchor in Local Context:
- Local data that AI cannot fabricate
- Lived experiences unique to student
- Recent class discussions and debates
- Current events and organization-specific problems
Emphasize Process Over Product:
- Weight iterative steps: drafting, feedback, revision
- Meaningful credit for process documentation
- Reflection on learning journey
Social-Experiential Learning:
- Performative rather than output-based
- Synchronous interaction components
- Collaborative work that requires real-time coordination
Interview/Oral Exam Implementation
From University of Dayton research (U Dayton):
Time Efficiency:
- 30-student class: ~300 minutes total (vs. 450 for grading papers)
- Spread across multiple days using scheduled office hours
Best Practices:
- Clear communication of expectations
- Example videos showing what to expect
- Office hours practice sessions
- Challenging but fair questions aligned to learning objectives
- Question variation across students
- Standardized scoring rubric
Proposed Assessment Weighting Model:
- Written component: 30% (research, structure, communication)
- Oral defense: 70% (process ownership, authentic understanding)
Part 2: Assessment Framework for MSBAi
2.1 Assessment Philosophy
Principle 1: Transparency Over Surveillance
- Clear AI usage policies for each assignment
- Trust-based approach with accountability mechanisms
- Focus on learning outcomes, not compliance
Principle 2: Process Over Product
- Document learning journey, not just final deliverables
- Weight revision and iteration
- Reflection as assessment component
Principle 3: Authentic Application
- Real data, real problems, real stakeholders
- Industry partnerships for capstone projects
- Portfolio building throughout program
Principle 4: Multiple Assessment Modes
- Combine written, oral, practical, collaborative
- No single high-stakes assessment determines outcome
- Competency demonstrated through varied evidence
2.2 Course-Level Assessment Design
For Technical Courses (Database, BI, ML):
| Component | Weight | AI Policy |
|---|---|---|
| Labs/Exercises | 20% | No AI (foundation building) |
| Projects | 40% | AI-assisted with disclosure |
| Technical Quiz (oral or proctored) | 20% | No AI |
| Peer Code Review | 10% | N/A |
| Portfolio Documentation | 10% | AI for editing only |
For Communication Courses (Data Storytelling):
| Component | Weight | AI Policy |
|---|---|---|
| Presentations (recorded + live) | 30% | AI for preparation, not delivery |
| Written Analysis | 25% | AI-assisted with disclosure |
| Peer Feedback | 15% | N/A |
| Revision Portfolio | 20% | Show before/after with reflection |
| Final Oral Defense | 10% | No AI |
For Capstone:
| Component | Weight | AI Policy |
|---|---|---|
| Sponsor Deliverables | 30% | Industry-standard (AI permitted) |
| Process Documentation | 20% | AI for editing only |
| Final Presentation | 25% | No AI during delivery |
| Oral Defense/Q&A | 15% | No AI |
| Peer Evaluation | 10% | N/A |
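To keep weightings consistent across syllabi, the component tables above could live in a single configuration that is validated before publication. The sketch below is an assumed format (shown for the capstone only), with a simple check that the weights sum to 100%.

```python
# Assumed (illustrative) assessment configuration; weights and AI policies are
# taken from the capstone table above.

CAPSTONE_ASSESSMENT = [
    # (component, weight %, AI policy)
    ("Sponsor Deliverables", 30, "Industry-standard (AI permitted)"),
    ("Process Documentation", 20, "AI for editing only"),
    ("Final Presentation", 25, "No AI during delivery"),
    ("Oral Defense/Q&A", 15, "No AI"),
    ("Peer Evaluation", 10, "N/A"),
]

def validate_weights(components: list[tuple[str, int, str]]) -> None:
    total = sum(weight for _, weight, _ in components)
    if total != 100:
        raise ValueError(f"Component weights sum to {total}%, expected 100%")

validate_weights(CAPSTONE_ASSESSMENT)  # passes: 30 + 20 + 25 + 15 + 10 = 100
```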
2.3 Program-Level Portfolio Structure
For semester-by-semester course assignments, see program/CURRICULUM.md. The portfolio artifacts below align to the program’s progressive competency development.
| Program Stage | Artifacts | Competencies Demonstrated |
|---|---|---|
| Foundation courses | SQL projects, data visualization, reflection | Database, visualization basics |
| Analytics courses | ML models, business case analyses | Predictive modeling, business application |
| Advanced courses | Team project deliverables, peer reviews | Collaboration, communication |
| Capstone | Capstone + comprehensive reflection | Integration, professional readiness |
2.4 Synchronous Assessment Components
Even in async-first programs, include synchronous touchpoints:
- Monthly Webinar Discussions (participation tracked)
- Mid-term Check-ins (15-min instructor conversation)
- Project Presentations (live via Zoom, recorded backup)
- Capstone Defense (mandatory synchronous, panel format)
2.5 Technology Stack for Assessment
| Tool | Purpose |
|---|---|
| Canvas | Assignment submission, rubrics, peer review |
| GitHub | Code portfolios, version history |
| Zoom | Oral defenses, presentations |
| Gradescope | Code autograding with manual review |
| Peergrade | Structured peer assessment |
| Portfolio Platform (custom or Portfolium) | Cumulative evidence |
2.6 Online Assessment: The Challenge
Traditional exams in online asynchronous programs face three critical vulnerabilities:
- AI Completion Risk: Students can use AI to complete traditional assignments without learning
- Identity Verification: Proctoring is expensive, intrusive, and often circumventable
- Authenticity Gap: Exam performance doesn’t demonstrate workplace-ready skills
MSBAi’s Current Approach: Project-based assessment with no traditional exams – this is well-aligned with research.
2.7 FACT Framework Applied to MSBAi
| Component | Description | Weight (Recommended) | MSBAi Implementation |
|---|---|---|---|
| Fundamental | Basic knowledge demonstration | 10-15% | Quizzes, DataCamp modules |
| Applied | Hands-on skill application | 40-50% | Projects (70-90% in MSBAi) |
| Conceptual | Understanding “why” and “when” | 15-20% | Case write-ups, reflections |
| Thinking | Novel problem-solving | 20-25% | Open-ended project components |
Assessment: MSBAi over-weights Applied (good for skill-building) but may under-weight Conceptual. Recommend adding “design rationale” sections to project rubrics.
2.8 AIAS Applied to MSBAi
| Level | AI Permitted | When to Use | MSBAi Mapping |
|---|---|---|---|
| 0 | No AI | Testing foundational knowledge | Proctored certification exams (if any) |
| 1 | AI for ideation only | Early exploration | Week 1-2 of projects |
| 2 | AI with attribution | Standard projects | Most project work |
| 3 | AI as collaborator | Advanced applications | Final projects, capstone |
| 4 | AI as subject of analysis | GenAI course | Generative AI for Analytics |
Recommendation: Explicitly label each assignment with its AIAS level. This sets clear expectations and teaches appropriate tool use.
2.9 Process-Product Model Applied to MSBAi
| Dimension | What to Evaluate | Implementation |
|---|---|---|
| Product | Final deliverable quality | Current rubrics |
| Process | How the student approached the problem | Require “methodology log” |
| Iteration | How the student refined their work | Version history on GitHub |
| Reflection | What the student learned | Post-project reflection essays |
Critical Addition for MSBAi: Require a “Process Documentation” section in every major project:
- What approaches were tried?
- What AI tools were used and how?
- What was learned from failures?
- How was AI output validated?
2.10 Oral Defense Component
Research strongly supports oral components for online programs:
“Oral defenses remain the most reliable assessment method for verifying understanding in AI-enabled environments.”
Implementation for MSBAi (ACTIVE — implemented in all course syllabi):
| Course Component | Oral Weight | Format |
|---|---|---|
| Studio Sessions | 10% (participation) | Live Q&A during sessions |
| 8-Week Course Projects | 20-30% of project grade | 10-15 min team presentation + 5 min Q&A |
| 4-Week Course Projects | Included in final project | 10-min individual presentation + Q&A |
| Capstone | 40-50% of capstone grade | 20 min panel presentation + 10 min defense Q&A |
Weighting Recommendation: Shift to 60% written/code deliverables + 40% oral demonstration for major projects. This:
- Verifies student understanding
- Develops presentation skills (employer-valued)
- Reduces AI over-reliance risk
2.11 Team Assessment Guidelines
Implementation for MSBAi (ACTIVE — team projects in all 8-week courses):
Team Project Policy:
- Every 8-week course includes at least one team project (typically the final project)
- Teams of 3-4 students, assigned by instructor to balance skill sets
- 4-week courses use individual projects only (insufficient time for team formation)
Individual Accountability Within Teams:
- Peer evaluation required at project completion (5-10% of team project grade)
- Each team member must be able to answer questions on any part of the project during oral defense
- Git commit history reviewed to assess individual contributions
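For the Git commit review noted above, a lightweight per-author summary is usually enough to start a conversation. The sketch below shells out to `git shortlog -s -n`, which prints commit counts per author; it is an illustrative aid (commit counts are a weak proxy for contribution), not an official review procedure, and the repository path is hypothetical.

```python
# Illustrative sketch: summarize per-author commit counts for a team repository.
# `git shortlog -s -n HEAD` prints "<count>\t<author>" per line, most commits first.
import subprocess

def commit_counts(repo_path: str) -> dict[str, int]:
    out = subprocess.run(
        ["git", "-C", repo_path, "shortlog", "-s", "-n", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = {}
    for line in out.strip().splitlines():
        count, author = line.strip().split("\t", 1)
        counts[author] = int(count)
    return counts

# Example usage (path is hypothetical):
# print(commit_counts("/path/to/team-project-repo"))
```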
Peer Evaluation Framework:
- Anonymous peer evaluation using standardized rubric
- Dimensions: contribution quality, reliability, communication, collaboration
- Instructor reviews peer evaluations for outliers and adjusts individual grades if needed
- Students trained on giving constructive feedback in first studio session
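The instructor’s outlier review can be supported by a simple screen that flags peer ratings far from the team mean. The sketch below assumes ratings on a 1-5 scale and uses a z-score threshold; the threshold and example data are hypothetical.

```python
# Illustrative outlier screen for peer evaluations (1-5 scale assumed).
# Flags ratings more than `threshold` standard deviations from the mean,
# so the instructor can review them before adjusting individual grades.
from statistics import mean, pstdev

def flag_outlier_ratings(ratings: dict[str, float], threshold: float = 1.5) -> list[str]:
    values = list(ratings.values())
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return []  # everyone rated identically; nothing to flag
    return [rater for rater, r in ratings.items() if abs(r - mu) / sigma > threshold]

# Example: hypothetical peer ratings received by one student.
ratings_of_student_a = {"rater_1": 4.5, "rater_2": 4.0, "rater_3": 4.5, "rater_4": 1.0}
print(flag_outlier_ratings(ratings_of_student_a))  # ['rater_4']
```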
Note: AIAS level annotations per assignment are a future enhancement — flag for faculty development workshops before Fall 2026 launch, but do not implement in this pass.
Part 3: Curriculum Coherence Findings
3.1 AI-First Curriculum Trends Analysis
Current Research Frameworks
HCAIF Framework (Human-Centered AI in Finance/Education)
Modern AI-first curricula follow a five-phase integration model:
| Phase | Description | MSBAi Alignment |
|---|---|---|
| Preparation | Students engage with AI before class sessions | Async-first design enables this |
| Personalized Learning | AI adapts to individual learner needs | Implicit; could be more explicit |
| Classroom Engagement | Active AI-assisted problem-solving | Studio sessions + project work |
| Summative Assessment | AI-aware evaluation methods | Needs attribution requirements |
| Continuous Monitoring | Track skill development over time | Progressive project complexity |
Recommendation: Make personalized learning explicit by encouraging students to use AI tutoring for weak areas identified in formative assessments.
AI Jockey Model (Yale, 2024-2025)
The “AI Jockey” paradigm treats AI as a tool to be directed skillfully, not a replacement for thinking:
“Students become AI jockeys – steering, evaluating, and refining AI outputs rather than passively consuming them.”
MSBAi Alignment: Strong – the curriculum already emphasizes AI as “productivity accelerator” and requires students to evaluate AI outputs critically. The Generative AI for Analytics elective specifically teaches this skill.
SAIL Framework (Stanford AI Learning)
Stanford’s framework emphasizes three pillars:
- Situational Awareness: Know when AI helps vs. when manual work is better
- Attribution Discipline: Document AI contributions transparently
- Iterative Refinement: Improve AI outputs through prompt engineering
MSBAi Gap: Attribution discipline is mentioned but not enforced. Recommend adding explicit AI attribution requirements to all project rubrics.
70/5 Rule (Industry Research, 2025)
Research suggests:
- 70% of analytics professionals need basic AI literacy (prompt engineering, output evaluation)
- 5% need advanced AI development skills (fine-tuning, RAG systems, custom models)
MSBAi Alignment: The curriculum addresses both tiers: core courses build AI literacy (70% need), while the GenAI elective + capstone develop expertise (5% need).
3.2 Topic Scaffolding & Sequencing Analysis
For the complete course sequence with credit hours and semester assignments, see program/CURRICULUM.md.
Research-Based Sequencing Principles
Principle 1: SQL + Python Together First
Research from CMU and MIT curricula shows:
“Teaching SQL and Python together in the first course creates immediate transferable skills and prevents tool siloing.”
MSBAi Status: BADM 554 does this well – SQL fundamentals + Python (pandas, sqlalchemy) in weeks 1-6.
Principle 2: Statistics Before ML
The optimal sequence is:
Data Manipulation -> Descriptive Stats -> Visualization -> Regression -> Classification -> Unsupervised
MSBAi Status: Well-designed:
- BADM 554: Data manipulation
- BDI 513: Visualization + descriptive exploration
- FIN 550: Regression + classification
- BADM 576: Full ML lifecycle including unsupervised
Principle 3: Visualization Early
Tufte-informed curricula place visualization early (within first 3-4 weeks) because:
- It’s immediately rewarding (students see results)
- It supports exploratory data analysis
- It’s less intimidating than algorithms
MSBAi Status: BDI 513 begins Week 5 alongside BADM 554, introducing visualization once data foundations are established.
Principle 4: Spiral Curriculum (Bruner)
Topics should be revisited at increasing depth across courses:
| Topic | First Exposure | Deeper Treatment | Advanced Application |
|---|---|---|---|
| SQL | 554 (wks 1-3) | 558 (Spark SQL) | 576 (feature engineering) |
| Regression | 550 (wks 3-4) | 557 (business cases) | 576 (regularization) |
| Visualization | 513 (wks 1-4) | 557 (dashboards) | 576 (model interpretation) |
| Classification | 550 (wks 5-6) | 557 (BI decisions) | 576 (ensemble methods) |
| Clustering | 557 (wk 6) | 576 (advanced) | Capstone (application) |
MSBAi Status: The curriculum naturally implements spiral learning through cross-course convergence.
Principle 5: 8-Week Compression Strategy
Research on compressed course formats suggests:
| 16-Week Element | 8-Week Adaptation | Risk Mitigation |
|---|---|---|
| 2 midterms + final | 3 progressive projects | Continuous feedback |
| Weekly problem sets | Every-other-week labs | AI-assisted practice |
| 50-min lectures | 15-20 min videos + studio | Active learning focus |
| Office hours (random) | Scheduled studio sessions | Guaranteed access |
MSBAi Status: The design follows these principles. Project-based assessment naturally fits compression.
Sequencing Recommendations
Minor Adjustment 1: Earlier Cloud Exposure
Currently, cloud infrastructure (558) comes later in the sequence. Consider:
- Adding a “Cloud Foundations” module (2-3 hours) to BADM 554 (Week 7)
- Students set up AWS account, create S3 bucket, run basic cloud SQL
- Reduces cognitive load when full 558 arrives
Impact: Smoother transition; students comfortable with cloud before deep-dive.
Minor Adjustment 2: Earlier Text/NLP
Currently, text analysis appears only in BADM 576. Consider:
- BDI 513 Part 2: Add sentiment analysis of earnings calls using the Claude API
- This aligns with the financial deep-dive project
Impact: Students see NLP applications in business context earlier.
No Changes Needed: ML Prerequisites
The current prerequisite chain is optimal:
554 (SQL/Python) -> 550 (Stats/ML basics) -> 558 (Infrastructure) -> 576 (Advanced ML)
This follows the “data engineering -> analysis -> infrastructure -> science” progression used by MIT and Berkeley.
3.3 Curriculum Coherence: Strengths & Gaps
Strengths
| Dimension | Assessment | Evidence |
|---|---|---|
| AI-First Integration | Strong | AI tools in every course; dedicated GenAI elective |
| Python/Jupyter Backbone | Excellent | Universal across all courses |
| Project-Based Learning | Excellent | 2-3 major projects per course, no exams |
| L-C-E Progression | Well-implemented | Clear literacy -> competency -> expertise across program |
| Cross-Course Convergence | Thoughtful | Visualization, regression, classification clusters |
| Studio Sessions | Differentiating | Live project-focused sessions rare in competitors |
Gaps & Improvement Opportunities
Gap 1: AI Attribution Requirements (CRITICAL)
Issue: No explicit requirement to document AI tool usage in projects.
Risk: Students may over-rely on AI without developing independent skills; faculty can’t assess true competency.
Recommendation: Add to all project rubrics:
AI ATTRIBUTION REQUIREMENT (5% of project grade)
- Document all AI tools used (ChatGPT, Claude, Copilot, etc.)
- For each AI use, describe: prompt given, output received, how you modified/validated
- Include "AI Contribution Log" as appendix to all major projects
- Failure to attribute is academic integrity violation
Gap 2: Process Documentation (IMPORTANT)
Issue: Rubrics focus primarily on deliverables, not learning process.
Risk: Students submit polished AI-generated work without demonstrating understanding.
Recommendation: Add “Methodology & Process” rubric dimension (10-15% weight):
| Criterion | Excellent | Proficient | Developing |
|---|---|---|---|
| Approach Documentation | Clear explanation of methodology choices | Describes main steps | Minimal process description |
| Iteration Evidence | Shows multiple attempts, refinements | Some iteration visible | Single-pass submission |
| AI Usage Transparency | Detailed AI contribution log | Basic AI attribution | Missing or vague |
| Self-Reflection | Insightful analysis of what worked/didn’t | Some reflection | No reflection |
Gap 3: Oral Defense Component (IMPORTANT)
Issue: Limited live assessment beyond studio participation.
Recommendation: Add oral defense to major projects:
| Project Type | Oral Component | Format |
|---|---|---|
| Course Projects (1-2) | 15% of grade | 8-10 min video + 5 min live Q&A |
| Capstone | 40% of grade | 20 min presentation + 15 min defense |
Gap 4: Ethics Integration (MINOR)
Issue: Responsible AI mentioned but not consistently embedded.
Recommendation: Add ethics checkpoint to each course:
- 554: Data privacy in ETL pipelines
- 513: Misleading visualizations, AI-generated misinformation
- 550: Algorithmic bias in financial predictions
- 557: BI ethics, surveillance capitalism
- 558: Cloud security, data sovereignty
- 576: Model fairness, deployment ethics
Each course includes 1 case study or reflection on ethical dimensions (Week 7 or 8).
Gap 5: Peer Learning Structure (ENHANCEMENT)
Issue: Peer review mentioned but not structured.
Recommendation: Formalize peer learning:
| Component | Implementation | Weight |
|---|---|---|
| Code Review | Each student reviews 2 peer projects per course | 5% of grade |
| Study Pods | Assign 4-person study groups in Week 1 | Encouraged, not graded |
| Peer Feedback on Presentations | Structured rubric during studio sessions | Formative only |
3.4 Competitive Positioning Assessment
Market Position Analysis
| Factor | MSBAi Position | Competitor Benchmark | Advantage |
|---|---|---|---|
| AI Integration | Every course | MIT: High, Others: Moderate | MSBAi leads |
| Price Point | 20% below peer avg | UT Austin ~$58K, UCLA ~$67K | Affordability |
| Modular Format | 8-week courses | Most: 15-16-week courses | Flexible pacing |
| Live Components | Studio sessions weekly | Most async-only | Engagement |
| Project Focus | No exams, 100% projects | Most have exams | Authentic assessment |
| Python Backbone | Universal | Most mixed (R+Python) | Career-ready |
| Cloud Integration | AWS throughout | Most optional | Industry-relevant |
Recommendations for Enhanced Competitiveness
- “AI-First” Branding: Emphasize that every graduate has documented AI competencies
- Process Portfolio: Graduates show not just projects but learning journey
- Industry Certification Alignment: Map courses to AWS, Power BI, DataCamp certifications
- Employer Advisory Board: Regular input on curriculum relevance
Part 4: Implementation Recommendations
Priority 1: Assessment Framework Updates (Critical)
- Add AI Attribution requirement to all rubrics
- Add Process Documentation dimension to rubrics
- Define AIAS levels for each assignment
- Design oral defense format for major projects
Priority 2: Rubric Enhancements (Important)
- Revise 5-dimension rubrics to 6 dimensions (add Process)
- Create AI Attribution Log template
- Define oral defense rubric
- Pilot test rubrics with mock projects
Priority 3: Content Enhancements (Enhancement)
- Add Cloud Foundations module to BADM 554
- Add NLP/sentiment analysis to BDI 513 Part 2
- Add ethics case study to each course
- Formalize peer code review process
Priority 4: Documentation (Ongoing)
- AI Attribution Guidelines (student handbook section)
- Oral Defense Preparation Guide (student resource)
- AIAS Level Reference Card (quick reference for faculty/students)
- Ethics Case Study Library (repository for all courses)
Risk Mitigation
Risk 1: AI Over-Reliance
- Oral defense verifies understanding
- Process documentation reveals AI dependency
- AIAS levels set appropriate use boundaries
Risk 2: Assessment Gaming
- Multiple assessment modalities (written, code, oral, peer)
- Progressive project complexity
- Individual capstone with live defense
Risk 3: Student Resistance to Process Documentation
- Explain rationale (career skill: documenting methodology)
- Provide templates and examples
- Grade leniently in first term, increase rigor over time
Risk 4: Faculty Capacity for Oral Defenses
- Limit to major projects (2-3 per course)
- Use studio session time for defenses
- TA support for scheduling and logistics
Conclusion
The MSBAi curriculum is well-designed for an AI-first world with strong foundations in:
- Project-based learning
- AI integration throughout
- Flexible, modular pathways
- Industry-relevant technology stack
Critical enhancements needed:
- AI Attribution Requirements – Add to all rubrics
- Process Documentation – Evaluate how students work, not just outputs
- Oral Defense Component – Verify understanding, especially for capstone
With these additions, MSBAi will be:
- More robust against AI-enabled academic dishonesty
- Better at developing authentic professional skills
- More competitive against peer programs
- Aligned with emerging best practices in AI-first education
Appendices
Appendix A: AI Attribution Log Template
## AI Contribution Log
### Project: [Name]
### Date: [Date]
### Student: [Name]
| Date | AI Tool | Task | Prompt Summary | Output Summary | How I Modified/Validated |
|------|---------|------|----------------|----------------|--------------------------|
| | | | | | |
| | | | | | |
### Reflection on AI Use
- What tasks did AI help with most effectively?
- Where did AI outputs require significant modification?
- What would I do differently next time?
Appendix B: AIAS Level Reference
| Level | AI Permitted | Example Assignment |
|---|---|---|
| 0 | None | Certification quiz |
| 1 | Ideation only | Brainstorm features (document what AI suggested) |
| 2 | With attribution | Standard project work |
| 3 | As collaborator | Advanced analysis with AI partnership |
| 4 | As subject | GenAI course projects |
Appendix C: Oral Defense Rubric
| Criterion | Excellent (A) | Proficient (B) | Developing (C) |
|---|---|---|---|
| Clarity of Explanation | Explains concepts clearly to non-expert | Clear with minor gaps | Confusing or unclear |
| Technical Depth | Demonstrates deep understanding | Shows solid understanding | Surface-level knowledge |
| Response to Questions | Handles unexpected questions confidently | Answers most questions adequately | Struggles with questions |
| Methodology Justification | Explains why decisions were made | Describes what was done | Cannot explain choices |
| AI Usage Awareness | Articulates when/how AI helped vs. didn’t | Acknowledges AI use | Unclear on AI role |
Appendix D: Cross-Course Ethics Integration
| Course | Ethics Focus | Case Study Topic |
|---|---|---|
| 554 | Data Privacy | Cambridge Analytica, GDPR compliance |
| 513 | Visualization Integrity | Misleading COVID charts, election misinformation |
| 550 | Algorithmic Fairness | Biased lending algorithms, credit scoring |
| 557 | Surveillance & BI | Employee monitoring, predictive policing |
| 558 | Cloud Security | Data breaches, sovereignty, vendor lock-in |
| 576 | Model Accountability | Healthcare AI failures, autonomous vehicles |
Sources
Academic Research
- Frontiers: AI-resistant assessments from faculty training workshops
- Frontiers: FACT Assessment Framework
- British Journal of Educational Technology: GenAI impact on authentic assessment
- PMC: Student perspectives on competency-based portfolios
- Springer: Four Pillars of Peer Assessment
University Best Practices
- Duke CTL: Authentic Assessment Over Surveillance
- University of Dayton: Cheat-Proof Assessment
- University of Saskatchewan: AI-Resistant Oral Assessment
- MIT Open Learning: Peer Review in Online Courses
- Columbia CTL: Peer Review Design
Program Examples
- UChicago Data Science Capstone
- Cornell Johnson MSBA
- USC Marshall MSBA
- Columbia MSBA Capstone
- Virginia Data Science Capstone
Industry and Policy
- Inside Higher Ed: AI-Proofing the Classroom
- Washington Post: Oral Exams to Combat AI
- Times Higher Education: Peer Review in Online Courses
- VerifyEd: Competency-Based Learning Guide 2025
- Thesify: Student AI Survey 2025
Document created for MSBAi Program Development, Gies College of Business
Last Updated: February 2026
Informed by: Assessment research and curriculum review