Why We Ask You to Show Your Thinking
A note to MSBAi students on process, accountability, and what expertise actually looks like.
In most programs, the deliverable is the point. You submit the report, the model, the dashboard — and that’s what gets assessed.
In MSBAi, the deliverable is evidence that thinking happened. The thinking is the point.
This matters because of a specific challenge you’re entering: in a world where AI can produce a plausible-looking answer in seconds, the thing that makes you valuable is not the answer — it’s your ability to know whether the answer is right, why it’s right, what assumptions it rests on, and what would break it.
That’s not a skill you develop by reading AI outputs. It’s a skill you develop by wrestling with problems yourself first, then using AI as a thinking partner, then defending your choices in front of people who will push back.
Specify → Execute → Verify
Every assignment in this program follows the same cycle:
- Specify — You formulate what you want. Before touching any AI tool, you define the problem, state your hypothesis, sketch your approach, or outline your argument. This is the hardest and most valuable part. A vague spec produces vague results, regardless of how powerful the model is.
- Execute — You use AI tools to help build the solution. Copilot writes code. Claude helps debug. Gemini does research. You direct the work, choosing what to accept, reject, and modify. This is where AI makes you faster — but only if your spec was good.
- Verify — You check, critique, and defend the result. Did the model hallucinate? Did the code actually solve the problem you specified, or a different one? Can you explain why this approach works to someone who wasn’t in the room? The oral defense is the ultimate verification — you prove you understand what was built, not just that it runs.
This cycle repeats at every scale: within a single assignment, across a project’s milestones, and across your entire program. As you progress, the quality of your specifications improves, and the complexity of what you can verify increases. That’s the real learning arc.
Think of it this way: a product manager who can’t articulate what they want will get a product nobody needs, no matter how talented the engineering team. The specification phase is where the real intellectual work happens. The verification phase is where your credibility is built. AI handles the execution in between — and that’s the easy part.
What This Looks Like in Practice
Every time we ask you to:
- Work through a problem independently before consulting AI (specification)
- Document your AI interactions (execution transparency)
- Justify why you chose one approach over alternatives (verification)
- Explain your reasoning at a milestone check-in (verification)
- Defend your work live in an oral defense (ultimate verification)
…we’re not checking up on you. We’re creating the conditions under which genuine expertise develops.
How AI Usage Works in This Program
We don’t ban AI. We don’t pretend it doesn’t exist. Every assessment in your syllabus is labeled with an AI Assessment Scale (AIAS) level so you know exactly what’s expected:
| Level | What It Means | Example |
|---|---|---|
| 0 | No AI | Oral defenses, live Q&A |
| 1 | AI for brainstorming only | Idea generation, not content creation |
| 2 | AI for drafting, with human revision | Code assistance, first drafts — with attribution |
| 3 | AI as full collaborator | Integrated use with disclosure and documentation |
| 4 | AI as the subject of analysis | You build, critique, and evaluate AI systems |
Most of your coursework will be at Levels 2-3. The Agentic AI elective operates entirely at Level 4. Oral defenses are always Level 0.
For major projects, you’ll maintain an AI Attribution Log — a brief record of which tools you used, what prompts you gave, and how you modified the outputs. This isn’t surveillance. It’s a professional habit: the ability to explain your process is what separates an analyst from someone who copy-pastes AI output.
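As an illustration only — the exact format will be set in each course, and the tool names and entries below are hypothetical — a single log entry might look like this:

| Date | Tool | What I Asked For | What I Did With the Output |
|---|---|---|---|
| Oct 2 | Copilot | A function to deduplicate customer records by fuzzy name matching | Kept the overall structure; rewrote the matching threshold after testing it on edge cases |
| Oct 3 | Claude | A critique of my deduplication approach | Adopted one suggested test; rejected a proposed library change and noted why |

A few honest lines per session is enough. The log is useful precisely because it forces you to articulate the accept/reject/modify decisions you made during execution.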
The Progression You’ll Experience
| Program Stage | Your Spec Quality | What You Verify | AI Role |
|---|---|---|---|
| First semester | Learning to define problems clearly | “Does this code run? Is the output correct?” | Coding assistant, debugging partner |
| Second semester | Framing multi-step analyses | “Is this methodology sound? Are there edge cases?” | Research partner, methodology challenger |
| Third semester | Designing systems and workflows | “Does this architecture serve the business need?” | Full collaborator, agent orchestration |
| Capstone | Scoping real problems for real stakeholders | “Can I defend every decision to a hiring panel?” | Whatever the problem requires |
By graduation, you won’t just know how to use AI tools. You’ll know how to specify what needs to be done, direct AI execution, and verify that the result is correct, ethical, and useful. That combination is what makes you a scalable problem-solver.
The Most Valuable Moment in Your Education
The moment of not-knowing — when you’re genuinely uncertain, when you’ve tried three approaches and can’t tell which is right — is not a problem to be solved with a better prompt.
It’s the most valuable moment in your education. Sit in it.
The brain learns when it’s surprised, not when it’s confirmed. Dopamine fires at prediction errors — when outcomes exceed or challenge expectations — not when you receive a pre-packaged answer (Schultz et al., 1997; Machulla, 2026). An AI that resolves your uncertainty before you’ve built a prediction robs you of the neurological event that makes the learning stick.
Why Accountability Is the Core Skill
Jonathan Boymal, a higher education leader with 25 years of experience across four countries, puts it plainly: “Responsibility is the very heart of expertise.”
In an AI-augmented workplace, anyone can produce a professional-looking output. What separates a trusted analyst from a prompt-runner is the ability to walk into a room, be asked “How did you get this result?”, and answer with enough depth that people trust you with real decisions.
That’s what showing your thinking builds.
You will graduate with a portfolio of work. But more importantly, you’ll graduate able to own that work — to explain it, defend it, and take accountability for it. That’s the expertise employers are hiring for, and that AI cannot replicate.
A Note on Trust
We designed this program on the premise that you’re here because you want to learn, not because you want to produce deliverables. The process requirements — milestones, logs, oral defenses — exist because we believe the process is the learning, not a tax on it.
We’re not monitoring you. We’re building the scaffolding that makes genuine expertise possible.
“The effort is not a tax on the experience. It is the experience.” — Pål Machulla (The Dopamine Gap, 2026)
For faculty: this page is designed to be shared directly with students at program orientation and at the start of each course.
Related: Assessment Strategy · AI Usage Levels (AIAS) · Design Principles