Pruuva
Get early access →

They submitted the work. But do they understand it?

Pruuva reads each student's submission and conducts an adaptive oral probe — the conversation every teacher wishes they had time for.

Get early access
No course redesign needed · Works with any discipline · Evidence over accusation
app.pruuva.com / classes / bus-301 / submissions / caroline-mbeki
← All submissions
CM
Caroline Mbeki
BUS 301 · Submitted May 14
Review
PROVISIONAL
C−
72 / 100
SUBMISSION · PDF
3 from Pruuva · 2 flags
BUS 301 · CAROLINE MBEKI · MAY 14
Vertical Integration
as Strategic Inevitability:
A Tesla case study

§2 Theoretical framework. Drawing on transaction cost economics (Williamson, 1985), the integration decision can be framed as a response to asset specificity and small-numbers bargaining hazards.

In Tesla's case, the most cited driver is battery cell supply, which exhibits both long-cycle capex requirements and severe lead-time risk during the 2017–2022 expansion.

§3 Empirical pattern. Cross-referencing 10-K filings against industry benchmarks, the in-house share of cell production rises from 0% in 2015 to a projected 56% by 2028…

Page 2 of 11 · 1,832 words · Select text to annotate →
PROBE EVIDENCE · 3 COMMENTS
Pruuva · AUTO · 2d
§2 Theoretical framework
"transaction cost economics (Williamson, 1985)"
Student knows the conclusion (integrate when assets are specific) but couldn't construct the causal mechanism.
UNDERSTANDING · WEAK
Pruuva · FLAGGED · 2d
§3 Empirical pattern
"battery cell supply, which exhibits both…"
Could not name the 10-K source. Restated the in-house share figures verbatim but inverted the time direction.
UNDERSTANDING · CONCERN
GRADING
Caroline's submission
Probe evidence · 4
Rubric · 4
Feedback · 1
Final grade
PRUUVA SUMMARY
Partial understanding. Student is fluent on thesis and structure but cannot explain the basis for two key claims about Tesla's integration strategy.
Proctoring integrity
2 minor events · session valid
Review
RUBRIC · WEIGHTED 72 / 100
Thesis & argument
82
Strategic analysis
54
Evidence & sourcing
38
Communication
86
Submission-specific: every question drawn from the student's own work
Adaptive: follow-ups adjust to the student's responses in real time
Fair: no demographic bias, students explain in their own voice
Teacher-centered: you review the evidence and make the final call

THE PROBLEM

You can't trust the work anymore

A student submits a polished paper. Did they write it? Did they understand it? You have no way to know — and the tools that claim to tell you get it wrong too often to trust.

AI writes A-grade work now

Any student can submit a polished essay, report, or case study without understanding a word of it. Written output alone is no longer evidence of comprehension.

Detection punishes the wrong students

AI detectors flag non-native speakers and neurodiverse students at disproportionate rates. A tool that accuses without evidence creates more problems than it solves.

You can’t talk to every student

A 15-minute conversation per student per assignment is the gold standard — and completely impossible at scale. The conversation that would prove understanding never happens.

HOW IT WORKS

Three steps to verified understanding

You already know the best test of understanding is a conversation. Pruuva runs that conversation for you — for every student, every assignment.

  1. Create your assessment

    Upload your rubric and assignment brief. Pruuva generates submission-specific probe questions you can preview before publishing. Setup takes minutes, not hours.

  2. Students submit & speak

    After submitting their written work, each student completes an adaptive oral probe. Pruuva reads their specific submission and asks questions only they could answer — no two probes are the same.

  3. You get structured evidence

    A per-rubric evidence report with quoted transcript, understanding scores, and flagged concerns. You review the evidence and make the final grading decision.

INSIDE THE PROBE

Not generic questions.
Their questions.

Every question comes from the student's own work — their claims, their methods, their decisions. When answers are vague, Pruuva digs deeper. When they're strong, it moves on. No two probes are the same.

Reads the submission: identifies claims, evidence, methodology, and decisions in the student's own text.
Generates targeted questions: each question is tied to a specific passage; no two probes are alike.
Adapts in real time: follow-ups adjust based on the student's responses. Vague answers get probed deeper.
Produces structured evidence: per-rubric understanding scores with quoted transcript excerpts. Evidence, not probability.
STUDENT SUBMISSION

The study employs a mixed-methods design integrating quantitative survey instruments with semi-structured interviews to triangulate findings.

Sample size determined via power analysis (α = .05, power = .80), yielding a minimum n of 84 per arm.

Findings consistently suggest that institutions with greater perceived credibility anchor expectations more effectively…

LIVE PROBE
PRUUVA
You mention a power analysis with α = .05 and power = .80. Where did those parameters come from?
STUDENT
Um, alpha is just standard convention. The power level… honestly I think I pulled it from a template.
PRUUVA · FOLLOW-UP
Let's stay there. What does a power of .80 actually mean in plain English?
STUDENT
It's the probability of detecting an effect if one is really there. Eighty percent.
PRUUVA · FOLLOW-UP
Good. So why .80 and not .90?
speaking…
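The "minimum n of 84 per arm" in the mock submission is exactly the kind of figure the probe drills into. Under a standard normal-approximation formula for a two-arm comparison of means, that n follows from the significance level, the power, and an assumed standardized effect size d. The submission never states d, so the value below is hypothetical, chosen only to illustrate the arithmetic:

```python
import math
from statistics import NormalDist


def n_per_arm(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sample
    comparison of means with standardized effect size d."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)


# d = 0.433 is a hypothetical effect size, not stated in the submission;
# it happens to reproduce the essay's figure of 84 per arm.
print(n_per_arm(0.433))  # → 84
```

Note that power = 1 − β, so 80% power corresponds to a Type II error rate of β = .20; that is precisely the distinction a student who pulled parameters from a template tends to miss.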

A DIFFERENT APPROACH

Why this works when detection doesn't

Feature
AI Detection Tools (status quo)
Pruuva
How it works

Scans text for AI-generated patterns using statistical analysis

Adaptive oral probe tied to specific passages in the student’s own submission

What it measures

A probability that the text was AI-generated, typically collapsed into a binary verdict

Per-rubric understanding with quoted transcript evidence

False positives

Flags non-native speakers and neurodiverse students at disproportionate rates

Conversation-based — no demographic bias in the assessment method

Student experience

Accusatory. Students feel surveilled and guilty until proven innocent

Fair. Students demonstrate what they know in their own voice

Actionable output

A percentage score with no pedagogical value

Evidence report with per-dimension understanding scores and flagged concerns

FAQ

Things people ask

Most questions come from one place: making sure this is fair to students. We obsess about that too.

EARLY ACCESS

Stop guessing.
Start knowing.

Grade with evidence you can actually stand behind. Know what your students understand — not just what they submitted.

Join early access