Founding Cohort One, Fall 2026

Verify authorship before submission, not after suspicion.

ReachIntegrity is a patent-pending architecture for verifying student authorship of academic submissions before they are handed in. Through our Founding Cohort, we are building the first institutional deployments with universities ready to shape what integrity-preserving assessment workflows look like in the AI era.

Patent-pending · Private process record · Pre-submission verification interview · Authorship verified under your policy

The crisis

AI has changed the meaning of submitted work.

Universities are being asked to protect academic standards in an environment where finished text is no longer evidence of authorship. In a peer-reviewed blind-injection study at the University of Reading, AI-generated submissions went undetected by examiners in 94% of cases, and on average earned higher marks than work submitted by real students.1 A 2025 mixed-methods study of university faculty found AI-specific plagiarism policies were rated as meaningfully less effective than traditional plagiarism policies (28% versus 49%).2

AI detectors offered a tempting answer, but they have not proven reliable. Several leading institutions, Vanderbilt among the first in 2023,3 have publicly disabled AI detection in their integrity workflows, citing reliability concerns and disproportionate false-positive rates against non-native English writers.4 Where institutions have acted on detector outputs, courts have begun to push back. In January 2026, a New York court annulled an AI-detection-based plagiarism finding against an Adelphi University student, ruling the determination “without valid basis and devoid of reason” and ordering the university to expunge it from his record.5

Applying unreliable AI detection leads to a broken workflow. Students who did the work feel suspected. Faculty become investigators. Integrity offices inherit cases that are difficult to prove and painful to resolve. Institutions need a workflow that protects academic standards without forcing faculty to defend uncertain AI detector outputs as decisive proof. They also need it built with their workflows in mind, not handed to them as a black box.

The problem was never detection. The problem was the workflow.

Most integrity tools, including detectors, plagiarism checkers, and stylometric analyzers, operate after the work has been submitted. The institution receives the work, then has to determine whether a human wrote it, then has to act on that determination. That sequence is the source of many of the disputes integrity offices are now being asked to resolve.

ReachIntegrity changes the sequence. Verification happens before submission. Faculty receive verified work instead of uncertain suspicion.

How the architecture works

Verify authorship in three steps.

ReachIntegrity is built around a closed-loop verification method: document, question, and verify, all before the work is submitted.

  1. Step One

    Document the process privately.

    A Microsoft Word add-in captures a tamper-evident record of how the work comes into being.

    During authoring, only cryptographic hashes of the document state are transmitted, never the content itself. Students’ drafts remain private to them. What we hold is a sealed fingerprint of the writing process, secured by a hash chain (see the first sketch after Step Three).

  2. Step Two

    Generate questions grounded in lived authorship.

    When the student is ready to submit, the system analyzes the private writing process record and generates an interview tailored to their specific writing journey.

    The questions reference the structural evolution of the document itself, drawing on the private writing process record, which the student cannot preview. They therefore test lived authorship rather than surface-level writing style, and may also probe the student’s understanding of the content, reinforcing learning. (A question-generation sketch follows Step Three.)

  3. Step Three

    Verify before submission.

    The student sits a structured oral interview sized to the stakes of the assessment. The interview is conducted by an AI interviewer using text-to-speech and speech-to-text, with accommodation pathways available.

    The outcome of each verification fits one of four categories: Verified, Failed, Inconclusive, or Unverifiable. Each category carries a reliability score and a signal-breakdown report. Inconclusive outcomes can be routed to human review rather than forcing a binary decision that the evidence cannot support. The three sketches below illustrate the hash chain, the question generation, and this outcome structure in turn.
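
First, the process record from Step One. The sketch below shows, in TypeScript, what a hash-chained record of document states could look like. It is illustrative only: the field names, the use of SHA-256, and the chain layout are our assumptions, not ReachIntegrity’s implementation. What it demonstrates is that each entry commits to the previous one, so any alteration inside the chain is detectable, while the document content itself is never stored or transmitted.

```typescript
// Minimal sketch of a hash-chained process record (illustrative; field
// names and layout are assumptions, not ReachIntegrity's implementation).
import { createHash } from "node:crypto";

interface ChainEntry {
  seq: number;       // position in the chain
  timestamp: string; // when the snapshot was taken
  docHash: string;   // SHA-256 of the document state; content is never stored
  prevHash: string;  // entryHash of the previous entry, linking the chain
  entryHash: string; // hash over this entry's other fields, sealing it
}

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

// Append a snapshot of the current document state to the chain.
// Only a hash of the content is recorded, never the content itself.
function appendSnapshot(chain: ChainEntry[], documentState: string): ChainEntry {
  const prev = chain[chain.length - 1];
  const fields = {
    seq: chain.length,
    timestamp: new Date().toISOString(),
    docHash: sha256(documentState),
    prevHash: prev ? prev.entryHash : "GENESIS",
  };
  const entry: ChainEntry = { ...fields, entryHash: sha256(JSON.stringify(fields)) };
  chain.push(entry);
  return entry;
}

// Tamper evidence: recompute every seal and every link. Any edit,
// reordering, or deletion inside the chain makes this return false.
function verifyChain(chain: ChainEntry[]): boolean {
  return chain.every((e, i) => {
    const { entryHash, ...fields } = e;
    const linked = i === 0 ? e.prevHash === "GENESIS" : e.prevHash === chain[i - 1].entryHash;
    return linked && entryHash === sha256(JSON.stringify(fields));
  });
}
```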
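
Second, the question generation from Step Two. This sketch assumes the process record carries structural metadata alongside the hashes (which sections moved or were rewritten, not their content); the event types and question templates are hypothetical, not the product’s model.

```typescript
// Hypothetical event types and templates; not the product's actual model.
type ProcessEvent =
  | { kind: "section_reordered"; section: string }
  | { kind: "major_rewrite"; section: string }
  | { kind: "outline_changed"; detail: string };

// Map structural events from the record to interview questions that only
// the person who lived the writing process should find easy to answer.
function questionsFrom(events: ProcessEvent[]): string[] {
  return events.map((e) => {
    if (e.kind === "section_reordered") {
      return `You moved "${e.section}" late in drafting. What prompted the restructuring?`;
    }
    if (e.kind === "major_rewrite") {
      return `"${e.section}" was substantially rewritten. What was wrong with the first version?`;
    }
    return `Your outline changed: ${e.detail}. Walk me through that decision.`;
  });
}
```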
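
Third, the outcome structure from Step Three. The four categories, the reliability score, the signal breakdown, and the routing of Inconclusive outcomes to human review come from the description above; the field names and routing rule below are assumptions for illustration, not the product’s API.

```typescript
// Sketch of the four-category outcome; names are illustrative assumptions.
type Outcome = "Verified" | "Failed" | "Inconclusive" | "Unverifiable";

interface VerificationResult {
  outcome: Outcome;
  reliability: number;             // reliability score, e.g. 0..1 (assumed scale)
  signals: Record<string, number>; // per-signal breakdown behind the outcome
}

// Inconclusive goes to human review rather than being forced into a
// pass/fail decision the evidence cannot support.
function route(result: VerificationResult): "submit" | "human_review" | "institution_policy" {
  if (result.outcome === "Verified") return "submit";
  if (result.outcome === "Inconclusive") return "human_review";
  return "institution_policy"; // Failed or Unverifiable
}
```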

ReachIntegrity: Author → Verify → Submit

Legacy detection workflow: Author → Submit → Detect → Accuse

The shift

Move integrity review before submission, where it can prevent disputes instead of escalating them.

Most integrity tools begin after the work has already entered the institution. At that point, every concern becomes a case: someone must investigate, accuse, defend, appeal, and decide. Faculty take on forensic judgment. Integrity officers inherit adversarial cases. Students experience the process as accusation. The institution carries the consequences either way.

ReachIntegrity is designed to move authorship verification upstream. Before submission, students complete structured verification based on their own writing processes. If the work is verified, it enters the submission pipeline with a certificate and an audit trail. If it is not, the institution handles it according to its review or resubmission policies. The institution is no longer forced to build its position on a detector-based accusation, and the integrity event has not yet become a case.

This is not a claim of legal immunity. Institutions still design their policies, determine consequences, and decide edge cases, but the evidentiary posture changes meaningfully. Faculty receive work they can trust at the moment they receive it. Integrity offices spend their time on policy and accommodation, not on defending probabilistic accusations.

The goal is not to catch students out. It is to make student work easier to trust, and undisclosed AI substitution harder to submit.

Calibrated to stakes

Rigor calibrated to the stakes of the assessment.

Detectors apply the same statistical scan to a freshman discussion post and a doctoral dissertation. ReachIntegrity is designed differently: verification rigor and validation thresholds can be set to match the length and consequence of each assessment, as the configuration sketch after the three tiers below illustrates.

Low stakes

Formative discussion post

A brief check that stays light and low-friction, appropriate for assessments that warrant integrity assurance without slowing the cadence of weekly coursework.

Summative

Term paper

A longer, more thorough interview. Probes process and structural decisions, sized to a piece of work that materially shapes a grade.

High stakes

Capstone or thesis

The full rigorous protocol: a sustained interview that probes process, subject understanding, and substantive depth. The depth of the verification matches the weight of the credential.
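
Here is that configuration sketch. It keys rigor parameters to the three stakes tiers above; every parameter name and value is an illustrative assumption, not a product default. The design point it captures is that the institution, not the tool, decides where each threshold sits.

```typescript
// Hypothetical rigor profiles keyed to stakes; all values are illustrative.
type Stakes = "low" | "summative" | "high";

interface RigorProfile {
  interviewMinutes: number; // length of the oral interview
  questionCount: number;    // number of process-grounded questions
  passThreshold: number;    // minimum reliability required for Verified
}

const rigorByStakes: Record<Stakes, RigorProfile> = {
  low:       { interviewMinutes: 3,  questionCount: 2,  passThreshold: 0.6 },  // discussion post
  summative: { interviewMinutes: 10, questionCount: 5,  passThreshold: 0.75 }, // term paper
  high:      { interviewMinutes: 25, questionCount: 10, passThreshold: 0.9 },  // capstone or thesis
};
```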

Integrity architecture

A safer first response. A stronger final record.

Detection-based tools all inherit a single fragile constraint: you must be highly confident before you act, because acting on weak evidence does serious damage to a real student. That constraint is what makes detector false-positive rates litigation-relevant. It is what forces institutions to defend probability scores in court.

ReachIntegrity is designed to err in the right direction.

Procedural in the moment

When a verification does not clear a student’s work outright, the first response is procedural: typically support, rework, resubmission, accommodation, or review. The architecture is designed to err on the side of giving the student another path, not on the side of an early adverse mark.

Durable over time

The integrity of a Verified outcome is meaningful and durable. The issuing authority retains the right to revoke a certificate if fraud is later substantiated. Verification is forgiving when it errs in the moment, and serious about the integrity of what it certifies over time.
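
To make that durable-over-time posture concrete, here is a small sketch of what a revocable certificate record could look like. The text above commits only to the revocation right itself, so every field and function name here is an assumption.

```typescript
// Illustrative certificate record; only revocability is stated by the source.
interface Certificate {
  id: string;
  student: string;
  assessment: string;
  issuedAt: string;
  revoked?: { at: string; reason: string }; // set only if fraud is later substantiated
}

const isCurrentlyValid = (cert: Certificate): boolean => cert.revoked === undefined;

// Revocation is an authority action taken after fraud is substantiated,
// not an automatic response to a weak signal in the moment.
function revoke(cert: Certificate, reason: string): Certificate {
  return { ...cert, revoked: { at: new Date().toISOString(), reason } };
}
```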

Who it serves

What ReachIntegrity changes for the people who carry the integrity load.

For students doing the work

Students who do the work should not have to write under suspicion. The architecture is designed to give them a structured way to demonstrate authorship and have their submissions certified. No detector scores. No defensive AI humanizers. No anxiety about a statistical guess.

For faculty

Faculty did not sign up to be forensic investigators. ReachIntegrity is designed to keep forensic authorship review out of the classroom and give teaching time back to faculty. Verification is conducted automatically, before the work reaches them.

For integrity officers

Replace post-submission investigation with pre-submission verification. Each Verified outcome carries an audit trail. Each Inconclusive outcome is routed to human review. Each Failed outcome is documented with the signals that produced it, in language that can be explained to a student, a parent, or counsel.

For provosts and deans

A degree carries more trust when its high-stakes assessments arrive with verifiable evidence of student authorship under the policy the institution sets. As the ancient proverb puts it, a good name is more desirable than great riches. The reputation of a university is not decoration; it is the very asset in which students are investing.

Architecture, not detection

Not a detector. A verification architecture.

ReachIntegrity is built around a patent-pending closed-loop verification method: document the writing process privately, generate questions from that process, verify the student’s answers against the same record, and allow only verified work to be submitted. The architecture is protected by a tamper-evident record, layered defenses against adversarial input, and explicit outcome categorization that includes reporting of uncertainty.

This is not a repackaged detector. It is a distinct architectural approach to the problem of authorship verification.

Where we are now

Where ReachIntegrity is now.

ReachIntegrity is entering its first founding cohort build. The patent has been filed. The architecture is designed. The core verification capability is now being built with founding institutions whose assessment workflows, policies, and implementation realities will shape the first production deployments.

This is deliberate. Authorship verification at institutional scale is too consequential to design in isolation and sell as a finished answer. Founding institutions help shape the standard from the beginning. In return, they receive priority influence over the roadmap, long-term pricing consideration, and (at their discretion) recognition as early contributors to a new verification architecture for higher education.

Built today

Patent filed. Architecture designed. Founding Cohort applications open.

In active build

Core verification capability (process documentation, question generation, AI-conducted interview, four-category outcome) targeted for Founding Cohort One in Fall 2026.

Built next

Capabilities expanded based on Founding Cohort One feedback. Likely additions include richer signal-breakdown reporting, expanded LMS integration, an institutional dashboard, and additional writing-environment integrations shaped by institution consultation.

The roadmap

The roadmap as it stands today.

  1. Now · April 2026

    Founding Cohort applications open

    Architectural design partnership conversations with applicant institutions. Development of the core verification capability is underway.

  2. Fall 2026

    Founding Cohort One

    Live deployment of pre-submission verification at one founding institution. Core capability delivered: writing process documentation, question generation, AI-conducted oral interview, four-category verification outcome, and a defined institutional pathway for non-verified work.

  3. Spring 2027

    Founding Cohort Two

    Founding Cohort Two opens with additional founding institutions, sized to the capacity our team can serve well. Capabilities expand based on Founding Cohort One feedback.

  4. 2027 onward

    Beyond

    Subsequent cohorts scale according to readiness and demand. The roadmap may expand into broader accessibility tooling, additional data-residency options, portable certificates, and verification for non-essay media, such as video, audio, and code.

If Fall 2026 cannot accommodate a quality first deployment, Founding Cohort One moves to Spring 2027. We would rather move the date than compromise the outcome.

Founding cohort

Apply for Founding Cohort consideration.

Founding Cohort One is one institution, with deployment beginning in Fall 2026. Founding Cohort Two opens in Spring 2027 with additional founding institutions, sized to capacity. Subsequent cohorts scale from there.

What founding institutions also gain

  • Founding standing in a new category of authorship verification, ahead of broader market availability.
  • Consideration for long-term preferential pricing on subsequent cohorts.

Applying secures four things

  • A founding cohort consultation with our team to assess fit, surface your institution’s specific integrity workflow needs, and walk through the architecture under an NDA.
  • A roadmap preview showing what is being built, when, and what your institution would shape if you join the Founding Cohort.
  • Earliest available cohort consideration. Applications are reviewed in the order deposits are received. Founding Cohort One is selected from applying institutions on the basis of readiness, fit, and the clearest path to a successful first deployment.
  • Credit toward the full engagement. Your $2,950 deposit is credited in full toward your founding cohort engagement on cohort assignment.

What you receive

What the founding cohort engagement delivers.

Pre-submission authorship verification for student assessments at the institution, built collaboratively over a six-month implementation.

  • Architectural fit consultation and integrity-workflow design with our team
  • Verification rigor set to the institution’s specific assessment types
  • Build and deployment of the verification capability for selected courses
  • Verification of student assessments across the founding cohort semester
  • Per-verification reporting and outcome documentation
  • Direct working relationship with our team. Your feedback shapes the platform.

The core capability (documenting the writing process, generating process-grounded questions, conducting verification interviews, and producing certified outcomes) is the founding cohort deliverable. Capabilities listed on the Roadmap beyond this core are not promised within the founding cohort engagement. They are built in subsequent cohorts, with founding institutions receiving them as they become available. This is the standard we hold ourselves to: tell you what the founding cohort delivers, and promise nothing it does not.

FAQ

Questions your buying committee will ask.

What does a “Founding Cohort” engagement actually mean?

The Founding Cohort is a six-month engagement during which we build and deploy the core verification capability at your institution, shaped by your specific assessment workflows and integrity policy. Founding Cohort One is one institution, beginning Fall 2026. Founding Cohort Two opens Spring 2027 with additional founding institutions, sized to capacity. Founding Cohort institutions receive priority pricing and roadmap influence that subsequent cohorts will not.

Why is the deposit so much lower than the full engagement?

The refundable founding cohort deposit secures consideration for an upcoming cohort, a serious working conversation about fit, and earliest-available cohort placement. The full engagement fee, payable on cohort assignment, reflects the work of building and deploying the verification capability at your institution over six months. Asking institutions to commit the full amount before fit has been established would be neither reasonable nor procurement-friendly.

How is Founding Cohort One selected?

Applications are reviewed in the order deposits are received. Founding Cohort One is selected from applying institutions on the basis of readiness, fit, and the clearest path to a successful first deployment. Institutions not selected for Founding Cohort One may be considered for subsequent cohorts, subject to fit, readiness, and capacity.

What is built today, and what is being built during the cohort?

The patent is filed. The architecture is designed. The core verification capability, including process documentation, question generation, AI-conducted interview, and four-category outcome, is in active development, with delivery targeted for Founding Cohort One in Fall 2026. The Roadmap section on this page describes what is currently being built, what comes in subsequent cohorts, and what may follow. We will not claim a capability is built when it is not.

Why should our institution apply now rather than wait?

Three reasons. First, you receive priority influence over what gets built and how it is implemented at your institution. Second, you are considered for long-term pricing protection that subsequent cohorts will not receive. Third, applications are reviewed in the order deposits are received: earlier applications are considered for earlier cohorts.

Does ReachIntegrity require students to write inside a proprietary editor?

ReachIntegrity currently runs as a Microsoft Word add-in. Students write in Word as normal; no proprietary editor is required, and the document content itself is never transmitted, only cryptographic hashes of the document state. Support for additional writing environments may be added for subsequent cohorts based on institution consultation.

What about FERPA, GDPR, POPIA, accessibility, and data residency?

ReachIntegrity is designed to support FERPA, GDPR, UK GDPR, and POPIA-aligned deployments. Calibration to your specific regulatory context, including data residency, retention controls, access controls, and accessibility accommodations, is part of the founding cohort design partnership. Compliance documentation, the Data Processing Agreement, and the Accessibility Statement are produced and refined collaboratively during the cohort engagement.

What if AI assistance is permitted for a given assignment?

Fully supported by the architecture. The institution sets what is allowed, including grammar checking, brainstorming, outline help, and even disclosed AI drafting, and the system verifies authorship under that policy. The point is not to ban AI; rather, it is to verify student authorship under whatever policy the institution sets.

More detailed questions? Reach out during your founding cohort consultation. The full Buying Committee FAQ is shared under an NDA with applying institutions.

From here

From suspicion to verification.

AI has changed what submitted work can prove on its own. Institutions now need a different sequence: one that protects students doing the work, supports faculty, and gives academic leaders evidence they can trust at the moment work arrives.

Founding institutions are joining the work of building that sequence, and shaping the standard by which student authorship will be verified for the next decade. There is room, today, for one of those institutions to be yours.

Apply for Founding Cohort consideration

References

  1. Scarfe, P., Watcham, K., Clarke, A., & Roesch, E. (2024). A real-world test of artificial intelligence infiltration of a university examinations system: A “Turing Test” case study. PLOS ONE, 19(6), e0305354. https://doi.org/10.1371/journal.pone.0305354. In a blind-injection study across five undergraduate psychology modules at the University of Reading, 94% of AI-generated submissions were not detected by examiners, and on average attained higher grades than real student submissions.
  2. Fadlelmula, F. K., & Qadhi, S. (2025). Examining academic integrity policy and practice in the era of AI: A case study of faculty perspectives. Frontiers in Education. https://doi.org/10.3389/feduc.2025.1621743. In a mixed-methods study of 71 faculty members at a UAE institution, perceived effectiveness of generative-AI plagiarism policies was rated meaningfully lower (28%) than perceived effectiveness of traditional plagiarism policies (49%).
  3. Vanderbilt University Center for Teaching. (2023, August 16). Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector. Vanderbilt Brightspace (vanderbilt.edu). Vanderbilt was among the first major institutions to publicly disable Turnitin’s AI detector, citing reliability concerns, lack of transparency in the detection methodology, and disproportionate false-positive impact on non-native English speakers.
  4. Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779. Across seven widely used GPT detectors evaluated on TOEFL essays written by non-native English speakers, the average false-positive rate was 61.3%. All seven detectors unanimously flagged 19.8% of the human-written TOEFL essays as AI-generated, and at least one detector flagged 97.8%.
  5. Matter of Newby v. Adelphi University, Supreme Court, Nassau County (Hon. Randy Sue Marber, J.S.C.), decided January 28, 2026. 2026 NY Slip Op 26021; Index No. 615397/25. The court annulled the university’s plagiarism finding, which was based on a Turnitin AI-detection score, as “without valid basis and devoid of reason,” and ordered the university to expunge the violation from the student’s record and rescind any sanction imposed.