
How Artificial Intelligence Is Reshaping College Admissions

[Image: University admissions officer reviewing AI-powered application scoring dashboard]

Virginia Tech told prospective students something unusual this fall: an AI system helped score their application essay. Not just flag it for plagiarism — score it. The school's hybrid review system pairs a human reader with an AI evaluator on a 12-point scale, and if the two scores diverge by more than two points, a second human gets pulled in. Because of the tool, students received admissions decisions a full month earlier than in prior years, arriving in late January instead of February.
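Virginia Tech hasn't published its implementation, but the reconciliation rule it describes — escalate to a second human whenever the human and AI scores diverge by more than two points — is simple enough to sketch. The function name and structure below are illustrative, not the school's actual system:

```python
def reconcile(human_score: int, ai_score: int, threshold: int = 2) -> str:
    """Hybrid review rule as Virginia Tech describes it (sketch only).

    Scores are on a 12-point scale; if the human and AI readers
    diverge by more than `threshold` points, escalate the file.
    """
    if abs(human_score - ai_score) > threshold:
        return "second_human_review"   # readers disagree too much
    return "accept_scores"             # human and AI roughly agree
```

Under this rule a 9 from the human reader and a 5 from the model would trigger a second read, while an 8 and a 10 would not.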

What Admissions Offices Are Actually Doing With AI

The bigger surprise is that Virginia Tech isn't even a pioneer here. UNC-Chapel Hill has used AI to score application essays since 2019 — specifically a tool called the Project Essay Grade (PEG) engine, built by Measurement Incorporated. It rates writing on a 1-to-4 scale for mechanics and clarity.

UNC has spent nearly $200,000 on the technology. Associate vice provost Rachelle Feldman described the goal as allowing "our evaluators to concentrate on the things we think are the most important." The school also uses Slate's Reader AI to pre-summarize supporting documents before human reviewers open a file.

AI shows up across several parts of the admissions workflow now:

  • Transcript processing: converting foreign grading scales, calculating weighted GPAs, flagging transfer credit questions
  • Document completeness checks: alerting staff when recommendation letters, test scores, or required forms are missing
  • Essay scoring: preliminary quality screening before a human reads
  • Student outreach: chatbots that answer questions and nudge applicants about incomplete materials

Roughly 50% of admissions offices use some form of AI. Most activity sits in the first two categories. Essay scoring is still a minority use case — but it's the one with the most consequences for applicants.

The Essay Arms Race

Students started using AI to write and polish essays. Schools responded by deploying AI to detect AI. Neither side is winning cleanly.

Detection accuracy is genuinely poor. AI detection tools — including widely used products like Turnitin and GPTZero — are accurate less than 80% of the time, which means human-written content can be wrongly flagged in more than one in five cases. The error rate climbs sharply for non-native English speakers: studies show that over half of ESL students' writing samples get flagged as AI-generated even when written entirely by hand. Some of those samples predated ChatGPT's release.
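To see why those rates matter at scale, run the arithmetic on a hypothetical applicant pool. Every number below is an assumption for illustration except the over-50% ESL flag rate, which is the figure the studies report:

```python
# Illustrative arithmetic only: pool size, ESL share, and the native-speaker
# flag rate are assumptions; the 50% ESL flag rate is the reported figure.
human_essays = 10_000        # hypothetical pool of genuinely human-written essays
esl_share = 0.15             # assumed fraction written by ESL applicants
esl_flag_rate = 0.50         # over half of ESL samples wrongly flagged (reported)
native_flag_rate = 0.10      # assumed, far lower for native speakers

esl = int(human_essays * esl_share)            # 1,500 ESL essays
native = human_essays - esl                    # 8,500 native-speaker essays
flagged_esl = int(esl * esl_flag_rate)         # 750 false flags
flagged_native = int(native * native_flag_rate)  # 850 false flags

# ESL students are 15% of the pool but nearly half of all false flags
print(flagged_esl, flagged_native, flagged_esl / (flagged_esl + flagged_native))
```

Under these assumptions, a group making up 15% of applicants absorbs roughly 47% of the false accusations — which is the disparate-impact pattern the research describes.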

"The use of artificial intelligence by an applicant is not permitted under any circumstances in conjunction with application content." — Brown University admissions policy

Schools can't agree on what to do with that reality. The table below shows just how wide the gap is:

| School | AI Essay Scoring? | Runs AI Detection? | Student AI Policy |
| --- | --- | --- | --- |
| UNC-Chapel Hill | Yes (since 2019) | Not disclosed | Not explicitly banned |
| Virginia Tech | Yes (2025-26) | Not disclosed | Not explicitly banned |
| BYU | No | Yes | May rescind offers |
| Brown University | No | Not disclosed | Explicitly banned |
| UW-Madison | No | No | Will not penalize |
| UC System | No | Plagiarism checks only | Can disqualify (PIQs) |

Virginia Tech's approach — using AI to confirm or question a human score rather than replace it — is more thoughtful than most. It's also the exception.

How Students Are Using AI to Find Schools

The AI disruption isn't flowing only from institutions to applicants. It runs in the other direction too.

"Stealth applicants" now account for 9.7% of total college applications, up from just 1% in 2020. These students research programs, compare financial aid policies, and narrow their college lists almost entirely through AI tools and third-party platforms. They never visit a school's website, never attend a virtual tour, never fill out a "request information" form.

The driver is how AI has transformed search. Today, 78% of education-related Google searches surface AI Overviews — summary boxes that answer the question before you click anything. Nearly 45% of Google searches now end without a click at all (a statistic that would have seemed unbelievable to enrollment marketers five years ago).

A high school junior asking "which schools have strong environmental engineering programs and generous need-based aid" can get a synthesized, ranked answer in seconds. For enrollment teams, that's a structural problem. The metrics they've relied on for years — site visits, form submissions, virtual tour completions — are losing signal.

About 45% of students have already used a digital AI assistant on a college website, with usage peaking among 9th and 10th graders. The pipeline is shifting younger and more automated than most institutions have planned for.

Predictive Analytics and the Enrollment Game

Below the public-facing tools sits a quieter layer that may carry more weight: predictive analytics driving enrollment and financial aid strategy.

Institutions now model yield probability (how likely an admitted student is to enroll), financial aid sensitivity (how much a specific scholarship shifts that probability), and retention risk (which students are most likely to leave in year one). These models pull from dozens of signals: high school GPA, zip code, campus visit history, email response timing, even how quickly an applicant logs into the portal after a decision is released (behavioral data the student usually has no idea is being tracked).

Liaison's Othot platform, used at dozens of institutions, takes a prescriptive approach. If a model predicts that an admitted student from a specific high school in Ohio has a 62% enrollment probability without aid and an 89% probability with a $4,000 annual scholarship, it recommends that exact offer. Schools stretch their merit aid budgets by targeting scholarships to where they'll change behavior.
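Liaison hasn't published Othot's internals, but the prescriptive step can be sketched as a budget-constrained search for the scholarship amount with the best probability lift per dollar. Everything here — the function, the $8,000 row, the budget — is illustrative, not the platform's actual algorithm:

```python
def best_offer(probs: dict[int, float], budget: int) -> int:
    """Pick the scholarship amount with the highest enrollment-probability
    lift per dollar spent, within budget. Illustrative sketch only."""
    baseline = probs[0]              # enrollment probability with no aid
    best, best_ratio = 0, 0.0
    for amount, p in probs.items():
        if amount == 0 or amount > budget:
            continue
        ratio = (p - baseline) / amount   # probability gained per dollar
        if ratio > best_ratio:
            best, best_ratio = amount, ratio
    return best

# The article's example: 62% with no aid, 89% with a $4,000 annual scholarship
offers = {0: 0.62, 4000: 0.89, 8000: 0.91}
print(best_offer(offers, budget=8000))
```

With these numbers the $4,000 offer wins: doubling the scholarship buys only two more points of probability, so its per-dollar lift is worse. That is the basic logic of stretching a merit-aid budget toward offers that actually change behavior.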

The tradeoff is real. Students with a smaller digital footprint in the recruitment process — those who don't visit campus, don't open recruitment emails, don't engage with school social content — may receive less generous aid offers. Not because of financial need. Because the model lacks enough data to assess their likelihood of enrolling. First-generation applicants are overrepresented in that group.

The Bias Problem Nobody Wants to Talk About

The equity concerns around AI in admissions are not speculative. They're documented, and they run in multiple directions.

False positives from AI detection disproportionately harm non-native English speakers. When a student writes in a structured, formulaic style because English is their second or third language, detection models often read that pattern as machine-generated. The error rate for native English speakers is significantly lower. Any school running detection without genuine manual review of flagged essays is filtering on national origin, whether it intends to or not.

The bias runs the other direction too. Research has found that AI-generated essays closely resemble writing from socioeconomically privileged applicants — specifically, wealthy male students from well-resourced high schools. If an institution's AI scorer was trained on historically "successful" applications, it may systematically underrate essays reflecting different cultural voices, communication norms, or life circumstances. Not because those essays are weaker. Because the training data taught the model to prefer something else.

The Federation of American Scientists flagged these concerns in a 2024 report on AI fairness in education, calling for systematic demographic disparity analysis in model outputs. As of 2026, adoption of that framework by individual institutions remains inconsistent at best.

The Policy Chaos Is the Real Problem

My honest read: the most consequential issue in AI admissions isn't the technology itself. It's the policy disorder surrounding it.

Students applying to eight or ten schools this cycle are navigating completely contradictory rules. At one school, using AI to brainstorm an essay is fine. At another, it could cost them their offer. At a third, the school itself is using AI to score what they wrote. Few of these policies are prominently communicated on application portals.

NACAC updated its ethics guide in 2024 to include an AI section, urging institutions to ensure their use aligns with "transparency, integrity, fairness and respect for student dignity." Right principle. But guidance without enforcement doesn't change what schools actually do.

That information gap compounds the inequity. Students at well-funded private schools with dedicated college counselors learn which schools run detection and which don't. First-generation applicants often find out after the fact, if at all. Without clearer disclosure standards, the students who most need fair treatment are the ones most likely to be blindsided.

Bottom Line

  • If you're applying, check each school's stated AI policy before you submit. There's no standard — consequences range from no penalty to a rescinded offer. When in doubt, write in your own voice; readers who've reviewed 40,000 essays recognize authenticity, and they recognize the alternative.
  • If you work in enrollment, the stealth applicant trend is structural. Visibility inside AI-generated summaries from tools like Google AI Overviews and Perplexity is becoming as important as your organic search ranking. Audit whether your institution's information appears accurately in those tools.
  • The single most important takeaway: AI in admissions is already operating at scale. UNC has been using AI to score essays since 2019. About half of admissions offices have AI in their workflow somewhere. The conversation to have now is about bias auditing, policy transparency, and whether the rules are being communicated equitably to all applicants — not whether this technology is coming.

Frequently Asked Questions

Do colleges currently use AI to read my application essay?

Some do. UNC-Chapel Hill has used an AI scoring engine since 2019, and Virginia Tech introduced hybrid AI review in the 2025-26 cycle. About half of admissions offices use AI somewhere in their process, though most apply it to transcripts and supporting documents rather than essays. No Ivy League school has publicly confirmed using AI for essay scoring.

How accurate are AI detection tools at identifying AI-written essays?

Less accurate than most people assume. These tools are right less than 80% of the time overall, and the false positive rate is much higher for non-native English speakers — some studies show more than half of ESL writing samples get wrongly flagged. Schools using detection without careful manual review of flagged submissions are making decisions on flawed data.

Will using AI to write my college essay get me rejected?

It depends entirely on where you're applying. BYU may rescind offers for AI-written content. Brown bans AI use in applications outright. UW-Madison won't penalize applicants for suspected AI use at all. There's no universal standard, and policies aren't always easy to find — which is itself part of the problem.

What is a "stealth applicant" and why should schools care?

A stealth applicant researches and applies to schools almost entirely through AI tools and third-party platforms, bypassing a school's official website and recruitment outreach entirely. This group went from 1% of applicants in 2020 to 9.7% in 2025. For institutions, it means a growing share of their incoming class was never captured by traditional engagement metrics.

Can predictive analytics affect how much financial aid I'm offered?

Yes, indirectly. Many institutions now use yield models that calibrate merit aid to maximize enrollment probability. Students who engage less with digital recruitment touchpoints may receive less targeted scholarship offers — not because of financial need, but because the model has fewer data points to work with. First-generation students are disproportionately affected by this dynamic.

Is AI creating fairness problems in the admissions process?

Documented ones, yes. AI detection tools flag ESL students' writing at higher rates than native English speakers'. Research also suggests that AI scoring models trained on historical applications may undervalue essays that reflect different cultural voices or writing styles. The Federation of American Scientists has called for systematic fairness analysis in educational AI tools, but institutional adoption has been slow and uneven.
