Academic ABM Literature Analysis: College Admissions Simulation

Source: abm_literature_analysis.md



Synthesised from deep-dive research on Reardon et al. (2016), Assayed et al. (2023–2025), and the broader matching-market simulation literature. Research date: March 2026.


Executive Summary


1. Theoretical Foundations

1.1 Gale & Shapley (1962) — The Mathematical Starting Point

Citation: Gale, D. & Shapley, L.S. (1962). "College Admissions and the Stability of Marriage." American Mathematical Monthly, 69(1), 9–15.

The original paper defined college admissions as a two-sided matching problem. Every student has preferences over colleges; every college has preferences over students. The deferred acceptance (DA) algorithm produces a stable matching — no student-college pair both prefer each other over their current assignment.

Two key results:

  - A stable matching always exists in the college admissions problem.
  - The student-proposing DA produces the student-optimal stable matching; the college-proposing DA produces the college-optimal one.

What real admissions is not: Pure stable matching. Real admissions adds noise (holistic review), incomplete information (students don't know their true chances), strategic asymmetries (binding ED distorts truthful preference revelation), and hook preferences (legacy, athlete, donor) that decouple institutional preference from academic quality.

Relevance to college-sim: Our sequential-round structure (ED → EA/REA → EDII → RD) is a real-world approximation of DA where binding ED rounds generate partial commitment before the main round. The Gale-Shapley framework is the theoretical benchmark; our model captures the strategically distorted reality.

1.2 Abdulkadiroğlu & Sönmez (2003) — School Choice as Mechanism Design

Citation: Abdulkadiroğlu, A. & Sönmez, T. (2003). "School Choice: A Mechanism Design Approach." American Economic Review, 93(3), 729–747.

Extended Gale-Shapley to K–12 public school choice. Showed that the Boston immediate-acceptance mechanism is manipulable — families who misrepresent preferences are sometimes better off. Proposed student-proposing DA and top-trading cycles as alternatives.

The stability-efficiency tradeoff: No mechanism is simultaneously stable and Pareto efficient (Roth 1982). ED/binding commitment trades off student welfare (can't compare financial aid offers) for institutional yield certainty — exactly the tradeoff our model captures through ED multipliers vs. lower aid awareness.

1.3 Epple, Romano & Sieg (2006) — Equilibrium in Higher Education Markets

Citation: Epple, D., Romano, R. & Sieg, H. (2006). "Admission, Tuition, and Financial Aid Policies in the Market for Higher Education." Econometrica, 74(4), 885–928.

An equilibrium (non-ABM) model showing how college quality hierarchies emerge endogenously. Prediction: top colleges use need-based aid (can attract top students regardless); lower colleges use merit-based aid (must compete). This pattern matches observed reality and informs how our net-cost-by-income data should be interpreted across college tiers.


2. The Foundational ABM: Reardon, Kasman, Klasik & Baker (2016)

2.1 Full Citation and Context

Citation: Reardon, S.F., Kasman, M., Klasik, D. & Baker, R. (2016). "Agent-Based Simulation Models of the College Sorting Process." Journal of Artificial Societies and Social Simulation, 19(1), 8. DOI: https://doi.org/10.18564/jasss.2993 URL: https://www.jasss.org/19/1/8.html Working paper: Stanford CEPA WP 15-04.

Authors: Sean Reardon (Stanford Education/Sociology), Matt Kasman (Brookings), Daniel Klasik (GWU), Rachel Baker (UC Irvine).

The paper's central question: Why do low-SES students sort into less selective colleges even after controlling for academic achievement? Five distinct SES-linked mechanisms could explain this beyond the achievement gap itself. ABM allows each mechanism to be isolated and removed independently — something impossible with observational data alone.

2.2 Model Architecture (ODD Summary)

Purpose: Understand how five SES-linked mechanisms jointly produce the empirically observed pattern of socioeconomic stratification in US college enrollment.

Agents and scales:

| Entity | Count | Attributes |
|---|---|---|
| Student agents | 8,000 per run | Caliber (C), Resources (R), perceived caliber, application set |
| College agents | 40 | Quality (Q), capacity (150 seats), yield rate, admission threshold |

Three-stage annual process:

  1. Application — Students form beliefs about each college's quality and their own admission chances, then select a portfolio to maximise expected utility (EU) subject to diminishing returns.
  2. Admission — Colleges rank students by perceived caliber, admit enough to fill seats at expected yield.
  3. Enrollment — Each admitted student enrolls at the highest-utility college that accepted them.

After each year, college quality updates as the rolling average of recently enrolled students' caliber.

Key equations:

Perceived caliber:     C*_s = C_s + c_s + e_s
  C_s = true caliber
  c_s = application enhancement (coaching, prep) ∝ resources
  e_s = noise ~ N(0, 1-reliability)

Information reliability:   ρ_s = 0.7 + 0.1 × R_s  (bounded [0.5, 0.9])

Applications per student:  K_s = 4 + 0.5 × R_s

College yield function:    Y_c = 0.2 + 0.06 × quality_percentile(c)

Utility of college c for student s:
  U(s,c) = Q_c − λ × (Q_c − C*_s)²   [inverse-distance preference]

Portfolio selection algorithm (Appendix D): Recursive EU maximisation. Students compute the marginal EU gain of adding each college to their application set, selecting colleges in decreasing marginal-utility order until the portfolio is optimal. This is computationally efficient — O(N·K) rather than combinatorial — and is the model's key methodological contribution.
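The greedy portfolio step can be sketched in Python. This is a minimal reconstruction under simplifying assumptions (independent admission events; utilities and admit probabilities passed as plain dicts); the function names are ours, not the paper's.

```python
def expected_utility(portfolio, utils, p_admit):
    """EU of an application portfolio when the student enrolls at the
    highest-utility college that admits them, assuming independent
    admission events (a simplification of the paper's setup)."""
    eu, p_all_better_rejected = 0.0, 1.0
    for c in sorted(portfolio, key=lambda x: -utils[x]):
        eu += p_all_better_rejected * p_admit[c] * utils[c]
        p_all_better_rejected *= (1 - p_admit[c])
    return eu

def select_portfolio(utils, p_admit, k):
    """Greedy marginal-EU selection (the recursive scheme described above):
    repeatedly add the college with the largest EU gain until k
    applications are chosen: O(N*k) EU evaluations per student."""
    portfolio, remaining = [], set(utils)
    for _ in range(k):
        base = expected_utility(portfolio, utils, p_admit)
        best, best_gain = None, 0.0
        for c in remaining:
            gain = expected_utility(portfolio + [c], utils, p_admit) - base
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            break  # no remaining college adds positive expected utility
        portfolio.append(best)
        remaining.discard(best)
    return portfolio
```

With `utils = {'reach': 10.0, 'match': 7.0, 'safety': 4.0}` and admit probabilities `{'reach': 0.1, 'match': 0.5, 'safety': 0.9}`, a two-application budget picks the safety first (EU 3.6) and then the match (portfolio EU 5.3), matching the diminishing-returns intuition.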

2.3 Parameter Table

| Parameter | Baseline Value | Source |
|---|---|---|
| Students per cohort | 8,000 | Stylised |
| Colleges | 40 | Stylised |
| Seats per college | 150 | Stylised |
| Caliber distribution | N(1000, 200) | ELS:2002 / College Board |
| Resource distribution | N(0, 1) | ELS:2002 |
| Resource-caliber correlation | r = 0.3 | ELS:2002 regression |
| Information reliability base | 0.7 | Calibrated |
| Information reliability slope | +0.1 per SD resources | Calibrated |
| Enhancement coefficient | 0.1 × resources | SAT prep research |
| Applications base | 4 apps | HERI/CIRP data |
| Applications slope | +0.5 per SD resources | HERI/CIRP data |
| Burn-in period | 30 simulated years | Convergence test |
| Runs per condition | 100 | SE calculation |
| Sensitivity samples | 10 (Latin Hypercube) | 5-dimensional space |

2.4 Five SES Mechanisms and Their Effect Sizes

The paper runs 8 models removing mechanisms one at a time and in combination. Latin Hypercube sampling tests sensitivity across continuous parameter ranges.

| Mechanism | Isolated effect on 90-10 gap | Interpretation |
|---|---|---|
| Resource-caliber correlation | Dominant — removing it reduces top-decile advantage from ~20× to ~4× | The achievement gap is the primary driver of sorting |
| Application enhancement | 3–6 pp at top colleges | Test prep, coaching, essay polish give wealthy students a perceived-caliber boost |
| Information quality | 2–5 pp benefit to middle students | Wealthy students better calibrate their own chances and college quality |
| Application volume | 1–2 pp | More apps = better hedging; low-SES students under-apply |
| Utility preference | Negligible independent effect | Differential valuation of prestige has little independent impact |

Key finding: The four non-achievement mechanisms combined have approximately the same effect as removing the achievement gap alone. This implies that equalising access to counselling, information, and application resources could achieve roughly half of the stratification reduction that closing the achievement gap would.

2.5 Validation

2.6 Explicit Limitations (as stated by authors)

  1. Students characterised by only two attributes (caliber + resources); no race, geography, major, gender
  2. Colleges characterised by only one attribute (quality); no financial aid, prestige brand, size
  3. No multi-round process — single apply/admit/enroll cycle per year
  4. No early decision, no early action, no binding commitment mechanics
  5. No hooks — legacy, athlete, donor, first-gen not modelled
  6. No financial aid modeling — enrollment is purely prestige-utility maximisation
  7. No race/ethnicity dimension (addressed partially in 2018 follow-up)
  8. No social network effects in college information or choice
  9. Stylised colleges — not calibrated to real institutions
  10. 30-year equilibrium assumption may not reflect real-world dynamics of rapid change

2.7 Code Availability

The original authors did not publicly release code. A third-party Python reimplementation exists:

Allard, T., Beziau, L. & Gambs, S. (2023). "[Re] Simulating Socioeconomic-Based Affirmative Action." ReScience. HAL: hal-04328511. GPL-3.0 licensed. Confirms full reproducibility of both the 2016 and 2018 results.


3. The 2018 Extension: Race, SES, and Affirmative Action

Citation: Reardon, S.F., Baker, R.B., Kasman, M., Klasik, D. & Townsend, J.B. (2018). "What Levels of Racial Diversity Can Be Achieved with Socioeconomic-Based Affirmative Action? Evidence from a Simulation Model." Journal of Policy Analysis and Management, 37(3), 630–657.

Extends the 2016 model by adding race/ethnicity as an agent attribute. Tests policy counterfactuals: what happens to racial diversity if selective colleges adopt SES-based rather than race-based affirmative action?

Key findings:

| Policy Scenario | Black/Hispanic Diversity Outcome |
|---|---|
| Race-based AA (baseline) | ~Current levels |
| SES-based AA alone | Substantially lower — cannot substitute for race-based |
| SES-based AA + race-targeted recruiting | Approaches but does not reach race-based AA levels |
| Race-neutral (no AA) | Large drop; Black/Hispanic representation falls sharply |

Mechanism: SES-based AA admits more low-income students, but since race and income are only partially correlated in the US, it misses a large fraction of middle-class URM students who would benefit from race-based AA.

Spillover finding: When elite colleges adopt SES-based AA, they "pull" high-achieving low-SES students upward in the quality distribution, reducing diversity at the next tier of colleges that don't adopt the policy.

Relevance to college-sim: This is the most directly policy-relevant ABM in the literature given the SFFA v. Harvard ruling (2023). Our model, which includes demographic attributes, income brackets, and Chetty-calibrated yield data, is well-positioned to run the same counterfactuals with real college data.


4. The Assayed Papers (2023–2025)

4.1 What "Assayed et al. 2024" Actually Is

The citation in our competitive landscape document refers to a cluster of three papers by Suha Khalil Assayed (The British University in Dubai, UAE). No single paper titled "Assayed et al. 2024" exists; the citation in the literature appears to conflate them.

4.2 Assayed & Maheshwari (2023a) — Original ABM

Citation: Assayed, S.K. & Maheshwari, P. (2023). "Agent-Based Simulation for University Students Admission: Medical Colleges in Jordan Universities." Computer Science & Engineering: An International Journal (CSEIJ), 13(1). February 2023. SSRN: 4692509. BUID repo: bspace.buid.ac.ae/items/284b1e06

Model: NetLogo 6.3. Two agent types:

  - High school students — attributes: high school GPA, family income
  - Medical colleges — attributes: reputation, seat capacity, cutoff GPA

Mechanism: Students rank colleges by preference; colleges admit by GPA cutoff; income priority is a togglable slider.

Key findings:

  - When low-income high-GPA students are prioritised, college reputation rankings emerge from cutoff GPA and student preferences rather than purely from institutional prestige
  - Cutoff marks are emergent properties of iterative simulation — colleges with high demand organically develop high cutoffs
  - Income prioritisation shifts which colleges attract top students but does not eliminate stratification

Limitations vs. college-sim:

| Dimension | Assayed 2023a | College-Sim |
|---|---|---|
| Student attributes | 2 (GPA + income) | 30+ |
| College attributes | 3 | 30+ per college |
| Application rounds | 1 | 6 (ED/EA/REA/EDII/RD/waitlist) |
| Hook modeling | None | 9 hooks with calibrated multipliers |
| Geographic context | Jordan medical schools | US selective colleges |
| Validation | None | IPEDS/CDS/Chetty calibration |
| Behavioural archetypes | None | 8 archetypes |

4.3 Assayed & Maheshwari (2023b) — Literature Review

Citation: Assayed, S.K. & Maheshwari, P. (2023). "A Review of Agent-based Simulation for University Students Admission." CSEIJ, 13(2). April 2023. SSRN: 4692455.

A survey of ABMs deployed by international admission offices. Key takeaway: very few ABMs exist in this space. Reardon et al. (2016) remains the dominant model for US selective college admissions, with limited follow-up work globally. Most international models focus on K–12 or single-country systems.

4.4 Assayed & Al-Sayed (2025) — Survey of Student Behavior Models

Citation: Assayed, S.K. & Al-Sayed, S. (2025). "Student Behaviors in College Admissions: A Survey of Agent-Based Models." International Journal of Emerging Multidisciplinaries: Computer Science & Artificial Intelligence, 4(1). DOI: 10.54938/ijemdcsai.2025.04.1.385. SSRN: 5223687.

A survey/review article (not an original simulation) cataloging ABM approaches used by international universities to study secondary education pathways and student behaviors. Key themes:

Assessment: This paper primarily validates the modeling approach our simulation already takes (heterogeneous agents, behavioral archetypes, SES-driven application behavior). It adds no new empirical data or model architecture.


5. Other Relevant Papers (2010–2025)

5.1 Sirolly (2023) — Toy Model of Application Inflation

Citation: Sirolly, A. (2023). "A Toy Model of College Admissions." Blog post.

Model parameters:

  - 50 colleges × 100 seats, 5,000 applicants
  - Applicant ability: W_i ~ N(0,1), signal W̃_i ~ N(W_i, 0.1²)
  - Utility: u_i(k) = I_k^(-β) + γ(K-k) [prestige minus distance from selectivity]
  - Belief shrinkage: P_α(admit) = (1-α)·P(private signal) + α·I_k (weight on public admit rate)

Application inflation spiral:

  1. Students become pessimistic (over-weight low public admit rates)
  2. Submit more applications as hedging behavior
  3. Admit rates fall further at each college (more apps, same seats)
  4. Students see lower public rates → apply to even more → cycle repeats

Relevance: This is the theoretical mechanism behind the 6.8 → 7.0 → 7.5+ trend in CommonApp applications per student. Our archetype-based application count implicitly models the end-state of this spiral but does not model the dynamic feedback loop. A future extension could add belief updating across simulation years.
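The spiral can be illustrated with a toy feedback loop. This is our own construction under stated assumptions (a linear hedging rule and fixed total seats), not Sirolly's exact specification.

```python
def simulate_inflation(seats=5000, students=5000, base_apps=4.0,
                       sensitivity=8.0, years=10):
    """Toy version of the application-inflation feedback loop: students
    hedge with more applications when the public admit rate is low, which
    mechanically lowers next year's admit rate because seats are fixed."""
    admit_rate, history = 1.0, []
    for _ in range(years):
        # linear hedging rule (assumed): more apps as the public rate falls
        apps_per_student = base_apps + sensitivity * (1 - admit_rate)
        admit_rate = min(1.0, seats / (students * apps_per_student))
        history.append((round(apps_per_student, 2), round(admit_rate, 3)))
    return history
```

Under these parameters, applications per student rise monotonically (4 → 10 → 11.2 → …) while the admit rate falls toward a fixed point, reproducing the spiral's qualitative shape.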

5.2 Daemen & Leoni (2025) — Netherlands Tertiary Education ABM

Citation: Daemen, [first name] & Leoni, [first name] (2025). "Simulating Tertiary Educational Decision Dynamics: An Agent-Based Model for the Netherlands." Journal of Economic Interaction and Coordination.

Relevant for structural comparison: the model combines economic motivations (wages, financial constraints) with sociological/psychological factors (peer effects, personality, geography), and tests policy counterfactuals such as student grants vs. loans on enrollment by SES. Counter-intuitive finding: greater parental emphasis on achievement doesn't consistently raise district achievement.

Difference from college-sim: Netherlands context (no selectivity rankings equivalent to US Ivy hierarchy, no hook system), but the multi-dimensional agent architecture and policy counterfactual approach are analogous.

5.3 Lee, Harvey, Zhou, Garg, Joachims & Kizilcec (2024) — ML Admissions Study

Citation: Lee, J., Harvey, E., Zhou, J., Garg, N., Joachims, T. & Kizilcec, R.F. (2024). "Algorithms for College Admissions Decision Support: Impacts of Policy Change and Inherent Variability." EAAMO '24, San Luis Potosí. arXiv: 2407.11199.

Not an ABM — uses ML ranking algorithms on real admissions data from a selective US university. Key findings:

  - Omitting race reduces the proportion of URM applicants in the top-ranked pool by 62% without improving academic merit
  - Inherent arbitrariness: all admission policies contain substantial randomness; race omission increases outcome arbitrariness for most applicants
  - Test-optional further reduces predictive accuracy

Direct relevance to college-sim: The ±25% randomness term in our admission scoring model is empirically validated — even ML-based decision systems show substantial inherent variability. The 62% URM drop figure validates our need to model demographic attributes post-SFFA.


6. Empirical Calibration Foundations

Key papers used to calibrate our hook multipliers, income effects, and student decision parameters:

6.1 Chetty, Deming & Friedman (2023) — Diversifying Society's Leaders?

Citation: Chetty, R., Deming, D.J. & Friedman, J.N. (2023). "Diversifying Society's Leaders? The Determinants and Causal Effects of Admission to Highly Selective Private Colleges." NBER Working Paper 31492.

Dataset: 2.4 million students × 139 colleges, tax records linked to admissions data (Opportunity Insights), 2010–2015 cohorts.

| Finding | Value |
|---|---|
| Top-1% income vs. middle-class at Ivy+ (same SAT) | 2× more likely to attend |
| Legacy admission advantage (same credentials) | 5–6× higher admit rate |
| Share of top-1% advantage from legacy | 46% |
| Share from athletic recruitment | 24% |
| Share from non-academic ratings (essays, recs, interviews) | 30% |
| Causal effect of Ivy+ on reaching top-1% earnings | +50% vs. flagship public |
| Causal effect on elite graduate school | ~2× |
| Causal effect on prestigious employer | ~3× |

Critical insight: The three preference factors (legacy, athlete, non-academic ratings) are uncorrelated or negatively correlated with post-college outcomes. This validates our model's separation of hooks (admission multipliers) from academic quality (admission score). Hooks change who gets in; they do not reflect academic merit.

6.2 Arcidiacono, Kinsler & Ransom (2022) — Hook Multipliers from Harvard Data

Citation: Arcidiacono, P., Kinsler, J. & Ransom, T. (2022). "Legacy and Athlete Preferences at Harvard." Journal of Labor Economics, 40(1). NBER WP 26316.

Dataset: Harvard Classes of 2014–2019, disclosed during SFFA v. Harvard litigation.

| Metric | Value |
|---|---|
| White admits who are ALDC | 43% |
| Non-white admits who are ALDC | <16% any group |
| Athlete admit rate | 86% |
| Non-ALDC overall admit rate | <5.5% |
| White ALDC who would be rejected without preference | ~75% |
| Asian-American avg SAT advantage over white applicants | +24.9 points |

ALDC = Athletes, Legacies, Dean's interest list, Children of faculty/staff.

The 86% athlete admit rate vs. the 5.5% non-ALDC baseline implies a raw ratio of ~15.6×, which, after controlling for academic quality, reduces to the ~4.5× multiplier used in our simulation. The 43% ALDC share of white admits quantifies the scale of hook-based admissions.
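The raw-ratio arithmetic can be checked directly; the reduction to ~4.5× comes from the paper's controls for academic quality and is not reproduced here.

```python
athlete_rate = 0.86    # Harvard athlete admit rate (Arcidiacono et al. 2022)
baseline_rate = 0.055  # non-ALDC overall admit rate

raw_ratio = athlete_rate / baseline_rate  # ~15.6x before quality controls
```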

6.3 Avery & Levin (2010) — ED Signaling Advantage

Citation: Avery, C. & Levin, J. (2010). "Early Admissions at Selective Colleges." American Economic Review, 100(5), 2125–2156.

ED advantage: 20–30 percentage points higher admit rate, equivalent to approximately 100 SAT points. This validates our empirical ED multiplier data (Dartmouth 3.5×, Columbia 3.4×, UChicago ~4×). The paper explains ED as a credible signaling mechanism — students demonstrate commitment, colleges gain yield certainty.

6.4 Hoxby & Avery (2013) — The Missing "One-Offs"

Citation: Hoxby, C. & Avery, C. (2013). "The Missing 'One-Offs': The Hidden Supply of High-Achieving, Low-Income Students." Brookings Papers on Economic Activity, Spring 2013.

Key finding: most high-achieving low-income students never apply to selective colleges — they apply to local institutions that often cost more after financial aid. Two behavioral types: "achievement-typical" (apply like high-income peers) and "income-typical" (apply only locally). The income-typical group is geographically dispersed, outside feeder school networks, and missed by standard recruiting.

Relevance: Our feeder-tier system (T1–T4 high schools) is the primary mechanism capturing this effect. T1 private boarding schools produce "achievement-typical" behavior; small public schools produce "income-typical" behavior with lower application counts and different target school distributions.


7. Feature Comparison: College-Sim vs. Prior ABMs

| Feature | Reardon 2016 | Assayed 2023a | College-Sim |
|---|---|---|---|
| Context | US (stylised) | Jordan medical | US selective (real data) |
| Student attributes | 2 (caliber, resources) | 2 (GPA, income) | 30+ per student |
| Student archetypes | None (homogeneous) | None | 8 behavioural archetypes |
| College attributes | 1 (quality) | 3 | 30+ per college |
| Real colleges | No (40 synthetic) | No | Yes (300 colleges, CDS-calibrated) |
| Admission rounds | 1 (undifferentiated) | 1 | 6 (ED/EA/REA/EDII/RD/waitlist) |
| Early Decision mechanics | No | No | Yes, with binding multipliers |
| Hook multipliers | No | No | Yes (9 hooks, SFFA-calibrated) |
| Athlete recruitment | No | No | Yes (3.5× at HYPSM) |
| Legacy preference | No | No | Yes (2.5×, eliminated at MIT/JHU/Amherst) |
| Donor/development | No | No | Yes (4×) |
| First-gen | Implicit (resources) | Partial (income) | Yes (explicit flag, 1.4×) |
| Financial aid | No | No | Yes (net cost by income bracket, Chetty data) |
| Yield modeling | Basic (yield ∝ quality) | No | Yes (Chetty income-bracket yield) |
| Waitlist dynamics | No | No | Yes (sequential processing) |
| Income-SAT offset | Partial (resources) | Partial | Yes (6-bracket offsets, College Board data) |
| Post-SFFA demographics | No | No | Yes (demographic bars, post-SFFA enrollment shifts) |
| Feeder school network | No | No | Yes (20 high schools, T1–T4 tier system) |
| Logistic admission model | Rank-cutoff | Rank-cutoff | Yes (sigmoid P(admit)) |
| Validation data | IPEDS patterns | None | IPEDS + CDS + Chetty + CommonApp |
| Visualisation | None | NetLogo basic | D3.js v7 Bezier arcs, live animation |
| Platform | Custom (no public code) | NetLogo 6.3 | Vanilla JS/HTML, self-contained |
| Code availability | No (Python replication exists) | Partial | Full source, single HTML file |

8. What the Literature Tells Us About Our Model's Novelty

Based on the complete review, the college-sim project makes the following distinct contributions over prior work:

8.1 Multi-Round Sequential Processing

No prior ABM models the real institutional calendar: binding ED in November, non-binding EA/REA in December, EDII in January, regular RD in late March, waitlist in April–May. Each round uses different acceptance rates and multipliers. This sequential structure is essential for modeling how students and colleges make decisions under uncertainty across time — the core of the real admissions process.

8.2 Calibrated Hook Multipliers

Hooks are the single largest unexplained factor in the Reardon model. We implement 9 hooks using multipliers derived from Arcidiacono et al. (2022), Chetty et al. (2023), and SFFA litigation data. The logistic model in logit space prevents multiplicative blow-up (a 4× donor hook and a 3.5× athlete hook do not multiply to 14×; they add their logit-space contributions). Reardon's model cannot address this at all — it has no hook parameters.
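A minimal sketch of logit-space hook stacking, assuming each multiplier is treated as an odds ratio (the simulation's actual coefficients and functional form may differ):

```python
import math

def combine_hooks(base_prob, multipliers):
    """Stack hook multipliers additively in logit space. Each multiplier m
    is treated here as an odds ratio, contributing log(m) to the logit;
    this keeps stacked hooks from pushing raw probabilities past 1.0.
    (Sketch only, not the simulation's exact parameterisation.)"""
    logit = math.log(base_prob / (1 - base_prob))
    for m in multipliers:
        logit += math.log(m)
    return 1 / (1 + math.exp(-logit))
```

For a 5% baseline, a 4× donor hook plus a 3.5× athlete hook yields roughly a 42% admit probability rather than the 70% a naive 14× multiplication would imply.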

8.3 Real Institutional Data

All 300 colleges in the dataset use real Common Data Set figures: acceptance rates (overall, ED, RD), SAT/ACT middle-50%, class sizes, yield rates, feeder demographics, and net cost by income bracket. Reardon's 40 synthetic colleges with a single quality attribute cannot capture the heterogeneity of real institutions (MIT vs. Williams vs. Michigan have radically different admission functions, hook policies, and yield dynamics).

8.4 Post-SFFA Policy Simulation

The 2023 SFFA v. Harvard ruling banned race-conscious admissions at all US colleges. Reardon et al. (2018) is the most relevant prior work, but it uses stylised colleges and predates the ruling. Our model can run the equivalent of the Harvard simulation D (removing race + ALDC preferences) with real college data and archetype-specific demographic attributes.

8.5 Income-Bracket Yield Modeling

The Opportunity Insights/Chetty dataset links 2.4 million students to enrollment outcomes by income bracket and college tier. We've extracted yield rates by income bracket for all 30 simulation colleges. No prior ABM uses this data — Reardon's yield function is a simple linear function of college quality percentile.

8.6 Behavioural Archetypes Over Homogeneous Agents

Reardon's students are homogeneous: each is a caliber-resources pair, and all use the same EU-maximising application algorithm. Real applicants are not homogeneous. An "athletic recruit" has a fundamentally different decision process from a "STEM spike" or "legacy/development" student. Our 8 archetypes (stem_spike, humanities_spike, athlete, well_rounded, arts_spike, legacy_dev, first_gen, average_strong) have different application counts, college utility weights, hook probabilities, and income distributions — capturing behavioural heterogeneity absent from all prior ABMs.


9. Open Research Questions Raised by the Literature

The literature review identifies several questions our model is uniquely positioned to answer:

  1. Post-SFFA equilibrium: After race-conscious admissions is removed, how do enrollment demographics shift across multiple cycles as colleges adapt (more aggressive SES-based AA, more financial aid outreach, more HBCU applications)? Reardon 2018 ran one round of this counterfactual with stylised colleges.

  2. ED volume spiral and market efficiency: Sirolly's application inflation model predicts a self-reinforcing cycle. Do binding-ED schools break this spiral by "locking in" a portion of the most-committed applicants early? Our model runs six rounds and could track belief-updating dynamics.

  3. Hook elimination effects: MIT, Johns Hopkins, and Amherst have eliminated legacy preferences. Caltech has never had ED. What happens to yield, class composition, and ranking when a top school unilaterally removes a hook?

  4. Feeder school information asymmetry: Hoxby & Avery's "missing one-offs" are students who don't know that elite schools would cost less for them. Our model has the income-bracket net cost data to simulate targeted outreach interventions.

  5. Waitlist cascade dynamics: When a top school admits 50 waitlist students, this creates yield vacancies at the schools those 50 students leave. That cascade runs through 4–6 tiers. No prior ABM models this chain.
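The cascade in question 5 can be sketched as a simple geometric decay. Both the functional form and the 0.8 pull-through rate below are illustrative assumptions, not calibrated values.

```python
def waitlist_cascade(initial_pulls, tiers=6, pull_through=0.8):
    """Toy cascade: each seat vacated at tier t is backfilled from tier
    t+1's waitlist with probability pull_through, so vacancies propagate
    down the hierarchy. Returns (tier, waitlist pulls) per tier."""
    pulls, per_tier = float(initial_pulls), []
    for tier in range(1, tiers + 1):
        per_tier.append((tier, round(pulls, 1)))
        pulls *= pull_through  # some vacated seats go unfilled or to deferrals
    return per_tier
```

Under these assumptions, 50 admits off a top-tier waitlist trigger roughly 184 total waitlist movements across six tiers (50 × (1 − 0.8⁶) / 0.2).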


10. Citation Index

| Paper | Year | Relevance | Used in College-Sim |
|---|---|---|---|
| Gale & Shapley | 1962 | Theoretical foundation | Conceptual framework |
| Abdulkadiroğlu & Sönmez | 2003 | Mechanism design, stability | Conceptual framework |
| Epple, Romano & Sieg | 2006 | Equilibrium financial aid | Tier hierarchy logic |
| Reardon, Kasman, Klasik & Baker | 2016 | Foundational ABM | Architecture, calibration |
| Avery & Levin | 2010 | ED signaling advantage | ED multiplier calibration |
| Hoxby & Avery | 2013 | Missing low-income applicants | Feeder tier system |
| Avery, Glickman, Hoxby & Metrick | 2013 | Revealed preference college rankings | Utility weighting |
| Reardon et al. | 2018 | Race + SES affirmative action | Post-SFFA policy baseline |
| Arcidiacono, Kinsler & Ransom | 2019/2022 | Hook multipliers (Harvard data) | Hook calibration |
| Allard, Beziau & Gambs | 2023 | Python replication of Reardon | Validation reference |
| Assayed & Maheshwari | 2023a | Jordan medical ABM (NetLogo) | Comparison point |
| Assayed & Maheshwari | 2023b | ABM literature review | Confirms field is thin |
| Sirolly | 2023 | Application inflation spiral | Application count motivation |
| Lee et al. (Cornell) | 2024 | ML admissions, URM, randomness | ±25% randomness validation |
| Chetty, Deming & Friedman | 2023 | Hook multipliers, income effects | Hook + yield calibration |
| Assayed & Al-Sayed | 2025 | Survey of behavioral ABMs | Confirms modeling approach |
| Daemen & Leoni | 2025 | Netherlands education ABM | Structural comparison |
| Fu (Chao) | 2014 | Structural equilibrium estimation | Financial aid extension |

Research notes from individual agent dives available in:

  - research/abm_reardon_2016_notes.md (870 lines — full ODD, equations, parameter tables)
  - research/abm_assayed_2024_notes.md (271 lines — all three Assayed papers + related work)
  - research/abm_literature_context.md (405 lines — matching theory, empirical foundations, code platforms)