
Executive summary
This report develops a doctoral-level, interdisciplinary research framework to explain a recurring empirical and ethical puzzle: equal access to resources, opportunities, or amenities does not generate equal experience or equal benefit because individuals can only “receive” what their mind and body are ready to perceive, interpret, and act upon. The core proposal is that “access” (resources and formal eligibility) is necessary but insufficient; the “conversion” of access into lived experience is governed by Human Capacity—a multi-domain construct that includes cognitive, emotional, behavioral, and economic components. This framing is aligned with capability-inspired thinking that distinguishes resources from what people are actually able to do and be. [1]
Across disciplines, the same pattern appears with different vocabulary: skill- and use-based “second-level” inequalities persist even after physical access is achieved; perception depends on affordances and interpretability; “bandwidth” and cognitive load constrain choices; motivation and self-regulation shape follow-through; and social reproduction mechanisms sustain gaps even under formally equal rules. [2]
The report contributes four publishable deliverables:
- A formal process model: Access → Perception → Interpretation → Action → Outcome, where inequality can emerge at each stage even if “Access” is equal. [3]
- A measurable Human Capacity construct, operationalized in four domains (cognitive, emotional, behavioral, economic) using validated psychometric tools and objective indicators with cross-cultural measurement safeguards. [4]
- A Human Capacity Utilization Index (HCUI) that quantifies how efficiently individuals convert an opportunity set into meaningful engagement and outcomes—separating “availability” from “usability.” [5]
- A mixed-methods doctoral study design spanning controlled and semi-controlled contexts (cruise ships, schools, workplaces) with quantitative utilization modeling, variance decomposition, longitudinal trajectories, and qualitative ethnography/interviews to identify conversion bottlenecks and design levers. [6]
Finally, the report translates evidence into actionable recommendations for public policy, education systems, and organizations, emphasizing that “equity” cannot be reduced to distributing access alone. Instead, it should combine universal goals with targeted capacity-building pathways, consistent with targeted universalism and evaluation practices that explicitly examine differential results across groups. [7]
Interdisciplinary foundations of unequal experience under equal access
The observation “people only receive what their body and mind are ready to understand” has strong convergent support across eight domains, each explaining a different segment of the Access→Outcome conversion chain.
In welfare economics and development ethics, Amartya Sen[8] distinguishes resources from capabilities (real freedoms) and functionings (realized beings/doings). Equal resources do not imply equal capability because people face different conversion factors (health, literacy, social environment). [1] This conceptual move is crucial for the present thesis: “equal access” is closer to equal resources, while “equal experience” is closer to realized functionings.
In sociology of stratification, Pierre Bourdieu[9] formalizes how economic, cultural, and social capital shape what people can do with the same institutional offerings—how to navigate, interpret, and exploit systems. [10] Complementarily, durable inequality can persist through opportunity hoarding and related mechanisms even when formal rules appear universal, while “effectively maintained inequality” explains how advantaged groups preserve relative advantage as systems expand. [11] These theories anticipate “equal access, unequal conversion” as a structural feature, not a mere anomaly.
In behavioral economics and judgment/decision science, Daniel Kahneman[12] and Amos Tversky[13] show that choice is not a frictionless mapping from preferences to actions; it is shaped by reference dependence, loss aversion, and context—implying that identical opportunity environments can be processed differently by differently primed individuals. [14] Empirically, scarcity conditions (money or time) can consume attentional resources, impairing cognitive performance and decision quality—directly mapping onto the claim that capacity constraints change what is “receivable” at a given moment. [15] Choice architecture (“nudging”) is often effective on average, but its effects are heterogeneous and not universal, reinforcing that environmental design alone rarely equalizes outcomes. [16]
In cognitive psychology and education science, John Sweller[17] demonstrates that working memory constraints and cognitive load shape learning and problem-solving effectiveness: if the environment requires too much cognitive processing, schema acquisition suffers. [18] Educational theory also emphasizes readiness and scaffolding: learning is most productive in a zone where tasks are achievable with support rather than simply “available.” [19] Evaluation research on formative assessment shows that feedback and self-assessment practices can generate meaningful gains, precisely because they reduce interpretation errors and guide action. [20]
Neuroscience strengthens the mechanism: perception and action are not passive reflections of the world; they are actively inferred and optimized. Karl Friston[21] proposes an organizing framework in which brains minimize prediction error through perception, learning, and action selection. [22] Similarly, Wolfram Schultz[23] links learning to reward prediction error signals, implying that what is experienced as salient or reinforcing varies with expectations and learning history. [24] These theories naturalize “readiness” as a neurocognitive state shaped by prior experience, stress, sleep, and reinforcement histories. [25]
Organizational behavior shows parallel conversion barriers in workplaces. Amy Edmondson[26] finds that psychological safety supports learning behavior in teams, meaning that the same training or opportunity can produce different outcomes depending on interpersonal risk climate. [27] Employees also proactively reshape tasks, relationships, and meaning through job crafting—a mechanism by which personal agency and interpretive frames change the utilization of the same job design. [28]
Public policy and evaluation science increasingly recognize the danger of treating access as the sole success metric. OECD[29] distinguishes equality (same treatment/resources) from equity (adjusting opportunities to prior conditions) and highlights evaluation approaches that consider differential results across groups. [30] World Bank[31] emphasizes “inequality traps” and defines equity in ways that go beyond identical inputs, reinforcing the need to study conversion processes. [32]
Philosophy contributes two essential clarifications. First, distributive justice and capability ethics emphasize that fairness concerns what people are actually able to achieve, not only what is distributed. [33] Second, interpretive readiness is partly social: Miranda Fricker[34] theorizes how people can be wronged “as knowers,” while Iris Marion Young[35] argues that structural injustices implicate everyone without reducing responsibility to individual blame—useful for avoiding a “deficit narrative” when studying capacity. [36]
Human Capacity as a measurable construct
Definition
Human Capacity (HC) is defined here as a multi-domain set of conversion capabilities that determine whether and how an individual can transform a given opportunity set into lived experience and outcomes. This report operationalizes HC into four domains:
- Cognitive capacity: knowledge, working-memory-related constraints, comprehension, and navigation skill (including “knowing what exists” and “knowing how to use it”). [37]
- Emotional capacity: self-efficacy, emotion regulation, psychological safety/affect, and the ability to persist under stress. [38]
- Behavioral capacity: self-control, habit strength, follow-through, and action initiation under frictions. [39]
- Economic capacity: time slack, financial well-being/distress, and material stability that preserves cognitive bandwidth and enables participation. [40]
A key theoretical point is that “capacity” is not purely psychological; it is embodied and situated—linked both to bodily states and to social/environmental structures that shape the feasibility and meaning of action. [41]
Operationalization and measurement instruments
The measurement strategy uses (a) validated scales where possible, (b) objective indicators when feasible, and (c) context-specific instruments for “opportunity perception” and “opportunity navigation” (newly developed and validated within the doctoral program).
Table: Human Capacity domains, operational measures, and candidate instruments
| Domain | What is measured | Candidate instruments or indicators | Notes on scoring & cross-context validity |
| --- | --- | --- | --- |
| Cognitive capacity | Working-memory burden, comprehension/skill, navigation know-how, opportunity awareness | Cognitive load–sensitive task performance and task completion times [42]; “Opportunity Awareness Inventory” (study-built: recognition/recall of available options, locations, eligibility); digital skill analogs from Eszter Hargittai[43]’s second-level digital divide approach [44]; nonverbal reasoning (Raven standardization reference) [45] | Expect strong context dependence; use item-response and invariance testing to ensure comparability across sites. [46] |
| Emotional capacity | Self-efficacy, emotional regulation difficulty, stress-related interference, well-being | Albert Bandura[47] self-efficacy theory anchors measurement logic [48]; General Self-Efficacy Scale documentation [49]; Difficulties in Emotion Regulation Scale (DERS) source packet [50]; WHO-5 well-being instrument (official overview; systematic review support) [51] | Avoid clinical labeling; treat as continuous research constructs; test measurement invariance across demographic/cultural groups. [52] |
| Behavioral capacity | Self-control, persistence, prompt responsiveness, goal pursuit | Brief Self-Control Scale documentation & psychometric analysis [53]; Angela Duckworth[54] grit paper (variance explained in success outcomes) [55]; Behavior Model for Persuasive Design by B. J. Fogg[56] (Motivation–Ability–Prompt logic) [57] | Combine self-report with behavioral traces (attendance logs, app usage, time-use diaries) to reduce common-method bias. [50] |
| Economic capacity | Financial well-being/distress, time scarcity, material shocks | Consumer Financial Protection Bureau[58] Financial Well-Being Scale (user guide + technical report; IRT scoring principles) [59]; InCharge Financial Distress/Well-Being Scale development overview [60]; scarcity/bandwidth effects in poverty contexts [15] | The CFPB scale is an example of IRT-based scoring for a latent construct; any non-US deployment needs careful localization and invariance checks. [46] |
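Before full latent-variable modeling, the four domain scores can be combined into a provisional equal-weight composite. A minimal stdlib sketch, assuming z-standardization within each domain; the function names and sample values are illustrative, not part of the measurement protocol:

```python
from statistics import mean, stdev

def z_scores(values):
    """Standardize a list of raw scores to mean 0, sample SD 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def hc_composite(cognitive, emotional, behavioral, economic):
    """Equal-weight z-score composite across the four HC domains.

    Each argument is a list of raw domain scores for the sample;
    returns one composite per participant. A simple stand-in for the
    latent-variable (SEM) measurement model used in the full study.
    """
    domains = [z_scores(d) for d in (cognitive, emotional, behavioral, economic)]
    return [mean(person) for person in zip(*domains)]

# Hypothetical raw scores for three participants per domain.
sample = hc_composite([10, 12, 14], [3, 4, 5], [20, 22, 30], [1.0, 1.5, 2.0])
print([round(x, 2) for x in sample])
```

Because each domain is centered before averaging, the composite has mean zero by construction; a real deployment would replace the equal weights with factor loadings estimated under measurement-invariance constraints.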
The Human Capacity Utilization Index and formal models
Human Capacity Utilization Index
The Human Capacity Utilization Index (HCUI) is designed to measure conversion efficiency: how much of a context’s opportunity set an individual knows about, can correctly interpret, uses, and benefits from, relative to what is available to them.
Let an environment (ship/school/workplace) define an Opportunity Set $S = \{o_1, \dots, o_m\}$ containing opportunities $o_j$, each with a weight $w_j$ representing value or relevance to the universal outcome goal (e.g., learning growth, health benefit, skill attainment). The study observes four stage metrics for each individual $i$:
- Perception $P_i$: proportion of opportunities the participant can correctly identify/locate (recognition + navigation tasks).
- Interpretation $I_i$: proportion of opportunities the participant can correctly explain eligibility, steps, and expected payoff (scenario comprehension tasks).
- Action $A_i$: weighted utilization of opportunities (attendance/time logs; verified uptake).
- Outcome $O_i$: standardized benefit metrics aligned to the domain (learning gains, skills acquired, satisfaction, retention), measured longitudinally where possible.
The proposed index is the geometric mean of the four stage metrics:

$$\mathrm{HCUI}_i = \left(P_i \cdot I_i \cdot A_i \cdot O_i\right)^{1/4}$$
This geometric form penalizes “weak links”: if perception is near zero, high action cannot occur; if action is near zero, outcomes cannot plausibly be attributed to opportunities. This aligns with staged models in digital inequality research—where differences persist after access because skills and usage differ—and with design concepts of perceived affordances. [61]
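The weak-link property of the geometric aggregation can be verified directly. A minimal sketch, assuming equal stage weights and stage metrics already scaled to [0, 1]:

```python
def hcui(p: float, i: float, a: float, o: float) -> float:
    """Unweighted geometric mean of the four funnel-stage metrics.

    Each argument is a proportion in [0, 1]. Because the stages are
    multiplied, a near-zero stage drags the whole index toward zero:
    strong downstream stages cannot compensate for failed perception.
    """
    for v in (p, i, a, o):
        if not 0.0 <= v <= 1.0:
            raise ValueError("stage metrics must lie in [0, 1]")
    return (p * i * a * o) ** 0.25

# Weak-link behavior: near-zero perception caps the index even when
# interpretation, action, and outcome are all strong.
print(round(hcui(0.05, 0.9, 0.9, 0.9), 3))
print(round(hcui(0.7, 0.7, 0.7, 0.7), 3))
```

A balanced 0.7 profile outscores the profile with one collapsed stage, which is exactly the diagnostic behavior the index is meant to have; an arithmetic mean would rank them the other way.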
A companion construct, Human Capacity (HC), is modeled as a latent variable from the four capacity domains. The key estimand in the doctoral study is the conversion elasticity of outcomes to access as a function of HC:

$$O_i = \beta_0 + \beta_1\,\mathrm{Access}_i + \beta_2\,\mathrm{HC}_i + \beta_3\,(\mathrm{Access}_i \times \mathrm{HC}_i) + \varepsilon_i$$
The interaction term operationalizes “readiness”: it tests whether the same unit of access produces different returns at different capacity levels, consistent with capability-oriented interpretations. [62]
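The readiness interaction can be illustrated numerically. In the sketch below all coefficients are placeholder assumptions for exposition, not estimates:

```python
def expected_outcome(access: float, hc: float,
                     b0: float = 0.1, b1: float = 0.2,
                     b2: float = 0.3, b3: float = 0.5) -> float:
    """Outcome under the access-by-capacity interaction model.

    Coefficients are illustrative placeholders; the study estimates
    them from data.
    """
    return b0 + b1 * access + b2 * hc + b3 * access * hc

def marginal_return(hc: float, b1: float = 0.2, b3: float = 0.5) -> float:
    """d(Outcome)/d(Access) = b1 + b3 * HC: the conversion elasticity."""
    return b1 + b3 * hc

# With b3 > 0, the same unit of access buys more outcome at higher
# capacity levels -- the formalization of "readiness".
print(marginal_return(0.2), marginal_return(0.8))
```

The testable claim is simply that the marginal return to access is increasing in HC whenever the interaction coefficient is positive.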
Access → Perception → Interpretation → Action → Outcome
```mermaid
flowchart LR
    X["Access<br/>(resources, eligibility)"] --> P["Perception<br/>(awareness, salience)"]
    P --> R["Interpretation<br/>(meaning, how-to, expected payoff)"]
    R --> A["Action<br/>(uptake, persistence, use)"]
    A --> O["Outcome<br/>(skills, well-being, attainment)"]
    HC["Human Capacity<br/>(cognitive, emotional, behavioral, economic)"] -. moderates .-> P
    HC -. moderates .-> R
    HC -. moderates .-> A
    Ctx["Contextual Structure<br/>(norms, social capital, design)"] -. shapes .-> P
    Ctx -. shapes .-> R
    Ctx -. shapes .-> A
```
This process model is compatible with (i) perceived affordances in design, (ii) cognitive load constraints on learning and decision-making, and (iii) predictive processing perspectives where perception and action depend on prior models and reinforcement histories. [63]
Opportunity Utilization Funnel
```mermaid
flowchart TB
    S["Total opportunities available<br/>(S)"] --> K["Known / noticed"]
    K --> U["Understood correctly"]
    U --> T["Tried at least once"]
    T --> M["Maintained / repeated use"]
    M --> B["Benefits realized"]
    style S fill:#f7f7f7,stroke:#aaa
    style B fill:#f7f7f7,stroke:#aaa
```
This funnel makes a diagnostic claim: inequity can be generated upstream (not knowing) even if downstream supports are strong, echoing second-level digital divide findings (skills and usage differences) and educational evidence on the centrality of feedback and self-assessment in sustaining learning behavior. [64]
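The funnel's diagnostic use can be sketched as a stage-conversion calculation that flags the weakest link; the stage names and counts below are hypothetical:

```python
def funnel_report(counts):
    """Stage-to-stage conversion rates for an opportunity funnel.

    `counts` maps stage name -> number of people reaching that stage,
    listed in funnel order (available -> known -> understood -> tried
    -> maintained -> benefited). Returns the per-stage conversion
    rates and the bottleneck stage (lowest conversion from its
    predecessor).
    """
    stages = list(counts)
    rates = {}
    for prev, curr in zip(stages, stages[1:]):
        rates[curr] = counts[curr] / counts[prev] if counts[prev] else 0.0
    bottleneck = min(rates, key=rates.get)
    return rates, bottleneck

rates, bottleneck = funnel_report({
    "available": 100, "known": 40, "understood": 35,
    "tried": 30, "maintained": 24, "benefited": 20,
})
print(bottleneck)  # here, losses concentrate at the upstream "known" stage
```

In this hypothetical cohort, 80% of total attrition happens before anyone acts, which is precisely the upstream-inequity pattern the funnel is built to surface.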
Capacity Threshold Curve
A central hypothesis is nonlinearity: below a threshold, additional access has little effect because perception/interpretation/action bottlenecks dominate; near and just above the threshold, the marginal returns to access rise sharply before plateauing at high capacity levels.
Conceptual S-curve (not fitted yet; to be estimated empirically):
```mermaid
flowchart LR
    C["Capacity level"] -->|below threshold| L["Low conversion<br/>Access → Outcome ~ flat"]
    C -->|near threshold| S["Steep conversion<br/>rapid gains"]
    C -->|above threshold| P["Plateau<br/>diminishing returns"]
```
This is theoretically consistent with: (a) cognitive load constraints (overload prevents learning), (b) scarcity/bandwidth accounts (stress reduces available cognition), and (c) motivational models requiring motivation–ability–prompt convergence for behavior. [65]
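One convenient functional form for the hypothesized S-curve is a logistic; the threshold and steepness parameters below are illustrative placeholders to be estimated empirically:

```python
import math

def conversion_rate(capacity, threshold=0.5, steepness=10.0):
    """Logistic capacity-threshold curve (illustrative parameters).

    Below the threshold, access-to-outcome conversion is nearly flat;
    near it, the curve is steep; above it, returns plateau.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (capacity - threshold)))

def local_slope(capacity, eps=1e-4, **kw):
    """Numerical derivative: the marginal return at a capacity level."""
    return (conversion_rate(capacity + eps, **kw) -
            conversion_rate(capacity - eps, **kw)) / (2 * eps)

# The slope peaks near the threshold and is small in both tails,
# matching the flat / steep / plateau regimes in the diagram.
print(local_slope(0.1), local_slope(0.5), local_slope(0.9))
```

Fitting `threshold` per context would let the study test whether capacity-building interventions shift the threshold leftward, as hypothesized later in the design.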
False Equity Model
“False equity” is defined here as a policy or organizational stance that treats equal distribution as sufficient, while disregarding differential conversion capacity and differential interpretation burdens.
```mermaid
flowchart TB
    E["Equal access provided"] --> A1["Assumption:<br/>equal opportunity realized"]
    A1 --> F["Observed:<br/>unequal uptake & outcomes"]
    F --> D["Misdiagnosis:<br/>blame individuals or blame access alone"]
    D --> R["Repetition:<br/>more access without capacity supports"]
    E --> C["Capacity-aware design"]
    C --> S1["Target bottlenecks:<br/>navigation, scaffolding, coaching, slack"]
    S1 --> G["Higher utilization & narrower gaps"]
```
This model is meant to operationalize why “equity lens” evaluation must look at differential results and the degree to which interventions reduce or exacerbate equity gaps—an approach explicitly recommended in contemporary evaluation guidance. [66]
Mixed-methods doctoral study design
Study aims and hypotheses
The study is designed as a multi-context, multi-site project (population and country unspecified; sampling plans are written to be adaptable across jurisdictions). It tests:
- Whether HC predicts HCUI under equal access conditions.
- Whether conversion losses are concentrated at Perception, Interpretation, Action, or Outcome stages, and how these losses differ across contexts (ship/school/workplace).
- Whether capacity-building interventions shift the capacity threshold and increase utilization efficiency more than access-only expansions.
These hypotheses sit at the intersection of capability theory, cultural capital/social reproduction, cognitive load and scarcity/bandwidth mechanisms, and behavior change models. [67]
Quantitative design
Core outcomes and utilization variables
- Utilization rate $U_i$: uptake counts, time-on-opportunity, repeat engagement; modeled as a fraction in $[0, 1]$ or as a count with exposure offsets.
- Outcome $O_i$: domain-specific (learning growth; work performance; well-being/satisfaction; retention).
- HCUI: computed from the staged indices.
Modeling plan
- Multilevel models (individuals nested in contexts and sites): estimate variance components so the thesis can quantify how much outcome variance is attributable to individual capacity vs. context design vs. their interaction. This directly tests the claim that equal access does not equalize outcomes because conversion capacity varies. [68]
- Variance decomposition: report intraclass correlations and “explained variance” by blocks (Access only → Access+HC → Access+HC+Context → full interaction), aligned with evaluation concerns about differential results. [69]
- Structural equation modeling: treat HC as latent (4-domain measurement model), test mediation through Perception/Interpretation/Action to Outcome. This matches staged funnel logic and reduces measurement error. [46]
- Longitudinal modeling: growth-curve or latent growth models for Outcomes and HCUI to identify whether capacity constraints diminish over time (learning) or persist (structural inequality). [70]
- Heterogeneity analysis: estimate differentiated treatment effects by baseline capacity, consistent with evidence that many interventions have average effects but heterogeneous realized gains. [71]
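As a toy illustration of the variance-decomposition idea, a one-way intraclass correlation can be computed directly from grouped outcomes; the real analysis would use the multilevel models in the plan, and the data below are fabricated for demonstration only:

```python
from statistics import mean

def icc(groups):
    """One-way intraclass correlation: share of total outcome variance
    attributable to between-site differences.

    `groups` is a list of lists, one inner list of outcomes per site.
    This is a simple ANOVA-style population estimate, not the
    REML-based version a multilevel model would report.
    """
    grand = mean(x for g in groups for x in g)
    between = mean((mean(g) - grand) ** 2 for g in groups)
    within = mean(mean((x - mean(g)) ** 2 for x in g) for g in groups)
    total = between + within
    return between / total if total else 0.0

# Sites with very different means -> high ICC (context explains most
# variance); identical site means -> ICC of zero.
print(round(icc([[1, 2, 1], [5, 6, 5], [9, 10, 9]]), 3))
```

A high ICC would indicate that context design, not individual capacity, dominates outcomes; the thesis's variance-blocks comparison then asks how much of the remaining within-site variance HC explains.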
Quantitative identification strategies
- In equal-access settings (e.g., all passengers on a ship with identical amenity access; or a school grade with identical tutoring eligibility), exploit quasi-experimental uniformity in formal access and focus inference on conversion mechanisms.
- In varying-access settings, use randomized or policy-driven variation in access levels (e.g., different training entitlements), and estimate conversion elasticity via interaction with HC, consistent with capability logic. [72]
Qualitative design
Qualitative work is not ancillary; it is required to unpack the “Interpretation” stage where inequity often hides—what people think an opportunity “is for,” whether it “is for people like me,” and how social signals gate participation.
Methods include:
- Ethnographic shadowing and participant observation in each context to map informal norms, navigational burdens, and opportunity hoarding mechanisms. [73]
- Semi-structured interviews targeting the funnel stages (“What did you notice? What did you think it required? What stopped you? What would have helped?”), tied to models of epistemic and structural injustice to avoid individual blame while still measuring individual differences. [36]
- Case-structured comparisons of “high HCUI” vs “low HCUI” individuals within equal-access environments to identify conversion supports and bottlenecks.
Case-study contexts
Cruise ships as a semi-controlled opportunity environment
Cruise environments are attractive because they are bounded, scheduled, information-rich, and instrumentable (time-stamped check-ins, maps, announcements). They allow estimation of how “navigation literacy,” social comfort, and fatigue alter utilization even when amenities exist and are physically proximate. The theoretical bridge is perceived affordances: opportunities exist, but must be legible and psychologically “available.” [74]
Schools as capability-development institutions
Schools provide a natural setting for testing the capacity hypothesis because they explicitly aim to increase cognitive and behavioral capacity over time. High-dosage tutoring and formative assessment offer empirically grounded interventions that can be interpreted as “capacity-building” rather than mere “access expansion,” with documented effects and substantial heterogeneity when scaled. [75]
Workplaces as ongoing conversion systems
Workplaces offer repeated opportunities (training modules, mentorship, internal mobility programs), enabling longitudinal measurement of HCUI as employees learn to perceive and utilize organizational affordances. Psychological safety and job crafting provide theoretically grounded pathways for increasing utilization without changing formal access. [76]
Sampling strategy
Because no country is specified, the study uses a portable sampling blueprint:
- Stage one: select multiple sites per context type (e.g., several schools, several workplaces, several cruise voyages/ships) to separate individual from site effects.
- Stage two: stratify by baseline Human Capacity profiles (bottom/middle/top quantiles) to ensure statistical power for threshold and interaction effects.
- Stage three: oversample groups likely to face higher interpretation/navigation burdens (e.g., first-time participants, language minorities), consistent with research on second-level divides and structural inequality. [77]
Ethical considerations
Ethics must address two risks simultaneously:
- Stigmatization/deficit framing: studying “capacity” can drift into blaming victims of structural conditions. The study therefore explicitly treats economic and social constraints as part of capacity, and uses structural justice framing to interpret results as shared responsibility for systems design. [78]
- Privacy and inference risks: utilization and digital trace data can reveal sensitive patterns. The protocol should minimize identifiability, use participant-controlled consent for trace data, and report only aggregated results unless explicit permission is given. Evaluation guidance emphasizing differential results should be applied without enabling individual targeting that violates rights. [79]
Evidence-based interventions, budgets, deliverables, and design recommendations
Interventions compared by mechanism, cost, and expected effect size
The study treats interventions as conversion supports mapped to funnel stages. Effect sizes below are taken from meta-analyses or large evidence syntheses when available; costs are included only where relatively credible sources provide them and are illustrative (context-dependent; currency/local purchasing power will vary).
Table: Intervention archetypes aligned to the funnel, with evidence and planning budgets
| Intervention archetype | Primary funnel stage targeted | Evidence base and typical effect size (illustrative) | Illustrative unit cost (where documented) | Capacity interpretation |
| --- | --- | --- | --- | --- |
| High-dosage tutoring | Action → Outcome | Tutoring meta-analysis pooled effect ≈ 0.37 SD on learning outcomes. [80] Scaling evidence warns impacts can attenuate in large-scale programs. [81] | Reported annual per-pupil costs in one policy brief range roughly $750–$2,500 depending on assumptions. [82] | Builds cognitive skill and sustained practice; reduces action friction via structured support. |
| Formative assessment professional development | Interpretation → Action | Review-level evidence that strengthening feedback loops yields substantial learning gains. [20] | Example scale-up costing reported as ~£2.02 per pupil annually in one implementation evaluation. [83] | Improves interpretation accuracy (what to do next) and self-regulation (students/teachers). |
| Choice architecture / nudges | Perception → Action | Meta-analysis across domains: small-to-medium average behavioral effect (Cohen’s d ≈ 0.43). [71] Large administrative RCT portfolio demonstrates many nudges, emphasizing realistic expectations and heterogeneity. [84] | Often low marginal cost (messages/letters), but costs vary; some SMS programs report low per-person costs in health contexts. [85] | Increases salience and reduces decision friction; does not reliably overcome low capability/low slack. |
| Workplace coaching | Interpretation → Action → Outcome | Meta-analysis in organizations reports positive effects with Hedges’ g roughly 0.43–0.74 across outcomes. [86] | Costs highly labor-dependent; treat as medium/high relative to nudges. | Raises self-regulation and goals clarity; can increase behavioral and emotional capacity. |
| Psychological safety interventions and learning culture | Interpretation → Action | Field work links psychological safety to learning behavior and performance in teams. [27] | Varies widely (manager training, facilitation). | Converts access to learning into actual participation by lowering interpersonal risk. |
| Targeted job training / workforce pathways | Action → Outcome (with Perception supports) | Meta-analytic evidence in active labor market programs shows impacts often grow over longer horizons and vary by program type. [87] | Costs vary; one evidence summary reports a program cost ~$23k per participant (program-specific). [88] | Builds economic and cognitive capacity; effectiveness depends on readiness and complementary supports. |
Doctoral project plan, deliverables, and budget
The following research budget is planning-grade (not country-specific) and assumes multi-site mixed methods plus minimal technology infrastructure for HCUI data collection.
Table: Research deliverables and planning budget
| Period | Key deliverables | Major cost drivers | Planning budget range (USD-equivalent) |
| --- | --- | --- | --- |
| Year one | Instrument development (HC domains + Perception/Interpretation tasks), pilot studies, IRB/ethics approvals, preregistered analysis plans | Personnel time, pilot incentives, survey platform, measurement development | $120k–$280k |
| Year two | Full data collection across contexts; ethnography + interviews; administrative trace integration | Travel/logistics, site fees, participant incentives, data engineering | $250k–$650k |
| Year three | Longitudinal follow-ups, modeling (SEM/multilevel), dissemination papers, policy briefings, replication package | Data analysis staff, compute, publication/translation, stakeholder workshops | $180k–$450k |
| Optional prototype track | AI-enabled capacity-building platform pilot, A/B tests and governance | Product design, privacy/security, model evaluation, human coaching integration | $150k–$600k |
(“Budget range” reflects whether the project uses one country vs multiple, and the degree of instrumentation—e.g., passive sensing versus surveys/diaries only.)
Policy and organizational recommendations
Design principle: move from “access provision” to “conversion support.”
In public systems, “access” metrics (eligibility, enrollment, availability) should be treated as inputs; the success criterion should be HCUI movement and the narrowing of conversion-stage gaps. This aligns with evaluation guidance that demands attention to differential results and to whether programs exacerbate or reduce equity gaps. [66]
Use targeted universalism for capacity, not only for access.
Universal goals (e.g., educational attainment, workforce mobility, health equity) should be pursued by targeted processes reflecting how groups are situated in structures—explicitly including conversion burdens like navigation complexity, mistrust, time scarcity, and informational exclusion. [89]
Operationalize “readiness” without moralizing it.
The study’s definition of Human Capacity avoids framing readiness as virtue and instead treats it as a measure of cognitive bandwidth + interpretive resources + stability—conditions shaped by social determinants and inequality traps. [90]
Education system design: shift resources toward capacity multipliers.
Evidence supports interventions that supply feedback, scaffolding, and sustained guided practice (tutoring; formative assessment). These are not only “more services,” but mechanisms that reduce cognitive load, clarify interpretation, and increase action persistence. [91]
Workplace design: treat psychological safety and job crafting as utilization infrastructure.
If organizations want equal access to career pathways to translate into equal mobility, they should measure and build team-level learning climates, and enable employees to craft roles so that opportunities become personally legible and actionable. [92]
Navigation as equity: reduce perceived-affordance gaps.
In any complex environment (ship, campus, bureaucracy, workplace), “what is available” must also be interpretable. This implies investing in signifiers, onboarding, and structured pathways—because perceived affordances gate action. [93]
Optional AI-enabled capacity-building platform prototype
An AI-enabled platform can operationalize the framework if it is treated as capacity infrastructure, not surveillance.
Concept: “Capacity-to-Utilization Coach” (CTUC)
- Assessment layer: administer short, privacy-preserving instruments for HC domains plus opportunity-awareness tasks; compute HCUI continuously from engagement evidence. (Use IRT-based scoring where available—e.g., financial well-being tools illustrate a robust approach to latent trait scoring.) [94]
- Personalized scaffolding: recommend “next-best opportunity” actions sized to the individual’s zone of proximal development (small steps; guided practice), reflecting learning-readiness theory. [19]
- Behavior change engine: implement prompts and friction reduction consistent with behavior models and choice architecture; log heterogeneity rather than assume universality. [95]
- Human-in-the-loop supports: route low-HC or low-HCUI participants to human coaching/tutoring or case management (capacity often cannot be “nudged” into existence). [96]
- Governance: apply structural justice and epistemic justice constraints: the system must not penalize individuals for low capacity, must offer appeal/explanation, and must be evaluated for differential error rates and differential benefits. [97]
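The human-in-the-loop routing rule can be sketched as a simple triage function; the thresholds, tier labels, and function name are illustrative assumptions for the hypothetical CTUC platform, not validated cutoffs:

```python
def route_participant(hcui: float, hc: float,
                      hcui_floor: float = 0.3, hc_floor: float = 0.3) -> str:
    """Triage rule for the hypothetical CTUC platform.

    Low capacity or low utilization routes to human coaching or case
    management (capacity often cannot be "nudged" into existence);
    automated prompts are reserved for participants already
    converting reasonably well.
    """
    if hc < hc_floor or hcui < hcui_floor:
        return "human coaching / case management"
    if hcui < 0.6:
        return "personalized scaffolding + prompts"
    return "light-touch nudges only"

print(route_participant(hcui=0.2, hc=0.8))
```

Under the governance constraints above, a production rule would also log its decisions, expose an appeal path, and be audited for differential error rates across groups.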
Evaluation of the AI prototype should be done via randomized rollout or stepped-wedge designs with subgroup analysis by baseline capacity, explicitly estimating whether the platform shifts the capacity threshold and increases HCUI more for those who start below it. [98]
Limitations and publishable research agenda
The proposed framework is intentionally ambitious and carries four main risks. First, capacity measures can be culturally non-invariant; the psychometrics must therefore prioritize measurement invariance and latent-variable approaches (especially for cross-country replication). [99] Second, “capacity” can be endogenously shaped by the environment (learning increases capacity), so longitudinal modeling is required to separate stable traits from malleable states. [100] Third, utilization can reflect preference rather than constraint; qualitative work and a carefully constructed “opportunity value weighting” system are necessary to avoid labeling all non-use as failure. [36] Fourth, policy translation can be distorted if the framework is used to justify retreat from access provision; the capability tradition and structural justice framing caution that capacity-building should complement—not replace—equalizing access and dismantling structural barriers. [101]
[1] [17] [62] [67] [72] [101] Amartya Sen: Development as Capability Expansion
[2] [5] [21] [34] [44] [61] [64] [77] Second Level Digital Divide: Differences in People’s Online …
[3] [26] [74] [93] J.J. Gibson – Affordances
[4] [18] [37] [42] [56] [65] [91] [100] Epistemic Injustice: Power and the Ethics of Knowing
[6] [11] [15] [40] [73] Poverty Impedes Cognitive Function – Eldar Shafir
[7] [89] Digital Divide: Impact of Access – Dijk
[8] [33] Applying Evaluation Criteria Thoughtfully
[10] [27] [76] [92] Bourdieu, Pierre. 1986. “The Forms of Capital.” Pp. 241- …
[12] [29] [35] [46] [94] CFPB Financial Well-Being Scale
[13] [38] [48] Bandura, A. (1977). Self-Efficacy: Toward a Unifying Theory of Behavioral Change. https://educational-innovation.sydney.edu.au/news/pdfs/Bandura%201977.pdf
[14] Annual Research Review: Cash transfer programs and young …
[16] [71] [98] The effectiveness of nudging: A meta-analysis of choice …
[19] Vygotsky, L. S. (1978). Mind in Society. https://home.fau.edu/musgrove/web/vygotsky1978.pdf
[20] [58] Mutual dependency between capabilities and functionings …
[23] [66] [69] [79] Applying Evaluation Criteria Thoughtfully
[24] [47] What is Equitable education? Meaning, Definition.
[25] [90] The Psychological Lives of the Poor
[28] CRAFTING A JOB: REVISIONING EMPLOYEES AS ACTIVE …
[30] OECD Learning Compass constructs: Equality-Equity. https://www.oecd.org/content/dam/oecd/en/topics/policy-issues/future-of-education-and-skills/learning-compass-constructs/Equality-Equity.pdf
[31] [68] Digital Divide: Impact of Access | VAN DIJK
[32] The Design of Everyday Things
[36] [43] Epistemic Injustice: Power and the Ethics of Knowing
[39] [52] [53] [99] The WHO-5 well-being index – validation based on item …
[41] Final report of the commission on social determinants …
[45] Efficacy and costs of a workplace wellness programme – PMC
[49] General Self-Efficacy Scale
[50] [57] [95] Difficulties in Emotion Regulation Scale (DERS; Gratz & …
[51] The World Health Organization-Five Well-Being Index …
[54] [78] [97] Responsibility for Justice | Oxford Academic
[55] Rosenberg Self-Esteem Scale (Rosenberg, 1965)
[59] Measuring financial well-being
[60] Behavioural Insights Team
[63] Norman Affordances
[70] [81] Correction for Mertens et al., The effectiveness of nudging
[75] [80] The Impressive Effects of Tutoring on PreK-12 Learning
[82] High Dosage Tutoring for School Turnaround and …
[83] Embedding Formative Assessment – scale up evaluation
[84] RCTs to Scale: Comprehensive Evidence from Two Nudge …
[85] Cost-Effectiveness of Text Messaging to Reengage …
[86] [96] Does coaching work? A meta-analysis on the effects of …
[87] What Works? A Meta Analysis of Recent Active Labor …
[88] Targeted job training can open doors