The Illusion of Choice in Modern Healthcare
Key Takeaways
• Digital health systems promise choice but increasingly deliver curation.
• Algorithms, insurance tiers and actuarial models shape care options long before patients see them.
• These models can improve efficiency and access but can also create quiet forms of coercion.
• Restoring real choice requires new forms of visibility into data, incentives and personalization boundaries.
1. The New Architecture of Choice
For most of modern medicine, choice was simple. A person picked a doctor because a neighbor recommended one, or because the clinic was close, or because the family had seen that physician for years. Agency meant the ability to select care based on personal preference and lived experience.
Today the journey begins somewhere else - especially in the United States, and to a lesser extent in countries like the Netherlands, where I live.
Before a patient sees any options, the system has already filtered them.
Insurer networks decide which doctors appear. Digital front doors triage symptoms and suppress certain pathways. Recommendation engines prioritize “preferred” providers. In some systems, patients are routed before they even know there was a fork in the path - even as they keep the feeling of making decisions.
This feels like choice because the interface offers options. But most options have already been removed upstream.
The intention is not malicious. In a world of scarce specialists and rising costs, filtering can reduce delays, lower risk and guide people efficiently. The tension lies in what the user believes they are choosing versus what the system has already chosen for them.
The algorithm does not remove freedom. It defines and scopes it - which is at the very least unsettling to realize in a world with a growing capacity for sophisticated manipulation.
2. The Invisible Tiers of Care
Virtually every health system relies on risk tiers. In the best cases, these models help identify high-risk patients so they receive more attention. When a person with heart failure gets flagged for early follow-up, it can prevent hospitalization. When predictive tools route vulnerable patients to nurses or dietitians, it can improve outcomes.
But these tiers also shape access in ways that are rarely visible. Insurers use internal models to determine which treatments need approval. Some hospitals prioritize slots based on predicted adherence or expected cost. If a digital triage tool believes a patient is low priority, the system may guide them to a virtual visit instead of a specialist.
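The tiering-and-routing logic described above can be sketched in a few lines. This is a hypothetical illustration - the thresholds, tier names, and pathways are invented, not drawn from any real hospital or insurer system:

```python
# Hypothetical sketch of risk-tier routing. Thresholds and pathway
# names are illustrative only - real systems use far richer models.

def assign_tier(risk_score: float) -> str:
    """Map a predicted risk score (0-1) to an internal care tier."""
    if risk_score >= 0.7:
        return "high"    # flagged for early clinician follow-up
    if risk_score >= 0.3:
        return "medium"  # routed to nurse or dietitian outreach
    return "low"         # guided toward a virtual visit

def route(risk_score: float) -> str:
    """Pick a care pathway from the tier - the patient never sees the score."""
    pathways = {
        "high": "specialist appointment",
        "medium": "care-coordinator call",
        "low": "virtual visit",
    }
    return pathways[assign_tier(risk_score)]
```

The point of the sketch is the last comment: the routing is deterministic and explainable in ten lines, yet the patient only ever sees the output, never the score or the cutoffs.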
This is both a strength and a weakness. It improves efficiency, but it narrows the paths available.
A 2023 Health Affairs study reported that nearly forty percent of hospitals in the United States use AI-based risk scoring in patient management. These tools can improve care coordination, but they also influence decisions without patients knowing how or why.
Digital health is personal, but the structure behind it is tiered.
3. The Quiet Coercion of Data
Modern healthcare is built on actuarial science, so it is worth explaining clearly.
An actuarial model estimates the likelihood that a person will use certain types of care. It does this by analyzing large pools of data. Inputs often include age, family history, past claims, chronic conditions, lifestyle data and medication patterns. Insurers use these predictions to set premiums, design coverage levels and plan financial risk.
At its best, actuarial modeling keeps systems solvent and allows preventive programs to be funded. At its worst, it becomes a silent gatekeeper that shapes access without the patient ever seeing the logic.
Digital health tools add another layer. Fitness tracking, sleep data and glucose logs can all feed models that reward or penalize behavior. Lower premiums for daily step totals can help drive healthier habits. But incentives can also become subtle forms of pressure when refusal means higher costs.
When every action becomes a data point, freedom becomes performance.
Quiet coercion does not feel like coercion. It feels like a helpful suggestion that just happens to be mandatory in practice.
4. The Algorithmic Gatekeepers
AI systems now help route patients, prioritize schedules and generate recommendations. Used responsibly, this improves care.
• Triage algorithms can reduce delays in emergency departments.
• Clinical decision support tools can reduce diagnostic errors.
• Chatbots can answer basic questions faster than phone lines.
• Predictive alerts can warn clinicians before a patient destabilizes.
These are real gains. Efficiency is not the enemy.
The tension emerges when the logic behind these systems is hidden. An AI assistant may present three treatment options, but perhaps ten were available. A recommendation engine may rank clinicians based on cost, contract obligations or predicted adherence. A symptom checker may steer a patient away from in person care to manage capacity.
Each step is rational inside the system. Each step shapes the patient’s path. Agency becomes the residue of design decisions made elsewhere.
Humans still appear to choose, but the system has already prepared the menu.
5. The Path Back to True Choice
Choice is not the number of options on the screen. Choice is understanding how the options were created.
To restore real agency, healthcare needs new forms of visibility. Not just transparency in the legal sense, but clarity that lets people understand and influence the path.
Five forms of visibility that I - perhaps naively - think matter:
- Visibility into data logic. Patients should know why a recommendation appears, how it was generated and which factors influenced the ranking.
- Visibility into data lineage. People should be able to see where their health data came from, how it has been used across the system and which entities accessed it.
- Visibility into incentive structures. Every AI-driven suggestion carries a goal. Patients should know whether the system optimizes for cost, efficiency, adherence or health outcomes.
- Visibility into personalization boundaries. Users should be able to cap or adjust how deeply AI systems personalize recommendations. This includes the right to refuse certain types of modeling.
- Visibility into value capture. Patients should understand what commercial or institutional value is generated from their data and how that value flows through the system.
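To make the fourth point concrete, a personalization boundary could be as simple as a user-held settings object that systems must consult. This is a hypothetical sketch - the field names and the idea of such an API are my invention, not a feature of any real platform:

```python
# Hypothetical patient-controlled personalization boundaries.
# Field names and semantics are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class PersonalizationBounds:
    depth: str = "basic"  # "none", "basic", or "full" personalization
    refused_models: set = field(default_factory=set)  # modeling the user opts out of

    def allows(self, model_name: str) -> bool:
        """A model may personalize only if depth is not 'none'
        and the user has not refused that type of modeling."""
        return self.depth != "none" and model_name not in self.refused_models

# A user who accepts triage support but refuses adherence prediction:
bounds = PersonalizationBounds(refused_models={"adherence-prediction"})
```

Under this sketch, `bounds.allows("triage")` is true while `bounds.allows("adherence-prediction")` is false - the refusal travels with the user rather than living in the vendor's configuration.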
These ideas move choice from a superficial interface feature to a real form of sovereignty - of course, only for those who are interested 😄. They also foreshadow the type of health data relationships that future systems will need. Systems where users retain authorship of their data. Systems where incentives are visible. Systems where personalization is co-created, not imposed.
This is not a rejection of digital health. It is its evolution.
6. Why It Matters
The promise of digital healthcare was empowerment. The reality is more complicated.
Algorithms and actuarial tools have made care more efficient and more predictive. But they have also made it more opaque. Patients believe they are choosing, but often the system is choosing for them. This is not a failure of technology. It is a failure of visibility.
The illusion of choice is manageable. The solution is not to remove algorithms. It is to reveal them - in very simple terms so we all can understand. People cannot reclaim agency until they can see how their options are shaped.
Healthcare is becoming personalized, but personalization without clarity becomes guidance without consent. The future belongs to systems that offer both efficiency and transparency.
Both intelligence and agency. Both guidance and choice.
PS: Got some questions on what "agency" means. Here is a simple definition in this context: Agency means the ability of a person to understand their options, make their own decisions, and act on what they believe is best for their health.
Sources (some really cool!)
United States Office of the National Coordinator for Health IT.
Hospital Trends in the Use, Evaluation, and Governance of Predictive AI 2023–2024.
https://www.healthit.gov/data/data-briefs/hospital-trends-use-evaluation-and-governance-predictive-ai-2023-2024
Haug CJ.
Artificial Intelligence and Machine Learning in Clinical Medicine, 2023.
New England Journal of Medicine.
https://www.nejm.org/doi/abs/10.1056/NEJMra2302038
New England Journal of Medicine.
AI in Medicine – Topic Hub.
https://www.nejm.org/ai-in-medicine
Obermeyer Z, Powers B, Vogeli C, Mullainathan S.
Dissecting racial bias in an algorithm used to manage the health of populations.
Science. 2019.
https://www.science.org/doi/10.1126/science.aax2342
Cross JL, et al.
Bias in medical AI: Implications for clinical decision-making.
Journal of Medical Ethics, 2024.
https://pmc.ncbi.nlm.nih.gov/articles/PMC11542778/
Society of Actuaries.
Actuarial Glossary.
https://www.soa.org/4a537f/globalassets/assets/files/edu/actuarial-glossary.pdf
OECD.
Health Data Governance for the Digital Age.
Landing page: https://www.oecd.org/en/publications/2022/05/health-data-governance-for-the-digital-age_5c42de41.html
Direct PDF: https://www.oecd.org/content/dam/oecd/en/publications/reports/2022/05/health-data-governance-for-the-digital-age_5c42de41/68b60796-en.pdf
OECD.
Health at a Glance 2023: Digital Health Chapter.
https://www.oecd.org/en/publications/2023/11/health-at-a-glance-2023_e04f8239/full-report/digital-health_d79d912b.html
World Health Organization.
Global Strategy on Digital Health 2020–2025.
https://www.who.int/docs/default-source/documents/gs4dhdaa2a9f352b0445bafbc79ca799dce4d.pdf
Global Digital Health Monitor.
State of Digital Health 2023.
https://static1.squarespace.com/static/5ace2d0c5cfd792078a05e5f/t/656f97969301e337ada15270/1701812128734/State%2Bof%2BDigital%2BHealth_2023.pdf