The ethics of Ant Afu: Health AI within China’s super-app ecosystem

By Miranda Qianyu Wang

On 15 December 2025, Ant Group rebranded its healthcare app as “Ant Afu,” repositioning it from a diagnostic tool to an “AI Health Friend.” Embedded within Alipay – the digital payment platform used daily by over a billion people in China – Afu offers personalised health companionship and connects users directly to a service loop of pharmacies, insurance providers, and over 5,000 hospitals. The scale of adoption reflects the power of this ecosystem: within one month of launch, Afu attracted nearly 30 million active users and handled over 10 million daily queries.

Such rapid adoption amplifies the ethical stakes of this new model. The app raises a number of ethical concerns, including privacy, diagnostic accuracy, algorithmic bias, and the potential to undermine traditional doctor–patient relationships. This post will focus specifically on how integrating an anthropomorphic “AI Friend” into a ubiquitous super-app raises distinct concerns regarding AI’s perceived clinical authority and the transparency of commercial pathways in a seamless digital environment.


When Friendship Reframes Clinical Authority

One of the key drivers of such widespread adoption is a dual design strategy that combines friendliness with clinical authority. The interface constructs this through two primary features. First, it establishes accessibility through friendship cues – most visibly via the “Afu” cartoon avatar and a companion-like tone. Second, it integrates digital-twin avatars of real-world physicians. This second feature goes beyond text: it presents the doctor’s photograph and enables simulated phone calls using an AI-generated voice modelled on that specific physician.

When the system asks follow-up questions, probes medical history, and speaks with the reassuring cadence of a trusted doctor, it creates the sensory illusion of a clinical consultation rather than an automated output. While this design increases accessibility, it introduces risks as well: users may project clinical authority onto the software, potentially leading to a level of reliance that exceeds the system’s intended scope.

This concern is particularly salient given the user demographic. Reporting indicates that roughly 55% of users reside in lower-tier cities – areas marked by lower economic development and more limited infrastructure than China’s major metropolises. In regions characterised by medical resource scarcity, the distinction between an AI clone and a real doctor may matter less to users than the immediate accessibility of expert-branded guidance. In such contexts, standard legal disclaimers are unlikely to compete with the visceral, lived experience of “seeing a doctor.”

Consequently, ethical governance must attend not only to what the system claims to do, but to what it invites users to believe it is doing. In clinical encounters, professionals are expected to safety-net and assume accountability. An automated system cannot straightforwardly assume these obligations, yet by mimicking the sensory markers of care, it invites reliance as if it could.


Neutrality and Transparency in an Integrated Pathway

Ant Afu has publicly emphasised a “no-ads” stance, a choice that ostensibly prioritises patient interests and user experience. However, the absence of banner ads does not strictly equate to neutrality. Influence in such ecosystems is often structural rather than promotional; even without paid placements, the platform’s algorithms must select which hospital, pharmacy, or service to present as the “next step” following a symptom check. In this context, the risk is that recommendations may functionally resemble steering, guiding users through pre-determined pathways without the visual markers of traditional advertising.

The result is a profound transparency challenge. Within this ecosystem, Ant/Alibaba-affiliated healthcare services (such as Alibaba Health Pharmacy or Ant Insurance Services) sit alongside public hospital access. Because the interface is uniform and lacks ad markers, it becomes structurally difficult for a user to discern the logic behind a specific recommendation – whether it is a clinically motivated judgement, administrative convenience, or ecosystem-optimised routing shaped by internal partnerships. The ethical concern here is not necessarily one of deception, but of opacity: without explainability, the convenience of “one-click” healthcare risks obscuring the distinction between patient interests and commercial routing.


Conclusion

While its long-term clinical impact remains to be seen, Ant Afu has demonstrated remarkable market uptake since its launch just last month. With its massive coverage and high user engagement, the platform is already reshaping the healthcare landscape in China, proving that the demand for integrated, companion-like health AI is immense. Its ambition and rapid scale serve as a blueprint for the global industry, likely influencing similar initiatives such as the health-focused iterations of ChatGPT.

Ant Afu is therefore not just a local phenomenon but a preview of the future. The questions raised here – concerning clinical authority, transparency, and trust – make it all the more urgent to demand ethical safeguards proportionate to the platform’s reach and influence.


Author: Miranda Qianyu Wang

Affiliation: Durham University, Centre for Ethics and Law in the Life Sciences

Competing interests: None to declare

Social media: LinkedIn | www.linkedin.com/in/miranda-qianyu-wang
