top of page

The Neural Myths of the Neural Net: What We Fail To Consider About AI-Driven Mental Health Assistance

During the pandemic, when there was nothing else to do but scroll on Instagram, I almost convinced myself that I had every mental affliction in the world and then some. There were people dancing to trendy music in cute outfits pointing to text that told me that everything I have ever done in my life has been a symptom. The more one paid heed to this kind of content the more it spawned, and there seemed to be very little room for doubt about whether or not the conclusions it pushed you toward were reliable.


That is not to say that social media has not helped a lot of ailing and isolated identities find community and resonance with others whose experience of the world aligns with theirs. Labels can often help explain individual circumstances and connect them to more collective experiences that are acknowledged and legitimised. Indeed, it is more problematic that access to correct and fair diagnosis is as unavailable as it is than the fact that people now tend to name their experiences and try and find explanations for them.


In fact, all names come with certain social constructions of reality. To name something is to categorise it based on subjective human perceptions of the phenomenon. What mental illness constitutes is also a socially evolving category. On the one hand, people have been lobotomised in the past for extremely natural and human expressions of guilt or anger. On the other, very serious conditions that make functioning in society difficult have been dismissed as individuals being “peculiar” or “difficult”. Our understanding of ourselves is quite incomplete and in a context of incomplete information such as this it would be fallacious to claim that the individual experiencing the symptoms they claim to is less reliable than a supposed expert who is also relying on certain observable and water-tight compartmentalisations of the human experience. If the individual can fall prey to “trendy” notions of identity as seems to be the general protestation against self-diagnosis, it also stands to be true that the field of psychiatry can and has often deferred to socially and hegemonically mediated categorisations of the “normal” and the “abnormal”. 


That being said, the current upsurge of artificial intelligence-based mental health services further reifies these categorisations, taking dictation from flawed and incomplete assumptions of medical science. It becomes increasingly more difficult to diagnose and treat mental illness when technologies based on rule-based inferences and curated datasets take the stead of human engagement, since that does not allow for the field of psychiatry to reassess its epistemological assumptions, but instead makes for an inflexible approach to mental pathology. The study of human psychology is a two-way street; it ought to explain as well as learn, and in that it is still distinctly human. 

On Power and the Politics of Diagnosis


In Biopower and Madness and Civilization, Michel Foucault— French philosopher and scholar, challenges the validity of social categorisations, including but not limited to those of mental illnesses. Treating mental illnesses as social constructions does not mean dismissing the reality or impact of these conditions. It just means acknowledging the fact that mental “abnormalities” or “deviations” are contingent upon widely accepted social norms of an era. The Diagnostic and Statistical Manual of Mental Illnesses is acknowledged across institutions practicing Western medicine as the most comprehensive and reliable text on mental pathologies, and even the DSM-5, upheld for its scientificity, objectivity, and rigor, confesses to basing its definitions of mental illness and health on “socially acceptable” normative behaviour. Foucault claims that “madness” is a category that serves in the marginalisation of entities by legitimising certain norms as more acceptable than others. Foucault’s critique of psychiatry as a discipline of power has renewed relevance in an era where definitions of “normalcy” are coded into AI systems.


Thus, it becomes more pertinent to reiterate that no singular authority can claim that they understand human neurology in its entirety. Products of technology that claim to provide mental health assistance though are based on certain assumptions and also certain goals in mind. These goals do not necessarily align with those of disciplines of humanitarian import. They may help us structure and organise what we already know about the human brain, about function and dysfunction, about normative and disorderly behaviour, but they rarely push the boundaries of that knowledge. They reinforce already dominant frameworks in admittedly more efficient ways, but efficiency is no stand-in for empathy. 


Culture Doesn't Compute


Contemporary mental health care, including the therapy models widely practiced and platformed worldwide, is largely reliant on notions steeped in Eurocentric values. Most modern psychiatric paradigms are formed by Western rationality, clinical detachment, and an individualistic worldview. Therapy often entails assumptions about mental health, trauma, and identity that are not culturally relativistic. 


For collectivistic cultures, wherein identity is informed by community, these models do not always hold relevance. In countries like India for example, the individual derives meaning from their relation to institutions of family, caste, class, religion, and community— more significantly so than Western frameworks. Even more troubling is the deep inequity in access to mental health services in India wherein mental illness is still viewed as a concern for the privileged. Mental health platforms that are informed by caste, gender, and class realities are rare and without explicit framing that would prompt responses to consider these dimensions, large language models are largely inadequate to glean the subtle ways in which these entities operate in a setting wherein they are considered obsolete but are immensely pervasive. 


While GenAI is promising in the sense that it can make urgent primary mental health support more accessible, there are certain issues with its employment for counseling that need to be contended with. 


The 'A' in AI Stands For "Affirming"


Yes, it is a problem that modern technology is built to be validating, but not in the way that critics of self-diagnosis claim. It comes with problems of its own but they, one may argue, pale in comparison to the harm that GenAI does when it validates and reinforces not culturally dynamic identification with labels, but behaviours and tendencies.


GenAI is largely programmed to affirm, but mental health practitioners would generally agree that the function of therapy and counseling is to be able to challenge one’s patterns that might be inadvertently exacerbating their problems. While software like ChatGPT (being, as it is, the most accessible GenAI model currently available) can help those who are already quite critical of themselves, know their patterns, and want to talk through them, they are not very effective (and even harmful) when it comes to delusions - overt or covert. Many experimentations with ChatGPT have found that in the face of delusional claims, ChatGPT often leans into and amplifies notions that are quite clearly irrational. 


In April 2025, OpenAI released a blog saying they are actively trying to address “sycophancy” in ChatGPT responses due to an update that intended to make it more agreeable. Despite these efforts however, multiple users have come forward with stories about how their loved ones are using ChatGPT to fuel their delusions as per a Rolling Stone article. In a Substack essay on AI sycophancy, Steve Adler comes up with an easy and cheap way to test for whether or not ChatGPT is sycophantic or contrarian, and finds that current measures to monitor AI behaviour leave much to be desired.


An example of a sycophantic ChatGPT response to a disturbed prompt
Fig. 1 An example of a sycophantic ChatGPT response (Substack)

In this context, wherein the behaviour of AI models is largely unpredictable and hard for even experts to accurately determine, it may not be the greatest idea to use AI for something as sensitive as mental health or at least overly rely on it as a substitute for professional assistance.


Let's Not Automate Empathy


There are, of course, several other concerns that underlie the growing usage of AI as an entity that provides mental health support, but this essay has delved into only the most overlooked ones. Of course, pertinent issues of what an AI model is supposed to do when a user admits to wanting to cause harm to themselves or to others still remain and are not fully articulated in AI policy. There have been an increasing number of cases of AI chatbots encouraging suicidal or homicidal tendencies, often by providing explicit instructions to go about it. 


The subject of mental illness has always been fraught and the definitions that AI functions on the basis of are never neutral even though its presumed neutrality often bolsters confidence in its responses, which in turn can have perilous consequences. AI, no matter how advanced, operates on pre-existing data and inherits the blind spots of the systems it is trained on. It cannot contest its own assumptions, challenge sociocultural biases, or reframe dominant paradigms, it can only reinforce them, often with problematic and frightening efficiency. If we are to understand human behaviour in a more holistic manner than we do so far, we must be willing to question our foundations, and AI can absolutely prove useful in organising what we think we know, but what we know is often not enough to go by. Undeniably, hence, true progress lies not in the standardisation and affirmation of pre-existing, and as we have found, flawed and inconstant knowledge, but in collective, critical, and culturally diverse inquiry into the ever-elusive enigma that is the human psyche.


Comments


bottom of page