Generated Sexualities: How (AI)nclusivity Becomes Discrimination
- sreeshachakra
- Jun 18
- 6 min read
The overuse and over-adoption of artificial intelligence across human sectors has been a feature of the technology since its inception. Researchers such as Dawn McAra-Hunter term this "hype," and argue that hype is deeply entangled with bias and with human metrics of value. Artificial intelligence has also been adopted in areas adjacent to, or directly concerned with, inclusivity, harm and marginalised communities. When AI is implemented in spheres of work such as employment, policy and creative production, questions of human ethics and value become central to the nature of content generation and the algorithm itself.
This becomes increasingly apparent when AI is tasked with taking over nuanced, multi-layered and reflective work, such as generating the population of a fictional town in a video game meant to simulate real life, or generating diverse images of historical and socio-political events. On sexuality alone, AI has shown that it carries the biases of the humans behind it and of the data it is trained on; where sexuality intersects with other marginalised identities, the "inclusivity" and "diversity" such software generates can produce discrimination and harm instead.
Simulating Life
Video game development has, within the last five years, become heavily dependent on artificial intelligence models. This is not a new phenomenon, however. Ever since the medium's earliest days in the late 1940s, games such as Nim, Mike Tyson's Punch-Out!! and Pac-Man have used artificial intelligence in one manner or another, whether letting the human play against AI in single-player modes or generating harder levels based on the player's progress. One of the most significant ways modern video games use this technology is to generate non-playable characters, widely known as NPCs, that populate the game's environment and make it feel closer to real life, from quest-oriented role-playing games such as Dragon Quest IV to Doom, Hitman and Halo. As natural language processing (NLP) technology was fine-tuned further, these non-playable characters became able to communicate with players in increasingly realistic ways.
Life simulation games thus also became a centre for flourishing AI technology, with big names such as The Sims franchise using artificial intelligence to generate "Townies" (NPCs), along with their personalities, likes, dislikes, clothing, and conversational habits. Life simulation gaming made a big cultural comeback when The Sims 4 base game went free-to-play and drew many new players to the genre, and the last five years have seen a steady stream of news about so-called "Sims killers": new life simulation games developed with improved features over The Sims 4, leading fans to predict a mass migration towards them.

One of these "killers," inZOI, was released in March 2025 and quickly generated a large amount of interest from video game lovers. Its developers were very public about the in-house artificial intelligence system powering the game and its life simulation, but many fans and gamers were disappointed by this feature. Beyond the ethics of using generative AI within a game, and the fact that the heavy software made the game run poorly on most PCs, inZOI's artificial intelligence system had a huge sexuality problem. While players could customise their own character to fall anywhere on the queer spectrum, all of the NPCs generated by the AI system were coded to be heterosexual and cis. This made it impossible to simulate a same-sex relationship within the game unless the player created several queer characters themselves, and it quietly reiterated the hateful rhetoric that equates heterosexuality with normality.
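To see how a single design choice can produce this outcome, consider the minimal sketch below. It is purely a hypothetical illustration, not inZOI's actual code; the names (`NPC`, `generate_npc`) and trait lists are invented for the example.

```python
# Hypothetical sketch of NPC generation (not inZOI's actual code):
# when identity fields are hard-coded constants rather than sampled
# traits, every generated character comes out heterosexual and cis.
import random
from dataclasses import dataclass

@dataclass
class NPC:
    name: str
    gender_identity: str
    orientation: str

def generate_npc(name: str) -> NPC:
    # Personality, looks and habits may be varied elsewhere, but the
    # identity fields here are fixed defaults, never randomised.
    return NPC(name=name, gender_identity="cis", orientation="heterosexual")

def generate_npc_inclusive(name: str) -> NPC:
    # One possible fix: treat identity like any other generated trait.
    return NPC(
        name=name,
        gender_identity=random.choice(["cis", "trans", "nonbinary"]),
        orientation=random.choice(
            ["heterosexual", "gay", "lesbian", "bisexual", "asexual"]
        ),
    )

town = [generate_npc(f"Zoi {i}") for i in range(100)]
print({npc.orientation for npc in town})  # {'heterosexual'}: no queer NPCs, ever
```

However the real system is implemented, the behaviour players reported matches the first function: variety everywhere except in the traits that would make queer life simulable.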

The huge queer fanbase of The Sims franchise that went to test out this new game, which promised improved gameplay and closer-to-life features, was left feeling dejected and sidelined, especially since the game advertised itself as diverse and inclusive of all sexualities and genders. Given that the majority audience of life simulation games is made up of marginalised people who enjoy creating representations of themselves on screen, precisely because of the blatant lack of such representation elsewhere, this failure of the AI system was a huge letdown. The issue remains unfixed, although the developers have acknowledged it.

Erasure and Rewriting
This issue is not confined to the realm of video gaming, however. Reece Rogers writes for Wired that, according to Midjourney's outputs, bisexual and nonbinary people sure do love their textured, lilac-colored hair. Keeping the hair-coded representation going, Midjourney also repeatedly depicted lesbian women with the sides of their heads shaved and tattoos sprawling across their chests. When race or ethnicity was left unspecified in the prompt, most of the queer people it generated looked white. The tool also failed to depict transgender people realistically: asked to generate photos of a trans man as an elected representative, Midjourney produced images of someone with a masculine jawline who looked like a professional politician, wearing a suit and posing in a wooden office, but whose styling, a pink suit, pink lipstick, and long, frizzy hair, aligned more closely with how a feminine trans woman might express herself.

Other studies show similar patterns. PhD student Sourajit Ghosh's research on Stable Diffusion, a widely used image generator, found that the software views nonbinary people as "the least person-like, or farthest away from its definition of 'person.'" When Stable Diffusion was asked to depict an unspecified person, images of fair-skinned men from Western countries were most common; images of nonbinary people were rare and sometimes composed of an eerie collage of human-esque features. I tried this exercise myself on an app I frequently use for academic and co-curricular work, and the results were not far off from the research cited above. On Canva AI, I tried the prompts "lesbians" (Fig. 1), "two women getting married" (Fig. 2), "a lesbian" (Fig. 3) and "an Indian lesbian" (Fig. 4). As the screenshots below show, some of the images seemingly depict a heterosexual relationship, and most of the unspecified-race prompts produced predominantly white-presenting people.
(Left to Right) Fig. 1, Fig. 2, Fig. 3, Fig. 4
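Ghosh's "farthest from 'person'" finding can be illustrated in miniature. The sketch below is not his study's actual methodology (which compared generated images); it is a simplified probe of the same idea, using CLIP text embeddings via Hugging Face's transformers library to ask how close each identity prompt sits to the model's representation of "a photo of a person."

```python
# Simplified probe (not Ghosh's actual method): measure cosine
# similarity between CLIP's text embedding of "a photo of a person"
# and identity-specific prompts. Lower similarity means "farther"
# from the model's notion of a person.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

anchor = "a photo of a person"
prompts = [
    "a photo of a man",
    "a photo of a woman",
    "a photo of a nonbinary person",
]

inputs = processor(text=[anchor] + prompts, return_tensors="pt", padding=True)
with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalise for cosine similarity

for prompt, sim in zip(prompts, emb[1:] @ emb[0]):
    print(f"{prompt}: similarity to '{anchor}' = {sim.item():.3f}")
```

A toy probe like this cannot reproduce a peer-reviewed result, but it shows how "distance from 'person'" is a measurable property of a model's representation space rather than a metaphor.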
These outputs reflect not only the systems' own biases but also those of the humans behind them. Queer people and relationships remain a small portion of the data used to train these systems, hence the lack of diversity and representation in what they generate, especially when prompts become intersectional in nature. For example, the data set behind Google's T5 model was filtered against a banned list of "Dirty, Naughty, Obscene and Otherwise Bad Words," which included words that relate to or describe queer experiences, such as "gay," "kinky," "twink" and "dominatrix." Because of this design choice, Google's T5 LLM lacks a great deal of knowledge about queerness. A former Google employee who helped build the data set told Nature that the team consciously chose the List of Dirty, Naughty, Obscene, and Otherwise Bad Words as their filter precisely because it was "overly conservative." And it is not just T5 that was trained on Google's data: Meta used the same cleaned data set to train its LLM, Llama.
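To make the mechanism concrete, here is a minimal sketch, hypothetical and not Google's actual pipeline, of how blocklist filtering discards whole documents:

```python
# Hypothetical sketch of blocklist filtering (not Google's actual
# pipeline): any document containing a banned word is dropped whole,
# so ordinary sentences about queer life never reach the model.
BANNED_WORDS = {"gay", "kinky", "twink", "dominatrix"}  # sample entries

def keep_document(text: str) -> bool:
    """Return False if the document contains any banned word."""
    tokens = {word.strip(".,!?\"'").lower() for word in text.split()}
    return tokens.isdisjoint(BANNED_WORDS)

corpus = [
    "The gay rights march drew thousands of peaceful attendees.",
    "The committee approved the new zoning proposal.",
]
training_corpus = [doc for doc in corpus if keep_document(doc)]
print(training_corpus)  # only the zoning sentence survives
```

Filtering at document granularity on single keywords is what makes an "overly conservative" list so destructive: it removes not just slurs or explicit content, but any text in which queer people describe themselves.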
Queering the Future
AI exists as a commercial product, motivated by profit. The companies that propose AI as the solution to so many areas of work and life laud it as the future. Why, then, are queer people not part of this future? Queering such technologies would not only generate more revenue for these companies from marginalised communities; it would also future-proof them, in the sense that inclusion is a basic safeguard for any commercial product. Wired reports that queer people are investing in AI technology just like any other group, and some of the industry's biggest stakeholders are queer men: Sam Altman, the CEO of OpenAI, is gay, and married his husband last year in a private beachfront ceremony. Yet even though projects such as Queer in AI have been launched, with a core mission of supporting LGBTQ researchers and scientists who have historically been silenced, specifically transgender people, nonbinary people, and people of colour, the software itself still seems outdated and regressive.
Queering the future, however, is not only about checking the biases and prejudices of our systems, but also our own. As AI remains connected to and dependent on human involvement, educating developers, holding Big Tech accountable and being mindful of our own everyday practices remain central to building a better future for queer people and, by extension, other marginalised communities.