Everyday Applications and AI Intrusion
- DEVIKA MENON 2333126
- Jun 10
- 4 min read
As a food enthusiast, some of my earliest memories of exploration were, weirdly enough, in the living room with a Sunday newspaper. I would flip through the weekend supplement in search of food articles and small reviews tucked between lifestyle columns and crossword puzzles. A couple of years later, as Google crept into our lives, I would borrow my mother’s phone and read food blogs, which gradually replaced the print recommendations and offered broader choices. Soon, I found myself engrossed in food blogs and even vlogs, not just as a viewer but as a budding reviewer myself during middle school. This early curiosity was furthered by the numerous series that aired on travel and lifestyle channels. The lush visuals of sizzles and steam, combined with evocative descriptions of global cuisines, nurtured in me a lasting appreciation of food.
In recent years, however, this exploratory habit has shifted onto Instagram. For the past two years, food vlogging, especially on Instagram, has played a critical role in shaping Bengaluru’s evolving cafe culture. The city’s penchant for aesthetic brunch spots and fusion menus lends itself naturally to visual storytelling, and short-form reels on Instagram have become everyone’s guide. They not only provide a glimpse into the space and food, but also offer a curated and often candid review of the spot. During one such scroll, I encountered something unexpected that, instead of enhancing, disrupted my experience.
In the midst of these reels, I was abruptly presented with a Meta AI Overview that attempted to summarise the reels I had searched for. For a platform built on visual experience, this omnipresent layer of AI summary felt jarringly out of place. After all, the very appeal of these reels lies in their organic, personal curation; to then be presented with a bland, generated summary that added no value was disappointing. In fact, it diluted the authenticity of the content and intruded upon my user experience.

AI integration is no longer the futuristic marvel it once seemed. Its ubiquity spans from workspaces to creative platforms, and while public opinion is divided, one growing sentiment is clear: many users are weary of the constant, often invasive, and uninvited presence of artificial intelligence and generated content across everyday platforms. In response, there has been a visible pushback where users consciously avoid AI-generated content, especially short-form material, because of its lack of emotional or creative depth. Increasingly, users express a desire to return to “human-made” content that is deliberate, nuanced, and reflective of time, care, and authenticity. Creators and influencers now advocate for a certain kind of digital detox and seek refuge in slower, older forms of content, including blogs, indie magazines, and early-2000s websites, devoid of the presence of AI.
Ironically, corporations continuously justify this proliferation of AI in the development and functioning of digital technology. From optimising workflows to enhancing consumer engagement, these narratives claim that AI offers immense value. However, if one digs beneath this seemingly polished surface, a troubling trend of overengineering is evident, driven by what Evgeny Morozov famously termed technological solutionism: the belief that all social and personal problems can be solved through technology. What makes the current wave more problematic is that many integrations are not rooted in solving a problem at all. Instead, they seem motivated by a fear of obsolescence and the pressure to maintain a competitive edge. This leads tech companies to incorporate AI where it is not needed, and where its presence can even be counterproductive.

This trend was perhaps most visible in Google’s recent “AI Overview” feature. Users accustomed to a clean list of search results are now greeted with AI-generated summaries that often repeat or misrepresent information. It is peculiarly ironic: despite the popularity of AI tools, many users return to search engines like Google to search manually, presumably for depth, comparison, or credibility, only to be served an AI Overview anyway. Forcing an AI-generated summary on this demographic, in particular, not only misreads user intent but also undermines the ethos of user agency. The summaries have also been reported to be inaccurate or loaded with information the user did not seek. What is even worse is that, just like many other new features on everyday apps like Spotify, the option to disable this feature is far from accessible. This experience of being forced into AI-mediated interaction is not only inconvenient but also deeply unsettling. Some users have devised techniques, like including swear words in their search queries, just to bypass AI-generated summaries. Others have turned to older search engines that respect user control and choice.
This uncritical and pervasive integration of AI is not limited to search engines. Messaging platforms like WhatsApp have moved beyond mere communication and now offer Meta AI tools within each chat, enabling AI-generated images and text, all within the chat interface. Instagram’s direct messaging, too, has embedded the same AI model in its chats. Pinterest has recently attempted to introduce features ensuring transparency with its users about AI involvement in content curation. But this is the exception rather than the standard practice. Most companies remain unbothered by how deeply most users resent this imposing and irksome update. So, why are tech companies so insistent on integrating AI, even at the risk of alienating users?
The answer, unfortunately, lies in data and the profit it yields. AI systems thrive on data, and every interaction directly feeds the algorithms. In this economy of attention and analytics, users are not just consumers but commodities. Authentic human conversations and creative expressions are scraped, stored, and used to train systems touted as ‘the most human-like AI ever’. Reddit, an online platform with a ‘20-year archive of human conversations across more than 100,000 subreddits’, has filed a lawsuit against the Google-backed AI company Anthropic for allegedly using its data to train Claude AI models without permission.
What is perhaps most disturbing is how this capitalist mindset reduces humanity to mere datasets. In this context, the boundary between user and product becomes dangerously thin. And while this commodification and exploitation is not new (history is replete with examples of exploitation in the name of progress), its digital iteration is faster, quieter, and more pervasive. There is an urgent need to critically question the technologies we are encouraged, or sometimes even compelled, to use. Not every innovation is a solution, and not every advancement is progress.