
AI Reshapes Language, Epistemic Access, and Identity Through Algorithmic Simplification and Gatekeeping

  • Writer: Tanya Zhuk
  • 4 days ago
  • 8 min read

This paper argues that AI reshapes cultural and cognitive development not primarily through automation, but through the simplification and narrowing of language and knowledge — a shift that has profound implications for identity formation and epistemic access.



Artificial intelligence has quickly moved from novelty to infrastructure across classrooms, workplaces, and homes. For children born in the past five years, AI will be as normalized as television was for earlier generations or smartphones for today’s adolescents. Each technological shift has carried anxieties — “television will rot your brain,” “video games normalize violence,” “social media fosters anxiety and false community” — before being absorbed into everyday life. AI now carries similar polarizing narratives: it is either the technology that will enhance learning, medicine, and productivity, or the force that will erode cognition, displace jobs, and undermine culture. Underlying these debates is a deeper issue: how AI transforms our very sense of what we “need to know.”


An overlooked dimension of AI integration is its effect on language itself. Voice-to-text systems and virtual assistants such as Siri and Alexa encourage users to select the most common words and syntactic forms to ensure recognition. Complex idioms, layered phrasing, or culturally specific expressions are often misinterpreted, pushing users toward oversimplification. While future iterations of AI may expand their capacity to parse complexity, the present trajectory risks flattening communication into a "lowest common denominator" style. The early Twitter logic — that messages should be compressed into 140 characters — illustrates how digital interfaces can normalize brevity at the expense of nuance.


Sherry Turkle’s early work on computing captures the danger of such simplification. In The Computer as Rorschach, she notes that introductory education often emphasizes ease of use at the expense of conceptual depth, creating blind spots that obscure systemic complexity: “First, when the computer acts as a projective screen for other social and political concerns, it can act as a smokescreen as well, drawing attention away from the underlying issues and onto debate ‘for or against computers.’” Today, as AI systems mediate daily communication, that smokescreen manifests in our reliance on stripped-down commands, where nuance is lost to ensure the machine understands. For communication scholars, the stakes are high: when simplification becomes habitual, what happens to creativity, diversity of expression, and the richness of cultural dialogue?


Current discourse has rushed to interpret AI's significance for the next generation. Educators and psychologists publicly speculate on AI's risks and opportunities for children, as illustrated in NPR's TED Radio Hour series Are the Kids Alright?, yet there is still no consensus on how these effects unfold. A recent report by the World Economic Forum (2025), Rethinking Media Literacy: A New Ecosystem Model for Information Integrity, underscores how far speculation has outpaced structured evidence. It notes that “by age 11, children’s confidence in evaluating online content often exceeds their actual competence, while false information proliferating online is now being cited by large language models as ‘fact.’” The report also highlights divergent global responses: Italy’s temporary ban on ChatGPT, the European Union’s AI Act, and uneven adoption of media and AI literacy curricula. Crucially, it identifies a persistent blind spot — most media literacy programs do not engage with the political economy of AI-driven business models, despite their impact on information ecosystems.


These developments raise fundamental questions about autonomy, identity formation, and social equity. AI systems increasingly influence what individuals see, learn, and internalize, with implications that extend across communication, development, and social participation. Algorithmic agents, from LLMs to social media feeds, shape not only what we consume but also how we learn, communicate, and preserve culture. Current research offers only partial answers: much of the scholarship has focused on news media, political polarization, or short-term productivity effects.


As society rushes to embrace new technology, it risks normalizing powerful systems without understanding their long-term developmental and cultural costs. Reliance on ChatGPT for writing has been associated with reduced creativity and critical thinking; pediatric research raises concerns about how AI engagement in early childhood could affect empathy and emotional regulation; and systematic reviews emphasize both potential harms and potential benefits.


Cognitive offloading is not only a matter of convenience; it reconfigures who or what becomes the source of knowledge. Research has documented important cognitive shifts associated with digital technologies, particularly in relation to cognitive offloading: the transfer of memory, decision-making, and attention tasks to external devices such as calendars, search engines, or voice assistants. A recent systematic review warns that “excessive use of digital technology can alter brain structure and function, leading to a range of cognitive impairments.”


Delegating knowledge retrieval to algorithmic systems risks shifting epistemic authority to system designers and data interpreters. This creates a digital divide 2.0, in which socioeconomic status determines who can benefit from AI, and a form of epistemic inequality in which control over AI systems governs access to knowledge itself. If access to AI becomes a prerequisite for cultural capital, new forms of “generational wealth” may emerge: not financial, but epistemic. These disparities disproportionately affect children, older adults, and low-income communities.


Moreover, as we move further into machine-optimized communication, we redefine social participation and collective identity. Because algorithmic gatekeepers configure autonomy, trust, and engagement, individuals may feel compelled to conform to emerging machine-optimized norms. Economic forecasts underscore both the promise and the uncertainty. According to The Economic Potential of Generative AI, generative AI could create between $2.6 trillion and $4.4 trillion in value annually across industries. The report covers sectors ranging from education to agriculture, with the largest projected impact in marketing and sales and comparatively minimal influence expected in finance or talent management. These projected efficiencies come with cultural and cognitive trade-offs that remain insufficiently examined. Small language models often automate discrete skills, whereas large language models have the capacity to mediate, and potentially replace, forms of knowledge retrieval and interpretation.


Established communication theories help map the possible outcomes of AI-guided communication. Gerbner’s Cultivation Theory argued that sustained exposure to uniform media messages (particularly through television) cultivated shared perceptions of reality across diverse audiences. Applied today, the analogy is clear: AI systems, like television before them, flood daily life with patterned messages. The difference lies in scale and personalization. Where television cultivated shared mass narratives, AI cultivates individualized realities, fragmenting cultural identity and replacing broad consensus with subtler, segmented forms of agreement.


Where cultivation emphasizes uniformity, Selective Exposure Theory underscores individual choice. It posits that people actively seek information aligned with preexisting beliefs, reinforcing echo chambers and ideological divides. In an AI-driven environment, this effect is intensified by algorithmic curation: personalization narrows informational diversity, not simply through user choice but through system design.


Ball-Rokeach and DeFleur’s Media Dependency Theory argued that as individuals and societies depend more heavily on media to understand the world, media institutions gain disproportionate power. With AI platforms now functioning as both intermediaries and gatekeepers, dependency is no longer on media institutions alone but on algorithmic systems whose logics are opaque to most users. This raises pressing questions about autonomy, trust, and epistemic access: if students, workers, and families depend on AI for knowledge, what happens when those systems define not only what we know but how we come to know?


Bandura’s Social Cognitive Theory highlights the role of modeling in learning and behavior. Early television blurred the line between entertainment and expertise, with fictional doctors receiving letters from viewers seeking real medical advice. Today, the “models” are increasingly non-human. Virtual influencers like Lil Miquela or Shudu attract millions of followers, shaping norms and aesthetics. For children and adolescents, such AI-powered synthetic role models play an ever larger part in identity formation.


Beyond these four canonical theories, critical frameworks such as Data Colonialism and Algorithms of Oppression highlight structural power and inequality. These perspectives foreground the political economy of AI systems: who designs them, who benefits, and who is excluded. Taken together, these frameworks reveal that AI-mediated communication does not merely transmit information; it reorganizes the conditions under which knowledge is produced, accessed, and validated. On AI-mediated platforms, that power is embedded within the tools themselves.


On AI-mediated platforms, the power of knowledge is, by design, shifting away from the individual. LLMs are built to draw on available sources; when those sources are limited, AI-generated, or maliciously created, the individual may be forced to rely on constrained or distorted informational inputs. Access to information becomes more tangled and sparse when it is gatekept rather than ubiquitous. Such unregulated gatekeeping demands scrutiny: it risks a world in which scarcity of access consolidates epistemic power while fracturing societal and individual development.


This paper traces how AI-mediated communication reshapes cognition through linguistic simplification, creating new forms of epistemic inequality with profound implications for identity formation. As AI systems proliferate through workplaces, schools, and homes, they are normalized rapidly—and our opportunity to intervene narrows accordingly. Power concentrates in the hands of system designers, a small group whose choices about training data and algorithmic logic become infrastructure that is opaque and difficult to contest. We are at an inflection point: the systems creating epistemic inequality are proliferating faster than our understanding of their effects. Research is urgent—not to measure productivity gains, but to illuminate who retains access to knowledge and who is excluded.


Reflexive note: This paper was workshopped using Claude and ChatGPT, a practice that embodies the epistemic dependencies it analyzes. Access to conversational AI for research, review, and formatting constitutes a form of intellectual capital that is not universally available. If AI-mediated communication increasingly defines standards of clarity, structure, and argumentation in writing, then access to these systems becomes a prerequisite for participation. This paper thus exemplifies the system it critiques.

  


///

Further Reading

This essay was originally prepared as an academic writing sample for my London School of Economics PhD application. I’m sharing it here with the full references for transparency and for others exploring this emerging research space.


  • Ball-Rokeach, S. J., & DeFleur, M. L. (1976). A dependency model of mass-media effects. Communication Research, 3(1), 3–21. https://doi.org/10.1177/009365027600300101

  • Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Prentice-Hall.

  • Chui, M., Manyika, J., Miremadi, M., Henke, N., Chung, R., Nel, P., & Malhotra, S. (2023). The economic potential of generative AI. McKinsey & Company. https://www.mckinsey.com

  • Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.

  • Frontiers in Communication. (2025). Frontiers in Communication. https://www.frontiersin.org/journals/communication

  • Garrett, R. K. (2009). Echo chambers online?: Politically motivated selective exposure among Internet news users. Journal of Computer-Mediated Communication, 14(2), 265–285. https://doi.org/10.1111/j.1083-6101.2009.01440.x

  • Gerbner, G., & Gross, L. (1976). Living with television: The violence profile. Journal of Communication, 26(2), 173–199. https://doi.org/10.1111/j.1460-2466.1976.tb01397.x

  • Livingstone, S., & Blum-Ross, A. (2020). Parenting for a digital future: How hopes and fears about technology shape children’s lives. Oxford University Press.

  • Manovich, L. (2001). The language of new media. MIT Press.

  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

  • Ruddock, A. (2020). Digital media influence: A cultivation approach. Sage.

  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.

  • Turkle, S. (1984). The computer as Rorschach. In The Second Self: Computers and the Human Spirit (pp. 22–48). MIT Press.

  • World Economic Forum. (2025, July). Rethinking media literacy: A new ecosystem model for information integrity (Insight Report). Centre for AI Excellence. https://centres.weforum.org/centre-for-ai-excellence/home

  • de Visser, E. J., Pak, R., & Shaw, T. H. (2017). A little anthropomorphism goes a long way: Effects of oxytocin on trust, compliance, and team performance with automated agents. Human Factors, 59(1), 116–133

 
 
 