<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://bhaskar-mitra.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://bhaskar-mitra.github.io/" rel="alternate" type="text/html" /><updated>2026-04-17T20:47:11+00:00</updated><id>https://bhaskar-mitra.github.io/feed.xml</id><title type="html">Bhaskar Mitra’s Website</title><subtitle>I am an information retrieval researcher working on AI-mediated online information access and questions of social justice and emancipation in the context of these sociotechnical systems.</subtitle><author><name>Bhaskar Mitra | ভাস্কর মিত্র</name><email>bhaskar.mitra@acm.org</email></author><entry><title type="html">ACM and Caste</title><link href="https://bhaskar-mitra.github.io/posts/2026/04/16/acm-and-caste/" rel="alternate" type="text/html" title="ACM and Caste" /><published>2026-04-16T00:00:00+00:00</published><updated>2026-04-16T00:00:00+00:00</updated><id>https://bhaskar-mitra.github.io/posts/2026/04/16/acm-and-caste</id><content type="html" xml:base="https://bhaskar-mitra.github.io/posts/2026/04/16/acm-and-caste/"><![CDATA[<p>Over a year ago, I noticed that the <a href="https://www.acm.org/">ACM</a> <a href="https://www.acm.org/special-interest-groups/volunteer-resources/officers-manual/policy-against-discrimination-and-harassment">Policy Against Harassment</a> and their <a href="https://www.acm.org/diversity-inclusion/about">Diversity, Equity, and Inclusion (DEI) statement</a> prohibited harassment and discrimination based on race, ethnicity, religion, gender, sexual orientation, citizenship, nationality, disability, and age, but it had at least one glaring omission: <strong>Caste</strong>.</p>

<p>Caste is a millennia-old oppressive system of social stratification whose practices are predominant in South Asia and diasporic Southasian communities. It is a hierarchical system of societal segmentation whose membership is determined by birth, wherein Brahmins are placed in the privileged upper-caste position and those assigned to its bottom rung—Dalits and Adivasis (indigenous people)—are regularly subjected to severe marginalization and persecution. These social demarcations of inequity and dehumanization are reified by the harmful practices of caste that emphasize caste “purity” through untouchability, residential segregation, restrictions on what professions one can pursue based on their caste, food-based discrimination that frames meat consumption as “unhygienic” or “dirty”, and the enforcement of endogamy through sociocultural coercion and violence. Casteism shares many structural similarities with racism, and consequently there is a <a href="https://www.theguardian.com/world/2020/jul/28/untouchables-caste-system-us-race-martin-luther-king-india">shared intertwined history</a> of resistance to casteist and racist violence. 
Caste oppression is not just actively practiced today in countries like India (<a href="https://thewire.in/caste/ugly-reality-caste-violence-discrimination-urban-india">The Wire</a>, <a href="https://www.nbcnews.com/news/world/india-dalits-still-feel-bottom-caste-ladder-n1239846">NBC</a>, <a href="https://frontline.thehindu.com/social-issues/social-justice/caste-denial-indian-college-campuses/article70610794.ece">The Hindu</a>, <a href="https://www.dw.com/en/in-india-caste-still-defines-who-cleans-cities/a-73368510">DW</a>)—<em>e.g.</em>, impacting its <a href="https://www.nature.com/immersive/d41586-023-00015-2">science</a> and <a href="https://restofworld.org/2022/tech-india-caste-divides/">technology</a> sectors—but is also prevalent within Southasian diasporic communities in the <a href="https://clerk.seattle.gov/~cfpics/cf_322573f.pdf">USA</a>, <a href="https://theconversation.com/how-caste-discrimination-impacts-communities-in-canada-224603">Canada</a>, and the <a href="https://web.archive.org/web/20260305233413/https://www.dsnuk.org/caste-in-the-uk/">UK</a>. There have been some recent successes in recognizing caste discrimination in <a href="https://www.npr.org/2023/02/22/1158687243/seattle-becomes-the-first-u-s-city-to-ban-caste-discrimination">Seattle</a> and <a href="https://www.aljazeera.com/opinions/2023/3/10/can-toronto-help-canada-end-casteism-in-the-classroom">Toronto</a>, but dismantling the caste system in the diaspora continues to face political challenges—<em>e.g.</em>, in <a href="https://slate.com/news-and-politics/2023/09/california-anti-caste-discrimination-bill-hindu-nationalism-hindutva.html">California</a>. 
Caste discrimination is also prevalent within Silicon Valley and the broader tech industry (<a href="https://www.reuters.com/article/us-cisco-lawsuit/california-accuses-cisco-of-job-discrimination-based-on-indian-employees-caste-idUSKBN2423YE/">Reuters</a>, <a href="https://www.nytimes.com/2020/07/14/opinion/caste-cisco-indian-americans-discrimination.html">New York Times</a>, <a href="https://www.washingtonpost.com/technology/2020/10/27/indian-caste-bias-silicon-valley/">Washington Post</a>, <a href="https://www.bloomberg.com/news/features/2021-03-11/how-big-tech-is-importing-india-s-caste-legacy-to-silicon-valley">Bloomberg</a>, <a href="https://www.wired.com/story/trapped-in-silicon-valleys-hidden-caste-system/">Wired</a>, <a href="https://www.nbcnews.com/news/asian-america/big-techs-big-problem-also-best-kept-secret-caste-discrimination-rcna33692">NBC</a>, <a href="https://slate.com/technology/2022/07/caste-silicon-valley-thenmozhi-soundararajan.html">Slate</a>, <a href="https://www.newyorker.com/news/q-and-a/googles-caste-bias-problem">New Yorker</a>, <a href="https://dl.acm.org/doi/10.1145/3491102.3502059">Vaghela et al.</a>) and, unsurprisingly, caste bias is also <a href="https://www.technologyreview.com/2025/10/01/1124621/openai-india-caste-bias/">reflected in the technology</a> this sector produces. It is therefore particularly important that ACM, as “the world’s largest educational and scientific computing society”, recognizes and prohibits caste discrimination.</p>

<p>And we have some good news! Having noticed the omission of caste in ACM’s anti-harassment policy and DEI statement, I reached out to Vanessa Murdock, the then-Chair of the ACM SIGIR Executive Committee and a close friend. In an act of allyship that I am really grateful for, Vanessa raised this issue with the relevant folks in the ACM organization and (just in time for this year’s <a href="https://en.wikipedia.org/wiki/Dalit_History_Month">Dalit History Month</a>) both ACM’s anti-harassment policy and DEI statement have been updated to explicitly include caste as a protected category.</p>

<div style="align-items:center;text-align:center;font-style:italic">
  <a href="https://www.acm.org/special-interest-groups/volunteer-resources/officers-manual/policy-against-discrimination-and-harassment">
    <div style="margin:auto">
      https://www.acm.org/special-interest-groups/volunteer-resources/officers-manual/policy-against-discrimination-and-harassment
    </div>
    <img src="https://bhaskar-mitra.github.io/images/acm-caste-anti-harassment.png" />
    <br />
  </a>
</div>
<p><br /></p>

<div style="align-items:center;text-align:center;font-style:italic">
  <a href="https://www.acm.org/diversity-inclusion">
    <div style="margin:auto">
      https://www.acm.org/diversity-inclusion
    </div>
    <img src="https://bhaskar-mitra.github.io/images/acm-caste-dei-1.png" />
    <br />
  </a>
</div>
<p><br /></p>

<div style="align-items:center;text-align:center;font-style:italic">
  <a href="https://www.acm.org/diversity-inclusion/about">
    <div style="margin:auto">
      https://www.acm.org/diversity-inclusion/about
    </div>
    <img src="https://bhaskar-mitra.github.io/images/acm-caste-dei-2.png" />
    <br />
  </a>
</div>
<p><br /></p>

<p>In spite of this positive news, much work remains to be done to ensure that our research and professional communities are anti-casteist spaces, which is only possible if we put our anti-casteism into active practice. For those of us who identify as anti-caste allies, this is a reminder to stand in solidarity with our caste-oppressed colleagues who face discrimination in our professional spaces and whose contributions to computing are regularly erased. And for those who may be less familiar with the history of caste oppression, I invite you to learn more about this topic and in turn become anti-caste allies. Here are pointers to a couple of books and a documentary to get you started on that journey.</p>

<center>
  <table>
    <tr>
      <td>
        <center>
          <a href="https://www.versobooks.com/en-ca/products/75-annihilation-of-caste">
            <img src="https://bhaskar-mitra.github.io/images/annihilation-of-caste-ambedkar.webp" alt="Annihilation of Caste by B.R. Ambedkar" style="height:400px" />
          </a>
        </center>
      </td>
      <td>
        <center>
          <a href="https://www.penguinrandomhouse.com/books/710528/the-trauma-of-caste-by-thenmozhi-soundararajan/">
            <img src="https://bhaskar-mitra.github.io/images/trauma-of-caste-soundararajan.jpg" alt="The Trauma of Caste: A Dalit Feminist Meditation on Survivorship, Healing, and Abolition by Thenmozhi Soundararajan" style="height:400px" />
          </a>
        </center>
      </td>
      <td>
        <center>
          <a href="https://www.penguinrandomhouse.com/books/653196/caste-by-isabel-wilkerson/">
            <img src="https://bhaskar-mitra.github.io/images/caste-wilkerson.jpg" alt="Caste: The Origins of Our Discontents by Isabel Wilkerson" style="height:400px" />
          </a>
        </center>
      </td>
    </tr>
  </table>
</center>

<center>
    <iframe width="560" height="315" src="https://www.youtube.com/embed/U05F_-UJKTw?si=n33SMKaBxYIo8R5b" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe>
</center>
<p><br /></p>

<p><strong>Annihilate Caste. Smash the Brahminical patriarchy. Jai Bhim.</strong></p>

<p><strong>Positionality:</strong> I identify as an anti-caste ally. My views on caste are informed by my lived experiences of witnessing caste oppression, as well as experiencing caste-related bigotry in my own life.</p>]]></content><author><name>Bhaskar Mitra | ভাস্কর মিত্র</name><email>bhaskar.mitra@acm.org</email></author><category term="Association for Computing Machinery" /><category term="Caste discrimination" /><summary type="html"><![CDATA[Over a year ago, I noticed that the ACM Policy Against Harassment and their Diversity, Equity, and Inclusion (DEI) statement prohibited harassment and discrimination based on race, ethnicity, religion, gender, sexual orientation, citizenship, nationality, disability, and age, but it had at least one glaring omission: Caste.]]></summary></entry><entry><title type="html">What _is_ IR-for-Good?</title><link href="https://bhaskar-mitra.github.io/posts/2025/09/01/what-is-ir-for-good/" rel="alternate" type="text/html" title="What _is_ IR-for-Good?" /><published>2025-09-01T00:00:00+00:00</published><updated>2025-09-01T00:00:00+00:00</updated><id>https://bhaskar-mitra.github.io/posts/2025/09/01/what-is-ir-for-good</id><content type="html" xml:base="https://bhaskar-mitra.github.io/posts/2025/09/01/what-is-ir-for-good/"><![CDATA[<p>(<em>This blog post has been jointly co-authored by Bhaskar Mitra and Maria Heuss, co-chairs of <a href="https://ecir2026.eu/">ECIR’26</a> IR-for-Good Track. Please see the <a href="https://ecir2026.eu/calls/call-for-ir-for-good-papers">call for papers</a> for more details about the track. <strong>Abstracts are due: October 21</strong> and <strong>papers are due: October 28</strong>.</em>)</p>

<p><br />
<em>Seriously, what is it?</em> When we met this summer as track chairs to plan ahead for next year’s IR-for-Good track, our conversation largely revolved around this question: “<em>What is IR-for-Good?</em>”
Typically, when a conference has a special track the goal is to nurture a specific research direction that is not yet mainstream within the broader field and to build a community around it.
Special tracks generally focus on particular problems or new approaches.
But what does it mean to have a special track for IR research that contributes to societal good?
Shouldn’t <em>all</em> IR research be societally-beneficial?
Does a special track on IR-for-Good (😇) imply that other contributions to the conference are IR-for-Bad (😈)?
How do we define what societal good is?
And, how do we decide if particular IR contributions are likely to benefit or harm society?</p>

<p>These are the thorny—and yet intellectually exciting and societally critical—questions that we started with.
Reflecting on these questions explicitly helped us clarify for ourselves what we are trying to achieve with the IR-for-Good track and prompted several changes to the track this year that we hope the community will find useful.
In this post, we would like to share with the broader IR community our motivations for these changes and initiate a conversation about how we collectively center societal needs in IR research and about the role of the IR-for-Good track in effecting desired transformations in the IR community.</p>

<h1 id="what-is-societal-good">What is societal good?</h1>
<p>Access to trustworthy information is a critical societal need: it supports informed citizenry in democratic societies, acts as a catalyst for social transformations, and serves as a social determinant of health and economic progress.
It is imperative that IR research concerns itself with not just the information needs of individual users but also its responsibilities towards collective societal good.
We must not assume that all technological progress in IR contributes positively to society nor accept the techno-deterministic view that there is a single pre-determined path forward for progress in IR research.
Instead, we must explicitly study and critique the systemic impact of information access technologies on society in light of the sociopolitical context in which they are developed and deployed, and leverage our improved understanding to guide future IR research towards realizing positive societal outcomes.</p>

<p>But first, we must articulate what we mean by societal good.
These conversations are already happening in different parts of the IR community, including in forums like <a href="https://sites.google.com/view/swirl2025/">SWIRL 2025</a> (see Section 7 of the <a href="https://bhaskar-mitra.github.io/showpdf/?file=SWIRL-2025-Preprint">SWIRL’25 report</a>).
We reviewed some of this literature and then decided to adopt the following operative definition of IR-for-Good:</p>

<center>
  <blockquote style="max-width:700px">
    IR-for-Good refers to IR research and practices that contribute towards realizing more <b>equitable</b>, <b>emancipatory</b>, and <b>sustainable</b> futures.
  </blockquote>
</center>

<p>Starting from this definition, we enumerated potential relevant topics of interest for this track to include <strong>how IR intersects with and/or can support:</strong></p>
<ul>
  <li>Accessibility and disability justice</li>
  <li>Art, culture, and representation</li>
  <li>Crisis and disaster management</li>
  <li>Decolonization and racial justice</li>
  <li>Emancipation, anti-oppression, and social justice</li>
  <li>Gender and sexuality justice</li>
  <li>Informed citizenry, democracy, and collective decision making</li>
  <li>Law and restorative justice</li>
  <li>Literacy and knowledge production</li>
  <li>Privacy and dignity</li>
  <li>Public health and community health</li>
  <li>Social, political, and economic equity</li>
  <li>Sustainability and environmental justice</li>
  <li>Worker rights and labor movements</li>
</ul>

<p>We made an intentional choice to center these topics on societally-beneficial <strong>outcomes</strong> (<em>e.g.</em>, equity, emancipation, justice, and sustainability) rather than on the <strong>approaches</strong> that may help us progress towards those outcomes (<em>e.g.</em>, procedural fairness, interpretability, and transparency).
We of course welcome submissions focusing on different approaches in this special track.
Our motivation for centering outcomes over approaches is to encourage exploration of a broader space of diverse sociotechnical methods as well as to hold ourselves accountable to the ultimate goal of effecting positive societal impact.</p>

<p>We consider this definition of societal good to be neither fixed nor complete.
It is ultimately up to the IR community to iterate on, extend, and further explicate this definition over time.
But for now, we hope this definition provides reasonable clarity on what societal outcomes we are aspiring for.</p>

<p>Finally, we defined the scope of IR-for-Good track to include IR research that:</p>
<ol>
  <li>Explicitly concerns itself with new research directions and system designs to achieve specific societally beneficial outcomes,</li>
  <li>Develops new fairness, privacy, transparency, accessibility, sustainability, and other similar societally-motivated interventions, and/or</li>
  <li>Identifies and critiques the ways in which existing IR methods, systems, and research practices may contribute to systemic harm or impede social progress.</li>
</ol>

<p>Within the above specified scope, we invite contributions to the track that explore new positions, critiques, tools, methods, resources, and interventions for IR-for-Good.</p>

<h1 id="how-do-we-decide-if-particular-ir-contributions-are-likely-to-benefit-or-harm-society">How do we decide if particular IR contributions are likely to benefit or harm society?</h1>
<p>The ultimate goal of IR-for-Good is to achieve positive societal impact through relevant IR research.
To affect real change, we must ensure that our research is grounded in rigorous understanding of the sociotechnical challenges and the complex sociopolitical context in which our work is embedded.
We must discourage non-performative research gaze and hold ourselves collectively accountable to ensure that we are not simply “spinning our wheels” and that our scholarship indeed translates to material positive societal impact.
And we must be particularly careful to ensure that our research pushes for structural change and does not unintentionally contribute to ethics-washing harmful technologies.</p>

<p>When we, the IR-for-Good track chairs, started discussing how to ensure the community is indeed making progress on desired societal outcomes, it quickly became apparent to us that we need to develop new community practices that help us build a shared understanding of how technologies impact society and help us be more effective in effecting positive societal change.
Intuitively, if we want to encourage more critical scholarly discourse within the IR community on how specific research directions may contribute towards desired societal outcomes, the first step should be to make these <strong>theories of change</strong> explicit in our scholarship.
With that motivation, we are requiring every ECIR’26 IR-for-Good track submission that proposes new IR tools, methods, resources, or interventions to explicitly include a separate section elaborating on how the work contributes towards desired societal outcomes.
Position papers and critiques are exempted from this requirement as these arguments should anyway be a core contribution of those submissions.</p>

<p>We recommend that the “Theory of Change” section should explicitly state:</p>
<ol>
  <li>What is the identified societal need / problem, and how are the core contributions from this current work expected to address them?</li>
  <li>What preconditions are necessary or what assumptions need to hold for this work to have its desired effect, and how likely are they to hold true in practice?</li>
  <li>What are possible negative externalities of this approach and is it plausible that this may lead to new or different harms?</li>
</ol>

<p>Authors are encouraged to include any additional discussions that they may deem relevant in this section.
Authors should note that it is not mandatory to name the section “Theory of Change”, but it should be apparent from the section title that it elaborates on how the work contributes towards desired societal outcomes.</p>

<p>Contributions focusing on algorithmic bias, fairness, transparency, interpretability, explainability, trustworthiness, misinformation, disinformation, hate speech, replicability, transferability, robustness, uncertainty, security, ethics, and other related topics are also required to explicitly articulate how the work contributes towards positive societal outcomes and not implicitly assume that all research on these topics contributes to societal good.
As a corollary, certain IR topics that may not have historically been seen as societally focused (<em>e.g.</em>, designing distributed information access platforms or developing more effective ranking models without the use of user behavior data) would also be welcome in this track if they can appropriately argue that the work is likely to contribute to societal good, <em>e.g.</em>, by making platforms more robust to authoritarian capture or disincentivizing mass ubiquitous user surveillance, respectively.</p>

<p>We want to strongly emphasize that <strong>this section should not be an afterthought</strong>.
Instead, it should be a critical part of the key motivation for the work and as important as any other core section of the respective papers.
We encourage authors and reviewers to critically engage with this section while acknowledging the real uncertainty of how any well-intentioned research may impact society in practice.
Our goal is <strong>not</strong> to encourage authors to inflate their claims of societal impact but to rigorously deliberate on their sociotechnical assumptions and to thoroughly enumerate the necessary preconditions for the work to have its desired impact and potential negative externalities.
Having explicit theories of change in our publications further opens up the opportunity for future scholarship to analyze, critique, validate, and improve upon these theories of change.
It also creates the possibility for scholars from non-IR disciplines to engage with and critically analyze the emerging theories of change within the IR community.
And we hope that over time this practice also encourages IR researchers to more actively reach beyond their disciplinary boundaries to work with other scholars, experts, practitioners, policymakers, civil rights advocates, activists, and movement organizers pushing for social justice and sustainability.</p>

<p>Here are a few example cases to illustrate the kind of critical reflections we would like to see more of in the IR community:</p>

<center>
  <blockquote style="max-width:700px">
    <div align="left">
      <b>Claim:</b> Our work that proposes a method for making expensive machine learning models for IR more efficient contributes towards sustainability and reducing impact on the environment.
    </div>
    <div align="left">
      <b>Considerations for this claim:</b> What preconditions are necessary for the efficiency improvements to translate to reduced impact on the environment?
        E.g., according to the Jevons paradox in economics, when technological advancements make a resource more efficient to use (thereby reducing the amount needed for a single application) it often results in overall increase in demand, causing total resource consumption to rise instead of falling.
        Is it more likely then that more efficient models may in fact lead to a false sense of mitigation and result in much wider adoption contributing to increased harm to the environment?
    </div>
  </blockquote>
</center>

<center>
  <blockquote style="max-width:700px">
    <div align="left">
      <b>Claim:</b> Our work that develops new assistive tools for document authoring increases worker productivity and contributes towards reduced labor for workers.
    </div>
    <div align="left">
      <b>Considerations for this claim:</b> What preconditions are necessary for the improvement in productivity to benefit the workers?
        In other words, who gets to benefit from the surplus provided by technology here?
        Does it benefit workers or does it lead to further reduction in their compensation and changes in job expectations that lead to lower status?
        Does the proposed approach provide any enforceable mechanisms to ensure that the surplus primarily benefits the workers?
    </div>
  </blockquote>
</center>

<center>
  <blockquote style="max-width:700px">
    <div align="left">
      <b>Claim:</b> Our work that improves alignment of LLMs towards specific social values contributes towards user safety by preventing exposure to harmful content.
    </div>
    <div align="left">
      <b>Considerations for this claim:</b> Who gets to decide what is harmful or select the values the system should be aligned with?
        Could these approaches in fact further centralize power and control over what is deemed "acceptable" vs. "harmful" speech?
        Could this stifle the voices of marginalized people and social activists?
        Could this incentivize authoritarian capture of information access platforms to manipulate public opinion?
        Does the proposed approach consider both the social and technical aspects of this problem to ensure democratic oversight and emancipatory outcomes?
    </div>
  </blockquote>
</center>

<center>
  <blockquote style="max-width:700px">
    <div align="left">
      <b>Claim:</b> Our work that proposes new methods for generating explanations for model outputs contributes towards increasing user trust in the system.
    </div>
    <div align="left">
      <b>Considerations for this claim:</b> Is that trust beneficial or harmful for the user?
        Is that trust warranted or could it in fact draw users into a false sense of safety and distract them from noticing how the system surveils them and subtly manipulates their behavior?
        How can this explainability intervention actually help reveal and challenge existing power structures?
    </div>
  </blockquote>
</center>

<center>
  <blockquote style="max-width:700px">
    <div align="left">
      <b>Claim:</b> Our work that proposes a new ranking approach for gender fairness contributes towards gender justice.
    </div>
    <div align="left">
      <b>Considerations for this claim:</b> How does the adopted definition of "gender fairness" in this work translate to mitigating real-world "gender discrimination"?
        Does this work assume that gender is binary, erasing other identities?
        Does this work assume that gender is known for all users / subjects and incentivize further intensification of surveillance and collection of private demographic data from members of historically marginalized communities?
        How can this work be operationalized in practice towards gender and sexuality justice?
        Are there example use-cases where this can be reliably demonstrated (e.g., ranking in hiring or job recommendation applications)?
    </div>
  </blockquote>
</center>

<h1 id="our-own-theory-of-change">Our own theory of change</h1>
<p>It would be inconsistent for us to ask others to state their theories of change without at least very briefly articulating our own.
After many conversations and careful deliberations over the summer, we, the co-chairs of ECIR’26 IR-for-Good track, concluded that societal good should obviously be the motivation and goal of <strong>all</strong> of IR research.
The role of the IR-for-Good track in this endeavor then is to be a space where we can explore, experiment with, and develop new community practices and norms to promote more societally-beneficial IR research.
Our motivation is to subsequently contribute the identified best practices back to the broader IR community in an effort to ensure that all of IR research is IR-for-Good.</p>

<p>The transformations that the IR-for-Good track wants to realize in the broader IR community will not happen overnight.
At ECIR’26 we are trying to re-clarify for ourselves and the IR community what we want to achieve with this special track and build on the original IR-for-Good vision.
We need future IR-for-Good track chairs to continue evolving these emerging practices and experiment with new ones.
We need to collaborate with and enable cross-pollination of ideas and practices with other societally-motivated IR sub-communities, such as <a href="https://www2025.thewebconf.org/web4good">TheWebConf Web4Good Special Track</a>, <a href="https://www2025.thewebconf.org/research-tracks">TheWebConf Responsible Web track</a>, <a href="https://facctrec.github.io/facctrec2025/">RecSys FAccTRec workshop</a>, and <a href="https://sites.google.com/view/responsible-ai-day/">KDD Responsible AI Day</a>.
And we need to raise our sociopolitical consciousness within the IR community and push towards more epistemic rigor, working in partnership with other disciplinary scholars and experts.</p>

<h1 id="we-need-you-">We need you 🫵🏽</h1>
<p>At ECIR’26, the IR-for-Good track will serve as a platform to showcase the best of the best societally-motivated research in the field of IR.
Starting this year, <strong>IR-for-Good will be a core track at the conference</strong> and will run alongside the main conference, not on workshop day.
We invite you to not just participate in this special track, but to be an active part of building a broader movement in the IR community to bend the arc of IR research towards societal good.</p>

<p>Here’s how you can get involved:</p>
<ul>
  <li><a href="https://ecir2026.eu/calls/call-for-ir-for-good-papers">Submit your work</a> to the track (Abstracts due: Oct 21, papers due: Oct 28).</li>
  <li><a href="https://forms.gle/5UWcG9X7Hq92wokU7">Sign up as a reviewer</a> for the track.
We are especially looking for reviewers who can bring in interdisciplinary perspectives, such as at the intersections of IR with human-computer interaction (HCI), information sciences, media studies, design, science and technology studies (STS), social and political sciences, philosophy, law, environmental sciences, public health, and educational sciences.</li>
  <li>Send us your feedback / ideas for the IR-for-Good track and tell us what you would like to see at the conference track this year.</li>
  <li>If you are involved in other societally-motivated IR sub-communities or tracks at other IR venues, then let’s share notes and work together!</li>
</ul>

<p>And please join us in Delft next year to continue the conversation!</p>

<p><br /><br />
<strong><em>Would you like to comment on or discuss this post?</em></strong> You can do so on these social media threads on <a href="https://bsky.app/profile/bmitra.bsky.social/post/3lxugbrenv223">Bluesky</a>, <a href="https://mastodon.social/@bmitra/115135391331084967">Mastodon</a>, <a href="https://www.linkedin.com/feed/update/urn:li:share:7368664254205362176/">LinkedIn</a>, and <a href="https://x.com/UnderdogGeek/status/1962899087973298458">Twitter</a>.</p>]]></content><author><name>Bhaskar Mitra | ভাস্কর মিত্র</name><email>bhaskar.mitra@acm.org</email></author><category term="Information retrieval" /><category term="IR and society" /><category term="Tech for good" /><summary type="html"><![CDATA[(This blog post has been jointly co-authored by Bhaskar Mitra and Maria Heuss, co-chairs of ECIR’26 IR-for-Good Track. Please see the call for papers for more details about the track. Abstracts are due: October 21 and papers are due: October 28.)]]></summary></entry><entry><title type="html">AI as politic of class exploitation</title><link href="https://bhaskar-mitra.github.io/posts/2025/07/31/ai-as-politic-of-class-exploitation/" rel="alternate" type="text/html" title="AI as politic of class exploitation" /><published>2025-07-31T00:00:00+00:00</published><updated>2025-07-31T00:00:00+00:00</updated><id>https://bhaskar-mitra.github.io/posts/2025/07/31/ai-as-politic-of-class-exploitation</id><content type="html" xml:base="https://bhaskar-mitra.github.io/posts/2025/07/31/ai-as-politic-of-class-exploitation/"><![CDATA[<p>In the grand debate about AI and its implications for society, a particularly important concern that justifiably receives a lot of attention is the likely impact of AI on workers.
A lot has been written about this in recent years informed by ongoing critical work in this space.
However, mainstream public discourse often reduces that conversation to “AI automation will kill jobs”.
That framing, while not incorrect, is too reductive and fails to capture the systemic crisis that may be in front of us.
It also has the insidious effect of bolstering a <a href="https://en.wikipedia.org/wiki/Technological_determinism">technodeterministic</a> view that imagines a fabled race between “humans vs. machines” at the heart of the issue, with Big Tech and Silicon Valley simply acting out in roles preordained to them rather than being active participants in subverting technological progress towards unprecedented wealth and power accumulation.
That framing claims with self-righteous indignation that it should be self-evident that technological progress will increasingly make bigger strides and therefore the machine is inevitably destined to surpass human capabilities at some point, and it is society that must constantly evolve to survive the new realities.
<em>What is left to debate then?</em>
If you do not agree it must be because YOU are a technophobe, a luddite, anti-progress, and anti-AI.
I will reserve my temptation to rage against the tech-bro arrogance that dehumanizes us all by reducing us to “collections of skills of economic value to the capitalist system” for another time.
But I do think we should talk about why this view is not just dangerously wrong, but nefarious at its core.
It is a deliberate erasure of critical thought and scholarship on this topic to conceal the true reality of mass dispossession of workers, the undoing of decades of labor rights progress, and a drumming up of neocolonial extractive practices and class exploitation that is unfolding in front of our eyes.</p>

<p>So, buckle up!
We are going to talk about how AI is saliently a politic of class exploitation.</p>

<h1 id="the-bigger-picture">The bigger picture</h1>
<p>To understand the implications of AI for the working class, we must not just consider the direct <em>risks</em> from the technology itself but redirect our gaze to the several systemic <em>consequences</em> of what the technology does, how it is made, who it is intended to serve, and the broader sociopolitical context in which it is embedded.
I am drawing from a co-authored <a href="https://bhaskar-mitra.github.io/showpdf/?file=3630106.3658900">paper</a> we published last year at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) where we took a similar perspective but in the more constrained scope of AI-mediated enterprise knowledge access.
In the rest of this post, I will briefly talk from my personal perspective about some of the ways in which AI commodifies and appropriates labor and marginalizes the working class.
These consequences, among others, co-constitute the conditions under which we are starting to observe a systemic dispossession of working-class status, power, and wealth.</p>

<h1 id="ai-commodifies-labor">AI commodifies labor</h1>
<p>If you are a screenwriter or a visual artist, your work bears a distinct mark of your personal style and identity.
It speaks through your voice, sees the world from your perspective.
It is that distinctiveness that makes it challenging for others to treat you as interchangeable with other artists, <em>i.e.</em>, prevents your labor from being commodified.
Commodification of labor refers to the acts of transforming labor into commodities, defined as objects of economic value whose instances are treated as equivalent, or nearly so, with no regard to who produced them.
When your work is distinct, you have negotiation power because no one else can produce the same as you.
Commodifying labor is the eternal capitalist project, from the factory floors to now our studios and writers’ rooms.
By commodifying labor, capitalists can renegotiate compensation for labor and drastically bring down the cost of production by pitting worker against worker.</p>

<p>Most technological automation transforms the fundamental nature of the underlying task and commodifies the labor required for the task in the process.
But there is something distinctive about how salient commodification is to the design of generative AI applications as tools for writing, generating visual arts and music, producing code, and assisting in other knowledge work.</p>

<p><strong>What are we automating?</strong>  
When you think about automation, you might imagine a group of inventors looking closely at a labor-intensive task and ideating how the problem can be approached differently, both to cut down the time and labor necessary to complete the task and to enable performing it more effectively than has hitherto been possible.
As a caricature of an example, if you have a horse carriage, you want to invent the automobile to both reduce how hard the driver must work while also improving the distances they can travel and the speed of transport.
Now, imagine in an alternative universe a group of inventors got together and instead of focusing on the horse and the carriage, redirected their focus entirely on what the driver was doing and built an approximate model of the driver.
The model itself is unintelligent and simply tries to predict at each moment <em>what would the driver do</em>, and invariably it often gets it wrong.
So, now the job of the driver is no longer to drive the vehicle but to constantly monitor and correct for the model’s mistakes.
Instead of discovering the joys of driving an automobile and travelling longer distances more comfortably, the driver now sits atop the same slow horse-drawn carriage constantly supervising the model.
This would make for a rather hilarious cyber-punk fiction if not for the fact that it is an apt metaphor for what we have come to call “AI” today.</p>

<div style="align-items:center;text-align:center;font-style:italic">
  <img src="https://bhaskar-mitra.github.io/images/automation-carriage.png" style="max-width:350px;" />
  <br />Are you automating the carriage or the driver?
</div>
<p><br />
Think of generative AI models that generate documents, code, images, and videos in response to prompts.
In all these cases, the model is fundamentally trained to mimic what a human would do.
Inserting this model in between the task and the person responsible for the task changes the very nature of their responsibilities.
They are no longer screenwriters, visual artists, musicians, or coders.
They are now prompt engineers, tinkering with the words with which they express their requirements to the algorithm and then spending more time trying to massage the outputs.
To many artists and knowledge workers this represents an end to their craft, the replacement of creativity with mindless (re)production, a loss of a deeply personal source of pride and joy, and a justifiable fear of the drudgery of continually turning knobs of the machine till it accidentally produces something passable.
For some, experimenting with these tools is indeed joyful in its own way, and some will find creative ways to use these tools to expand the boundaries of creativity.
But what this technology <em>can</em> do for some coexists with the concerns of what it <em>will</em> do for most shaped by the powerful forces of capitalism.</p>

<p>For most, this commodification not only distances them from the crafts they enjoy and moves them to a functional role of lower status but also translates to a significant loss of compensation <em>because</em> their new function is viewed as requiring fewer skills and as performable by any of the surplus workers available in the market.
Any surplus from actual productivity improvements from the usage of these tools will be collected exclusively by the capitalist class.
On the other hand, artists and knowledge workers will be expected to produce more to deserve the same compensation.
Many of these roles will also become susceptible to gigification, which will mean many in the working class losing the benefits and protections associated with full-time work that decades of labor movements have fought to put in place.</p>

<p>Note that this loss of compensation is tied to the perceived simplification of the task and corresponding speculations of productivity boosts.
If producing an article using generative AI and then editing it takes similar effort as writing it without these tools, that is of less consequence in the compensation negotiations than the speculated productivity boosts that the dominant AI narratives may have convinced us to buy into.
It is exactly for this reason that the “AI hype” is not extraneous to the value proposition of AI, but rather is an integral part of the same package.
The key profit-maker here is not the productivity tools, but the social construction of the AI productivity myth that creates the exact conditions that the capitalist class so eagerly desires to renegotiate down the compensation for labor.</p>

<p>And when something goes wrong or when the outputs fall far below what is desired, the blame will not be attributed to the mindless algorithms, nor will it land on the shoulders of the bosses who force their employees to use these technologies.
It is the workers, ultimately held responsible for quality control, who will now also serve as the systems’ <a href="https://estsjournal.org/index.php/ests/article/view/260">moral crumple zones</a>.</p>

<h1 id="ai-appropriates-labor">AI appropriates labor</h1>
<p>The risks to the working class from AI are not limited to their impact on labor when these technologies are put into use.
Many of the concrete harms are direct results of the appropriation of data labor necessary for the development of these systems.
If you are unfamiliar with the invisible human labor force that powers so much of our AI technologies today, I recommend starting with Gray and Suri’s “Ghost Work”.</p>

<center>
  <a href="https://ghostwork.info/ghost-work">
    <img src="https://bhaskar-mitra.github.io/images/ghost-work-gray.jpg" alt="Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary L. Gray and Siddharth Suri" style="max-height:300px" />
  </a>
</center>
<p><br />
<strong>Grand theft data labor</strong>  
Data labor is <a href="https://dl.acm.org/doi/10.1145/3593013.3594070">defined</a> as “activities that produce digital records useful for capital generation”.
Appropriation of data labor includes underpaid crowd work for data labeling and content moderation that are critical for training and operationalizing AI models.
It also includes the uncompensated appropriation of works by writers, artists, and programmers created outside of the AI development process that are nonetheless extracted from the Web and fed in as training data to generative AI models.
It furthermore includes the user behavior data and other data generated when users interact with and participate on the platforms, <em>i.e.</em>, simply by using these systems, they generate more data for the AI model to train on to further automate exactly the type of tasks they are currently performing.</p>

<p>It is particularly harmful that these AI technologies, developed on appropriated labor, are then employed to displace and automate the jobs of those whose labor was appropriated, <em>e.g.</em>, artists, coders, and other knowledge workers.
This may result in vicious cycles of skill transfer from people to AI models whereby proprietary AI model capabilities continue to improve—by learning from both what the workers produce and from the traces of their personal workflows captured by the AI platforms when workers interact with them—while workers progressively lose their economic value and power.</p>

<p><strong>AI for me, data labor for thee</strong>  
Another pernicious aspect of AI data labor dynamics is how they mirror and reify racial capitalism and coloniality, employ global labor exploitation and extractive practices, and reinforce the global north and south divide.
While jobs might be created worldwide in certain cases, the workers are typically low paid and deprived of any share of the profit made from technologies built with their labor.
These dynamics see the benefits of generative AI accrue to privileged populations in western and other rich countries, while data labor is relegated to already marginalized populations, for example, in the global south.
Communities that significantly contribute to AI data labor may even find their own linguistic styles <a href="https://www.theguardian.com/technology/2024/apr/16/techscape-ai-gadgest-humane-ai-pin-chatgpt">being labeled AI-ese</a> and being forced to repeatedly prove their own humanity.</p>

<p><strong>The ghost in the machine is <em>us</em></strong>  
When AI models are trained on writings and other works of people, it is not merely appropriating their labor but also their identities.
Imagine a film, a video game, or a video ad that includes an AI-generated character who is Black.
The character was created based on limited instructions in the prompt with the AI model filling in all the remaining details.
No Black person was hired to play that character nor to write it.
The AI model is simply drawing from the ghostly echoes of all the stories and lived experiences of real Black people in its training data.
Neither the model nor the person using it to create the character could ever experience the joys or the struggles of being Black, nor can they truly appreciate the history and culture of its peoples.
The character wears that identity as a hollow shell, reconstructed from an amalgamation of the entirety of human experience as it is shallowly reflected on the web, and has its strings pulled to say or act in ways that a real Black person might strongly object to.
This is the new “Digital Blackface”.
This is the great displacement of art that tells the stories of its peoples by synthetic content.</p>

<h1 id="ai-marginalizes-the-working-class">AI marginalizes the working class</h1>
<p><strong>Let them eat chatbots</strong>  
The adverse effects of AI on the working class are not restricted to the commodification and appropriation of labor.
AI technologies are also being positioned to provide cover for depriving the working class of basic services such as healthcare and education.
Most of us, I imagine, have heard at least one AI-bro predict that the solution to the scarcity of doctors, teachers, and therapists in under-funded communities is to give them access to AI chatbots designed to serve in those functions.
This idea is not just ill-premised; it is maliciously ableist, classist, and racist, and it mirrors the already ongoing dismantling of social services globally.
Our communities globally are suffering from a lack of investment because of generations of class, colonial, and racial oppression.
A capitalist society only invests in communities to the extent that doing so supports more capital accumulation in return.
It wants us to forget that healthcare and wellbeing are universal rights, not privileges.
It wants us to forget that education is supposed to liberate us and teach us to find community in each other, not to perfectly shape us into cogs for the capitalist machine.
So, of course instead of meaningful investments in our communities to affect social change it wants us to further divert investments towards Big Tech.
Chatbots in this context aren’t meant to be real solutions, just a placebo for the masses, and a cover for further dismantling of our social infrastructure.
Put Big Tech in charge of education and it almost surely also guarantees further dismantling of Humanities and every other critical pedagogy that is supposed to teach us to resist capitalism, colonization, and oppression.</p>

<p>Note: There is incredibly exciting work happening in the AI-for-science space.
That is not what I am referring to here.
The research in AI-for-science is categorically different from research in generative AI technologies like LLMs, both in its orders-of-magnitude smaller resource requirements for model training and in its much clearer paths towards real societal impact.
That is not to say it does not also raise some societal concerns, <em>e.g.</em>, by creating the conditions for more health data extraction from patients that tech companies can then monopolize.
But overall, I personally believe it is important that we separate that class of problem-specific machine learning technologies from the LLMs and other generative AIs of the day in these conversations.</p>

<p><strong>The environmental costs of AI</strong>  
The environmental impacts of global data center expansion for AI are also being felt disproportionately by already marginalized and vulnerable communities.
Climate change is a <a href="https://www.ohchr.org/en/press-releases/2022/11/global-climate-crisis-racial-justice-crisis-un-expert">racial justice issue</a>.
Climate change is a <a href="https://www.greenpeace.org/international/story/58334/climate-justice-and-social-justice-two-sides-of-the-same-coin/">social justice issue</a>.
Instead of investing in our social infrastructure, protecting the already vulnerable communities, and taking drastic steps to reduce our fossil fuel emissions, Big Tech wants you to believe that AI will solve climate change.
Of course, <a href="https://www.technologyreview.com/2024/09/28/1104588/sorry-ai-wont-fix-climate-change/">it won’t</a>.
And coincidentally all of Big Tech’s climate pledges are also a <a href="https://www.technologyreview.com/2024/07/17/1095019/google-amazon-and-the-problem-with-big-techs-climate-claims/">hot mess</a> (pun intended).
It is remarkable to me that of all things it is chatbots that the ruling class has decided is worth burning the whole planet down for.
This is naked necropolitics.</p>

<div style="align-items:center;text-align:center;font-style:italic">
  <a href="https://www.newyorker.com/cartoon/a16995">
    <img src="https://bhaskar-mitra.github.io/images/121126_a16995_g2048.webp" alt="Cartoon by Tom Toro that reads 'Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders.'" style="max-width:250px" />
    <br />
    <div style="max-width:350px;margin:auto">
      "Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders."
    </div>
  </a>
</div>
<p><br /><br />
<em>To summarize, generative AI technologies developed using theft and appropriation of data labor are commodifying the jobs of those whose labor it appropriates, and then acting to provide political cover for the dismantling of social services while also dangerously accelerating anthropogenic climate change that disproportionately impacts vulnerable and marginalized working-class people.</em></p>

<h1 id="resisting-our-ai-capitalist-overlords">Resisting our <del>AI</del> capitalist overlords</h1>
<p><strong>Imagining new futures, learning from our past</strong>  
It doesn’t have to be this way.
Technology is not inherently repressive.
But it does have its politics.
Technology is shaped by our visions of desired futures, and in turn actualizes social transformations towards envisioned futures.
And that is why the AI hype is dangerous: it manufactures a crisis of imagination, trying to convince everyone that the path we are on is the only possible future, and aggressively rebukes anyone for questioning or resisting <em>their</em> desired future being forced on all of us.</p>

<center>
  <blockquote style="max-width:700px">
"The exercise of imagination is dangerous to those who profit from the way things are because it has the power to show that the way things are is not permanent, not universal, not necessary."
  <div align="right">
– Ursula K. Le Guin
  </div>
  </blockquote>
</center>

<p>So, let us critically ask, whose sociotechnical imaginaries are we granting normative status and what myriads of radically alternative futures are we overlooking?
How does increasing dominance of Big Tech over <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4135581">academic research</a> and <a href="https://doctorow.medium.com/how-tech-does-regulatory-capture-2faa3332373a">policy</a> constrain the kinds of technologies we are allowed to imagine and build?
What would AI and other technologies look like if we designed them for futures informed by feminist, queer, decolonial, anti-racist, anti-casteist, anti-ableist, and abolitionist thoughts, and if the focus of our work was not to prop up colonial cisheteropatriarchal capitalist structures but to dismantle them?
As Ruha Benjamin argues, exercising our imagination is “an invitation to rid our mental and social structures from the tyranny of dominant imaginaries”.
So, how do we go about liberating ourselves from the dominant neocolonial capitalist imaginaries of AI, and radically reimagine what technology could be and redefine our desired relationships with technology?</p>

<center>
  <a href="https://www.ruhabenjamin.com/race-after-technology">
    <img src="https://bhaskar-mitra.github.io/images/imagination-benjamin.png" alt="Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin" style="max-height:300px" />
  </a>
</center>
<p><br />
There has been growing interest in this topic in recent years.
There was a CRAFT workshop at the ACM FAccT’24 conference titled “<a href="https://facctconference.org/2024/acceptedcraft">Better Utopias: resisting Silicon Valley ideology and decolonizing our imaginaries of the future</a>”.
DAIR organized a workshop earlier this year on “<a href="https://peertube.dair-institute.org/w/9NTbauGHQ6BGXhnhDhuxte">Imagining Possible Futures</a>”.
I have been thinking about this topic myself and wrote a <a href="https://bhaskar-mitra.github.io/showpdf/?file=19654_Mitra">paper</a> in the context of information retrieval research.</p>

<p>But this is not a call for idle speculation; it must be an integral part of our day-to-day emancipatory praxis.
For example, if you work on developing AI tools and systems for knowledge work, ask yourself who you are building it for, the workers or the bosses?
How would your approach and design fundamentally change if your goal was explicitly to shift power from bosses to workers?
What mechanisms of collective action and resistance would you build into the system design?
What technologies would you develop to empower artists and knowledge workers to safeguard the artefacts they produce from being commodified and appropriated (<a href="https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/">example</a>)?
What direct actions can <em>you</em> take today to challenge the AI hype and its uncritical adoption within the tech community?
What spaces can you create for collective learning about the history of labor movements, for collective sensemaking of our current challenges and risks, and for collective organizing (unions and otherwise)?
We must remember that it is not just our desired futures that should inform our actions in this present moment, but also our past.
Learn about the true history of the Luddite uprising, and ask yourself what would be the equivalent of “machine breaking” for our generation?</p>

<center>
  <a href="https://www.hachettebookgroup.com/titles/brian-merchant/blood-in-the-machine/9780316487740/">
    <img src="https://bhaskar-mitra.github.io/images/blood-in-the-machine-merchant.webp" alt="Blood in the Machine: The Origins of the Rebellion Against Big Tech by Brian Merchant" style="max-height:300px" />
  </a>
</center>
<p><br />
<strong>Workers of the world, unite!</strong>  
This is the moment for movement building.
It is incredibly difficult work, as it always has been throughout history.
It requires us to reconcile collective risks and systemic harms with individual circumstances.
I am empathetic towards those who may genuinely find these tools useful.
I am aware of some of the ongoing debates around AI art and accessibility, and again I empathize with those arguments.
I don’t believe these positions are irreconcilable, but it does require bringing a multitude of people to the table to build solidarity, find workable solutions, collectively strategize, and build a movement.
Because we cannot let Big Tech co-opt our differences to divide and defeat us.
We should resist when Big Tech tries to pry open those cracks and claim that AI shaming is “ableist” or that it arises from “class anxiety induced in middle class knowledge workers” and intended to protect “privileged class of knowledge work”.
Nuh uh!
Our liberation is bound together and capitulating to the politics of AI hype will be catastrophic for the entirety of the working class.
And we must also outright oppose when Big Tech claims that the path forward is to “upskill artists and knowledge workers to leverage AI” because training artists and knowledge workers to prompt an AI model is not teaching them a meaningful new skill but luring them to abdicate their craft, creativity, and critical thinking.</p>

<p>So, workers of the world, unite!
We have nothing to lose but our cha… chatbots.</p>

<p><br /><br />
<strong><em>Would you like to comment on or discuss this post?</em></strong> You can do so on these social media threads on <a href="https://bsky.app/profile/bmitra.bsky.social/post/3lvlh4n5b3225">Bluesky</a>, <a href="https://mastodon.social/@bmitra/114953730994508280">Mastodon</a>, <a href="https://www.linkedin.com/feed/update/urn:li:activity:7358167042622922752/">LinkedIn</a>, and <a href="https://x.com/UnderdogGeek/status/1952384567693734373">Twitter</a>.</p>]]></content><author><name>Bhaskar Mitra | ভাস্কর মিত্র</name><email>bhaskar.mitra@acm.org</email></author><category term="AI" /><category term="Big Tech" /><summary type="html"><![CDATA[In the grand debate about AI and its implications for society, a particularly important concern that justifiably receives a lot of attention is the likely impact of AI on workers. A lot has been written about this in recent years informed by ongoing critical work in this space. However, mainstream public discourse often reduces that conversation to “AI automation will kill jobs”. That framing, while not incorrect, is too reductive and fails to capture the systemic crisis that may be in front of us. It also has the insidious effect of bolstering a technodeterministic view that imagines a fabled race between “humans vs. machines” at the heart of the issue, with Big Tech and Silicon Valley simply acting out in roles preordained to them rather than being active participants in subverting technological progress towards unprecedented wealth and power accumulation. That framing claims with self-righteous indignation that it should be self-evident that technological progress will increasingly make bigger strides and therefore the machine is inevitably destined to surpass human capabilities at some point, and it is society that must constantly evolve to survive the new realities. What is left to debate then? If you do not agree it must be because YOU are a technophobe, a luddite, anti-progress, and anti-AI. 
I will reserve my temptation to rage against the tech-bro arrogance that dehumanizes us all by reducing us to “collections of skills of economic value to the capitalist system” for another time. But I do think we should talk about why this view is not just dangerously wrong, but nefarious at its core. It is a deliberate erasure of critical thought and scholarship on this topic to conceal the true reality of mass dispossession of workers, the undoing of decades of labor rights progress, and a drumming up of neocolonial extractive practices and class exploitation that is unfolding in front of our eyes.]]></summary></entry><entry><title type="html">Why I am leaving big tech…</title><link href="https://bhaskar-mitra.github.io/posts/2025/07/16/why-i-am-leaving-big-tech/" rel="alternate" type="text/html" title="Why I am leaving big tech…" /><published>2025-07-16T00:00:00+00:00</published><updated>2025-07-16T00:00:00+00:00</updated><id>https://bhaskar-mitra.github.io/posts/2025/07/16/why-i-am-leaving-big-tech</id><content type="html" xml:base="https://bhaskar-mitra.github.io/posts/2025/07/16/why-i-am-leaving-big-tech/"><![CDATA[<p>After spending almost two decades in big tech, I was notified last month that I am being laid off.
There have been <a href="https://techcrunch.com/2025/07/15/tech-layoffs-2025-list/">massive waves of layoffs</a> across the industry recently, and I am just one of the many tens of thousands of tech workers who have been impacted.
However, the news marked a moment of much bigger personal change for me as it prompted me to finally gather up enough courage to make a decision that I have been putting off for years.
I am leaving big tech.</p>

<p>I will no longer be pursuing any job opportunities in big tech or typical silicon-valley-type startups.
This is not a decision that I am making lightly.
In fact, the intention to leave big tech has been constantly on my mind for the last several years.
I debated a lot about how openly I want to talk about my decision and finally convinced myself that it is important that I do.
Conversations with friends, colleagues, and collaborators over the years have led me to believe that I am not alone in wrestling with this decision, and if that’s you then I want you to know that I see you and if you need someone to talk to then please feel free to reach out.</p>

<p><em>Why am I leaving big tech?</em> There are several reasons.
While I list a few individually below, I believe they are the consequences of the same underlying structural problem: an unprecedented concentration of power in the hands of those in big tech who want to deliberately enact (or, at the very least, are incapable of imagining anything besides) a techno-fascist future.
I believe that is the root cause for the momentous cultural and material changes that we are collectively witnessing sweeping across the industry.</p>

<p><strong>The genocide that no one is allowed to talk about.</strong>  
According to a <a href="https://archive.ph/20250410202716/https://www.un.org/unispal/document/un-special-committee-press-release-19nov24/">United Nations (UN) Special Committee</a>, <a href="https://amnesty.ca/wp-content/uploads/2024/12/Amnesty-International-Gaza-Genocide-Report-December-4-2024.pdf">Amnesty International</a>, <a href="https://msf.org.uk/issues/gaza-genocide">Médecins Sans Frontières</a>, and many other experts, Israel is committing <a href="https://en.wikipedia.org/wiki/Gaza_genocide">genocide in Gaza</a> against the Palestinian people.
By April 2025, the Gaza Health Ministry had reported that <a href="https://www.ochaopt.org/content/reported-impact-snapshot-gaza-strip-3-april-2025">more than fifty thousand people in Gaza had been killed</a>—i.e., 1 out of every 44 people at an average of 93 deaths per day.
These deaths are a result of <a href="https://www.sgr.org.uk/resources/gaza-one-most-intense-bombardments-history">mass bombings</a>, <a href="https://www.amnesty.org/en/latest/news/2025/07/gaza-evidence-points-to-israels-continued-use-of-starvation-to-inflict-genocide-against-palestinians/">use of starvation as a weapon of war</a>, <a href="https://www.hindrajabfoundation.org/news/targeting-life-itself-israels-systematic-destruction-of-civilian-infrastructure-in-gaza">destruction of civilian infrastructure</a>, attacks on <a href="https://www.msf.org/strikes-raids-and-incursions-year-relentless-attacks-healthcare-palestine">healthcare workers</a> and <a href="https://www.haaretz.com/israel-news/2025-06-27/ty-article-magazine/.premium/idf-soldiers-ordered-to-shoot-deliberately-at-unarmed-gazans-waiting-for-humanitarian-aid/00000197-ad8e-de01-a39f-ffbe33780000">aid-seekers</a>, and <a href="https://www.hrw.org/report/2024/11/14/hopeless-starving-and-besieged/israels-forced-displacement-palestinians-gaza">forced displacement</a>.
Big tech giants have not only played a <a href="https://www.accessnow.org/gaza-genocide-big-tech/">pivotal role</a> in materially supporting and profiting from this ongoing genocide over the last two and a half years (see <a href="https://www.aljazeera.com/news/2025/7/1/un-report-lists-companies-complicit-in-israels-genocide-who-are-they">UN report</a>), but have also <a href="https://www.washingtonpost.com/technology/2025/05/16/silicon-valley-workers-dissent-employment-layoffs-whistleblowers/">ruthlessly silenced</a> any dissenting voices among their employees.</p>

<p>In my early years in the tech industry, I learned about the <a href="https://www.theguardian.com/world/2002/mar/29/humanities.highereducation">infamous history</a> of how IBM, the big tech institution of the day, had provided key technological support for the Holocaust, committed by Nazi Germany against Jewish people.
How naïve I was then to wonder how that could ever have come to pass, and never in my wildest nightmares did I imagine that it would become the dominant tech story of our generation.</p>

<p><strong>When hype <em>is</em> the product</strong>   
A decade ago, as I was just starting out on my PhD journey in the field of information retrieval (IR), I was part of an early cohort of IR researchers who saw big potential in deep learning methods for IR tasks.
I co-organized the <a href="https://bhaskar-mitra.github.io/showpdf/?file=3053408.3053425">first neural IR workshop at SIGIR</a>, co-authored <a href="https://www.nowpublishers.com/article/Details/INR-061">a book</a> on the topic, co-developed the <a href="http://msmarco.org/">MS MARCO benchmark</a>, and co-founded the <a href="https://microsoft.github.io/msmarco/TREC-Deep-Learning">TREC Deep Learning Track</a>.
Last year, I was awarded the ACM SIGIR Early Career Researcher Award for my research on neural IR.
I mention these not to brag, but as evidence that I have felt genuine excitement over the years about the progress in the field of machine learning that I have both witnessed and, in my own capacity, contributed to.
But I am deeply disconcerted by the state of AI discourse today and the impact it has already had on industry, academia, government, and civil society.</p>

<p>The hype itself is not a new phenomenon.
Even as I was starting out in the field, I did not care much for the sudden rebranding of neural networks into “deep learning”.
In fact, in many of my early works I continued to use the phrase “neural IR” (cheekily, shortening it to “neu-ir” to sound like “new IR”) over “deep learning for IR” and other such monikers.
But the hype around “AI” has taken a much more menacing turn.
It has turned into a <a href="https://www.youtube.com/watch?v=6ovuMoW2EGk">religious, cult-ish phenomenon and a project of empire building</a> that is uncompromising in its opposition to any rational critique or discourse.
Tech companies are mandating that every team insert large language models (LLMs) in every possible product feature, and even in their own daily workflows.
Whether that has a positive or negative impact is completely beside the point.
<em>Why?</em>
Because the evidence-free promises of AI utopia that tech “leaders” are so boldly prophesying make stocks go brrrrrrrrr….
No, AI will not be a “new digital species” (however much you try to anthropomorphize next-token prediction algorithms), nor will it be a wand that magically solves climate change, war, or any of our other social problems.
But the grand fictitious narratives about AI, both the hype and the fearmongering, will continue to bolster claims of their “foundational” advancements resulting in potentially the biggest accumulation of power and wealth in the hands of a few in our lifetimes.
That <em>is</em> the intent and why AI is largely a fascist neocolonial project.</p>

<p>This is not to claim that LLMs are not useful; as a researcher, I am genuinely excited by the incredible progress in language modeling techniques in recent years.
But you cannot separate the technological artefacts from the fact that the process of building these technologies mirrors racial capitalism and coloniality, employs global labor exploitation and extractive practices, and reinforces the divide between the global north and south.
You cannot separate the technology from the exploitative appropriation of data labor necessary for its creation—including both the uncompensated appropriation of works by writers, authors, programmers, and peer production communities, and under-compensated crowd work for data labeling.</p>

<p>As an IR researcher, I am particularly concerned by the uncritical adoption of these technologies in information access, which has been a focus of <a href="https://bhaskar-mitra.github.io/showpdf/?file=978-3-031-73147-1_7">my own research</a>.
I am concerned about how institutions with access to treasure troves of people’s behavioral data combined with the capabilities of generative AI to produce persuasive language and imagery will produce tools for mass manipulation of public opinion.
These tools may look no more nefarious than the conversational information access systems of today, or may take the more explicit form of generative ads in the future.
Imagine if, every time you searched online or accessed information via your digital assistant, the information was presented to you in exactly the form most likely to alter your consumer preferences or political opinions.
This poses serious risks to the functioning of democratic societies, and even if we were to assume best intentions from specific corporations (you really shouldn’t!), the existence of such capabilities incentivizes authoritarian capture of these information access platforms.</p>

<p><strong>The co-optation of Responsible AI</strong>  
I have incredible respect for those in the industry who are doing critical work on Responsible AI / AI &amp; society.
However, I am also tremendously concerned by the shrinking power of those critical voices.
Those who do that work do so under incredible stress and <a href="https://www.washingtonpost.com/technology/2023/03/30/tech-companies-cut-ai-ethics/">risks to their own careers</a>.
The boundaries of what you are allowed to critique are shrinking rapidly.
You are allowed (for now) to get on a pulpit and talk about fairness and representational harms (don’t get me wrong, those are very important!) <em>as long as</em> it paints the institutions as “responsible corporations trying to do the right thing for society for which they should receive accolades” but never to critique the institution and definitely never if it conflicts with profit.
The bad actors in your threat models must always be <em>out there</em>, never the institutions (i.e., the platform owners).
Never critique the concentration of wealth and power in the hands of these platforms.
And, definitely definitely never talk about the <a href="https://www.techpolicy.press/booming-military-spending-on-ai-is-a-windfall-for-tech-and-a-blow-to-democracy/">military-AI complex</a>.</p>

<p>The ultimate outcome of this is the securitization of “Responsible AI”, which manifests today as the “AI safety” framing that selectively strips away any concerns of social justice from the agenda.
If Responsible AI / IR is framed to not challenge war, colonial extractive practices, racial capitalism, gender and sexual injustices, and other forms of oppression, then what are we even trying to do as a community?</p>

<p><br />
<strong>What’s next?</strong>  
I don’t want to sound blasé but getting laid off may have been the best thing to happen to me this year.
I don’t want to minimize how difficult it is to be on the receiving end of that news, and I am quite aware of my own privileges for having a permanent residence status in Canada and sufficient financial stability for the short term.
I don’t wish this on anyone, and my heart goes out to everyone who has been impacted.
If you have been impacted by recent layoffs and want to talk, please reach out!
But in my personal context, this sincerely feels like a blessing in disguise.
It took me a while to acknowledge this but every passing day since I got the news of the layoff, I have genuinely felt more excited about the future.</p>

<p>Over the years, I have had the immense privilege of working with so many incredibly kind and thoughtful people who mentored me, collaborated with me, and critically shaped me as a researcher and as a person.
I am filled with utmost gratitude to all of you, and I hope our paths will continue to cross! 🙏🏽</p>

<p>And as I look to the future, I am both excited and nervous.
I want to spend more time <a href="https://bhaskar-mitra.github.io/reading/">reading</a> and engaging with critical scholarship.
I want to spend more time in movement spaces.
I want to find people who are thinking about alternatives to “big tech” and fighting back against the global slide into techno-fascism.
I want to continue working on information access and <a href="https://bhaskar-mitra.github.io/showpdf/?file=19654_Mitra">reimagine</a> very different futures for how we as individuals and collectively as society experience information.
I want to explore spaces where I can do research grounded explicitly in humanistic anti-fascist anti-capitalist decolonial values.
I want to continue my work on <a href="https://www.youtube.com/watch?v=wK-nHCg_ZHg">emancipatory information access</a> and realize my research as part of my emancipatory praxis.
And above all, I want to build technology that humanizes us, connects us, liberates us, gives us joy.</p>

<p>So, if you want to chat about any of the above or have any advice / recommendations for me, please reach out! I would love to hear from you.</p>

<p><br />
I leave you with one of my favorite quotes…</p>

<center>
  <blockquote style="max-width:500px">
    "Another world is not only possible, she is on her way. On a quiet day, I can hear her breathing."
    <div align="right">
      – Arundhati Roy
    </div>
  </blockquote>
</center>

<p><br />
Abolish big tech. Free Palestine.</p>

<p><br /><br />
<strong><em>Would you like to comment on or discuss this post?</em></strong> You can do so on these social media threads on <a href="https://bsky.app/profile/bmitra.bsky.social/post/3lu6674d5z22x">Bluesky</a>, <a href="https://mastodon.social/@bmitra/114869106092488513">Mastodon</a>, <a href="https://www.linkedin.com/feed/update/urn:li:activity:7351622772273344512/">LinkedIn</a>, and <a href="https://x.com/UnderdogGeek/status/1945857031954411743">Twitter</a>.</p>

<p><strong>P.S.</strong> An updated version of this letter has been <a href="https://disjunctionsmag.com/articles/why-leaving-big-tech/">published</a> in the inaugural issue of the <a href="https://disjunctionsmag.com/">Disjunctions magazine</a>.</p>]]></content><author><name>Bhaskar Mitra | ভাস্কর মিত্র</name><email>bhaskar.mitra@acm.org</email></author><category term="Big Tech" /><category term="AI" /><category term="Tech for good" /><category term="Gaza" /><summary type="html"><![CDATA[After spending almost two decades in big tech, I was notified last month that I am being laid off. There have been massive waves of layoffs across the industry recently, and I am just one of the many tens of thousands of tech workers who have been impacted. However, the news marked a moment of much bigger personal change for me as it prompted me to finally gather up enough courage to make a decision that I have been putting off for years. I am leaving big tech.]]></summary></entry></feed>