Why I am leaving big tech…

After spending almost two decades in big tech, I was notified last month that I am being laid off. There have been massive waves of layoffs across the industry recently, and I am just one of the many tens of thousands of tech workers who have been impacted. However, the news marked a moment of much bigger personal change for me, as it prompted me to finally gather up enough courage to make a decision that I had been putting off for years. I am leaving big tech.

I will no longer be pursuing any job opportunities in big tech or typical Silicon Valley-type startups. This is not a decision that I am making lightly. In fact, the intention to leave big tech has been constantly on my mind for the last several years. I debated a lot about how openly I wanted to talk about my decision and finally convinced myself that it is important that I do. Conversations with friends, colleagues, and collaborators over the years have led me to believe that I am not alone in wrestling with this decision. If that’s you, I want you to know that I see you, and if you need someone to talk to, please feel free to reach out.

Why am I leaving big tech? There are several reasons. While I list a few individually below, I believe they are consequences of the same underlying structural problem: an unprecedented concentration of power in the hands of those in big tech who want to deliberately enact (or, at the very least, are incapable of imagining anything besides) a techno-fascist future. I believe that is the root cause of the momentous cultural and material changes that we are collectively witnessing sweep across the industry.

The genocide that no one is allowed to talk about.   According to a United Nations (UN) Special Committee, Amnesty International, Médecins Sans Frontières, and many other experts, Israel is committing genocide in Gaza against the Palestinian people. By April 2025, the Gaza Health Ministry had reported that more than fifty thousand people in Gaza had been killed, i.e., 1 out of every 44 people, at an average of 93 deaths per day. These deaths are the result of mass bombings, the use of starvation as a weapon of war, the destruction of civilian infrastructure, attacks on healthcare workers and aid-seekers, and forced displacement. Big tech giants have not only played a pivotal role in materially supporting and profiting from this ongoing genocide over the last two and a half years (see UN report), but have also ruthlessly silenced any dissenting voices among their employees.

In my early years in the tech industry, I learned about the infamous history of how IBM, the big tech institution of its day, provided key technological support for the Holocaust committed by Nazi Germany against Jewish people. How naïve I was then to wonder how that could ever have come to pass, and never in my wildest nightmares did I imagine that it would become the dominant story of tech for our generation.

When hype is the product   A decade ago, as I was just starting out on my PhD journey in the field of information retrieval (IR), I was part of an early cohort of IR researchers who saw big potential in deep learning methods for IR tasks. I co-organized the first neural IR workshop at SIGIR, co-authored a book on the topic, co-developed the MS MARCO benchmark, and co-founded the TREC Deep Learning Track. Last year, I was awarded the ACM SIGIR Early Career Researcher Award for my research on neural IR. I say this not to brag, but as evidence that I have felt genuine excitement over the years about the progress in the field of machine learning, progress that I have both witnessed and, in my own capacity, contributed to. But I am deeply disconcerted by the state of AI discourse today and the impact it has already had on industry, academia, government, and civil society.

The hype itself is not a new phenomenon. Even as I was starting out in the field, I did not care much for the sudden rebranding of neural networks as “deep learning”. In fact, in many of my early works I continued to use the phrase “neural IR” (cheekily shortening it to “neu-ir” to sound like “new IR”) over “deep learning for IR” and other such monikers. But the hype around “AI” has taken a much more menacing turn. It has turned into a religious, cult-ish phenomenon and a project of empire building that is uncompromising in its opposition to any rational critique or discourse. Tech companies are mandating that every team insert large language models (LLMs) into every possible product feature, and even into their own daily workflows. Whether that has a positive or negative impact is completely beside the point. Why? Because the evidence-free promises of AI utopia that tech “leaders” are so boldly prophesying make stocks go brrrrrrrrr…. No, AI will not be a “new digital species” (however much you try to anthropomorphize next-token prediction algorithms), nor will it be a wand that magically solves climate change or war or any of our other social problems. But the grand fictitious narratives about AI, both the hype and the fearmongering, will continue to bolster claims of “foundational” advancements, resulting in potentially the biggest accumulation of power and wealth in the hands of a few in our lifetimes. That is the intent, and that is why AI is largely a fascist neocolonial project.

This is not to claim that LLMs are not useful, and as a researcher I am genuinely excited by the incredible progress in language modeling techniques in recent years. But you cannot separate the technological artefacts from the fact that the process of building these technologies mirrors racial capitalism and coloniality, employs global labor exploitation and extractive practices, and reinforces the divide between the global north and south. You cannot separate the technology from the exploitative appropriation of data labor necessary for its creation, including both the uncompensated appropriation of works by writers, authors, programmers, and peer production communities, and the under-compensated crowd work for data labeling.

As an IR researcher, I am particularly concerned by the uncritical adoption of these technologies in information access, which has been a focus of my own research. I am concerned about how institutions with access to treasure troves of people’s behavioral data, combined with the capability of generative AI to produce persuasive language and imagery, will build tools for the mass manipulation of public opinion. These tools may look no more nefarious than the conversational information access systems of today, or may take the more explicit form of generative ads in the future. Imagine if, every time you searched online or accessed information via your digital assistant, the information were presented to you in exactly the form most likely to alter your consumer preferences or political opinions. This poses serious risks to the functioning of democratic societies, and even if we were to assume the best intentions from specific corporations (you really shouldn’t!), the very existence of such capabilities incentivizes authoritarian capture of these information access platforms.

The co-optation of Responsible AI   I have incredible respect for those in the industry who are doing critical work on Responsible AI / AI & society. However, I am also tremendously concerned by the shrinking power of those critical voices. Those who do that work do so under incredible stress and at great risk to their own careers. The boundaries of what you are allowed to critique are shrinking rapidly. You are allowed (for now) to get on a pulpit and talk about fairness and representational harms (don’t get me wrong, those are very important!) as long as it paints the institutions as “responsible corporations trying to do the right thing for society, for which they should receive accolades”, but never to critique the institution itself, and definitely never if it conflicts with profit. The bad actors in your threat models must always be out there, never the institutions themselves (i.e., the platform owners). Never critique the concentration of wealth and power in the hands of these platforms. And, definitely definitely never talk about the military-AI complex.

The ultimate outcome of this is the securitization of “Responsible AI”, which manifests today as the “AI safety” framing that selectively strips away any concerns of social justice from the agenda. If Responsible AI / IR is framed so as never to challenge war, colonial extractive practices, racial capitalism, gender and sexual injustices, and other forms of oppression, then what are we even trying to do as a community?


What’s next?   I don’t want to sound blasé, but getting laid off may have been the best thing to happen to me this year. I don’t want to minimize how difficult it is to be on the receiving end of that news, and I am quite aware of my own privileges in having permanent residence status in Canada and sufficient financial stability for the short term. I don’t wish this on anyone, and my heart goes out to everyone who has been impacted. If you have been impacted by recent layoffs and want to talk, please reach out! But in my personal context, this sincerely feels like a blessing in disguise. It took me a while to acknowledge this, but with every passing day since I got the news of the layoff, I have genuinely felt more excited about the future.

Over the years, I have had the immense privilege of working with so many incredibly kind and thoughtful people who mentored me, collaborated with me, and critically shaped me as a researcher and as a person. I am filled with utmost gratitude to all of you, and I hope our paths will continue to cross! 🙏🏽

And as I look to the future, I am both excited and nervous. I want to spend more time reading and engaging with critical scholarship. I want to spend more time in movement spaces. I want to find people who are thinking about alternatives to “big tech” and fighting back against the global slide into techno-fascism. I want to continue working on information access and to reimagine very different futures for how we, as individuals and collectively as a society, experience information. I want to explore spaces where I can do research grounded explicitly in humanistic, anti-fascist, anti-capitalist, decolonial values. I want to continue my work on emancipatory information access and realize my research as part of my emancipatory praxis. And above all, I want to build technology that humanizes us, connects us, liberates us, and gives us joy.

So, if you want to chat about any of the above or have any advice / recommendations for me, please reach out! I would love to hear from you.


I leave you with one of my favorite quotes…

"Another world is not only possible, she is on her way. On a quiet day, I can hear her breathing."
– Arundhati Roy


Abolish big tech. Free Palestine.