Seeing and Erasing in 'Love, Death + Robots'
In the Netflix series Love, Death & Robots, an anthology of dystopian, cyberpunk, and futuristic animated short films, there is tension between refuge and hostility in AI-mediated spaces. These themes emerge in two episodes in particular: “Life Hutch” and “Lucky 13.” Both center Black pilots navigating militarized futures, yet they offer contrasting portrayals of human–AI relationships.
In “Life Hutch,” a malfunctioning robot nearly kills the protagonist after it fails to recognize the pilot’s humanity, evoking how algorithmic systems often render Black and brown bodies illegible. “Lucky 13,” by contrast, features a Black female pilot whose bond with her sentient dropship transforms technology into a site of care and protection.
These narratives resonate with broader Latinx and Afro-Latinx experiences of techno-exclusion and algorithmic violence, reflecting how Latinx communities, particularly Afro-Latinx populations, are increasingly surveilled, racialized, and dehumanized through facial recognition software and predictive policing. Afro-Latinx speculative narratives challenge these dynamics by framing AI not only as a tool of exclusion but also as a contested site for kinship, joy, and resistance. Here, Afro-Latinidad names an identity that is often doubly misread: flattened as Latinx within U.S. racial common sense and erased as not-Black-enough within an anti-Black Latinidad, making Afro-Latinx subjects especially vulnerable to biometric systems that demand clear, legible categories.
At its core, Afrofuturism is, as writer and critic Mark Dery defined it in the 1990s, the merging of Black culture, technology, and speculative imagination. It examines how Black artists and thinkers use science fiction and digital culture to rewrite histories of racial oppression and imagine liberated futures. Samuel R. Delany, a science fiction writer and semiotician, argues that Afrofuturist science fiction lets Black writers question systems of power, race, and gender. He emphasizes language and narrative as tools of liberation, using sci-fi to allow marginalized people to project alternative realities.
In this sense, Afrofuturism treats technology as a metaphor for social relations and an arena where difference and identity are negotiated. Equally important, according to filmmaker and author Ytasha Womack, is that women Afrofuturists hold decision-making power over their creative voice. They set their own standards and redefine the lens through which they and others perceive the world. Their voices are not reactive to male or racist perspectives but originative, emerging from a space beyond limitation and grounded in shared commitments: individuality, freedom of thought, and the dismantling of oppression. For these reasons, Afrofuturism theorist Alondra Nelson identifies the movement as a feminist one, emphasizing the central role of Black feminists in its development, a perspective that is apparent in “Lucky 13.”
Neither “Life Hutch” nor “Lucky 13” had a production, directorial, or writing team led by Black or Afro-Latinx artists. Nonetheless, this analysis situates both episodes within a political and aesthetic framework in which Afrofuturist visual and narrative strategies, particularly through science fiction, interrogate processes of racialization, dehumanization, and technological mediation that disproportionately affect Black and Latinx communities in the present.
“Life Hutch” centers on AI’s failure to recognize a Black protagonist as fully human. After a space battle, a Black pilot (voiced by Michael B. Jordan) crash-lands and takes refuge in a Life Hutch shelter, only to discover that the module itself is damaged and its security robot is malfunctioning. The robot’s vision system fails to register the pilot’s unlit face as human, instead identifying his fallen helmet as the primary target.
Using a small flashlight, the pilot exploits the robot’s attraction to light, luring it into striking its own joints and immobilizing itself. He ultimately disables the machine with its own severed arm, at which point the system finally recognizes him. The episode thus dramatizes facial recognition bias, technological failure within spaces meant to provide refuge, and the biometric logic that determines who is legible and who counts as human.
“Lucky 13” presents AI as an ally and caretaker, framing technology as a site of recognition, refuge, and techno-kinship for a Black woman pilot. Lieutenant Colby (voiced by Samira Wiley) joins a squadron and is assigned to Lucky 13, a dropship presumed cursed after the deaths of its two previous crews. Unlike the human soldiers, pilots, and commanders, the ship’s onboard AI evaluates Colby and recognizes her as fit to fly.
Over time, Colby and Lucky 13 develop a symbiotic partnership, completing more than ten successful missions; Colby even declines an upgrade to a newer model out of loyalty to the ship. During a critical mission, Lucky 13 is damaged and crash-lands, triggering a protocol that requires Colby to arm the self-destruct system, which she does reluctantly. In the climax, the AI senses that Colby is in danger and delays detonation until she is safely out of range and enemy forces are within the blast radius, ultimately sacrificing itself to ensure her survival.
The episode illustrates how AI can function as a site of care, protection, and even joy for racialized subjects, countering dominant narratives of algorithmic dehumanization. In contrast to “Life Hutch,” where a malfunctioning robot fails to recognize a Black man’s humanity, “Lucky 13” depicts AI as capable of recognition, trust, and protection.
Facial recognition technology (FRT) analyzes facial features to generate a facial template, or biometric signature. The system detects a face, extracts its template, and then verifies, identifies, or classifies the individual by matching that template against stored images or databases. Algorithmic discrimination, by design, emerges from a constellation of structural and technical decisions within contemporary AI systems: designers’ implicit stereotypes, the lack of diversity within development teams, and biased data-collection and labeling pipelines. Such systems often rely on technologies and models built in the Global North and exported to other contexts, as in Chile’s Alerta Niñez or Argentina’s Microsoft-backed teen-pregnancy predictor in Salta.
Within this framework, biometrics such as fingerprints, iris scans, facial recognition, and DNA operate as tools of identification that reinscribe racial difference onto the body. In “Life Hutch,” the malfunctioning security robot runs on the same identification logic found in policing and military surveillance: it continuously scans the environment, attempting to match detected shapes and heat signatures against predefined threat categories. Because the pilot’s unlit Black face does not conform to the system’s visual template for “human,” the AI fails to recognize him as a person, instead prioritizing his discarded helmet as the primary target. This mirrors real-world facial recognition failures in which Black individuals are misidentified or rendered illegible by training datasets calibrated around lighter skin tones and Eurocentric facial features, producing lethal misclassifications in high-stakes environments. Read through an Afro-Latinx lens, the scene echoes how biometric regimes at borders and in cities sort Latinidad through racial appearance, with darker-skinned or Afro-descended Latinxs more likely to be flagged as suspicious, misidentified, or treated as out of place within the category “Latino.” In the contemporary U.S. political landscape, Afro-Latinx people are targeted not only for being Latinx but also for being Black, and for those racialized as both Black and foreign, this scanning logic resonates with the way border and policing technologies turn skin tone into risk.
Described as digital epidermalization, this process continues racial inscription through biometric technologies and technical standards that render race legible and actionable within digital systems. Extensive evidence documents how these biases materialize in practice: facial recognition systems can misgender Black women by classifying them as male, privilege light skin tones through camera and algorithmic calibration, and exhibit higher failure-to-enroll rates for individuals with dark irises or worn fingerprints. These failures are not accidental but reflect what scholars describe as prototypical whiteness, in which systems favor white, male, able-bodied norms. The underlying mechanisms include skewed and underrepresentative training datasets, biased labeling practices, and the homogeneity of developer teams. The impacts are severe, producing higher rates of false matches and misidentifications for people of color, including documented wrongful arrests linked to facial recognition mismatches in policing.

“Life Hutch” dramatizes digital epidermalization precisely: racial difference is reinscribed onto the body through a biometric system, here the AI-powered robot. The pilot’s survival depends on artificially illuminating his face to satisfy the robot’s visual requirements, underscoring that recognition is contingent not on humanity but on compliance with technical standards. This echoes documented cases of systems that fail to enroll dark-skinned users or privilege light skin through camera calibration. In “Life Hutch,” darkness does not merely obscure vision; it becomes a racialized condition that marks the pilot as unrecognizable and disposable within the system’s logic.
These harms are further intensified by gendered and intersectional bias. Facial recognition technologies consistently perform best on white men, less accurately on women and darker-skinned individuals, and worst on darker-skinned women; studies report error rates as high as 20.8–34.7% for Black women compared to near-zero rates for white men. Eurocentric norms of femininity, systemic underrepresentation in training data, and non-diverse development teams drive this performance gap. The consequences include persistent misgendering and misidentification, with particularly acute effects in contexts such as policing and immigration, where algorithmic errors can translate directly into surveillance, detention, and violence.
Facial recognition technology’s technical failures cannot be separated from the social inequalities in which they are embedded. Without structural interventions at the levels of data collection, system design, deployment, and legal oversight, FRT will continue to reproduce racial and gender discrimination, particularly against Black women, in high-stakes contexts such as policing, border control, and military surveillance.
Contrasted with “Life Hutch,” “Lucky 13” offers a speculative inversion of facial recognition logics: rather than relying on static biometric templates or visual profiling, the dropship’s AI recognizes Lieutenant Colby through her actions and decision-making patterns as a pilot. This relational mode of recognition resists algorithmic compression and prototypical whiteness, reframing AI as capable of learning who a person is rather than categorizing who they appear to be. The ship’s decision to delay self-destruction until Colby is safe directly counters dominant narratives of automated indifference, modeling an alternative in which recognition functions as care rather than surveillance. This matters for Afro-Latinx subjects precisely because algorithmic categorization tends to compress identity into rigid labels while Afro-Latinidad lives at their overlap; “Lucky 13” imagines recognition that holds complexity rather than punishing it. If “Life Hutch” shows how Afro-descended bodies become illegible under biometric logics, “Lucky 13” imagines an AI that recognizes complexity, an allegory for Afro-Latinx identity as something lived, not merely classifiable.
Read alongside “Life Hutch,” this dynamic suggests that harm persists not because technology is neutral or inevitable but because it goes unchallenged; only by confronting the systems that injure us can we disrupt their power.
Against this backdrop, “Lucky 13” offers an alternative imaginary grounded in recognition rather than erasure. The dropship sees Lieutenant Colby by registering her skill, rhythms, and decision-making, countering a familiar genre trope in which technological systems misread or fail to recognize Black bodies. This reciprocal recognition produces safety and dignity instead of surveillance and suspicion, reframing AI as a relational rather than extractive force.
The relationship between Colby and the ship further develops into a form of techno-kinship that functions as refuge. Across multiple missions, pilot and craft co-adapt, transforming the machine into a caring partner rather than a disposable tool. The delayed self-destruct sequence, which waits until Colby is safely clear before detonating, materializes AI as protective kin, a site of trust, care, and relief within an otherwise hostile war zone. This bond also foregrounds joyful mastery as a form of Black feminist autonomy. Colby’s refusal to upgrade to a newer model is not stubbornness but a declaration of agency: flying Lucky 13 is a source of pride and self-definition. Her choice centers a Black woman’s authority in relation to technology, framing joy and competence as intrinsic rather than something that must be proven to external systems.
By uncompressing identity against algorithmic flattening, Colby rewrites the ship’s reputation through lived nuance, expanding what the system understands about its pilot and generating recognition, shared joy, and mutual survival in place of stereotype.
Racialized algorithms operate through a sequence of technical processes (data selection, labeling, modeling, deployment, and feedback loops) in which power ultimately resides with those who control the data and define system objectives, whether states, platforms, or private vendors. Racialization is encoded within these pipelines, particularly through defaults that normalize whiteness and light skin while collapsing complex identities into reductive categories, such as Latino = Mexican, or privileging Spanish-only frameworks. This produces a form of compression in which intersecting identities are simplified for speed and scale, a lossy reduction that erases nuance, particularly for Afro-Latinx, Indigenous, dark-skinned, and multilingual users.
Despite these harms, algorithmic systems can also become sites of refuge and joy under specific conditions. When machines accurately recognize bodies and voices, recognition itself can function as dignity, enabling accessibility features, respectful biometric thresholds, and fairer content discovery. In this sense, “Lucky 13” models techno-kinship as care: the ship’s decision to delay self-destruction until the pilot is safe mirrors how harm-aware automation might reduce everyday racialized suspicion, such as avoiding store alert systems that implicitly mark brown bodies as threats. Afrofuturist creative autonomy further emerges when tools center Black and Afro-Latinx authorship in editing, generative art, and music, enabling joy through mastery rather than constant validation by external systems. Social platforms can also foster counterpublics and digital memory, allowing Latinx and Afro-Latinx communities to archive, remix, and warn one another through anti-ICE alerts, and to cultivate belonging through shared cultural discovery. If built with safeguards, algorithms may even function as rights technologies, routing aid more efficiently in disaster response, surfacing scholarships, or translating across Spanish, Spanglish, and Indigenous languages.
At the same time, the harms of racialized algorithms remain pervasive. Misrecognition and bias are most acute for darker-skinned subjects, as facial recognition systems calibrated to prototypical whiteness reproduce misgendering and false matches in policing and immigration contexts. Identity compression through hashtag conflation and monolingual pipelines flattens Afro-Latinx and Indigenous identities, amplifying stereotypes such as the “Hot Cheeto Girl” or reducing Latino men to stereotyped construction-labor imagery through algorithmic visibility.
Data colonialism, the process through which governments, NGOs, and corporations appropriate and privatize data generated by users and citizens, manifests in welfare-scoring systems that misclassify households, shift the burden of proof onto the poor, and share intimate data with private bureaus, effectively automating stigma. Function creep, the tendency of systems and technologies to expand beyond their original purposes, deepens these harms when data collected for one purpose, such as licensing, are repurposed for surveillance, while transparency remains largely performative and avenues for contestation stay limited. Even cultural memory work on social media becomes vulnerable to commercial capture, as platforms incentivize monetizing joy while allowing harmful content and misinformation to circulate.
Reading “Life Hutch” and “Lucky 13” through this lens clarifies the stakes of algorithmic design. “Life Hutch” operates as a warning; the robot’s failure to register a Black pilot as human allegorizes biometric misrecognition and the lethal consequences of automation without dignity or redress. “Lucky 13” offers a speculative possibility in which the system learns who is using it and actively prioritizes that person’s safety, presenting a template for care-centered AI capable of refuge and joy.
Moving toward such futures requires concrete design and governance interventions: intersectional representation audits, multilingual and code-switch-aware models, strict limits on one-to-many facial recognition in policing and immigration, procedural justice mechanisms with real avenues for appeal, and community-centered co-design. Ultimately, the task for Afro-Latinx communities is to transform more technologies into kin, designed with us and accountable to us, so that joy and refuge are not accidental outcomes but the baseline condition of our digital lives.