In the past two decades, discourse on systemic racism has expanded beyond the realms of politics, economics, and education into the digital sphere, a space often perceived as neutral and free from human bias. Technology, particularly algorithms, is frequently regarded as an objective, mechanical, and rational instrument. However, recent studies reveal that algorithms in fact absorb social biases from data and reproduce them in new forms that are more difficult to detect (Papakyriakopoulos & Mboya, 2021). This phenomenon raises a fundamental question: to what extent can algorithms embedded in everyday features, such as emoji, autocorrect, and search engines, serve as mediums that perpetuate systemic racism?

Emoji, for instance, emerged as a visual language designed to simplify global communication. Initially, emojis were considered universal and neutral, but their evolution has revealed the complexity of social representation. When skin tone variations were introduced to promote diversity, new dilemmas arose concerning identity, stereotypes, and social acceptance. Several studies highlight that the choice of skin tone in emoji may reflect implicit or explicit preferences related to race (Gill & Lippmann, 2024). This illustrates how seemingly simple technical designs actually contain political dimensions of identity.

On the other hand, autocorrect systems and search engines demonstrate different yet equally significant forms of bias in shaping user experience. Autocorrect failures, such as cases in which certain words are automatically corrected into terms with racial or pejorative connotations, do not merely reflect technical flaws; they also reveal how algorithmic training data carry the historical legacy of discrimination embedded in language, texts, and online interactions (Kacperski, Rains, & Weber, 2023). Machine learning processes that rely on large-scale data ultimately reinforce biased associative patterns, producing a “false truth” that appears objective but is laden with symbolic injustice. Similarly, search results often reinforce gender and racial stereotypes: from the presentation of images and the ranking of results to the prioritization of keywords, search engines have never been entirely neutral (Makhortykh, Urman, & Ulloa, 2021). Such representations are then internalized by users, shaping public opinion and even influencing how certain groups are perceived within society.

San Jose Tech, as one of the major technology hubs in the United States, has often come under scrutiny in this context, as it is seen to represent the dominant face of the global industry. The narrative that unfolds reveals that algorithmic bias is not merely a technical error occurring by chance, but rather a reflection of the social, political, and economic structures that sustain the existence of large technology corporations. In other words, technology is not born in a vacuum: it is produced, maintained, and distributed by social actors with specific backgrounds, from cultural values and economic perspectives to political orientations, and it thereby implicitly embeds values that may reinforce inequality. This underscores that algorithmic bias must not be understood merely as a computational problem, but as a social phenomenon that requires a critical reading of the power relations underlying the design and implementation of contemporary digital technologies.

Emoji as Social Representation

Emoji are often positioned as part of the “global language” of the digital era. They are designed to transcend linguistic boundaries and to make the expression of emotion in online communication easier. On closer inspection, however, the universality attributed to emoji gives way to problems of social representation. Visual representation is never neutral; it always carries design choices, cultural contexts, and political implications (Hall, 1997). Thus, emoji can be understood as a medium that simultaneously adopts and reflects existing social structures.

The academic debate on emoji intensified after the introduction of skin tone variations in 2015. This innovation was intended as an inclusive step, providing space for users from diverse ethnic backgrounds to represent themselves. Yet, this seemingly positive effort gave rise to new ambiguities. First, the choice of skin tone made the act of sending an emoji no longer merely an emotional expression but also a statement of identity. Second, research indicates that skin tone choices in emoji are often influenced by implicit biases: for example, white users more frequently choose the standard yellow emoji because it is perceived as “neutral,” while darker skin tone emojis are rarely used except by users of certain racial identities (Gill & Lippmann, 2024). This phenomenon highlights the existence of a representational hierarchy in digital spaces.
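At the technical level, this dynamic is built into the Unicode standard itself: Unicode 8.0 (2015) added five skin tone modifiers (U+1F3FB through U+1F3FF, based on the Fitzpatrick dermatological scale) that can be appended to a compatible base emoji, while the unmodified character renders in the default yellow. A minimal Python sketch of the mechanism follows; it is illustrative only, and how each sequence actually renders depends on the platform’s fonts.

```python
# Unicode 8.0 (2015) added five emoji skin tone modifiers based on the
# Fitzpatrick dermatological scale. Appending one to a compatible base
# emoji changes its rendered tone; the bare base falls back to yellow.
BASE_WAVE = "\U0001F44B"  # WAVING HAND SIGN

MODIFIERS = {
    "type-1-2": "\U0001F3FB",  # light skin tone
    "type-3":   "\U0001F3FC",  # medium-light skin tone
    "type-4":   "\U0001F3FD",  # medium skin tone
    "type-5":   "\U0001F3FE",  # medium-dark skin tone
    "type-6":   "\U0001F3FF",  # dark skin tone
}

print("default:", BASE_WAVE)  # the implicit "standard" yellow glyph
for label, modifier in MODIFIERS.items():
    # Base and modifier are separate code points that render as one glyph.
    print(f"{label}: {BASE_WAVE + modifier}")
```

That the bare code point, rather than any explicit choice, yields the yellow glyph is itself a design decision, and it is precisely the kind of hidden norm examined below.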

From the perspective of Hall’s (1997) representation theory, visual symbols are never value-free. The yellow emoji, positioned as the default, signifies a dominant position assumed to be universal, while darker skin tone variations are considered supplementary. The political implication of this design is the establishment of hidden norms about who is regarded as the “standard” and who is positioned as the “other.”

Furthermore, empirical studies have found that emojis with darker skin tones are often subject to jokes or mockery in digital public spaces. For instance, the use of Black skin-tone emojis by non-Black individuals is sometimes interpreted as a form of cultural appropriation (Ahmed, Vidgen, & Hale, 2022). This underscores that although emojis were created to simplify expression, they have, in practice, opened new spaces for discriminatory practices.

Therefore, emojis cannot be reduced to neutral icons. They are symbols of social representation tied to the dynamics of power, identity, and politics. Analysis of emoji reminds us that even seemingly trivial elements of communication may carry layers of meaning that reflect social inequality.

Autocorrect, Search Engines, and Algorithmic Bias

If emojis illustrate how visual representations can reflect social bias, then autocorrect and search engines reveal the linguistic dimension of algorithmic bias. Both are integral to the infrastructure of everyday digital communication: autocorrect is embedded in nearly every smart device to aid in spelling correction, while search engines serve as the primary gateway for users to access global information. However, precisely because of their pervasiveness, the biases embedded within them can have far-reaching consequences that often go unnoticed.

Autocorrect is designed to automatically fix spelling errors. Yet, this feature is not exempt from the influence of its training data. In some cases, autocorrect produces substitutions that generate discriminatory associations. This phenomenon is not merely a “technical bug” but rather a manifestation of historical bias internalized within linguistic databases. This can be explained through the concept of algorithmic inheritance, which refers to how algorithms inherit discriminatory patterns from social data (Kacperski et al., 2023).
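To make this inheritance concrete, consider a toy frequency-based spelling corrector, a deliberately simplified sketch rather than a description of any production system: candidates one edit away from the typed word compete, and corpus counts alone decide the winner. The corpus below is hypothetical, chosen to show how skew in the training text can mechanically “correct” a less-represented name into a more common one.

```python
from collections import Counter

# Hypothetical corpus counts: the training text over-represents one
# name, standing in for historically skewed language data.
CORPUS = Counter({"mark": 950, "marek": 5, "their": 800, "there": 1200})

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one delete, transpose, replace, or insert away."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in LETTERS]
    inserts = [a + c + b for a, b in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Return the known candidate with the highest corpus frequency."""
    candidates = [w for w in edits1(word) if w in CORPUS] or [word]
    return max(candidates, key=lambda w: CORPUS[w])

# "marke" is one edit from both "mark" (delete) and "marek" (transpose),
# but the skewed counts silently favor the more common name every time.
print(correct("marke"))  # -> mark
```

Production autocorrect models are far more sophisticated, but the underlying principle is the same: corrections are ranked by patterns learned from historical text, and that ranking is the channel through which discriminatory patterns are inherited.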

The role of search engines is even more complex. Algorithmic audits show that certain queries can systematically generate biased results. For instance, searches for the term “CEO” often yield images of white men, while terms such as “Black girls” at certain times have returned hypersexualized content (Makhortykh et al., 2021). These examples underscore how search algorithms embed hierarchies of representation, whereby dominant groups are more frequently displayed, while minority groups are distorted or marginalized.

From a critical perspective, search engines operate as gatekeepers of information that distribute visibility (Papakyriakopoulos & Mboya, 2021). Whoever appears on the first page of search results ultimately shapes public perception. When algorithms prioritize certain representations, they do not merely present information; they actively construct social norms about who is considered “normal” and who is positioned as the “other.”
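One way audits operationalize this distribution of visibility is with a rank-discounted exposure measure, under which results near the top of the page receive most of the weight. The following sketch is a simplified illustration of such a metric; the group labels are hypothetical stand-ins for annotations that a real audit would collect across many repeated queries.

```python
import math
from collections import defaultdict

def exposure_by_group(ranked_labels):
    """Share of rank-discounted exposure each group receives.

    Uses the common 1 / log2(rank + 1) discount: position 1 counts
    fully, position 10 for roughly 0.29, mirroring how little
    attention results further down the page receive.
    """
    exposure = defaultdict(float)
    for rank, group in enumerate(ranked_labels, start=1):
        exposure[group] += 1.0 / math.log2(rank + 1)
    total = sum(exposure.values())
    return {g: e / total for g, e in exposure.items()}

# Hypothetical annotations for the top 10 image results of one query.
results = ["white_man"] * 6 + ["white_woman"] * 2 + ["black_woman"] * 2

for group, share in exposure_by_group(results).items():
    print(f"{group}: {share:.0%} of exposure")
```

On this toy results page, the first group collects roughly three-quarters of the exposure while holding six of the ten slots: presence lower down the ranking does not translate into proportional visibility.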

Social Consequences and Pathways to Solutions

An analysis of emojis, autocorrect, and search engines reveals a clear common thread: algorithms are not entirely neutral, contrary to what is often assumed in discourses on technology. Rather than functioning merely as technical instruments, algorithms absorb, reproduce, and distribute biases deeply rooted in social and cultural structures. When these biases involve issues of race, the consequences extend beyond the technical realm, reinforcing long-standing historical patterns of discrimination embedded in everyday social practices.

At the micro level, individuals may feel unrepresented, misunderstood, or even humiliated by digital features that implicitly position their identities as “other” or deviant from dominant norms. Such seemingly small experiences, when repeated, can generate psychological strain, erode self-confidence, and intensify feelings of alienation. At the macro level, societies are collectively inundated with skewed representations that not only reinforce stereotypes but also reconstruct social hierarchies in the digital sphere, thereby narrowing the space for diversity, plural identities, and alternative narratives (Ahmed et al., 2022).

Several solutions have been proposed to address these issues. First, algorithmic transparency: technology companies should disclose design processes and training data to allow public scrutiny of bias. Second, independent algorithmic audits: academic researchers should collaborate with tech companies to test systems for potentially discriminatory outcomes (a minimal example of such a check is sketched below). Third, diversity in development teams: incorporating varied backgrounds into design teams can reduce the dominance of homogeneous perspectives. Fourth, regulatory policies: governments must hold technology companies accountable for issues of digital representation.
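For the second proposal, a common starting point for an independent audit is a statistical parity check that compares the rate of a favorable outcome across groups. The sketch below computes a demographic parity difference on hypothetical data; real audits combine several such quantitative metrics with qualitative analysis.

```python
def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest favorable-outcome rates.

    `outcomes` holds 1 (favorable) or 0 per case; `groups` holds the
    group label of each case. Zero means parity on this metric; a
    large gap flags a disparity worth investigating.
    """
    rates = {}
    for g in set(groups):
        cases = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(cases) / len(cases)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit sample: approvals issued by an automated system.
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap, rates = demographic_parity_difference(outcomes, groups)
print(rates)               # e.g. {'a': 0.8, 'b': 0.2}
print(f"gap = {gap:.1f}")  # 0.6 -- a gap this large warrants scrutiny
```

The point of the sketch is only that such checks are cheap to run once outcome data are available, which is why the first two proposals, transparency and auditing, reinforce each other.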

This discussion demonstrates that systemic racism in technology is not merely speculative discourse or theoretical debate but a tangible reality observable in the details of everyday user interactions with digital platforms. Phenomena such as emoji coded with particular skin tones, autocorrect that tends toward pejorative substitutions, and search results that present racially skewed representations provide concrete evidence that algorithmic bias has seeped into the most fundamental layers of digital experience. What appear to be trivial matters are in fact significant precisely because they work subtly: they internalize discriminatory norms without users’ awareness. Left unchallenged, these digital elements risk functioning as mechanisms of normalized discrimination, making injustice appear natural and inevitable. In other words, discrimination is no longer confined to face-to-face social interactions but also occurs in symbolic interactions with systems designed to mediate global communication.

Yet amid this problematization, there remains room for optimism. By combining critical academic analysis, more reflective industry practices, and public policies that favor inclusivity, there exists a genuine opportunity to build technologies that are fairer, more inclusive, and more accountable. Such efforts require collective engagement from multiple stakeholders: academics who continue to expose structures of algorithmic bias, technology practitioners committed to ethical auditing, regulators enforcing standards of fairness, and civil society that remains critical of how technology shapes their lives. Thus, the struggle against systemic racism does not only take place in political, social, or economic arenas, but also extends into the code, data, and algorithms that constitute our digital infrastructure. Acknowledging this means recognizing that the fight against discrimination in the modern era must include the digital sphere, for it is precisely there that new forms of inequality are produced and reproduced on a massive scale.

References
