AI and the Risk of Technological Colonialism

A monthly column from the founder and CEO of Qantm AI and former chief AI officer of IBM

Seth Dobrin

February 6, 2024


In an era where artificial intelligence is rapidly evolving, the emergence of Generative AI (GenAI) marks a crucial turning point, setting us on a steady march toward technological colonialism. This shift goes beyond conventional technological advancement, profoundly influencing our societal and cultural landscapes.

GenAI is built on foundation models, essentially pre-trained deep learning models adept at assimilating vast datasets that reflect a broad spectrum of human knowledge and behavior. This capacity transforms AI, enabling it to undertake tasks previously deemed unattainable, such as generating intricate content and making sophisticated predictions that mimic human creativity and insight. The fusion of machine learning, natural language processing, and data analytics paves the way for AI systems that are more intuitive and responsive, aligning closely with human thought processes.

However, this advancement has its challenges. The rapid concentration of power in the AI industry, driven by the immense resources needed to develop and manage these models, signals a movement toward the industry becoming an oligopoly. This centralization poses significant risks, potentially exacerbating the digital divide and reshaping societal norms and cultural values to favor the perspectives of a few industry leaders.

In exploring the implications of GenAI and foundation models, we confront a paradox: The quest for unbiased and equitable AI systems may inadvertently lead to the imposition of dominant cultural values. Without deliberate action, this trend leads inevitably to a state of technological colonialism that erodes the diversity of global cultural identities. As we stand at this critical crossroads, we must navigate with caution and a deep sense of responsibility, ensuring that our path forward in AI development is marked not only by technological innovation but also by a commitment to equity, diversity, and inclusivity.

The oligopoly and its socioeconomic implications

The rapid centralization of the AI industry into an oligopoly is a development of profound significance. This transformation, primarily driven by the advent of foundation models such as ChatGPT, has not been gradual – it has occurred almost overnight. A handful of players, equipped with vast computational resources and elite talent pools, have become the gatekeepers of AI development. This concentration of power has far-reaching implications, extending beyond the confines of the tech industry into the broader socioeconomic and cultural landscapes.

The emergence of this oligopoly is not merely a matter of market dynamics – it raises critical questions about the equitable distribution of technology and its benefits. The centralization of AI development in the hands of a few corporations means that a limited circle of influencers is making decisions regarding the direction, ethics, and applications of AI technologies. This scenario poses a significant risk of creating a technology landscape skewed towards the interests and perspectives of these dominant entities, potentially sidelining the needs and values of the broad global society.

Additionally, this situation threatens to exacerbate existing socioeconomic disparities. Access to cutting-edge AI technologies and their benefits could become increasingly restricted to those who can afford them or who sit close to these major players. The teams building these models do not represent the societies in which the models are deployed, a troubling mismatch when those deployments directly affect these communities.

A few stats reinforce this, showing a dramatic overrepresentation of white and Asian men. While no data explicitly examines white and Asian (South or East) men in data science, one can extrapolate from existing data that these two ethnicities are significantly over-represented: 64% white and 18.8% Asian, respectively, as derived from 2020 Census data. As reflected in other parts of tech, women who are data scientists in the U.S. are severely under-represented: 18% vs. 82% for men.

As these tools are being built from the perspective of white and Asian men from the West, India, and China, this imbalance puts us on a path toward a widening digital divide, where advanced AI tools and their advantages are built for the privileged few, leaving behind vast segments of the global population.

The concentration of power in the AI industry also has implications for shaping societal norms and cultural values. As AI systems become more integrated into everyday life, from decision-making processes in businesses to personal assistants in homes, the values embedded in these systems by their developers start to influence broader societal norms. This influence can be subtle but pervasive, potentially leading to a homogenization of cultural and social values that align with those of the dominant players in the AI industry.

The oligopoly in AI raises questions about the responsiveness of these technologies to diverse global needs. When a few entities hold the reins of technological advancement, the variety of perspectives and innovative approaches that fuel progress could be stifled. The risk is not just of a monopolized market but of a monopolized mindset, where alternative approaches and solutions are overlooked or underfunded.

The foundation model paradigm

GenAI and foundation models are, at their core, pre-trained deep learning models. Because they are pre-trained on enormous datasets, they are unique in their ability to capture a vast array of human knowledge and behavior. This extensive training empowers them to perform previously unimaginable tasks, such as generating sophisticated content, making complex predictions, and exhibiting creativity that parallels human ingenuity.

Due to their expansive applicability and adaptability, these models hold the potential to democratize AI, making advanced AI tools accessible to a broader spectrum of users and developers – they are versatile, powerful, and transformative. However, this democratization comes with a caveat.
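
The accessibility half of that caveat is easy to make concrete. The sketch below is a minimal illustration, not anything described in this column: it assumes the open-source Hugging Face transformers library and uses the small GPT-2 model purely as a stand-in for today's far larger foundation models.

```python
# A minimal sketch of the foundation model paradigm in practice: a few lines
# give anyone access to a model already pre-trained on a web-scale corpus.
# Assumes the Hugging Face transformers library (pip install transformers torch).
from transformers import pipeline

# Load a small, openly available pre-trained model. GPT-2 is an illustrative
# choice only; commercial foundation models are orders of magnitude larger.
generator = pipeline("text-generation", model="gpt2")

# Steer the general-purpose model with nothing more than a prompt.
result = generator("Artificial intelligence will", max_new_tokens=30)
print(result[0]["generated_text"])
```

The ease of use in that snippet is the democratization; the pre-training it leans on, with its enormous compute and talent costs, is what stays concentrated in a few hands.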

The development and control of these models necessitate significant computational resources and specialized talent, which are concentrated in the hands of a few dominant players. This centralization of power raises critical questions about the future of AI development and its implications for society.

The reliance on vast and diverse datasets for training these models introduces another layer of complexity: the data itself. The complexity starts with sourcing, both how the data is gathered and where it comes from. Current foundation models, in the form of Generative AI models, were trained on data scraped from the internet, heavily skewed toward its North American, European, and Chinese corners. Additionally, most of this data was collected without regard for ownership, legal, and regulatory requirements.

Once the data is sourced, a second big issue arises: representation. The datasets are heavily skewed toward the Western and Chinese worlds, focusing primarily on the English and Chinese languages, respectively. Given the state of the data that lives on the internet, Western models are also over-represented with data from Caucasian communities and seriously under-represented with data from Black and brown communities.

Adding to this, the data is the permanent record of humanity: the good, the bad, and the ugly. And since Reddit was one of the main sources of training data – before it started charging for access to its information – a slice of all three is trained into these models.

Good intentions, bias control and cultural imposition

The landscape of foundation models is painted with the brush of good intentions. The primary objectives behind their development and deployment are commendable – to enhance human understanding, optimize processes, and democratize AI access across diverse user groups. Yet within this noble endeavor a paradox emerges, rooted in the complex interplay of bias control and cultural imposition.

On the surface, the mission to control biases in these systems appears straightforward. The goal is to create AI models that are fair, equitable, and unbiased. However, the execution of this mission is far more complex. The major builders of foundation models are concentrated in the West, China, and the U.A.E. – and it is important to note that most of the talent building the models in the U.A.E. comes from the West and China. The European Union is driving the primary global regulation – just approved by its member states last week. In an effort to ‘do the right thing,’ these regulators are pushing to control bias in the foundation models, as are the builders of the models.

Technological Colonialism (noun): The dominance of a small number of entities, typically large corporations or specific geographic regions, in controlling and shaping the development, deployment, and norms of advanced technological systems. This dominance leads to the imposition of these entities' cultural values, biases, and societal norms on a global scale, often resulting in the marginalization of diverse cultural identities, the exacerbation of socioeconomic disparities, and the potential homogenization of global cultures. The phenomenon raises ethical concerns about equity, diversity and inclusivity in the development and application of technology.

We risk imposing Western and Chinese constructs of bias on the rest of the world. Bias in AI is not just a technical problem; it reflects deeper societal and cultural constructs. Outside of gender bias, these constructs are not globally ubiquitous: Western and Chinese definitions of bias are fundamentally different from African, Southeast Asian and South American ones. This alone puts us on a path toward one form of technological colonialism.

As we look deeper at the other forces leading to technological colonialism, we find that it is not overt – it is woven subtly into the fabric of AI systems. It manifests in how these systems interpret language, the cultural nuances they recognize or ignore, and how they navigate ethical dilemmas.

For instance, an AI model trained predominantly on data from a particular region or demographic may develop a skewed understanding of language and context, leading to interpretations and decisions that align with one culture's norms but are alien or even inappropriate in another.

The implications of this are profound. The risk of cultural homogenization escalates as AI systems become more integrated into global societies – in health care, education, finance and more. These technologies could start shaping societal norms and values in ways that reflect the dominant cultures of their developers rather than the diverse tapestry of global cultures. The danger here is in creating a monolithic cultural perspective and eroding the richness of diverse cultural identities.

This technological colonialism challenges the essence of what AI was meant to achieve – a broader, more inclusive understanding and service to humanity. When AI systems carry the imprint of a limited cultural perspective, their ability to serve diverse communities equitably is compromised.

Therefore, the challenge is not only to control biases in the technical sense but to foster a development environment that truly represents global diversity. This requires a concerted effort to diversify the AI workforce with a global perspective, source data reflective of varied cultural contexts, and engage in continuous dialogue about AI's ethical and cultural implications.

The cultural construction and its global impact

The integration of a specific worldview into AI systems, particularly those developed using foundation models, poses a significant risk of technological colonialism on a global scale. This concern transcends theoretical discourse and manifests in real-world scenarios, impacting diverse spheres ranging from social media content moderation to decision-making in financial services.

When AI systems are embedded with a particular set of cultural values and perspectives, they inadvertently become agents of cultural construction. This process can lead to the subtle yet pervasive imposition of certain norms and values across various societies, particularly affecting those with different cultural backgrounds or value systems. This phenomenon is not limited to high-level concepts but permeates the minutiae of daily life and decision-making.

In social media, for instance, content moderation algorithms, guided by these constructed norms, can disproportionately affect what is deemed appropriate or inappropriate, influencing public discourse and societal norms. While designed to be neutral, these algorithms often carry the biases and cultural perspectives of their creators, leading to decisions that may not align with the diverse values of a global user base.

Similarly, AI-driven decision-making tools in the financial sector can embed certain cultural biases in risk assessments and customer interactions. This can result in skewed financial opportunities, credit decisions, and customer service experiences, favoring certain groups over others based on the cultural construct of the underlying AI system.

Furthermore, this technological colonialism can flatten cultural nuance, diversity of thought, and local values. As AI systems become more prevalent in areas like education, health care, and governance, there is a risk that a homogenized, technology-driven perspective could overshadow the rich tapestry of global cultural diversity.

This overshadowing not only undermines the cultural identity of various communities but also limits the potential of AI to truly understand and cater to the diverse needs of a global population. This raises ethical concerns about the role of technology in shaping cultural narratives and the responsibility of those who design and deploy these systems to ensure cultural sensitivity and inclusivity.

The five culprits revisited

As we navigate the evolving landscape of foundation models, we encounter a narrative deeply intertwined with the concept of technological colonialism. A confluence of factors shapes this narrative, each playing a pivotal role in steering the direction of AI development. These elements, often discussed separately, collectively paint a picture of an industry at a crossroads, with the potential to either democratize technology or entrench existing disparities.

At the forefront of this narrative is the issue of talent scarcity. The development of foundation models necessitates a high degree of specialization, resulting in a scenario where a select group of highly skilled professionals exert significant influence over the trajectory of AI. This scarcity not only intensifies competition but also limits the diversity of perspectives, leading to a development path that may only partially consider a global audience's varied needs and challenges.

Closely linked to talent scarcity is the challenge of data handling. The vast and complex datasets required for training these AI models are a double-edged sword. While they enable the models to capture a broad spectrum of human knowledge, they also raise critical issues around privacy, representation, and potential biases. How this data is managed can significantly sway the outcomes of AI applications, potentially perpetuating existing societal biases or creating new forms of inequality.

Inherent biases within these AI systems are another crucial aspect of this story. Despite concerted efforts to mitigate these biases, they often persist, subtly influencing the AI's decision-making processes and outcomes. These biases, reflective of the developers' cultural and societal norms, can inadvertently reinforce a homogenized worldview, further marginalizing already underrepresented groups and viewpoints.

Another dramatic chapter in this narrative is the concentration of power within the AI industry. The resource-intensive nature of GenAI and foundation model development has led to an oligopoly, with a few dominant players holding the reins. This centralization of power is not just a market dynamic; it has profound implications for the equitable distribution of AI's benefits, raising the specter of a widening digital divide.

Lastly, the prevailing industry culture, often characterized by its exclusivity and homogeneity, plays a subtle yet significant role in shaping the development of AI. This culture, which tends to marginalize certain groups, especially women and minorities, results in a narrow range of ideas and solutions, failing to address the diverse needs of a global populace.

The road ahead: Caution and responsibility

As we traverse this complex terrain, a narrative unfolds that leads us toward technological colonialism. This journey, marked by innovation and transformative potential, casts shadows of oligopoly, socioeconomic disparity, and cultural homogenization. GenAI and foundation models represent a paradigm shift: more than just a technological leap, they are a redefinition of how technology influences society.

As a cornerstone of foundation models, GenAI's power lies in its vast, data-driven knowledge base, enabling unprecedented capabilities in content generation, predictive analytics, and creative problem-solving. Yet, this power is double-edged. The democratization promised by these models comes with the caveat of centralizing control in the hands of a few, raising questions about the equitable distribution of technology and its benefits.

The oligopoly formed in the AI industry is not merely a market dynamic but a phenomenon with profound socioeconomic implications. A few major players, wielding significant resources, now shape the AI landscape, potentially sidelining diverse societal needs and values. This centralization risks creating a digital divide where the benefits of advanced AI are accessible only to the privileged few.

Moreover, the good intentions behind AI development often confront the challenges of bias control and cultural imposition. The endeavor to create unbiased AI models inadvertently leads to the embedding of developers' cultural values, risking the imposition of a homogenized worldview. This scenario poses the threat of cultural colonialism, where dominant cultures, through technology, influence and reshape global cultural narratives and norms.

In this context, the intertwined factors of talent scarcity, data handling challenges, inherent biases, power concentration, and industry culture gain heightened significance. Collectively, they contribute to a technology ecosystem that, while efficient and powerful, may not align with principles of equity, diversity, and cultural sensitivity.

As we march forward, if we do not take action, we risk dramatic socioeconomic and global disparities that could lead to social strife. How do we prevent this? While I do not have all the answers, there are some places that we can start.

First, instead of trying to control bias in the foundation models directly, we need to control it locally, at the level of outcomes, by looking for disparate outcomes – do you care whether a model is biased, or whether a decision, recommendation, or response is biased? This allows the culturally appropriate definition of bias to be applied, as the sketch below illustrates.
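
What follows is a minimal sketch of what an outcome-level bias audit could look like, assuming a simple approve/deny decision log. The group labels, the loan example, and the four-fifths threshold are illustrative assumptions on my part, not prescriptions from this column.

```python
# A minimal sketch of a local, outcome-level bias audit. Everything here is
# illustrative: the groups, the decisions, and the four-fifths threshold are
# assumptions a deployer would replace with locally appropriate definitions.
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: list of (group, approved) pairs from a deployed model.
    Returns each group's approval rate and the ratio of the lowest rate
    to the highest (1.0 = parity; < 0.8 fails the common four-fifths rule)."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical audit log of loan recommendations produced by a model.
log = [("group_a", True), ("group_a", True), ("group_a", False),
       ("group_b", True), ("group_b", False), ("group_b", False)]
rates, ratio = disparate_impact(log)
print(rates, f"disparate impact ratio: {ratio:.2f}")
```

The point of auditing at this level is that each community deploying the system can substitute its own group definitions and fairness threshold, rather than inheriting one baked into the foundation model.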

Second, a significant effort must be made to diversify the workforce globally so that the builders of these highly impactful systems represent the populations where the systems are deployed. Lastly, there needs to be greater government funding and better oversight of acquisitions in this arena so that innovation and control of the market do not continue to reside in the hands of the few.

About the Author(s)

Seth Dobrin

Seth Dobrin is the founder and CEO of Qantm AI and the former chief AI officer at IBM.
