The Ethics of Digital Doppelgangers: When AI Reasons Like Us

The relationship between humans and their digital twins is already evolving beyond simple mirroring

Yang Li, Co-founder and COO at Cosine

November 21, 2024


I am stating the obvious here, but humans aren't perfect. It's what makes us human. So, when we talk about the convergence of human reasoning and artificial intelligence (AI) and the creation of digital doppelgangers (copies of ourselves), there are two ways of considering it: one is about what happens when AI doesn't quite get it "right," and the other is about what we lose when large language models (LLMs) do something flawlessly. There are problems with both, of course. This ethical conversation goes well beyond preventing harm; it is a much more holistic look at where AI currently stands.

The Problem With Always Having an Answer

The problem with creating a digital twin of human reasoning is that current AI systems are forced to give an answer, whether or not they have any confidence in it. Come hell or high water, they will come up with something. But that just isn't how humans work, right? Most people, when they genuinely don't know something, will say so, and you won't get an answer at all. The ethical question becomes how you set up guardrails when you're working with something that essentially can't say no.
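As a minimal sketch of what such a guardrail could look like, assuming the model exposes some confidence signal (here a hypothetical 0-to-1 score such as a mean token probability; ModelAnswer, guarded_answer and the threshold are illustrative names, not any vendor's API), you can engineer the abstention path the model itself lacks:

```python
from dataclasses import dataclass

@dataclass
class ModelAnswer:
    text: str
    confidence: float  # assumed 0..1 score, e.g. mean token probability

def guarded_answer(answer: ModelAnswer, threshold: float = 0.75) -> str:
    """Release the model's answer only when its confidence clears a bar;
    otherwise abstain, the way a person would say "I don't know"."""
    if answer.confidence < threshold:
        return "I don't know enough to answer that reliably."
    return answer.text

# A low-confidence answer is held back rather than pushed out.
print(guarded_answer(ModelAnswer(text="42", confidence=0.41)))
# A high-confidence answer passes through unchanged.
print(guarded_answer(ModelAnswer(text="42", confidence=0.93)))
```

The point isn't the particular threshold; it's that the refusal path has to be added from outside, because the model won't supply one on its own.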

And let's add in the problem of scale. Look at Google: its codebase runs to roughly two billion lines of code. No human is able to decipher that; you are basically operating in partial darkness at all times anyway. So when AI keeps pushing out answers regardless of confidence, you're just amplifying a problem that already exists. You're still operating in partial darkness, just at a much faster pace.


The Bias Question Isn’t What You Think

There's a common misconception that AI needs to separate itself from human reasoning by removing bias. However, there is no such thing as unbiased data; data always carries bias because it is always sourced by a human, with a purpose. Amazon learned this the hard way in 2015, when its AI hiring tool had to be scrapped after it began penalizing resumes containing the word "women's": the AI had amplified existing gender biases in the historical hiring data it was trained on.

Our challenge shouldn't be eliminating bias, but rather limiting it and not allowing it to create an echo chamber. An AI can iterate far faster than a human can, so you essentially create a flywheel of bias. This is why the ethics of creating digital doppelgangers isn't about making perfect copies of human reasoning but about recreating the entire ecosystem of how humans work together.

In practice, that means recreating what humans do. How do humans verify output? They check with another human. You have to create the other side of the AI as well.


Take AI code engineering, for instance. We are thinking about how to create AI code reviewers and AI code compliance officers. You have to create the counterpart, as in the sketch below.
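Here is a rough sketch of that counterpart pattern, under the assumption that you run two independent models, one writing code and one reviewing it; generate_patch and review_patch are hypothetical stand-ins for real model calls, not an existing API:

```python
import random

random.seed(0)  # deterministic verdicts so the example is reproducible

def generate_patch(task: str) -> str:
    # Stand-in for a call to a code-writing model.
    return f"proposed patch for: {task}"

def review_patch(task: str, patch: str) -> tuple[bool, str]:
    # Stand-in for a second, independently prompted model acting as reviewer.
    approved = random.random() > 0.5  # simulated verdict for the sketch
    feedback = "" if approved else "edge cases are not covered by tests"
    return approved, feedback

def produce_reviewed_patch(task: str, max_rounds: int = 3) -> str:
    """Generator/reviewer loop: the writer's output only ships once the
    counterpart approves it; rejection feedback drives another attempt."""
    patch = generate_patch(task)
    for _ in range(max_rounds):
        approved, feedback = review_patch(task, patch)
        if approved:
            return patch
        patch = generate_patch(f"{task} (reviewer feedback: {feedback})")
    raise RuntimeError("No patch passed review; escalate to a human.")

print(produce_reviewed_patch("fix null-pointer crash in login flow"))
```

The design point is the separation: the reviewer is a distinct model, or at least a distinctly prompted instance, so the writer never marks its own homework, just as we don't let humans approve their own code.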

The Question of Ownership

If AI and humans are now working and reasoning so closely together, then there will, of course, be questions around who owns what and where liability lies. Foundational models all operate using IP-compliant data, whatever that actually means, and so I think we need a common-sense approach: what is created is owned by whoever creates or prompts it. Think about other ecosystems; whatever I create in Google Docs is owned by me. That's probably how we need to think about AI-generated content too.

The governance of these digital doppelgangers shouldn't focus on limitation but on optimization. As an AI maximalist, I believe our focus should be on pushing towards the full capabilities of AI rather than constantly looking back. It's not the AI itself that is the problem; it's how you apply the AI. Blaming the technology is the equivalent of saying we shouldn't make steel because it could be turned into bullets and missiles.

Beyond the Mirror

The relationship between humans and their digital twins is already evolving beyond simple mirroring. While AI handles the detailed cognitive tasks – the checks, balances and grunt work – humans are evolving into creators and conductors. We're seeing this transformation already: people who struggled with Excel now excel at it, and junior lawyers are able to reason like seniors. The premium on years of experience is diminishing; what's becoming more critical is the ability to direct these digital doppelgangers toward meaningful outcomes.

As for us humans? Let's focus on perfecting our skills in the creative and emotional realm, given we now have all this free time.

About the Author

Yang Li

Co-founder and COO at Cosine

Yang Li is co-founder and COO at Cosine, a Y Combinator company and one of the very few globally with access to fine-tune GPT-4. Cosine works directly with OpenAI's and Microsoft's AI teams.
