
Why the digital identity juggernaut needs safety belts

January 17, 2023 (updated February 1, 2023) | Blog
Beekeeper photo by Bianca Ackermann on Unsplash

74 Civil Society Organizations wrote to the World Bank with “grave concerns” about the Bank’s ID4D programme.

“For too long, the emphasis has been on the development promises of digital ID systems, but it is past time to reckon with their vast potential for abuse and exploitation”.
Letter from global CSOs to the World Bank | Privacy International, September 2022 

Trust Over IP’s “Overcoming Human Harm Challenges in Digital Identity Ecosystems” paper (pdf) is our first attempt to describe the risks of harm that may arise through the use of digital identity, how those harms occur, and how to prevent or mitigate them. After one year of research and analysis, the paper is ready for your review.

Can SSI harm people in the real world? Our paper says “yes” and explains why. In short: there are many examples of digital identity systems hurting people, and there is no reason to imagine SSI-based systems will be exempt.

We invite your feedback on this paper. If you are a ToIP member, you can comment directly in the Google doc. If you are not yet a member, please join, or you can comment in the GitHub discussion. To read the document without commenting, here is a pdf of the current version and the text on GitHub.

Imagine someone you know suffers a life-changing tragedy, like… 

  • A voter wrongfully convicted of electoral fraud 
  • A gambler’s identity used to fuel addiction
  • An immigrant family unable to work, bank, or get basic services
  • A teen suicide
  • An indigenous people whose relationship with their forest is shattered
  • A soldier targeted for execution through their biometrics

In this paper we explain how each of these real-life examples was the outcome of digital identity abused, misused, or gone wrong. In each case, SSI can make things better, or worse. So, can you do anything about these risks that affect others? And why should you bother? 

Can you do anything? Anything that matters?

We list some responses to the risk of human harms:

  • Design for a balance of power. Well-executed decentralized or federated identity systems have a shot at assuring agency by empowering those at the edge of the ecosystem, and recognising the vulnerabilities of the least powerful people. 
  • Add human harms to existing risk management. Cybersecurity, corporate finance, strategy, and legal teams can add negative-externality risk assessments for the identity ecosystems they support (a minimal sketch follows this list). Investing in detection, prevention, intervention, and recovery services enables identity ecosystem members to manage their exposure according to their respective risk appetites. 
  • Align objectives and incentives to minimize regret. Human harms arise when ecosystem actors’ objectives conflict and when mistuned incentives drive harmful choices.
  • Measure the cost of externalities. SSI’s trustworthy provenance might improve accountability and discourage harmful behavior within an ecosystem. But harms are infectious. The collective resilience of an ecosystem, like herd immunity, reduces harm.
  • Build a community of practice on human harm reduction. New disciplines and practices will emerge, as they have for other areas of risk reduction. Start inside and join forces with your business ecosystem. 
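
To make the risk-management idea above concrete, here is a minimal, purely illustrative sketch (in Python) of what a human-harm entry might look like inside an ordinary risk register. Every field, value, and name below is a hypothetical example of ours, not something the paper or ToIP prescribes.

    # Purely illustrative: a human-harm entry alongside conventional items in a
    # risk register. Field names and values are hypothetical, not from the paper.
    from dataclasses import dataclass
    from enum import Enum

    class Severity(Enum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    @dataclass
    class HumanHarmRisk:
        description: str             # what could go wrong for a person
        affected_parties: list[str]  # who sits downstream of the ecosystem
        likelihood: Severity
        impact: Severity
        detection: str               # how the ecosystem would notice the harm
        mitigation: str              # prevention, intervention, or recovery measure
        owner: str                   # accountable team or role

        def priority(self) -> int:
            # Simple likelihood-times-impact score, as in common risk matrices.
            return self.likelihood.value * self.impact.value

    register = [
        HumanHarmRisk(
            description="Wrongful exclusion: a legitimate holder cannot prove who they are",
            affected_parties=["credential holders", "their dependents"],
            likelihood=Severity.MEDIUM,
            impact=Severity.HIGH,
            detection="spike in failed verifications and support escalations",
            mitigation="human-review fallback path and recovery credentials",
            owner="ecosystem governance board",
        ),
    ]

    for risk in sorted(register, key=lambda r: r.priority(), reverse=True):
        print(risk.priority(), risk.description)

The point is simply that human harms can be scored, owned, and tracked with the same machinery teams already use for cybersecurity and financial risk.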

You can imagine many more. Still, … 

Why should you act? 

Doing nothing is nearly always the easiest path. It can feel like a distraction from today’s work to be aware of negative externalities that show up downstream from your standards efforts, your product development, your engineering and operations. They are distant from you in time, in space, across regulatory jurisdictions, with many intermediaries between you and people who might be harmed. It’s hard to care about hypotheticals and abstract numbers. Should you care anyway? 

Let us offer you five reasons you should act. 

  1. Economics. Prevention is cheaper. 

Nearly always. “Shift-left” is the practice of moving security concerns closer to the start of the product development lifecycle: it is easier and more complete to build security into services from the start than to bolt it onto an inherently insecure system. Perhaps we can shift left when it comes to harming others: embed harms work into identity product and governance checklists so those considerations help us steer clear of foreseeable harms. 
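
As a purely illustrative sketch of shifting harms work left, a team might encode its harms questions as data and gate releases on the ones still unresolved. The questions, names, and gating rule below are hypothetical examples of ours, not requirements from the paper.

    # Purely illustrative "shift-left" harms gate: the questions and the gating
    # rule are hypothetical examples, not defined by the ToIP paper.
    HARMS_CHECKLIST = {
        "Who is excluded if a credential is revoked or a wallet is lost?": True,
        "Is there a non-digital fallback for people without devices or connectivity?": False,
        "Can a harmed person reach a human to contest an automated decision?": True,
        "Have we assessed risk for the most vulnerable users, not the average user?": False,
    }

    def harms_gate(checklist: dict[str, bool]) -> bool:
        """Return True only when every harms question has been addressed."""
        unresolved = [question for question, addressed in checklist.items() if not addressed]
        for question in unresolved:
            print(f"Unresolved harms question: {question}")
        return not unresolved

    if __name__ == "__main__":
        if not harms_gate(HARMS_CHECKLIST):
            raise SystemExit("Release blocked: harms checklist incomplete.")

However it is expressed, the idea is the same as security shift-left: make the harms questions visible early, where answering them is cheap.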

But who pays for prevention and other harm reduction and remediation work? Who should invest in verifying that your trust architectures come with few negative externalities built in? Should it be those closer to the utility side of things? Should governments pick up the slack? Will ecosystem operators that serve more vulnerable humans have a better shot at understanding their risks? 

We should have an “accountability” conversation sooner rather than later. 

  2. Financial and Legal Risk. Action reduces liability. 

But can you avoid liability if laws and regulations are obsolete or enforcement is toothless or regulators are captured? How can we pool exposure so it’s safer to start digital identity ecosystems or to join them? 

  3. Opportunity for value and advantage. 

Harm reduction practices improve your understanding of strategic context and unmet customer needs. Harm surveillance practices improve identity ecosystem situational awareness and security. Harm intervention and recovery practices improve customer service and speed crisis response. 

  4. Harms are infectious and no-one is immune.

Harms ripple beyond the close family and friends of those directly harmed to other actors in the ecosystem and in adjacent ecosystems. The 2022 FTX cryptocurrency exchange bankruptcy amplified and spread the human harms of lost investments throughout DeFi and crypto markets. 

  5. Sleep better. It’s good karma. 

Being real about the potential for harm makes it easier to put it in perspective, to weigh your tradeoffs, and to act in good conscience. 

Start small, start together… 

  1. Enroll. Join ToIP’s Human Experience Working Group to build our collective maturity and capability. If you’d like to lead Trust Over IP’s design and implementation of harms work, join us. 
  2. Raise awareness. Discuss SSI’s potential for harm with your crew. You know you have buy-in when… 
  3. Educate. Review the paper with your team. Take a stab at identifying other harms or sorting the list. Comment on the paper. 
  4. Show up. The ToIP All Members meeting on Wednesday, January 18th, 2023, at 10 am Pacific Time included a presentation and discussion of the paper and how to get involved. Listen to a recording of the meeting.

This report is a product of the ToIP Human Experience Working Group’s SSI Harms Task Force.

Principal authors are Nicky Hickman (CEO, Come to the Edge; Industry Advisor Blockchain & Digital Identity Lab, JYU; Advisor at cheqd), Phil Wolff (Wider Team) and Pyrou Chung (East West Management Institute). Contributors were Aamir Abdullah (University of Colorado Law School), Christine Martin and Darrell O’Donnell (Continuum Loop), Jacques Bikoundou, Dr Jill Bamforth (Swinburne University of Technology), John Phillips and Jo Spencer (Sezoo), Kaliya Young (Identity Woman), Kim Hamilton (Centre Consortium), Oskar van Deventer and Rieks Joosten (TNO), Paul Knowles (Human Colossus Foundation), Sankarshan Mukhopadhyay (Dhiway), Scott Perry (Schellman).  Many more joined Task Force calls and contributed their time and expertise.