In the identity space, 2016 was a year of unprecedented activity. As we look back on the year that was, One World Identity spotlights three macro trends that have emerged. Our first post discussed the identity evolution, and then we delved into the need for standards and interoperability. In our final post in this series, we explain how identity is fundamentally human.
With all of the hype around technologies such as biometrics, blockchain, machine learning, artificial intelligence, and smart dishwashers, it’s sometimes easy to forget that humans are fundamentally at the center of identity. This means we bring our biases, vulnerabilities, and follies in addition to our ingenuity and innovation. Technology is not value-neutral. Our opinions on privacy are shaped by deeply personal experiences. A look at the privacy and personal data regulations and national ID schemes (or lack thereof) across different countries is akin to a lens on the social and geopolitical history of each region. China’s identity system is arguably one of the most innovative uses of both e-ID and social data for reputation and risk scoring, yet many in the West may regard it as an Orwellian dystopia.
We’re seeing “user-centric” and “user-controlled” increasingly enter the lexicon of companies large and small. The MIT Media Lab advocates for “human in the loop” computing when it comes to machine learning and artificial intelligence. Ctrl-Shift reports that in the UK alone, every week brings a new entrant into the so-called “me2b” market, which allows consumers to control which data they share with businesses and in what context.[1] Facebook commissioned an extensive study on innovation in personal data with user self-determination. And even in the mundane world of customs and border patrol, passengers’ needs for convenience, speed, and respect for privacy are being balanced against border authorities’ desire for security and control.
However, with agency comes responsibility.
Our vulnerability to deception plays out time and time again. Google reports that the majority of malware found on Android devices is distributed through social engineering rather than security exploits; consumers actually grant device permissions to malicious applications. Similarly, Intel notes that while technology can help prevent phishing, it will ultimately never go away due to the psychological human element. Even the best identity and access management systems remain vulnerable to social engineering. Moreover, humans do not always act rationally, as anyone who has interacted with a four-year-old may know. This is particularly evident in the realm of personal data sharing. The term “privacy paradox” refers to the phenomenon that consumers’ articulated privacy concerns do not match their actual behavior. In other words, consumers who rate privacy as very important are still willing to share deeply personal data for nominal benefits such as a small discount.
We also know that human behavior, at both the institutional and individual levels, is hard to change. Humans tend to prefer the status quo. Today, various IoT devices ship with known vulnerabilities not because of any malicious intent on the part of manufacturers, but simply because there is no incentive to do otherwise. We are programmed to avoid cognitive load: users routinely breeze through pages of privacy policies and cookie warnings, and rarely touch complicated settings pages.
Ultimately, the organizations that realize innovation and value will be those that not only employ user-centric design but also deeply understand the human aspects of identity.
On to 2017!
References
1. Jamie Smith (Ctrl-Shift). Digital identity: a step challenge for business and designing for trust. Personal Information Economy. December 2016.