- How would you feel if your loan application was decided by an #algorithm that used your #socialmedia data or info about your sex/ethnicity? We partnered with @CDEIUK to find out. 💻
- Our experiment involved 3 people: 2 ‘bankers’ & 1 spectator. Spectators were shown two bank choices - one used business-as-usual practices for approving loans, the other used an #algorithm. @TheChoiceLab
- Spectators 👀 decided how much money the bankers received. For some, the banks that used #algorithms also used info that could imply a person’s sex and ethnicity, e.g. their postcode or salary.
- On average, people moved twice as much money away from banks that use these #algorithms in loan application decisions. 💰
How fair do people perceive algorithmic decision-making in financial services to be, and does it depend on the data the algorithms use?
The Centre for Data Ethics and Innovation (CDEI) was created to develop the right governance regime for data-driven technologies. As part of this mandate, the CDEI is building an evidence base on people’s perceptions of the fairness of data use in business decision-making, especially in the financial services sector. The Behavioural Insights Team (BIT) and the CDEI collaborated to investigate how people respond to the use of algorithms in a particularly consequential area of everyday life: personal finances. The partnership brings together the CDEI’s expertise in algorithmic decision-making and BIT’s expertise in human behaviour and robust evaluation.
The experiment adapts an established experimental design for studying fairness to explore how fair people perceive banks’ use of algorithmic decision-making to be when deciding loan eligibility. It further considers whether fairness perceptions vary depending on the type of information the algorithm takes into account, and it highlights how message framing and individual characteristics interact with those perceptions. On average, 15.5% more people financially punished a bank when informed that its algorithm used information that could act as a proxy for other characteristics (gender, ethnicity or social media usage) than when given a neutral description.