Data privacy and consent – can customers say "no"?

Linked to regulation are questions around data privacy, security and consent: what measures should be taken to obtain informed consent and to ensure customers understand how their data is used in AI-driven financial services? For example, an AI-based insurance risk assessment may use dynamic pricing based on answers given by the customer about their health. The customer volunteers details about their health and lifestyle and is then rewarded with a lower premium. But what if the same AI technology then uses the same data to assess the viability of providing a loan, or eligibility for certain credit cards? It is not new for companies to use data for a purpose different from the one for which it was collected (despite the legal restrictions in many countries) – think of all the marketing you receive after signing up to win a free trip to Puerto Rico at a trade show. However, AI could further muddy the waters of data protection and privacy: the information used by an AI's learning mechanisms could be inextricable from the model itself – it is not as simple as removing your name from a database.

There are a number of moral and ethical considerations to be worked through before AI is widely used to make financial decisions. Credit scoring systems that incorporate AI and big-data analytics to assess creditworthiness based on factors such as financial behaviour, online purchases, and social connections – providing credit scores to individuals without traditional credit histories – are already in use in parts of the world. On the one hand, this can give people in LEDCs who are not engaged with traditional banking systems much wider access to finance. On the other hand, how appropriate is it to incorporate "social credit" scores in creditworthiness assessments – and who arbitrates what counts as good versus bad social behaviour, given its very real impact on people's financial livelihoods?

At what point does big data used in Financial Services AI become an Orwellian panoptic quandary? Financial institutions will need to find a way to safeguard customer data and prevent unauthorized access or breaches when using AI technologies, particularly when the data held becomes so all-encompassing and is no longer limited to financial information.

How can we ensure fairness, transparency, and accountability in AI-driven financial services?

AI systems in Financial Services offer many opportunities, but they also raise real concerns about bias, privacy and moral considerations. The issue we currently face is that no one really understands exactly how decisions are arrived at – for example, the founders of ChatGPT cannot explain exactly how the technology arrives at a given answer. The black-box paradigm in which many LLMs operate may make it difficult for companies to identify and avoid potentially factually incorrect information being used for assessments, and as the technology advances and becomes more integrated with financial systems, new systemic risks may become apparent.

Transparency in AI – the ability to look into an AI model to understand how it reaches its decisions – will be key in this respect: it demonstrates trustworthiness, which in turn is a key factor in the adoption and public acceptance of AI systems. Transparency can also enable customers to understand and, where appropriate, challenge the basis of particular outcomes – for example, allowing a customer to challenge an unfavourable loan decision based on an algorithmic creditworthiness assessment that involved factually incorrect information. However, transparency will naturally need to be balanced against proprietary intellectual property when the technology in question helps drive a bank's profit centre.
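The idea that transparency lets a customer see and challenge the basis of a decision can be made concrete with a small sketch. The model below is a toy interpretable credit scorer, not any real institution's system: the feature names, weights and values are all invented for illustration. Because the score is a simple weighted sum passed through a logistic function, each input's contribution can be listed, so an applicant could spot and dispute a factually wrong input, such as a miscounted number of missed payments.

```python
import math

# Hypothetical, hand-picked weights for an interpretable credit model.
# Feature names and values are illustrative only, not a real scoring system.
WEIGHTS = {
    "income_to_debt_ratio": 1.8,   # higher ratio counts in the applicant's favour
    "years_of_history": 0.6,
    "missed_payments": -2.4,       # each missed payment counts against them
}
BIAS = -1.0

def score(applicant: dict) -> float:
    """Estimated probability of repayment via logistic regression."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def explain(applicant: dict) -> list:
    """Per-feature contribution to the score, largest impact first.
    This breakdown is what would let a customer see *why* they were
    declined and challenge an input that is factually incorrect."""
    contribs = [(k, WEIGHTS[k] * applicant[k]) for k in WEIGHTS]
    return sorted(contribs, key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income_to_debt_ratio": 0.4, "years_of_history": 2.0, "missed_payments": 3.0}
for feature, contribution in explain(applicant):
    print(f"{feature:>22}: {contribution:+.2f}")
print(f"estimated repayment probability: {score(applicant):.2f}")
```

For this applicant, the breakdown shows the three recorded missed payments dominating the decision; if that record were wrong, the customer would know exactly what to contest. Deep models do not decompose this cleanly, which is one reason the transparency-versus-capability trade-off discussed above is hard.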