
Breaking Banks

Episode 489: Artificial Intelligence Can Create Real Liabilities for Leaders & Can ChatGPT Speak Bank

Thu Apr 13 2023

AI and its Potential

00:00 - 07:20

  • AI has the potential to improve business operations, increase efficiency, reduce costs, and drive innovation.
  • However, there are also risks associated with AI such as biased or inaccurate decision-making, job displacement, and data privacy and security concerns.
  • Al Kaugher is an attorney and the author of One Nation Under Algorithms, which examines the growing use of algorithms and artificial intelligence.
  • There is more awareness of the ramifications of AI now that chatbots like ChatGPT have come out.
  • The risks associated with AI are not new but society is starting to become aware of them.
  • "Corporate Fiduciary Duty in the Age of Algorithms" is a new article by Al Kaugher that explores how AI affects corporate operations and strategic decision-making.
  • Executives and board members must carefully consider their fiduciary duties when implementing AI in their organizations.
  • Fiduciary duties boil down to two areas: duty of due care and duty of loyalty.
  • Duty of due care requires executives to make informed decisions based on all available information while duty of loyalty requires them to act in good faith for the benefit of shareholders.

Fiduciary Duties and AI

06:53 - 22:17

  • Fiduciary duties can be broken down into two areas: duty of due care and duty of loyalty.
  • Fiduciaries are individuals who make decisions that could impact the equity value of shareholders, such as board members, senior executives, managers, or general partners.
  • Duty of due care requires fiduciaries to use the best means of analyzing data and making decisions.
  • Duty of loyalty traditionally meant avoiding conflicts of interest but has been expanded to include analyzing transactions and ensuring compliance with laws that prevent harm to society.
  • AI can help fulfill fiduciary duties by providing greater volumes of data and drawing proper conclusions from it.
  • AI can also come with inherent deficiencies such as biases in the algorithm design or historical discrimination in the data used by AI programs.
  • The competency of designers is a factor in introducing biases into AI algorithms during the design process.
  • Designers may introduce their own biases in the design process of AI programs.
  • AI programs can find correlations due to improper design or historical biases picked up during machine learning.
  • The analytical process of AI is often a black box, making it difficult to analyze how correlations were drawn.
  • Historical discrimination can be reintroduced and highlighted by AI programs.
  • Fiduciaries have an obligation to question and double-check AI processes before making decisions based on them.
  • Double checks should be built into AI programs to ensure transparency and accuracy.
  • Hindsight is 20/20, but fiduciaries are expected to act like reasonable business people when using modern technology tools such as AI.
  • Not using reasonable and prudent AI tools could result in losing the immunity of the business judgment rule.
  • Relying too much on AI without doing proper analysis and critical thinking could also violate the business judgment rule.

Legal Issues and AI

21:48 - 29:33

  • Shareholders have sued brokerage or financial advisors for using simplistic AI programs that resulted in inappropriate investments.
  • Employee actions have been brought against schools that used simplistic AI programs to evaluate teachers, resulting in negative consequences for good teachers.
  • The industry is so new that courts haven't yet looked at what fiduciaries should be looking at if there's over-reliance on AI programs.
  • There are no best practices yet, but a human should be the final decision maker regardless of the AI process used, and any good AI should include a self-check or self-criticism component and a means of transparency.
  • Legislators are trying to legislate against AI without asking the right questions or considering all the ramifications, which could stifle the growth of the AI industry rather than make it better.
  • Industry leaders, consumer advocates, and plaintiff lawyer organizations should work on standards that set criteria for when an AI should be used and questioned. Courts and legislatures can then develop rulings and statutory law on adequate grounds.

AI in Financial Services

29:14 - 41:26

  • Each market's payment landscape is unique and so are its participants.
  • The Global Payments Report from FIS provides data on more than 48,000 consumers across 40 global markets.
  • Chatbots are reducing costs but doing little to improve the customer experience in financial services.
  • Breaking Banks' Asia-Pacific hosts Rachel Williamson and Simon Spencer discuss whether ChatGPT can help the sector do better.
  • NRELT Bank is only just starting to think about how a conversational AI like ChatGPT might be useful to it.
  • Simon Spencer, an international AI expert, discusses banks' long-standing interest in using AI for various purposes.
  • Suncorp has done some interesting work with chatbots and machine learning to add smarts and ethics to their use of AI.
  • There is a real desire at Suncorp to use these technologies in a way that actually drives better outcomes for customers.
  • Bankers are cautious about using ChatGPT because it cannot reliably establish an exact truth.
  • ChatGPT and AI are in the early stages of development, but iterations can happen rapidly.
  • ChatGPT can be used for summarizing documents and producing an understanding of context.
  • Organizations should use ChatGPT for personalization and for framing messages correctly.
  • Sophisticated experiences will be contextually aware and sensitive to what's going on, potentially sitting in front of equally dumb backends.
  • Guardrails must be put in place to ensure that AI is not used against customers' interests and does not compromise integrity.
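The guardrail idea above can be sketched as a simple policy check applied to AI-drafted replies before they reach a customer, with a human fallback when a rule fires. This is a minimal illustration only; the blocked phrases, function names, and escalation behavior are assumptions, not any bank's actual implementation.

```python
# Hypothetical guardrail layer for AI-drafted customer replies.
# Rules and phrases are illustrative assumptions.

BLOCKED_PHRASES = [
    "guaranteed returns",          # never promise investment outcomes
    "no need to read the terms",   # never discourage informed consent
]

def apply_guardrails(draft_reply: str) -> tuple[bool, str]:
    """Return (approved, reply); unapproved drafts are routed to a human."""
    lowered = draft_reply.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:
            return False, "Escalated to a human agent for review."
    return True, draft_reply

ok, reply = apply_guardrails("This product offers guaranteed returns!")
print(ok)  # False: the draft is held back and escalated
```

In practice the rule set would be far richer (compliance checks, sentiment, customer context), but the shape is the same: the AI proposes, the guardrail disposes.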

AI in Insurance

40:57 - 53:57

  • Helping customers succeed is more profitable for the bank than gaming them for a few extra coins.
  • AI should help customers and get out of the way if it can't help.
  • Guardrails are important in managing risk and delivering better outcomes.
  • Insurance companies need to understand risks and protect customers from them.
  • Flipping insurance on its head by protecting businesses against possible risks is a new approach.
  • Highly personalized and transparent risk pricing is necessary for this approach.
  • Parametric insurance products that pay automatically when certain parameters are reached may become more common.
  • ChatGPT is still at the prototype stage with many limitations, but many companies are experimenting with it.
  • In the future, ChatGPT will be connected to the internet, news feeds, weather, and other contextual information.
  • AI usage for car insurance now assesses risk from driving behavior: where people drive, how fast they drive, and at what time of day they drive.
  • A weather insurance product in Vietnam runs on a blockchain and uses automated decision-making to assess data from weather monitoring stations. If the weather falls below or above a certain level, a claim is automatically paid out.
  • ChatGPT can be used for initial research, policy creation, compliance with policies, and identifying inconsistencies across policies.
  • ChatGPT has limitations with legal compliance questions, as it trawls the entire internet and may give wrong information confidently.
  • ChatGPT could be used operationally for better and more standardized customer servicing.
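The parametric trigger described for the Vietnamese weather product can be sketched as a threshold check: a claim pays out automatically when an observed reading falls outside the contracted range. The class, field names, and thresholds below are illustrative assumptions, not the actual product's parameters.

```python
# Hypothetical parametric insurance trigger: automated payout when a
# weather-station reading falls below or above contracted bounds.

from dataclasses import dataclass

@dataclass
class ParametricPolicy:
    min_rainfall_mm: float   # below this, the drought payout triggers
    max_rainfall_mm: float   # above this, the flood payout triggers
    payout_amount: float

def assess_claim(policy: ParametricPolicy, observed_rainfall_mm: float) -> float:
    """Return the automatic payout for an observed weather reading."""
    if observed_rainfall_mm < policy.min_rainfall_mm:
        return policy.payout_amount     # drought trigger
    if observed_rainfall_mm > policy.max_rainfall_mm:
        return policy.payout_amount     # flood trigger
    return 0.0                          # within contracted range: no claim

policy = ParametricPolicy(min_rainfall_mm=20.0, max_rainfall_mm=150.0,
                          payout_amount=500.0)
print(assess_claim(policy, 10.0))   # → 500.0 (drought trigger)
print(assess_claim(policy, 80.0))   # → 0.0 (in range)
```

Because the decision is a pure function of measured parameters, no claims adjuster is needed, which is what lets such products pay out automatically.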

Future of AI

47:03 - 1:00:34

  • AI will be as profound as the rollout of the internet and web.
  • AI will connect with other technologies like semantic web, distributed web, and web3.
  • Businesses that are just AI will emerge.
  • AI needs to learn how to be a good corporate citizen.
  • Regtech and regulation oversight will shift towards AI in the future.
  • AIs will act as guardrails for acceptable transactions or behaviors.
  • Igloo Insure uses AI to make insurance simple, affordable, and painless.
  • Igloo uses a rule-based AI approach to assess claims based on parameters it defines.
  • Igloo assesses the risk of policyholders based on various factors, including driving behavior.
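A rule-based risk assessment of the kind described above can be sketched as a small scoring function over telematics features (speed, night driving, urban driving). The features, weights, and thresholds here are illustrative assumptions, not Igloo's actual model.

```python
# Hypothetical rule-based driving risk score on a 0-100 scale.
# All weights and cut-offs are illustrative assumptions.

def driving_risk_score(avg_speed_kmh: float,
                       night_share: float,
                       urban_share: float) -> float:
    """Combine simple telematics features into a 0-100 risk score.

    night_share and urban_share are the fractions (0.0-1.0) of distance
    driven at night and in dense urban areas, respectively.
    """
    score = 0.0
    if avg_speed_kmh > 100:
        score += 40          # sustained high speed
    elif avg_speed_kmh > 80:
        score += 20          # moderately high speed
    score += 30 * night_share
    score += 30 * urban_share
    return min(score, 100.0)

print(driving_risk_score(avg_speed_kmh=90, night_share=0.5, urban_share=0.2))
# → 41.0
```

The appeal of a rule-based approach is transparency: every contribution to the score can be read directly from the rules, unlike the black-box correlations discussed earlier in the episode.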

Ethical Considerations of AI

53:28 - 1:09:14

  • Technology can help standardize customer service and improve brand recognition.
  • Risk-based pricing for insurance companies based on individual data points could lead to pricing out customers in need of insurance due to racial, age, or lifestyle biases.
  • A "predict and prevent" approach could be used instead of a "detect and repair" approach, encouraging lifestyle changes while still including high-risk customers.
  • Compliance issues are a general risk with AI decision-making. Transparency, ethics, empathy, accountability, and fairness should be defined by organizations using AI.
  • Chatbots could replace the need for human customer service agents in certain use cases but can also assist agents by providing access to a database or drafting responses.
  • Hiring people who are comfortable addressing more complex use cases is necessary.
  • Employees must have a basic understanding of AI and their responsibility when using it.
  • Blindly following SOPs (standard operating procedures) is not recommended.
  • Skills that were once needed may no longer be necessary due to the rise of AI.