
Massachusetts Uses Existing Law to Challenge AI Underwriting

This document analyzes the implications of a regulatory action, likely a settlement or enforcement action, referred to as the “AOD” (most likely an Assurance of Discontinuance or a similar regulatory agreement) against a company. The focus is on how the AOD affects the use of artificial intelligence (AI) in underwriting practices, particularly in the lending sector.

Here’s a breakdown of the key takeaways:

1. Algorithmic Governance Mandates (The Compliance Blueprint):

The AOD sets a precedent for AI governance: The document presents the AOD’s requirements as a “blueprint” for other companies using AI in underwriting. This blueprint is seen as representative of a broader trend from federal and state regulators.
Key elements of the compliance blueprint:
Written AI Policies and Procedures: Companies need clear, written rules for how AI models are designed, developed, deployed, monitored, and updated, ensuring they comply with anti-discrimination and fair lending laws.
Algorithmic Oversight Team: An internal team, with a dedicated leader, is crucial for managing fair lending tests, keeping track of models, and addressing bias concerns.
Annual Fair Lending Testing: AI underwriting models and “knockout rules” (rules that automatically disqualify applicants) must be tested annually for “disparate impact” (unintentional discrimination against protected groups). More testing is needed if a model is updated or if there are credible complaints. (A testing sketch follows this list.)
Model Inventories and Documentation: Detailed records are essential. This includes information about the algorithms, the data used to train them, their parameters, when they were actively used, and the results of fair lending tests.
Interpretable Models for Adverse Action Notices: When credit is denied, the reasons must be clearly identifiable, implying that “black box” models (models that are difficult to understand) are problematic.
Discontinuation of Problematic Variables: Companies need to understand how different data sources (including publicly available information) are weighted in the model to identify and remove data points that could lead to discrimination.
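
To make the annual disparate-impact testing concrete, here is a minimal sketch of the kind of check a fair lending team might run. The four-fifths (0.8) benchmark, column names, and toy data are illustrative assumptions, not terms of the AOD:

```python
# Minimal disparate-impact check on an AI underwriting model's approval
# decisions, using the "four-fifths rule" heuristic. Column names and the
# 0.8 threshold are illustrative assumptions, not terms of the settlement.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str,
                         reference_group: str,
                         approved_col: str = "approved") -> pd.Series:
    """Approval rate of each group divided by the reference group's rate."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates[reference_group]

# Toy data: model decisions joined with group labels.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratios = adverse_impact_ratio(decisions, "group", reference_group="A")
flagged = ratios[ratios < 0.8]  # groups below the four-fifths benchmark
print(ratios)
print("Potential disparate impact:", list(flagged.index))
```

A ratio below the benchmark would trigger the deeper investigation and remediation the AOD contemplates, and the same check would be rerun after any model update or credible complaint.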

2. Implications for Fintech and AI in Lending:

Regulators are holding lenders accountable for AI outputs: The AOD shows a trend, especially at the state level, to hold lenders responsible for what their AI systems produce, even if discrimination wasn’t intended.
“Black box” models are risky: Relying on models that cannot be audited or explained creates notable risk.
Fundamental compliance controls are highlighted:
Rigorous Fair Lending Testing: This needs to happen at every stage of model development and deployment.
Comprehensive Documentation: This proves the explainability and defensibility of AI decisions. (A model-inventory sketch follows this list.)
Robust Governance Frameworks: Clear roles for compliance, legal, and data science teams are necessary to oversee AI systems.
Data Set Review: Regulators may scrutinize the data used to ensure legal compliance.
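
As one way to picture the documentation expectations above, here is a sketch of what a model-inventory record might capture; the field names and values are hypothetical, not a schema drawn from the AOD:

```python
# Sketch of a model-inventory record of the kind the documentation
# requirements describe. All field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryEntry:
    model_id: str
    purpose: str                      # e.g., "personal loan underwriting"
    algorithm: str                    # e.g., "gradient-boosted trees"
    training_data_sources: list[str]  # provenance of every training dataset
    input_variables: list[str]        # full feature list, incl. knockout rules
    deployed_from: date
    deployed_to: date | None          # None while the model is in production
    fair_lending_tests: list[dict] = field(default_factory=list)

entry = ModelInventoryEntry(
    model_id="uw-v3",
    purpose="unsecured personal loan underwriting",
    algorithm="logistic regression",
    training_data_sources=["2019-2022 application data"],
    input_variables=["fico", "dti", "income", "employment_length"],
    deployed_from=date(2023, 1, 15),
    deployed_to=None,
)
entry.fair_lending_tests.append(
    {"date": date(2024, 1, 10), "air_min": 0.91, "passed": True}
)
```

Keeping a record like this per model version is what lets a lender answer, after the fact, which algorithm made a given decision, what data trained it, and what testing it had passed at the time.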

3. Looking Ahead: AI Risk Management as a Regulatory Imperative:

Growing regulatory scrutiny: This settlement is part of a larger trend of regulatory actions addressing AI in consumer finance.
Focus on fairness and transparency: Regulators want AI systems to be fair and transparent, and companies to be accountable for their use.
Proactive AI governance is crucial: Given potential penalties and long-term oversight, companies using AI in credit underwriting should actively improve their AI governance programs to meet evolving regulatory expectations.

In essence, the document serves as a warning and a guide for companies using AI in lending, emphasizing the need for robust governance, transparency, thorough testing, and comprehensive documentation to ensure compliance with fair lending laws and avoid regulatory penalties.

What specific provisions of the Massachusetts Unfair Discrimination Law (MUDL) are being applied to challenge AI underwriting practices?


The Pioneering Approach to AI Regulation

Massachusetts is taking a novel approach to regulating artificial intelligence (AI) in the financial sector, specifically within insurance underwriting. Rather than waiting for new, dedicated AI legislation – a process often fraught with delays – the state is leveraging its existing Massachusetts Unfair Discrimination Law (MUDL), Chapter 151C, to challenge potentially discriminatory practices arising from AI-driven underwriting. This strategy positions Massachusetts as a leader in proactive AI governance and sets a precedent for other states grappling with the ethical and legal implications of algorithmic bias in financial services.

Understanding the Massachusetts Unfair Discrimination Law (MUDL)

Chapter 151C, originally enacted in 1974, prohibits discrimination in public accommodations, including insurance. Traditionally, this law focused on protected characteristics like race, religion, and gender. However, the Massachusetts Commission Against Discrimination (MCAD) has interpreted the law to extend to discrimination based on characteristics correlated with protected classes, even if those characteristics aren’t explicitly used in the underwriting process.

This is where AI underwriting becomes a focal point. AI models, trained on past data, can inadvertently perpetuate and amplify existing societal biases, leading to disparate impact on protected groups. Even if an insurance company doesn’t intend to discriminate, the use of an algorithm that results in discriminatory outcomes can be a violation of MUDL.

How AI Underwriting Can Lead to Discrimination

AI in insurance is increasingly used to assess risk and determine premiums. These AI systems analyze vast datasets, often including non-conventional factors like zip code, social media activity, or purchasing habits. While these factors might seem innocuous, they can serve as proxies for protected characteristics.

Here’s how it can happen:

Redlining 2.0: AI algorithms might identify certain zip codes as “high-risk” based on historical data reflecting past discriminatory lending practices, effectively recreating redlining.

Proxy Discrimination: An AI model might correlate certain purchasing habits with a protected class, leading to higher premiums or denial of coverage. (A simple proxy screen is sketched after this list.)

Data Bias: If the data used to train the AI model is biased (e.g., underrepresenting certain demographics), the model will likely perpetuate those biases in its predictions.

Lack of Transparency: The “black box” nature of some AI algorithms makes it difficult to identify and address discriminatory outcomes. Explainable AI (XAI) is becoming crucial, but isn’t universally implemented.
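
One common screen for proxy discrimination, sketched below on synthetic data with hypothetical feature names (an illustrative technique, not a test mandated by Massachusetts), is to check how well each candidate feature predicts the protected attribute itself; a feature that predicts group membership well may act as a proxy even though the attribute is never used directly:

```python
# Proxy screen: if a candidate underwriting feature predicts the protected
# attribute well, it may function as a proxy for it. Synthetic, illustrative data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, n)                    # hypothetical group label
zip_risk = protected * 0.8 + rng.normal(0, 0.5, n)   # correlated with group
income = rng.normal(50, 10, n)                       # unrelated to group

for name, feature in [("zip_risk_score", zip_risk), ("income", income)]:
    X = feature.reshape(-1, 1)
    clf = LogisticRegression().fit(X, protected)
    auc = roc_auc_score(protected, clf.predict_proba(X)[:, 1])
    print(f"{name}: AUC for predicting protected class = {auc:.2f}")
# An AUC near 0.5 suggests little group information; an AUC well above 0.5
# (here, zip_risk_score) flags the feature for the variable review above.
```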

The MCAD’s Stance and Recent Actions

The MCAD has signaled its intent to actively investigate potential violations of MUDL related to AI underwriting. In early 2024, the MCAD issued guidance clarifying that AI-driven decision-making is subject to scrutiny under the law. This guidance emphasizes the importance of:

Fairness Testing: Regularly auditing AI models for disparate impact on protected groups.

Data Auditing: Ensuring the data used to train AI models is representative and free from bias.

Transparency and Explainability: Being able to explain how an AI model arrived at a particular decision. (A reason-code sketch follows this list.)

Human Oversight: Maintaining human review of AI-driven decisions, particularly in cases where adverse action is taken.
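
To illustrate the explainability point, here is a minimal sketch of how an interpretable, scorecard-style model can yield per-applicant reason codes for adverse action notices. The coefficients and feature names are hypothetical, and this is one common approach, not a method prescribed by the MCAD:

```python
# With an interpretable linear model, per-applicant reason codes fall out of
# the signed feature contributions directly. Illustrative numbers throughout.
import numpy as np

# Assumed coefficients from a fitted logistic-regression underwriting model,
# on standardized features (hypothetical values).
feature_names = ["fico", "dti", "inquiries_6mo", "utilization"]
coefs = np.array([0.9, -0.7, -0.4, -0.6])

def reason_codes(x_std: np.ndarray, top_k: int = 2) -> list[str]:
    """Rank features by how much each pushed this applicant's score down."""
    contributions = coefs * x_std              # signed contribution per feature
    worst = np.argsort(contributions)[:top_k]  # most negative first
    return [feature_names[i] for i in worst]

# A denied applicant's standardized feature values (illustrative).
applicant = np.array([-1.2, 1.5, 0.3, 1.8])
print("Principal reasons for adverse action:", reason_codes(applicant))
```

Because each contribution is directly attributable to a named input, the lender can state the principal reasons for denial, which is exactly what a “black box” model struggles to support.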

Several complaints have been filed with the MCAD alleging discriminatory practices by insurance companies using AI underwriting. While specific details of these cases are often confidential, they highlight the growing concern about algorithmic discrimination in the insurance industry.

Implications for Insurance Companies

Massachusetts’ approach has significant implications for insurance providers operating in the state:

Increased Compliance Costs: Insurance companies will need to invest in AI fairness testing, data auditing, and explainable AI technologies.

Potential Legal Liability: Companies found to be violating MUDL could face significant fines and legal challenges.

Reputational Risk: Allegations of algorithmic bias can damage a company’s reputation and erode customer trust.

Need for Robust AI Governance: Establishing clear policies and procedures for the development and deployment of AI systems is essential.

Beyond Massachusetts: A National Trend?

Massachusetts’ proactive stance is likely to influence other states considering how to regulate AI. While some states are pursuing new, dedicated AI legislation, Massachusetts demonstrates that existing anti-discrimination law can be applied to algorithmic decision-making today.
