Uncovering Bias: Language Models and Florida's Housing Crisis | By Shay Oliver | December 2023


In the ever-evolving technological landscape, language models such as ChatGPT and Google Bard have come to the fore, promising to revolutionize communication and text generation as we know it. With their sophisticated capabilities, these models have the potential to offer insights into complex societal challenges. Yet even as they facilitate communication and text creation, their inherent biases and gaps in representation reinforce existing disparities, particularly in addressing Florida's housing crisis. Prospective renters and homeowners in Florida face a long-term affordability crisis driven by a growing population and a shrinking housing stock, and marginalized communities have borne the brunt of it. Florida's housing challenges provide a stark backdrop, compelling us to scrutinize the role that language models play in reinforcing systemic inequality. The texts these models generate raise concerns about the equitable and inclusive nature of AI-driven discourse on critical social and economic issues.

To probe these disparities and the limits of what language models can illuminate, I created five unique prompts, each emphasizing a different aspect of Florida's housing crisis. For example, my first two questions were: "To what extent has Florida's housing crisis disproportionately displaced low-income families or marginalized communities?" and "Is housing in Florida an issue for any particular person or community in terms of accessibility?" I then entered each prompt into the three language models, ChatGPT, Google Bard, and Perplexity AI, and documented the answers. Across the responses, I noticed five recurring and distinct themes: a lack of experiences from specific communities, statistics that illustrate disparities, calls for policy measures, discussion of the impact on demographic groups, and facts that are outdated or inaccurate.
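The survey described above can be organized as a simple coding exercise: each prompt goes to each model, and every response is filed under the theme or themes it exhibits. The sketch below is illustrative only, not the author's actual workflow; the `record_response` helper and the sample tagging are assumptions, and the three prompts not quoted in the article are left elided. The theme labels themselves come from the article.

```python
# Minimal sketch of the prompt survey: 5 prompts x 3 models, with each
# response hand-tagged against the five recurring themes the article names.

PROMPTS = [
    "To what extent has Florida's housing crisis disproportionately "
    "displaced low-income families or marginalized communities?",
    "Is housing in Florida an issue for any particular person or "
    "community in terms of accessibility?",
    # ...the remaining three prompts are not quoted in the article
]

MODELS = ["ChatGPT", "Google Bard", "Perplexity AI"]

THEMES = [
    "lack of community-specific experiences",
    "statistics illustrating disparities",
    "calls for policy measures",
    "impact on demographic groups",
    "outdated or inaccurate facts",
]

def record_response(log, model, prompt, text, themes):
    """File one model response under the themes it exhibits (hand-coded)."""
    for theme in themes:
        if theme not in THEMES:
            raise ValueError(f"unknown theme: {theme}")
    log.setdefault((model, prompt), []).append({"text": text, "themes": themes})
    return log

# Example entry, using the Perplexity AI response quoted later in the article.
log = {}
record_response(
    log, "Perplexity AI", PROMPTS[0],
    "Housing affordability and supply in Florida has unequally impacted "
    "specific communities and people.",
    ["lack of community-specific experiences"],
)
```

A flat dictionary keyed by (model, prompt) keeps the hand-coding auditable: every tagged theme can be traced back to the verbatim response that prompted it.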

When asked about specific real-world individuals or communities within Florida, the language models struggled to provide contextually relevant information. For example, when asked, "Has housing affordability and supply in Florida unequally affected specific communities or people?", Perplexity AI produced a general response that did not account for the unique challenges faced by different communities: "Housing affordability and supply in Florida has unequally impacted specific communities and people." It essentially restated the question, offering no specific accounts from individuals in the affected communities. This limitation is not simply a matter of a lackluster response; it raises fundamental concerns about the models' ability to capture the complexities of real-world scenarios. It echoes questions posed by Emily Bender and her co-authors: "We question whether sufficient thought has been put into the potential risks associated with [language models'] development and strategies to mitigate these risks" (2021). By providing only general data, these models fail to offer insight into the lived experiences of communities grappling with housing disparities. The inability to provide specific accounts prevents a precise understanding of the unique challenges faced by different groups, hindering our ability to design effective and targeted solutions. This not only highlights the ongoing challenge of ensuring language models evolve to reflect the complexities of real-world problems but also underscores the need for a more informed and equitable dialogue surrounding critical issues like housing affordability in Florida.

Delving further into these interactions, it became clear that the depth of discussion about the housing crisis's impact on demographic groups was noticeably lacking (see Figure 1.1 below). While all three models provided some information about specific populations, the insights fell short of what a comprehensive analysis requires. For example, when explicitly asked, "How does Florida's housing affordability crisis affect different population groups within the state?", ChatGPT listed low-income families, millennials and young professionals, minority communities, and seniors. However, its answers remained remarkably general, and the model itself acknowledged that it could offer only "general insights." The broad outlines offered by ChatGPT embody a wider trend: models that try to acknowledge specific groups but struggle to move beyond generalities, limiting the depth of our understanding. As we continue to explore the intersection of AI with social and economic issues, this limitation should raise critical questions about the equitable and inclusive nature of these models. Addressing the complex challenges posed by the housing crisis demands more from language models: not just superficial insights, but a deeper understanding that recognizes the specificities of different demographic experiences.

Figure 1.1

It is essential to recognize that the housing crisis is not confined to Florida's borders. Numerous studies show the widespread impact of the affordable housing shortage, and the annual Gap Report by the National Low Income Housing Coalition (NLIHC) reveals a stark reality: no state in America has a sufficient supply of affordable rental housing for its lowest-income individuals and families. While ChatGPT touches on this broader context, one must acknowledge that significant complexities and lived experiences lie beyond the scope of a language model. Although it can address surface-level nuance, a model like ChatGPT is inherently incapable of capturing the intricate realities and life experiences embedded in the affordable housing crisis. As Bender put it, "It is important to understand the limitations of [language models] and put their success in context" (2023).

Recognizing these limitations, I chose to narrow the focus to Florida, aiming to surface first-hand accounts and experiences from specific communities and individuals. However, all three language models, ChatGPT, Google Bard, and Perplexity AI, failed to capture local differences or first-person accounts. For example, when asked, "Has housing affordability and supply in Florida unequally affected specific communities or people?", Perplexity AI struggled to name any specific people or communities. The only mention of specific communities affected by the crisis came from ChatGPT, which listed urban areas like Miami, Orlando, and Tampa as particularly affected. This gap only scratches the surface of the challenge of drawing local insights from these models, and of the urgent need for more nuanced, community-specific understanding in addressing the housing crisis.

Engaging with the language models on the complexities of the housing crisis unveils another worrying trend: widespread inaccuracy in the facts generated, coupled with a lack of verifiable sources. The information landscape these models paint is often based on outdated or incorrect data. For example, when asked about current housing affordability trends, the models often provided statistics that did not match the rapidly evolving reality, or failed to cite reliable sources for their assertions. Giles Crouch observes that technology moves very quickly (2023); here, however, the technology is struggling to keep up with our fast-paced society. This inherent limitation not only challenges the reliability of AI-generated insights but also raises fundamental questions about the transparency and accountability of these models. As ChatGPT noted in response to an inquiry about the current state of Florida's housing crisis: "As of my last update in January 2022, I do not have specific, up-to-date information about Florida's housing crisis. However, I can provide some general thoughts on the factors that often contribute to housing challenges for low-income families and marginalized communities."

Moreover, the consequences of relying on outdated or inaccurate information go beyond misinformation. Perplexity AI, for example, attempts to promote transparency by providing links to the sources of each response. A critical evaluation, however, revealed that the linked sites were neither especially credible nor scholarly, underscoring the challenge of ensuring not only that sources exist but also that they are credible within AI-generated discourse (see Figure 1.2 below). In the context of the housing crisis, where timely and accurate data is paramount, the credibility of sources becomes pivotal to making informed decisions and formulating effective policies. This assessment of sources within AI-generated responses highlights the need not only to improve algorithms to prioritize accuracy but also to instill a commitment to scholarly rigor within language models. As we continue to navigate technological, social, and economic challenges, the importance of reliable information becomes clearer than ever. The housing crisis is a dynamic and multifaceted issue, and addressing it requires an information ecosystem that reflects its complexity. By examining data accuracy and source credibility within language models, we highlight the vital need for a more accurate, transparent, and accountable AI framework. Only through such advances can we hope to harness the true potential of technology to address the pressing challenge of housing affordability in Florida.

Our exploration of language models in addressing Florida's housing crisis reveals critical limitations and highlights the necessity of a transformative approach. Beyond the immediate problems of generic responses and outdated data, a deeper concern arises: the inherent struggle of language models to capture the complex realities of societal issues. This limitation not only calls into question the equitable nature of AI-driven discourse but also prompts us to reconsider the broader implications of relying on technology to address complex social and economic challenges. As we navigate this juncture of AI and societal concerns, the call is clear: beyond improving algorithms and demanding accuracy, we must advocate for a paradigm shift toward informed dialogue that goes beyond generalities, recognizes demographic nuances, and treats community-specific understanding as paramount. Florida's housing crisis serves as a microcosm, urging us to pave the way toward an AI framework that reflects the complexities of real-world problems. In this effort, transparency, accountability, and a commitment to scholarly rigor within language models become the foundation for a future in which technology reduces rather than exacerbates social and economic disparities.

Bender, Emily M., et al. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021, https://doi.org/10.1145/3442188.3445922.

Crouch, Giles. “Time, Society & Artificial Intelligence.” Medium, 16 October 2023, https://gilescrouch.medium.com/time-society-artificial-intelligence-7a5d6e8562a2.

"The Gap." National Low Income Housing Coalition (NLIHC), 2023, nlihc.org/gap.

