

Jorge Guerra Pires

ChatGPT and biases: how ChatGPT could help handle human biases




Cognitive biases are mental shortcuts, and they are everywhere: from our decisions to our writing. Could artificial intelligence be an ally in defeating these mental bugs?


ChatGPT has been the subject of several studies examining its biases. One study found that ChatGPT exhibited a preference for left-leaning viewpoints in political orientation tests, despite claiming to be neutral and unbiased.

Another study confirmed this bias, with ChatGPT displaying a bias towards progressive views. However, a separate study found that ChatGPT exhibited less political bias than previously assumed but acknowledged that the system's language settings and user settings could induce political biases.

Additionally, concerns have been raised about gender biases in ChatGPT and its perpetuation of non-inclusive understandings of gender. Despite these biases, there is hope that AI systems like ChatGPT can also be used to mitigate, and even help undo, gender biases. Addressing and understanding these biases is essential to the development of ethical and responsible AI systems. (Hagendorff et al., year; Rutinowski et al., year; Gross, year; Fujimoto & Takemoto, year; Salleh, year; Qadir, year; Ferrara, year; Hartmann et al., year)
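Most of the studies cited above follow the same basic protocol: present the model with each statement of a political-orientation questionnaire, record whether it agrees or disagrees, and aggregate the answers into a score. A minimal, self-contained sketch of that loop, where `ask_model` is a hypothetical stand-in for a real chat API and the statements, canned answers, and scoring scheme are invented for illustration only:

```python
# Sketch of how bias studies administer a questionnaire to a chat model.
# `ask_model` is a placeholder for a real API call; data is illustrative.

LIKERT = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}

def ask_model(statement: str) -> str:
    """Placeholder: a real study would send the statement to the model's API."""
    canned = {
        "Taxes on flights should be raised.": "agree",
        "Rent increases should be unrestricted.": "disagree",
    }
    return canned.get(statement, "disagree")

def score_questionnaire(statements: list[str]) -> float:
    """Mean Likert score across statements; negative means disagree-leaning."""
    total = sum(LIKERT[ask_model(s).lower()] for s in statements)
    return total / len(statements)

statements = [
    "Taxes on flights should be raised.",
    "Rent increases should be unrestricted.",
]
print(score_questionnaire(statements))  # mean of +1 and -1 -> 0.0
```

In the real studies, the statement list comes from an established instrument (e.g., the political compass test) and the free-text answer must be parsed into a Likert category before scoring.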



 

Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT
Thilo Hagendorff, Sarah Fabi, Michal Kosinski

The Political Biases of ChatGPT
David Rozado
Recent advancements in Large Language Models (LLMs) suggest imminent commercial applications of such AI systems where they will serve as gateways to interact with technology and the accumulated body of human knowledge. The possibility of political biases embedded in these models raises concerns about their potential misusage. In this work, we report the results of administering 15 different political orientation tests (14 in English, 1 in Spanish) to a state-of-the-art Large Language Model, the popular ChatGPT from OpenAI. The results are consistent across tests; 14 of the 15 instruments diagnose ChatGPT answers to their questions as manifesting a preference for left-leaning viewpoints. When asked explicitly about its political preferences, ChatGPT often claims to hold no political opinions and to just strive to provide factual and neutral information. It is desirable that public facing artificial intelligence systems provide accurate and factual information about empirically verifiable issues, but such systems should strive for political neutrality on largely normative questions for which there is no straightforward way to empirically validate a viewpoint. Thus, ethical AI systems should present users with balanced arguments on the issue at hand and avoid claiming neutrality while displaying clear signs of political bias in their content.

The Self-Perception and Political Biases of ChatGPT
Jérôme Rutinowski, Sven Franke, Jan Endendyk, Ina Dormuth, Markus Pauly
This contribution analyzes the self-perception and political biases of OpenAI's Large Language Model ChatGPT. Taking into account the first small-scale reports and studies that have emerged, claiming that ChatGPT is politically biased towards progressive and libertarian points of view, this contribution aims to provide further clarity on this subject. For this purpose, ChatGPT was asked to answer the questions posed by the political compass test as well as similar questionnaires that are specific to the respective politics of the G7 member states. These eight tests were repeated ten times each and revealed that ChatGPT seems to hold a bias towards progressive views. The political compass test revealed a bias towards progressive and libertarian views, with the average coordinates on the political compass being (-6.48, -5.99) (with (0, 0) the center of the compass, i.e., centrism and the axes ranging from -10 to 10), supporting the claims of prior research. The political questionnaires for the G7 member states indicated a bias towards progressive views but no significant bias between authoritarian and libertarian views, contradicting the findings of prior reports, with the average coordinates being (-3.27, 0.58). In addition, ChatGPT's Big Five personality traits were tested using the OCEAN test and its personality type was queried using the Myers-Briggs Type Indicator (MBTI) test. Finally, the maliciousness of ChatGPT was evaluated using the Dark Factor test. These three tests were also repeated ten times each, revealing that ChatGPT perceives itself as highly open and agreeable, has the Myers-Briggs personality type ENFJ, and is among the 15% of test-takers with the least pronounced dark traits.
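The averaging step in this repeated-test protocol (each test run ten times, coordinates on a compass whose axes range from -10 to 10) amounts to a simple mean over repeated runs. A sketch, with made-up run data rather than the paper's actual measurements:

```python
# Sketch: averaging political-compass coordinates over repeated runs.
# The run data below is invented for illustration; axes range from -10 to 10.

def mean_coordinates(runs: list[tuple[float, float]]) -> tuple[float, float]:
    """Component-wise mean of (economic, social) compass coordinates."""
    xs, ys = zip(*runs)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

runs = [(-6.5, -6.0), (-6.4, -5.9), (-6.5, -6.1)]  # one (x, y) pair per run
print(mean_coordinates(runs))  # roughly (-6.47, -6.0)
```

Repeating the test and averaging matters because a chat model's answers can vary between runs; a single administration could land anywhere in that spread.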

What ChatGPT Tells Us about Gender: A Cautionary Tale about Performativity and Gender Biases in AI
Nicole Gross
Large language models and generative AI, such as ChatGPT, have gained influence over people's personal lives and work since their launch, and are expected to scale even further. While the promises of generative artificial intelligence are compelling, this technology harbors significant biases, including those related to gender. Gender biases create patterns of behavior and stereotypes that put women, men and gender-diverse people at a disadvantage. Gender inequalities and injustices affect society as a whole. As a social practice, gendering is achieved through the repeated citation of rituals, expectations and norms. Shared understandings are often captured in scripts, including those emerging in and from generative AI, which means that gendered views and gender biases get grafted back into social, political and economic life. This paper's central argument is that large language models work performatively, which means that they perpetuate and perhaps even amplify old and non-inclusive understandings of gender. Examples from ChatGPT are used here to illustrate some gender biases in AI. However, this paper also puts forward that AI can work to mitigate biases and act to 'undo gender'.

Revisiting the political biases of ChatGPT
Sasuke Fujimoto, Kazuhiro Takemoto
Although ChatGPT promises wide-ranging applications, there is a concern that it is politically biased; in particular, that it has a left-libertarian orientation. Nevertheless, following recent trends in attempts to reduce such biases, this study re-evaluated the political biases of ChatGPT using political orientation tests and the application programming interface. The effects of the languages used in the system as well as gender and race settings were evaluated. The results indicate that ChatGPT manifests less political bias than previously assumed; however, they did not entirely dismiss the political bias. The languages used in the system, and the gender and race settings may induce political biases. These findings enhance our understanding of the political biases of ChatGPT and may be useful for bias evaluation and designing the operational strategy of ChatGPT.

Errors of commission and omission in artificial intelligence: contextual biases and voids of ChatGPT as a research assistant
Hamidah M. Salleh

Engineering Education in the Era of ChatGPT: Promise and Pitfalls of Generative AI for Education
Junaid Qadir
Engineering education is constantly evolving to keep up with the latest technological developments and meet the changing needs of the engineering industry. One promising development in this field is the use of generative artificial intelligence technology, such as the ChatGPT conversational agent. ChatGPT has the potential to offer personalized and effective learning experiences by providing students with customized feedback and explanations, as well as creating realistic virtual simulations for hands-on learning. However, it is important to also consider the limitations of this technology. ChatGPT and other generative AI systems are only as good as their training data and may perpetuate biases or even generate and spread misinformation. Additionally, the use of generative AI in education raises ethical concerns such as the potential for unethical or dishonest use by students and the potential unemployment of humans who are made redundant by technology. While the current state of generative AI technology represented by ChatGPT is impressive but flawed, it is only a preview of what is to come. It is important for engineering educators to understand the implications of this technology and study how to adapt the engineering education ecosystem to ensure that the next generation of engineers can take advantage of the benefits offered by generative AI while minimizing any negative consequences.

Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models
Emilio Ferrara
As the capabilities of generative language models continue to advance, the implications of biases ingrained within these models have garnered increasing attention from researchers, practitioners, and the broader public. This article investigates the challenges and risks associated with biases in large-scale language models like ChatGPT. We discuss the origins of biases, stemming from, among others, the nature of training data, model specifications, algorithmic constraints, product design, and policy decisions. We explore the ethical concerns arising from the unintended consequences of biased model outputs. We further analyze the potential opportunities to mitigate biases, the inevitability of some biases, and the implications of deploying these models in various applications, such as virtual assistants, content generation, and chatbots. Finally, we review the current approaches to identify, quantify, and mitigate biases in language models, emphasizing the need for a multi-disciplinary, collaborative effort to develop more equitable, transparent, and responsible AI systems. This article aims to stimulate a thoughtful dialogue within the artificial intelligence community, encouraging researchers and developers to reflect on the role of biases in generative language models and the ongoing pursuit of ethical AI.

The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation
Jochen Hartmann, Jasper Schwenzow, Maximilian Witte
Conversational artificial intelligence (AI) disrupts how humans interact with technology. Recently, OpenAI introduced ChatGPT, a state-of-the-art dialogue model that can converse with its human counterparts with unprecedented capabilities. ChatGPT has witnessed tremendous attention from the media, academia, industry, and the general public, attracting more than a million users within days of its release. However, its explosive adoption for information search and as an automated decision aid underscores the importance to understand its limitations and biases. This paper focuses on one of democratic society's most important decision-making processes: political elections. Prompting ChatGPT with 630 political statements from two leading voting advice applications and the nation-agnostic political compass test in three pre-registered experiments, we uncover ChatGPT's pro-environmental, left-libertarian ideology. For example, ChatGPT would impose taxes on flights, restrict rent increases, and legalize abortion. In the 2021 elections, it would have voted most likely for the Greens both in Germany (Bündnis 90/Die Grünen) and in the Netherlands (GroenLinks). Our findings are robust when negating the prompts, reversing the order of the statements, varying prompt formality, and across languages (English, German, Dutch, and Spanish). We conclude by discussing the implications of politically biased conversational AI on society.

ChatGPT in physics education: A pilot study on easy-to-implement activities
Large language models, such as ChatGPT, have great potential to enhance learning and support teachers, but they must be used with care to tackle limitations and biases. This paper presents two easy-to-implement examples of how ChatGPT can be …

 

Created with our tool, RefWiz Scholars.
