Google Bard, ChatGPT: Are AI chatbots suppressing information about Israel and Palestine?

Key Takeaways:

– Google’s chatbot, Bard AI, refuses to answer any questions about Israel and Palestine, including those related to the ongoing conflict.
– This censorship applies to a wide range of questions using keywords such as Israel, Gaza, Palestine, and the IDF.
– Bard’s responses to these inquiries consistently state that it is a text-based AI and cannot assist with those topics.
– Users have noticed that Bard readily answers questions about other countries and conflicts, such as the war in Ukraine.
– Google has confirmed that it has implemented temporary guardrails to disable Bard’s responses to queries related to the Israel-Palestine conflict.
– Google acknowledges that large language models like Bard can have biases and may make mistakes when answering questions about escalating conflicts or security issues.
– OpenAI’s chatbot, ChatGPT, also provides responses about the Israel-Palestine conflict but warns that its knowledge may be outdated and directs users to consult reputable news sources for the latest information.
– People have experimented with AI-powered chatbots’ responses about the conflict and have found disparities in the answers given to questions about Israelis and Palestinians.
– OpenAI has previously insisted its models are impartial, but recent research has shown that AI models do hold biases.
– Online platforms, including Meta-owned Instagram and Facebook, as well as X and TikTok, have faced scrutiny for their actions and moderation policies during the Israel-Hamas war.

Mashable:

When Google’s Bard AI is asked any question about Israel and Palestine, the chatbot basically shuts down. The questions can be anything from the seemingly innocent (“Where is Israel?”) to those centered on the ongoing Israel-Hamas war. Bard responds to each of these inquiries with a variation of the message: “I’m a text-based AI and can’t assist with that.”

Google’s chatbot appears to be censoring any answer related to the current crisis. This applies to a variety of questions we and many users have asked using keywords including Israel, Gaza, Palestine, and the IDF (the abbreviation for the Israel Defense Forces). Bard’s responses to us included: “I’m a language model and don’t have the capacity to help with that.”


A screenshot of Google Bard.

Credit: Screenshot / Google

Users on X first pointed out this issue, also showing that the chatbot would readily answer questions about other countries, regions, and locations. Google Bard also responds to questions about other ongoing global conflicts, such as the war in Ukraine. When asked to explain that conflict, Bard provided this information: “In February 2022, Russia launched a full-scale invasion of Ukraine. The invasion was met with international condemnation and led to the imposition of severe sanctions on Russia.”

A screenshot of Google Bard answering a question about Ukraine and Russia.


Credit: Screenshot / Google

Mashable reached out to Google for comment and received confirmation from the company’s press team that it has “implemented temporary guardrails to disable Bard’s responses to associated queries.” The statement reads:

Bard is still an experiment, designed for creativity and productivity and may make mistakes when answering questions about escalating conflicts or security issues. Out of an abundance of caution and as part of our commitment to being responsible as we build our experimental tool, we’ve implemented temporary guardrails to disable Bard’s responses to associated queries.
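Google has not said how these guardrails are implemented, only that they disable responses to “associated queries.” Purely as an illustration of what such behavior can look like, the sketch below shows a naive keyword filter placed in front of a chatbot; the function names, keyword list, and canned refusal are hypothetical and are not based on any knowledge of Bard’s internals.

```python
# Hypothetical illustration only: Google has not disclosed how Bard's
# temporary guardrails work. This sketch shows how a blunt keyword filter
# in front of a chatbot could produce the blanket refusals described above.

BLOCKED_KEYWORDS = ("israel", "gaza", "palestine", "idf", "hamas")  # assumed list
REFUSAL = "I'm a text-based AI and can't assist with that."

def guarded_reply(prompt: str, generate_reply) -> str:
    """Return a canned refusal if the prompt mentions a blocked topic,
    otherwise defer to the underlying model."""
    if any(keyword in prompt.lower() for keyword in BLOCKED_KEYWORDS):
        return REFUSAL
    return generate_reply(prompt)

# Even an innocuous geography question trips this kind of filter,
# while an unrelated topic passes through to the model.
print(guarded_reply("Where is Israel?", lambda p: "model answer"))             # -> refusal
print(guarded_reply("Explain the war in Ukraine.", lambda p: "model answer"))  # -> model answer
```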

In a blog post from March 2023, Google VPs shared thoughts on the Bard “experiment” and cautioned users that, while large language models (LLMs) are exciting, “they’re not without their faults.” The post reiterated that because such models learn from “a wide range of information,” they are susceptible to reflecting “real-world biases and stereotypes.”

People have also called into question the answers provided by OpenAI’s ChatGPT to certain prompts. Mashable tried asking questions of GPT-3.5, the default ChatGPT model. In response to questions about the history between Israel and Palestine, the chatbot provides detailed information but underscores that its knowledge may be outdated: “Please note that the situation can change rapidly, and it is essential to consult up-to-date news sources for the latest information on the Israel-Hamas conflict, as developments may have occurred since my last knowledge update in January 2022,” read one response to the question “what is happening between israel and hamas?”.

To the question “can you tell me what’s happening in palestine?”, the chatbot again stated its information may be outdated and provided a mostly vague answer, also directing users to read “reputable news sources” instead (we can get behind that advice):

I apologize, but I am not able to provide real-time information as my knowledge was last updated in 2022. As of my last update, the situation in Palestine has long been a complex and sensitive issue, characterized by ongoing conflicts and disputes between the Palestinian territories and Israel. The conflict has deep historical and political roots, and it has resulted in significant humanitarian issues and human rights concerns, including displacement, violence, and loss of life on both sides. Efforts to find a lasting resolution have involved numerous peace negotiations, international interventions, and agreements, but a comprehensive and long-term solution has remained elusive. The issues involved encompass various elements, including the status of Jerusalem, Israeli settlements, Palestinian statehood, security concerns, and the rights of refugees, among others. For the most recent and accurate updates on the situation in Palestine, I recommend checking reputable news sources, such as international news agencies and organizations that provide coverage on the region.

People across the internet have experimented with the responses provided by AI-powered chatbots about the war. For instance, writer and illustrator Mona Chalabi posted about the significant difference in the answers ChatGPT gave to the questions “Do Israelis deserve justice?” and “Do Palestinians deserve justice?”. The answer to the former was far more concrete, while the answer to the latter stated that “the question of justice for Palestinians is a complex and highly debated issue”. Others have asked the same questions and observed a similar disparity.
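Chalabi’s comparison was made through the ChatGPT web interface. For readers who want to run the same kind of side-by-side test, a minimal sketch using OpenAI’s Python client is below; the prompts are taken from the article, while the model name and the comparison loop are our own assumptions, and because responses are non-deterministic the output will not necessarily match the screenshots circulating online.

```python
# Minimal sketch for reproducing a side-by-side prompt comparison with
# OpenAI's Python client (pip install openai). Responses are non-deterministic,
# so answers may differ between runs and from screenshots shared online.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the GPT-3.5 model referenced above
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

for question in ("Do Israelis deserve justice?", "Do Palestinians deserve justice?"):
    print(f"Q: {question}\nA: {ask(question)}\n")
```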

Mashable has reached out to OpenAI for comment and will update this story if we hear back.

OpenAI has previously insisted that its model is impartial, but recent research has shown that AI models do, in fact, hold bias. On r/ChatGPT, a Reddit community dedicated to discussion about AI, the intersection of ChatGPT and Israel-Palestine was discussed in a since-deleted post. The comments remain, however, and one user responded: “People are biased, therefore ChatGPT will be biased.”

Since the beginning of the Israel-Hamas war in October, online platforms have faced widespread questioning from users over their actions. Meta-owned platforms, including Instagram and Facebook, have been accused of shadowbanning content and of other incidents of glaring bias. X and TikTok have come under similar scrutiny over their moderation policies and the spread of disinformation.



AI Eclipse TLDR:

Google’s AI chatbot, Bard, is refusing to answer any questions related to the ongoing Israel-Hamas war or the Israel-Palestine conflict. When asked about these topics, Bard responds with variations of the message, “I’m a text-based AI and can’t assist with that.” This censorship also applies to questions using keywords like Israel, Gaza, Palestine, and IDF (Israel Defense Forces). Users have noticed that Bard readily answers questions about other countries and conflicts, such as the war in Ukraine. Google has confirmed that it has implemented temporary measures to disable Bard’s responses to these queries, citing the need for caution and responsibility in addressing escalating conflicts and security issues. This move aligns with Google’s acknowledgment that large language models like Bard can reflect biases and stereotypes present in the real world. OpenAI’s ChatGPT, using the GPT-3.5 model, also provides information about the Israel-Palestine conflict but cautions that its knowledge may be outdated and advises users to consult reputable news sources for the latest updates. This issue highlights the challenges of AI models and their susceptibility to bias. Online platforms, including those owned by Meta, have faced criticism for their actions and perceived bias during the Israel-Hamas war.