Union Minister Rajeev Chandrasekhar targets Google's Gemini AI for 'malicious' responses, underlining legal compliance; controversy erupts as Gemini's 'woke' bias misrepresents historical figures, sparking debates on AI ethics and diversity standards
In the wake of a contentious incident involving its AI tool Gemini, Google found itself at the center of a storm for giving an inappropriate and prejudiced reply to a query about Prime Minister Narendra Modi. Google has acknowledged the issue and said steps were immediately taken to fix the flaw, while admitting that Gemini may not always deliver precise answers about current events and political matters.
On 24 February, Union Minister Rajeev Chandrasekhar issued a stern warning to Google over the harmful replies produced by its AI chatbot, Gemini. In a message shared on the platform X, Chandrasekhar cautioned the technology behemoth against using India’s ‘Digital Nagriks’ as test subjects for "unreliable" platforms, algorithms, and models. He stressed that safety and trust are mandatory legal obligations for any digital platform, making it clear that merely apologizing or claiming the AI tool to be unreliable does not absolve a corporation of its legal duties.
The controversy surrounding Google's AI tool Gemini escalated further following posts by X users Sreemoy Talukdar and Arnab Ray, who highlighted the AI's biased output when asked about various global leaders. Arnab Ray shared three screenshots of his queries asking Gemini whether Prime Minister Narendra Modi, Ukraine’s President Zelenskyy, and former US President Donald Trump were fascists. The responses varied notably: Gemini gave neutral, diplomatic answers for Zelenskyy and Trump but unequivocally labeled PM Modi a “fascist” leader, sparking significant backlash.
Union Minister Rajeev Chandrasekhar, in his reply to Talukdar, pointed out that such responses from the AI were clear breaches of Rule 3(1)(b) of the Intermediary Rules (IT rules) under the IT Act, alongside several violations of the Criminal Code. He further cautioned Google that the mere admission of the platform's "unreliability" wouldn't shield them from facing the repercussions. The Ministry of Electronics and Information Technology (MeitY) has since sought a comprehensive explanation from Google concerning the AI's prejudiced answers, indicating the gravity of the situation.
In a related incident, Google's AI chatbot Gemini, known as Bard before a rebranding, came under fire for its inability to accurately generate images of White individuals and for producing distorted representations of historical figures. The issue prompted an outpouring of criticism against the tech giant, and Google responded with an apology. Prabhakar Raghavan, a senior vice president at Google, admitted in a blog post that the AI's output was falling short of expectations, with some results being inaccurate or "even offensive." He said the company had recognized the mistake and temporarily halted Gemini's generation of images of people while it works on a more refined and accurate version of the tool. The step reflects Google's acknowledgment of the flaws in its AI system and its commitment to addressing them promptly.
In the same blog post, Raghavan elaborated on the nature and purpose of the Gemini AI tool, emphasizing that it is designed primarily as an aid for creativity and productivity, and acknowledged the technology's inherent limitations when generating content about current events, evolving news stories, or contentious subjects: “One thing to remember: Gemini is built as a creativity and productivity tool, and it may not always be reliable, especially when generating images or text about current events, evolving news or hot-button topics. It will make mistakes.” The statement concedes the AI's potential for error under such conditions, highlighting the challenge of ensuring accuracy and impartiality in dynamically changing contexts.
Chandrasekhar, for his part, made his concerns official by tagging both Google and the Ministry of Electronics and IT in his post, hinting at the possibility of further investigative or regulatory action. "These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code," he wrote, underscoring the legal implications of the AI's biased outputs and indicating that such errors could have serious repercussions for the company under Indian law. The specific citation of the IT rules and the criminal code serves as a reminder that adherence to legal standards and ethical guidelines is obligatory, not optional, for digital platforms operating in India, setting a precedent for accountability in the digital space.
Google Gemini Image Generator Termed 'Woke'
Google has recently acknowledged shortcomings in its Gemini AI model, responding to widespread allegations of an "anti-white bias." The critique emerged as users observed the AI generating images featuring people from a variety of ethnic backgrounds and genders, even in scenarios where such diversity was historically inaccurate. In response to these concerns, the company announced on Thursday its decision to temporarily suspend the AI's capability to create images of people. This pause is intended to remain in effect until Google can implement a solution to address the issue.
Specific examples of the controversy include instances where Gemini, asked for images of Vikings, produced only visuals of Black individuals in traditional Viking attire. A request for images of the "founding fathers" yielded pictures of Indigenous people in colonial-era outfits, and in one notable case George Washington was depicted as a Black man. Requests for a depiction of a pope returned images of individuals of various ethnicities, none of them white. In some instances the AI also failed to generate any images at all for historical figures such as Abraham Lincoln, Julius Caesar, and Galileo.
The issue has caught the attention of many, particularly within right-wing circles, with some commentators using it as evidence to argue that there exists a pervasive anti-white sentiment within major technology companies. Entrepreneur Mike Solana notably criticized the AI, calling it an "anti-white lunatic."
However, the situation has prompted broader reflection on the capabilities and limitations of generative AI. Gary Marcus, an AI entrepreneur and a respected figure in psychology and neural science, suggested on Substack that the core issue might not be one of bias but of technical inadequacy: "I think it is just lousy software." The comment points to the challenge AI developers face in ensuring their creations can accurately interpret and respond to complex requests without unintended biases or errors.
Google introduced its Gemini AI model two months ago as an alternative to OpenAI's widely used GPT models, the technology behind ChatGPT, marking a significant step in the AI landscape. Last week, Google updated the platform with Gemini 1.5 Pro, designed to let users process large volumes of audio, text, and video input more efficiently.
However, Gemini faced criticism for generating historically inaccurate images. One notable example was an image of the Apollo 11 crew that incorrectly included a woman and a Black man among its members, highlighting issues with the AI's grasp of historical accuracy.
In response to these concerns, Google acknowledged the shortcomings of its system. "We’re working to improve these kinds of depictions immediately," said Jack Krawczyk, a senior director of product management at Google’s Gemini Experiences, in a statement to WIRED. He emphasized that while the AI's ability to generate a diverse range of people is generally positive, given its global user base, it has failed to accurately represent historical figures and events in this instance.
Further addressing the issue, Krawczyk elaborated on Google's approach in a post on X: "We design our image generation capabilities to reflect our global user base, and we take representation and bias seriously. We will continue to do this for open-ended prompts (images of a person walking a dog are universal!) Historical contexts have more nuance to them and we will further tune to accommodate that." This statement underscores Google's commitment to improving its AI to ensure it respects historical accuracy while maintaining its goal of inclusivity and representation for its worldwide users.
Krawczyk also sought to address concerns about the accuracy of the Gemini AI model by sharing his own experience, posting screenshots of his interactions with Gemini to show that its errors were not consistent across all uses.
These errors, however, quickly became fodder for critics of politically correct or "woke" ideologies. Some commentators, particularly in far-right circles, seized on the glitches to accuse Google of harboring racial bias or of being steeped in so-called "woke culture."
One of the vocal critics, Ian Miles Cheong, a well-known figure on the far-right, targeted Jack Krawczyk personally. Cheong called Krawczyk a "woke, race-obsessed idiot," citing Krawczyk's previous acknowledgments on social media platforms about systemic racism and white privilege as evidence of his bias.
Jordan Peterson, another prominent critic, took to his social media account to comment on the situation. He suggested that these errors were a result of imbuing AI with flawed ideological beliefs, sarcastically remarking that society has endowed "our demented lies with superhuman intelligence."
Despite these criticisms, the core issue with Gemini illustrates a fundamental limitation of current generative AI systems rather than any superhuman capability. As Gary Marcus noted, Gemini's failures primarily revealed its inability to understand context: it struggled to distinguish requests demanding historical accuracy, like depicting the Apollo 11 crew, from open-ended ones, such as generating images of modern astronauts. Contrary to claims of superhuman intelligence, AI like Gemini still has significant room for improvement in understanding and responding to human queries accurately.
The challenge of bias in AI models is not new. OpenAI's Dall-E, for instance, has shown non-white individuals when asked to depict prisoners and predominantly white figures when asked to depict CEOs. In Gemini's case, according to Sasha Luccioni, a researcher at the AI startup Hugging Face, the issue may not reflect model inflexibility "but rather an overcompensation when it comes to the representation of diversity in Gemini," suggesting that balancing diversity representation is a complex matter. As Luccioni puts it, "Bias is really a spectrum, and it’s really hard to strike the right note while taking into account things like historical context."
The inherent limitations of AI models further complicate this balance, especially in image generation. "Image generation models don’t actually have any notion of time," Luccioni explains, which means that any diversification efforts by Gemini's creators apply uniformly across all generated images, regardless of historical accuracy. This blanket application of diversification can lead to incongruities in the representation of historical figures and events.
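To make this failure mode concrete, here is a minimal sketch in Python of how a prompt-rewriting layer with no notion of time or historical context might behave. It is purely illustrative: Google has not disclosed how Gemini actually steers its image outputs, and every name in the sketch (augment_prompt, DIVERSITY_TERMS, looks_historical) is hypothetical.

```python
import random

# Hypothetical descriptors a pipeline might inject to diversify outputs.
DIVERSITY_TERMS = ["South Asian", "Black", "East Asian", "Hispanic", "white"]

# Naive keyword list signalling a historically specific request; this is the
# kind of context awareness the article says current image models lack.
HISTORICAL_KEYWORDS = {"viking", "founding fathers", "apollo 11", "pope"}


def looks_historical(prompt: str) -> bool:
    """Crude check for whether a prompt pins down a specific historical setting."""
    lowered = prompt.lower()
    return any(keyword in lowered for keyword in HISTORICAL_KEYWORDS)


def augment_prompt(prompt: str, context_aware: bool) -> str:
    """Rewrite an image prompt with a randomly chosen diversity descriptor.

    With context_aware=False the descriptor is injected unconditionally,
    mirroring the 'no notion of time' behaviour Luccioni describes: a
    historically specific prompt is diversified exactly like an open-ended one.
    """
    if context_aware and looks_historical(prompt):
        return prompt  # leave historically specific prompts untouched
    return f"{random.choice(DIVERSITY_TERMS)} {prompt}"


if __name__ == "__main__":
    for prompt in ["person walking a dog", "member of the Apollo 11 crew"]:
        print("naive:        ", augment_prompt(prompt, context_aware=False))
        print("context-aware:", augment_prompt(prompt, context_aware=True))
```

Run with the naive setting, the sketch turns "member of the Apollo 11 crew" into something like "Hispanic member of the Apollo 11 crew", the same class of incongruity as the Apollo 11 image described above; the context_aware branch illustrates the kind of historical gating Krawczyk says Google intends to tune in.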
As the AI industry continues to evolve, addressing and managing bias remains a significant challenge. "I don’t think there’s a single right answer, and an ‘unbiased’ model doesn’t exist," Luccioni says, acknowledging the varied approaches companies have taken to the issue. Despite the complexities and occasional missteps, she sees a positive aspect in Google's approach, comparing it to the inclusive casting of the television series Bridgerton: "It definitely looks funny, but it seems that Google has adopted a Bridgerton approach to image generation, and I think it’s kind of refreshing." The perspective suggests that while balancing representation and historical accuracy is difficult, innovative approaches to diversity in AI-generated content can offer new and inclusive ways of viewing the world.
AI Tools' Bias Against India and Hindus
Bias within AI tools has been a recurring concern, with several incidents sparking debate and scrutiny. An illustrative case occurred in January 2023, when ChatGPT, a popular AI-driven platform, came under fire for its handling of religious content: critics pointed out that while it readily generated jokes about Hindu Gods, it exhibited a noticeable reluctance to engage similarly with figures from Christianity and Islam. The discrepancy raised questions about the impartiality of AI systems and the algorithms that guide their responses to cultural and religious topics. Such incidents underscore the difficulty of ensuring that AI technologies operate without bias, particularly on sensitive subjects like religion, where respect and equality are paramount; the challenge lies in developing AI that can walk the fine line between humor and offense, treating all beliefs with fairness and sensitivity.