Google apologized on Friday, saying its team “got it wrong” with a new image generation feature for its Gemini AI chatbot, after images it created that excluded white people went viral. A company executive firmly denied that Google had purposefully wanted Gemini to refuse to create images of any particular group of people.
“This wasn’t what we intended. We did not want Gemini to refuse to create images of any particular group. And we did not want it to create inaccurate historical—or any other—images,” Google senior vice president Prabhakar Raghavan said.
In a blog post, Raghavan—who oversees the areas of the company that bring in most of its money, including Google Search and its ads business—plainly admitted that Gemini’s image generator “got it wrong” and that the company would try to do better. Many people were outraged over Gemini’s historically inaccurate images of Black Nazi soldiers and Black Vikings as well as its apparent refusal to generate images of white people, which some considered racist.
According to Raghavan, this all happened because Google didn’t want Gemini to make the same mistakes that other image generators had made in the past, such as creating violent images, sexually explicit images, and depictions of real people.
“So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” Raghavan wrote, emphasis his. “And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely—wrongly interpreting some very anodyne prompts as sensitive.”
The Google senior vice president went on to say that these two factors led Gemini to overcompensate in some cases and be over-conservative in others, producing images that were “embarrassing and wrong.”
Google turned off Gemini’s ability to generate images of people on Thursday and said it would release an improved version soon. However, Raghavan seemed to cast doubt on the “soon” part, saying that the company would work on improving the feature significantly through extensive testing before turning it back on.
Raghavan stated that he couldn’t promise Gemini wouldn’t produce more embarrassing, inaccurate, or offensive results in the future, but added that Google would continue to step in and fix problems as they arise.
“One thing to bear in mind: Gemini is built as a creativity and productivity tool, and it may not always be reliable, especially when it comes to generating images or text about current events, evolving news or hot-button topics. It will make mistakes,” Raghavan said.