Google's AI LaMDA isn't sentient, but has racial and gender biases

While a sentient AI is a thoroughly freaky concept, it's not (yet) a reality. But a racist and sexist AI? Unfortunately, very much a reality.

In a recent interview with Wired, engineer and mystic Christian priest Blake Lemoine discussed why he believes that Google's large language model named LaMDA has become sentient, complete with a soul. While that claim has been refuted by many in the artificial intelligence community and has resulted in Lemoine being placed on paid administrative leave by Google, Lemoine also explained how he began working on LaMDA.

His journey with the AI started with a much more real-world problem: examining the model for harmful biases in relation to sexual orientation, gender identity, ethnicity, and religion.



"I do not believe there exists such a thing as an unbiased system," said Lemoine to Wired."The question was whether or not [LaMDA] had any of the harmful biases that we wanted to eliminate. The short answer is yes, I found plenty."

Lemoine also explained that the Google team has done a good job repairing these biased "bugs," as far as he could tell. When asked whether LaMDA showed racist or sexist tendencies, Lemoine answered carefully, stating that he "wouldn't use that term." Instead, he claims "the real question is whether or not the stereotypes it uses would be endorsed by the people that [LaMDA is] talking about."

SEE ALSO: Amazon used AI to promote diversity. Too bad it’s plagued with gender bias.

Lemoine's hesitancy to label LaMDA's "bugs" as outright racist or sexist highlights an ongoing battle within the AI community, where many have spoken out about the harmful stereotypes that AI systems often perpetuate. But when those who do speak out about these issues are largely Black women, and those women are subsequently fired from companies like Google, many feel that it falls on men in tech like Lemoine to keep calling attention to AI's current bias problems, rather than diverting researchers' and the public's attention with claims of AI sentience.

“I don't want to talk about sentient robots, because at all ends of the spectrum there are humans harming other humans, and that’s where I’d like the conversation to be focused,” said former Google Ethical AI team co-lead Timnit Gebru to Wired.


Artificial intelligence has a long history of harmful stereotypes, and Google is neither new to nor unaware of these issues.

In 2015, Jacky Alciné tweeted that Google Photos had tagged 80 photos of a Black man into an album titled "gorillas." Google Photos made that categorization using a neural network, which analyzed enormous sets of data in order to sort subjects like people and gorillas; in this case, it clearly sorted them incorrectly.

It was the responsibility of Google engineers to ensure that the data used to train its AI photo system was accurate and diverse, and when the system failed, it was their responsibility to rectify the issue. According to the New York Times, though, Google's response was to eliminate "gorilla" as a photo category rather than retrain its neural network.

Companies like Microsoft, IBM, and Amazon face the same biased-AI issues. At each of these companies, the AI that powers facial recognition technology shows significantly higher error rates when identifying the sex of women with darker skin tones than when identifying people with lighter skin, as reported by the Times.

SEE ALSO: Meet the designer who makes high-tech nail art and fights facial recognition with flowers

In 2020, Gebru published a paper with six other researchers, four of whom also worked at Google, criticizing large language models like LaMDA and their propensity to parrot words from the datasets they learn from: if those datasets contain biased language or racist and sexist stereotypes, then AIs like LaMDA will repeat those biases when generating language. Gebru also criticized the practice of training language models on ever-larger datasets, which lets the AI mimic language ever more convincingly and persuades audiences that it represents real progress, or even sentience, a trap Lemoine fell into.



After a dispute over this paper, Gebru says Google fired her in December 2020 (the company maintains she resigned). A few months later, Google also fired Dr. Margaret Mitchell, founder of the ethical AI team, a co-author of the paper, and defender of Gebru.

Despite a supposed commitment to "responsible AI," Google still faces ethical AI problems that leave no room for sentient AI claims

After the drama, and the admitted hit to its reputation, Google promised to double its responsible AI research staff to 200 people. According to Recode, CEO Sundar Pichai also pledged his support to fund more ethical AI projects. And yet, the small group of people still on Google's ethical AI team feels that the company might no longer listen to its ideas.

After Gebru and Mitchell's departures, two more prominent ethical AI team members left a year later: Alex Hanna and Dylan Baker quit Google to work for Gebru's research institute, DAIR, or Distributed Artificial Intelligence Research. The already small team grew even smaller, which perhaps explains why Lemoine, who is not on the ethical AI team, was asked to step in and research LaMDA's biases in the first place.

As more and more societal functions turn to AI systems, it's more important than ever to keep examining how AI's underpinnings affect its outputs. In an already often racist and sexist society, we cannot afford to have our police systems, transportation methods, translation services, and more rely on technology that has racism and sexism built into its foundations. And, as Gebru points out, when (predominantly) white men in technology choose to focus on issues like AI sentience rather than these existing biases, especially when addressing those biases was their original purpose, as it was for Lemoine's work on LaMDA, the biases will continue to proliferate, hidden away under the hullabaloo of robot sentience.

“Quite a large gap exists between the current narrative of AI and what it can actually do,” said Giada Pistilli, an ethicist at Hugging Face, to Wired. “This narrative provokes fear, amazement, and excitement simultaneously, but it is mainly based on lies to sell products and take advantage of the hype.”

Topics: Artificial Intelligence, Facial Recognition, Google
