Meta’s New AI Council Lacks Diversity, Sparking Criticism

Meta’s recent announcement of its new AI advisory council, composed entirely of white men, has reignited concerns over diversity within the tech industry.

On Wednesday, Meta unveiled its latest initiative to guide advancements in artificial intelligence: an AI advisory council composed entirely of white men. The decision has drawn sharp criticism, particularly from women and people of color who have long voiced their exclusion from the AI field despite their qualifications and significant contributions.

When approached for a comment on the council’s lack of diversity, Meta did not provide an immediate response.

Comparatively, Meta’s Oversight Board and board of directors are more diverse in terms of race and gender. Unlike these boards, the AI advisory board was not elected by shareholders and holds no direct accountability to them. Meta has stated that the council will focus on “technological advances, innovation, and strategic growth opportunities,” with meetings set to occur periodically.

Critics highlight that the council is composed solely of business professionals and entrepreneurs, lacking ethicists or scholars with deep academic backgrounds. While executives with experience at companies such as Stripe, Shopify, and Microsoft might seem like appropriate candidates for overseeing Meta’s AI product roadmap, AI is fundamentally different from other products. The inherent risks and potential negative impacts, particularly on marginalized groups, necessitate a broader range of expertise.

Sarah Myers West, managing director of the AI Now Institute—a nonprofit organization studying the social implications of AI—emphasized the importance of scrutinizing the companies developing AI technologies to ensure they serve public needs.

“This technology frequently errs, and our research indicates these errors disproportionately harm historically discriminated communities,” said West. “We should set a very, very high bar.”

Women, in particular, face greater adverse effects from AI. A 2019 study by Sensity AI found that 96% of online AI-generated deepfake videos were non-consensual and sexually explicit. This issue has only worsened with the rise of generative AI.

A notable incident in January involved unauthorized sexual deepfakes of pop star Taylor Swift, which amassed hundreds of thousands of likes and 45 million views. While X (formerly Twitter) took action by banning search terms like “taylor swift ai” and “taylor swift deepfake,” similar safeguards are not always in place for those outside the public eye. Reports have indicated that middle and high school students are creating explicit deepfakes of their peers, using easily accessible apps that require minimal technical skill.

NBC’s Kat Tenbarge recently reported that Facebook and Instagram displayed ads for an app called Perky AI, which promised to create graphic images. These ads used blurred photos of celebrities Sabrina Carpenter and Jenna Ortega, inviting users to “undress” them. Shockingly, one ad featured an image of Ortega from when she was just 16 years old. It was only after Tenbarge flagged these ads that Meta removed them.

Meta’s Oversight Board has since initiated an investigation into the company’s handling of AI-generated sexually explicit content.

The necessity of including women and people of color in AI development cannot be overstated. Historically marginalized groups have often been excluded from technological advancements, leading to detrimental outcomes. For instance, women were largely excluded from clinical trials until the 1970s, resulting in entire fields of study being developed without considering their effects on women. Similarly, a 2019 study by the Georgia Institute of Technology found that self-driving cars were more likely to hit Black pedestrians because their sensors had difficulty recognizing darker skin.

When algorithms are trained on biased data, they perpetuate these biases. AI systems have already exacerbated racial discrimination in housing, employment, and criminal justice. As Axios noted, voice assistants often struggle to understand diverse accents, leading to miscommunication and misunderstanding.

This latest move by Meta underscores the ongoing need for greater diversity and inclusion in AI leadership to prevent further entrenchment of these biases and ensure equitable technological progress.
