Microsoft Scraps Entire Ethical AI Team Amid AI Boom


Microsoft is shoehorning text-generating artificial intelligence into every product it can. And starting this month, the company will continue its AI rampage without a team dedicated to internally ensuring those AI features meet Microsoft’s ethical standards, according to a Monday night report from Platformer.

Microsoft has scrapped the entire Ethics and Society team within its AI organization as part of ongoing layoffs set to affect 10,000 employees, per Platformer. The company maintains its Office of Responsible AI, which creates the broad, Microsoft-wide principles that govern corporate AI decision making. But the Ethics and Society team, which bridged the gap between those policies and actual products, is reportedly no more.

Gizmodo reached out to Microsoft to confirm the news, but did not immediately hear back. In a statement to Platformer, the company wrote:

Microsoft is committed to developing AI products and experiences safely and responsibly…Over the past six years we have increased the number of people across our product teams within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice…We appreciate the trailblazing work the ethics and society team did to help us on our ongoing responsible AI journey.

Yet, despite Microsoft’s reassurances, former employees told Platformer that the Ethics and Society team played a key role in translating big ideas from the responsibility office into actionable changes at the product development level.

From the outlet:

“People would look at the principles coming out of the office of responsible AI and say, ‘I don’t know how this applies,’” one former employee says. “Our job was to show them and to create rules in areas where there were none.”

By the time it was eliminated, the Ethics and Society team reportedly consisted of just seven employees. It had once been much larger: at its peak, the group included 30 workers from across disciplines, including philosophers. The team was whittled down to seven in an October 2022 reorganization, in which most of its members were not laid off but rather shifted elsewhere within the company.

The company’s VP of AI told the remaining Ethics and Society workers that their team wasn’t “going away” but rather “evolving,” per Platformer. Just about five months later, though, the team is gone. Employees were reportedly notified in a Zoom meeting on March 6.

It’s a concerning corporate choice given that Microsoft (along with Google and just about every other tech company under the sun) is pushing to rapidly expand its artificial intelligence tech and wedge it into every product category it can. Last month, the company announced it was going to cram its Bing AI into word processing and maybe even Outlook email.

Meanwhile, in the rush to bring its ChatGPT-powered AI text generator to the search market, Microsoft has already made some alarming flubs. The company’s first demos of its chatbot were full of financial mistakes and flawed advice (Google, too, fucked up). Then there was the unsettling “personality” of Bing’s alter ego Sydney, which the company had to scrap after it freaked users out.

Granted, while the Ethics and Society team was still around, Microsoft execs might not have been listening to its concerns much anyway. Sure, the company eventually backtracked on its AI-powered emotion recognition tech. But former employees told Platformer that Microsoft partially ignored a memo expressing concerns about the company’s Bing Image Creator (not yet available in the U.S.) and the ways it could steal from and harm artists.

The Ethics and Society team reportedly offered a list of mitigation strategies, including blocking users from entering the names of living artists as prompts and creating a marketplace to support artists whose work was surfaced in search. Neither suggestion was incorporated into the AI tool, sources told Platformer. The concerns Ethics and Society highlighted have certainly been borne out, though: multiple U.S. lawsuits against other companies peddling AI image generation have emerged in recent months.
