Artificial intelligence (AI) has been the subject of widespread discussion and debate since the rise of OpenAI's chatbot ChatGPT and generative AI image makers like Midjourney and DALL-E 2. However, the emerging technology has also attracted criticism from some quarters.

According to a recent report from The New York Times, two Google employees attempted to stop the company's March launch of its own AI chatbot, a rival to OpenAI's ChatGPT. The employees, whose jobs involve reviewing Google's AI products, reportedly believed the technology generated "inaccurate and dangerous statements."

Similar concerns were raised by Microsoft employees and ethicists several months earlier, as the company planned to release an AI chatbot integrated into its Bing search engine. They warned of the degradation of critical thinking, the spread of disinformation, and the erosion of the "factual foundation of modern society."

Despite these concerns, both companies went ahead with their chatbot releases. Microsoft launched its Bing-integrated chatbot in February, and Google released its "Bard" chatbot in March, following OpenAI's release of ChatGPT in November 2022.

The ethical implications of AI chatbots and image generators have been widely debated. Midjourney, an AI-powered application that generates realistic images, discontinued its free trial to curb the spread of deepfakes. Around the same time, an Australian media executive called for the makers of ChatGPT and other AI systems to pay news outlets for the content their models consume.

More than 1,000 researchers and tech industry leaders, including Elon Musk, signed an open letter expressing concerns about the future of society and truth and calling for a pause in the development of the technology.

Governments around the world have also taken a cautious approach, with Italian officials temporarily blocking ChatGPT in the country and US President Joe Biden urging tech firms to address the risks posed by AI.
