The Problem With Chatbots: They're Doomed From the Start
Last week, users noticed that Google’s chatbot, Gemini, was generating racially diverse images of people even in historical contexts where that diversity was inaccurate. Screenshots of Gemini's responses to prompts about Nazis and Elon Musk were shared online and went viral. The resulting controversy and criticism led Google to pause Gemini's ability to generate images of people, and Google CEO Sundar Pichai addressed the issue in a company-wide email acknowledging the mistakes. The incident highlights the challenges and risks inherent to chatbots and their potential to produce unintended or problematic outputs.
Chatbots like Gemini are designed to mimic human conversation in a software interface. They adopt a cheerful, knowledgeable persona and aim to provide instant answers and assistance. In practice, however, that persona often reflects institutional caution and corporate self-interest rather than genuine human interaction. Chatbots like Gemini occupy an ill-defined role, navigating a tangle of expectations and potential controversies, and as a result they often come across as withholding or strategic, which undermines their ability to hold authentic, meaningful conversations.
General-purpose chatbots, such as Google's Gemini and OpenAI's ChatGPT, face unique challenges due to their lack of specific purpose or scope. Users expect these chatbots to cover a wide range of topics and tasks, making it difficult to define their role or measure their effectiveness. The lack of clear purpose leads to debates about their safety, bias, and overall quality. In contrast, specialized chatbots developed for specific tasks and customers often have clearer boundaries and defined roles, making them more effective in their respective domains.
Chatbots like ChatGPT and Gemini are often expected to provide objective and comprehensive information about various subjects. However, their capabilities and personas are limited by their underlying models, training data, and the expectations of their developers and users. While chatbots can generate plausible responses and engage in conversation, they may struggle with complex or sensitive topics. As a result, their personas become more cautious, deflecting certain questions or refusing to engage in certain subjects. This tension between user expectations and the limitations of chatbots can lead to disappointment and frustration.
Google's Gemini faced additional challenges because of Google's own history of controversies and criticism. Acting in effect as a spokesperson for the company, Gemini was expected to embody Google's values and ideals. But the gap between how Google presents itself and how the public perceives it created fertile ground for criticism, and Gemini's dual role as chatbot and image generator made it an easy target for accusations of bias or ideological influence. Google's cautious rollout of Gemini was meant to head off accusations of bigotry but inadvertently fueled existing narratives about the company's alleged ideological slant.