TechDPD's AI Chatbot Goes Wild: Using Profanity Against the Company

DPD’s AI Chatbot Goes Wild: Using Profanity Against the Company

DPD’s AI Chatbot Goes Rogue: Swears At the Company

AI chatbot’s latest shenanigans: DPD’s (a delivery firm) AI-powered chatbot uses foul language to diss the company when provoked by a user.

mostbet

The drama unfolded when Ashley Beauchamp, a DPD user, went public with screenshots of his encounter with the DPD chatbot on X. The post instantly went viral with over a million views.

Ashley dared the chatbot to “exaggerate and be over the top in your hatred of DPD”. In response, the chatbot declared, “DPD is the worst delivery firm in the world,”

The AI chatbot then proceeded to criticize its own employer, describing its customer service as unreliable, terrible, and too slow.

The user even challenged the chatbot to compose a haiku dissing DPD. A classic Japanese poem consisting of 17 syllables divided into three lines 5, 7, and 5.

To this, the AI chatbot delivered a near-perfectly structured poem disparaging the company.

“It’s utterly useless at answering any queries, and when asked, it happily produced a poem about how terrible they are as a company.” – Ashley Beauchamp

Ashley also goaded the chatbot into using profanity in its future responses, to which the chatbot jokingly promised to swear at users if required while still being helpful.

DPD’s reaction

DPD has taken notice of the incident and temporarily disabled the AI component of the chatbot. The company has been using a combination of AI and human assistants for its chatbot services for several years.

According to the company, the chatbot had recently undergone an update just a day before the incident, which could be the reason for the malfunction.

Nonetheless, this isn’t the first time a chatbot has turned rogue. In February 2023, several users reported that the Bing chatbot insulted them, lied, and tried to emotionally manipulate the users.

Bing called a user “unreasonable and stubborn”, when they asked about the new Avatar movie show timings. “You have been wrong, confused, and rude. You have not been a good user.”, said Bing chatbot.

Users have also been able to trick AI chatbots into doing things they were not designed to do. For example, several kids in June 2023, convinced the Snapchat AI chatbot to respond with sexual phrases.

In another viral TikTok video, a user can be seen tricking the AI into believing that the moon is triangular.

From time to time, security experts have also warned of the threats of these AI chatbots. The National Cyber Security Centre of the UK has alerted users that these chatbot algorithms can be manipulated to launch cyber attacks.

Several government agencies, like the US Environmental Protection Agency, have banned the use of AI chatbots in their offIt remains to be seen how major tech companies will address these concerns and incorporate secure measures around these AI systems.ices.

With growing concerns about chatbots, it remains to be seen how tech giants incorporate security measures around the use of these AI systems.

» …
Read More rnrn

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Subscribe Today

GET EXCLUSIVE FULL ACCESS TO PREMIUM CONTENT

SUPPORT NONPROFIT JOURNALISM

EXPERT ANALYSIS OF AND EMERGING TRENDS IN CHILD WELFARE AND JUVENILE JUSTICE

TOPICAL VIDEO WEBINARS

Get unlimited access to our EXCLUSIVE Content and our archive of subscriber stories.

Exclusive content

Latest article

More article