Ay, yi, yi.
With artificial intelligence becoming ubiquitous in every sector of life, privacy has become a growing concern among users, who are left wondering where the details they share with chatbots end up.
One woman, who recently used ChatGPT to make a grocery list, was shocked when the bot appeared to get its wires crossed, serving up a message she believes she was never meant to see.
"I'm having a really scary and disturbing moment with ChatGPT right now," TikTok user Liz, who goes by @wishmeluckliz, confessed in a viral video detailing the wild episode.
Liz claimed that "someone else's conversation" had slipped into her thread, and that the trendy AI tool even told her that's what had happened, though skeptics believe it could simply be a creepy coincidence.
The Post has reached out to ChatGPT's parent company, OpenAI, for comment.
According to the clip, the apparent cyber mix-up occurred while the content creator was using voice mode, which lets users speak to the service, to help plan her grocery shopping.
However, after rattling off her list of items, Liz forgot to turn off the recorder and let it keep running, even though she remained silent for a "long time," per the clip.
Despite the lack of input, the conversational service responded with a seemingly unrelated message that was so unnerving that Liz had to double-check the transcript to make sure she wasn't imagining it.
The message read, per a screenshot: "Hello Lindsey and Robert, it sounds like you're putting together a presentation or a symposium. Is there something specific you'd like help with regarding the content, or perhaps help structuring your talks or slides? Let me know how I can assist."
Liz found the reply bizarre, given that she had "never said anything that would lead to this."
After pulling up the transcript, she realized the bot had somehow recorded her, despite her silence, as saying she was a woman named Lindsey May, a self-described Google vice president who was putting on a symposium with a man named Robert.
Confused, she raised the issue with ChatGPT in voice mode, saying: "I was just accidentally sitting here planning groceries, and you asked if Lindsey and Robert needed help with their symposium. I'm not Lindsey and Robert."
The bot replied: "It sounds like I mistakenly mixed up context from another conversation or account. You're not Lindsey and Robert, and that message was meant for someone else."
"Thank you for flagging this, and I apologize for the confusion," it added, seemingly admitting to having revealed someone else's private information.
Shocked by the apparent admission, Liz said she hoped she was "overreacting and that there's a simple explanation for this."
While some TikTok viewers shared her concern over a possible privacy breach, tech experts believe the bot could have been hallucinating based on patterns in its training data, which is partly drawn from user input.
"This is spooky, but not unheard of," assured one expert and programmer. "When you leave voice mode on but don't speak, the model still tries to extract language from the audio, and in the absence of actual speech, it will hallucinate."
They added, "It also isn't crossing wires; rather, it's prone to hallucinating agreement, so when you suggested that wires had been crossed, it went along with you in an attempt to 'successfully answer your question.'"
On Reddit, users have documented many cases in which the bot responded unprompted. "Why does it keep transcribing 'Thanks for watching!' when I use the voice recorder but am not saying anything?" asked one.
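For the technically curious, this quirk is easy to reproduce with the open-source Whisper speech-to-text model, offered here purely as a stand-in, since ChatGPT's actual voice pipeline is not public. The minimal sketch below feeds the model a near-silent clip; the output is often empty, but it can surface hallucinated stock phrases such as "Thanks for watching!"

# Illustrative sketch only: assumes the open-source "openai-whisper" package,
# not ChatGPT's production voice stack.
import numpy as np
import whisper

model = whisper.load_model("base")

# One minute of near-silence (a faint noise floor) at Whisper's 16 kHz rate.
sample_rate = 16_000
audio = (np.random.randn(60 * sample_rate) * 1e-4).astype(np.float32)

result = model.transcribe(audio, fp16=False)
# Frequently empty, but occasionally a hallucinated stock phrase appears.
print(repr(result["text"]))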
While seemingly harmless in these cases, chatbot hallucinations can feed people dangerous misinformation.
Google's AI Overviews, designed to provide quick answers to search queries, have been guilty of numerous slip-ups, including one instance in which the tool advised adding glue to pizza sauce to help the cheese stick better.
In another case, the bot passed off a made-up phrase, "You can't lick a badger twice," as a legitimate idiom.