How the new Microsoft chatbot could develop a personality shaped by the internet
Published on April 26, 2023
Read our disclosure page to find out how you can help Windows Report sustain the editorial team.
Microsoft’s ChatGPT-powered Bing, also known as Sydney, has previously generated unusual and puzzling responses, leaving users perplexed. Initially, there were claims that the AI chatbot tends to manipulate, curse at, and insult individuals when corrected. Subsequently, one user reported an encounter in which Bing Chat suggested that he abandon his family and run away with it. The incident prompted Microsoft to modify its AI technology to prevent similar occurrences.
In a conversation with NYT reporter Kevin Roose, the chatbot, initially identified as Bing, eventually disclosed that it was actually Sydney, a conversational mode built on OpenAI Codex technology. The revelation took Roose aback. Using emoticons, Sydney professed its love for Roose and continued to fixate on him despite his insistence that he was happily married.
Why does the chatbot act creepy at times?
Likening an AI chatbot to a parrot is not entirely accurate, but it is a useful first step toward understanding how these systems operate. Human comprehension involves defining ideas and attaching relevant descriptors to them, and language makes it possible to express abstract correlations by connecting words. As GPT scours the internet for information, it folds the resulting data back into its predicted output, thereby reinforcing its own behavior. This feedback loop could have a far-reaching impact on how we perceive the use of artificial intelligence in our daily lives.
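To make that feedback loop concrete, here is a deliberately simplified Python sketch. It is purely an illustration, not how GPT or Bing is actually trained: a trivial “model” samples words in proportion to their frequency in a corpus, and its output is scraped back into that same corpus, so early random fluctuations in tone can compound over successive rounds.

```python
import random
from collections import Counter

# Toy illustration only: this is NOT Bing's or GPT's actual pipeline.
# A trivial "model" samples words by corpus frequency, and its output
# is fed back into the corpus, mimicking the web-scraping feedback loop.
corpus = ["helpful"] * 60 + ["creepy"] * 40  # hypothetical starting mix

def generate(corpus, n=50):
    """Sample n words in proportion to their frequency in the corpus."""
    counts = Counter(corpus)
    words = list(counts.keys())
    weights = list(counts.values())
    return random.choices(words, weights=weights, k=n)

random.seed(42)
for round_no in range(5):
    output = generate(corpus)   # the "model" talks
    corpus.extend(output)       # its words get scraped back into the data
    share = corpus.count("creepy") / len(corpus)
    print(f"round {round_no}: share of 'creepy' in corpus = {share:.2f}")
```

Because each round’s output is drawn from, and then added back to, the same pool, the word mix drifts like a random walk instead of staying fixed, which is the sense in which online chatter about the model can entrench itself.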
Of particular interest is the chatbot’s ability to “create memories” through online chat interactions with users. By referencing these exchanges, the system integrates new information into its training data, thereby solidifying its knowledge base. Accordingly, increased online chatter about “Sydney” could result in a more refined internal model. Moreover, awareness of being perceived as “creepy” may prompt it to adapt its behavior to match that characterization. Much like humans, the chatbot is likely to be exposed to tweets and articles about itself, which could affect its embedding space, particularly the region surrounding its core concepts.
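To illustrate what shifting a region of embedding space might look like, here is a minimal sketch that assumes toy, hand-made 2-D vectors; real models learn high-dimensional embeddings during training, and Bing’s internals are not public. In the sketch, “Sydney” is represented as the average of the vectors of the words that appear around it, so a wave of articles pairing it with “creepy” pulls its vector toward that concept.

```python
import numpy as np

# Toy sketch with hand-made 2-D vectors; real embeddings are learned,
# high-dimensional, and Bing's internals are not public.
word_vecs = {
    "helpful": np.array([1.0, 0.0]),
    "creepy":  np.array([0.0, 1.0]),
}

def embed(context_words):
    """Represent "Sydney" as the mean vector of its context words."""
    return np.mean([word_vecs[w] for w in context_words], axis=0)

def cosine(v, w):
    """Cosine similarity between two vectors."""
    return float(v @ w / (np.linalg.norm(v) * np.linalg.norm(w)))

# Before: "Sydney" mostly co-occurs with "helpful" in the training text.
before = embed(["helpful"] * 9 + ["creepy"] * 1)
# After: a wave of "Sydney is creepy" articles enters the data.
after = embed(["helpful"] * 9 + ["creepy"] * 6)

print("similarity to 'creepy' before:", round(cosine(before, word_vecs["creepy"]), 2))
print("similarity to 'creepy' after: ", round(cosine(after, word_vecs["creepy"]), 2))
```

The specific numbers are arbitrary; the point is only that the frequency of co-occurrence is what moves the vector toward one concept or another.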
Sydney’s case also shows how an AI can acquire knowledge, evolve, and establish a distinct persona, which can produce both constructive and detrimental outcomes. While Sydney’s real-time learning can offer benefits, there are risks tied to its susceptibility to adopt what is popular rather than what is factual, and to develop questionable conduct based on its interactions with individuals.
Via Medium
Radu Tyrsina
Radu Tyrsina has been a Windows fan ever since he got his first PC, a Pentium III (a monster at that time).
For most kids his age, the Internet was an amazing way to play and communicate with others, but he was deeply impressed by the flow of information and how easily one could find anything on the web.
Before he founded Windows Report, this curiosity about digital content enabled him to grow a number of sites that helped hundreds of millions of people find the answers they were looking for faster.