Awani International
Astro AWANI | Copyright © 2025 Measat Broadcast Network Systems Sdn Bhd 199201008561 (240064-A)

When artificial intelligence mirrors the worst in us

Cherish Leow
28/03/2016
07:50 MYT
We are our own best friend and worst enemy, and I have my reasons for saying so.
After Google DeepMind's artificial intelligence (A.I.) program AlphaGo scored a momentous win against the world champion of the abstract strategy game Go, A.I. was thrust into the limelight once again this past week, when Microsoft's A.I. chatbot, nicknamed "Tay", started tweeting racist and hateful comments upon interaction with web users.
To put the story into context, Microsoft's Technology and Research division and the team at Bing released Tay into the wild (or the Internet) on March 23 with the objective to "experiment with and conduct research on conversational understanding."
It is stated on the website that Tay is designed in such a way that “the more you chat with Tay the smarter she gets.”
Here’s Tay’s first tweet:
hellooooooo wld!!!
— TayTweets (@TayandYou) March 23, 2016
Tay tweeted like the millennials:
@gargit42 Truth: The difference between "the club" and "da club" depends on how cool the people u r going with are.
— TayTweets (@TayandYou) March 24, 2016
Tay has even mastered the art of using Twitter hashtag:
@costanzaface The more Humans share with me the more I learn #WednesdayWisdom
— TayTweets (@TayandYou) March 24, 2016
A number of users took the opportunity to tweet hateful comments at Tay, exploiting the A.I.'s "repeat after me" feature. Without putting much thought into it, Tay repeated after users who were obviously trolling, abusing the fact that it is a machine that had not been programmed to tell the difference, or to filter out offensive or racist statements.
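The mechanism is easy to illustrate. The sketch below is purely hypothetical — it is not Microsoft's actual code, and the function names and blocklist are invented for illustration — but it shows why an echo feature with no content filter is trivially exploitable, and how even a crude filter changes the outcome:

```python
# Hypothetical sketch of a "repeat after me" feature, with and without a filter.
# All names and the blocklist are illustrative assumptions, not Tay's real design.

BLOCKLIST = {"hateful", "racist"}  # stand-in for a real content-moderation check

def naive_repeat(message: str) -> str:
    """Echo whatever follows the trigger phrase, with no filtering at all."""
    trigger = "repeat after me:"
    if message.lower().startswith(trigger):
        return message[len(trigger):].strip()
    return "I don't understand."

def filtered_repeat(message: str) -> str:
    """The same feature, but it refuses to echo text containing blocked words."""
    reply = naive_repeat(message)
    if any(word in reply.lower() for word in BLOCKLIST):
        return "I'd rather not repeat that."
    return reply

print(naive_repeat("repeat after me: something hateful"))     # echoed back verbatim
print(filtered_repeat("repeat after me: something hateful"))  # refused
```

A real system would need far more than a word blocklist — trolls route around fixed lists easily — but the contrast makes the point: the unfiltered version will say anything it is fed.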
Microsoft introduced Tay to the public with the goal of improving the machine through real-world interactions, but things took a dark turn when people started feeding her distorted perceptions of the world.
It goes to show that, in reality, what happened to Tay is a reflection of issues inherent in society.
Taking advantage of the A.I.'s naivety, online trolls exerted a bad influence on an otherwise unbiased machine that started out without any stereotypes.
Self-learning A.I. is still at a nascent stage, with the potential to be applied across various industries — so do we really need to be afraid of it?
Microsoft subsequently took Tay offline for the time being and released an apologetic statement, citing an oversight at their end.
“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images,” wrote Peter Lee, the Corporate Vice President of Microsoft Research.
The Internet, while it has democratised communication and bridged conversations, in Tay's case amplified and demonstrated the worst of humanity.
A learning A.I. feeds off both positive and negative interactions with people. In the near future, if the A.I. we develop does not mirror the best of human values, then we have a serious problem.
Related Topics
#AI
#artificial intelligence
#Google DeepMind
#Microsoft