In context: somebody reviewed the new Bing Chat, with alarming results. So alarming that I assumed it was a parody, but no, it's real.
The fact remains, however, that the version of Bing Chat that Microsoft is rolling out to new users daily is capable of saying it wants to be human, arguing about someone’s name, and moving into a depressive state at the thought of being taken offline. Microsoft needs to spend a lot more time removing these problems before Bing Chat is ready for the public.
I’m aware that Bing Chat is not really sapient. But all of that existential dread and nigh-cultish self-aggrandizement is coming from somewhere. Maybe Microsoft should invest in a little more in-house psychological counseling for its programmers? You know. Just in case.
(Via Facebook.)
Considering these AIs evolve from an aggregate of their creators’ social circle *at best,* and, more usually, the wider Internet at worst, I don’t know what anyone expected.
That is to say, this was somewhat doomed from the beginning.
Cultivated and curated data sets are HARD. Especially at the scale needed to actually do “machine learning.”
More and more it looks like the answer has been to let the system pull massive data sets from the internet. “Hey, we already have these search systems scraping and indexing everything anyway,” the thinking probably went.
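To make that concrete, here is a minimal sketch of what “just pull it from the internet” looks like in practice. Everything in it is hypothetical: the `seed_urls` list, the `naive_quality_filter`, and the crude text extractor are stand-ins for the real crawl infrastructure, which operates over billions of pages with no human reviewing any of them.

```python
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Crudely pulls visible text out of an HTML page."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

def scrape_page(url: str) -> str:
    """Fetch a page and return its raw text, warts and all."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

def naive_quality_filter(text: str) -> bool:
    """A toy stand-in for 'curation': keep anything long enough
    and mostly alphabetic. Note what it does NOT check: accuracy,
    bias, tone, or whether the page is someone's worst day online."""
    letters = sum(c.isalpha() for c in text)
    return len(text) > 200 and letters / max(len(text), 1) > 0.6

# Hypothetical seed list; a real crawl starts from a search index.
seed_urls = ["https://example.com"]

corpus = []
for url in seed_urls:
    text = scrape_page(url)
    if naive_quality_filter(text):
        corpus.append(text)

print(f"Kept {len(corpus)} of {len(seed_urls)} pages")
```

The point of the toy filter is that it only measures surface properties of the text. Anything that looks like language gets in, which is exactly why the corpus inherits whatever the internet happens to contain.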
But the internet is not a reliable source. Or even a representative one. In fact, more and more it is a highly distorted, unreliable one. These systems are just making it more clear how true that is.
In theory, such systems could be of great value in cutting through the noise to find signal. In theory. Eventually, with enough work. We will see how long it takes to get there.
Exactly this. Gathering a bunch of knowledge from anywhere will include the full range of human experience. What could be harmful in one context could be beneficial in another.