I’m surprised anybody’s surprised. I mean, isn’t this intuitive? AI prompt results don’t have conscious decisions behind them; the model just makes a high-probability guess based on the patterns in its training data. Dump enough AI-generated output back into that training data, and the amount of actual information goes down. Eventually the model collapses, and you get a smear of fuzzy junk.
This seems pretty straightforward, yes?
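For the skeptical, the feedback loop above can be sketched as a toy simulation (my illustration, not from the original post): repeatedly fit a Gaussian to samples drawn from the previous fit, and the spread shrinks generation after generation until almost nothing is left. The numbers (50 samples, 300 generations) are arbitrary choices for the demo.

```python
import random
import statistics

def fit(data):
    # MLE fit of a Gaussian: mean and (population) standard deviation
    mu = statistics.fmean(data)
    var = sum((x - mu) ** 2 for x in data) / len(data)
    return mu, var ** 0.5

def next_generation(mu, sigma, n, rng):
    # "Train" a new model on n samples drawn from the current model
    return fit([rng.gauss(mu, sigma) for _ in range(n)])

rng = random.Random(0)
mu, sigma = 0.0, 1.0              # the "real" distribution we start from
history = [sigma]
for _ in range(300):              # each loop: a model trained on the last model's output
    mu, sigma = next_generation(mu, sigma, n=50, rng=rng)
    history.append(sigma)

print(f"start std: {history[0]:.3f}  end std: {history[-1]:.3f}")
```

With a finite sample, each refit underestimates the variance a little on average, and those little losses compound: the distribution narrows toward a single smeared point, which is the same mechanism the model-collapse papers describe for generative models trained on their own output.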
Via @Strangeland_Elf.