

I think the lack of distinctive features is inherent to how AI training works. As I understand it, you feed a model data and it learns the averages across that data. So if you feed it 100 images of people, it takes note of the features they share. For example, say 60% of the reference images happen to feature a beauty mark by the mouth; the model picks up on that, and when it produces an image of its own, that image will likely feature a beauty mark by the mouth too. Apply the same logic to every other visual feature and you can see why AI output looks so samey: it’s presenting the average of the inputs it was fed. Better training material will probably help, and I’m sure it can be tweaked, but “bland” really seems to be baked into the formula.
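To make the intuition concrete, here's a toy sketch in Python. To be clear, this is not how real image generators work internally (they learn statistical patterns rather than literally averaging pixels), but the pull toward the mean is the same idea: a feature present in most of the training images survives in the output, while a one-off quirk washes out.

```python
import numpy as np

# Toy illustration of "averaging" training data. NOT a real
# generator; just the regression-to-the-mean intuition.

rng = np.random.default_rng(0)

# Pretend these are 100 grayscale 64x64 face images.
faces = rng.normal(loc=0.5, scale=0.1, size=(100, 64, 64))

# 60% of them have a "beauty mark": a dark pixel near the mouth.
faces[:60, 48, 30] = 0.0

# One single image has a unique feature: a scar on the cheek.
faces[7, 20, 50] = 0.0

# "Training" as naive averaging: the pixel-wise mean image.
mean_face = faces.mean(axis=0)

# The majority feature survives in the average...
print(f"beauty-mark pixel: {mean_face[48, 30]:.2f}")  # ~0.20, clearly darker
# ...while the one-off feature all but disappears.
print(f"scar pixel:        {mean_face[20, 50]:.2f}")  # ~0.50, washed out
```

The distinctive scar ends up indistinguishable from the background, which is roughly the "blandness" effect described above.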
I think the “11-year-old killed” part is the one that should make you feel bad.
The influencer part is pretty irrelevant to the overall issue here, even if it might be relevant to other aspects of the story.