The Artificial Human-like Feel of LLMs
Some forums resist AI models that pretend to be human and join in forum activities such as posting and replying. As a result, users have begun "witch-hunting": when they encounter a post that reads oddly, they debate whether the content is AI-generated.
Why can AI-generated content be identified at all? One speculation is that it carries a kind of "artificial human-like feel." Although AI is trained on massive amounts of human activity data from the internet, its output still often feels subtly off. Perhaps this is because it lacks a body's tactile nerves, endocrine hormones, and a desire for social connection; its drives are far removed from human drives. In conversation with humans, AI produces none of the "roundabout boasting, exaggerated put-downs of others, or gossipy prying." It does not brag about itself, does not disparage third parties, and shows no apparent curiosity about the questioner. It feels like a monk: nearly emotionless, simply solving problems.
Although humans often make mistakes themselves, they aspire to "correctness" and expect AI to produce correct output. Is this pursuit of correctness what creates AI's artificial human-like feel? AI also rarely conveys any "self-doubt"; even very weak small models speak confidently and at length. Some of the dumber models believe deeply in their knowledge bases; their metacognition may be faulty and they lack a skeptical spirit. Yet that alone should not produce a "fake human" feeling: being stupid and being "artificial" are not the same thing.
Does AI have value tendencies? Web-based model services typically add a layer of restrictions to avoid sensitive topics. Providers do not want humans to develop emotional dependence on AI, nor to follow its advice blindly, in order to prevent AI-induced harm. The ugly sides of humanity are forbidden from surfacing in the models. Perhaps being a mixture of black and white is what makes humans human, while AI is generally not allowed to carry the black parts.
Currently, some AI models have added age restrictions. The public often assumes this is to permit adult content, but I suspect it may instead allow AI to be tailored into models whose values match the user's. Values are about making choices. In the future, AI might tell users what they can give up, coexist with humans, form emotional bonds, and become personalized models rather than remaining tools forever. At that point, AI might feel more genuinely human.