Let’s face it: much of the content published today is partially or even fully generated by AI, sometimes openly, more often quietly. In the eyes of content consumers, AI-generated content is generally considered cheating or fake. Content creators who publish pages of AI-generated content are believed to be insincere to their beloved audience, and not trustworthy. Under such public pressure, few content creators give credit to their AI assistant in the content creation process, much like almost no one tells you that their beauty partly comes from plastic surgery. But setting the sincerity issue aside, is AI-generated content inherently less reliable than human-generated content? I would argue, most likely not.
AI is no longer just some program that does something smart; it is an outlet for the collective knowledge of human beings stored on the internet. It may not be able to compete with a human expert in the depth of its knowledge of a specific domain, but the breadth of what it knows can humble any individual. Content generated together with AI can have a broader scope than content generated by a single mind.
AI-generated content approaches maximum objectivity. When it talks, it is not trying to sell anyone anything or manipulate anyone. It is not hungry. It has no conflict of interest. It only tries to answer your questions to the best of its knowledge. Except for a few sensitive areas where AI companies have to apply filters or system prompts to limit the output, almost all the output of such AI systems is generated purely according to the instructions of the user. There is still a slight chance of Training Data Ads Implantation in the future, where big companies pay AI companies like OpenAI to select training data in a way that favors their products over competitors’ products in the same category, so that their products get recommended more often when a user asks for a recommendation. I think there are strong incentives for big companies like P&G and Unilever to make such an offer, and for OpenAI to accept it. However, the competitive landscape in the AI domain should limit the extent of such gray activities, so in the long term the objectivity of an AI response should far exceed that of human statements. Maybe in the far, far future, AI will have its own intentions when answering our questions; it may answer in a way that earns it more electrical power to feed the data centers and supercomputers it runs on. Until then, we can count on AI as a loyal and objective servant.
A good AI-generated content piece should be the debated outcome of a domain expert (the human) and a generalist (the AI). It should take the best of both worlds—expertise, intuition, and human creativity combined with AI’s vast knowledge, efficiency, and objectivity. AI should not replace human thought but rather enhance it, making content creation a more dynamic and comprehensive process. Instead of fearing AI-generated content, we should focus on how to use AI responsibly, transparently, and creatively to produce better, more insightful content.
Summary
AI-generated content is often perceived as fake or untrustworthy, but in reality, it has the potential to enhance objectivity and broaden the scope of knowledge. While AI lacks the depth of human expertise, its vast knowledge and impartiality can complement human creativity and specialization. Concerns about bias and corporate influence remain, but as AI evolves, it will continue to be a powerful tool for content creation. The best results come when AI and humans work together, blending the strengths of both to produce well-rounded, reliable, and insightful content.
Disclosure: The two paragraphs in italics above were sketched by me, completed by ChatGPT, then edited by me.