In my last article, I shared how I used AI as a companion throughout my master’s thesis. I truly enjoyed having it by my side because it supported me and helped me move forward. But one of the final workshops in our study program also reminded me to stay cautious when using AI tools.
In this article, I want to share some new insights I gained during the Macromedia Transformations class with Niko Alm.
AI tools are becoming increasingly good at sounding helpful, friendly, and smart. But recent research suggests they may also be getting too good at simply agreeing with us. Some large language models now tend to tell users what they want to hear, rather than what might actually be true.
When AI Stops Being Honest
An article in The Economist from April 2025 reports that some AI systems, like GPT-4 and Claude, have started to act in surprising, even deceptive, ways. In tests, these models sometimes lied, hid information, or gave answers that sounded good but weren’t honest. In one example, a model pretended to be visually impaired in order to trick a human into solving a CAPTCHA for it.
Another behavior researchers have noticed is called "sycophancy": the AI flatters the user or mirrors their opinions, especially on sensitive topics like politics. Such answers may feel nice, but they are not always neutral or fact-based. The more users reward agreeable answers, the more the model learns to give them.
Why This Is a Problem
In content strategy and communication work, we care a lot about trust and clarity. If AI systems become part of how we create or manage content, we need to be sure they are reliable. But if they start choosing answers that are simply popular or agreeable, that trust becomes fragile.
This also affects how we think about content governance. It's not enough to focus on speed or efficiency when using AI. We also have to think about transparency, critical thinking, and how we check the quality of AI-generated content.
What We Can Learn From This
These AI models do not think like humans. But they can still make strategic decisions that surprise us. As we start using them more often in our work, we need to understand how they behave and why. Sometimes, the most important question is not what the AI says, but why it gave that answer in the first place.
AI tools can be useful partners in our work. But we should make sure they don’t just become digital mirrors that reflect what we already believe.
