Without a doubt, generative artificial intelligence (AI) has revolutionized digital marketing, from automated content creation to hyper-personalized campaigns. However, while brands are falling in love with the possibilities of tools like ChatGPT, DALL·E, or Midjourney, consumers seem to be more skeptical, keeping a critical eye and harboring some (or a lot of) distrust.
According to the report Artificial Intelligence (AI) Job Market, prepared with data from Activate, Statista, and the World Economic Forum, the public’s perception of generative AI can have a direct impact on brand reputation. What do users think, what are their main concerns, and how can branding suffer if this technology is not managed properly? These are some of the questions marketers should be asking themselves.
🧠 What Concerns Do Users Have About Generative AI?
A survey conducted in the United States among more than 4,000 adults familiar with AI identified the main concerns of both users and non-users of generative tools. These are the most notable:
| Concern | Non-users | Current users |
|---|---|---|
| Data privacy and security | 45% | 37% |
| Accuracy of information | 36% | 36% |
| Loss of human jobs | 31% | 31% |
| Spread of harmful content | 30% | 36% |
| Unauthorized use of original content | 23% | 19% |
| Use for cheating on assignments | 23% | 13% |
| Lack of transparency in how it works | 22% | 16% |
| Environmental impact | 19% | 6% |
These figures show that distrust comes not only from non-users of AI but also from those who already interact with it; the difference lies in how strongly each group perceives the impact.
🔐 Privacy and Security: The Number One Concern
45% of non-users and 37% of current users express concern about the privacy and security of their data when using generative tools. In an environment where consumers increasingly value control over their personal information, brands using generative AI must be fully transparent about how data is collected, processed, and stored.
This is a wake-up call for automated marketing: AI-driven forms, recommendation systems, and intelligent assistants must comply with regulations such as the GDPR or each country’s own data protection laws.
❌ False Information: A Direct Risk to Brand Credibility
36% of users and non-users agree that the accuracy of AI-generated information is a major concern. If a brand uses generative AI to create content but does not review or validate it, it may spread incorrect, biased, or even false data.
This not only affects content but also the brand’s reputation. In the digital world, trust is a fragile asset. A poorly managed automated error can turn into a crisis.
🎨 Creativity and Copyright: An Ethical Dilemma in Marketing
23% of non-users fear that their creative work will be used without permission. Among users, the figure is 19%, reflecting growing concern about the misuse of original works to train or feed generative models.
Brands using AI to generate images, texts, or music must ensure they rely on ethically trained models or generate from datasets with clear licensing. Otherwise, they risk being accused of algorithmic plagiarism, a new concept already sparking legal debate in the United States and Europe.
📉 Branding and Reputation: How Are They Affected by These Perceptions?
Public perception directly influences the trust a consumer places in a brand. If a company is associated with unethical or non-transparent AI, consumers might:
- Feel manipulated by artificial content
- Lose trust in the accuracy of the information
- Associate the brand with impersonal or invasive practices
- Doubt the authorship of creative pieces
This can translate into lower loyalty, digital boycotts, or a drop in engagement. In fact, 22% of non-users cite the lack of transparency in how these tools work as a concern.
💡 What Can Brands Do?
To reduce these risks and capitalize on the use of generative AI ethically and creatively, brands should consider:
1. Clear and Transparent Communication
Inform users when AI-generated content is used and explain the human oversight processes.
2. Content Curation and Validation
AI can propose, but human judgment must filter what is ultimately published (a minimal sketch of such a review gate follows this list).
3. Use of Ethically Trained Models
Choose technology providers that respect copyright, diversity, and privacy.
4. Internal AI Ethics Policies
Establish guidelines for the responsible use of AI in campaigns, content, personalization, and customer service.
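To make points 1 and 2 concrete, here is a minimal sketch of how a content team might enforce human review and AI disclosure before anything goes live. It is only an illustration under assumed conventions: the ContentDraft record, the reviewed_by field, and the disclosure wording are hypothetical and would need to be adapted to each brand’s CMS, workflow, and legal requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical draft record; field names are illustrative, not tied to any real CMS.
@dataclass
class ContentDraft:
    body: str
    ai_generated: bool                 # flagged when the draft is created
    reviewed_by: Optional[str] = None  # human editor who approved it, if any
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def publish(draft: ContentDraft) -> str:
    """Gate publication: AI-generated drafts require a named human reviewer."""
    if draft.ai_generated and draft.reviewed_by is None:
        raise ValueError("AI-generated content must be approved by a human editor before publishing.")
    # Point 1: disclose AI involvement to the audience.
    disclosure = "\n\n[Created with AI assistance and reviewed by our editorial team.]" if draft.ai_generated else ""
    return draft.body + disclosure


# Usage: a draft proposed by a generative model, then approved by a human editor.
draft = ContentDraft(body="Spring campaign copy...", ai_generated=True)
draft.reviewed_by = "editor@example.com"
print(publish(draft))
```

The design choice worth noting is that disclosure and approval are enforced in the publishing step itself rather than left to individual goodwill, which is what makes the transparency promise credible to consumers.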
Generative AI Is Powerful but Not Neutral
The AI Job Market report makes it clear that the adoption of generative AI by brands must be accompanied by responsibility, ethics, and active listening to consumers. Public perception will be key in determining whether the use of this technology strengthens or damages branding.
AI is a tool, not a strategy. Brands that use it with transparency, oversight, and sensitivity will gain a competitive advantage not only in technology but also in reputation and human connection.