The debate around AI and data privacy intensified after a recent Wall Street Journal report claimed that US officials used AI models during Operation Absolute Resolve in Venezuela. The report didn’t focus on everyday users, but it showed how widely AI tools are now used. From serious government work to fun social media, chatbots like ChatGPT, Gemini, and Claude are everywhere.
And this makes one thing clear—the more we use AI, the more data we share with it.
Over the past few months, social media has been flooded with AI-generated portraits. First came Ghibli-style edits that transformed ordinary selfies into animated characters. Then Gemini's 'Nano Banana' saree edits and similar creative trends took over feeds. After that, the ChatGPT caricature trend took off.
Like many others, I joined in. I created photos with ChatGPT and Gemini just to see the results. The images looked great and were ready in seconds. But as the excitement around the trend subsided, privacy concerns began to rise. When we upload photos to AI chatbots, we don't just get edited images back; we're also handing over our facial data.
These images are processed on cloud servers and may sometimes be stored to train or improve the AI models. Companies say they follow safety rules, but most users never read the terms. We usually just upload photos and move on.
There’s also confusion about where the data goes. Many AI platforms operate globally, and data may be processed on servers outside India. This means users aren’t always aware of the laws that apply to their information. For a simple Instagram trend, most people don’t think about data storage. They just want a fun AI image to post online.
What Tech Experts Are Saying
Tech experts say that while these viral AI image trends may seem harmless, they pose real data risks.
According to Tarun Pathak, Research Director at Counterpoint Research, when users participate in caricature or avatar trends, they are unknowingly creating a detailed digital profile of themselves.
He told Times Now Tech, “This is a goldmine for AI systems that rely on constant data feeds to train their models and will remain active long after the viral trend has died down. This could lead to very difficult phishing attacks involving the misuse of biometrics. Users should be aware of platform policy controls and disable training features on these platforms before engaging in them or using overly generic prompts.”
Pathak warns that sharing caricatures is like giving away your LinkedIn, Instagram, and Google details all at once, especially on platforms that don’t already have access to that data.
Faisal Kawoosa, founder of TechArc, told Times Now Tech that the issue isn’t just about a trend, but also about how AI models are built. Every time users upload photos, they’re helping companies refine and improve their systems.
He said, “Whenever we use such platforms and share any kind of information, including our photos, what are we doing? We’re giving these platforms more data to train their models on different races, cultures, genders, and so on.”
Kawoosa compares this to the early days of social media, when users were encouraged to share posts and invite others so that platforms could grow.
He said, “Social media was being built that way. Similarly, what these AI platforms are doing is that they can build models, but they don’t have the data. The data is with users like you and me, and from there they crowdsource the data to create these trends.”
He explained, “When something goes viral, everyone uses the same prompt and uploads their photos. In this process, we’re validating and improving the model.”
The concern, both experts say, is that once data is shared, users have limited control over it. "We submit our photos voluntarily, and after that, it becomes difficult to manage privacy," Kawoosa added. Both agree that AI tools are useful and not inherently unsafe. However, users should be more aware of what they are sharing. Reviewing basic privacy settings, avoiding sensitive photos, and understanding how data may be used can go a long way.