"Me as an action figure"New AI hype on social media - what happens to personal data?

This trend is currently hard to miss on TikTok, Instagram, LinkedIn or X: feeds are filling up with images of people as action figures, comic characters or futuristic avatars, mostly generated in ChatGPT with image models such as DALL-E 3 or GPT-4o. It's not just children and young people taking part, but also adults, influencers and companies using the AI-generated action figures for marketing or self-promotion. The enthusiasm is great, but it raises important questions: What data is disclosed and processed? And what happens to this information in the background?

Seeing yourself as a superhero, knight or pop star - for many children and young people, AI tools can fulfill this dream, at least visually, in just a few clicks. More and more platforms offer to create a personalized 3D image or even a digital action figure. The finished avatars often look impressively realistic and are full of creative details.

However, to create a digital figure that is as realistic as possible, AI services require a large amount of personal information. In addition to one or more portrait photos, they often ask for a name, occupation or school. The place of residence is frequently requested too, so that suitable regional outfits or backgrounds can be selected. Many users also provide details about their hobbies, favorite colors or favorite animals. Some tools even invite users to enter personal catchphrases. Children and young people in particular often fill out such forms willingly, without considering that they are revealing very personal and sensitive data about themselves or others.

How AI systems handle the data

AI-supported tools require large amounts of data so that their models can be trained and further developed. The terms of use of many services stipulate that uploaded content may be used to "improve the services" or to "train AI models". In practice, this means that faces and other personal information can end up in large data sets used to train the AI. The system learns, for example, what people of different ages and from different regions look like, or which hobbies are common in certain age groups. Location data or voice recordings can also be collected and analyzed to better capture regional differences. Even if the data is not sold directly, there is still a risk that it could be passed on to third parties unintentionally, via technical interfaces or through security vulnerabilities.

A harmless click can end up being expensive - especially if the "action figure" infringes trademark or personality rights. You can find out more in the article by mimikama: Action figure with AI: One click. One picture. One warning.

What parents should know and consider now

  1. Take a close look at what is requested: Not every field has to be filled in. Many tools also work if, for example, no real name or location is entered. Children should know: the less real data they share, the safer they are.
  2. Check the terms and conditions and privacy policy, or avoid tools where this is not possible: Some services are not transparent about what happens to the data entered. If it remains unclear who has access to the information or how long it is stored, caution is advised.
  3. Teach children data minimization and privacy: Explain to your child why they need to protect their photos and personal information such as their name or place of residence online. Comparisons help, especially with creative tools: "Would you tell a stranger on the street all these things?"
  4. Take age ratings and terms of use seriously: Many AI tools are only permitted from the age of 13 or 16 - and for good reason. Find out about the minimum age and discuss the risks openly.
  5. Look for safe alternatives: There are many child-friendly, privacy-friendly options, such as creative craft apps or drawing programs that do not ask for any personal information.