Harriet Kingaby and Jake Dubbins, Co-Chairs of the Conscious Advertising Network, highlight the uncomfortable fact that advertisers, and the platforms we advertise on, are not doing all we can to tackle the issues of child safety. In this article, they share the updated Children’s Wellbeing Manifesto, which maps out how the industry can move from compliance to leadership.

This article is part of a Spotlight EMEA series on Gen Alpha.

In November last year, we were lucky enough to watch Baroness Beeban Kidron OBE speak at the Conscious Thinking event in London. Taking to the stage, she delivered one of the most compelling speeches we have ever heard, on a topic that has been the focal point for so much legislative effort in recent years: children’s wellbeing online.

Baroness Kidron described a system of badly executed and regulated targeted advertising and content recommendations which were having devastating effects on young people. In one section, she compared traditional, passive advertising such as that on the side of a bus with digital advertising, and gave real-life accounts of its impact:

“The difference is, the [advertising on the] bus does not sense the child’s gaze and follow them down the road.

“[The bus] does not identify that they are between 13 and 17 years old, home on a Friday night, and then send commercials for cosmetic surgery on the basis that a teen at home on a Friday is unhappy with their body.

“...[the bus does not bombard them with an item, link them to scammers or send them to suicide content based on what it knows].”

The regulation of advertising to children has been a key focus of legislation over the last two years. The EU’s Digital Services Act has brought in more transparency and accountability for platforms, restricted targeted advertising, and designated special protections for minors. The UK’s Online Safety Act takes a similar tack, empowering regulators and bringing in requirements for age verification and assurance.

This legislation comes at a moment where it is clear our online information environments are having profound effects on children. While online connection creates new opportunities for children to socialise, game, and access knowledge, worrying trends are also in evidence. From the chilling misogyny of influencers like Andrew Tate parroted in the classroom, to the explosion of mental health issues that psychologist Jonathan Haidt blames squarely on the use of mobile phones, to the devastating online bullying that has led to child suicides, it’s clear that the effects of our new media environments are not all benign.

Children’s wellbeing faces 4Cs of risk: content, contact, conduct and contract. These risks range from advertising practices jeopardising corporate brand safety to those illegally targeting children. Examples include targeted adverts promoting unhealthy eating habits and body dysmorphia, social media content sexualising minors with corporate advertisements, and illegal adverts targeted at children for products such as vaping and gambling. Misinformation can also be particularly harmful for children, leading to belief in harmful stereotypes, disrupting education, or even leading to doomerism on important issues like climate change.

These legal interventions are much needed to counter the financial incentives that have caused such harm. A 2022 study concluded that nearly $11 billion in ad revenue is generated annually by social media platforms from US-based users under the age of 18. And in 2017 it was estimated that ad tech companies hold an average of 72 million data points on a child by the time they turn 13. Beyond key ethical and child rights issues such as consent, agency and privacy, this datafication can have significant consequences for a child’s development.

We advertisers, and the platforms we advertise on, are not doing all we can to tackle the issues of child safety. This is an uncomfortable and incontrovertible fact. Innovations in digital technologies affect children’s lives and their rights in ways that are wide-ranging and interdependent, even where children do not themselves access the internet. Children and young people often bear the brunt of risks to information spaces, being directly affected by emerging technologies and media trends. And younger children in particular can lack the ability to recognise the difference between AI-generated content and reality, which does not bode well in an era where the number of AI-powered made-for-advertising (MFA) sites is increasing exponentially.

These issues are exactly why the Conscious Advertising Network has relaunched our Children’s Wellbeing Manifesto, to map out how the industry can move from compliance to leadership. It has been put together following consultation with over 40 individuals and organisations, from Omnicom Media Group UK to the NSPCC and Barnardo's.

The manifesto sets out how the rights of every child must be respected, protected, and fulfilled in the digital environment as in the physical world. Advertisers and brands should advertise in a way that is sensitive to these unique needs both online and offline. Children’s internet safety relies on the existence of trusted relationships between online platforms, children, parents/guardians and legislative frameworks, but it is essential that we advertisers understand our role in protecting them.

The reasons for acting are commercial as well as moral. Ensuring children’s rights and wellbeing not only enhances a company’s reputation but also strengthens risk management and investor confidence in ethical business practices. In a world where trust is in decline, developing trust and ethical standards as a brand is more crucial than ever, particularly for savvy Gen Z.

When children are a brand’s primary audience, brand safety must be a brand priority, rather than a function or a discipline within a business department. Even brands not directly marketing to or targeting children should still consider them in their marketing plans.

The same goes for the platforms we advertise on. Too often, platforms prioritise revenue generation over safety, which can lead to compromises in safety standards such as lenient content policies, inadequate moderation practices, or features designed without sufficient consideration for child safety implications. We are advocating for safety by design, including robust policies and procedures, continuous monitoring and evaluation of safety measures, and transparency in reporting safety outcomes.

By fostering a safety-first culture, platforms not only uphold their responsibility to protect children but also contribute to a safer, and brand-safe, digital environment overall. This approach instils trust among users, parents, and regulatory bodies while promoting the long-term sustainability of the platform by prioritising user well-being over short-term financial gains. It underscores the importance of ethical decision-making and accountability in digital platforms' responsibilities towards their youngest users.

The updated manifesto is based on six principles inspired by the eleven principles of Child Rights by Design from the 5Rights Foundation, as well as by the UN Guiding Principles for Information Integrity. The latter contains advice to both technology companies and advertisers on age verification, parental controls, and special reporting and complaints mechanisms for children, so that human rights are prioritised and children are protected from harmful misinformation.

Principle 1: Safety-by-Design - Embed safety-by-design in the development and distribution of advertising

Placing the reduction of risk of harm to users at the heart of our decision-making. This approach is preventative, rather than reactive once the harm has already been caused.

Principle 2: Responsible Practice – Comply with legal frameworks and conduct a Child Rights Impact Assessment

The advertising industry should recognise and adhere to international standards, national laws, regulations, industry standards and other relevant measures.

Principle 3: Age Appropriate - Develop and place advertising that is age-appropriate by design

We, as conscious advertisers, need to carefully adapt to children’s age ranges and diverse developmental stages, ensuring data collection aligns with appropriate levels of protection. We should look at implementing age-assurance measures for high-risk products or services, and rigorously test their efficacy.

Principle 4: Agency - Support child users’ decision-making and reduce exploitative features and business models that harm their agency

Advocating for children’s autonomy in navigating the digital environment. Children should have the freedom to begin and end their interactions with digital platforms at their own will, feeling empowered to explore without fear of missing out. It means they understand what they have agreed to, without being coerced or influenced into decisions that compromise their safety, privacy, growth, or overall welfare.

Principle 5: Privacy - Embed privacy-by-design and data protection in marketing development and distribution

Clear disclosure of how children’s personal information is utilised and processed. This information should be presented in an accessible way for children to find and understand, with the utmost care taken to protect their privacy and prevent exposure of their information.

Principle 6: Diversity, Equality & Inclusion – Be inclusive, treat everyone fairly and provide for diverse needs and circumstances

Promoting equality and diversity for all children across content, conduct, contact and contract. We should promote positive representation in the marketing of products and content designed for children.

Of course, we advertisers cannot make change happen in a vacuum. We must also work with the platforms and regulators to change the fabric of our online environments. But we can, and should, do our best to protect children, and rebuild brand trust through the best behaviour we can.

This article includes contributions from Yeo Joung Suh & Eline Yarra Jeanné.