Technology Law Blog

Child Digital Privacy in Malaysia: Risks, Regulation and Solutions

This is a transcript of the presentation of this paper by SLC Partner, Darmain Segaran, at the Malaysian Communications and Multimedia Commission (MCMC) Symposium held on 21 November 2019.

Good morning ladies and gentlemen. I must admit that the initiative to pursue this research was motivated by selfish reasons. Having two young children who will soon be reaching an age where social media will infiltrate their lives, with or without their choice, I began asking the question of their safety and security in cyberspace.

My research so far in this area, which I note is ongoing, has led me to the interim conclusion that the severe risks to child digital privacy, and the consequences of ignoring them, are not adequately appreciated. I should point out here that no country has sufficiently addressed or found a solution to the issue of child digital privacy. That said, Malaysia does not appear to be among those charting the path.

Privacy can be traced to the ‘right to be let alone’ propounded by Warren and Brandeis in an 1890 Harvard Law Review article. Data protection, on the other hand, is a far newer area of law, with the earliest dedicated data protection legislation enacted in Sweden in 1973. Data protection is a subset of the larger branch of privacy and, although the two overlap in many ways, privacy can be viewed as having the wider scope.

In the rush to deal with this ‘data craze’ in the commercial sector, the majority of countries have enacted their own versions of data protection legislation but seem to have glossed over children as a segment of society that requires both special protection and attention. One likely reason for this is the inherent complexity of dealing with children.

Jurisprudential theory on the question of children’s rights remains largely under-theorised. Do children have human rights? Although international instruments now categorise children as ‘beings’ entitled to rights, many theorists have suggested (and continue to suggest) that these legal instruments are not based on any real legal theory. Indicative of the varied views on this topic, Onora O’Neill famously remarked that ‘a child’s main remedy is to grow up’.

Getting past the theory, from a definitional standpoint we face the issue of age polarity. Crafting solutions for those we legally define as ‘children’ spans the ages of 0 to 18. As you can imagine, the use of online and digital spaces varies wildly across such a large age range. Tied closely to this is the range of emotional and psychological maturity, which does not necessarily correlate with age.

Even acknowledging these difficult and unsettled foundations, we cannot escape the reality that the digital space poses significant risks to children, a few of which I shall highlight here.

Under the broad umbrella of ‘safety’, the first risk element we can consider is the perpetuity of data. Data may be deleted, but nothing is ever truly erased online. Of particular concern here is the collective repository of unfiltered, publicly accessible records of an individual’s personal life, known commonly as social media.

Many social media platforms permit access to children 13 years old and above. Children, at an age where they are still discovering the boundaries of social interaction, are given a platform to record publicly and permanently their thoughts and opinions.

The problem is amplified when social media data begins to be used in insurance underwriting, loan applications, employment screening or legal proceedings.

Adding a new dimension to children’s privacy in the digital age is the phenomenon of ‘sharenting’ – where a parent shares a child’s personal information or images without the child’s consent. Sharing without any form of consent by the child presupposes that the information being shared is, in fact, harmless. This, of course, cannot be guaranteed.

Beyond the risk to reputation is the combination of actual personal risk and personal data risk. For example, the integration of platforms with IoT-enabled devices can combine facial recognition technologies with ‘tagging’ functionality, enabling the identification of children in photos, and in real time via geo-location data. One unexpected concern arose with IoT-enabled toys whose voice recognition features may allow the identification and recognition of children, as well as the ability to communicate with them and record their voices.

Security researchers have already found serious vulnerabilities in such toys, which connect automatically to unsecured wireless networks. In some cases a hacker can easily connect to the toy and communicate with the child directly to obtain information or give instructions. As a measure of the severity of this issue, consider that the German government classified a doll called My Friend Cayla as an “illegal espionage apparatus” before it was pulled from the shelves.

Unsupervised internet access also poses a risk to children. A survey by Microsoft in 2003 showed that parents were, on average, willing to allow their children unsupervised internet access by the age of 8. Recent research shows that 90% of third-party apps in the Google Play store harvest user data such as age, gender, location and usage patterns. Children’s digital footprints, and the ability of various platforms to combine and integrate that data to produce insights, have the potential to determine and impact children’s futures. We run the risk of allowing an entire generation’s childhood to be captured as data, quantified, sold and re-sold, in some cases without consent or even knowledge.

Using such data, children’s private lives can thus be exploited by marketers, who can observe their online activity and even mimic those online environments that appeal to children and in which they feel safe. A looming ethical question exists as to whether such targeted marketing is appropriate at all. 

On the more sinister side of safety, we observe child personal data being used in cases of grooming and sexual abuse. In such circumstances, the data can be used to identify a potential victim or, where there are elements of shame or embarrassment, data such as images or recordings can be used against victims in the form of blackmail. Leaked private information has also been a factor in a number of cyberbullying cases amongst children.

In addition, we may see profiling, much like those used for targeted advertising, being used instead for recruitment into radical organisations and the promotion of political agendas using hate speech and extremism.

Hearing all these risks tends to cause most concerned parties to turn immediately towards access restriction as a safety measure. However, we must also recognise that the digital space was always intended to be an enabler of knowledge, learning and development for children. By restricting access to such a valuable resource we instead stifle development, and we will very quickly find our children ill-equipped to deal with global technological advancement. Over the long run we would be losing talent, ideas and innovation - a cost which is possibly too steep. We must find a balance between these two poles.

Returning to Malaysia, this paper considers the legal backdrop of child digital privacy. Privacy as a legal concept is not fully developed here. Although it is recognised under Article 5 of our Federal Constitution (albeit not directly), we have no standalone legislation for privacy, and the limited common law has shown a reluctance to develop the area. Against this backdrop, the enactment of the Personal Data Protection Act seemed like a necessary knee-jerk reaction to the ‘data phenomenon’ sweeping the world.

Malaysia’s legislative approach to child protection is fragmented. We have provisions in numerous pieces of legislation that deal with varying aspects of child protection, such as the Penal Code, the Sexual Offences Against Children Act, the Communications and Multimedia Act, the Child Act and even the Computer Crimes Act. However, none of these provisions deals exclusively with child digital privacy - nor do they, individually or collectively, deal adequately with the issue, as they were not enacted with those intentions in mind.

At an international level, Malaysia is a signatory to the Convention on the Rights of the Child (CRC) and two of its optional protocols, which indicates the country’s willingness to safeguard the rights of children. Unfortunately, the provisions set out there do not directly address child digital privacy concerns either.

Looking outward, we can draw lessons from other jurisdictions that are attempting to address this concern directly. Often cited as the current standard is the Children’s Online Privacy Protection Act (COPPA), a piece of American legislation aimed at guarding the online information of children below the age of 13. It does so by requiring privacy policies in a prescribed form and, in most circumstances, parental or guardian consent. The Federal Trade Commission (FTC) enforces its provisions and has issued large fines, famously including, in February 2019, a USD 5.7 million fine against ByteDance, the company operating the social media platform TikTok. To promote the principles under this Act, the FTC has also implemented a number of safe harbour provisions to encourage industry self-regulation. In addition, the FTC is often kept in check by watchdog groups such as the Electronic Privacy Information Center (EPIC), which regularly holds a watching brief in FTC prosecution cases.

Europe, on the other hand, adopts a more layered approach. The right to privacy is enshrined in Article 8 of the European Convention on Human Rights (ECHR), and all member states have their own legislation to deal with it. The application of this law appears to be uniform for adults and children alike. It has been argued that in Europe, data protection tends to gain more attention than privacy. Notably, the GDPR does contain provisions requiring particular attention to be given to children’s data; however, this is clearly not a focus of the regulation.

Closer to home, we see Japan adopting a broader approach through a combination of diverse legislation and government-led initiatives implemented through policy. This approach has both compelled compliance and encouraged self-regulation.

While recognising the need for adequate legislative provisions, this paper argues that a comprehensive solution includes participation from the corporate and commercial sectors, particularly the tech sector. Privacy as a right must first be recognised and then demanded. Borrowing from Brad Smith, Microsoft’s current President and Chief Legal Officer: “This is a fundamental fact of life that everyone who works in the tech sector needs to remember every day. We’re fortunate enough to work in one of the most lucrative economic sectors of our lifetime. But the money at stake pales in comparison to the responsibility we have for people’s freedom and lives.”

At a practical level, organisations can begin by understanding the risks and the potential solutions that can be implemented to address privacy rights, and certainly child digital privacy rights. The implementation of project-based or organisation-wide privacy programs that utilize measurable privacy frameworks becomes essential.

One of the standout frameworks addressing child digital privacy is the principle of ‘privacy by design’, set out in the GDPR. Under this framework, the development of a product or service takes privacy concerns into account from the beginning and builds privacy solutions into the natural design of the offering. Corporations can also take advantage of publicly available tools to help determine whether they have adequately addressed the issue of child privacy.

In addition, with the right business environment created by the government, businesses can be built around tech solutions that address privacy concerns uniquely for children. Such solutions certainly add a unique dimension to tackling the problem. Apps like Bark and Net Nanny have already proven effective in addressing child online protection; however, more focused tools could be developed for privacy concerns.

As a final element of the solution process, the paper proposes that digital literacy must form a critical component. Although it is, of course, important that children are the beneficiaries of literacy programs aimed at identifying the risks and red flags in digital spaces, it is equally important to educate parents and guardians on the same issue. The question however remains as to who will lead the way on this front. 

I would point out here that although the government is the first port of call on this issue, it need not always lead. DiGi has, over a number of years, spearheaded through its CSR initiative an ongoing campaign to combat cyberbullying, which includes education on internet usage for both children and parents. Initiatives such as this, in collaboration with the government sector, can produce unique solutions.

Ladies and gentlemen, when discussing the issue of privacy in any context I have often been asked, “Aren’t there bigger issues than this?” I am quite certain there are. But that in no way diminishes the reality that digital privacy is, in this modern day, a concern like never before. And as it stands, children are at the greatest risk of being overlooked. This is something we simply cannot afford.

Darmain Segaran