
The Online Safety Act? — Or The Online Regulatory Act?

Author: R4shSec
I like it when things work how they’re not supposed to.

What Is The “Online Safety Act”?

Several countries, including the United Kingdom (UK), Australia, and Malaysia, have enacted an Online Safety Act meant to protect children from adult content and remove harmful material. However, these laws also give government agencies control over social media platforms and degrade the user experience. Here’s why you should take this matter seriously 👇


Countries

Countries all over the world, with Australia leading the way, have put legislation in place for the “safety” of their internet users. These laws also hand control to the governments of the regions below: social media platforms can face fines or criminal action for failing to enact safety measures. This has led to age-gating of the internet.

United Kingdom (UK)

[Photo: United Kingdom (enrico bet, Unsplash)]

👆 Online Safety Act 2023

The Online Safety Act 2023 was passed to “protect kids.” Under it, the government of the United Kingdom (UK) requires users to verify their age with either a biometric scan or a scan of a government document, such as an ID card or driving licence.

  • On 19th April 2019, Chelsea Russell, a teenage girl from the UK, posted lyrics from Snap Dogg’s “I’m Trippin’” to pay tribute to a boy who died in a road crash. She was given an eight-week community order, placed on an eight-week curfew, and ordered to pay £500 in costs plus an £85 victim surcharge. (Source)

Malaysia

[Photo: Malaysia (Theodore Nguyen, Pexels)]

👆 Online Safety Act 2025

Malaysia’s push toward a social media ban for under-16s led the government to mandate eKYC (electronic Know Your Customer) checks for social media platforms. It has since stepped back and has not followed Australia with a full ban.

  • Malaysia is strict about protecting its monarchs. Content deemed offensive to the royal institution can get a user charged under the Sedition Act 1948 (Act 15).

Australia

[Photo: Australia (Kevin Kobal, Pexels)]

Australia is the first country in the world to have a blanket social media ban for under-16s. This has caused mixed reactions among parents, teenagers, and children.

Social Media Age Limits

The US Children’s Online Privacy Protection Act (COPPA) requires parental consent before collecting data from children under 13, which is why most social media platforms set 13 as their minimum age. Australia broke with this de facto standard by setting a mandatory minimum age of 16: users must pass a biometric scan or photograph their driving licence or ID to access social media. Other countries (e.g., Malaysia, Spain, and the UK) have since followed. Despite claims that this is for “protecting children,” it introduces new risks.

Risks

Data Leaks

A data leak containing sensitive personal information (e.g., email addresses, IP addresses, passwords) is dangerous enough. It gets far riskier when highly sensitive documents such as your ID or driving licence are involved; these are a goldmine for malicious actors. Breaches like this are already happening, including Discord’s, which leaked more than 70,000 photos of user IDs.

Grooming Exposure

Teen Group & Adult Group: sorting accounts into teenager and adult categories may seem sensible at first. However, adults can still message teenagers. [Please note that this isn’t fully implemented and couldn’t be completely tested.]

Doxxing

Users under 16 may use a parent’s account to access social media. That account may contain personal information, such as the parent’s photo and full name, that could be used for Open Source Intelligence (OSINT). This may lead to malicious actors leaking the family’s personal information.

User Experience

User experience on social media platforms and games matters. People use these platforms every day to express themselves, scroll through endless content, or get reliable information from various sources. Vulnerable groups in particular rely on them for free expression. When they are met with biometric verification and know their posts can be traced back to them, that freedom of speech is effectively chilled.

Platform-Wide Age-Gating

We’re already seeing platform-wide age gating on platforms such as Roblox and Discord, meant to protect children from adult content. This, however, degrades the user experience: Roblox users can’t access chat without proper verification, and Discord users are placed in a “teen-by-default” setting until they can verify their age.

Data Retention

Platforms utilise various methods to verify age and biometrics. They also store your data for a set period before deleting it automatically, unless law enforcement requests that it be retained.

| Platform | Data Retention |
| --- | --- |
| Discord | Data is stored for 7 days for UK users. |
| k-ID | Only verified/unverified status is stored. |
| Persona | Biometric data; might be disclosed to third-party providers. |
| Google | Stored until deletion. |
| Meta | Stored up to 1 year. |
| Roblox | Stored up to 30 days. |
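To make the table concrete, here is a toy sketch of how a retention window like those above might be enforced. The periods are taken from the table, but the function names, keys, and logic are illustrative assumptions, not any platform’s actual implementation.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention periods from the table above (not real code
# from any platform).
RETENTION = {
    "discord_uk": timedelta(days=7),
    "roblox": timedelta(days=30),
    "meta": timedelta(days=365),
}

def should_delete(platform: str, collected_at: datetime,
                  now: datetime, legal_hold: bool = False) -> bool:
    """A record is purged once its retention period has elapsed,
    unless law enforcement has placed a hold on it."""
    if legal_hold:
        return False
    return now - collected_at > RETENTION[platform]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
collected = datetime(2025, 5, 1, tzinfo=timezone.utc)
print(should_delete("roblox", collected, now))                       # 31 days elapsed -> True
print(should_delete("discord_uk", collected, now, legal_hold=True))  # held -> False
```

The point of the sketch is the exception path: the automatic deletion the platforms advertise is conditional, and a legal hold keeps your ID photo on file indefinitely.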

Proper Resources

Other internet users should not suffer because some parents are worried. Some users are not comfortable uploading their IDs due to the risks involved. Instead of a blanket ban, it is better to educate parents and youths to ensure a safer internet for children.

Parental Controls

Each and every parent has their own way to control their children. Parental controls are a great way to ensure that your kids are safe.

  • Android: Google Family Link
  • Apple: Screen Time

DNS Blocking

You can utilise a Domain Name System (DNS) provider that blocks harmful content.
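Filtering providers do this at their resolvers: a query for a domain on their blocklist simply gets no usable answer. The sketch below mimics that decision with a made-up blocklist and domains; the `resolve` function and every name in it are illustrative assumptions, not a real provider’s API.

```python
from typing import Optional

# Made-up blocklist for illustration only.
BLOCKLIST = {"adult-site.example", "gambling.example"}

def resolve(domain: str) -> Optional[str]:
    """Return a placeholder address, or None when the domain (or any
    parent zone) is blocklisted -- mimicking the NXDOMAIN / 0.0.0.0
    reply a filtering resolver sends back."""
    labels = domain.lower().rstrip(".").split(".")
    # Walk up the zone hierarchy so subdomains of a blocked
    # domain (e.g. videos.adult-site.example) are blocked too.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return None
    return "93.184.216.34"  # placeholder answer for allowed domains

print(resolve("blog.example.org"))           # allowed -> placeholder address
print(resolve("videos.adult-site.example"))  # blocked -> None
```

In practice you don’t run this yourself: you point your router or device at a family-filtering resolver, and every device on the network inherits the blocking without uploading any ID.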