Last month, Instagram’s owner, Meta, announced plans to deploy an AI-powered rating system to prevent teens from seeing content that is not suitable for them. The move came after an Ipsos survey found that 96% of parents favored a more restrictive ‘Limited Content’ setting on the platform.
Meta decided to adopt the film rating system owned by the Motion Picture Association (MPA), but the trade association did not receive the announcement well. The MPA stated that Meta never asked whether it could use the term PG-13. Days after Meta’s announcement, the MPA served Instagram’s owner with a cease-and-desist letter. The MPA’s X account shared several articles covering the topic and issued statements confirming that Meta does not have the MPA’s blessing to use its rating system on Instagram.
Key takeaways
- Meta wanted to apply the MPA’s film rating term PG-13 to Instagram’s teen filters, but the MPA blocked the plan. It reacted with a post on X the same day Meta shared its intention and sent the social media company a cease-and-desist letter.
- Meta, Hollywood, and 96% of parents surveyed by Ipsos support stricter content limits for teens. This support drives the push for a new “Limited Content” setting on Instagram.
- The MPA’s branding clash with Meta reflects Hollywood’s belief that AI cannot fully replace human review. The MPA stated that its film ratings rely on real people watching full movies, whereas Meta planned to rely on AI tools.
- Parents must be proactive in protecting their child’s privacy and security. Meta’s tools (such as Teen Accounts) help, but they aren’t perfect. Kids under 13 shouldn’t be on the app, and families should utilize parental controls and supervision.
Why was PG-13 likely not the right name in the first place?
A simple trademark search shows that the MPA owns the PG-13 trademark. The trade association uses it in its film rating system alongside other ratings that guide parents on whether movies are appropriate for kids. In the film world, PG-13 signals that some material may be inappropriate for children under 13.
However, Instagram wanted to use the very same term to protect teens (youngsters aged 13-19). Meta hoped the new setting would prevent teens from accessing content that is inappropriate for their age. The MPA opposed the decision, arguing that Meta cannot live up to the expectation of applying the same standards to billions of social media posts. The MPA notes that real people, not AI, review every rated movie, which would not be the case on Instagram.
Will Instagram follow through on its intentions, despite receiving a cease-and-desist letter?
MPA has publicly applauded Meta’s decision to provide additional protection for children on social media. However, MPA stated it does not want to see Meta exploit the trust in MPA’s film rating system for Meta’s own benefit.
There is no doubt that Meta’s Instagram and other platforms need adequate moderation. Although Meta has quietly removed the PG-13 references from its posts, Instagram is still rolling out the new safety features. The company is launching these improvements in the U.S., UK, Australia, and Canada, with plans to expand them globally.
The Ipsos survey, funded by Meta, confirmed that parents would gladly accept filters that prevent their kids from seeing inappropriate content on their social media timelines, and the new setting would make that possible. Even if the filter does not use the PG-13 branding, giving parents a way to keep kids from seeing inappropriate content should be a must.
Why was a PG-13-like filter not implemented already?
Facebook has been around for more than two decades, and one would think that safeguards would already be in place to ensure kids cannot access content not intended for them. However, that is not quite the case. Meta released the ‘Teen Accounts’ feature just last year, and it is not without imperfections. Even Meta admits the feature is not perfect and that teens may occasionally see something inappropriate on Instagram, including strong language, sexually suggestive content, and risky stunts.
The new ‘Limited Content’ setting will try to limit such material and prevent teens from searching for and seeing other sensitive content, including posts related to suicide, self-harm, drug and alcohol use, and eating disorders.
What else can parents do to prevent kids from seeing inappropriate content?
One of the most important things for parents to keep in mind is that Meta’s terms and conditions prohibit kids under 13 from having profiles on Instagram or Facebook. Children under 13 should not be on social media at all, and if they use platforms such as Messenger Kids, parents should closely supervise their activity.
Use Parental Controls and Supervision Effectively
Kids can certainly be creative and exploit loopholes in social media networks such as Instagram and Facebook to find their way in. Apart from not allowing kids to have social media profiles and using tools such as Meta’s Teen Accounts feature, parents should also consider parental controls that often come as part of standard antivirus protection. Relying on big corporations to safeguard children has proven error-prone; responsibility for your children’s safety rests predominantly in your hands, and proper parental controls are undoubtedly a step toward safer web browsing and a safer social media presence.
Meta’s social media platforms, including Instagram, certainly have room for improvement when it comes to child safety. Although the MPA acted quickly in serving Meta with a cease-and-desist letter, both companies agree that child and teen safety should be a top priority. While Meta might be forced to rename its system and has already quietly removed the PG-13 label, the fact that it is doing more to prevent teens from accessing inappropriate content is admirable. Still, parents need to remember that a child’s safety should never be left entirely to big tech and major entertainment conglomerates; those companies are sadly known to have security and privacy loopholes that can be exploited.