Fortune
David Meyer

Parents, A.I., and task forces: How to protect kids online

(Credit: Brook Mitchell/The Sydney Morning Herald via Getty Images)

Earlier this week, I asked you all to suggest solutions to the thorny problem of protecting kids on social media. Thoughtful thoughts ensued, and I’ll get to them in a moment, but first, a trio of congratulations:

— to Nvidia, which just saw its valuation leap by 25% on the back of an analyst-shocking, A.I.-fueled sales forecast. A $940 billion market cap is nothing to be sneezed at, though it does somewhat undermine CEO Jensen Huang’s assertion that controls on exports to China mean the company is reduced to working “with our hands tied behind our back.”

— to the EU’s General Data Protection Regulation (GDPR), which is five years old today. It only took half a decade to grow real teeth.

— and to the solar industry, which is projected to rake in more investment than oil this year for the first time. Now that’s a milestone we can all celebrate.

So, protecting kids. 

Data Sheet reader K.E. writes: “I suspect the major problem with excessive time spent on social media is that it leads to less time interacting in real life with family and friends, and this in turn is the biggest problem.” I think there’s a lot of truth to this, though on the other hand, it depends on who your family and friends are.

Here’s S.P.: “All social media accounts for anyone under the age of 16 (legal driving age) must be set up and jointly accessible by a verified parent or legal guardian. This means that all contact/friends, messages and activity sent, received and seen by the minor can be viewed, moderated and edited by the adult. In essence, they can join conversations to correct untruths, intervene in bullying and potential predatory behavior, unfriend or unfollow connections, and have ongoing interaction with their child about what they’re seeing.”

Again, it depends on who your parent or guardian is. If I put myself in the shoes of a kid whose dad disapproves of their LGBTQ identity, for example, the last thing I’d want is to have him policing my interactions as I grow into adulthood. I’m also concerned about the idea of encouraging kids to think constant surveillance is acceptable. See, this is why it’s such a tricky subject!

R.G.: “My recommendation is for a public/private task force to be formed with a six-month deadline to examine the problem and come out with a bipartisan set of solutions. The task force ought to include individuals from the federal government, state education professionals, and business and community leaders. The maximum number of participants should be 25.”

As it happens, on the same day the Surgeon General issued his advisory on social media’s effects on kids’ mental health, the White House announced an interagency task force on the issue. It’s not quite what R.G. describes, but it will at least consult those other experts as it decides what needs to be done.

And finally, T.D. sent over a detailed proposal that is sadly too long to reproduce in full, but that can be reasonably summarized thus: Use A.I. to spot and block hate speech and bullying as it’s being authored; impose cooling-off periods on accounts that pass a certain threshold of content featuring “disdain, misinformation and grandeur”; and penalize platforms for allowing too much prominent content featuring the aforementioned three sins. 

Sample quote: “I'm not suggesting that social media companies should be liable for the things users post on their platforms. But, do they have an obligation to promote disdainful, misleading, and grandiose content in their feeds? Absolutely not…They're just protecting the algorithms that keep people looking at ads. So, to that point, I would suggest a penalty enforced by an entity like the FCC that is based on similar thresholds described above but specifically for the platform's most heavily promoted/viewed content.”
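
For the technically curious, here is roughly what T.D.’s first two mechanisms could look like in code. This is purely a sketch of the idea in Python; the classifier stub, the score names, and the numeric thresholds are my own hypothetical placeholders, not anything a real platform runs.

# A toy version of T.D.'s proposal. Everything here (the classifier
# stub, the score names, the thresholds) is a hypothetical
# illustration, not any platform's actual system.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Account:
    user_id: str
    flagged_posts: int = 0                     # running count of flagged posts
    cooling_off_until: datetime | None = None  # set once the threshold is crossed


def classify(text: str) -> dict[str, float]:
    """Stand-in for the A.I. model scoring the three 'sins' T.D. names.

    A real system would call a trained classifier; this stub returns
    neutral scores just so the sketch runs.
    """
    return {"disdain": 0.0, "misinformation": 0.0, "grandeur": 0.0}


FLAG_SCORE = 0.8        # per-post score above which a post counts as flagged
COOLING_THRESHOLD = 5   # flagged posts that trigger a cooling-off period
COOLING_PERIOD = timedelta(days=3)


def review_post(account: Account, text: str, now: datetime) -> bool:
    """Return True if the post may publish, False if it is blocked."""
    # Accounts serving a cooling-off period can't post at all.
    if account.cooling_off_until and now < account.cooling_off_until:
        return False

    scores = classify(text)
    if max(scores.values()) >= FLAG_SCORE:
        account.flagged_posts += 1
        # Crossing the threshold starts the cooling-off clock.
        if account.flagged_posts >= COOLING_THRESHOLD:
            account.cooling_off_until = now + COOLING_PERIOD
            account.flagged_posts = 0
        return False    # block the offending post as it's being authored

    return True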

In terms of effectiveness, the reliability of the A.I. is the wild card here, and over-blocking would be a particular risk if penalties are involved—on which note, I can see potential First Amendment challenges on the horizon. That said, if social-media companies face official pressure to fix the problem, I suspect this is the sort of result we may see.

Thanks for your suggestions! More news below.

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

David Meyer

Data Sheet’s daily news section was written and curated by Andrea Guzman.
