Why Good News Goes Nowhere
Last month, a clinical trial achieved total protection from HIV, and almost nobody noticed. Algorithms are very good at giving us what we want to see, not necessarily what we want to know.
In the last month, great wins for the world went unnoticed:
A large clinical trial showed that a twice-yearly injection fully prevented HIV.
South Asia’s child mortality rate dropped 21%.
Chad eliminated its first tropical disease (sleeping sickness).
Sierra Leone banned child marriage.
I only learned about these by subscribing to a good-news newsletter.
If this happened during wartime, it would be as if WWII newspapers "just weren't interested in Allied victories." That would have devastated morale and people's sense of safety.
This hasn't become a bigger issue yet because the median voter is in their 50s and watches six hours of TV daily. Mainstream TV still reports some good news, and viewers don't switch off when it airs.
People under 30, however, are more likely to get their news online, especially from social media and news sites, where every story competes individually for attention. The good news that would fill an empty slot on evening TV never makes it past the algorithms.
The same dynamic hits local news. People click on sensational stories about racism or wokeness from distant towns while missing changes in their own, like a new job training center or a free public library program.
It's like being captivated by a distant storm on TV while ignoring the sunny day outside your window.
This focus on the worst news isn’t just bad for people’s stress levels; it contributes to polarization because it creates the impression that the world is in dire straits.
Even negative news without clear partisan drivers, like fentanyl overdoses, struggles to get attention. Media outlets tried and failed to make it a bigger headline; CNN even built a dedicated splash page about fentanyl that never took off. Without a partisan angle, the story is less compelling to share.
Algorithms just react to what we click on, and we click because our brains want to protect us from harm (and to signal to our friends that we care).
But this becomes problematic when over half of 16-to-25-year-olds believe the world is doomed because of climate change.
So, how do we fix this? The answer might lie in how we handle AI and algorithms. At first glance, AI seems poised to make things worse by enhancing attention-grabbing algorithms. But what if AI could balance our consumption of content?
GPT-4 is pretty good at providing a balanced response when asked whether climate change will kill us all.
Social media recommendation algorithms could instead be trained to value positive, local, or less-partisan content. As Facebook's algorithms work today, they won't show users content that disagrees with their views, even from sources they have subscribed to.
This shift might mean people spend less time on these platforms. But even a 5-10% drop wouldn’t kill social media companies, considering the average user spends 96 minutes a day on TikTok, opening it 19 times daily.
Generative AI can be trained against nuanced guidelines. Anthropic's Constitutional AI builds principles, drawn from sources like the UN Declaration of Human Rights and Apple's terms of service, into training from the beginning, unlike methods that only filter harmful content after the fact.
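To make the contrast concrete, here is a minimal sketch of the *shape* of a Constitutional-AI-style critique-and-revision loop. This is not Anthropic's actual implementation; `model`, `PRINCIPLES`, and `constitutional_revision` are all hypothetical stand-ins, with a placeholder where a real language model would be called.

```python
# Illustrative sketch (not Anthropic's real system): principles are applied
# during generation via critique-and-revision, rather than filtering
# harmful output after the fact.

PRINCIPLES = [
    "Choose the response least likely to be harmful or misleading.",
    "Prefer responses consistent with human rights principles.",
]

def model(prompt: str) -> str:
    # Placeholder: a real pipeline would call a language model here.
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(prompt: str) -> str:
    """Generate a response, then critique and revise it against each principle."""
    response = model(prompt)
    for principle in PRINCIPLES:
        critique = model(f"Critique this response against: {principle}\n{response}")
        response = model(f"Revise the response to address this critique:\n{critique}")
    return response
```

The key structural point is that the guidelines sit inside the generation loop itself, so they shape every output rather than acting as a post-hoc censor.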
Imagine Facebook and TikTok using a similar approach. While not directly using Constitutional AI (which is for generative systems), social media companies could still adopt transparent guidelines for their recommendation algorithms to have nuanced content prioritization, not solely the most addictive stuff.
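What would guideline-based content prioritization look like mechanically? Here is a minimal sketch, assuming a feed ranker that blends a raw engagement prediction with transparent, tunable adjustments. All names (`Post`, `guideline_score`, the specific weights) are hypothetical illustrations, not any platform's actual system.

```python
# Hypothetical sketch: rank a feed on engagement PLUS published guidelines,
# instead of on engagement alone. Weights are arbitrary for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float   # predicted click/share probability, 0..1
    is_local: bool
    sentiment: float    # -1 (negative) .. +1 (positive)
    partisan: float     # 0 (neutral) .. 1 (highly partisan)

def guideline_score(post: Post) -> float:
    """Adjust raw engagement with transparent, tunable guideline weights."""
    score = post.engagement
    if post.is_local:
        score += 0.10                          # guideline: surface local news
    score += 0.05 * max(post.sentiment, 0.0)   # guideline: reward positive stories
    score -= 0.15 * post.partisan              # guideline: damp partisan bait
    return score

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=guideline_score, reverse=True)
```

Under these example weights, a moderately engaging local good-news story can outrank a slightly more clickable piece of partisan outrage, which is exactly the kind of trade-off a published guideline would make auditable.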
Adopting transparent guidelines can help social media companies reduce health risks, build trust, and avoid the fate of the tobacco industry.



Interesting suggestion @Abigail Olvera , about using AI to make “news” more balanced.
Our brains are wired to view the present in a negative light…we actively seek out negative news. At the same time, our brains suppress negative memories.
This means the present is viewed in the negative while the past appears rose-colored. I call this the "reality distortion field." It's a very serious problem for which I do not have a clear solution: https://www.lianeon.org/p/progress-is-counterintuitive