In February of this year, Sam Altman, CEO of OpenAI, mentioned seeking $7 trillion to fund the hardware necessary to build Artificial General Intelligence. Since then he has stated that his comments have been misinterpreted, and while he isn’t personally seeking funding for that amount, $7 trillion is the approximate cost to manufacture and distribute the high-powered chips on which AGI will need to run.
People swallow this gargantuan number with a feeling of inevitability. AI is the future that developers tell us we want, even though no one knows what it actually is or what it will actually do. All we are promised is godlike power to process information, and with it, our best hope of moving humanity forward. Even if it doesn’t deliver that, it is almost guaranteed to generate massive, perhaps unending, returns for investors.
If we want progress out of AI, we first need to define what progress is. Some say progress is capital gains. Others claim it is novelty. But people only want capital gains or novelty because they believe those things will provide an enriched human experience. Joseph Campbell, a professor and expert on literature, mythology, and religion, said that even more than meaning, people want the experience of truly being alive. I’m leery of the idea that a synthetic mind will give us our best shot at fully actualized biology. But there are some things I think AI could work out that would be massively beneficial for society, like curing diseases and ending world hunger.
My fear is that instead of focusing on these things, AI will overwhelmingly be used to put people on the treadmills of their own self-satisfaction. Since the beginning of the very recent Information Age, our technological advances have done a great job of helping rich-enough people cope with mundane or difficult tasks, and a terrible job of making quality solutions affordable for people in need. When history shows that technology widens income gaps and money doesn’t trickle down, we have to wonder: who will AI benefit, and in what way? I have to assume it will largely benefit the rich, because as a culture, we don’t value generosity.
We need institutional generosity more than we need AI. Our economic values are sure to be coded into this unfeeling mind that observes our obsession with returns. If we can learn to value the things that will actually make us whole, then we can attain real progress. Until then, all our advancements will offer only short-term solutions to deeper problems.
I find it very interesting that the people funding AI are seeking $7 trillion with certainty that they will find it, while the World Food Programme states that it is in a “crippling and historic funding crisis.” I would rather see someone ask for $7 trillion to solve food security and affordable housing worldwide. Gray Group International reports that world hunger could be solved with $1.6 trillion over six years, less than a quarter of the sum being floated for AI chips.
I truly believe that we have the power to solve the world’s largest problems if we care enough. Building AI won’t make us care more, and the way to security isn’t in stockpiling wealth. It’s in sacrificial love.
For now, the closest I can get to raising $7 trillion is writing articles like this one. Maybe they make a difference. But I’m just a dreamer. How much is that worth?