Ten Challenges

by Joshua Walker, CodeX Co-Founder and Fellow

In the context of our 20+ year anniversary at CodeX—CONGRATS!—Roland and I got to talking about global challenges we could take on: problems to be solved over the NEXT 20 years (or—ideally but not realistically—over the next two . . . years or months). In essence, this is our way of talking to the future, for the succor of today.

The set of Ten CodeX Challenges was inspired by David Hilbert’s challenges to the mathematics community. In 1900, Hilbert ultimately published 23 problems for mathematicians worldwide. Not all have been solved, as of this afternoon, and some only “partially.” Hilbert’s precision varied. But his questions / problems served to catalyze and focus a vast enterprise over time, with both abstract (i.e., fun) and humanity-serving advances. We are not Hilbert, but we can be inspired and focused by his example—and his questions. And since we are quintessential Silicon Valley types: this is just a hack, to be regularly iterated. Our hope is to offer prizes for solved problems in the future. (No guarantees.) And let Roland or me know if you are interested in sponsoring and “naming” one. [FN: This is a chance to foment a highly specific, measurable human good across time. Few people get an opportunity to do that.] And/or even “nominating” / crystallizing a specific challenge. A great question, with a crystalline analysis, can come from anywhere (especially from a CodeXer). And if people are incented, you can change the world.

While CodeX deals with formal reasoning and mathematics, we of course ALSO deal with helping people—in very direct ways. This inevitably means that both analysis and resolution / “proof” are murkier. But we can address arbitrary human factors with arbitrary human decisors: we can use committees and prize panels to simply vote—and simply try to be the most just, reasonable, and empirically-based decisors we can be. But ideally, we are specific enough for objective success.

PROBLEM ONE: Fractal Communities

One of my teachers, Professor Cass Sunstein, commented in 1999 [ish] that the Internet was splintering people and allowing them to grow in extremis, within increasingly small echo chambers. I pooh-poohed his comment; I felt that people “had the right” to choose any media they might desire. There is something to this. People should be able to choose their information streams, or media communities. But, since 1999, this fractal “echo chamber” effect has compounded to the point that it threatens societies . . . ALL of them. That’s a startling fact. How do we preserve the effective space for debate when we can silence other voices in our horizon at the click of a button—the way scientists can silence genes? This turns us into giant, empowered babies incapable of thought; and will eventually mutate us into non-sentient troglodyte hordes. (Okay. Maybe the “troglodyte” characterization was a bit much. But we are definitely retreating into a “Plato’s Cave”—one that closes in ineluctably through (a) our own design / local wont + (b) algorithmic self-similarity functions.)

Hmmm. On second thought: maybe “troglodyte” is too . . . complimentary a term for our current societal vector. And, instead of “Plato’s Cave,” perhaps our growing intellectual laziness should be termed “Potato’s Cave.”

This isn’t the fault of companies per se. It’s our own laziness + information economies of scale: just as abundant sugar made our genetic propensity for seeking sweet foods (adaptively beneficial back in our “OG” troglodyte days) potentially lethal in modern times of sweet abundance. Except . . . media self-similarity today is much more like a mainline injection of heroin / cocaine, with zero inconvenience. (Why would anyone choose the difficult pain of debate and self-querying when self-similarity / chatbot toadying is SO much more accessible?) I even have trouble escaping my own app’s music self-similarity DJ! Help me escape, AI!

That’s it, exactly: We need AI to escape AI (or: the algorithmic cave).

The challenge is to develop a mind-expanding, debate-encouraging app for one million people. One that is viral. One that improves democracy (or Republic . . . or whatever your ideal governance sitch may be). The people can be anywhere. But you have to achieve critical mass within at least one country (yes: Luxembourg or Monaco is ok; but points for scaling).

By “viral” we mean that you didn’t get a million bona fide human users through a government mandate.
You can’t use fraud. (Generally: Don’t do illegal stuff. Obv.)
If you use monetary incentives, you have to do it in a way that will scale to one billion people.
You can’t use a legal (read: governmental) mandate.

Now, one might say: how is such a thing legal, computational, or any of the above? Well: it is legal, but in the negative. My original objection to Sunstein’s Dilemma was that the implied remedy was to impose some legal constraint on people’s psychology / opportunity. That’s sad. Why can’t we find a way around our algorithmically-driven tomfoolery without resort to legal force?

Lawyers aren’t ALL troglodytes. 🙂 Sometimes our best use is helping people avoid legal conflicts at, or before, inception.

Law is to society as DNA is to known life (see “On Legal AI” (2019)): informationally, and in terms of forming the “societal proteins” that enable us to achieve great things (peace not least among them). Let’s navigate the legal-computational network to do some good. Give people the “right,” plus the incentive, to be better and more robust thinkers and citizens.