My name is Michael Schmatz, and Finding Protopia is my blog. This is mainly a place for me to think out loud.

This page is about this website. For information about me, see the personal page.

Why Finding Protopia?

Protopia is a term coined by Kevin Kelly in his book The Inevitable1 to describe where he believes technology is taking our world. He believes that rather than heading toward a perfect (yet stagnant) utopia or a terrible (but unsustainable) dystopia, we are heading toward a gradually improving protopia. In a protopia, subtle and incremental progress leads to a future that is better than the present, on the whole: not a lot better day to day, but better in more ways than it is worse.

A protopia is a likely future, but it is far from certain. Many civilizations in the past have fallen from their heights. 2020 has shown that our world can be shaken by random events, like a pandemic that’s fairly mild by historical standards. Our world civilization is not immune to moving backwards or collapsing.

There are actions that individuals can take that will make a protopian future more likely. These include the development of new technologies, improvements to culture, political changes, and many other actions to make the world tomorrow better than it is today. Individuals can either directly contribute or fund efforts in those areas.

I’m currently only contributing to these efforts philanthropically. However, I also plan to pivot my career towards working on these areas by 2025. I’m educating myself in my free time in math, physics, and Chinese, as I think these will be useful skills for future contributions towards protopia.

How could a protopia not occur?

Existential Risk

Existential risks are risks that threaten the destruction of humanity’s long-term potential.2 These can manifest through human extinction, an unrecoverable collapse, or a permanent dystopia. Toby Ord argues that preserving and protecting the future potential of humanity is good because it would allow our descendants to fulfill that potential and realize one of the best possible futures for humanity. I believe preventing the destruction of humanity’s long-term potential is an axiomatic good.3 For a stirring vision of what such a future could be like, I recommend Bostrom’s Letter from Utopia.

Like Ord, I’m principally concerned about the existential risk of runaway AI. The primary reason I’m concerned is that there are enormous incentives, both economic and moral, to develop that technology. A strong argument can also be made that developing strong AI is one way to mitigate most other existential risks, making its development even more likely.

Stagnation

We also risk futures in which progress is extremely slow, nonexistent, or negative. This stagnation could be economic, scientific, moral, or otherwise. I generally accept Thiel’s notions that stagnation in these areas would be bad for a number of reasons and that growth is generally slowing down.4 Preventing or reducing this stagnation would enable a better future. In 2020, we are starting to feel the societal strain that comes from a lack of growth and opportunity for many, making this risk particularly topical.

  1. Kelly, K. (2017). The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future. United Kingdom: Penguin Books. ↩︎

  2. Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. United Kingdom: Bloomsbury Publishing. ↩︎

  3. Ord devotes considerable space to explaining why human extinction is bad. There are some philosophical arguments as to why the demise of humanity would be neutral or good. I won’t dig into these, but I generally believe the demise of humanity would be one of the worst things that could happen. ↩︎

  4. Ross Douthat has a good examination of these issues in his book The Decadent Society. ↩︎