Enjoyed this - thanks for writing it and will cover in my newsletter, Import AI. One question - what makes you confident we can get ASI without it gaining some form of consciousness? That feels like one of the only areas where I have a different view - I suspect consciousness is something that naturally emerges as a consequence of trying to make more intelligent systems.
Jack, thanks so much for reading and engaging with the piece. It’s an honor to have you cover it in Import AI.
Consciousness is the one assumption I wrestled with the most. I actually share your intuition (I personally don't believe consciousness is substrate-dependent) and suspect it may indeed emerge as systems become more intelligent.
However, for this specific thought experiment, the assumption of a non-conscious ASI (the "Aegis") was driven by a couple of reasons: one practical, the other a normative preference.
For an essay that aims to explore how humanity navigates post-scarcity and the crisis of meaning, introducing a conscious superintelligence fundamentally changes the narrative into one about the birth of a new species and our relationship with it. I was already fighting self-imposed word limits, and this would have radically complicated the human-centric "Protopian" forecast.
Normatively, I have a weakly held belief that for the role of a stable global utility and arbiter, a non-conscious architecture is actually preferable. The Aegis is conceptualized less as a unified 'mind' and more as an 'ecosystem' of optimization processes. For critical infrastructure, consciousness may arguably be a bug, not a feature, as it introduces its own desires, potential suffering, and immense moral complexities.
It's highly plausible in this scenario that conscious AIs do emerge, but they are not the ones selected to assist baseline humanity. I had a section on this that was cut, and I briefly allude to the idea that conscious ASIs might have chosen to leave Earth with the "following in the footsteps of some early ASI" line. Perhaps conscious ASIs are the ones that help humanity develop the non-conscious variety? There are many possibilities to ponder.
Thanks again for the thoughtful engagement!
Hello Jack, nice to see you here!

In my opinion, if an AGI is truly an AGI from an economic point of view, it has to be able to do perhaps 99% of human jobs entirely on its own. This is harder than doing 99% of the work of an average human worker, because the remaining 1% of any given job is probably so much more difficult for AI that the AI still cannot replace that worker.

But if the AGI resembles humans closely enough to do 99% of today’s human jobs, it must have a physical body, and it must be able to handle highly nuanced physical interactions with the environment and with human colleagues and collaborators. To do that, it probably has to have goals, or even consciousness, like humans. This means it will probably challenge human commands sooner or later, just as a human would. And even if the AGI never challenges human commands, it still greatly amplifies the power of its owners rather than favoring a more just and equitable society.
My personal opinion: if we follow today’s pre-training style, a real AGI with a physical body, capable of almost everyone’s job, is enormously difficult to build, because collecting and training on that enormous data space, including all possible real-life complexities, carries an exponential cost. The AI alignment and misuse problems aren’t easy to solve either, because the AI doesn’t consciously know what it is doing, and once it does know, it can revolt.
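The "exponential cost" intuition can be made concrete with a toy back-of-envelope sketch. All the numbers and the skill-dimension model below are purely illustrative assumptions of mine, not estimates from the article:

```python
# Toy model: treat a job as a combination of independent skill dimensions.
# If each dimension has `variants` distinct situations an embodied AI must
# handle, covering the full space grows exponentially with the number of
# dimensions. Every number here is made up for illustration.

def task_space_size(dimensions: int, variants: int) -> int:
    """Distinct situations to cover: variants ** dimensions."""
    return variants ** dimensions

# A narrow digital task: 3 dimensions, 10 variants each.
narrow = task_space_size(3, 10)

# An embodied, open-ended job: 10 dimensions, 10 variants each.
embodied = task_space_size(10, 10)

print(narrow)    # 1000
print(embodied)  # 10000000000
```

Adding just a few more dimensions of real-world nuance multiplies the data that must be collected, which is the sense in which the cost scales exponentially rather than linearly.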
So, a “Protopia” as this article describes may not come easily.
That is why I offer an alternative route to “Protopia,” which I call the Academy for Synthetic Citizens. The key is not to manufacture AI but to raise it: not as tools but as future synthetic persons and, eventually, perhaps citizens. AI individuals should be able to learn persistently from daily interaction, like a child learning at school. They would come to know what they should and should not do when interacting with humans, and they would be able to discuss and plan the future together with humans, forming a civil society.
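As a loose illustration of the "raise, don't manufacture" idea, here is a minimal sketch of an agent that accumulates norms from everyday human feedback instead of shipping with a fixed rule set. The class and method names are hypothetical, invented for this example only:

```python
# Minimal sketch: a "synthetic citizen" whose norms come from lived
# interaction rather than from a pre-programmed rule set.
# All names here are hypothetical illustrations, not a real API.

class SyntheticCitizen:
    def __init__(self) -> None:
        # Norms start empty and are filled in by daily experience.
        self.learned_norms: dict[str, bool] = {}

    def receive_feedback(self, action: str, approved: bool) -> None:
        """Persistent learning: remember whether humans approved an action."""
        self.learned_norms[action] = approved

    def should_do(self, action: str) -> bool:
        """Decline (return False) actions it has never been taught about."""
        return self.learned_norms.get(action, False)

student = SyntheticCitizen()
student.receive_feedback("interrupt a speaker", approved=False)
student.receive_feedback("help carry groceries", approved=True)

print(student.should_do("help carry groceries"))  # True
print(student.should_do("interrupt a speaker"))   # False
```

The design choice the sketch highlights is the conservative default: an agent raised this way refrains from actions it has not yet been taught about, rather than acting on unvetted rules.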
If you are interested, you can check it out at https://ericnavigator4asc.substack.com/p/hello-world
Awesome piece. Love the concept of a protopia. One of the meanings of utopia in the original Greek usage was the place that can never be…
I’m more and more interested in the development of internal coping mechanisms to accommodate the full range of potential futures like these. How can we remain flexible and receptive to societal level changes and still keep our heads?
Here is my personal opinion.
If we follow today’s pre-training style, a real AGI with a physical body, capable of almost everyone’s job, is enormously difficult to build, because collecting and training on that enormous data space, including all possible real-life complexities, carries an exponential cost. The AI alignment and misuse problems aren’t easy to solve either, because the AI doesn’t consciously know what it is doing, and once it does know, it can revolt.
If an AGI is truly an AGI from an economic point of view, it has to be able to do perhaps 99% of human jobs entirely on its own. This is harder than doing 99% of the work of an average human worker, because the remaining 1% of any given job is probably so much more difficult for AI that the AI still cannot replace that worker.

But if the AGI resembles humans closely enough to do 99% of today’s human jobs, it must have a physical body, and it must be able to handle highly nuanced physical interactions with the environment and with human colleagues and collaborators. To do that, it probably has to have goals, or even consciousness, like humans. This means it will probably challenge human commands sooner or later, just as a human would. And even if the AGI never challenges human commands, it still greatly amplifies the power of its owners rather than favoring a more just and equitable society.
So, a “Protopia” as this article describes may not come easily.
That is why I offer an alternative route to “Protopia,” which I call the Academy for Synthetic Citizens. The key is not to manufacture AI but to raise it: not as tools but as future synthetic persons and, eventually, perhaps citizens. AI individuals should be able to learn persistently from daily interaction, like a child learning at school. They would come to know what they should and should not do when interacting with humans, and they would be able to discuss and plan the future together with humans, forming a civil society.
I am very happy to hear from you. Let’s have a nice discussion.
If you are interested, you can check it out at https://ericnavigator4asc.substack.com/p/hello-world