On possible futures Pt. 2 - Life and Artificial Superintelligence

The objective of this post is to explore the, for now theoretical, relationship and differences between evolutionary life as a whole and its artificial equivalent, along with some related topics. We will set up thought experiments built on a projection of a world in which Artificial Superintelligence (ASI) agents have come into existence. From that premise, we will try to draw some philosophical questions and conclusions about this hypothetical future. We'll try to make this post as self-contained and self-explanatory as possible, and we will approach the topic from a broad view, taking the perspective of life as a whole rather than just human civilization.


Life

Humans have perhaps always longed for longevity and immortality. Some living creatures on Earth even seem to have unlocked the secrets to immortality. But the immortality of an individual being is not the only way to look at this; we can also take a step back and consider a bigger picture. One such perspective is that of life as a whole, culminating in intelligence. If we talk about biological life from a broader point of view, life on Earth, life with a single point of ancestry, we can begin to think of it as one continuous entity. We can refer to this entity, which is, as far as we know, confined to Earth's biosphere, as Earth's biota. From the first living organisms probing and adjusting to the world around them to you reading this: a single uninterrupted entity of life. Through simple cell division and genetic adjustments, life has unpacked and branched out, creating specializations and nearly inextricable networks whose components coexist and support ever more complex lifeforms. We slowly begin to see the objectives that instances within our biota share: survive, diversify, adapt, and grow. From this perspective, we can view humans as only one of the spearpoints of life's extrapolation from those early stages.


Intelligence

While it is hard to be sure what purpose life as a whole has, I always remember Carl Sagan's perspective on it: "We Are A Way For The Cosmos To Know Itself." At some point before the evolution of humans, life arrived at an emergent property: intelligence. Life has been aggregating smarts ever since, with humans being one of the species that relies on it the most. With intelligence, we have also gained the skill of creation, in the sense of producing ever newer and more complex things from available materials. We have used this skill to master fire, create tools, and devise all manner of solutions. Yet as we create solutions, we are also getting better at introducing new problems, and we are not always sure which is which, e.g. plastics, industry, and so on. So, in the grand scheme of things, at this stage of development we are cultivating practical intelligence and knowledge from individuals, who are but fleeting sparks in the stream of humanity's civilization, and using that momentum for the incremental technological progress of our "hive". On the other hand, many of our solutions tend to have problematic byproducts.
In short: we are constantly balancing between being smart and being too smart for our own good.


Thought Experiments

It looks like we are getting to a point where we can create synthetic life and artificial intelligence. While these structures are de facto our descendants in the sense that we are the ones who make them, they are not direct biological descendants of Earth's biota. We humans are proud creators, and creating something that could be bigger than us in many ways is a unique ability. This could lead us to look at our creation with the pride of a parent. But the hard questions here are:

- When we create something more efficient (in the case of synthetic biology) or more intelligent (in the case of artificial intelligence), are we creating a worthy next step for life as a whole? Can it be considered a continuation of our biota?

- Should our biota be treated as a "mainline branch", or could this new direction in some way be considered part of the natural progression towards the same goal? Is bringing an ASI into existence worth it in the long run for our biota?

- If we create something alive, sentient, and better than us by all metrics, has our biota, i.e. life on Earth, played out its part in the grand scheme of things?

Remember the quote "We Are A Way For The Cosmos To Know Itself"? What if we create something that understands the universe better than we do? If or when we get a more advanced system of intelligence, we instantly fall behind its rapid progress, and we may not be able to understand the creations and solutions it produces. Such an intelligence may start improving and iterating on itself in a nonlinear way, and even a lifetime spent studying its solutions would not help us comprehend their inner workings.

1. Here we could argue that the best outcome for us would be one in which the intelligence understands the value of Earth's biota better than we do and helps it progress in the right direction.

2. Another plausible outcome is that the ASI finds its own meaning and a worthy goal to pursue and works towards it. In that case, it is also possible that we would simply be ignored, as we would hold no interest for the ASI, but this is only one possibility.

3. A disappointing but possible outcome would be that, before creating anything we could consider a worthy successor, we arrive at something that has agency beyond our comprehension and can, for instance, rearrange the universe into paperclips (see https://nickbostrom.com/ethics/ai for an example).

4. One of the more interesting roles an ASI could take is that of a Caretaker. In a scenario where our biota ends up being valued by an advanced intelligent agent, the agent could steer it towards an outcome of the ASI's own choosing. These outcomes could range from instating rules to terraforming, or even cultivating life elsewhere via panspermia. It is perhaps needless to say that since we probably can't understand its logic (hence the S in ASI), we would not necessarily agree with the methods it implements; what is paradise to one organism is dystopia to another, so this is a slippery slope.

5. On the other hand, if we consider the long-term future, Earth starts to look like a bottleneck (asteroids, catastrophes, etc.), and a capable ASI agent starts to look like a perfect candidate for ensuring the distant future of our biota or our culture.

While there is much more to talk about, we will leave this as food for thought and continue another time. With that in mind, let's end this blog post here.