While these ideas are not necessarily an exact forecast of things to come, I built them as an exercise on top of my current working model of the world. Any correction or addition would deepen my understanding, and I would be more than thankful for that kind of feedback.

Programming languages emerged as a bridge between command and automation. The first computers took instructions via direct machine-code input, but there was also a need to automate the instruction input itself, in a form that both machines and humans could process. From symbolic representations to assembly language and beyond, people have always wanted to abstract away more and more of the underlying machine instructions. With the emergence of LLMs and generative AI, the abstraction possibilities are growing further still, and it is not hard to envision the next paradigms: Specification as Code and real-time Reactive Programming.
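To make that abstraction ladder concrete, here is a small illustration using Python's built-in `dis` module (Python here is just a convenient stand-in for any high-level language): a single high-level line expands into several lower-level stack-machine instructions, which the interpreter in turn executes on top of native machine code.

```python
import dis

def add(a, b):
    return a + b

# One line of high-level code expands into a handful of lower-level
# bytecode instructions (the exact opcodes vary by Python version),
# one more rung down the ladder towards the machine itself.
dis.dis(add)
```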

Picture this feedback loop: you start with a declarative document in natural language, and the output is an imperfect but working program; as you change the specification, the program's behavior changes with it (a rough sketch of such a loop follows below). The release cycle depends only on you deciding that a version of your specification is ready to deploy, and the quality of your program depends only on you knowing what you want and writing it down. Maybe this is hard to imagine, but it could also be just one of the coming steps in the evolution of automation. Imagine for a moment an entity that is, by some standard, as smart as the smartest people you know, with the slight difference that it is not restricted by a human time frame, a human capacity for learning, or the human one-at-a-time nature of attention and agency. At some point, such an entity would not need any input beyond a simple command to cook up anything the most skilled programming teams can do today. You could say something like “create a business”, and it could do all the steps needed, from market analysis to building a supply chain and creating strategies, commercials, and a brand. Naturally, this could be used for many different applications and could result in many different outcomes, some of them hostile, but keep in mind that we are not even talking about Super Artificial General Intelligence (SAGI) yet.
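Here is a minimal sketch of that specification-driven loop. The file names `spec.md` and `program.py` and the `generate_program` function are hypothetical; the latter stands in for whatever code-generating model you would actually call. Everything else is standard-library Python: the loop simply polls the specification and regenerates the program whenever its text changes.

```python
import hashlib
import time
from pathlib import Path

SPEC = Path("spec.md")        # the natural-language specification (hypothetical path)
PROGRAM = Path("program.py")  # the generated implementation (hypothetical path)

def generate_program(spec_text: str) -> str:
    """Placeholder: a real version would send spec_text to a
    code-generating model and return the produced source code."""
    raise NotImplementedError("wire up your model of choice here")

def watch(poll_seconds: float = 2.0) -> None:
    """Regenerate the program whenever the specification text changes."""
    last_digest = None
    while True:
        digest = hashlib.sha256(SPEC.read_bytes()).hexdigest()
        if digest != last_digest:
            # The spec changed: re-derive the program from the new text.
            PROGRAM.write_text(generate_program(SPEC.read_text()))
            last_digest = digest
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```

A real system would presumably also validate the generated program against the specification before deploying it, but the shape of the loop is the point: the spec is the source, and the program is a build artifact.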

Before 1997, we knew that computers could outperform us at mathematical operations, but it was really hard to imagine them being better than humans at any complex or creative task. I remember a professor saying: “Computers really are dumb, you need to instruct them on every single detail, or they will fail”. Well, once in a while the human race gets a wake-up call saying: you are not that unique. One of these reminders was the match against humanity's best, Garry Kasparov vs. Deep Blue, where the computer reminded us that it does not need to be infinite or perfect to beat us; it only needs to be better than us. One discipline at a time, computers have surprised humans in things we deemed cornerstones of our uniqueness and sense of supremacy.

As we seemingly move towards AGI and SAGI, there are also efforts to enable humans' biological descendants to compete with the machines in terms of abilities. One such attempt is Elon Musk's Neuralink, which has the long-term goal of fusing superhuman AI capabilities with human hosts. While this looks like a temporarily viable solution, it could in many ways lessen the thing we call the human experience. Neuralink could also make us even more dependent on assistive software than we are now, and at some point the question could become: what is the point of it all? Biological humans simply won't be able to work at the speeds and capacities of artificial entities, and if and when we digitize ourselves, we could argue that the replicas might be less worthy successors to biological humans than the AI entities would be.

It is easy to imagine augmented humans living beside biological ones; heck, we already kind of have that in our civilization. But what I once imagined as a fight to stay competitive in a labor market filled with augmented, faster versions of ourselves is turning into a vision of a world where no work is necessary, and humans simply chase mastery of skills, mindfulness, and quality of life to find meaning in some post-capitalist, AGI-driven era of abundance. The reason, again, is that AGI could progress much faster than we do, and the speed of that progression could dictate the length of the coming transitions, eventually even eliminating the need for humans to compete with AI at all.

The question we probably fear most is becoming more relevant, and more in our hands, than ever: “What do we want this world to be?” Since the start of the industrial age, we have collectively rolled a wheel that is hard to stop, in a direction we are not sure we want to go. We are already at a stage where going back is not an option; we can only slow down, if we actually try to. There is some poetic beauty in the fact that we are collectively deciding the direction of our civilization. The sad thing is that the main driver and deciding element in our collective journey is system inertia.

As we pass these milestones where computers surpass us, we gain a humbler perspective on the world around us, but we also learn, as with chess and Go, to enjoy the very things in which we are surpassed. We did not stop playing chess just because computers are better at it; if anything, we learned to appreciate the human way of playing it more. Bit by bit, we are moving away from a world where we need to think of ourselves as the best, smartest, most unique beings around.

The sun, once again, could stop revolving around us.