Preface

This post is a bundle of thoughts on the near future of Augmented Reality: its implementations, ways of integrating it, and the new layers of cognition it could introduce. We are going to talk about AR as a way of enabling transhumanism by broadening the human cognitive and sensory toolbox.
We are not going to be concerned with what is and what is not technologically possible at the moment.

Introduction

Have you ever taken a walk and thought about all the accessible information on animals, plants, objects, or even people? How much information is available on the internet? How many facts have you encountered and already forgotten?

The Ancient Greeks put active effort into training and maintaining their memory, and at our current stage of progress, human memory is shifting – or has shifted – from remembering facts to remembering how and where to find facts. We are keeping less and less information hardcoded in our permanent memory and are instead learning how to consume, process, and find useful data in an endless stream of information and misinformation.

A technologically proficient user has a vast volume and spectrum of data at their disposal, sitting right in their pocket. The human approach to holding knowledge is evolving as we speak. Heck, it started evolving with the invention of speech. But another significant change is on the cusp of happening. Our mobile phone interfaces, while serving all the information we need, are still detached from us, introducing considerable latency whenever we need to find, show, or input data. With the introduction of AR, AI, and human-computer interfaces, everything is about to change. AR, or human augmentation in general, will make that data more accessible and drastically reduce the latency of interaction, making the tools feel more and more like a part of ourselves.

Imagine walking down the street. You look around at the people passing by, and notifications start popping up alongside their heads. For one person the message says:
"You have seen this person three times this week already"; for another one it states:
"You have five mutual connections and common interests".
Those notifications could also tell you if the person is willing to make new connections, based on previously shared data or facial microexpression analysis. Maybe we have gone too far down the rabbit hole for now, but you
get the idea.

Defining AR and Transhumanism

Merriam-Webster defines Augmented Reality as "an enhanced version of reality created by the use of technology to overlay digital information on an image of something being viewed through a device (such as a smartphone camera)". For the purposes of this post, I would like us not to limit reality to human vision alone.

Merriam-Webster has no definition for Transhumanism, but Wikipedia defines it as a philosophical movement that advocates for the transformation of the human condition by developing and making widely available sophisticated technologies to greatly enhance human intellect and physiology.

The two are getting more and more intertwined as technology advances, so I would like us to redefine AR, if only for this post, as "any technology that enhances or expands the human experience of reality", or maybe even as "practical transhumanism".

Augmentation technology milestones

These are some loosely defined milestones for AR technology (a capability-gating sketch follows the list).

  • LVL 0 – environment emulation via VR – for developmental fast-tracking
  • LVL 1 – external IO devices – headsets, glasses, gloves
  • LVL 2 – embedded IO devices – lenses, external neural signal readers
  • LVL 3 – tapping into existing sensory and neural networks
  • LVL 4 – registering new input/output devices in neural networks
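
To make the ladder concrete in software terms, here is a minimal sketch of gating features by milestone. The enum names and the `supports` helper are hypothetical, not an established API.

```python
from enum import IntEnum

class AugmentationMilestone(IntEnum):
    """Hypothetical ladder of AR capability levels from the list above."""
    VR_EMULATION = 0      # environment emulation via VR
    EXTERNAL_IO = 1       # headsets, glasses, gloves
    EMBEDDED_IO = 2       # lenses, external neural signal readers
    NEURAL_TAP = 3        # tapping into existing sensory/neural networks
    NEURAL_EXTENSION = 4  # registering new IO devices in neural networks

def supports(device_level: AugmentationMilestone,
             required: AugmentationMilestone) -> bool:
    """A device at a given level also covers every lower level."""
    return device_level >= required

print(supports(AugmentationMilestone.EMBEDDED_IO,
               AugmentationMilestone.EXTERNAL_IO))  # True
```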

Degrees of augmentation

For the time being, and at least until we master the LVL 4 milestone described above, we are not creating new sensory or actuation systems; we are only hijacking the existing ones. Hi, Neuralink ;). This means that with every bit of information we gain, we are blocking bandwidth on our existing IO mechanisms.

In all fairness, people recovering mobility after a severe head trauma are evidence that the brain could possess enough plasticity to use whatever is thrown at it, given that we connect the right dots and allow enough time and practice. User age could also be a part of the equation. Sounds easier than it is, I know.

So, the more we augment our existing senses, the more we obstruct our basic physical-world data flow, and the greater the need for design restriction guidelines will become. We still have accidents while using mobile phones or headphones, so the question arises: how could we manage a society full of people who overlap or fully cover their fields of vision and hearing? Some safety rules integrated into the tech would be quite handy. Here is an example of defining degrees of augmentation (a small policy sketch follows the list):

  • No augmentation
  • Info – can be an overlay, but should not block much sensory bandwidth
  • Safe communication – headset equivalent
  • Immersive experience – not safe for driving, safe for outdoors
  • Fully overlaying sensory input with enabled alerts – public safe zones
  • Fully overlaying sensory input – only for safe zones at home
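
As a thought experiment, here is a minimal sketch of how such degrees could be enforced in code. The context names and their limits are made-up assumptions, not a proposed standard.

```python
from enum import IntEnum

class AugmentationDegree(IntEnum):
    NONE = 0
    INFO_OVERLAY = 1        # low sensory-bandwidth overlay
    SAFE_COMMUNICATION = 2  # headset equivalent
    IMMERSIVE = 3           # not safe for driving, safe outdoors
    FULL_WITH_ALERTS = 4    # public safe zones
    FULL = 5                # safe zones at home only

# Hypothetical mapping of contexts to the maximum degree allowed there.
MAX_DEGREE_BY_CONTEXT = {
    "driving": AugmentationDegree.INFO_OVERLAY,
    "walking_outdoors": AugmentationDegree.IMMERSIVE,
    "public_safe_zone": AugmentationDegree.FULL_WITH_ALERTS,
    "home_safe_zone": AugmentationDegree.FULL,
}

def clamp_degree(requested: AugmentationDegree,
                 context: str) -> AugmentationDegree:
    """Never let content exceed the degree the current context permits."""
    limit = MAX_DEGREE_BY_CONTEXT.get(context, AugmentationDegree.INFO_OVERLAY)
    return min(requested, limit)

print(clamp_degree(AugmentationDegree.FULL, "walking_outdoors"))  # IMMERSIVE
```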

Augmentation usage categories

Knowledge management

At some point, AR should enable access to data from the internet, retrieved and presented without much user effort, based on the surrounding situation and context. Useful data should also be categorized and stored for retrieval at a later time. The interface should be able to combine relevant data and present solutions, statistics, and probabilities.
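
Here is a minimal sketch of what context-driven retrieval could look like, assuming the AR system can tag the current surroundings. The `KnowledgeItem` structure and the toy store are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeItem:
    fact: str
    tags: set[str] = field(default_factory=set)

# A toy in-memory store; a real system would query the internet and
# the user's own categorized history.
STORE = [
    KnowledgeItem("Oaks can live for centuries.", {"plant", "oak", "outdoors"}),
    KnowledgeItem("This model of lock jams when cold.", {"device", "lock", "repair"}),
]

def retrieve(context_tags: set[str], limit: int = 3) -> list[KnowledgeItem]:
    """Rank stored facts by overlap with what the sensors say is around us."""
    scored = [(len(item.tags & context_tags), item) for item in STORE]
    scored = [pair for pair in scored if pair[0] > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:limit]]

# e.g. the AR system recognized an oak tree while walking outdoors:
for item in retrieve({"oak", "outdoors"}):
    print(item.fact)
```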

This could also enable saving verifiable event recordings, snapshots, and transcripts; real-time language translation; or seeing blueprints and instructions while fixing devices.

Extending sensory reach

Human sensory reach is already drastically extended by IoT and the internet in general, but there is much more existing tech that could be integrated into an AR system to extend human senses. Users could see and feel whether their home or possessions are safe or, more generally, sense physical-world information streams in real time. Such data could be processed in parallel with the user experiencing their immediate surroundings: the user would, for instance, return home and immediately detect that certain objects have been moved.
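
A minimal sketch of that moved-object idea, assuming the system stores object poses from a previous room scan; the object names, coordinates, and threshold are made up.

```python
import math

# Hypothetical object poses (x, y, z in meters) from a previous room scan
# and from the scan taken when the user returns home.
previous_scan = {"vase": (1.0, 0.5, 2.0), "chair": (3.0, 0.0, 1.0)}
current_scan = {"vase": (1.0, 0.5, 2.0), "chair": (2.4, 0.0, 1.6)}

MOVE_THRESHOLD_M = 0.10  # ignore scanner jitter below 10 cm

def moved_objects(before: dict, after: dict) -> list[str]:
    """Report objects whose position changed more than the threshold."""
    moved = []
    for name, old_pos in before.items():
        new_pos = after.get(name)
        if new_pos is None:
            moved.append(f"{name} (missing)")
            continue
        dist = math.dist(old_pos, new_pos)
        if dist > MOVE_THRESHOLD_M:
            moved.append(f"{name} (moved {dist:.2f} m)")
    return moved

print(moved_objects(previous_scan, current_scan))  # ['chair (moved 0.85 m)']
```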

Extending physical actuation/action reach

Beyond our immediate physical reach, most of us have already experienced controlling or affecting the physical world at a distance. AR could help make the world around us feel more like an extension of our physical bodies. We could run automated processes upon visual triggers, e.g. unlocking the front door without thinking, or control devices, drones, and robotics with ease.
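
A minimal sketch of visual-trigger automation, assuming a recognizer that reports labels for objects in view; the `on_sight` decorator and the smart-lock call are hypothetical.

```python
from typing import Callable

TRIGGERS: dict[str, Callable[[], None]] = {}

def on_sight(label: str):
    """Register an action to run when the recognizer reports `label`."""
    def register(action: Callable[[], None]) -> Callable[[], None]:
        TRIGGERS[label] = action
        return action
    return register

@on_sight("front_door")
def unlock_front_door() -> None:
    print("unlocking front door...")  # would call the smart-lock API here

def handle_recognition(label: str) -> None:
    """Called by the (hypothetical) vision pipeline for each recognized object."""
    action = TRIGGERS.get(label)
    if action:
        action()

handle_recognition("front_door")  # -> unlocking front door...
```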

Communication and individuality

Communication could also be streamlined. We could have an instant connection with anyone around the world, almost as if standing side by side. Communication could play out on several levels, depending on the degree of technological advancement, and could be (a curation sketch follows the list):

  • Restrained – messages and recordings with a delay and opportunity for curation
  • Unrestrained – conservative – real-time audio-visual communication
  • Unrestrained – progressive – direct thought transfer in a Neuralink-like brain-computer interface manner
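
A minimal sketch of the restrained mode, where messages sit in a curation buffer until the sender releases them; the class and method names are illustrative.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Message:
    text: str
    created_at: float = field(default_factory=time.time)

class RestrainedChannel:
    """Messages wait in an outbox so the sender can curate before delivery."""
    def __init__(self) -> None:
        self.outbox: list[Message] = []

    def compose(self, text: str) -> Message:
        msg = Message(text)
        self.outbox.append(msg)
        return msg

    def retract(self, msg: Message) -> None:
        self.outbox.remove(msg)  # curation: the draft never leaves the device

    def send_all(self) -> list[Message]:
        sent, self.outbox = self.outbox, []
        return sent

channel = RestrainedChannel()
draft = channel.compose("thinking out loud...")
channel.retract(draft)
channel.compose("final thought")
print([m.text for m in channel.send_all()])  # ['final thought']
```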

We could socialize, play games, or work like never before: imagine coding while sharing your thought process with a colleague, inside an "AI"-assisted IDE.

Extending the cognition toolbox

Given a portable AR system and a smart enough underlying OS that replicates our surroundings and anticipates our needs, or an efficient enough way of selecting and using computational tools, we could (a trajectory sketch follows the list):

  • see object trajectories
  • have perfect math at our disposal
  • know objects, plant life, animals, etc.
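
A minimal sketch of "seeing object trajectories": predicting the ballistic arc the AR layer could draw over the user's view, ignoring air drag; all numbers are illustrative.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def predict_arc(x0, y0, vx, vy, dt=0.05):
    """Yield (x, y) points of the ballistic arc until it reaches the ground."""
    t = 0.0
    while True:
        x = x0 + vx * t
        y = y0 + vy * t - 0.5 * G * t * t
        if y < 0.0:
            return
        yield (x, y)
        t += dt

# e.g. a ball leaving the hand at 1.8 m height, thrown at 10 m/s, 45 degrees:
v = 10.0
arc = list(predict_arc(0.0, 1.8, v * math.cos(math.pi / 4), v * math.sin(math.pi / 4)))
print(f"lands roughly {arc[-1][0]:.1f} m away")
```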

If available, AI or general AI could also act as an intermediary to the cognition toolbox, serving the necessary tools and results to the end user.

Pastime and entertainment

I won’t go into great detail here; VR games are already immersive and fun, but try to imagine what they would be like if some freedom of movement were added to the mix. As games and entertainment are presumably the most likely to require as much immersion as possible, it will be necessary to consider immersion safety.

VR games and APS – fully overlaying sensory input

Cars, stairs, sudden drops, and obstacles are here to stay, so for any device that can write over your sensory input, a risk-mitigation mechanism must be in place.

End user safety zone calibration

Imagine you want to play a full-on VR game somewhere. How could you stay safe and not trip over something or fall over some curb? It is simple: you could just walk around your playground to define its contours and limits. A safety margin can even be taken off the outer limits of the playground, and any obstacles inside are reproduced in-game so you can navigate past them. You start the game and play, and if you move toward the outer limits, the game simply starts to fade away.
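
A minimal sketch of such a calibration, assuming corner points recorded while walking the boundary. It uses the shapely geometry library (an external dependency), and the dimensions are made up.

```python
from shapely.geometry import Point, Polygon

walked_path = [(0, 0), (6, 0), (6, 4), (0, 4)]  # corners recorded while walking
SAFETY_MARGIN_M = 0.5

# Shrink the walked contour inward by the safety margin.
playground = Polygon(walked_path).buffer(-SAFETY_MARGIN_M)

def content_opacity(position: tuple[float, float],
                    fade_band_m: float = 1.0) -> float:
    """1.0 deep inside the zone, fading to 0.0 at (or outside) the boundary."""
    p = Point(position)
    if not playground.contains(p):
        return 0.0
    distance_to_edge = playground.exterior.distance(p)
    return min(1.0, distance_to_edge / fade_band_m)

print(content_opacity((3.0, 2.0)))  # center of the zone: fully immersed
print(content_opacity((0.6, 2.0)))  # near the edge: game fades out
```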

Safe surfaces

OK, but where else could you use 100% augmentation opacity in the real world? The first thing that comes to mind as safe would be walls. You can project stuff in front of almost any wall and be sure that nothing you can’t see will hit you from that direction, or at least as sure as you are now.

Content killswitch

Another useful feature would be for immersive content to stop when quiet time or rest is needed, or when stress levels become critical.
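
A minimal sketch of such a killswitch, assuming a wearable that reports heart rate; the thresholds are made-up placeholders, not medical guidance.

```python
MAX_HEART_RATE_BPM = 140   # placeholder stress threshold
MAX_SESSION_MINUTES = 90   # placeholder rest threshold

def should_kill_content(heart_rate_bpm: float,
                        session_minutes: float,
                        user_requested_rest: bool) -> bool:
    """Cut immersive content on explicit request, overtime, or critical stress."""
    return (user_requested_rest
            or session_minutes >= MAX_SESSION_MINUTES
            or heart_rate_bpm >= MAX_HEART_RATE_BPM)

# The runtime would poll this regularly and fade to passthrough when True:
print(should_kill_content(heart_rate_bpm=152, session_minutes=30,
                          user_requested_rest=False))  # True
```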

Pitfalls of AR and human augmentation

  • Having no signal or battery
  • Developing a dependence on AR in everyday life
  • Losing sense of the physical self
  • Losing individuality
  • Identifying the AR OS as a part of the personality

We can work on developing AR today

I guess we are all well aware of the current limitations of AR systems, but looking past the nonexistent or low-tech ways of input and output that we possess today, we can emulate the missing augmentation pathways and processes, and extrapolate what we can already improve and how things need to be done. Today’s VR capabilities can serve as the platform emulating future augmentation systems. Applications and operating systems can be, and already are being, made. VR has developed ways of tracking the user’s eye movement and scanning the user’s 3D environment to emulate it in place. The following are some of the systems that we can already work on (a signing sketch follows the list):

  • AR emulation and development layer in VR
  • Validating and signing user data stream segments
  • User data stream privacy
  • Sharing immutable data records at will
  • Interpreting sensory data streams via ML or some other AI method
  • Sensory enhancements based on data stream interpretations
  • OS and application modules
  • Automation tools based on AR
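
As an example of the second and fourth bullets, here is a minimal sketch of tamper-evident stream segments: each segment is HMAC-signed and chained to the previous digest. A real system would use asymmetric signatures (e.g. Ed25519) rather than a shared key, and the key here is a placeholder.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"device-private-key"  # placeholder; use real key management

def sign_segment(payload: dict, prev_digest: str) -> dict:
    """Sign a segment and chain it to the previous segment's digest."""
    body = json.dumps({"payload": payload, "prev": prev_digest}, sort_keys=True)
    digest = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "prev": prev_digest, "sig": digest}

def verify_chain(segments: list[dict]) -> bool:
    """Recompute every signature and link; any edit breaks the chain."""
    prev = "genesis"
    for seg in segments:
        expected = sign_segment(seg["payload"], prev)["sig"]
        if not hmac.compare_digest(expected, seg["sig"]) or seg["prev"] != prev:
            return False
        prev = seg["sig"]
    return True

stream = []
stream.append(sign_segment({"t": 0, "event": "left home"}, "genesis"))
stream.append(sign_segment({"t": 1, "event": "unlocked door"}, stream[-1]["sig"]))
print(verify_chain(stream))  # True
stream[0]["payload"]["event"] = "tampered"
print(verify_chain(stream))  # False
```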

What did I miss? If you want to comment, criticize, or just talk, feel free to drop me an e-mail at [email protected]