Interfacing, in my story world, is done by:
- Reading muscles — Using simple sensors that read contraction and the bio-electrical impulses sent to the muscles
- Reading the body — Reading the position of the limbs from the body itself
- Sending feedback to the muscles and skin — Using electric pulses, to simulate touch and the “resistance” experienced in the virtual world
- Closing your eyes — Using either lenses or implants to see visual feedback without the interference of the outside world
In most cases people close their eyes to shut out all visual interference from the outside world. Manipulation of the internal world is done in a “Minority report” kind of way. Typing happens in the air, on a virtual keyboard that provides haptic feedback.
Since the muscles themselves are read, you do not actually need to move your body. Stating your intent is sufficient. I illustrate that here in “Sunrise” via the ‘twitching’:
She sits on the couch, slouched against one arm-rest and her legs stretched over the length of the cushions. […] Her hands and arms are twitching as she is working within the space behind her eyes. The wearable is like a black bracelet around her left ankle.
Computers are either worn on the body (in “Sunrise”, a set of parts you can shape into a bracelet or whatever you like) or placed inside the body, attached to or embedded as strings in the bones to ensure protection and a solid basis for the hardware.
In the most extreme or complete form vision is provided via implants in the back of the skull.
This is how the process is described in an earlier chapter in “Sunrise”:
Fajita double-checks his signed consent for the implants, the waiver of responsibilities. She opens drawers, collects stuff. Then she picks up the gun, loads it with the capsule we designed together.
She shoots one needle through his skin into his skull. His head bobs slightly with the impact. Blood wells up through the little wound as the knob tears through the flesh.
Each needle connects to the visual cortex by inserting a thinner pin into the brain once it breaks through the hard material of the skull. Each of the needles creates a part of the total set of signals that tricks the visual cortex into “seeing” images.
The simplified process of shooting needles (each with its own receiver/transceiver in the head of the pin) makes the process commercially viable, as no surgery is needed to open that part of the skull.
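Since each needle covers a patch of the visual field, driving the array amounts to downsampling a frame buffer into one stimulation level per needle. A loose sketch of that step, with a grid size and value range invented purely for illustration:

```python
def frame_to_needle_levels(frame: list[list[float]], grid: int = 4) -> list[list[float]]:
    """Average an H x W grayscale frame (values 0.0-1.0) into a
    grid x grid array of per-needle stimulation intensities.

    Each needle gets the mean brightness of the image patch that
    falls within its region of the visual field.
    """
    h, w = len(frame), len(frame[0])
    levels = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            ys = range(gy * h // grid, (gy + 1) * h // grid)
            xs = range(gx * w // grid, (gx + 1) * w // grid)
            patch = [frame[y][x] for y in ys for x in xs]
            row.append(sum(patch) / len(patch))
        levels.append(row)
    return levels
```

A real implant would of course need per-needle calibration and far more sophisticated encoding; this only shows the shape of the mapping.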
Encrypted data between parts (including the needles and the wearable) is transmitted as weak electric signals through the body itself.
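The framing for such a body-borne channel can be sketched in code. A DC-free line code such as Manchester coding is a plausible fit for a weak electric carrier through tissue; the symbol scheme below is illustrative, not a real protocol:

```python
def manchester_encode(data: bytes) -> list[int]:
    """Encode bytes as Manchester symbols: bit 1 -> (1, 0), bit 0 -> (0, 1).

    Every bit becomes a transition, so the signal carries its own clock
    and has no DC component — convenient for a weak carrier through tissue.
    """
    out = []
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            out.extend((1, 0) if bit else (0, 1))
    return out

def manchester_decode(symbols: list[int]) -> bytes:
    """Invert the encoding; assumes symbol alignment is already recovered."""
    bits = []
    for i in range(0, len(symbols), 2):
        pair = (symbols[i], symbols[i + 1])
        bits.append(1 if pair == (1, 0) else 0)
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)
```

Encryption would happen before encoding, so the pulse train itself reveals nothing even though it travels through a shared medium (the body).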
You can download a PDF with the illustration above and many other works from that same period here: “The Ator Mondis years”
Cyberpunk, Gibson and my own take in 1991
The 1984 novel “Neuromancer” by William Gibson blew my mind when I first read it in 1990. It had a film-noir kind of ring to it that matched the early work of Alfred Bester (“The stars, my destination” and “The demolished man”) but in a more contemporary setting.
What I did not buy in the setting of “Neuromancer” was the way Gibson depicted the “computers of the future”. Why bother carrying around external computers or “decks” when you could also integrate them into the body? So when I wrote my own kind of “cyberpunk” story in 1991, named “The pearl in my hand”, I took these integrated systems as a starting point.
Integrated, version 1 (1991)
The illustration above (© by me) depicts several aspects of that vision. The string of “sausages” behind the male character is a representation of a neural cord as (according to the encyclopedia I used at the time) it runs through the spine of the main character in that story.
His eyes are closed. There is no real keyboard. All feedback he receives is from within his body, using direct links to his visual cortex and his spinal cord (for muscle-reading and haptic feedback from his integrated system).
A connector on his chest (encircled by the tattoo of a sun in the image) allows for interfacing with external systems.
Integrated, version 3 (2009)
In my “Decline of Europe” set of near-future stories (2009 until now), wearable computers take two different directions:
- Integrated — Integrated in the body.
- External — As external modules you wear in your pocket and on your body
No screens, no external hardware
As in “version 1” I decided to go for the most bare-bone solution: no external screens. No external hardware (except the computer itself). All wearable. All accessible immediately.
In “Sunrise” in my “Decline of Europe” series, my main character states:
“I am bored by all these crappy solutions that just move sideways. Offering only more variations on the same boring theme. In human-computer interactions, screens are a broken design. Sure you can endlessly improve endless details without really looking at the core of our needs, but what will that bring you? Yet another Smart Glasses variation. Yet another set of smart lenses. I want to destroy that. Completely wipe it out. I want to erase anything and everything that is a legacy from the 20th century.”
Her dream is to integrate those systems completely within her body. To reach that goal she has gone as far as to have implants, developed in Germany for visually impaired people, interface directly with her visual cortex. At the end of the story she succeeds in fulfilling most of her dreams, making it a commercially viable and easy-to-implement process to shoot implants into the back of the skull: interfacing with the visual cortex using thin probes and interference patterns that trick the cortex into translating those signals into images.
This, however, is not where this development stops in that story world.
Moving the concept one step further
A raw part from “Limiters”.
The linings of my system self-repaired as my broken bones grew together.
They behave like augmented cancer cells, with this difference that their growth is controlled and bound, that they have genetic locks encoded on personal levels of the receiver to prevent infection to others, that they build and shape memory crystals and organic processors […] It can be transferred from mother to child as the strings are coded with our DNA. It is as much part of me as everything else is.
Apart from stylistic aspects (too much telling / blunt explanation) this gives you some idea where I think things can move to.
Using the human bones as a solid frame
When working on this “extreme” version (which uses bio-engineering and tiny assemblers in the body, which in their turn use minerals and other materials ingested by the subject), the countless pockets inside the bones of the human body are used as the “frame” in which the multiple cores of the total system are suspended.
Cost, complexity and implementation of neuro-links
Three major drawbacks of “version 1”, in which the system is linked directly to the nervous system, are cost, complexity and implementation.
The amount of data sent through the spinal cord is massive. The number of nerves is mind-blowing, and only a few of them deal with the muscular system we use to move our body and interact with the world around us.
Performing the surgery to connect “wires” directly to those nerves is insanely complex, starting with finding and separating the right nerves without damaging the others around them in the process.
Reading muscles, feedback via the muscles and skin
Every solution is the product of an evolutionary process. Some evolve from simple starting points into incredibly complex end results (computers, our current mobile phones). Others remain fairly simple and stable, as more complex solutions simply do not solve the problem that much better than the simple ones do (think of the design of your toilet bowl).
The simplest way to interface with the human body is to interface with the end points of the human nervous system:
- The muscles
- The skin
Muscles can be stimulated and read. The skin can be stimulated by electric impulses, generating experiences similar enough to touching specific surfaces.
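The standard first step in reading a muscle is turning the raw, noisy electrical trace (EMG) into an activation envelope: rectify the signal, then smooth it. A minimal sketch, with the window size and threshold invented for illustration:

```python
def emg_envelope(samples: list[float], window: int = 5) -> list[float]:
    """Raw EMG -> activation envelope.

    Rectify (absolute value) and smooth with a trailing moving average;
    the result tracks how strongly the muscle is contracting over time.
    """
    rectified = [abs(s) for s in samples]
    env = []
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)
        env.append(sum(rectified[lo:i + 1]) / (i + 1 - lo))
    return env

def is_contracting(envelope: list[float], threshold: float = 0.3) -> bool:
    """Crude intent detection: the muscle counts as 'active' when the
    latest envelope value crosses a (per-user calibrated) threshold."""
    return envelope[-1] > threshold
```

A real wearable would add band-pass filtering and per-user calibration, but this is the core of “reading” a muscle without any movement being required.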
The Myo armband is one example of using secondary muscular movement in the arm to “read”, among other things, the hand (the Myo also contains motion sensors, in the form of a nine-axis IMU).
Using certain “holographic” aspects of muscle reading
Using your body is anything but a clear and clean process.
When you move whatever part of your body, many other muscles are used and contracted as well. Place a sensor around the upper arm and you can deduce from secondary muscle movements in the upper arm which fingers are used, how much the wrist is turned and how much contraction is probably applied by each of the muscles involved.
Using one single spot?
Upping this game, it will probably be possible to take one or two spots on the body (the back?) and read the muscle movements from there, using all feedback from the body to deduce which muscle is used and to what extent.
The complexity of deducing this information is probably comparable to that of speech recognition: from a bunch of indirect signals (muscular contractions, timing, movement of the body itself) you need to deduce exactly which limbs are used and which muscle groups are used to move each specific limb.
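The deduction step can be sketched as a classification problem: map a window of indirect measurements to the most likely gesture. A nearest-centroid classifier over hand-made templates stands in here for whatever model such a wearable would really use; the gesture names, feature channels and values are all invented:

```python
import math

# Hypothetical feature vectors: (biceps, forearm flexor, forearm extensor)
# activation levels in the range 0.0-1.0, one template per known gesture.
TEMPLATES = {
    "fist":       (0.2, 0.9, 0.1),
    "open_hand":  (0.1, 0.1, 0.8),
    "wrist_turn": (0.6, 0.4, 0.4),
}

def classify(features: tuple[float, float, float]) -> str:
    """Return the gesture whose template is closest (Euclidean distance)
    to the measured activation vector."""
    return min(TEMPLATES, key=lambda name: math.dist(TEMPLATES[name], features))
```

A production system would replace the templates with a trained model (exactly as speech recognition did), but the shape of the problem — many indirect signals in, one discrete intent out — is the same.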
Using sensors on the body?
To increase precision you might decide to use sensors on the body that measure absolute rotation relative to the direction of the earth’s gravitational pull, like those present in your phone. Knowing the rotation of each limb and the length of your bones, you can deduce the exact position of each of your limbs.
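This is plain forward kinematics. A 2-D sketch of the idea, assuming each limb segment reports its absolute angle relative to gravity (as a phone IMU can) and that the bone lengths are known; segment names and lengths are illustrative:

```python
import math

# One (name, length-in-metres) pair per segment of the arm, shoulder first.
ARM = [
    ("upper_arm", 0.30),
    ("forearm",   0.26),
    ("hand",      0.08),
]

def limb_position(angles_deg: list[float]) -> tuple[float, float]:
    """Forward kinematics: sum the segment vectors.

    Each angle is the absolute orientation that segment's sensor reports,
    with 0 degrees meaning 'hanging straight down'. Returns the hand's
    (forward, vertical) position relative to the shoulder, in metres.
    """
    x = y = 0.0
    for (_, length), angle in zip(ARM, angles_deg):
        rad = math.radians(angle)
        x += length * math.sin(rad)  # forward component
        y -= length * math.cos(rad)  # downward component
    return x, y
```

With all angles at zero (arm hanging straight down) the hand ends up the full arm length below the shoulder; bend the elbow and the same sum places it wherever the sensors say it is — no cameras, no external tracking.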
Would it be necessary? I assume not in my story world. Less hardware is more.
Using the body for the transmission of signals
Your body is mostly a collection of billions of tiny sacks of salt water (your cells). You can use that same body, and the conductive properties of those countless tiny sacks of salt water, to conduct signals from one point to another (and there are experiments that use the body in just this way, like this one).
This eliminates the need for wires.
Using the body in this way opens several possibilities. The first is reading sensors attached to your body. The second is communication with other systems, including those inside the bodies of other people.
Using systems for people with impaired vision
Current research focuses either on the retina (see here for one example) or on the visual cortex (see here for an overview). As far as I understand, the visual results of these implants are still quite low-res.
In my story world, the main character of “Sunrise” works together with a German company to develop a system that offers high-resolution images to the visual cortex: high enough to be able to read and work with an operating system when you close your eyes.
Why lenses and glasses suck
Every external system sucks in the long run. If you wear glasses and use lenses (as I do) you understand why.
You do not want things falling off your body, being crushed underfoot by accident, or getting damaged when something hits your face or you hit your face against something.
Another thing is that the more your systems become part of your daily life, the more relevant it becomes that they are “always on” unless you deliberately decide they are not.
You (or I at least) want to be able to take a dive in the ocean without losing parts or damaging them.
Re: Neuromancer and my version 1 – connecting to the nervous system?
So why would you try to connect directly to the nervous system to begin with, when you already have other interfaces (skin, muscles) that are easier to use?
Sure, it looks cool on paper when you do not think about the implications, but once you look at the issues mentioned before and at the alternatives, it is actually about as viable as the flying cars of the 1930s: it will never reach mass production, because simpler and more affordable solutions exist that are easier to make, less complex, less error-prone and more stable.