We have always sought better ways of interacting with computers. From the punched cards of ENIAC's era to the touchscreens of today, the user interface has evolved to become more intuitive with each generation. The next generation of UI will aim to make computers more human.
Most of today's systems are built around the Graphical User Interface, which lets people navigate a screen by clicking or tapping on objects. This ‘point and click’ mechanism hasn’t been rivalled for simplicity since the mid-1980s. However, it still demands physical effort from the user.
Breaching The Human-Device Frontier
Companies are already making efforts to build intuitiveness into the next generation of UI. One example is Natural Language Processing (NLP), which aims to make computers understand human speech and its context. NLP has allowed search engines to return more relevant results, and coupled with AI it has led to the rise of intelligent chatbots.
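Under the hood, a chatbot's first job is usually to map an utterance to an intent before composing a reply. Here is a deliberately minimal sketch of that step using keyword overlap; the intent names and keyword lists are illustrative assumptions, not taken from any real assistant, which would use trained statistical models instead.

```python
# Toy intent classifier: score each intent by how many of its
# keywords appear in the user's utterance. Intents and keywords
# below are made-up examples for illustration only.

INTENTS = {
    "weather": {"weather", "rain", "sunny", "forecast", "temperature"},
    "music": {"play", "song", "music", "playlist"},
    "smalltalk": {"hello", "hi", "thanks", "bye"},
}

def classify(utterance: str) -> str:
    """Return the intent whose keyword set best overlaps the utterance."""
    words = set(utterance.lower().split())
    scores = {name: len(words & kws) for name, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify("play my favourite song"))  # music
```

A production system replaces the keyword sets with a learned model, but the pipeline shape, utterance in, intent out, stays the same.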
Factor in the Internet of Things, and you’re looking at a decentralised computer core that functions across all your devices. Voice assistants are already leveraging this to become the control hubs for devices without you having to lift a finger. It could have the air conditioning at a comfortable temperature, tea on the table and your favourite music on before you enter the house.
Advancements in computer vision and machine learning have made it possible for eye-tracking technology to become more affordable and feature in mobile and PC cameras. While this has obvious benefits for gaming or virtual reality, precision tracking would allow users to control computers with just their eyes.
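The control problem eye tracking has to solve is turning a noisy stream of gaze estimates into a stable cursor. A common approach is exponential smoothing; the sketch below assumes the tracker delivers normalised (x, y) coordinates in the range 0 to 1, and the screen size and smoothing factor are illustrative values.

```python
# Hypothetical sketch: smooth jittery normalised gaze samples and
# map them to screen pixels. ALPHA trades steadiness (low values)
# against responsiveness (high values).

SCREEN_W, SCREEN_H = 1920, 1080
ALPHA = 0.3  # assumed smoothing factor, chosen for illustration

def smooth_gaze(samples, alpha=ALPHA):
    """Exponentially smooth (x, y) gaze samples in [0, 1], return pixel coords."""
    sx, sy = samples[0]
    cursor = []
    for x, y in samples[1:]:
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        cursor.append((round(sx * SCREEN_W), round(sy * SCREEN_H)))
    return cursor

# Jittery samples around the screen centre settle near (960, 540).
print(smooth_gaze([(0.5, 0.5), (0.52, 0.49), (0.48, 0.51), (0.5, 0.5)]))
```

Real trackers add calibration, blink handling and dwell-time click detection on top of this filtering step.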
Driverless cars are an example of how the systems of today can anticipate intent without the user being actively involved. The same technology that detects driver drowsiness could make it possible for software to react to facial expressions.
Eventually, we could be looking at brain-machine interfaces that use thought alone to control computers. Several large technology firms are investing heavily in neurotech. Using the electrical signals generated by our motor neurons (the nerves that signal activity to the muscles), an external sensor could create a direct link between man and machine. As an extension of this, neural implants are making it possible for stroke survivors to lead independent lives: they decode motor signals and transmit them to artificial limbs via electrodes.
Mice and keyboard-like devices won’t go away anytime soon, but there are signs that they will become less prominent. Several exciting developments, including holographic projections and augmented reality, promise to create a future where the line between ourselves and a device is blurred even further.