From GUIs to NUIs
Last week there was much discussion in the office about Natural User Interfaces (NUIs), so I thought I would grab this opportunity to share our thoughts on the topic.
Natural User Interfaces are so called because they rely on natural ways of interacting, such as gestures, body movements, speech and vision. Users manipulate digital objects just as they would physical ones.
NUIs are the next step beyond Graphical User Interfaces (GUIs). GUIs were ground-breaking back in the ’80s because they allowed users to directly manipulate objects through a control device (think mouse and keyboard). They were driven by the principle of “what you see is what you get” (WYSIWYG).
In contrast, Natural User Interfaces are ground-breaking because they take the control device away and replace it with the user’s natural communication language.
My definition is short; if you want a slightly longer exploration of the subject, this article on NUIs is pretty good!
Some NUI examples
Smartphones are one of the simplest examples of devices that use natural ways of interacting. Even though their design philosophy is rooted in GUIs, multi-touch interfaces of some sort are now commonplace in the vast majority of smartphones and tablets, and increasingly in laptops and desktops.
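To make the gesture idea a little more concrete, here is a minimal sketch of how a touch interface might classify a one-finger swipe from its start and end coordinates. This is an illustrative toy, not any platform’s real API – the function name, coordinate convention and 30-pixel threshold are all assumptions.

```python
def classify_swipe(start, end, min_distance=30):
    """Classify a one-finger swipe by its dominant direction.

    start, end: (x, y) screen coordinates in pixels, with y growing
    downwards as on most touch screens (an assumption for this sketch).
    Returns 'left', 'right', 'up', 'down', or 'tap' if the finger
    barely moved.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    # Too little movement in both axes: treat it as a tap, not a swipe.
    if abs(dx) < min_distance and abs(dy) < min_distance:
        return "tap"
    # Whichever axis moved more decides the swipe direction.
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

Real gesture recognisers also look at timing and velocity, but even this crude version shows why multi-touch feels “natural”: the system interprets the movement you were already making, rather than asking you to learn a control device.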
Another popular but slightly less developed NUI used in smartphones is speech recognition. Most of you will have played with Siri or Google’s ‘search by voice’, albeit with varying degrees of success.
Personally, I find touch-free interfaces even more interesting. If you have ever jumped around like a child while playing on an Xbox, chances are you have used another popular NUI: Kinect. It is a multi-sensor device that scans a space and recognises body movements, gestures and voice – brilliant fun. The good news is that it will soon be available for Windows!
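The core of this kind of body-tracking interaction is surprisingly simple to reason about: once the sensor gives you skeleton joint positions, a gesture is just a geometric relationship between joints. Here is a hypothetical sketch of detecting a “raise hand” gesture – the joint names, coordinate convention and margin are made up for illustration, and real skeletal-tracking APIs differ.

```python
def hand_raised(joints, margin=0.05):
    """Detect a 'raise hand' gesture from skeleton joint positions.

    joints: dict mapping joint names to (x, y) positions in
    normalised coordinates, with y growing upwards (a simplifying
    assumption for this sketch).
    Returns True if either hand is clearly above the head.
    """
    head_y = joints["head"][1]
    return (joints["right_hand"][1] > head_y + margin
            or joints["left_hand"][1] > head_y + margin)
```

Production systems smooth the joint data over time to filter out sensor noise, but the principle is the same: the “control device” is the user’s own body.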
This technology is not all about gaming, however: Kinect is being used in more serious endeavours such as health care, for physical rehabilitation and in surgery.
Why do we like NUIs?
- They provide a multi-sensory experience. There’s no mediator in the interaction; it’s just our gestures, body movements and voice
- They enhance users’ control over the interface. Again, there isn’t any artificial control device with wheels, buttons or keys
- Users don’t have to be computer literate in order to handle the interface. It is supposed to be designed for ‘natural’ interaction!
- They support simultaneous interaction by more than one user
- They can be fun and entertaining, and they encourage exploration
Where can they go wrong?
- There aren’t any established conventions. Apart from some design patterns in multi-touch devices, other interfaces don’t have a set of guidelines yet. This can cause some confusion among users
- There’s a lack of feedback. For example, there isn’t always an indicator of where the finger is pointing; GUIs have the mouse cursor
- It’s very difficult for complex systems to be used solely with gesture-based interfaces. Think of an everyday task that you do at work. How would it be if you didn’t have a mouse or a keyboard?
- The same gesture can mean different things in different cultures, so there’s a need for cross-cultural conventions
- For all the above reasons, users might easily become annoyed, bored or tired if they can’t figure out how to control the interface efficiently
As you might have concluded, NUIs don’t come without drawbacks, which is to be expected of an emerging technology. One thing is certain: when designing an NUI, focusing on user needs and the context of use becomes even more essential than when designing a GUI. User testing is a must!
What do you think of NUIs? What are your favourite examples?