“Please take some rest, it’s been three hours on the laptop.”
“You look tired; go and get some rest.”
“It’s late in the night, please have some sleep.”
Wouldn’t it be creepy if a laptop or mobile screen said these words to you instead of your mother? Well, not anymore. Researchers have been working on the next generation of smart equipment: equipment that can detect human behavior, emotions, and state, and then respond accordingly.
So, don’t be afraid if your newly bought laptop tells you to take care of yourself as you work, interacts with you emotionally, or adjusts the lighting of your room depending on the mood you are in.
Empathy
How? Well, we can look forward to a combination of deep learning, newer architectures such as edge computing, machine learning, artificial intelligence, and a host of devices (cameras, sensors, etc.) that can detect our current state, analyze it quickly, and then respond with an appropriate action drawn from a database of actions, growing smarter as they interact with us and learning all the time about our biggest asset: EQ.
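As a rough illustration, here is a minimal, hypothetical sketch in Python of that sense-analyze-respond loop. The class name, the emotion labels, the confidence threshold, and the action table are all assumptions made for illustration; they do not correspond to any particular product’s API.

```python
import time
from dataclasses import dataclass

# Hypothetical reading from a camera/sensor pipeline (names are illustrative only).
@dataclass
class StateReading:
    emotion: str         # e.g. "tired", "stressed", "neutral"
    confidence: float    # 0.0 - 1.0
    hours_active: float  # continuous time spent at the screen

# A small "database of actions": map a detected state to a response.
ACTIONS = {
    "tired":    "Please take some rest, it's been a while on the laptop.",
    "stressed": "Dimming the lights and pausing notifications.",
    "neutral":  None,  # nothing to do
}

def sense() -> StateReading:
    """Stand-in for the real detection step (camera plus a model running at the edge)."""
    return StateReading(emotion="tired", confidence=0.82, hours_active=3.1)

def respond(reading: StateReading) -> None:
    """Analyze the reading and act only when the model is reasonably confident."""
    action = ACTIONS.get(reading.emotion)
    if action and reading.confidence > 0.7:
        print(action)

if __name__ == "__main__":
    # The loop a smart device might run continuously, learning as it goes.
    respond(sense())
    time.sleep(1)
```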
It certainly has to be a combination of our body gestures, voice, facial expressions, and perhaps other features as we progress on this path. This branch of computing is called “affective computing.” Affective computing draws on the fields of computer science, cognition, psychology, education, ethics, and information technology, and takes its “affect” from psychology, where the term essentially means emotion.
There are companies that have made significant progress in this field and are beginning to move beyond the lab, creating use cases that touch our day-to-day lives. For instance, some of them have made strides in call voice analytics for effectively detecting fraud.
IBM Watson can reportedly detect sarcasm and also offers an emotion-detection API that can be applied across various use cases to understand, interpret, and process human emotions, experiences, and feelings.
Then there is Affectiva, which is working on emotion recognition and analysis software that uses standard devices (like a webcam). It has a massive emotional data repository of 5.7 million faces analyzed across 75 countries, along with an Affdex SDK that can be embedded in applications and provides default metrics: seven emotion metrics, 20 facial expression metrics, 13 emojis, and four appearance metrics.
These metrics are a way of detecting and plotting human expressions along with a degree of confidence (source: Affectiva.com). Affectiva is now working with a very large Japanese car manufacturer to integrate technology that can alert the driver if he or she is feeling sleepy or distracted and can take further action (like calling emergency services or family) based on that.
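To make the idea concrete, here is a hedged sketch of how an application might consume per-frame metrics of this kind (a score with an implied degree of confidence) and raise a drowsiness alert. The frame format, the "eye_closure" metric name, the thresholds, and the alert_contacts hook are assumptions for illustration, not the actual Affdex SDK interface.

```python
from typing import Dict, Iterable

# Hypothetical per-frame output: metric name -> score in [0, 100],
# loosely modeled on emotion metrics reported with a degree of confidence.
Frame = Dict[str, float]

DROWSINESS_THRESHOLD = 70.0   # assumed score threshold, for illustration
CONSECUTIVE_FRAMES = 30       # roughly one second at 30 fps

def alert_contacts(message: str) -> None:
    """Placeholder for notifying emergency services or family."""
    print(f"ALERT: {message}")

def monitor(frames: Iterable[Frame]) -> None:
    """Raise an alert if eye closure stays above the threshold long enough."""
    streak = 0
    for frame in frames:
        if frame.get("eye_closure", 0.0) >= DROWSINESS_THRESHOLD:
            streak += 1
        else:
            streak = 0
        if streak >= CONSECUTIVE_FRAMES:
            alert_contacts("Driver appears drowsy; consider pulling over.")
            break

if __name__ == "__main__":
    # Simulated stream: 40 frames with eyes mostly closed.
    monitor([{"eye_closure": 85.0}] * 40)
```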
The uses may span our whole lifestyle and need not be limited to the elite.
For instance, your shirt might tell your flask to offer hot water when you cough, or a care-giving robot might understand the context before administering medicine to a patient. It may be as simple as acquiring the common sense not to ask too many questions when someone is sad.
Gestures, speech, perception, learning, communication, even our experience itself: all have emotion as a fundamental state attached to them, and machines can certainly handle these more politely for us as they become smarter and emotionally equipped.
But as machines become emotionally smarter, we face some intriguing issues: the ‘machine bias’ that artificial intelligence has sometimes been found to exhibit, the risk of systems being ‘too rational,’ and the question of where to draw the line on their intrusion.
Additionally, a fundamental question remains: how (and how much) can we measure a machine’s understanding of the human values and ethics that we have learned over time, and continue to learn, from our social interactions? How can we teach machines to be moral?
For instance, a driverless automobile may face a situation where saving someone and avoiding a collision conflict at the same moment. How would it react? Will there be someone to make it work smarter and take the decision a human being would have taken in such a situation?
Data-driven rationality needs to be combined with empathy if we are to take the next step toward artificial intelligence services (which themselves feed into this concept). But where do we draw the line and stop? Or how do we make this empathy self-aware, so that it draws a line for itself based on the context?
These are certainly the questions looming large in front of researchers and, as they say, ‘technology is the easiest part.’ Only time will tell whether this is creepy or a (humanely) personal touch, an intrusion of privacy or a help, context awareness or too much data sharing, but the essence of the human experience with technology is surely going to be greatly affected by the advent of this era.