Trinitonian
The Student News Site of Trinity University

Is Your Computer Your Best Friend?

Humans have a remarkable ability to project human-like characteristics onto inanimate objects. Personification isn’t just a literary technique; it is something we seem to do regularly with the objects that surround us in daily life, including our machines. Roombas and other little robots that skitter around people’s houses are often treated more like pets than like machines running simple algorithms to make sure they cover every part of a room. This leads to a question: how will we treat devices that run much more complex software? How will we think about devices whose software includes elements that can genuinely be viewed as creative or emotional?

In my last piece for the Trinitonian, I looked at some of the advancements that have come with the development of a form of artificial intelligence called deep learning. One of these is improvement in our ability to interact with machines through voice interfaces. This can go a long way toward making computers seem more human, but only if they talk about the right things. Just answering user queries or dictating messages isn’t going to make most people think of their personal computing device as more than a little machine. But if your conversations with your device felt more human, you just might find that you often prefer talking to it over talking to other people.

In the book “The Age of Spiritual Machines,” Ray Kurzweil runs through how this might come to pass. It begins with a virtual assistant that is far from being human, something like what you have on your phone today. While your ability to interact with that assistant is limited, it definitely knows a lot about you. I am an Android user and I make extensive use of the Google ecosystem. In theory, the Google Assistant knows my calendar, my emails, my location history, all the photos taken on my phone, my music preferences and much of my search history. In Kurzweil’s scenario, the digital assistant gains capabilities over time, and that intimate knowledge of its user makes it the perfect companion. So where are we in our progress toward that future?

The newest assistant on the Google Pixel has added the ability to keep track of context when you are requesting information. So if you follow up the question “What is the weather like in London today?” with “And in Moscow?” it knows that the second query still refers to the weather. That is a long way from engrossing conversation, but Google isn’t currently aiming for conversation in its digital assistant. Other groups are trying to create conversational programs; every year they compete to pass the Turing Test, a standard test in AI in which a machine passes if it can fool human judges into believing it is human often enough. This past year a group claimed to have officially passed the test, but they did so by modeling their AI as a young person whose first language was not English. That is progress, but there is still work to be done before we are at the level of general conversation.
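To make the idea of context tracking concrete, here is a toy sketch in Python. It is not Google’s implementation; the class, the crude parsing and the single “weather” intent are all illustrative assumptions. The point is simply that the assistant remembers the last intent, so a follow-up like “And in Moscow?” is still treated as a weather question.

class ContextualAssistant:
    def __init__(self):
        self.last_intent = None  # remembers what the conversation was about

    def _extract_city(self, utterance):
        # Naive parsing for illustration only: take the word right after "in".
        return utterance.rstrip("?").split(" in ")[-1].split()[0]

    def handle(self, utterance):
        if "weather" in utterance.lower():
            self.last_intent = "weather"
        if self.last_intent == "weather" and " in " in utterance:
            return f"Fetching the weather for {self._extract_city(utterance)}."
        return "Sorry, I didn't understand that."

assistant = ContextualAssistant()
print(assistant.handle("What is the weather like in London today?"))  # ... London.
print(assistant.handle("And in Moscow?"))                             # ... Moscow.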

For you to feel like it can relate to you, your digital friend would need to display empathy. It will have to understand and react to your emotions. The task of reading human emotions has been largely cracked. Two years ago, Current Biology published a paper by scientists at the University of California, San Diego reporting a program that could correctly identify whether an emotion was genuine or faked 85 percent of the time. Humans got it right only about half the time. There has been a lot of progress in this type of technology since then, thanks to its commercial value. Retailers would love to know what people are thinking when they look at items, and movie studios would appreciate real-time, accurate reporting on human emotions during films. Both of those are happening, again with better accuracy than humans achieve, because cameras do a better job of picking up the brief flickers of “micro-expressions” that pass across our faces without our control.

So your assistant will know your emotional state, but can it react to it properly? There is work in that field as well. Right now that work mostly takes the form of research on robots designed to help children who have special needs, such as autism.

Your new digital friend will also need some appearance of creativity to stay interesting over time. Just reacting to your emotional state, even if it does so perfectly, can only go so far. While creativity is hard to define, this is another area where deep neural networks are playing a big role. After being trained on large data sets, these systems are being used to create art and compose music, activities that we normally associate with creativity.
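As a rough illustration of the generative idea, here is a minimal Python sketch. Real systems use deep neural networks trained on enormous data sets; this toy version uses a simple Markov chain and a made-up handful of melodies, but the pattern is the same: learn statistics from examples, then sample something new.

import random

# Tiny, made-up "training set" of melodies (note names only).
training_melodies = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "E", "D", "C"],
    ["G", "E", "D", "C", "D", "E", "C"],
]

# "Training": count which note tends to follow each note.
transitions = {}
for melody in training_melodies:
    for current, following in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(following)

# "Composing": start on C and repeatedly sample a likely next note.
note = "C"
new_melody = [note]
for _ in range(7):
    note = random.choice(transitions[note])
    new_melody.append(note)

print(" ".join(new_melody))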

So while it is unlikely that you will prefer the company of your phone to that of your human friends today, many of the pieces are in place, or close to it, for you to enjoy stimulating conversation with your personal assistant. When that day comes, these assistants will know you and your interests better than any human possibly could.

Mark Lewis is a professor of computer science. He’s also an avid rollerskater.
