What We Learned From the Intelligent Assistants Conference

Last week, at the Mark Hopkins Intercontinental Hotel in San Francisco, the folks at Opus Research hosted their Fourth Annual Intelligent Assistants Conference. From case studies to expert panels to annual awards, the event shed light on the ever-growing industry.

Here's our take on the highlights, and what you need to know if you weren't able to attend.

It's Not All Figured Out Yet

The industry as a whole, and especially its users, is just starting to figure this space out. For one, the industry has a hard time knowing what to call these things. Are they intelligent assistants, digital assistants, voice assistants, virtual assistants, virtual humans, conversational user interfaces, artificial intelligence, augmented intelligence, chatbots, or simply just bots? Yes! They're called all of these things!
We're witnessing a Cambrian explosion of thousands of evolutionary experiments, each recombining interface and back-end components into yet another new reflection of humanity. Which platform? Which language? Which libraries? What use case? Which mode? To what purpose, and to whose benefit? These questions will evolve with these curious systems in a growing sphere of influence. Based on how humans interact with them, we assign the family of "bots" three species: "chatbots" (text), "assistants" (voice), and "conversational avatars" (graphic). Anticipate ongoing evolution of bots, their taxonomies, and us users.

Authentication Is A Must 

During Monday's panel, How Continuous Optichannel Authentication Enables Intelligent Assistants, the panelists discussed how a bot might authenticate a user. Multi-factor approaches combining voice, face, email, personal trust certificates, and other methods were heartily debated. The problem was clear: the bot needs to know that the user is who they claim to be. But by the end of the panel it became clear that only half the picture was being considered: the user, too, needs to know the bot is authentic.
Trust is a bi-directional equation, and the panel only discussed the human side of it. If we consider a bot as a person wearing a mask, we begin to see why authentication matters. To wit, bots can scam, spam, and phish as well as, if not better than, any human because, most simply, humans write them. So our trust of a bot should be cynical, canine even: assume guilty until proven authentic. All bots handling user data should be authenticated; only then should they be trusted.
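To make the bi-directional point concrete, here is a minimal sketch in Python of what mutual message authentication could look like, assuming a hypothetical setup where the bot platform and the client each hold shared secrets provisioned out of band. The bot names, secrets, and messages below are illustrative only, not any vendor's actual API; real deployments would more likely rely on signed certificates or platform-issued tokens.

```python
# Minimal sketch of bi-directional trust: both the bot and the user sign
# their messages, and each side verifies the other before trusting it.
# All identifiers and secrets here are hypothetical, for illustration only.
import hmac
import hashlib

# Hypothetical registries of shared secrets, provisioned out of band.
BOT_SECRETS = {"support-bot": b"bot-side-secret"}
USER_SECRETS = {"alice": b"user-side-secret"}

def sign(secret: bytes, message: bytes) -> str:
    """Return an HMAC-SHA256 signature for a message."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(secret: bytes, message: bytes, signature: str) -> bool:
    """Constant-time check that a signature matches the message."""
    return hmac.compare_digest(sign(secret, message), signature)

# The bot proves to the user that its message is authentic...
bot_msg = b"Your balance is $42."
bot_sig = sign(BOT_SECRETS["support-bot"], bot_msg)
assert verify(BOT_SECRETS["support-bot"], bot_msg, bot_sig)

# ...and the user proves to the bot that they are who they claim to be.
user_msg = b"Transfer $10 to savings."
user_sig = sign(USER_SECRETS["alice"], user_msg)
assert verify(USER_SECRETS["alice"], user_msg, user_sig)
```

The point of the sketch is symmetry: the same verification step runs in both directions, so neither party is trusted until it proves itself authentic.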

Where Should Companies Even Start?

As a company's raison d'être is its customers, a bot's raison d'être is its users. For a bot to be valuable it must have a use, and the way to uncover that use is to look to the user. What is the user having a hard time doing? How can their tasks be made simpler? What jobs are boring, rote, repetitive, and compliance-driven? If your user is doing those things, you have traction.
With thousands of tools on the market, most becoming ever more powerful and easier to use, it is tempting to try to build something divine, some kind of all-knowing machine, like a crystal ball. But that is exactly the wrong direction to run. Instead, go specific: rather than frame the entire sky, frame your customer. Context is key. Context brings meaning. Context reduces a project's scope and prioritizes its goals. Context helps most AI systems. And context comes from the user. So start with your user. Please.