Think of a new parent unsure whether their baby’s rash is a harmless skin irritation or an early sign of meningitis, or someone with a sports injury wondering whether they’ve sprained their ankle or ruptured a ligament.
Using a setup similar to Siri or Cortana, the individual could talk directly to an app, listing their symptoms and concerns, and be advised whether to take a couple of aspirin or get themselves to the emergency room.
A system called AI2, developed at MIT’s Computer Science and Artificial Intelligence Laboratory, reviews data from tens of millions of log lines each day and pinpoints anything suspicious. A human takes it from there, checking for signs of a breach. The one-two punch identifies 86 percent of attacks while sparing analysts the tedium of chasing bogus leads.
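The workflow described above, an unsupervised detector surfacing suspicious events and an analyst's verdict feeding back into the system, can be sketched roughly as follows. This is a toy illustration only, not the actual AI2 system: the z-score detector, the feedback rule, and every name here are invented for the example.

```python
# Toy human-in-the-loop anomaly detection, loosely in the spirit of AI2.
# Stage 1: an unsupervised detector flags statistical outliers in log activity.
# Stage 2: an analyst labels the flags, and the labels tune the detector.
from statistics import mean, stdev

def flag_outliers(counts, threshold=2.5):
    """Flag log-event counts more than `threshold` std devs above the mean."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if sigma and (c - mu) / sigma > threshold]

def refine_threshold(threshold, labels):
    """Analyst feedback: lower the bar if attacks were confirmed,
    raise it if the flags were all false alarms."""
    if any(labels):
        return max(1.0, threshold - 0.5)  # catch more next round
    return threshold + 0.5                # cut down on false positives

# Day 1: mostly routine traffic, one suspicious spike at index 4.
counts = [100, 98, 103, 97, 500, 101, 99, 102, 98, 100]
suspects = flag_outliers(counts)           # indices handed to the analyst
labels = [True for _ in suspects]          # analyst confirms a breach
threshold = refine_threshold(2.5, labels)  # detector adapts: now 2.0
```

The division of labor is the point: the machine narrows tens of millions of events down to a short review queue, and the human verdicts steadily sharpen what gets queued next.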
Watson is more capable and human-like than ever before, especially when injected into a robot body. We got to see this first-hand at NVIDIA’s GPU Technology Conference (GTC) when Rob High, an IBM fellow, vice president, and chief technology officer for Watson, introduced attendees to a robot powered by Watson. During the demonstration, we saw Watson in robot form respond to queries just like a human would, using not only speech but movement as well. When Watson’s dancing skills were called into question, the robot responded by showing off its Gangnam Style moves.
This is the next level of cognitive computing that’s beginning to take shape now, both in terms of what Watson can do when given the proper form, and what it can sense. Just like a real person, the underlying AI can get a read on people through movement and cognitive analysis of their speech. It can determine mood, tone, inflection, and so forth.
Watson and other systems, as they become more intelligent, “will have to communicate with us on our terms,” High said. “They will have to adapt to our needs, rather than us needing to interpret and adapt to them.”
They will have to understand not only the questions humans ask and the statements they make, but also the visual and other non-verbal cues, such as facial expressions, the emphasis placed on words in a sentence, and the tone of voice, that people pick up on in the normal course of interacting with each other. High wants to “change the role between humans and computers.”
It seems more obvious every day that human and machine are rapidly converging. The transparency inherent in technology will eventually erode privacy. Automation will eventually eliminate the need for much human labor. The window between now and then is short, and we need a master plan for managing the disruption that comes with it.
“Alexa—and Siri and Cortana and all of the other virtual assistants that now populate our computers, phones, and living rooms—are just beginning to insinuate themselves, sometimes stealthily, sometimes overtly, and sometimes a tad creepily, into the rhythms of our daily lives. As they grow smarter and more capable, they will routinely surprise us by making our lives easier, and we’ll steadily become more reliant on them.”
An ambitious new program, funded by the federal government’s intelligence arm, aims to bring artificial intelligence more in line with our own mental powers. Three teams composed of neuroscientists and computer scientists will attempt to figure out how the brain performs feats of visual identification, then build machines that do the same. “Today’s machine learning fails where humans excel,” said Jacob Vogelstein, who heads the program at the Intelligence Advanced Research Projects Activity (IARPA).
“Unlike human day care staff, the Or-B units don’t suffer from mental or physical fatigue. They’ll never tire of repeating the same stories and performing the same daily tasks,” Hara said.
“Furthermore, as they can access a vast library of ‘Anpanman’ and ‘Teletubbies’ episodes, they can quickly defuse any temper tantrum or crying jag that might occur.”
In terms of teaching and nurturing, Or-B units have certain advantages.
“Or-B’s voice can be female, male, or gender-neutral,” said Yoshikazu Musaki, a specialist in early childhood education. Furthermore, its learning capabilities, coupled with the latest in artificial intelligence, will allow it to customize its care to each child, Musaki added.