Why You Want Your Drone to Have Emotions – IEEE Spectrum

Researchers from Stanford University, led by Dr. Jessica Cauchard, have established an “emotional model space” for drones: a set of eight emotional states (personalities), each with defining characteristics that human users can easily recognize and that the drone can convey through simple actions. These personalities are brave, dopey, sleepy, grumpy, happy, sad, scared, and shy. For example, a drone with a brave personality moves quickly and smoothly, and if you ask it to go backwards, it will instead turn around and go forwards. A dopey drone flies with a bit of a wobble, a grumpy drone may make you repeat commands, and a sad drone flies low to the ground.
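
The article does not publish the model’s parameters, but the idea of mapping a personality onto a handful of motion parameters is easy to sketch. The Python snippet below is a hypothetical illustration: the Personality fields, the numeric values, and the apply_personality function are assumptions chosen to mirror the behaviors described above (a brave drone refusing to back up, a dopey drone wobbling, a sad drone flying low), not the researchers’ actual implementation.

```python
# Hypothetical sketch: map a drone "personality" to simple motion parameters.
# All names and values here are illustrative assumptions, not the published model.
from dataclasses import dataclass
import random

@dataclass
class Personality:
    speed_scale: float    # multiplier on commanded speed
    wobble: float         # magnitude of random lateral noise
    altitude_bias: float  # metres added to (or subtracted from) the target altitude
    obey_reverse: bool    # whether a "fly backwards" command is executed as given

PERSONALITIES = {
    "brave": Personality(speed_scale=1.3, wobble=0.0, altitude_bias=0.0, obey_reverse=False),
    "dopey": Personality(speed_scale=0.8, wobble=0.3, altitude_bias=0.0, obey_reverse=True),
    "sad":   Personality(speed_scale=0.6, wobble=0.0, altitude_bias=-1.0, obey_reverse=True),
}

def apply_personality(command_velocity, personality_name):
    """Modify a commanded (forward, lateral) velocity so it expresses a personality."""
    p = PERSONALITIES[personality_name]
    forward, lateral = command_velocity
    if forward < 0 and not p.obey_reverse:
        forward = -forward  # a brave drone turns around and goes forwards instead
    lateral += random.uniform(-p.wobble, p.wobble)  # a dopey drone wobbles
    return forward * p.speed_scale, lateral * p.speed_scale

print(apply_personality((-1.0, 0.0), "brave"))  # backwards command becomes forward motion
```

The point of the sketch is that each personality reduces to a few easily perceived adjustments of speed, stability, and altitude, which is the kind of simple, recognizable action the researchers describe.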

Source: Why You Want Your Drone to Have Emotions – IEEE Spectrum

Uncle Sam’s boffins stumble upon battery storage holy grail • The Register

A significant factor currently holding back renewable energy sources like solar and wind is that energy storage is often inefficient and expensive. When the sun stops shining or the wind stops blowing, that energy source is cut off. With better energy storage, however, the economics of the entire industry would change.

Source: Uncle Sam’s boffins stumble upon battery storage holy grail • The Register

Deep learning helps robots perfect skills | KurzweilAI

Deep learning enables the robot to perceive its immediate environment, including the location and movement of its limbs. Reinforcement learning means improving at a task through trial and error. A robot with both skills could refine its performance based on real-time feedback.
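
As a concrete illustration of the trial-and-error part, here is a minimal sketch of tabular Q-learning on a toy one-dimensional reaching task. It is a deliberately simplified stand-in: the article describes deep reinforcement learning with neural networks on a physical robot, whereas this example’s state space, reward values, and learning constants are assumptions made purely to show the learn-from-feedback loop.

```python
# Minimal trial-and-error learning sketch: tabular Q-learning on a toy task.
# A simplified stand-in for the deep RL system described in the article.
import random

N_POSITIONS = 10      # discrete positions the "gripper" can occupy
TARGET = 7            # position that yields a reward
ACTIONS = [-1, +1]    # move left or right
q = {(s, a): 0.0 for s in range(N_POSITIONS) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    state = random.randrange(N_POSITIONS)
    for _ in range(20):
        # Trial: act mostly greedily, sometimes at random (exploration).
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_POSITIONS - 1)
        reward = 1.0 if next_state == TARGET else -0.1
        # Error feedback: nudge the value estimate toward what was just observed.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if state == TARGET:
            break

# After training, the greedy policy should step toward the target from any start.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_POSITIONS)])
```

Each episode is a trial, the reward is the feedback, and the value table is what improves, so the greedy policy printed at the end steps toward the target from either side.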

Applications for such a skilled robot might range from helping humans with tedious housekeeping chores to assisting in highly detailed surgery. In fact, Abbeel says, “Robots might even be able to teach other robots.” Or humans?

Source: Deep learning helps robots perfect skills | KurzweilAI

Rats vs. computers vs. rat cyborgs in maze navigation | KurzweilAI

What would happen if we combined synthetic and biological systems, creating an intelligent cyborg rat? How would it perform?

Researchers in China decided to find out by comparing the problem-solving abilities of rats, computers, and rat-computer “cyborgs,” as they reported in an open-access PLOS ONE paper.

Source: Rats vs. computers vs. rat cyborgs in maze navigation | KurzweilAI

Magnetic mind control works in live animals, makes mice happy | Ars Technica

With a few more genetic tweaks, the resulting hybrid protein, dubbed Magneto, proved to be viable and responsive to magnetic fields in cells. When the researchers moved a magnet near the cells carrying the hybrid, Magneto jerked, opening the ion channel. This caused an influx of ions into the cells, sparking an electrical change that could fire off brain signals.

When the researchers put the gene for Magneto in zebrafish, a model organism for brain development, they found that the hybrid could alter complex behaviors. Using a genetic switch, the researchers made Magneto active in the zebrafish nerve cells that are involved in sensing touch. And, when they added a magnetic field, the fish upped the amount of time they coiled their tails, a touch-induced escape response.

The researchers next tested Magneto in mice, a mammalian model. By making Magneto active in cells that are responsive to dopamine—a neurotransmitter critical for reward-motivation pathways in the brain—the researchers could charm the mice into preferring an area of a chamber with a magnetic field.

Source: Magnetic mind control works in live animals, makes mice happy | Ars Technica

As Technology Barrels Ahead—Will Ethics Get Left in the Dust? – Singularity HUB

Technology is moving faster than our ability to understand it, and there is no consensus on what is ethical. It isn’t just lawmakers who are poorly informed; the originators of the technologies themselves don’t understand the full ramifications of what they are creating. They may take strong positions today based on their emotions and financial interests, but as they learn more, they too will change their views.

Source: As Technology Barrels Ahead—Will Ethics Get Left in the Dust? – Singularity HUB

The supremely intelligent rat-cyborg | PLOS Neuroscience Community

These findings from Yu and colleagues suggest that optimal intelligence may not reside exclusively in man or machine, but in the integration of the two. By harnessing the speed and logic of artificial computing systems, we may be able to augment the already remarkable cognitive abilities of biological neural systems, including the human brain. The prospect of computer-assisted human intelligence raises obvious concerns over the safety and ethics of its application. Are there conditions under which a human “cyborg” could put humans at risk? Is altering human behavior with a machine tantamount to “playing god” and a dangerous overreach of our powers?

Source: The supremely intelligent rat-cyborg | PLOS Neuroscience Community

DOD officials say autonomous killing machines deserve a look | Ars Technica

[ … ] military officials are looking hard at the possibility of developing robotic systems that are capable of acting on their own if remote control is cut off and decisions must be made on when to deploy a weapon—whether it’s an armed drone dropping a bomb or launching a missile or a ground robot firing weapons. “These are hard questions, and a lot of people outside of us tech guys are thinking about it, talking about it, engaging in what we can and can’t do,” she said. “That’s important. We need to understand and know that it doesn’t necessarily need to happen, but we also have to put the options on the table because we are the worst-case scenario guys.”

Source: DOD officials say autonomous killing machines deserve a look | Ars Technica

Apple and the FBI think iPhones are safes. A philosopher explains what they really are.

Our electronic devices—or at least many of the processes that occur within them—are literally parts of our minds. And our consideration of Apple’s and the FBI’s arguments ought to flow from that fact.

This may sound ridiculous. But in an important co-authored essay and then in a book, the philosopher Andy Clark argued for something called the extended mind hypothesis. The basic idea was that we have no reason to treat the brain alone as the only place where mental processes can occur.

Source: Apple and the FBI think iPhones are safes. A philosopher explains what they really are.