We're moving into a science-fiction future in which artificial intelligence (however defined) will be programmed by us, for us. But, as with so much other policy, we aren't really thinking it through to its logical end:
On some level, I believe many of us realize we are fast becoming the inefficient party in the face of artificial intelligence. Science fiction movies where the robots have "deemed us expendable" give many people cause for concern, fleeting as the thought might be. But we may find out all too soon that autonomous cars were the first salvo in such a future. We program them to understand our values. They make decisions that reflect our values. The decisions frighten us. What does that mean?
So how do we program these things when, right now, we can't even settle the fundamental moral questions ourselves? I really like the examples and logic puzzles in the piece. Well worth the read.
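To make that difficulty concrete, here is a minimal, purely hypothetical sketch (not any real autonomous-driving system or API; all names and numbers are invented for illustration). The point is that any working program has to reduce "our values" to explicit weights, and choosing those weights is exactly the unresolved trolley-problem dispute, now hard-coded.

```python
# Toy illustration: programming "values" means committing to explicit numbers.
from dataclasses import dataclass

@dataclass
class Outcome:
    """Hypothetical consequences of one possible maneuver."""
    label: str
    passengers_at_risk: int
    pedestrians_at_risk: int

def expected_harm(o: Outcome, passenger_weight: float, pedestrian_weight: float) -> float:
    # The weights ARE the moral judgment: setting passenger_weight above or
    # below pedestrian_weight is the contested ethical question, made code.
    return passenger_weight * o.passengers_at_risk + pedestrian_weight * o.pedestrians_at_risk

def choose(options: list[Outcome],
           passenger_weight: float = 1.0,
           pedestrian_weight: float = 1.0) -> Outcome:
    # Pick the maneuver with the lowest weighted expected harm.
    return min(options, key=lambda o: expected_harm(o, passenger_weight, pedestrian_weight))

if __name__ == "__main__":
    options = [
        Outcome("brake in lane", passengers_at_risk=1, pedestrians_at_risk=0),
        Outcome("swerve onto sidewalk", passengers_at_risk=0, pedestrians_at_risk=2),
    ]
    # With equal weights the car brakes in its lane; tilt the weights far enough
    # toward the passengers and it swerves instead. The "decision" is just our
    # own value judgment, made explicit and then automated.
    print(choose(options).label)
```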