On why human explanations of the reasons for the moves AlphaGo makes are likely nonsensical.

Imagine someone in the 11th century trying to figure out how people in the 21st century might cool their houses. Suppose they had enough computing power to search lots and lots of possible proposals, but could evaluate those proposals using only their own 11th-century knowledge of how the universe works. Suppose they had so much computing power that at some point they randomly considered a proposal to construct an air conditioner. A proposal to route water through the home and evaporate it might strike them as something that could plausibly make the house cooler, if they saw the analogy to sweat. But if the mechanical diagram of an air conditioner came up as a candidate solution, they would toss it out as a randomly generated arcane diagram. They couldn't understand why it would be an effective strategy for cooling their house, because they wouldn't know enough about thermodynamics and the relationship between pressure and heat.


Embedded Link

(Long.) As I post this, AlphaGo seems almost sure to win the third game and the match. – Eliezer Yudkowsky | Facebook

At this point it seems likely that Sedol is actually far…
