For the past few months I've been thinking about ways to get artificial intelligence (AI) to solve the hard problems in biology, since I'm having so much trouble solving them myself. This article from the Future of Life Institute, an AI think tank, points out that others have started out with similar intentions but have come to realize that they need to take particular care not to unleash a disaster in the process:
Human reasoning is based on an understanding derived from a combination of personal experience and collective knowledge derived over generations, explains MIRI researcher Nate Soares, who trained in computer science in college. For example, you don’t have to tell managers not to risk their employees’ lives or strip mine the planet to make more paper clips. But AI paper-clip makers are vulnerable to making such mistakes because they do not share our wealth of knowledge. Even if they did, there’s no guarantee that human-engineered intelligent systems would process that knowledge the same way we would.

I'd always thought that no one would be stupid enough to attach a superintelligent computer to actuators that could seriously affect the world, but I've come to see that this is naive. The moment an AI is shown to be successful in a small field, it will almost instantly be given greater and greater responsibilities so that its owners can reap maximum rewards.