Sunday, 29 March 2015

I suspect that a lot of people will be opposed or indifferent to the idea of life extension until such time as the technology is incontrovertibly feasible. And by that point, there'd no longer be any need for public approval of life extension.

We in the life extension community might increase the effectiveness of our marketing by explicitly conflating life extension, healthspan extension, and the promise of experiencing a much, much better future. In other words, eternal life in heaven. In too many conversations, once I raise the possibility of life extension, the next objections are either "Why would you want to live forever as a frail old person?" or "How will we feed all those humans on earth if no one dies?"

Perhaps I'd pique more people's interest if I led with, "The next 100 years are gonna be so great: we'll cure all diseases and have vacations on Mars and no one will have to work anymore. That's why I'm working on biotechnologies that will reverse aging in people today, so that we can all enjoy the wonderful things that are yet to come."

Saturday, 21 March 2015

For the past few months I've been thinking about ways to get artificial intelligence (AI) to solve the hard problems in biology, since I'm having so much trouble solving them myself. This article from the Future of Life Institute, an AI think tank, points out that others have started out with similar intentions but have come to realize that they need to take particular care not to unleash a disaster in the process:
Human reasoning is based on an understanding derived from a combination of personal experience and collective knowledge derived over generations, explains MIRI researcher Nate Soares, who trained in computer science in college. For example, you don’t have to tell managers not to risk their employees’ lives or strip mine the planet to make more paper clips. But AI paper-clip makers are vulnerable to making such mistakes because they do not share our wealth of knowledge. Even if they did, there’s no guarantee that human-engineered intelligent systems would process that knowledge the same way we would.
I'd always thought that no one would be stupid enough to attach a superintelligent computer to actuators that could seriously affect the world, but I've come to see that this is naive. The moment an AI is shown to be successful in a small field, it will almost instantly be given greater and greater responsibilities so that its owners can reap maximum rewards.