2010-06-30

The Traveller

My profuse thanks to Bill for lending me The Traveller; it was an exciting and interesting read. The book involves people living 'off the grid', out of the clutches of the Vast Machine, the network of surveillance devices and databases that tracks what everyone is doing.
The privacy issue is important, but my reaction to it is opposite to that of John Twelve Hawks. Some time ago I read about someone who retaliated against CCTV cameras by filming the places that were filming him.

To put it another way, JTH is disturbed by the asymmetry between the watcher and the watched. He solves it by trying to avoid being watched. I too am concerned by this asymmetry, but I think it should be solved by my being able to watch the corporations and governments as closely as they watch me.

2010-06-23

Rainbows End

A Fire Upon The Deep and A Deepness In The Sky by Vernor Vinge are among my top sci-fi books. I borrowed Vinge's Rainbows End from Janos, a novel set here on Earth in the near future. The implicit predictions contained in the descriptions of the lifestyles and technology are interesting. The interactions between the characters are tightly drawn, and a creditable attempt is made to give depth to the personalities.
The plot is highly complex, and I'm not good at following plots. Raymond Chandler is great to read, but utterly impossible to follow. I'm sure there are people out there who can follow every swerve and loop in Rainbows End, but I'm not one of them.

I'll continue to look out for Vinge's work.

2010-06-11

Dave Chalmers Singularity Lecture

A few weeks ago I went to Oxford to say hello to my friend Matt, and we went to a lecture on the singularity by David Chalmers. He covered many aspects, but one idea he talked about was that for safety reasons a superhuman AI should be developed in virtual reality. He said that the most important thing was that information shouldn't be allowed to leak out. Leaking in was less dangerous. A bit like the one-way mirrors they have in police interview rooms. The argument was that if information could leak out, people on the outside could be manipulated by the AI to free it.

An interesting idea, but I'm sceptical that we really can develop AIs safely. Perhaps the best we can do is to try to instil a moral principle that the strong shouldn't harm the weak. Since today's strong will be tomorrow's weak as AIs gain in sophistication, it should be in the strong's interest to uphold this principle. The problem is that it's a principle that can never be enforced by the weak, and so we'll always have to rely on the strong being responsible.