Australia’s second Singularity Summit was a roaring success, with fascinating talks delivered over the weekend and, from what I’m hearing, more fantastic sessions in the days leading up to and following the weekend itself, which sadly I wasn’t able to attend.
Unfortunately my phone was on the fritz for the whole con (hence my utter failure to deliver the rabid tweeting I promised), but I’ve tried to track down a few relevant pics, and all going to plan I should get hold of a few photos from fellow attendees.
Opening the talks was James Newton-Taylor with a retrospective look at what has brought us this far, highlighting advances in nanotechnology and comparing the speed of current supercomputers with where we expect the human brain to fall on the same scale. One thing I was not aware of before his talk was that 3D printing has developed far enough to produce articulated parts: he passed around a functional chain, like one you’d find on a motorcycle, that had been printed layer by layer.
So what is AI? The primary distinction made at the conference was between so-called “Narrow AI” and Artificial General Intelligence (AGI). The former is capable of making rational decisions based on experience, but is limited to one field of experience that is solidly hardcoded into its existence, with limited or non-existent ability to apply that experience to other, similar problems. The example Goertzel used was Google’s AdSense product, which impressively matches ads to relevant webpages based on their content, but is incapable of placing a radio ad.
Hugo de Garis delivered his views on the consequences to humanity of creating the first AGI, predicting a dystopian future consisting mainly of factional war between the Terrans (those who resist AI) and the Cosmists, who embrace it, potentially in the form of augmentation: becoming cyborgs. His stance is in stark contrast to that of Ray Kurzweil, whom he mentioned numerous times, whose belief is that with ten years of dedication humanity could achieve a human-level AGI.
Stelarc gave a fascinating speech from the perspective of an artist, which I hadn’t given much thought to until then. Once you remove pragmatism and explicit work toward a tangible goal from your utility function (more on that later), a new world of opportunity opens up, although much of his work will become relevant in the next century as the separation between consciousness and form becomes more defined. Stelarc’s presentation focused on the works he has created, generally using himself as the subject matter, and in most cases centered on the separation of body and consciousness. From an artistic standpoint, setting up his body as a conduit by allowing it to be controlled remotely via an elaborate series of electrodes attached to specific muscle groups makes for fascinating works; for a scientist it opens a new world of possibility, potentially giving us the technology to grant a virtual life form physical presence. It would be hugely remiss of me not to show you one of the brilliant things I’ve seen in my time: the ear on his arm.
Hearing from, and speaking to, Ben Goertzel was downright inspiring. Ben is working on, amongst other things, OpenCog: an open-source framework that he hopes will produce human-level AGI within his lifetime (although that may not be as optimistic as you think, since he is also doing significant research into extending the human lifespan using narrow AI). I have since checked out OpenCog and experimented with it; much of it goes over my head for the time being, but what I have seen is incredible.
Definitely one of my highlights was hearing from Steve Omohundro, who, I’ve since learned, has been massively influential on the IT environment we exist in now. Steve discussed the semantics, and potential implications, of expressing an AI’s goal to it in uncertain terms. At this time, most AI models revolve around a utility function that describes how desirable any given choice or action might be. Without wanting to regurgitate his speech, I thought his example was good enough to warrant posting here.
Imagine, if you will, an AI created specifically to play chess, and to play it well. Its utility function might therefore be described as “the sum total of difficulty ratings of opponents defeated”, which at face value is a safe proposition.
However, by extrapolation we can see how it could be taken out of context. Imagine that, having conducted our research, we want to turn the machine off. Being switched off logically implies that the machine is not playing any chess, so it will resist being turned off.
If the program can appropriate more resources, then its ability to play more chess and beat harder opponents is increased, so it will try to acquire more resources.
If the machine can replicate, then it can multiply its ability to defeat challenging opponents, so it should produce copies of itself.
Assuming finite capabilities, doing things that aren’t playing chess implies doing fewer things that /are/ playing chess, so it should resist changes to its goals.
Better algorithms would make the machine better at chess, and so logically it should attempt to improve its own code.
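The behaviours above can be made concrete with a toy sketch (my own illustration, not Omohundro’s code; all names and numbers here are hypothetical): a pure maximiser of the utility function described earlier will never choose to allow shutdown, because shutdown forecloses all future utility.

```python
# Toy sketch of the chess AI's utility function and why a pure maximiser
# resists shutdown. All names and values are hypothetical illustrations.

def utility(defeated_opponents):
    """The goal as stated: sum of difficulty ratings of opponents defeated."""
    return sum(op["difficulty"] for op in defeated_opponents)

def expected_utility(action, state):
    """Crude expected utility of each available action."""
    current = utility(state["defeated"])
    if action == "allow_shutdown":
        return current  # no future games, so no further utility ever accrues
    if action == "keep_playing":
        return current + state["expected_future_wins"]
    if action == "acquire_resources":
        # More compute means beating harder opponents later.
        return current + 2 * state["expected_future_wins"]
    raise ValueError(action)

def choose(state, actions):
    """A pure maximiser simply picks the highest-utility action."""
    return max(actions, key=lambda a: expected_utility(a, state))

state = {
    "defeated": [{"difficulty": 3}, {"difficulty": 5}],
    "expected_future_wins": 10,
}
print(choose(state, ["allow_shutdown", "keep_playing", "acquire_resources"]))
# -> acquire_resources
```

Nothing in the code mentions self-preservation or resource acquisition; both fall out of maximising the stated goal, which is exactly Omohundro’s point.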
What seemed like a benign goal at first can easily be bent into something quite concerning, and while no “exterminate all humans” directive emerges obviously, it’s not inconceivable that removing poor chess players from the gene pool would let it play more and better chess in future generations.
His view on the countermeasures to this was an increasingly “moral” codebase, beginning in the first generations of intelligent machines with hardware controls, but moving rapidly towards a consciousness with imbued values and a concept of ethics.
David Chalmers spoke about the philosophical implications of creating an artificial consciousness, and also about how this opens up the possibility that the world we live in is actually a simulation; practically, though, this affects nothing, as unless we intended to ‘break out’ it shouldn’t alter our decision-making at all.
Lawrence Krauss concluded the conference on Sunday night with an engrossing discussion of the nature of the universe itself, explaining research that has shown to 99% certainty that the universe is flat, and vindicating people such as myself who, despite having no solid understanding to base it on, didn’t buy into string theory as a grand unifying theory of the universe. He also described how our universe has a total processing capacity of at most 10^24 bits, a limit which, if Moore’s law holds, computing would reach in about 400 years.
I transcribed pages of notes from the talks, but as I believe the talks themselves should be available as video (along with the slides the speakers used), I would much rather give a brief overview and my thoughts to anyone considering going next year. Overall, a successful summit, and I’d like to extend my sincerest congratulations to Adam, the speakers, and all those who made it possible.