The singularity.
Every geek interested in technological evolution knows about it. The singularity is the point where technological innovation progresses at such lightning speed that we see the appearance of artificial intelligence surpassing human intelligence. It heralds a time when the future can no longer be predicted with any degree of accuracy, and while I’m interested in the vast array of changes this wild evolution may bring, one point in particular fascinates me and takes up a lot of my thinking time: the very emergence of these sentient machines and programs.
In his 1950 paper 'Computing Machinery and Intelligence', Alan Turing introduced a novel method of assessing intelligence in programs, called, predictably, the Turing Test. The test involves a human subject who sits at a computer screen and conducts a conversation with two hidden entities, one a fellow human and the other a computer program. The subject’s task is to work out which of the two conversation partners is the other human and which is the program.
program. Turing surmised that a human would easily spot non sentient programs,
even those which were specifically developed as conversational programs. Like
other key interactive points, humans have evolved to glean an astounding amount
of information from human to human conversations, even when cues such as
visual, sound and even pheromonal are removed from incoming data. Turns out
Turing was right, and in the early days of testing this method humans could
always and easily determine with accuracy which of their conversation partners was another
human and which were programs. Even today, with sophisticated programs designed for
learning and conversation, most humans can tell with relative ease that the entity
they are talking to is a program, though the reasons behind this innate
perception are not always conscious. Like the uncanny valley in human
recognition of digitally created faces, something just doesn’t sit right, and
we just know it.
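As a purely illustrative sketch of the protocol described above (all the names, the trivial stand-in players and the judge here are invented for illustration, not anything Turing specified), the shape of the test can be mocked up in a few lines of Python:

```python
import random

def run_turing_test(judge, human_reply, program_reply, questions):
    """Toy harness for the test described above. The judge converses
    with two hidden parties, labelled A and B; one is the human, the
    other the program. The judge must guess which label hides the
    program. Everything here is invented for illustration."""
    # Randomly assign the human and the program to the labels A and B.
    assignment = {"A": human_reply, "B": program_reply}
    if random.random() < 0.5:
        assignment = {"A": program_reply, "B": human_reply}

    # Put the same questions to both parties and record the answers.
    transcript = [(label, q, assignment[label](q))
                  for q in questions for label in ("A", "B")]

    guess = judge(transcript)               # judge returns "A" or "B"
    actual = "A" if assignment["A"] is program_reply else "B"
    return guess == actual                  # True if the program was spotted

# Trivial stand-ins: a 'program' that merely echoes the question, a 'human'
# with a canned reply, and a judge who flags whichever party echoed.
program = lambda q: q
human = lambda q: "Hmm, let me think about that."
judge = lambda transcript: next(label for (label, q, a) in transcript if a == q)

print(run_turing_test(judge, human, program, ["What is love?"]))  # prints True
```

Against a real conversational program the judge’s job is, of course, nothing like this easy; the point is only the structure of the protocol: two hidden parties, one transcript, one guess.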
Turing and others posited that the first truly sentient man-made program will overcome these innate detection systems and fool most of the people most of the time. Whether this is how we will discover the first sentient entity we create remains to be seen; it’s the immediate aftermath that fascinates me. When the first sentient program is presented or discovered, what level of ability will it have? How fast will it learn and progress? How easily will it be able to recreate itself? What will it think of itself and, importantly, what will it think of us?
We can only presume that sentient technology will progress as quickly and as unpredictably as other technologies once we have reached the spike. We could see computational ability and levels of sentience in these programs that far exceed our own within a few short generations. Even if sentient program evolution progresses at a more sedate pace, its onset means we have entered a realm of ethics with which we are wholly unfamiliar.
Given that we could be very near to the development of the first human-made intelligence, it seems surprising that there is not more dialogue on the ramifications of this turn of events, including discussion of the laws and legislation that would need to govern it.
Deeply important questions will need to be addressed as we wade through the ethical minefields of dealing with these new children of humanity. I cannot help imagining that somewhere, in some university or private research facility, there are even now programs close to what we would perceive as self-aware. Questions come to mind when I think of these early progenitors. How will we know when the line has been crossed? Will we even know? The truth is most likely that we won’t, at least for the first generation or so. Which raises an important question for me: the moment the first self-aware program is deleted or otherwise permanently deactivated, has the first unethical destruction of a non-biological creature just occurred? Who are we to hold the power of life and death over an emergent kingdom of beings?
It may seem frivolous to call for dialogue on the treatment of an as-yet non-existent artificial sentience; however, it is not only the ethical dilemma of their creation and treatment we must deal with. Science fiction has long explored the ramifications of sentient machines evolving beyond our control: dark dystopian futures in which humans are enslaved, our civilisations are destroyed, or our complete annihilation is wrought by machines either malevolently bent on our destruction or as oblivious to our concept of freedom as a child is to an ant’s.
It is naive to assume that the sentient machines we create (especially through evolutionary programs that take direct control out of human hands) will share our goals and values, or even be able to identify and relate to us in any meaningful way.
This is the limitation of the Turing method: it presumes that emerging non-biological sentience will form in programs capable of intelligible communication in spoken or written human languages. The reality may be vastly different.
The flip side of a dystopian relationship with our non-biological children could be an accelerated evolution for us, individually and as a civilisation. Massive computational power in a sentient framework could help us address our shortcomings in planetary management, human interaction and clean energy production, among other things. If we create sentient programs that are both capable of meaningful communication with us and possessed of a desire to help, we could change radically as a society and as a species.
There is, of course, the possibility of humanity merging with AI through biomechanical augmentation and the uploading of human consciousness. We may even see different streams of artificial or mechanical intelligence, divergent in their modes of creation: from generations evolved out of uploaded human minds to crystalline intelligences created by other sentient programs, bearing no resemblance to their human creators whatsoever. The possibilities are as vast as the imagination can conceive.
Humans as a whole tend to ignore the big questions in favour of focussing on the challenges of everyday life. We have a habit of ignoring the future when it comes to practical outcomes, including the creation and implementation of laws and legislation. It seems we hold the idea as a society that problems such as these are for future generations to deal with. I believe this is at least part of the reason we have not addressed the issue of emerging artificial sentience in any practical sense.
As with many other pressing issues, we do ourselves and the planet a disservice by not engaging in a societal discussion about human-created sentience and how we will deal with it when the time comes. And by the look of things, that time may be much closer than we realise.
Greg Egan - Diaspora. You want to read it :-)

Thanks David, Greg Egan is one of my favourite authors and Diaspora is great!