Luke Muehlhauser, CEO of the Singularity Institute
for Artificial Intelligence, a prominent group promoting the
manifestation of a technological singularity, recently made a
very surprising statement regarding the dangers that would accompany
the creation of artificial super-intelligence:
“Unfortunately, the singularity may not be what you're hoping for. By default the singularity (intelligence explosion) will go very badly for humans, because what humans want is a very, very specific set of things in the vast space of possible motivations, and it's very hard to translate what we want into sufficiently precise math, so by default superhuman AIs will end up optimizing the world around us for something other than what we want, and using up all our resources to do so.”
For those of you unfamiliar with the
concept of the technological singularity... it has to do (generally
speaking) with programming a thinking computer that initially has the
same cognitive abilities as a human being. Because computers
regularly become able to process ever more information, ever faster,
a computer that achieved a human level of intellect would,
conceivably, surpass that level in very short order – arguably in
the next moment and almost certainly within the next few years. What
would start with a computer able to pass a Turing test
(basically, fooling human observers as to whether they were having a
dialogue with a human or a machine) would shortly be followed by a
self-conscious machine that would be intellectually capable of
manipulating humans and taking human rationality to its furthest
degree.
As indicated by Muehlhauser's
statement, this could all lead to disastrous results for humanity.
And, while I can't help thinking this was some sort of
subconscious confession on his part, his expressed concern is reflected
by statements from other prominent individuals who work in fields
related to a technological singularity. For example, Bill Joy, the
co-founder of Sun Microsystems, has written about “Why
the future doesn't need us,” explaining some of the dangers
posed by a potential technological singularity. Even more optimistic
figures in the related fields, like Ray Kurzweil, have been quoted as
saying, “I’m not oblivious to the dangers, but I’m optimistic
that we’ll make it through without destroying civilization.”
Personally, I'm not convinced that a
singularity of the sort envisioned by the aforementioned
technologists is possible or likely. Perhaps it is, but
I'm still wondering why we aren't already driving flying cars and
living in the techno-utopia promised by similar technologists in
the past. And when I consider the hypothetical dangers posed by the
proposed technological singularity, I tend to think that the
existential risk to humankind outweighs the possible benefits.
More to the point, I feel that the
overall technological system in place, techno-industrial society as
it currently exists, is already “optimizing the world around us for
something other than what we want, and using up all our
resources to do so.” Muehlhauser's fear is already the reality as
far as I can tell.
Even the implementation of early
technological systems, like large-scale agriculture, has turned places
like the Fertile Crescent into deserts. The technological
advancement of that practice has since led to broader
disasters – rainforests are being destroyed for cropland, the crops
grown are increasingly diverted to biofuels (which present
problems of their own), and roughly a billion people on this planet
go hungry or starve each year despite the widespread implementation
of agricultural technologies. The Bhopal disaster, one of the most
devastating industrial catastrophes to date, was tied to the
production of agricultural pesticides. And yet, despite this, we are
generally led to believe that agriculture has been a boon for
humanity and is a project which should unquestionably continue.
This, to me, is an example of a technological system advancing for
its own sake rather than for the benefit of humanity. It is as
Muehlhauser puts it... “optimizing the world around us for
something other than what we want, and using up all our
resources to do so.”
Other techno-industrial projects also
proceed despite the harm they cause to humanity and despite the fact
that they are using up resources in an entirely unsustainable way.
Take, for example, the computer which I, as a critic, am using to
write this article. We are told that computers make our lives better
and lead to more progress, but their manufacturing process leads to
toxic waste and their usage tends to promote a sedentary
consumeristic lifestyle (presenting destructive problems in itself).
But who can effectively argue that computer usage should be stymied
or that broadening the world wide web of computer networks is a
negative thing? To use these tools is certainly to be somewhat
complicit in the problems they present, but to argue against them
without employing their use seems quite futile. The system sucks us
all in whether we like it or not, and it would be nigh impossible
to escape the effects of the techno-industrial society into which we
have been born. (I'd argue that certain destructive technologies can
be used against themselves, but that's another subject altogether.)
The way our modern system is set up,
with an exponentially growing human population, it serves the
interests of technological advancement and scientific discovery for
their own sake more than the broader interests of
humanity at large. A large human population, despite the problems
that accompany it, simply provides more people working toward further
technological advancement. And even those working in seemingly
benign jobs within this modern system actually facilitate the work
done in more destructive sectors of techno-industrial society. The
toilet scrubbers and the bakers, by doing their jobs, make it
possible for rocket scientists, nuclear physicists, chemists, and
genetic engineers to focus more completely on their work – work which has
proven time and again to be highly destructive. And those latter
individuals, the scientists, are largely revered by our society and
held up for emulation despite the destructive powers they have
repeatedly unleashed.
When some destructive aspect of our
techno-industrial system must be acknowledged, like a nuclear
meltdown or some other large toxic spill, it is
presented as a necessary evil. But what good comes with
these disasters? Is it that, in the case of a nuclear power plant
melting down, more energy had previously been generated to fuel the
broader consumption of other resources (also known as the natural
world)? Or perhaps a medical advancement is touted for saving lives
despite the harm involved in its creation and implementation? At the
very best... technological advancement seems to be a double-edged
sword.
But the incredible dangers presented by our
techno-industrial civilization persist. The self-reinforcing feedback
loops associated with global warming, for instance, will continue
well beyond most of the dates commonly discussed – the Earth's
atmospheric temperature will keep rising steadily even after the end
of this century. Toxic waste created over the last century will persist for
hundreds of thousands of years. And the weaponization of many
seemingly benign technologies threatens human existence on Earth.
And why? Why does humanity proceed
down this techno-industrial path? Is it supposed to be for the
creation of a computerized artificial super-intelligence (which even
the proponents fear)? Why would we seek to become gods just to
create the gods who will subsequently destroy us? I'm not really a
Freudian, but this is the Thanatos urge made manifest in our society –
and it permeates most of us. We largely serve,
promote, and defend a system which is, in one way or another, leading
to our collective destruction.
How long can this continue before some
large portion of humanity attempts to go down a different and more
sustainable path? In the past couple of years we have experienced one
of the worst nuclear meltdowns in history, which occurred near the
largest urban population center on the planet – and which subsequently
released high levels of radiation into the largest ocean. We have
experienced an oil spill which essentially turned the Gulf of Mexico
into a toxic pit. And we have seen unprecedented heatwaves, forest
fires, and droughts around the world, occurring as a direct
result of the global warming brought about by our
techno-industrial civilization. Our collective response to these
events has been little better than that of cattle being led into the
slaughterhouse. We are already going along with a system that is
“optimizing the world around us for something other than
what we want, and using up all our resources to do so.”
But I suspect humankind's broader
mindset, and our way of relating to this crisis, might change. The
disasters of techno-industrial mass society are becoming more
frequent and more apparent. At some point... some significant
portion of the global population may begin to fight back effectively,
as the things which we collectively value, and our relationship with
the current system, suddenly and dramatically change. This may or
may not occur in time to prevent the Anthropocene mass extinction
event from finally catching up with its cause but, at the very least,
humanity at large might find some dignity in resisting the
system currently in place.