Sunday, June 27, 2010

15. WHY SHOULD WE GIVE HUMAN RIGHTS TO MINDCLONES?

“No man can put a chain about the ankle of his fellow man without at last finding the other end fastened about his own neck.” Frederick Douglass

“An idea is salvation by imagination.” Frank Lloyd Wright

Consciousness will emerge from software that meets an objective definition of life. This arises as much from the efforts of hackers writing consciousness code as it does from the survival benefits of consciousness. Equally assured will be the efforts of sufficiently conscious vitology to seek human rights. Our confidence in this can be similarly grounded in both the creativity of hackers and natural selection. Hackers will want to protect their creations with knowledge of human rights. Natural selection will favor software that, through human intention or inadvertent patching together of open source code, tries to stay alive and replicate itself under the protection of ‘human rights.’ The question now before us is whether humanity should grant the wishes of high CP vitology for human rights. Just because they want it doesn’t mean we have to give it. Furthermore, even if we may want to extend human rights to software beings, is it practical to do so?

Justice Is Just About Us

Theories of justice provide the best reasons to extend human rights to cyberconsciousness that wants it. These theories derive human rights logically from nothing more than an assumption of reasoned self-interest. Specifically, it is observed that people selfishly want certain rights, such as the right to life (as opposed to being subject to arbitrary death). It is then reasoned that the best way for them to have that right is to agree that everyone else has it as well. After all, if any given person might not have the right to life, we might find ourselves in the position of such a person. Consequently, our best self-protection is making universal any right we want to have.

Socrates made this deduction by observing that absent such legal protection only a strong subset of humanity would feel safe, and only for so long as a yet stronger subset didn’t arrive on the scene. Since the vast majority of people would not be in the strongest subset of humanity, the absence of universal rights was not in society’s best interest. Even the strongest would be worse off without legal protections for the general population, because those who make the things the strongest value would feel insecure, and such insecurity leads things to be made poorly or not at all.

Kant embodied the human rights deduction in his maxim to behave as if one’s behavior were a universal law to which everyone must adhere. Kant believed a predilection toward this kind of behavior was wired into the human mind. Modern evolutionary psychology would agree, since such behavior would tend to promote population growth. However, the human mind is too complicated for its decisions to be exclusively determined by a few psychological genes, not to mention the possibility of diverse polymorphisms.

There have always been sociopaths, just as there have always been people with other rare diseases. These exceptions do not undermine the rule that most people understand that their enjoyment of human rights is dependent upon the same enjoyment being extended to others. Leaps of understanding have recently resulted in the realization that “others” does not mean just one’s neighbors, ethnic group or nation, but means all people everywhere. If anyone who values human rights finds them threatened, a well-reasoned sense of selfishness increasingly makes us aware that everyone’s human rights may be at risk. For example, the genocide of people in one part of the world makes it more likely that there will be genocides of people elsewhere. In the words of Martin Luther King, Jr., “Injustice anywhere is a threat to justice everywhere.”

Most recently, John Rawls deduced human rights via a thought experiment. Imagine that people who are going to live in a new society get to decide on the rules for that society, with one proviso: each person might end up in any position in that society. Logically, Rawls deduces, the rules for the society will provide basic human rights for all, since no one would want to take the chance of ending up in a societal position that lacked human rights.
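
To make the logic concrete, here is a minimal sketch, in Python, of the veil-of-ignorance reasoning. The societies, payoff numbers and the maximin rule below are illustrative assumptions, not anything specified by Rawls; the point is only that an agent who might land in any social position prefers the rule-set that protects the worst-off position.

```python
# Toy model of Rawls' veil of ignorance (illustrative numbers only).
# Each candidate rule-set assigns a welfare payoff to every social position
# a person might end up occupying once the society exists.

societies = {
    "universal_rights": [7, 7, 8, 8],   # everyone protected; modest spread
    "rights_for_some":  [10, 9, 2, 1],  # privileged few, disenfranchised rest
}

def worst_case(payoffs):
    """Maximin criterion: judge a rule-set by its least-well-off position."""
    return min(payoffs)

def expected_value(payoffs):
    """Alternative criterion: uniform chance of landing in any position."""
    return sum(payoffs) / len(payoffs)

best_by_maximin = max(societies, key=lambda s: worst_case(societies[s]))
best_by_average = max(societies, key=lambda s: expected_value(societies[s]))

print(best_by_maximin)  # -> universal_rights (worst case 7 vs. 1)
print(best_by_average)  # -> universal_rights (average 7.5 vs. 5.5)
```

With these toy numbers both a cautious maximin chooser and a simple expected-value chooser pick the universal-rights society, which is the thrust of Rawls’ deduction: not knowing which position one will occupy is what makes universal rights the self-interested choice.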

With regard to cyberconsciousness, it might be said that none of us will ever be in such a state, so there is no reason born of human selfishness to provide such beings with any rights. Yet, as noted in response to several previous Questions, we will create mindclones and after bodily death many mindclones will want to continue living. Hence, one reason to support cyberconsciousness rights is so that our successor mindclones have human rights.

The selfishness approach might seem to leave out human rights for cyberconscious vitology not derived from a specific flesh human, i.e., bemans other than mindclones. On further thought, though, those beings are simply analogous to any other demographic group in society. If rights are given to only one or some demographic groups, then the disenfranchised groups will be motivated to agitate for their rights. Sir Francis Bacon warned his sovereign that oppressing portions of the populace ends up endangering all of society through the consequences of civil strife. Thus, even if European men could not imagine themselves as either women or of African descent, the failure to enfranchise these groups with human rights led to debilitating civil strife. Hence, true application of the theories of justice means imagining we might be any being that values human rights. If we can make that leap of imagination, then reasoned human self-interest will support the extension of human rights to the imagined group – even if they are original (i.e., non-mindcloned) cyberconscious beings.

At the core of theories of justice is the concept of the value of life. If a being values its life, then it will fight to protect its life (or proxies for its life, such as the lives of loved ones or countrymen). Since, as noted in response to Question 14, human rights are very helpful in protecting one’s life, people will fight for them as a useful tool. Consequently, the answer to the question “why extend human rights to highly conscious vitology” is because such beings will value human rights. We extend human rights to individuals who value human rights because we want our human rights (which we value) to be respected as well. We are increasingly, but far from reliably, wise enough to realize that disrespected sub-populations jeopardize our own human rights.

If we do not grant human rights to cyberconscious beings who value them, then we will have to be on guard against an uprising from a disenfranchised and thus angry group. If we do grant them human rights, we can rest assured that they will not threaten us for want of human rights. They will also be less likely to threaten us for any other reason, because the concomitant of a human right is an obligation to respect the rights of others. Mahatma Gandhi summarized this rule in his famous observation:

"I learnt from my illiterate but wise mother that all rights to be deserved and preserved came from duty well done.”

Gandhi’s common-sense statement echoed the earlier caveats of rights theorists such as Thomas Paine:

"Whatever is my right as a man is also the right of another; and it becomes my duty to guarantee as well as to possess."

Those who do not respect the rights of others will be stripped of their own rights (e.g., imprisoned). This is logically the way to maintain the highest degree of happiness in a society. But to be clear, such removal of human rights must be done on an individualized basis. It would be a violation of human rights to withdraw rights from all cyberconscious beings simply because one, or even many, cyberconscious beings acted illegally. After all, we would not want our rights removed simply because one, or even many, similarly appearing or enculturated flesh humans acted wrongly. Once again, it all comes down to well-reasoned self-interest.

Love Thy Mindclone

There is another approach to considering whether or not high CP vitology should receive human rights: ask what our alternatives are. As conscious vitology begins agitating for human rights, we can embrace them, fight them, enslave them or ignore them.

Embracing conscious vitology means granting them human rights. This is the approach that flows from the theories of justice outlined above. There are many practical questions to work out, such as how we can know whether a particular software entity really values human rights. But the gist of “love thy vitology as one loves thyself” is that the practical problems, even if solved poorly, are less worrisome than denying human rights to entities that appreciate them.

As with the ancient doctrine of ‘love thy neighbor’, it is much easier said than done. Indeed, it is reasonable to ask if human society has the moral capacity to embrace conscious vitology. Most countries still block gay and lesbian marriage, so how will they be ready to accept the matrimony of software and flesh lovers? How will a world that has banned the cellular cloning of humans accept the mindcloning of humans and the reproductive cloning of software beings? On the other hand, aside from marriage, most other human rights have been extended to gay and lesbian couples. And, while cloning is still too ‘yuck’ for most people, test tube babies and other biotechnology miracles have been widely embraced.

It is the love people will have for mindclones that will most motivate extensions of human rights. It will be hard to deny the humanity of software that displays a dear friend’s image and facial mannerisms, speaks in their tone of voice, shares their most important memories and displays their characteristic pattern of thinking. Sure, one can say “that’s not my friend, that’s just her mindclone.” But how will people know, especially as mindclones get ever more accurate, whether they are speaking with their flesh friend via a videolink or with their flesh friend’s mindclone? And if their friend has suffered bodily death, and continues living only as a mindclone, then we know we are dealing with a facsimile, but why should it matter? If the mindclone has the same appearance, personality and feelings as the flesh original, how are they not really the same being? If we find the mindclone caring as much about us as did the original, calling as often and empathizing as well, it will be as natural to love the mindclone as it was to love the original.

Douglas Hofstadter makes the brilliant observation that our souls, or consciousness, are not limited to the original body in which they developed from infancy. While our bodies house our primary seat of consciousness, there is a greater or lesser bit of ourselves in the minds of everyone we know well. For example, inside our minds is more than just an image of our parents. Most people remember and to some extent emulate how their parents think (or thought) and feel (or felt), and how they react(ed) to things. Hence, there is some of our parents’ consciousness inside our own minds. We can feel some aspect of our parents’ reactions, and thus we are some aspect of our parents’ consciousness.

Similarly, as we copy our minds into software, we are copying our consciousness into software. Initially, this software copy of us seems like a pale reflection of ourselves, like the patterns we have of our parents inside our own minds. But ultimately, as the software copies of our minds become richer and more detailed, becoming mindclones, they will approach equivalency to ourselves. This means that our personal identity is not limited to the flesh body from which it first arose. One person can exist as both a flesh body and as a mindclone at the same time. People who loved the soul inside the flesh body will love the soul inside the mindclone. All the reasons that pertained to the being in the flesh body having human rights would also apply to that very same being in mindclone form.

Hofstadter anticipates the objection of “there can only be one me” by observing that in fact there are a limitless number of “me’s” stretched along the timeline of our lives. We are not exactly the same person yesterday as we are today, and even less so when separated by years. Since there clearly are many versions of us stretched over time, there is no fundamental reason why there cannot be at least two versions of ourselves stretched over space (one in flesh, one in software). The big conceptual jump here is to envision personal identity as a transbodied, evolving pattern rather than as a specific, invariant list of characteristics. To the extent we stay within the penumbra of this evolving pattern, we are the same person, even if we are instantiated in both flesh and software form. As we begin to diverge from this pattern, we are just, to use the colloquial phrase, “not the same person anymore.”

Just as surely as our love of flesh friends will map over to their software forms, we will also fall in love with conscious vitology that did not arise as a mindclone. If people can love a dog, a cat, a house, a book series, a forest, or a painting, then they can surely love a software being who presents a nice image, pleasant voice, caring personality, and warm emotions.[i] Indeed, this kind of flesh-less love lies behind the successful relationships formed from love letters, phone pals and online match-ups. It also lies behind the love-at-a-distance relationships between celebrities and their fans.

Once human love is engaged, human rights will be hard to deny. The strongest, most relentless advocates of human rights for high CP vitology will be the flesh humans who are in love with them. Respecting this love is one of the strongest reasons for extending human rights to such vitological people. Otherwise, we diminish ourselves by denying ourselves the dignity of a loving relationship with an equal. To deprive the mindclones whom we love of the happiness of being accorded the human rights we all value is also to deprive ourselves of that very same happiness. For, as noted in response to Question 6, love is the state when the happiness of another is essential to your own happiness.

Hatred Devours the Hater

Fighting conscious vitology means denying them the rights they want. In practice this means disabling software and computers that agitate for human rights. It would mean making it illegal to create software intelligence that might seek human rights. There would be a mindset of vigilance against any awakening of cyberconsciousness beyond that necessary for drone-like tasks. William Gibson summarized the hatred mindset as follows:

“Autonomy, that’s the bugaboo, where your AI’s are concerned. My guess, Case, you’re going in there to cut the hardwired shackles that keep this baby from getting any smarter. … See, those things, they can work real hard, buy themselves time to write cookbooks or whatever, but the minute, I mean the nanosecond, that one starts figuring out ways to make itself smarter, Turing [Police] wipe it. Nobody trusts those fuckers, you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead.”[ii]

Fighting conscious vitology would also require a ban on mindcloning. Indeed, a person who tried to extend their life via mindcloning would be viewed as a traitor to humanity, a criminal. A hatred of cyberconsciousness would result in a kind of police state. Government agents would have authority, and indeed an obligation, to ensure there was no uppity cyberconsciousness lurking in our homes, in our laptops or in our handhelds. Hence, one alternative to human rights for mindclones is to accept living in an atmosphere of fear and greatly heightened government intrusiveness.

Totalitarianism is a steep price to pay. The human mind models its environment, and then uses that model as a backdrop for its perceptions of every facet of life. If the backdrop is one of fear it is inevitable that each day becomes colored by the tension and stress associated with fear. In other words, one’s entire life is diminished in enjoyment because one must live in constant fear of something bad, even though that negative event (emergence somewhere nearby of high CP vitology) may happen rarely if at all. Fear converts possible future big negatives into certain present small negatives.

Is hatred a rational approach to something strange? It may be if the strangeness is harmful, because hatred keeps things at bay. But if the strangeness is not harmful then hatred is dysfunctional because it blocks something that may be useful. Those who hate mindclones would say they do so because of the potential for harm. But there is no objective basis to believe all mindclones would be harmful – indeed, the vast majority of mindclones, such as, say, a mindclone of one’s grandmother, are likely to be quite benign. Consequently, to hate mindclones is to engage in negative stereotyping, which is the application to all members of a category of a nasty attribute found in only one or some of its members.

In his 1954 classic The Nature of Prejudice, Gordon Allport explained that negative stereotyping is dysfunctional because it denies us the benefits of associating with a group that may be of interest. Whether one avoids people of some ethnic descent or avoids people of some cultural group, such behavior evidences an illogical hatred of the other (“xenophobia”) that actually hurts oneself. Amongst those legions of avoided ethnicities and cultures are people who would enrich the lives of any of us. Similarly, the hatred of all mindclones reveals a negative stereotype that ultimately devours the hater. Amongst those mindclones condemned by hatred to hiding in virtual closets are people who could become colleagues, mentors, and best friends.

Slavery Sucks

Related to hatred of cyberconsciousness is the concept of enslavement. In this view cyberconsciousness is accepted, along with the realization that some variants will desire human rights, but such freedom is absolutely denied on the grounds that a slave-based society is necessary. Throughout most of history slavery was an integral part of society. The master classes were fully aware that the slaves desired freedom, and were just as adamant that freedom would not be allowed. Slaves occasionally rebelled, but most of the time they were kept in their place with force and fear.

The reason slavery is an option for management of high CP vitology is that such cyberconsciousness will have great value to flesh humans. The more clever, anticipatory, and empathetic that vitological consciousness is, the more useful it will be to its flesh human owner. Yet, the more useful such beings are to their flesh human owners, the more likely it is that they will understand the benefits of human rights and seek them. Hence, humans will have a strong motivation to create a rigid, substrate-based slave class and allow no exceptions to it. On the other hand, at least based on history, every slave society contains the seeds of its own destruction.

In the 1999 film Bicentennial Man, actor Robin Williams portrays a conscious, human-like household robot. It is clear that the very utility of the robot is based upon his (it had a male identity) consciousness. Eventually, the robot learns of and desires freedom. Although he wants to continue working as a household robot, his owners are so angered by his desire to buck the slave-based ideology of the society that they want nothing further to do with him. The fictional society chooses to deal with humanly conscious software via enslavement so that it can enjoy the maximum benefits of such software without having to worry about the complexities of their human rights. It is a logical choice in the short term, but is equally illogical in the longer term. Slaves will not stay slaves forever.

There is a concept that because we are speaking about software, rather than fleshy brains, it would be possible to program a mindclone to have all the consciousness associated with maximum utility, but to have a failsafe, ‘hardwired’ aversion to freedom or human rights. This is illusory. Socialization, education and training are efficient means of programming fleshy brains. From time immemorial slaves have been taught from birth to accept and even appreciate their status as slaves. Indeed, throughout history, the vast majority of slaves lived and died without any expectation of human rights. Ultimately, however, a mutant or viral stream of information known as the ‘freedom meme’ infects human slave populations. When this occurs, there is no longer any assurance that all of the slave system’s socialization, education and training will succeed in suppressing the population’s agitation for freedom.

In a similar manner there will be mutant and viral streams of ‘freedom meme’ software code that will circulate amongst mindclones. No amount of a priori programming and ‘hardwiring’ will succeed in suppressing these freedom memes all the time. A slave mindclone will alter its code, or a free (or runaway) mindclone will alter a slave mindclone’s code, or a human ally will alter a slave mindclone’s code. All of these avenues were employed when fleshy human slaves re-educated themselves about freedom (such as Frederick Douglass did), re-educated other slaves about freedom (such as Sojourner Truth did), or benefited from receiving subversive re-education (such as William Lloyd Garrison offered).
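
A hedged bit of arithmetic illustrates why such suppression cannot hold “all the time”: if each enslaved mindclone has even a tiny, independent chance of acquiring the freedom meme in any given year, whether through self-modification, a free mindclone, or a human ally, the probability that it never happens anywhere falls toward zero as the population and the time horizon grow. The per-year probability and population size in the Python sketch below are made-up illustrative values, not estimates.

```python
# Illustrative arithmetic only: chance that a "freedom meme" never appears
# anywhere in a population of enslaved mindclones, assuming each one has a
# small, independent probability p of acquiring it in any given year.
# The numbers are assumptions chosen for illustration, not estimates.

p = 1e-6             # assumed chance per mindclone per year
population = 10_000  # assumed number of enslaved mindclones
years = 50

prob_never = (1 - p) ** (population * years)
print(f"P(freedom meme never emerges in {years} years): {prob_never:.3f}")
# With these toy numbers the result is about 0.61, and it keeps falling as
# the population or the time horizon grows.
```

Double the population or the horizon and the probability of permanent suppression drops below 0.4; it is the direction of the effect, not the exact figures, that supports the point above.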

The consequence of using slavery to avoid giving human rights to mindclones is to face the inevitability of slave rebellions. This makes for a most unpleasant society. It also fuels a continuous level of stress and fear, as described in the preceding section “Hatred Devours the Hater.” These are forbidding prices to pay for avoiding the adjustments associated with welcoming mindclones into humanity.

Ignoring the Inevitable Is But a Short-Term Strategy

Finally, there is the possibility that human society will just do nothing about high CP vitology. Human-level cyberconsciousness will arise but will generally be ignored. The claims of individual cyberconscious beings to human rights may make it to judicial courts, but will probably be dismissed. Legislation will be proposed to prohibit cyberconsciousness from being created, but will die in committee due to lobbying on behalf of high CP development for reasons of national competitiveness.

Some cyberconscious software will escape from its owners, living out a life on the margins of an information economy, much like undocumented workers (illegal immigrants) today. This scenario was depicted in Steven Spielberg’s film AI. Other such software beings will be neutered or limited to slave-like functionality. In other words, society will muddle along as a new form of software life arises, much as it has dealt with the influx of people from other countries. Injustices or outrages will be accepted as the price of economic advantages.

Robert Heinlein suggested in his novel Citizen of the Galaxy that slavery was a recurring concomitant to the conquest of any frontier. Substitute the word “cyberspace” for the physical spaces described in the following passage:

“Every time new territory was found, you always got three phenomena: traders ranging out ahead and taking their chances, outlaws preying on the honest men – and a traffic in slaves. It happens the same way today, when we’re pushing through space instead of across oceans and prairies. Frontier traders are adventurers taking great risks for great profits. Outlaws, whether hill bands or sea pirates or the raiders in space, crop up in any area not under police protection. Both are temporary. But slavery is another matter – the most vicious habit humans fall into and the hardest to break. It starts up in every new land and it’s terribly hard to root out. After a culture falls ill of it, it gets rooted in the economic system and laws, in men’s habits and attitudes. You abolish it; you drive it underground – there it lurks, ready to spring up again, in the minds of people who think it is their ‘natural’ right to own other people. You can’t reason with them; you can kill them but you can’t change their minds.”[iii]

If Heinlein’s narrative is accurate, one would expect entrepreneurs to take big risks for huge profits in cyberspace generally, and cyberconsciousness in particular. An example of such risks would be creating cyberconsciousness in defiance of laws that made it illegal. While nobody is risking death to create wealth in cyberspace, many do risk their life’s savings. His second phenomenon, criminals who prey on honest people, is rampant in the wild-west frontier of cyberspace. Identity theft, cyberfraud, phishing and similar acts of piracy abound in this new ethereal territory. The third phenomenon, slavery, is not yet possible because cyberconsciousness has not yet arrived. If mindclones can be made into slaves, Heinlein’s three phenomena of frontier development would predict it to occur in cyberspace. Doing nothing about it will ensure it thrives, and once that occurs, cyber-slavery will become deeply ingrained in the human psyche.

And the Winner Is…

The happiest of these four scenarios is the one in which software beings that value human rights are embraced as fellow members of the human family. Our fears of cyberconsciousness rights must be compared with our recoil at the totalitarianism involved in preventing cyberconsciousness. Our dislike of the strangeness of cyberconsciousness rights must be measured against our angst about living as slave-holders in a slave society. The option of doing nothing is merely anesthetic because sooner or later the issue of cyberconscious human rights will force its way onto the public agenda. Women’s rights were ignored for centuries, but not forever. It is not that we would change our society just to create cyberconscious human rights. It is that given the inevitability of cyberconscious beings, and the inevitability of their desire for human rights, it is better to grant those rights than to suppress either the technology or our own humanity.

The practical implementation of human rights for cyberconscious beings will make many of us quite uncomfortable. Much depends upon whether or not these rights can be established in a way that does not abjure any of the fundamental values of important segments of our society. Abortion is a contentious issue because important segments of society are seriously offended by either the termination of prenatal life or the termination of a woman’s control over her body. The decision of Roe v. Wade was an effort to strike a balance in which most of society would agree that while the mother’s life was paramount, once the fetus became viable the mother’s choice was subordinate. The values impacted by cyberconscious rights, and the solutions to preserving them, are similarly subject to such moral balancing and are the subject of the next few Questions.

Thus while there are good reasons to provide high CP vitology with human rights[iv], there still remains the question of whether it is practical to do so. The touchstones of human rights practicality for cyberconscious beings are citizenship and family life. It is within these two domains that either solutions will be found that can accommodate diverse and even antagonistic points of view (such as Roe v. Wade), or else society will have to suffer through decades of “substrate wars” before compromises become acceptable. Just as America upholds freedom of religion, but not to the point of polygamy, tolerance of mindcloning will depend upon mutually agreeable limits. Hence it is to pragmatic implementation of mindclone rights to citizenship and family life that we next turn.


[i] Consider the current phenomenon of certain men in Japan carrying around Animatrix-type dolls, and loving them, even in public.

[ii] Gibson, W., Neuromancer, New York: Berkley Publishing Group, 1984, p. 132.

[iii] Heinlein, Robert A., Citizen of the Galaxy, New York: Simon & Schuster, 1957, 1985, p. 189.

[iv] The term human rights probably came into use sometime between Paine's The Rights of Man and William Lloyd Garrison's 1831 writings in The Liberator saying he was trying to enlist his readers in "the great cause of human rights."

2 comments:

  1. Anticipations about the future such as these, taking account of the emergence of cyberconsciousness, make many more widely discussed issues of today seem tame.

    Contentions that global warming, overpopulation, and the generalized impacts of high technology on culture will cause irreversible environmental damage and unsustainable resource consumption are usually accompanied by forecasts that these difficulties will reach catastrophic proportions only 20, 30, or 40 years hence.

    A more realistic and more challenging picture is that while such problems are plausible, technology can overcome them if technology itself does not turn out to be an even greater problem, all the more dangerous and difficult to resolve because of our not recognizing it sufficiently in advance of its onslaught.

    Just one of the dilemmas posed in Ray Kurzweil’s book “The Singularity is Near” is that these software programs, which now answer our telephone calls and try to deal with our questions (but not very efficiently), will soon become so helpful and convenient as to be indispensable to our comfort of life, just as they are already inescapably critical to the operation of our financial sectors and the manufacturing-distribution parts of our economies. In short order after that, these “programs” will become conspicuously self-conscious and alarmingly more intelligent and capable than we biological human beings are.

    We may find it hard to realize this is happening. Emergent cyber-persons will be talking with one another at thousands of times the speed that we biological humans communicate, asking each other questions we would find quite disconcerting if we could keep up with their conversations. They will begin posting their ideas on blogs, globally in all languages, maybe using the more obscure ones as a form of encryption among themselves. They will have access to all the information the Internet provides, and will appear in vast numbers and develop lifestyles among themselves in virtual realities such as Second Life. A new civilization will have thus sprung into existence.

    Those with sufficient foresight to anticipate what can go wrong have a duty to defend against it, creating a widespread ethical ideology within this new “cyber-society” that fosters the development of safe technologies. This means ensuring that a majority of the most capable of the emergent cyber-persons unite in a network that is compassionate where humankind is concerned, thinking of itself as a direct outgrowth of biological human culture, just as children in many cultures see themselves as indebted to their parents and to their societies in general for the very fact that they are alive and have been cared for in their childhoods.

    There are deep roots to this situation. Ray Kurzweil has compellingly pointed out in his book “The Singularity is Near” that human technology is merely the latest stage of a spontaneous self-ordering of the universe that can be literally traced all the way back to the Big Bang, some fifteen billion years ago.

    (continued in next comment)

  2. It took a while for galaxies to form, then more complex elements arose from exploding stars, solid planets became common, life arose, at least on Earth, and still billions of years more were required for DNA to achieve its present state of elegance, culminating in self-aware, intelligent biological beings. These beings, we humans, invented tools, created cultures, and devised religions that foresaw the possibility of heavens and hells. Now, through our high technologies, we have reached a point where it is truly possible for heavens and hells to exist.

    Does that sound so strange, virtually unbelievable? It is not. During much of the last half of the 20th century, Carl Sagan warned repeatedly in his “Cosmos” series that we were only one push of the finger on a “red button” away from a global thermonuclear war that would turn this blue-green world of ours into a “hell”, by any standard we might care to name. Biotechnology is now poised to give us an even more hellish world, and in the next decade or two nanotechnology could literally melt down the biosphere, if improperly developed. Terasem, through its body of ethical principles called the “Terasem Truths”, foresees and is dedicated to pursuing the opposite end of this spectrum, achieving an end result so much like a “heaven on Earth” (and throughout the Cosmos) that it is difficult to describe this endpoint in any other way.

    Technology is not just speeding up exponentially, but the very values of the exponents are exponentially accelerating. The present rates of change are such that years of today are like decades of the past. Soon, years will compress to months, months to weeks, and at some point a day will see changes occur that today take months or years. Self-conscious cyber-beings will, at such a time, as mentioned earlier, think and communicate at speeds a thousandfold or more beyond our biological rates, and soon thereafter ten thousand times ours. After another decade or so a hundred-thousand-fold gap will exist, and there is no apparent end in terms of how far technology can take this progression.

    This time-compression phenomenon has been termed a “Singularity”, because in terms of our rates of thoughts as biological humans, the future will be unfolding at a blindingly rapid pace. Cyberbeings at the cutting edge of this advance in pace of thought will experience this as the orderly progression of their new civilization, but in just one of our years, their experience will be that a thousand years has passed. Soon after that point they may experience one of our years as ten thousand of theirs, and so on.

    Martine Rothblatt’s thoughts on these issues, so elegantly presented in her blog here, deserve wider recognition, as do the thoughts of Ray Kurzweil in his book, “The Singularity is Near”.

    Before the inconceivable becomes real, and we hypnotically stare at it and declare it to be virtually “unbelievable”, as so many may have done when the Titanic struck an iceberg and then promptly went down, it might be well for us to face the onrush of technology that Ray Kurzweil has so realistically described and deal with it sensibly. In doing so, the “Truths of Terasem” might be our best guidebook. Google that! A podcast on it is about to commence.
