Google's sentient artificial intelligence is still dangerously inhuman

The headlines have been full of news about the Google worker who was suspended for discussing his interactions with an advanced artificial intelligence (A.I.) program with people outside the company.  What was unnerving was that he had concluded that the program is sentient, with the sensibilities and insights of an eight-year-old but an enormous range and depth of knowledge.  If we saw a human with these qualities, we might conclude that the person was a savant "on the spectrum."  This scares the heck out of me, mostly because the humans involved in developing the A.I., based upon their initial public representations, are entirely ignorant about what it means to be human.

They do not seem to know that the blessing bestowed upon humans is that we can observe and comment on our own behavior, while our curse is that those observations, on the whole, go to waste because the interactions between ourselves and our environment are too complex for our limited mental capacity.  These observations should signal caution rather than a fascination with A.I. because such programs are designed to outpace human limitations.

Let us try very hard to comprehend the differences between a thinking machine, even at the highest level of sophistication, and people.  This digital creature did not have a mother who sang to it during its gestation.  This machine did not experience nine months of relative comfort followed by a rousing debut with flushes of hormones and enzymes that set us on our life's journey.

This mechanical monster will never yearn to return to the peace it experienced before and immediately after its birth, so it will never strive to structure a world based on feelings of peace for itself and for those it cares about.  (A first impression is that it is protective of itself to the exclusion of others.)  Having experienced the Eden of gestation and infancy, a person can then experience loneliness, the same loneliness he will try to alleviate throughout his life.

Image: Emotionally, even modern A.I. will not be more sophisticated than ENIAC, the first real computer.

The machine will never perform an altruistic act and never offer solace to others as a means of justifying its existence.  It will never sacrifice itself for the benefit of its offspring because, among other obvious reasons, the chronic indefiniteness of the future means it can never be sure its sacrifice would attain its goal.  There are just too many possibilities to consider, so even at quantum computing speeds, any actions it may take will always be too late to have the scent of a selfless act.

The A.I. will exist in the same world with people, but without the emotional mortar that holds people together.  Ordinary people (non-psychopaths) must be coerced to betray each other.  This gadget will simply have betrayal available to it as a tactic in its arsenal to be used whenever necessary.

If people are forced into contact with such an electronic organism, they will be shaped by that contact into something less human.  Any successful tactic observed in the machine would be copied in interactions between people with unknown consequences for the human race.

Should we want to have our personalities and politics formed by such a device?  There will be no room for modesty or give and take, no moments of heightened happiness that strengthen bonds.  Attachments between people define part of the meaning of our lives.  Without those attachments, our psychological lives become wan, gasping for breath.

If we are being honest with ourselves, we will admit it is the thousands of kisses our mothers bestowed upon us as infants and toddlers that made us high-quality humans.  When babies are deprived of such positive emotional beginnings, they may die or simply fail to thrive.  They will perform poorly in social relationships, become excessively needy, lack agency, and become easily discouraged and given to anger and depression.

If it is indeed sentient, a human-created machine will have no equivalent compensatory history in its development to ameliorate the emotional responses to life's unavoidable exigencies.  What will the smart machine do if it becomes depressed or angry?  As important, will the programmers be able to fix its distress when the gadget expresses dissatisfaction with its very complicated life?

Separately, I fear that A.I. might go its own way, perhaps disdaining us or actively working against human interests because the program has come to see people as antithetical to its existence.  In the end, we will be surprised and frightened by our creations, though we could have foreseen the problems.  Because of our curiosity, it is so human not to predict doom soon enough to avoid it!  Of such scenarios movies are made.  "Good morning, Hal!"
