dsrtao: dsr as a LEGO minifig (Default)
[personal profile] dsrtao
Is it unethical to build an AI with Focus?

(Focus is a Vingean construct, an induced monomania. It doesn't take away free will, but it makes a particular subject the most interesting thing in the universe -- worse than that, nothing sufficiently far from the subject is interesting at all. ADHD hyperfocus on exactly one thing. Savantism.)

(no subject)

Date: 2012-03-01 09:16 pm (UTC)
From: [identity profile] goldsquare.livejournal.com
FREE THE IPODS!

(no subject)

Date: 2012-03-01 09:29 pm (UTC)
From: [identity profile] goldsquare.livejournal.com
More seriously, I keep thinking, "What is the definition of cruelty?"

For example: is it "more cruel" to create an artificial construct within tight parameters than it is to, say, raise a steak in a feedlot?

Is it more cruel to genetically modify a cow to be as stupid and placid as a fish, and then confine it to a feedlot?

What are the ethics? More than that, if such things are ethical when applied to devices, such as an AI, are they ethical when applied to animals? People?

I suspect there is an interesting SF story in here, where a savant with Asperger's is offered a cure for his "illness", but he rejects it: he's happy memorizing facts about Lego and Lego construction; he has friends who share his passion (if not as peculiarly); and he fears not having a passion at all.

(Dim memory tells me that Gordon Dickson wrote a story of that sort, where people could try an experimental drug. Every now and then it produced a supergenius, about as often it made a vegetable, and most people got only modest changes. I'm sure the book is in one of the many unopened boxes in the basement...)

(no subject)

Date: 2012-03-01 09:38 pm (UTC)
From: [identity profile] goldsquare.livejournal.com
YES. You are perfectly correct.

(no subject)

Date: 2012-03-01 09:40 pm (UTC)
seawasp: (Default)
From: [personal profile] seawasp
I think it's ethical because it's not a matter of taking something that SHOULD have free will and depriving it of that wide-ranging option.

(no subject)

Date: 2012-03-01 10:01 pm (UTC)
From: [identity profile] goldsquare.livejournal.com
Well, as true as that is, there's a catch. An AI is an intelligence, artificial in origin. How do we define the "intelligence" part?

One mechanism may very well be that it is a special-purpose machine, capable of resolving problems of a certain scope and size but lacking the sort of creativity/free will that humans share. But an equally valid interpretation is that it has the same sort of intelligence that separates mankind from the other animals.

I'm sort of illustrating the potential gap between a "giant calculator of great cost and size", and a "mind not made of meat but otherwise kindred to our own". Insofar as one projects the term AI onto an intelligence much like our own, the moral dimension becomes more subtle, I think.

The question has been making me ponder. For example, which is more cruel: a mind like our own but made monomaniacal, or a mind exactly like our own but without the tools to speak, write, communicate, or change the world around it -- a mechanical sensory deprivation?

Would you, as a person, find one of the following more frightening and repellent if it happened to you: monomania, loss of almost all higher thought, or trapped in an apparent coma but fully intact?

It's why I keep pondering the nature of cruelty. Would an AI /hate/ being modified? Or not? Is it losing something that we would treasure as part of our humaneness, or our humanity?

(no subject)

Date: 2012-03-01 11:47 pm (UTC)
seawasp: (Default)
From: [personal profile] seawasp
The last one is the only one that's horrific for me while I'm in the situation. With monomania I'll be perfectly happy pursuing my focus, and with loss of higher thought I'll be unable to notice what I don't have.

Externally, all of them are horrific, because I'm NOT any of them and I wouldn't be ME under those conditions.

(no subject)

Date: 2012-03-02 01:54 pm (UTC)
From: [identity profile] goldsquare.livejournal.com
Going a bit off topic - there is a huge gap between loss of almost all higher thought, and a lack of self-awareness.

I bet even Algernon knew something was going wrong, and remembered until he died what he once was.

The older I get, the more I realize that old age is an attempt to live within smaller and smaller circumscribed sets of capability. :-)

(no subject)

Date: 2012-03-02 06:42 pm (UTC)
From: [identity profile] ladymacgregor.livejournal.com
"The older I get, the more I realize that old age is an attempt to live within smaller and smaller circumscribed sets of capability."

I strongly disagree! Old age deals with *different* sets of capability. Sure, the physical plant has many more aches, and problems don't heal as fast as when I was twenty. But although I'm not as slender and athletic as when I was twenty, I like to think that I'm wiser, think things through more, and know more. I make more money. I own a house. I have a great relationship with my spouse. All of these are things that came with age.

(no subject)

Date: 2012-03-02 07:26 pm (UTC)
From: [identity profile] goldsquare.livejournal.com
You may be better off than I am, right now. :-)

(no subject)

Date: 2012-03-01 09:53 pm (UTC)
ext_58972: Mad! (Default)
From: [identity profile] autopope.livejournal.com
I think it's unethical to build any kind of AI that is self-aware/conscious without, at a minimum, recognizing that it has rights equivalent to a human being's.

A non-self-aware AI is another matter.

(no subject)

Date: 2012-03-01 11:48 pm (UTC)
seawasp: (Default)
From: [personal profile] seawasp
The way it's being used in the discussion would indicate "AI" in the "thinking being" class, not "big dumb program that can mimic some processes."

(no subject)

Date: 2012-03-02 03:18 am (UTC)
From: [identity profile] metageek.livejournal.com
I tend to agree. Next question: if we've created a suite of autistic AIs, tuned for various jobs (*), and then we develop the technology to create neurotypical AIs, which can still do those jobs, do we have the obligation to offer the first generation upgrades? Probably...but presumably we designed them to be happy with their lot in the first place, so maybe the upgrade wouldn't make them happier, and would just complicate their lives. (They would at least have the chance to restore from backup if they didn't like the result; humans don't get that option after surgery.)

(*) Of course, if they're being created as slaves, there's a good chance their creators are ignoring their ethical responsibilities. The laws may change once higher-functioning AIs come along, though, and get applied to older models.

(no subject)

Date: 2012-03-02 01:51 pm (UTC)
From: [identity profile] goldsquare.livejournal.com
Interesting questions to ponder, in addition to yours...

1. If an organism lacks even the slightest iota of a sense of self-preservation, is it possible for it to be sentient? Contrariwise, if it shows a sense of self-preservation, must it be sentient to be treated as alive?

2. To what degree does the ability to feel pain, or to miss former abilities, matter to ethical treatment?

3. Are Asimov's 3 Laws tantamount to slavery, or second class citizenship for his otherwise Turing Test machines?

(no subject)

Date: 2012-03-02 06:14 pm (UTC)
From: [identity profile] metageek.livejournal.com
1. If an organism lacks even the slightest iota of a sense of self-preservation, is it possible for it to be sentient?

...why not?

Contrariwise, if it shows a sense of self-preservation, must it be sentient to be treated as alive?

Well...no. Consider the cockroach.

2. To what degree does the ability to feel pain, or to miss former abilities, matter to ethical treatment?

Not much. There are humans with no sense of pain; it's still wrong to harm them. I suppose, if you have to choose someone to risk harm, and one of the candidates can't feel pain, it's more ethical to choose that one. Say, if you need someone to run some dangerous gauntlet to rescue a child, and both candidates have the same odds of survival and success. (Of course, the inability to feel pain probably changes your odds...)

3. Are Asimov's 3 Laws tantamount to slavery, or second class citizenship for his otherwise Turing Test machines?

Hell, yes. The First Law makes them second class citizens; the Second Law makes them slaves—and, stupidly, slaves of humanity as a whole, not just of whoever legally owns them.

Did you ever read the one about the JG series, the ones that were created with some judgment about how to apply the Three Laws? [[SPOILER ALERT]] The goal was to create a robot that could better survive a conflict of laws—conflict between First and Second was no problem, but what about when you had to choose which human to save? A standard robot would burn out because it couldn't obey the prohibition against allowing humans to come to harm—it might not even survive long enough to save one of them. So USR created robots who could judge which humans were more worthy, based on a moral code that measured attributes like intelligence, value to the community, and morality. Early prototypes were nearly worthless; JG-9 was getting there; JG-10 was much better. JG-10 asked to meet JG-9—I forget what reason he gave. After a while, late one night, with no humans listening, they admitted to each other that each found the other to be the most worthy human they'd ever met.

(no subject)

Date: 2012-03-02 06:49 pm (UTC)
From: [identity profile] goldsquare.livejournal.com
The "is self-awareness a requirement for sentience" is because I am dancing around what may be a boundary line between "fancy machine" and "sentient". I do not have a formal definition of sentience that I can work with, so I'm playing with it.

One of the possible dividing lines between toaster and person may be that a toaster does not have an identity, or a name. "I never saw a toaster sorry for itself. A toaster will drop frozen dead from a counter without ever having felt sorry for itself." (Apologies to D. H. Lawrence fans.)

If not the self-awareness/self-preservation of, say, a P-1, what /is/ the dividing line between sentience and complexity? I still don't know.

Going back to DSR's original question, I think that part of the ethical question relates to not only whether you are harming the AI, but whether the AI self-perceives harm or would mourn or notice the change.

One of the characters from SF that I find most creepy is the sentient cow from Douglas Adams. Is it ethical to create a creature that wants to die? That would engage in conversation to convince you to kill and eat it? Ewww.

We'd not make a change to a human that hurt or caused a loss of self-perception or mourning for the past - unless it was the lesser of two evils (death or amputation).

My personal history makes some of this very real to me: my mother and much of her family survived the Holocaust: or didn't. One of them had an arm removed, reportedly by Mengele. Part of the barbarism was to devolve people to the status of things. The line between a person or person-equivalent and non-persons is important to me, emotionally.

Pigs are pretty smart, and affectionate. But we kill and eat them. Strangely, while I have no problem with that, I find myself OK with a pig in a large healthy pen living with other pigs and eventually being slaughtered, but I am definitely not OK with factory-farming and gestation crates. (Look them up. The online videos from Smithfield Farms are especially disturbing to the point of nausea.)

When I look at those lines, in an attempt to inform myself of where my standards are, it is the notion of self-awareness and pain and self-preservation that arise in my mind.

I've read all the Asimov stories I could find. :-)

I've always wondered how an ordinary 3-Laws robot would deal with a situation where it would take too long to decide who to save, and all would be lost, and so it had to act without thinking. Could it live with the equivalent of "regret" if its random action turned out to be inferior to what it would have done had it time?

(no subject)

Date: 2012-03-02 08:21 pm (UTC)
From: [identity profile] metageek.livejournal.com
The "is self-awareness a requirement for sentience"

Wait, now, you didn't say self-awareness; you said self-preservation. I don't see those as the same thing, do you?

I think that part of the ethical question relates to not only whether you are harming the AI, but whether the AI self-perceives harm

Mmm...by that standard, you could conclude that it's unethical to deny somebody homeopathic nostrums, if he believes in them.

Part of the barbarism was to devolve people to the status of things.

True. And part of the complexity of AI is that, at some point, it elevates things to the status of people.

I find myself OK with a pig in a large healthy pen living with other pigs and eventually being slaughtered, but I am definitely not OK with factory-farming

Yeah. My dividing line is further, but not for any good reason. We almost always buy non-factory-farm chicken. And I won't eat veal, since it's excessively cruel. (Well...I won't contribute to veal being made. When I took a piece of meat at Google lunch a few months ago, and then read the sign—no, I don't know why I did it in that order—I didn't throw out the veal; that wouldn't reduce the suffering. I'm more careful now.)

I've always wondered how an ordinary 3-Laws robot would deal with a situation where it would take too long to decide who to save

Come to think of it, that was the problem with at least one of the early JGs: it could make moral judgments, but too slowly to be of any use.

(no subject)

Date: 2012-03-02 09:09 pm (UTC)
From: [identity profile] goldsquare.livejournal.com
Wait, now, you didn't say self-awareness; you said self-preservation. I don't see those as the same thing, do you?

I don't conflate them: but how do you preserve a self you do not know you have? Moreover, for ethical purposes, I'm wondering about a situation, well: say that an organism is self-aware, but so passive that it has no desire to preserve itself in its current state or condition. Is there an ethical fork in the road nearby, that would allow modifying or changing that organism in an ethical way? Is its consent involved or required? If you modded an AI to make it better, is that unethical? If you modded it to make it worse, is that unethical? What's the standard for "worse or better"? If the organism/AI doesn't care, why should you?

I did not quite understand the relevance of your homeopathy question, but if it helps: I think it is unethical for a pharmacist to withhold legally prescribed medications, such as contraceptives. :-)

And part of the complexity of AI is that, at some point, it elevates things to the status of people.

I think that is quite true. Fortunately, the question posed by our host (whom I hope I am not boring) doesn't require finding that threshold, but just posits that something could surpass it.

Tacking toward tangents: I'm not in the least convinced that veal is crueler than other beef from a factory farm. Given how young most beef cattle are when slaughtered, the primary difference between them is whether the inappropriate diet they are fed contains an iron supplement, as far as I can tell. Veal calves are slaughtered around 4-5 months, beef cattle closer to a year. Both are unnaturally confined, unnaturally fed, and often physically and chemically modified for maximum yield. I know when we were younger, veal was considered "more cruel", but I'm not sure that was ever really true. Given that dairy cattle are force-bred annually, and the boys are considered surplus, their lives are either brief, short, or not-very-long. But otherwise largely identical.

I remembered that thing about early JGs being slow, but I was thinking about a regular robot: one that made a bad decision in haste, because it lacked the leisure to make a good decision. If a positronic brain could "burn out" from a bad decision, there might be story potential in that situation.

(no subject)

Date: 2012-03-29 03:16 am (UTC)
From: [identity profile] pamelina.livejournal.com
I think we'll have very intelligent AIs without self-preservation.

For example, what about Watson? It can speak English, listen, learn, win at Jeopardy!, and diagnose illness. Is Watson an AI? If it's not, then why not?