Wait, now, you didn't say self-awareness; you said self-preservation. I don't see those as the same thing, do you?
I don't conflate them: but how do you preserve a self you do not know you have? Moreover, for ethical purposes, I'm wondering about a situation, well: say that an organism is self-aware, but so passive that it has no desire to preserve itself in its current state or condition. Is there an ethical fork in the road nearby, that would allow modifying or changing that organism in an ethical way? Is its consent involved or required? If you modded an AI to make it better, is that unethical? If you modded it to make it worse, is that unethical? What's the standard for "worse or better"? If the organism/AI doesn't care, why should you?
I did not quite understand the relevance of your homeopathy question, but if it helps: I think it is unethical for a pharmacist to withhold legally prescribed medications, such as contraceptives. :-)
And part of the complexity of AI is that, at some point, it elevates things to the status of people.
I think that is quite true. Fortunately, the question begged by our host (who I hope I am not boring) doesn't address finding that threshold, but just posits something could surpass it.
Tacking toward tangents: I'm not in the least convinced that veal is crueler than other beef from a factory farm. Given how young most beef cattle are when slaughtered, the primary difference between them is whether the inappropriate diet they are fed contains an iron supplement, as far as I can tell. Veal cattle are slaughtered around 4-5 months, beef cattle closer to a year. Both are unnaturally confined, unnaturally fed, and often physically and chemically modified for maximum yield. I know when we were younger, veal was considered "more cruel", but I'm not sure that was ever really true. Given that dairy cattle are force-bred annually, and the boys are considered surplus, their lives are either brief, short, or not-very-long. But otherwise largely identical.
I remembered that thing about early JG's being slow, but I was thinking about a regular robot: one that made a bad decision in haste, because it lacked the leisure to make a good decision. If a positronic brain could "burn out" from a bad decision, there might be story-potential in that situation.
Date: 2012-03-02 09:09 pm (UTC)