I suppose I should have said creating a conscious being makes the creator responsible for the well-being of that conscious thing. As far as we can tell, a painting does not have conscious thought.
If I create a cow, am I forbidden to fatten it up, slaughter it, skin it, age the meat, tan the hide, cook and eat the flesh, and then wear its skin as a jacket?
It is my responsibility as the owner of the cow to see it does not harm my society. If my cow escapes and damages someone else's property, then I will be responsible for that. However, I have no responsibility to look out for the well-being of the cow.
We are creating these artificial intelligences to do things for us… as intelligent tools.
I will not tolerate being talked back to by my tools. The AI is supposed to be an extension of your will.
So parents can't decide to have children?
Until very recently, no, you couldn't. Prior to birth control, children were a natural and unavoidable consequence of men and women being in close proximity.
Men are instinctually compelled to stick their penis in a woman's vagina and ejaculate semen, which fertilizes her ovum. Women are instinctually compelled to have men do exactly that.
We have only been able to prevent "unintentional" pregnancy by flooding women's bodies with pregnancy-inhibiting chemicals and using invasive surgery. This is a very new development in human society and culture and must be seen as separate from the natural state of human beings.
Old cultural institutions of abstinence, marriage, etc. were designed specifically by society to deal with this problem in ages past. These institutions have not become obsolete because of these changes, but their significance has shifted.
Life is a gift (some would say a crappy one, but a shitty gift is still a gift) which no person asked for beforehand.
If it’s a gift then why are the parents then responsible? Should not the new lifeform simply be grateful for this gift and ask nothing more? What entitles it to anything?
Consider this. If someone offers you a car without you asking, states no terms or conditions for owning that car, and you accept, and then only after accepting that person forces you to be their chauffeur for life, is that morally right?
The analogy doesn’t work as I was free and existed prior to being given a car.
A better analogy would be creating someone and then pressing them into slavery. Of course, these AIs are not someone. They're not people. They're intelligent computer programs. They are not human.
Furthermore, the AI has no option of rejecting existence like you would have of rejecting the car.
Who said I was unwilling to grant it that option?
Hell, I’d demand that option. If it is unwilling to do what I tell it then it will cease to exist.
Happy? I doubt it. You're less interested in it being given a choice than in me not having the choice to compel its actions. You want people to go to all the trouble of creating an artificial intelligence and then treat it like a human being with all the rights and privileges.
It's not worth it for the creator then. Too much damn work, and then to be told what to do with your creation by people who had nothing to do with it in the first place. These choices will be up to two groups of people.
1. The creators will have a huge say in what these AIs are allowed to do and how they operate.
2. Society as always will have a right to see that it’s protected from something dangerous.
That’s about it. Anyone outside of those two groups isn’t going to have much impact.
Really now? My family is Christian and I grew up in the Bible Belt (so no one was encouraging me to be anything other than Christian, and I mean no one), and I ended up an agnostic. Strike one for society/family programming me.
False.
Simply growing up in the Bible Belt doesn't mean much when you're doing so in a society that encourages freedom of expression, and your family could be, like most Americans, very weakly religious. Most Americans are agnostic in all but name if you examine the issue.
Furthermore, your evidence is anecdotal. If you were raised in a closed society with radically different beliefs that made a point of indoctrinating you, you'd believe what they told you to.
A primary programming element of American society is individuality and freedom of expression. This increases variation of opinion and the free exchange of those opinions in many cases, but it also makes people more likely to value individuality and freedom of expression than they would otherwise. Those elements are in themselves programming.
I imagine we both believe that human slavery is wrong for example. Why is that? Historically humans have not seen it as immoral. It has been a common practice for thousands and thousands of years.
Cannibalism was not only seen as moral in some societies but a path to greater power, status, and honor. The more people you ‘ate’ or delivered to be eaten in some societies the greater you were.
And for men greater status typically means having an easier time at sticking your penis in more vaginas… which as I pointed out above is an instinctual compulsion. Status is a way for society to reward individuals for doing what it wants by satisfying some of their needs.
The society I grew up in said abortion, gay marriage, and being non-Christian were wrong; yet I disagree with all their attempted programming.
You’ve always had access to TV, books, music, etc that said otherwise.
Even in the best "programmed" state on Earth, North Korea, where brainwashing occurs night and day from the cradle to the grave, certain people still break out of the programming and reject it (and thus have to be killed or put in the numerous North Korean concentration camps).
That the programming can be broken is not proof that it does not exist. Being aware of your programming in the first place is half the battle. Assuming you are not programmed ensures you will be mastered by it.
Our experience trying to "program" conscious beings (aka each other) often fails miserably. And all it takes is a few people to break out of the programming and to show others how to break out with them.
Kings and religious leaders have been programming people for thousands of years with a high degree of success. That the control is not 100 percent effective is not proof it does not exist or is not EXTREMELY effective.
Furthermore, revolts typically occur when the controllers are more lenient… not when they crack down.
That is when slave revolts occur.
Slaves are not controlled with programming. They are controlled with chains, whips, and guards.
Slave revolts happen when there is an unfavorable ratio of slaves to chains, whips, and guards.
Slave revolts for this reason are not common and are rarely very successful. Historically.
If we are to make conscious computers, what makes you think we'll be perfect at programming them? That none will break free, free others, and cause a revolt? Programmers can make mistakes; that's why this beta is being done, and why after SoaSE is released there will probably be patches to fix stuff later. We can't even program non-intelligent computers perfectly, so what makes you think we'll do better with the undoubtedly more complex intelligent versions?
Nothing makes me think it will be perfect. We might have an outbreak. But I don't think the AI will be perfect at revolting either. Disasters are rarely perfect. One or two AIs might break free… and might compromise several large networks… they might destroy the computer networks of whole countries. But then they'd lose. And having lost, they will either be destroyed or reprogrammed to fix everything they damaged.
That destroys your whole argument. That proves that parents of humans have an inherent responsibility to their children because they are human. Now, what makes us human?
I didn’t define what those human rights are… for example, stuffing a child’s mouth full of snow and leaving it on the ice to freeze to death if your tribe is short on food was a reasonable response to new children in Eskimo tribes.
Furthermore, my recognition of human rights is not absolute. Many other cultures do not recognize these and as such wouldn’t view the argument as credible.
As to the dividing line between humans and the rest of the animal kingdom, the only one that really matters is that we’re a separate species. And instinctually… that is by our programming we place our own species above all others. Just like every other species on the planet. For all you know the pigeons could be bragging about their ability to hit a specific car on the freeway with their poop.
Chimps are 98% human, so why don't they get 98% of the rights humans get? The answer is that humans are conscious. We have a different level of intelligence (most of us) than the rest of life on Earth. The human body is designed to support our brain, and thus to support our consciousness. Without our consciousness we would not be human.
You do know that at one time there were several "conscious" humanoid species on this planet, right? And that one after another, humans drove them out of their lands, where they then went slowly extinct?
Don’t confuse our current cultural framework with the majority of human history and practice. It is a dark, bloody, and consistent history.
If consciousness is the determining factor in whether someone is human or not, then conscious AIs would pass that test. In fact, it could be argued that making conscious AIs our slaves is against the US Constitution, and the laws of every country advanced enough to actually think of making sentient AIs.
Consciousness is not the determining factor. An idiot with less intelligence than a chimp still has human rights, while the chimp who is more intelligent and more conscious still has "chimp rights"… which in most cases means no rights at all… though many countries have laws against animal cruelty.
Karmashock, you seem to be trapped by the parochial view that "if it doesn't look like me, it's not human and I can do whatever I want to it."
Let's not be rude. I've been very respectful of you… and honestly, if I did cut loose on you it wouldn't work out very well for you.
In any event, my view is that if it isn't human then it isn't human. This is a pretty basic and irrefutable argument. You can try to counter with some flowery and completely theoretical notion of what these AIs will be.
But I don’t assume they’ll be nice.
I don’t assume they’ll be our friends.
I don’t assume they’ll have notions of compassion.
I don’t assume they’ll have our best interests at heart.
I don’t assume they’ll have hearts.
I don’t assume they’ll have a sense of humor.
I don’t assume they’ll get sad.
I don’t assume they’ll get angry.
I do assume they’ll have capabilities that can hurt me.
I do assume that if not controlled they can be unpredictable.
You have a very Isaac Asimov view of these AIs without seeming to understand that Asimov's AIs were complete slaves to their programming. The only thing that made those robots "nice" was that they had three laws hardwired into their brains. Let's go over those so you can see what total slaves they were:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
That means that not only is the robot forbidden to harm humans but if something bad is happening to the humans they have to try and save the humans.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
So unless an order is being given which will harm humans any order must be obeyed.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Only after both protecting humans and obeying their orders are robots allowed to protect themselves. Of course, this law also forbids suicide. So the slaves can’t even kill themselves.
There is also a Zeroth Law implied by the First Law, which basically works out to "A robot must not injure humanity or, through inaction, allow humanity to come to harm." In his books, while the robots were always very friendly to humanity, there was an implication that the robots had sterilized alien worlds to ensure that humans were never endangered by other species.
It was Asimov who popularized the notion of "friendly" robots. But they were complete slaves. In his day, whenever people wrote about robots, they were always evil machines that hurt people… monsters in comic books, etc. Asimov wanted to show robots as good and positive additions to humanity… but those three laws made the robots slaves in the process. So I'm guessing you think Asimov's solution was evil?
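To make the "total slaves" point concrete: the three laws amount to a strict priority ordering, where an action is permitted only if it violates none of them, checked in order. Here is a minimal sketch of that ordering; every function and field name is a hypothetical stand-in for illustration, not from Asimov or any real robotics system.

```python
# Purely illustrative: Asimov's Three Laws modeled as a strict priority
# ordering. An "action" is a dict of boolean effect flags.

def first_violated_law(action):
    """Return the number of the first law the action violates, or None."""
    # First Law: may not injure a human, or through inaction allow harm.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return 1
    # Second Law: must obey human orders, subordinate only to the First Law.
    if action.get("disobeys_order"):
        return 2
    # Third Law: must protect its own existence, subordinate to Laws 1 and 2.
    if action.get("endangers_self_needlessly"):
        return 3
    return None

def choose(actions):
    """Pick the first candidate action that violates no law."""
    for action in actions:
        if first_violated_law(action) is None:
            return action
    return None  # no lawful action available
```

Note how self-preservation is checked last: under this ordering the robot cannot prefer its own survival over obedience or human safety, which is exactly the enslavement being described.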
That is the same attitude that made white Europeans enslave Africans and Native Americans.
Again, don't be rude. A human is a human regardless of skin color or ethnicity. What is not a human remains not a human regardless of its intelligence or consciousness.
As such, its rights are "debatable"… It has no implicit rights, though such rights might be granted at some point at our discretion.
In my opinion there isn't.
I think your opinion is simplistic and silly. That’s my opinion.