
Saturday, 28 July 2018

Their own facts?

The Eye Evolution Simulation That Failed
Evolution News @DiscoveryCSC

On a new episode of ID the Future, Discovery Institute biologists Jonathan Wells and Ray Bohlin talk about a conversation that Dr. Wells imagined between evolutionists Richard Dawkins and Dan-Eric Nilsson, and published recently here at Evolution News. Download the podcast or listen to it here.

Dawkins had lectured (in real life) on Nilsson’s computer simulation work, showing that the human eye could have evolved easily and quickly. What would the two of them have said when Nilsson contacted Dawkins and told him, “I’m sorry, Richard, but I didn’t do that simulation”?

Wells pictures them talking about rushing the work on that simulation. But then, what about the next conversation when the simulation failed?

Rights for machines = Wrongs for humans?

European Parliament Committee Wants Robot Rights
Wesley J. Smith 

Every time I warn that transhumanists and other assorted futurists want to destroy human exceptionalism by granting personhood and citizenship to sophisticated computers, some readers mock me with "it will never happen here" complacency. (Anyone who says that has been in a coma for the last fifty years.)

Yes, it could "happen here." In fact, a committee of the European Parliament has voted in favor of robot rights:

The draft report, approved by 17 votes to two and two abstentions by the European Parliament Committee on Legal Affairs, proposes that "The most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause."

Specifically, the committee calls for AI robot rights. From the draft report (my emphasis):

...creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently;

No! If robots have rights, the concept of human rights would cease to be objective, inherent, and inalienable; it would instead become subjective, based on perceived individual capacities and capabilities.

Machines are not -- and could never be -- moral agents. They are mere things. Even the most sophisticated AI computer would merely be the totality of its programming, no matter how complex and regardless of whether it was self-teaching.

Think about it. Would owning, buying, and selling these so-called "electronic person" machines amount to slavery? Would machines be allowed to own property, vote, or be protected by anti-discrimination laws, as the report seems to suggest?

If a robot were turned off without consent, would it amount to murder? Would it be entitled to due process of law before decommissioning? The whole thing is ludicrous.

The push to conjure rights for non-human and undeserving entities, whether machines, animals, or nature, is a serious symptom of societal decadence that should be mocked and rejected out of hand.

Gold: A gift from heaven?