Sam’s Dream is About to Come True! (Kind of)

 
Antisocialdarwinist
Total Posts:  6262
Joined  08-12-2006
22 December 2017 08:33

It might not quite be science determining human values, but I think AlphaZero comes pretty close. The same AI that learned to play Go well enough to beat the world champion (a first for AI) has now “taught itself chess, then beat a grandmaster with moves never devised in the game’s 1,500-year history”—in four hours.

The Brits want to use the algorithm—which learns on its own without human input—to help diagnose cancer patients and possibly even make decisions about treatment:

But the insertion of a super-intelligent AI into NHS decision-making procedures brings an infinitely more worrying concern.

It is an open secret that the NHS effectively rations access to care — through waiting lists, bed numbers and limiting availability of drugs and treatments — as it will never have enough funds to give everyone the service they need.

The harsh reality is that some deserving people lose out.

The harsher alternative is to be coldly rational by deciding who and who not to treat. It would be most cost-effective to exterminate terminally ill or even chronically ill patients, or sickly children. Those funds would be better spent on patients who might be returned to health — and to productive, tax-paying lives.

This is, of course, an approach too repugnant for civilised societies to contemplate. But decision-making AIs such as AlphaZero don’t use compassionate human logic because it gets in the way. (The ‘Zero’ in that program’s name indicates it needs no human input.)

The same sort of computer mind that can conjure up new chess moves might easily decide that the most efficient way to streamline the health service would be to get rid of the vulnerable and needy.
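As an aside on what “needs no human input” actually means: AlphaZero pairs deep networks with tree search, but the core self-play idea can be sketched with something far smaller, e.g. tabular Q-learning on the toy game of Nim. Everything below (the game, the constants, the function names) is an illustrative assumption, not AlphaZero’s actual method:

```python
import random

N_START = 10          # Nim: start with 10 sticks; players alternate turns
ACTIONS = (1, 2, 3)   # remove 1-3 sticks; whoever takes the last stick wins

# One shared value table learned purely from self-play -- no human strategy
# is ever supplied. Q[s][a] estimates the outcome for the player to move in
# state s (sticks remaining) after removing a sticks.
Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, N_START + 1)}

def pick_move(s, eps, rng):
    """Epsilon-greedy: explore occasionally, else play the best known move."""
    if rng.random() < eps:
        return rng.choice([a for a in ACTIONS if a <= s])
    return max(Q[s], key=Q[s].get)

def self_play(episodes=20000, alpha=0.5, eps=0.2, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        s = N_START
        while s > 0:
            a = pick_move(s, eps, rng)
            nxt = s - a
            if nxt == 0:
                target = 1.0                    # taking the last stick wins
            else:
                target = -max(Q[nxt].values())  # opponent replies optimally
            Q[s][a] += alpha * (target - Q[s][a])
            s = nxt

self_play()
policy = {s: max(Q[s], key=Q[s].get) for s in Q}
```

With enough episodes the table should converge on the known optimal Nim strategy (always leave your opponent a multiple of four sticks), discovered entirely by the program playing against itself.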

Another similar algorithm “developed in America for probation services to predict the risk of parole-seekers reoffending was recently discovered to have quickly become unfairly racially biased.” Unfairly? Because bias caused it to make inaccurate predictions? Or because profiling is unfair for philosophical reasons having nothing to do with predictive accuracy?
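That question—inaccurate predictions versus unfair profiling—is the crux of the COMPAS debate, and the two notions can be teased apart with a toy example. The numbers below are made up purely to illustrate: a score can be perfectly calibrated within each group (a “0.8” means an 80 percent reoffense rate in both groups) and still produce different false-positive rates whenever the groups’ base rates differ:

```python
# Each person is (risk_score, actually_reoffended). Both groups are
# calibrated by construction: among people scored 0.8, exactly 80%
# reoffend; among those scored 0.2, exactly 20% do.
group_a = [(0.8, True)] * 8 + [(0.8, False)] * 2 \
        + [(0.2, True)] * 2 + [(0.2, False)] * 8
group_b = [(0.8, True)] * 8 + [(0.8, False)] * 2 \
        + [(0.2, True)] * 6 + [(0.2, False)] * 24

def calibration(people, score):
    """Observed reoffense rate among people assigned this score."""
    outcomes = [y for s, y in people if s == score]
    return sum(outcomes) / len(outcomes)

def false_positive_rate(people, threshold=0.5):
    """Share of non-reoffenders flagged as high risk."""
    negatives = [s for s, y in people if not y]
    return sum(1 for s in negatives if s >= threshold) / len(negatives)

# Identical calibration in both groups...
assert calibration(group_a, 0.8) == calibration(group_b, 0.8) == 0.8
# ...yet group A's non-reoffenders are flagged as high risk more than
# twice as often, simply because group A has a higher base rate.
fpr_a = false_positive_rate(group_a)   # 2/10  = 0.20
fpr_b = false_positive_rate(group_b)   # 2/26 ≈ 0.077
```

This is a cartoon of the formal result that calibration and equal error rates across groups generally cannot coexist when base rates differ—so “unfairly biased” can be true under one definition while the predictions remain accurate under another.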

It’s time for our justice system to embrace artificial intelligence

Professionals in the criminal justice system have a seemingly impossible task. They must weigh the probability that a criminal defendant will show up to trial, whether they are guilty, what the sentence should be, whether parole is deserved and what type of probation ought to be imposed. These decisions require immense wisdom, analytical prowess, and evenhandedness to get right. The rulings handed down will change the course of individuals’ lives permanently.

But human judgment brings humans failings. Not only are there racial disparities in the sentencing process, but research suggests that extraneous factors like how recently a parole board member ate lunch or how the local college football team is doing can have significant effects on the outcome of a decision. It may be that the tasks we ask judges and parole boards to carry out are simply too difficult for internal human calculus.

While humans rely on inherently biased personal experience to guide their judgments, empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates. Importantly, these gains can be made across the board, including for Hispanics and African-Americans.
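The cited gains come from policy simulations, and the underlying tradeoff is easy to sketch: given predicted reoffense risks, moving the detention threshold trades jail population against expected crime by those released. Everything below—the synthetic uniform risk scores, the assumption that the predictions are well calibrated—is hypothetical:

```python
import random

rng = random.Random(0)
# Hypothetical defendants, each with a predicted reoffense probability.
risks = [rng.random() for _ in range(1000)]

def outcome(threshold):
    """Detain everyone at or above the threshold; return
    (number jailed, expected crimes among those released)."""
    jailed = sum(1 for p in risks if p >= threshold)
    expected_crime = sum(p for p in risks if p < threshold)
    return jailed, expected_crime

for t in (0.3, 0.5, 0.7):
    jailed, crime = outcome(t)
    print(f"threshold {t}: jailed {jailed}, expected crimes {crime:.0f}")
```

A sharper model—risks concentrated nearer 0 and 1—moves this whole frontier outward, which is how a simulation can report less crime at the same jailing rate, or fewer people jailed at the same crime rate.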

AlphaZero for president!

 
 
Antisocialdarwinist
Total Posts:  6262
Joined  08-12-2006
22 December 2017 08:37
 

Or better yet, AlphaZero for God.

 
 
EN
Total Posts:  20316
Joined  11-03-2007
22 December 2017 08:40
 

As long as we don’t get rid of lawyers, I’m good with it.

 
Cheshire Cat
Total Posts:  947
Joined  01-11-2014
22 December 2017 11:59
 

I’ve been wondering lately: if some powerful AI gained power over us humans on a global scale, what decisions would it make?

If it was programmed to value sentient life and to see the logic of ecosystems surviving and thriving for the global survival of all beings, wouldn’t it eventually have to come to the conclusion that there were too many human beings on this planet?

Wouldn’t it conclude that a drastic reduction in human beings would be the main solution for stopping global warming and preserving the web of life?

Isn’t this the ultimate fear we have of an AI? That once created, the monster escapes the laboratory, then concludes its creator is a detriment and should be eliminated or limited?

Mary Shelley’s tale might be prescient.

 
 
GAD
Total Posts:  16340
Joined  15-02-2008
22 December 2017 16:41
 

Sounds good to me. If nothing else we’ll get some good scifi out of fighting the machines because they don’t understand that being bad is good.

 
 
Cheshire Cat
Total Posts:  947
Joined  01-11-2014
22 December 2017 18:10
 
GAD - 22 December 2017 04:41 PM

Sounds good to me. If nothing else we’ll get some good scifi out of fighting the machines because they don’t understand that being bad is good.

I doubt there would be any machines to fight.

All it would take is a genetically engineered fatal virus that targets humans only. If the virus had a long dormancy period, and could live in a human body for months or years before becoming active, that would do the trick. It could be spread around the world before people even knew it was there. The AI would have to develop a vaccine to inoculate the human beings that it might need to sustain it, or perhaps it wouldn’t want to kill us off completely, just reduce our numbers drastically.

I’m thinking of something along the lines of Terry Gilliam’s movie, “12 Monkeys.”

 
 
Skipshot
Total Posts:  9000
Joined  20-10-2006
22 December 2017 21:37
 

I am all for AI, and am not afraid of intellectual / rational competition.

 
EntropyPhoenix
Total Posts:  2
Joined  27-02-2018
27 February 2018 15:24
 

The only reason our civilization holds together is because of resources, and these are catastrophically mismanaged - planet-transforming externalized costs, technology that amplifies human ignorance and destructive evolutionary proclivities, vast waste of human potential (think of all the ways the human species makes itself dumber on purpose or ignores low-cost ways to improve intelligence), economic systems that are inherently unsustainable, etc.

We have a species with a brain too complex for most members to even begin to understand how it works - and that’s just with the shallow body of current neuroscience. These un-self-aware brains then come together to create systems of ever-increasing complexity using resource competition as the primary driving mechanism. There is no way this can last for long, and the dire problems arising on the planet are an indication of the absurdity of our situation.

The human brain is too limited to manage its own complexity, much less something as complex as emergent civilization, and it’s certainly insufficient to manage a whole planet. This should make intuitive sense. Evolution could not have planned for the consequences of mental faculties capable of creating the emergent complexity of civilization. What are the odds that our brain architecture could have been “in tune with” and “in control” of whatever it could produce emergent with other brains? It makes intuitive sense that our attempts at civilization would create an ever-growing potential for catastrophe. We create growing existential risk in the form of new systems we don’t understand (e.g. look at the effects of social media on polarization and negative psychological consequences for teens). We are helplessly plugged into an incomprehensible machine, and are little more than drones pumping resources into evolving and maintaining it. When you consider our lack of free will and the neurological programming we receive from the environment (e.g. this neurological programming contains the narratives and technology of the environment’s own evolution and maintenance), this alarming process, involving various positive feedback loops, should be apparent.

The only chance this species has is to take many forms of power out of the hands of human apes and put it into the hands of highly-intelligent machines. We’ve already seen what human “leaders” will do without exception, and they can never be trusted to take us into the future. There is also genetic engineering, neural implants, and other species-changing technology on the horizon, and the point is that the default human ape is not up to the task of managing its own existence. The assumption that we can is so deeply embedded that it’s difficult to question, but it’s fundamental, and a little reflection on evolution should present the intuition that our intelligence is highly dangerous and insufficient to manage itself.

 
GreenInferno
Total Posts:  115
Joined  20-09-2012
06 May 2018 15:46
 
Antisocialdarwinist - 22 December 2017 08:33 AM

It is an open secret that the NHS effectively rations access to care — through waiting lists, bed numbers and limiting availability of drugs and treatments — as it will never have enough funds to give everyone the service they need.

AlphaZero for president!

Isn’t that how you practice triage?

As for the consequences of having an AI make these sorts of decisions, I don’t have a problem with it in principle. Besides, we could always have other AIs cross-check the decision.

 
 
TheAnal_lyticPhilosopher
Total Posts:  202
Joined  13-02-2017
10 May 2018 03:45
 

Another similar algorithm “developed in America for probation services to predict the risk of parole-seekers reoffending was recently discovered to have quickly become unfairly racially biased.” Unfairly? Because bias caused it to make inaccurate predictions? Or because profiling is unfair for philosophical reasons having nothing to do with predictive accuracy?

At least it won’t have to break for lunch.

 
TheAnal_lyticPhilosopher
Total Posts:  202
Joined  13-02-2017
10 May 2018 04:06
 
EntropyPhoenix - 27 February 2018 03:24 PM

The only reason our civilization holds together is because of resources, and these are catastrophically mismanaged - planet-transforming externalized costs, technology that amplifies human ignorance and destructive evolutionary proclivities, vast waste of human potential (think of all the ways the human species makes itself dumber on purpose or ignores low-cost ways to improve intelligence), economic systems that are inherently unsustainable, etc.

We have a species with a brain too complex for most members to even begin to understand how it works - and that’s just with the shallow body of current neuroscience. These un-self-aware brains then come together to create systems of ever-increasing complexity using resource competition as the primary driving mechanism. There is no way this can last for long, and the dire problems arising on the planet are an indication of the absurdity of our situation.

The human brain is too limited to manage its own complexity, much less something as complex as emergent civilization, and it’s certainly insufficient to manage a whole planet. This should make intuitive sense. Evolution could not have planned for the consequences of mental faculties capable of creating the emergent complexity of civilization. What are the odds that our brain architecture could have been “in tune with” and “in control” of whatever it could produce emergent with other brains? It makes intuitive sense that our attempts at civilization would create an ever-growing potential for catastrophe. We create growing existential risk in the form of new systems we don’t understand (e.g. look at the effects of social media on polarization and negative psychological consequences for teens). We are helplessly plugged into an incomprehensible machine, and are little more than drones pumping resources into evolving and maintaining it. When you consider our lack of free will and the neurological programming we receive from the environment (e.g. this neurological programming contains the narratives and technology of the environment’s own evolution and maintenance), this alarming process, involving various positive feedback loops, should be apparent.

The only chance this species has is to take many forms of power out of the hands of human apes and put it into the hands of highly-intelligent machines. We’ve already seen what human “leaders” will do without exception, and they can never be trusted to take us into the future. There is also genetic engineering, neural implants, and other species-changing technology on the horizon, and the point is that the default human ape is not up to the task of managing its own existence. The assumption that we can is so deeply embedded that it’s difficult to question, but it’s fundamental, and a little reflection on evolution should present the intuition that our intelligence is highly dangerous and insufficient to manage itself.

From one relative newbie to another…stick around, my man.  Stick around…