Broadening artificial intelligence

 
nonverbal
Total Posts:  1302
Joined  31-10-2015
 
 
 
09 September 2018 06:12
 

Racial and other prejudices continue to develop within people and perhaps always will. But such bias can also become insidiously crystallized into semi-permanent code. Joy Buolamwini is paying close attention, trying to prevent exactly that from happening in current AI systems. On facial-recognition systems and some of their likely future impacts:

Instead of getting a system that works well for 98% of people in this data set, we want to know how well it works for different demographic groups. Let’s say you’re using systems that have been trained on lighter faces but the people most impacted by the use of this system have darker faces, is it fair to use that system on this specific population?

Besides facial recognition what areas have an algorithm problem?
The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone gets insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated – what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create, but that’s only going to happen if we are intentional.
https://www.theguardian.com/technology/2017/may/28/joy-buolamwini-when-algorithms-are-racist-facial-recognition-bias

How concerned should AI engineers be about this matter? How concerned—or not—are you?

(I’ll count on admins to move this OP if it fits elsewhere.)

 
GAD
Total Posts:  16861
Joined  15-02-2008
 
 
 
09 September 2018 08:57
 

I have zero concern.

 
 
mapadofu
Total Posts:  489
Joined  20-07-2017
 
 
 
09 September 2018 10:04
 

Related TEDx talk: https://youtu.be/TRzBk_KuIaM

 
GAD
Total Posts:  16861
Joined  15-02-2008
 
 
 
09 September 2018 11:09
 
mapadofu - 09 September 2018 10:04 AM

Related TEDx talk: https://youtu.be/TRzBk_KuIaM

yeah, yeah, yeah, really just FUD.

 
 
Twissel
Total Posts:  2580
Joined  19-01-2015
 
 
 
09 September 2018 11:12
 

“Train your A.I. for the Data Set you want, not the Data Set you have”.

 
 
Brick Bungalow
Total Posts:  4897
Joined  28-05-2009
 
 
 
09 September 2018 19:01
 

Could you distill the premise of concern for me? Is the idea that artificial intelligence amplifies or broadcasts structural injustices already present? Or that it creates new ones? Or something else? I’m not precisely sure.

 
nonverbal
Total Posts:  1302
Joined  31-10-2015
 
 
 
10 September 2018 06:20
 
Brick Bungalow - 09 September 2018 07:01 PM

Could you distill the premise of concern for me? Is the idea that artificial intelligence amplifies or broadcasts structural injustices already present? Or that it creates new ones? Or something else? I’m not precisely sure.

My concern is that if racism remains in the hearts of otherwise normal individuals, then, what might prevent such an attitude from going digital, becoming even more intractable than before?

 
Brick Bungalow
Total Posts:  4897
Joined  28-05-2009
 
 
 
10 September 2018 23:23
 
nonverbal - 10 September 2018 06:20 AM
Brick Bungalow - 09 September 2018 07:01 PM

Could you distill the premise of concern for me? Is the idea that artificial intelligence amplifies or broadcasts structural injustices already present? Or that it creates new ones? Or something else? I’m not precisely sure.

My concern is that if racism remains in the hearts of otherwise normal individuals, then, what might prevent such an attitude from going digital, becoming even more intractable than before?

I share your concern. I think racism is one of those things that is particularly pernicious precisely because of how it embeds within institutions and structures and tends to linger even as attitudes within the culture progress. It seems to be one of our most toxic and most tenacious memes.

 

 
TheAnal_lyticPhilosopher
Total Posts:  524
Joined  13-02-2017
 
 
 
13 September 2018 17:40
 
nonverbal - 10 September 2018 06:20 AM
Brick Bungalow - 09 September 2018 07:01 PM

Could you distill the premise of concern for me? Is the idea that artificial intelligence amplifies or broadcasts structural injustices already present? Or that it creates new ones? Or something else? I’m not precisely sure.

My concern is that if racism remains in the hearts of otherwise normal individuals, then, what might prevent such an attitude from going digital, becoming even more intractable than before?

It’s unclear to me how someone could even incorporate racism into AI without specifically programming it there, something that seems almost easy enough to do that even I could explain it conceptually, leaving it to the programmers to write the actual code.  How do you mean racism might even get into its algorithms, absent consciously putting it there?  Are you suggesting some kind of unconscious transmission in the creative process of developing those algorithms…?

 

 
Jan_CAN
Total Posts:  2614
Joined  21-10-2016
 
 
 
13 September 2018 18:14
 
TheAnal_lyticPhilosopher - 13 September 2018 05:40 PM
nonverbal - 10 September 2018 06:20 AM
Brick Bungalow - 09 September 2018 07:01 PM

Could you distill the premise of concern for me? Is the idea that artificial intelligence amplifies or broadcasts structural injustices already present? Or that it creates new ones? Or something else? I’m not precisely sure.

My concern is that if racism remains in the hearts of otherwise normal individuals, then, what might prevent such an attitude from going digital, becoming even more intractable than before?

It’s unclear to me how someone could even incorporate racism into AI without specifically programming it there, something that seems almost easy enough to do that even I could explain it conceptually, leaving it to the programmers to write the actual code.  How do you mean racism might even get into its algorithms, absent consciously putting it there?  Are you suggesting some kind of unconscious transmission in the creative process of developing those algorithms…?

I would assume that AI can only make decisions/come to conclusions based on the information/data available to it, i.e. whatever has been programmed or fed into it.  If there was any bias in what information was included or omitted, could that not result in biased conclusions?

 
 
nonverbal
Total Posts:  1302
Joined  31-10-2015
 
 
 
13 September 2018 18:18
 
TheAnal_lyticPhilosopher - 13 September 2018 05:40 PM
nonverbal - 10 September 2018 06:20 AM
Brick Bungalow - 09 September 2018 07:01 PM

Could you distill the premise of concern for me? Is the idea that artificial intelligence amplifies or broadcasts structural injustices already present? Or that it creates new ones? Or something else? I’m not precisely sure.

My concern is that if racism remains in the hearts of otherwise normal individuals, then, what might prevent such an attitude from going digital, becoming even more intractable than before?

It’s unclear to me how someone could even incorporate racism into AI without specifically programming it there, something that seems almost easy enough to do that even I could explain it conceptually, leaving it to the programmers to write the actual code.  How do you mean racism might even get into its algorithms, absent consciously putting it there?  Are you suggesting some kind of unconscious transmission in the creative process of developing those algorithms…?

I’m certainly no AI expert, or even a novice. She seems to have a point, but you’re certainly welcome to counter-argue it. You can catch the gist of her concerns in her TED thingie:
https://www.youtube.com/watch?list=PLj62-wQeg_DhmYphxg70DhPEJcjfmnVEt&time_continue=39&v=UG_X_7g63rY

 
TheAnal_lyticPhilosopher
Total Posts:  524
Joined  13-02-2017
 
 
 
14 September 2018 07:03
 
nonverbal - 13 September 2018 06:18 PM
TheAnal_lyticPhilosopher - 13 September 2018 05:40 PM
nonverbal - 10 September 2018 06:20 AM
Brick Bungalow - 09 September 2018 07:01 PM

Could you distill the premise of concern for me? Is the idea that artificial intelligence amplifies or broadcasts structural injustices already present? Or that it creates new ones? Or something else? I’m not precisely sure.

My concern is that if racism remains in the hearts of otherwise normal individuals, then, what might prevent such an attitude from going digital, becoming even more intractable than before?

It’s unclear to me how someone could even incorporate racism into AI without specifically programming it there, something that seems almost easy enough to do that even I could explain it conceptually, leaving it to the programmers to write the actual code.  How do you mean racism might even get into its algorithms, absent consciously putting it there?  Are you suggesting some kind of unconscious transmission in the creative process of developing those algorithms…?

I’m certainly no AI expert, or even a novice. She seems to have a point, but you’re certainly welcome to counter-argue it. You can catch the gist of her concerns in her TED thingie:
https://www.youtube.com/watch?list=PLj62-wQeg_DhmYphxg70DhPEJcjfmnVEt&time_continue=39&v=UG_X_7g63rY

So I checked out the TED Talk, and I see what she is driving at.  If you don’t program algorithms with the proper basic instructions, they will display a “bias,” producing, as they will, only results based on the instructions with which they have been programmed.  Hence the robot’s software couldn’t recognize her face.  Apparently the programmer failed to include a variable for a wide enough variation in skin tone, or maybe her face is so dark the algorithms just can’t pick up the relevant features, or maybe her face deviates too far from the training set—that sort of thing.  Makes sense.  With these kinds of failures, bias enters our facial recognition algorithms, in a manner of speaking—or more precisely, the algorithms embody bias in code, because that code can only ever produce results as though a bias were present.  In this case, the bias is a result of negligence, not intent.
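To make that concrete, here is a rough sketch of the kind of per-group audit she is calling for (the detect_face function and the labelled test set are hypothetical stand-ins, not anything from the talk). A single aggregate accuracy figure hides exactly the failure she describes; breaking the same measurement out by group exposes it:

```python
# Rough sketch: audit a face detector's hit rate per demographic group.
# `detect_face` and the labelled test set are hypothetical stand-ins.
from collections import defaultdict

def audit_by_group(detect_face, labelled_faces):
    """labelled_faces: iterable of (image, group) pairs, each image containing a face."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for image, group in labelled_faces:
        totals[group] += 1
        if detect_face(image):        # True if the detector found the face
            hits[group] += 1
    # Report a detection rate per group rather than one flattering aggregate.
    return {group: hits[group] / totals[group] for group in totals}

# A 98% overall rate can coexist with something like
# {'lighter-skinned': 0.99, 'darker-skinned': 0.70}
# when the training data skewed heavily toward lighter faces.
```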

What I don’t see is how what she ends with follows from what she starts with.  Why invent some new concept “the coded gaze” for the already obvious one, “biased facial recognition”?  Why propose this “incoding movement”—and god help us, an “algorithmic justice league” to fight the “coded gaze”—for the obvious steps of making sure one starts with representative examples and instructions when coding facial recognition software and includes the proper range for the variables in those instructions?  I’m pretty sure the need to do this is widely, if not universally, known, despite the fact that clearly there was a failure in this case.  So why inflate this rather obvious problem and its equally obvious remedy with high-falutin’ concepts that—to my mind—simultaneously reiterate and obscure the obvious?  For now, instead of simply knowing I have to be more careful when programming facial recognition to “code” representative instructions, I have to be mindful of avoiding “the coded gaze”—a maxim that absent some further explanation will leave me scratching my head and wondering what I’m supposed to be doing (or in this case, not doing).  Other than guaranteeing people like Buolamwini explanatory job-security, I just don’t see the utility of her kind of language games.  They strike me as satisfying the itch to be original more than any dedication to solving actual problems. 

That’s my opinion, at least.  While I recognize the basic problem she’s pointing to, what I’m suggesting is that her way of pointing to it obscures in some high-falutin’ mission what is already widely, if not universally, known.  We need to be especially careful that our instructions for facial recognition (and algorithms generally) are ‘representative,’ and even more careful that they don’t replicate unjust social and personal biases.  Why not just say that absent the post-modern jargon of the “coded gaze” and her triad list of “coding accountability”?  Why, for all love, an “algorithmic justice league” to raise awareness of a “rising problem”?

 

[ Edited: 14 September 2018 10:40 by TheAnal_lyticPhilosopher]
 
Twissel
Total Posts:  2580
Joined  19-01-2015
 
 
 
14 September 2018 09:48
 

The solution, as always, is better data. Algorithms can detect biases, and we can remove them from future results.
We already have examples of this working, like the finding that connected mealtimes to the likelihood of favourable parole rulings by Israeli judges.
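As a rough illustration of the “better data” step (the 'group' key and the resampling approach are assumptions for the sketch, not a specific toolkit): once an audit shows one group is underrepresented, the simplest fix is to rebalance the training set before retraining.

```python
# Sketch of the "better data" step: oversample underrepresented groups so the
# training set's group proportions no longer drive the model's blind spots.
import random

def rebalance(examples):
    """examples: list of dicts, each carrying a 'group' key (an assumed layout)."""
    by_group = {}
    for ex in examples:
        by_group.setdefault(ex['group'], []).append(ex)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Duplicate randomly chosen examples until every group reaches `target`.
        balanced.extend(random.choices(members, k=target - len(members)))
    random.shuffle(balanced)
    return balanced
```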

 
 
GAD
Total Posts:  16861
Joined  15-02-2008
 
 
 
14 September 2018 17:53
 
Twissel - 14 September 2018 09:48 AM

The solution, as always, is better data. Algorithms can detect biases, and we can remove them from future results.
We already have examples of this working, like the finding that connected mealtimes to the likelihood of favourable parole rulings by Israeli judges.

I used to review the repair reports for about 50 techs against how many units failed again after repair, and the re-failure rate always spiked before lunch and quitting time. Tracing the original failures back to the assembly line, I found the same correlation.

 
 
Poldano
Total Posts:  3295
Joined  26-01-2010
 
 
 
17 September 2018 02:37
 
TheAnal_lyticPhilosopher - 13 September 2018 05:40 PM
nonverbal - 10 September 2018 06:20 AM
Brick Bungalow - 09 September 2018 07:01 PM

Could you distill the premise of concern for me? Is the idea that artificial intelligence amplifies or broadcasts structural injustices already present? Or that it creates new ones? Or something else? I’m not precisely sure.

My concern is that if racism remains in the hearts of otherwise normal individuals, then, what might prevent such an attitude from going digital, becoming even more intractable than before?

It’s unclear to me how someone could even incorporate racism into AI without specifically programming it there, something that seems almost easy enough to do that even I could explain it conceptually, leaving it to the programmers to write the actual code.  How do you mean racism might even get into its algorithms, absent consciously putting it there?  Are you suggesting some kind of unconscious transmission in the creative process of developing those algorithms…?

AI that uses a learning method is susceptible to bias from any distortion present in the data set used for learning. I’m using “distortion” to refer specifically to the differences between the sample data set used for learning and the data for the population as a whole. The reason is that learning necessarily relies on Bayesian statistical reasoning, whether the learner is an AI, a pigeon, or a human. Even perfect learning algorithms are no proof against this source of bias.
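A toy numerical illustration of that point (all numbers invented): even an estimator that does its statistics perfectly can only reproduce the proportions of the sample it was handed, not those of the population.

```python
# Toy illustration: the learner's estimated "prior" can only mirror its sample.
population      = {'A': 900, 'B': 100}   # true proportions: 90% / 10%
training_sample = {'A': 500, 'B': 500}   # the sample over-represents group B

def learned_prior(counts):
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

print(learned_prior(population))        # {'A': 0.9, 'B': 0.1}
print(learned_prior(training_sample))   # {'A': 0.5, 'B': 0.5}
# A flawless estimator still inherits the sample's distortion, and so does
# every prediction built on top of it.
```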

 

[ Edited: 17 September 2018 02:41 by Poldano]
 
 
Antisocialdarwinist
Total Posts:  6396
Joined  08-12-2006
 
 
 
18 September 2018 14:22
 
Poldano - 17 September 2018 02:37 AM
TheAnal_lyticPhilosopher - 13 September 2018 05:40 PM
nonverbal - 10 September 2018 06:20 AM
Brick Bungalow - 09 September 2018 07:01 PM

Could you distill the premise of concern for me? Is the idea that artificial intelligence amplifies or broadcasts structural injustices already present? Or that it creates new ones? Or something else? I’m not precisely sure.

My concern is that if racism remains in the hearts of otherwise normal individuals, then, what might prevent such an attitude from going digital, becoming even more intractable than before?

It’s unclear to me how someone could even incorporate racism into AI without specifically programming it there, something that seems almost easy enough to do that even I could explain it conceptually, leaving it to the programmers to write the actual code.  How do you mean racism might even get into its algorithms, absent consciously putting it there?  Are you suggesting some kind of unconscious transmission in the creative process of developing those algorithms…?

AI that uses a learning method is susceptible to bias from any distortion present in the data set used for learning. I’m using “distortion” to refer specifically to the differences between the sample data set used for learning and the data for the population as a whole. The reason is that learning necessarily relies on Bayesian statistical reasoning, whether the learner is an AI, a pigeon, or a human. Even perfect learning algorithms are no proof against this source of bias.

There are two different kinds of AI, only one of which is truly “intelligent” in my opinion. First is AI that only knows what it has been programmed to know. An example of this is Stockfish, the chess engine that until recently was the best chess-player on the planet. But everything Stockfish knows about chess is part of its original programming. Any given version of Stockfish is no better at playing chess now than it was when it was first released. Other examples include programs designed to predict the likelihood that parole candidates will reoffend, which have recently been under fire for being “racist” or discriminatory toward certain groups.

The other kind of AI—the kind that is truly “intelligent”—is a generalized learning algorithm, so-called “Deep Learning.” It starts out knowing nothing, but learns from experience. The best example of this is AlphaZero, the AI that taught itself to play chess well enough to consistently beat Stockfish. It was programmed with the rules of the game, but not with tactics or strategy. These it picked up on its own by playing millions of games against itself, moving randomly at first, then refining its tactics and strategy based on winning or losing. It did this all in just four hours. The same self-play approach also mastered Go, starting with nothing more than the rules of the game (its predecessor in the same family, AlphaGo, had already been the first AI to consistently beat the world’s best human Go players). And it did the same with another game, Shogi. Another example is the deep-learning system that learned to predict the sexual orientation of men and women based solely on photographs posted to dating websites, reportedly with roughly 90% accuracy.

It’s easy to see how bias could make its way into the first, non-intelligent type of AI. If I, the person in charge of the algorithm, believe that bald parole candidates are more likely to reoffend than candidates with hair, then I’ll incorporate this bias into the prediction algorithm.
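In code, that first case is as blunt as it sounds; a toy sketch with invented feature names and weights:

```python
# Hand-coded "risk score" with the author's prejudice written in as a rule.
# Feature names and weights are invented for illustration.
def recidivism_risk(candidate):
    score = 0.3 * candidate['prior_offenses']
    if candidate['bald']:             # the explicit, programmed-in bias
        score += 0.5
    return score

print(recidivism_risk({'prior_offenses': 1, 'bald': True}))   # 0.8
print(recidivism_risk({'prior_offenses': 1, 'bald': False}))  # 0.3
```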

How the second type of AI—the truly intelligent type—could become biased isn’t quite so obvious. It all depends on the information presented to it during the learning phase. For example, I could cherry-pick from all the parole candidates who went on to reoffend and exclude all those with hair, while at the same time excluding bald candidates who didn’t reoffend. The AI would presumably associate bald parole candidates with a high likelihood of reoffending and hairy candidates with a low likelihood of reoffending.
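A small sketch of that cherry-picking scenario (all data invented): the full records show no association between baldness and reoffending, but the curated training set manufactures one, and any learner trained on it will reproduce it.

```python
# The full records show no link between baldness and reoffending, but a
# cherry-picked training set manufactures one. Data is invented.
full_records = (
    [{'bald': True,  'reoffended': True}]  * 50 +
    [{'bald': True,  'reoffended': False}] * 50 +
    [{'bald': False, 'reoffended': True}]  * 50 +
    [{'bald': False, 'reoffended': False}] * 50
)

# Keep only bald reoffenders and non-bald non-reoffenders.
cherry_picked = [r for r in full_records if r['bald'] == r['reoffended']]

def reoffense_rate(records, bald):
    group = [r for r in records if r['bald'] == bald]
    return sum(r['reoffended'] for r in group) / len(group)

print(reoffense_rate(full_records, bald=True))    # 0.5 -- no real difference
print(reoffense_rate(full_records, bald=False))   # 0.5
print(reoffense_rate(cherry_picked, bald=True))   # 1.0 -- "learned" association
print(reoffense_rate(cherry_picked, bald=False))  # 0.0
```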

One thing’s for sure: even if there is in fact a causal relationship between baldness (or skin color, etc.) and reoffending, and that relationship is reflected in the AI’s predictions, the AI will still be accused of bias. This could be true even if information about skin color, for example, is withheld from the AI during the learning phase. It might, for example, find that candidates raised in low-income, fatherless households are highly likely to reoffend. If there’s also a correlation between skin color and low-income, fatherless households, the AI will appear to be biased against candidates with a certain skin color.
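A sketch of that proxy effect (invented data again): the model never sees group membership, only the correlated feature, yet its predictions still differ sharply by group.

```python
# Proxy effect: group membership is withheld from the model, but a correlated
# feature is not, so predictions still differ across groups. Numbers invented.
import random
random.seed(0)

def make_candidate():
    group = random.choice(['X', 'Y'])
    # In this toy data the proxy feature is simply more common in group Y.
    proxy = random.random() < (0.7 if group == 'Y' else 0.2)
    return {'group': group, 'low_income_fatherless': proxy}

candidates = [make_candidate() for _ in range(10000)]

# Suppose the trained model ended up keying on the proxy feature alone;
# group membership is never an input to it.
def predicted_high_risk(candidate):
    return candidate['low_income_fatherless']

for group in ('X', 'Y'):
    members = [c for c in candidates if c['group'] == group]
    rate = sum(predicted_high_risk(c) for c in members) / len(members)
    print(group, round(rate, 2))   # roughly X 0.2 vs Y 0.7: disparate outcomes, no group input
```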

 
 