Broadening artificial intelligence

 
Poldano
Total Posts:  3333
Joined  26-01-2010
 
 
 
18 September 2018 23:18
 
Antisocialdarwinist - 18 September 2018 02:22 PM
Poldano - 17 September 2018 02:37 AM
TheAnal_lyticPhilosopher - 13 September 2018 05:40 PM
nonverbal - 10 September 2018 06:20 AM
Brick Bungalow - 09 September 2018 07:01 PM

Could you distill the premise of concern for me? Is the idea that artificial intelligence amplifies or broadcasts structural injustices already present? Or that it creates new ones? Or something else? I’m not precisely sure.

My concern is that if racism remains in the hearts of otherwise normal individuals, what might prevent such an attitude from going digital and becoming even more intractable than before?

It’s unclear to me how someone could incorporate racism into AI without specifically programming it there, which seems easy enough to do that even I could explain it conceptually, leaving it to the programmers to write the actual code. How do you think racism might get into its algorithms, absent someone consciously putting it there? Are you suggesting some kind of unconscious transmission in the creative process of developing those algorithms…?

AI that uses a learning method is susceptible to bias from any distortion present in the data set used for learning. I’m using distortion to refer specifically to the differences between the sample data set used for learning and the data set for the entire population. The reason is that learning necessarily uses Bayesian statistical reasoning, whether the learner is an AI, a pigeon, or a human. Even perfect learning algorithms are no proof against this source of bias.
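Here is a throwaway sketch of the kind of thing I mean (the group labels, rates, and sampling probabilities are all invented purely for illustration): a learner that does nothing more than count frequencies, when fed a sample that over-represents one group’s reoffenders, ends up attributing to that group a rate the population does not have.

```python
import random

random.seed(0)

# A made-up "population": two groups with an identical true reoffense rate of 30%.
def draw_person():
    group = random.choice(["A", "B"])
    reoffends = random.random() < 0.30          # same rate for both groups
    return group, reoffends

population = [draw_person() for _ in range(100_000)]

# A distorted sample: group B members who reoffended are over-represented,
# e.g. because they were recorded or policed more heavily.
def distorted_sample(pop, n):
    sample = []
    while len(sample) < n:
        group, reoffends = random.choice(pop)
        keep_prob = 0.9 if (group == "B" and reoffends) else 0.3
        if random.random() < keep_prob:
            sample.append((group, reoffends))
    return sample

sample = distorted_sample(population, 5_000)

# A "perfect" learner: just estimate P(reoffend | group) from the data it sees.
def rate(data, group):
    rows = [r for g, r in data if g == group]
    return sum(rows) / len(rows)

for g in ["A", "B"]:
    print(g, "population:", round(rate(population, g), 2),
             "learned from sample:", round(rate(sample, g), 2))
# The learned rate for group B comes out well above 30% even though the
# learning step itself is flawless; all of the bias came in with the sample.
```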

There are two different kinds of AI, only one of which is truly “intelligent” in my opinion. First is AI that only knows what it has been programmed to know. An example of this is Stockfish, the chess engine that until recently was the best chess-player on the planet. But everything Stockfish knows about chess is part of its original programming. Any given version of Stockfish is no better at playing chess now than it was when it was first released. Other examples include programs designed to predict the likelihood that parole candidates will reoffend, which have recently been under fire for being “racist” or discriminatory toward certain groups.

The other kind of AI—the kind that is truly “intelligent”—is a generalized learning algorithm, so-called “Deep Learning.” It starts out knowing nothing, but learns from experience. The best example of this is AlphaZero, the AI that taught itself to play chess well enough to consistently beat Stockfish. It was programmed with the rules of the game, but not with tactics or strategy. These it picked up on its own by playing millions of games against itself, moving randomly at first, then refining its tactics and strategy based on winning or losing. It did this all in just four hours. The same “Deep Learning” algorithm also learned to play Go the same way, starting with nothing more than the rules of the game, and was the first AI capable of consistently beating the best human Go player on the planet. And it did the same with another game, Shogi. Another example is the AI that taught itself to determine the sexual orientation of men and women based solely on photographs posted to dating websites—with 90% accuracy.

It’s easy to see how bias could make its way into the first, non-intelligent type of AI. If I, the person in charge of the algorithm, believe that bald parole candidates are more likely to reoffend than candidates with hair, then I’ll incorporate this bias into the prediction algorithm.
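In code, that kind of baked-in belief is about as blunt as this (a made-up illustration, not any real parole system; the numbers are arbitrary):

```python
# A caricature of a hand-written "risk score": the programmer's belief about
# baldness is simply hard-coded into the rule. No data ever gets a say.
def reoffense_risk(candidate):
    score = 0.2                       # base rate assumed by the programmer
    if candidate["bald"]:
        score += 0.4                  # the programmer's prejudice, verbatim
    if candidate["prior_offenses"] > 2:
        score += 0.3
    return min(score, 1.0)

print(reoffense_risk({"bald": True, "prior_offenses": 0}))   # 0.6
print(reoffense_risk({"bald": False, "prior_offenses": 0}))  # 0.2
```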

How the second type of AI—the truly intelligent type—could become biased isn’t quite so obvious. It all depends on the information presented to it during the learning phase. For example, I could cherry-pick from all the parole candidates who went on to reoffend and exclude all those with hair, while at the same time excluding bald candidates who didn’t reoffend. The AI would presumably associate bald parole candidates with a high likelihood of reoffending and hairy candidates with a low likelihood of reoffending.
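A toy version of that cherry-picking, with invented records and a learner that just counts frequencies, shows how completely the curated sample determines what gets “learned”:

```python
# Invented records: (bald, reoffended) pairs for the full candidate pool.
full_history = (
    [(True, True)] * 40 + [(True, False)] * 60 +     # bald: 40% reoffend
    [(False, True)] * 40 + [(False, False)] * 60      # with hair: also 40%
)

# The cherry-picked training set: keep only bald reoffenders and
# non-reoffenders with hair, exactly as described above.
training = [(b, r) for b, r in full_history if (b and r) or (not b and not r)]

def learned_rate(data, bald):
    rows = [r for b, r in data if b == bald]
    return sum(rows) / len(rows)

print("bald:", learned_rate(training, True))        # 1.0 -- "always reoffends"
print("with hair:", learned_rate(training, False))  # 0.0 -- "never reoffends"
# Both groups actually reoffend at the same 40% rate; the learner only ever
# saw the curated slice, so its "knowledge" is pure selection bias.
```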

One thing’s for sure: even if there is in fact a causal relationship between baldness (or skin color, etc.) and reoffending, and that relationship is reflected in the AI’s predictions, the AI will be accused of bias all the same. This could be true even if information about skin color, for example, is withheld from the AI during the learning phase. It might, for example, find that candidates raised in low-income, fatherless households are highly likely to reoffend. If there is also a correlation between skin color and low-income, fatherless households, the AI will appear to be biased against candidates with a certain skin color.
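The proxy effect is just as easy to reproduce. In this invented setup (all rates and variable names made up for illustration) the protected attribute is never shown to the learner at all, yet its predictions still split along it, because a correlated household variable stands in for it:

```python
import random

random.seed(1)

# Invented population: the protected attribute is correlated with growing up
# in a low-income, fatherless household, and only the household variable
# actually drives reoffending.
def person():
    protected = random.random() < 0.5
    low_income_fatherless = random.random() < (0.6 if protected else 0.2)
    reoffends = random.random() < (0.5 if low_income_fatherless else 0.1)
    return protected, low_income_fatherless, reoffends

people = [person() for _ in range(50_000)]

# The learner only ever sees the household feature, never the protected one.
def learned_rate(household):
    rows = [r for _, h, r in people if h == household]
    return sum(rows) / len(rows)

rate_by_household = {h: learned_rate(h) for h in (True, False)}

# Average predicted risk still differs by protected group, because the
# household feature acts as a proxy for it.
for flag in (True, False):
    risks = [rate_by_household[h] for prot, h, _ in people if prot == flag]
    print("protected =", flag,
          "avg predicted risk:", round(sum(risks) / len(risks), 2))
```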

What I’m saying is that all samples are biased to some degree. Bias can be reduced, but it cannot be eliminated. This is by no means intentional bias; it’s simply an inescapable result of sampling. Intentional bias can be reduced and perhaps eliminated by good sampling methods, but we can only be confident to within some probability that unintentional bias has been reduced sufficiently for the purpose at hand.

In theory, it is possible for a sample to be exactly representative of a population. In practice, we don’t have the luxury of complete knowledge of the population. Statistics uses rule-of-thumb probabilities to estimate the degree to which a sample is likely to differ from the population it is taken from. This is usually expressed as the standard error of the sample mean—the expected deviation of a sample mean from the population mean—and it serves as an estimate of the minimum discrepancy to expect in the sample. While it is true that the sample mean is the best estimate of the population mean if only the sample data are used, the built-in bias of the sample is not thereby eliminated. The standard error can be used to estimate the size of that discrepancy, but the bias itself remains unknown, since the standard error really refers to a probability distribution based on assumptions about the population and the sampling process.
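The behaviour of the sample mean is easy to check numerically. This little simulation (my own, with arbitrary numbers) draws many samples from a known population and compares the scatter of their means with the usual σ/√n rule of thumb; note that it measures random scatter, not any systematic bias in how the samples were drawn:

```python
import random
import statistics

random.seed(2)

# A known "population" so we can see how far sample means actually stray.
population = [random.gauss(100, 15) for _ in range(200_000)]
pop_mean = statistics.mean(population)
pop_sd = statistics.pstdev(population)

n = 50                      # sample size
sample_means = [
    statistics.mean(random.sample(population, n))
    for _ in range(2_000)
]

observed_se = statistics.pstdev(sample_means)   # scatter of the sample means
theoretical_se = pop_sd / n ** 0.5              # the sigma / sqrt(n) estimate

print("population mean:", round(pop_mean, 2))
print("observed standard error:", round(observed_se, 2))
print("theoretical sigma/sqrt(n):", round(theoretical_se, 2))
# The two standard errors agree closely, but they describe random scatter only.
# A biased sampling procedure would shift every sample mean the same way, and
# the standard error would never reveal it.
```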

It should of course be clear that I’m using bias as a synonym for error. Theoretically there is a difference, but in practice the difference can neither be detected nor estimated unless the sampling methodology (or the algorithms) is shown to be a source of systematic error. Deep learning is the best method we know of for producing unbiased selection methods, but it is not foolproof. There may be built-in biases in the methods of sample selection, or in the methods of attribute selection or measurement, that we don’t know about. Since we are talking about AI, and AI is a kind of engineering, Murphy’s Law applies. In Murphy’s Law terms, of course, “may be” translates into “certainly”.

I’m not making a political statement here, although much of the discussion around bias is political. I’m saying that there is a limit to how effective bias reduction can be. It is still worthwhile to examine how sampling choices are made, so as to eliminate the sources of bias that we do know about.

[ Edited: 18 September 2018 23:34 by Poldano]
 
 
icehorse
Total Posts:  7618
Joined  22-02-2014
 
 
 
22 September 2018 09:17
 

From nv’s OP:

How concerned should AI engineers be about this matter? How concerned—or not—are you?

On the list of reasons to worry about AI, I’d put this one fairly low. Not zero, but low.

We should be far more concerned about AI invading our privacy, manipulating our information, replacing jobs in an unsustainable way, and so on.

 
 
Jb8989
Total Posts:  6373
Joined  31-01-2012
 
 
 
10 October 2018 18:43
 

Meaning is an emotional topic. So since emotions objectively exist, we pick and choose which ones to value and we call it meaning. I doubt that’s mathematically codable outside of imitation. Adapting to environmental sensations seems foreign, but query whether we’re all just straight cyclical patterns. Is time just a flat circle? Because it probably would be for a robot.

 
 
EN
Total Posts:  21489
Joined  11-03-2007
 
 
 
10 October 2018 19:25
 
Jb8989 - 10 October 2018 06:43 PM

Meaning is an emotional topic. So since emotions objectively exist, we pick and choose which ones to value and we call it meaning. I doubt that’s mathematically codable outside of imitation. Adapting to environmental sensations seems foreign, but query whether we’re all just straight cyclical patterns. Is time just a flat circle? Because it probably would be for a robot.

We may never agree on what consciousness is, but it took us 4.5 billion years to get here naturally, so I doubt that AI is going to duplicate that soon.

 
Speakpigeon
Total Posts:  168
Joined  01-10-2017
 
 
 
12 January 2019 09:25
 
EN - 10 October 2018 07:25 PM
Jb8989 - 10 October 2018 06:43 PM

Meaning is an emotional topic. So since emotions objectively exist, we pick and choose which ones to value and we call it meaning. I doubt that’s mathematically codable outside of imitation. Adapting to environmental sensations seems foreign, but query whether we’re all just straight cyclical patterns. Is time just a flat circle? Because it probably would be for a robot.

We may never agree on what consciousness is, but it took us 4.5 billion years to get here naturally, so I doubt that AI is going to duplicate that soon.

A different way to look at it is to say that we don’t know how to define what a human being is today, and we are unlikely ever to find out. We may broadly understand the process by which we came to exist as we are, but we’re definitely not going to replicate that process. So, whatever kinds of AIs we get to conceive and produce in the future, we won’t be able to tell how close they are to us. All we will know of AI processes as seen from the inside will be the reports made by the AI itself. Yet even if an AI reports having subjective experience and reports experiencing qualia, we won’t be able to know whether such reports are true.
So, overall, pretty much exactly the situation between any two human beings.
EB

 

 
icehorse
Total Posts:  7618
Joined  22-02-2014
 
 
 
12 January 2019 10:07
 

Well, we have the Turing test to start with. It’s not perfect, but it’s a good start.

 
 
burt
Total Posts:  15809
Joined  17-12-2006
 
 
 
12 January 2019 14:21
 
Jb8989 - 10 October 2018 06:43 PM

Meaning is an emotional topic. So since emotions objectively exist, we pick and choose which ones to value and we call it meaning. I doubt that’s mathematically codable outside of imitation. Adapting to environmental sensations seems foreign, but query whether we’re all just straight cyclical patterns. Is time just a flat circle? Because it probably would be for a robot.

More complicated than that. We need to distinguish emotions from feelings (different people use these terms differently, often in opposite ways; I’ll continue with the way I use them, which follows Damasio’s definitions): emotions are the actual physiological responses of the body (hormone releases, muscle tensions, viscero-autonomic responses, etc.), while feelings are the mental experiences associated with an emotion. So emotions are tied to evolved, survival-related responses, but feelings are filtered through cultural structures that tell us how to interpret an emotional response in a cultural or social context. And a cultural situation can evoke a feeling that in turn brings out resonant emotions. You’re walking in the jungle alone when you see a tiger charging toward you: that is primarily an emotional response (flee or climb a tree), or you’re dead. In contrast, you are a Capulet and your sister is about to run off with a Montague: that’s a case of culturally based feelings evoking the physiological emotion. In general, this is how culture co-opts biological survival responses in the service of cultural ideas.

 
Speakpigeon
Total Posts:  168
Joined  01-10-2017
 
 
 
13 January 2019 05:02
 

I don’t see how the Turing test could surpass our own innate and much broader capability to appreciate what other human beings are, and how intelligent they are. The Turing test, applied to machines as intended, only provides an assessment of a machine’s intelligence, which is already much less than what humans can do when they assess each other, and then only on the basis of what the machine says. We can appreciate here the limitations of the method.
AIs might come to really impress us with their wits, but applying actual intelligence, a notion which is already a quagmire unto itself, to real-life situations will be a much taller order.
So, just off the top of my head, I would say the proof of the pudding will be in how AIs fare in real-life situations, so to speak. That will in effect dwarf the usefulness of the Turing test, which should be seen as useful at best as a proof of concept during the design of AIs, and then only in the earlier years of that design’s evolution. Unless, that is, we decide to limit AIs to doing the talking, as we might indeed want to do.
EB

[ Edited: 13 January 2019 05:04 by Speakpigeon]
 