Propaganda and Personality

 
Jb8989
Total Posts:  6373
Joined  31-01-2012
31 July 2019 17:47
 
EN - 30 July 2019 05:45 PM
Jb8989 - 30 July 2019 10:06 AM

I think that whether we like it or not, people are generally in a relationship with other people who understand them well. Good or bad. Some say that our data “understands” us better than most people. I think an increased relationship with it is inevitable, even way before AI.

Let’s say that AI understands us.  A torturer may also understand us with respect to our fears and what hurts.  Does that make a relationship?  When I go to a carnival, I know it’s all bullshit and I am getting ripped off by throwing darts at a balloon for a stuffed animal, but I do it anyway for fun.  But I don’t have a relationship with the carnie barker who drew me in.  I tolerate him so I can have a good time.  That’s not a relationship, even if he understands that he can get money from me by a certain presentation of his product. He’s the prostitute, not me.  So as long as I maintain my distance, even if I buy into his shit, I haven’t totally lost my autonomy and dignity.  You have to maintain your limits with AI, and realize it’s a carnie barker,  not a real person.

I’d say that you’d be in a torturer/torturee relationship with the torturer, and in a transactional relationship with the carnie. Your phone will probably eventually have better manipulation skills than both.

 
 
Cheshire Cat
Total Posts:  1309
Joined  01-11-2014
31 July 2019 20:55
 
burt - 31 July 2019 08:13 AM

Siri: Right, to exist is to be a number.

I am not a number. I am a free man!

Don’t forget this, gentlemen.

https://www.youtube.com/watch?v=nW-bFGzNMXw

 
 
TheAnal_lyticPhilosopher
Total Posts:  916
Joined  13-02-2017
01 August 2019 03:17
 
burt - 29 July 2019 03:50 PM
TheAnal_lyticPhilosopher - 29 July 2019 10:52 AM

To burt’s point, in East Germany under the Communists there was one member of the STASI for every 216 citizens.  Each STASI field operative had enough informants for there to be one informant for every six citizens.  That’s 16% of citizens actively repressing their neighbors for the benefit of the just state, and the question is, as I see it: how many inactive ones nevertheless embraced the Communist utopian vision?  Two more?  Three?  Did half the people go along with this “social justice”—for that’s exactly how they saw it: social justice in the name of the oppressed proletariat?  Or did more?  That state propaganda was essential to holding this activist body together is indisputable, but what I find more fascinating than the technical method of the message (i.e. digital versus television versus radio) is the fact that the message finds so many willing believers.  Terror is rightly seen as the necessary mechanism for the totalitarian utopias that have wreaked havoc on the 20th century, but what’s often overlooked is that this necessary mechanism has sufficiency conditions in consent.  Someone has to believe in order to enforce, and the rather horrifying implication is that so many do believe.  So, to the OP’s point about homogeneity as well, is there something about being homogeneous that compels people to believe?  Is there something about being one-with-all that misfires in these utopian visions that end up being a human rights horror show, all in the name of the oppressed?  What is it about fighting for the oppressed that mobilizes so many believers to do almost anything to support the totalitarian state as the sole means of rectification?

As a related question bearing more directly on the question raised in the OP, does the spread of the information nervous system in the digital age increase the danger of this misfiring?  Are we in especial danger now?

I’m not sure that “believe” is the right word, perhaps more like “adapt” or “go along with,” unless asked for some rationalizing explanation, at which point the standard party line comes out. One thing I got a great laugh out of was a Hungarian cultural exhibit put on in Vancouver around 1992. There were displays of architecture, art, and so on, and as I saw it, almost all were intensely individualistic expressions. But the labels said things like “This represents the unity of the proletariat in the struggle against oppression.” Daisies growing through concrete.

I think both apply.  Surely the 1 in 6—or something very close to that—believe, but what proportion of the rest do, versus—as you say—“adapt” or “go along with”?  I would guess—or at least hope—the majority adapt or go along with, not believe.  But these states hold power somehow, and as I see it, power in any form requires consent at some level, be it among those who wield it as a body, or among those subject to it who believe.  “Adapt” and “go along” wouldn’t be consent, per se, though God help them when the revolution comes, for it will be taken as such.  And, to that point, at what point does “go along with” become so adaptive that it is in fact indistinguishable from belief, even for oneself?

 
TheAnal_lyticPhilosopher
Total Posts:  916
Joined  13-02-2017
01 August 2019 03:19
 
Jb8989 - 30 July 2019 10:13 AM
TheAnal_lyticPhilosopher - 29 July 2019 10:52 AM

As a related question bearing more directly on the question raised in the OP, does the spread of the information nervous system in the digital age increase the danger of this misfiring?  Are we in especial danger now?

Misfiring, sure, but I’m also talking about accuracy to a fault. The only way to predict behavior is to reflect on past behavior. Imagine if code didn’t just know past behaviors but also current emotional states. Planting a highly predictable advertisement in the moment would resemble a thought under those circumstances, no?

I’m not sure what you mean by “accuracy to a fault.”  If you mean using data trends to predict dispositions and behaviors, then planting nudges based on those predictions, that is a possibility in principle, and something like it might be possible with online tracking, but in practice—as far as I know—we are light years from that.  As an analogy I think of how difficult it is to truly persuade someone you know extremely well into something they are not already inclined to do.  This seems far more difficult to me online.

But, that caveat said, yes, if you guess well enough you can say something that will, to the listener, seem like their own thought.  This is in fact one of the main skills useful in clinical therapy: knowing where the person is coming from so that they can be guided to “their own thoughts” by the clinician framing a suggestion or offering an interpretation.  If done well enough, it will appear to the client as their own thought, and its therapeutic power will stem from this, as they will feel they came to the realization on their own.

But again, this takes much experience and intimacy, of a kind.  From the scientific work I’ve read about big data and online behavior, we are a long way from all but the grossest forms of persuasion, much less control.  In principle, though, both seem more likely in the Internet medium than in any other—by far.  If this is what you mean, then I agree.

 

 
burt
Total Posts:  15835
Joined  17-12-2006
01 August 2019 07:22
 
TheAnal_lyticPhilosopher - 01 August 2019 03:17 AM

I think both apply.  Surely the 1 in 6—or something very close to that—believe, but what proportion of the rest do, versus—as you say—“adapt” or “go along with”?  I would guess—or at least hope—the majority adapt or go along with, not believe.  But these states hold power somehow, and as I see it, power in any form requires consent at some level, be it among those who wield it as a body, or among those subject to it who believe.  “Adapt” and “go along” wouldn’t be consent, per se, though God help them when the revolution comes, for it will be taken as such.  And, to that point, at what point does “go along with” become so adaptive that it is in fact indistinguishable from belief, even for oneself?

Good points. I recall a comment made by the teacher in an international relations class I took in high school: All governments exist with the consent of the governed. The ways this consent is gained vary.

A disturbing question for me is how many in the US or other democratic countries are just going along with things without real thought or commitment.

 
Jefe
Total Posts:  7105
Joined  15-02-2007
01 August 2019 09:30
 
burt - 01 August 2019 07:22 AM

A disturbing question for me is how many in the US or other democratic countries are just going along with things without real thought or commitment.

... or feel they can’t (individually) make a difference.

 
 
Jb8989
Total Posts:  6373
Joined  31-01-2012
01 August 2019 10:32
 
TheAnal_lyticPhilosopher - 01 August 2019 03:19 AM

I’m not sure what you mean by “accuracy to a fault.”  If you mean using data trends to predict dispositions and behaviors, then planting nudges based on those predictions, that is a possibility in principle, and something like it might be possible with online tracking, but in practice—as far as I know—we are light years from that.  As an analogy I think of how difficult it is to truly persuade someone you know extremely well into something they are not already inclined to do.  This seems far more difficult to me online.

People harbor so many biases because almost immediately on receipt of contradictory information - regardless of whether it’s good or bad information - they’re already conjuring internal counterpoints. Phones don’t really have to compete with internal or external counterpoints, and because we’re comfortable relying on how much more accurate they are than real people, we’re also increasingly accustomed to dissenting views going away. So bias reinforcement can be cherry-picked just as easily as solid vetting of source material. Combined, I think it gives the internet a competitive manipulation advantage over traditional talk if outwardly directed.

[ Edited: 01 August 2019 10:37 by Jb8989]
 
 
TheAnal_lyticPhilosopher
Total Posts:  916
Joined  13-02-2017
01 August 2019 14:57
 
Jb8989 - 01 August 2019 10:32 AM

People harbor so many biases because almost immediately on receipt of contradictory information - regardless of whether it’s good or bad information - they’re already conjuring internal counterpoints. Phones don’t really have to compete with internal or external counterpoints, and because we’re comfortable relying on how much more accurate they are than real people, we’re also increasingly accustomed to dissenting views going away. So bias reinforcement can be cherry-picked just as easily as solid vetting of source material. Combined, I think it gives the internet a competitive manipulation advantage over traditional talk if outwardly directed.

What do you mean phones are more accurate than real people?  That the information available through them is more reliable? 

I would say yes to that but for the search skills required to sort good from bad information online—an increasingly difficult task if one consults media sources on hot political topics, for instance.  But overall I take your point about cherry-picking and bias reinforcement.  That’s more of an option now that information consumption is so much more self-selecting and so much less vetted by editors on a rather limited menu of outlets (as in the days of TV and radio, pre-Internet).  In any case, I do suspect, without being able to prove it with a good causal explanation, that in the US we are now more subject to wayward ideological movements gaining momentum.  Ideological tendencies, that is, not based in fact but instead in recreational outrage, or in hackneyed views (flat-earthers and anti-vaxxers, for instance).  My own limited foray into politically charged topics suggests there is a wide gulf between these tendencies and the realities on the ground.  This goes, I think, to your idea of a competitive manipulation advantage over traditional talk.  That does seem to be the case.

 

 
Jb8989
Total Posts:  6373
Joined  31-01-2012
01 August 2019 17:44
 
TheAnal_lyticPhilosopher - 01 August 2019 02:57 PM

What do you mean phones are more accurate than real people?  That the information available through them is more reliable? 

I would say yes to that but for the search skills required to sort good from bad information online—an increasingly difficult task if one consults media sources on hot political topics, for instance.  But overall I take your point about cherry-picking and bias reinforcement.  That’s more of an option now that information consumption is so much more self-selecting and so much less vetted by editors on a rather limited menu of outlets (as in the days of TV and radio, pre-Internet).  In any case, I do suspect, without being able to prove it with a good causal explanation, that in the US we are now more subject to wayward ideological movements gaining momentum.  Ideological tendencies, that is, not based in fact but instead in recreational outrage, or in hackneyed views (flat-earthers and anti-vaxxers, for instance).  My own limited foray into politically charged topics suggests there is a wide gulf between these tendencies and the realities on the ground.  This goes, I think, to your idea of a competitive manipulation advantage over traditional talk.  That does seem to be the case.

It’s more reliable, but at a certain point our searches also started being met with algorithms consulting our data before retrieving our results. So in some sense it’s also more comforting and validating. I wonder whether politics is dominated by fear, but tech will have a much more elaborate emotional understanding of our motivations. There are some lawyers who think that if there ever was a time to take back the Fourth Amendment, it’s before emotional recognition software reaches our phones. Privacy may never be the same after that.

[ Edited: 01 August 2019 17:48 by Jb8989]
 
 
TheAnal_lyticPhilosopher
Total Posts:  916
Joined  13-02-2017
02 August 2019 16:14
 
Jb8989 - 01 August 2019 05:44 PM

It’s more reliable, but at a certain point our searches also started being met with algorithms consulting our data before retrieving our results. So in some sense it’s also more comforting and validating. I wonder whether politics is dominated by fear, but tech will have a much more elaborate emotional understanding of our motivations. There are some lawyers who think that if there ever was a time to take back the Fourth Amendment, it’s before emotional recognition software reaches our phones. Privacy may never be the same after that.

Yes, emotional recognition software on a smart phone sounds disconcerting, but if it’s no better than regular human judgment, we may not have too much to worry about.  In any case, it sounds like neither one of us is cheering the effect of smart phones, instant self-selecting information, and the dearth of real conversation between people.  Maybe I’m adding that last point, but it’s becoming a thing among the rising generation, and they are far worse off for it.

 
Jb8989
Total Posts:  6373
Joined  31-01-2012
03 August 2019 17:29
 

One wonders whether recall is forgetful enough not to recognize that it’s increasingly outdated.

 
 