
Extrasensory perception?

 
burt
Total Posts:  15955
Joined  17-12-2006
 
 
 
03 January 2019 10:04
 
Speakpigeon - 01 January 2019 10:18 AM

Again, nearly all humans agree with Aristotle’s syllogisms because we nearly all have the same sense of logic.

Actually, this is not true. We must learn syllogistic thinking. The Russian psychologist Alexander Luria carried out studies of reasoning on illiterate villagers in Uzbekistan in the 1930s and found that in fact they did not agree with syllogistic statements. Their thought was narrative. For example, he would present statements like:
There are no camels in Germany.
Munich is a city in Germany.
Then he would ask, Are there any camels in Munich? The response he got was along the lines of “I don’t know, I’ve never been to Munich, but if it’s a big city perhaps there are camels.”
Or:
In the north it snows all the time.
Where it snows all the time, bears are white.
The response when asked what color bears were in the north was like “I don’t know, I’ve never been there.” 
Luria’s conclusion was that these people did not perceive syllogisms as “a series of propositions that comprised a unified logical structure.” Rather, “They [were] perceived as a series of isolated, concrete, and logically unrelated judgements that yield no particular inference and are thus not a means of deduction”
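
For concreteness, here is a minimal sketch of the purely formal reading of the camel example (in Python; the formalisation is mine, not Luria’s): once the premises are treated as a closed system, the conclusion follows mechanically, with no appeal to any experience of Munich, which is exactly the move the villagers declined to make.

```python
# Toy illustration (not from Luria): the camel syllogism read as a closed
# formal system. The premises alone settle the question; no experience of
# Munich is consulted.
premises = {
    "no_camels_in_germany": True,  # "There are no camels in Germany."
    "munich_is_in_germany": True,  # "Munich is a city in Germany."
}

def camels_possible_in_munich(premises):
    """Deduce from the premises alone whether there can be camels in Munich."""
    if premises["munich_is_in_germany"] and premises["no_camels_in_germany"]:
        return False  # Munich lies in Germany, and Germany has no camels.
    return None       # The premises do not settle the question.

print(camels_possible_in_munich(premises))  # False: no camels in Munich.
```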

Likewise, he found that these people didn’t use what we would call logical categories. Rather, their categorization was practical. When shown a group of objects falling into two different logical categories, subjects ignored the logical groupings and constructed narrative situations in which all of the objects filled functional purposes. Luria remarks that “these people were inclined to use concrete thinking to reconstruct situations that could become a basis for unifying discrete objects.”  Shown an ax, a hammer, and a saw (all “tools”), together with a log (not a “tool”), stories of constructive activity would be told.  When asked about the logical categories, the response was that these were irrelevant, foolish, and non-functional.  When told, for example, that axes, hammers, and saws formed a group because they were all tools, a typical reply would be that just these alone made no sense because without something like a log to work on, they served no useful purpose. Later research shows that Luria’s observations are true of primitive peoples in general (e.g., C.R. Hallpike, The Foundations of Primitive Thought is a great reference on this). So if you’re looking for a “logical sense” it just isn’t there. What is there is a feeling (or sense) for what “fits” into our learned ways of thinking. If we’ve learned formal logic, that can mimic a sense for logic.

[ Edited: 03 January 2019 10:27 by burt]
 
burt
Total Posts:  15955
Joined  17-12-2006
 
 
 
03 January 2019 10:26
 
Speakpigeon - 01 January 2019 10:18 AM

Or maybe you could suggest a mechanism by which humans could have invented formal logic without having a sense of logic to begin with.
EB

I’ve studied the history of how Aristotle developed his logic. It was a long process, beginning with Parmenides. The Greek revolution in thinking arose with the idea that it was possible to understand the world with reason, without referring to supernatural entities. But the issue became, how does one evaluate a reasoned argument? The early pre-Socratics would usually say something along the lines that their arguments were correct because “justice” required it (the argument form was that of the law courts). Aristotle’s three laws are implicit in Parmenides but they were not recognized as such and many spurious forms of reasoning were used rhetorically (particularly by some of the Sophists). Plato parodies this in Euthydemus (298d–e):
“...If you would answer me, said Dionysodorus, you will admit these things yourself, Ctesippus.  Tell me, have you a dog?
“Yes, a villain of one, said Ctesippus.
“Has he got puppies?
“Very much so, he said, as bad as he is.
“Then the dog is their father?
“I have seen him myself, he said, on the job with the bitch.
“Very well, isn’t the dog yours?
“Certainly, said Ctesippus.
“Then being a father he is yours, so the dog is your father and you are the puppies’ brother.”
The question of how to categorize things and argue categorically was one of the main topics in Plato’s Academy. Aristotle resolved the problem by developing his formal logic. It’s not clear how Aristotle did this, other than through careful observation and analysis (we can’t ask him and he didn’t say). But for Aristotle his logic applied to the world (today it’s seen as applying to the formal use of statements in language), so he may have gotten it in part through spatial reasoning as with Venn diagrams (A = A, a thing occupies its own position, goes with the Arrow paradox; A ≠ B, nothing is outside of its own position; either A or B, nothing can be in two places at once).
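
To make the spatial, Venn-style reading concrete, here is a hypothetical sketch (in Python; my illustration, not Aristotle’s or the historical derivation) of the classic Barbara syllogism read as set containment, with each term treated as a region and “all X are Y” as “X lies inside Y”:

```python
# Hypothetical sketch of Barbara ("All M are P; all S are M; therefore all
# S are P") read spatially: each term is a region (here, a set), and
# "all X are Y" means the region X lies inside the region Y.
mortals   = {"Socrates", "Plato", "a horse"}  # P: the outer region
humans    = {"Socrates", "Plato"}             # M: contained in P
athenians = {"Socrates"}                      # S: contained in M

all_m_are_p = humans <= mortals      # premise 1: M inside P
all_s_are_m = athenians <= humans    # premise 2: S inside M
all_s_are_p = athenians <= mortals   # conclusion: S inside P

print(all_m_are_p and all_s_are_m)   # True: both premises hold in this model
print(all_s_are_p)                   # True: and the conclusion holds with them
```

In this reading the conclusion is just the observation that a region inside a region inside a third region is itself inside the third, which is about as close to a spatial intuition as a syllogism gets.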

 
nonverbal
Total Posts:  1891
Joined  31-10-2015
 
 
 
03 January 2019 11:20
 
Speakpigeon - 03 January 2019 02:21 AM
nonverbal - 02 January 2019 01:41 PM
Speakpigeon - 02 January 2019 12:56 PM

Is my English skills so poor I couldn’t articulate the simple idea that recognising our sense of logic as a sense of perception also avoid the embarrassment of having to rely on extrasensory perception to support our reasoning?
Beats me.
EB

Is they so poor? You tell me. And please see the title of this thread.

Why is it you can’t articulate your point? Am I supposed to read your mind? You think extrasensory perception works?
So, please, see indeed the title of this thread: “Extrasensory perception?”
See? See the exclamation mark at the end of it? It signals it’s a question, and therefore not an assertion.
Second, even without an exclamation mark, it would still not be an assertion of the existence or my belief in the existence of extrasensory perception. It would merely signal what the subject of the thread would be.
Further, my OP makes clear the idea of extrasensory perception wouldn’t be my first rational choice.
EB

It looks like a question mark to me, but of course my close-up vision is not at all what it used to be!?

 
 
GAD
Total Posts:  17892
Joined  15-02-2008
 
 
 
03 January 2019 20:04
 
burt - 03 January 2019 10:04 AM
[full post quoted directly above; cut to provide space]

I remember reading this and other similar stuff and it is again in line with how I think about thinking.

 
 
Antisocialdarwinist
Total Posts:  6851
Joined  08-12-2006
 
 
 
03 January 2019 20:39
 
GAD - 03 January 2019 08:04 PM
[nested quotes cut to provide space]

I remember reading this and other similar stuff and it is again in line with how I think about thinking.

This makes it hard to argue that all human beings come from the factory, so to speak, with an innate sense of logic. But once it’s learned, isn’t it possible that it becomes intuitive? In the same way that “6x6=36” becomes intuitive? And by intuitive, I mean that it “feels” true without having to consciously multiply six times six. Once we know, through experience, that the ground gets wet when it rains, we don’t have to consciously go through the chain of logic. If it’s raining, we intuit the ground is wet.

 
 
GAD
Total Posts:  17892
Joined  15-02-2008
 
 
 
03 January 2019 20:59
 
Antisocialdarwinist - 03 January 2019 08:39 PM
[nested quotes cut to provide space]

I remember reading this and other similar stuff and it is again in line with how I think about thinking.

This makes it hard to argue that all human beings come from the factory, so to speak, with an innate sense of logic. But once it’s learned, isn’t it possible that it becomes intuitive? In the same way that “6x6=36” becomes intuitive? And by intuitive, I mean that it “feels” true without having to consciously multiply six times six. Once we know, through experience, that the ground gets wet when it rains, we don’t have to consciously go through the chain of logic. If it’s raining, we intuit the ground is wet.

Yeah. But new stuff and/or complex stuff, especially outside of experience, isn’t so intuitive and requires experience and a lot of work to make it intuitive.

 
 
Speakpigeon
Total Posts:  168
Joined  01-10-2017
 
 
 
05 January 2019 08:27
 
burt - 03 January 2019 10:04 AM
Speakpigeon - 01 January 2019 10:18 AM

Again, nearly all humans agree with Aristotle’s syllogisms because we nearly all have the same sense of logic.


Actually, this is not true.

Actually, you don’t know that.
All you have is some evidence that suggests to you that it’s not true.
I considered this evidence, as you reported it, and as reported elsewhere, and it seems to me that all it suggests is that formal logic has to be learnt. Now, in previous replies to other posters, I made clear I was talking about our logical intuitions, not about formal logic. I agree formal logic has to be learnt.
More precisely, my view about formal logic is that what we have to learn is what the formalism used means, somewhat like we need to learn the grammar and the vocabulary to understand a foreign language, or somewhat like we need to learn musical notation before we can read a score, and this definitely doesn’t imply our brain doesn’t have any linguistic capability or any musical capability before any kind of formal training.
Also, more recent than the study you are referring to, scientific tests seem to have shown that animals and toddlers all have logical capabilities. Crucially, these studies tested logical behaviour outside any formal logic. One test for example is for a small child to understand that by using a word unknown to him, you’re asking him to identify the odd one out of a set of known and previously labelled objects, which suggests a capability to infer between a set of words and a set of objects. I also think that this demonstrates the intuitive ability of the toddler to conceive by himself of what is logically possible, something which seems crucial to me if we are to understand the world around us.
I also tested on myself that I have clear intuitions about the truth of logical formulas that are too complex for me to explain how my brain does it, which seems to preclude any influence of my training in formal logic, which is at any rate minimal.
I have also tested a particular logical formula on a group of people. As I expected, they were all adamant that the formula was obviously false, even though I had introduced the formula as a proven logical truth. None of them were able to explain coherently why the formula would be false, unsurprisingly since it’s a rather difficult one. I myself was able to test on myself that my brain already knew the answer about this formula even before I could possibly have formally analysed it. I was reading a presentation of it, barely read the formula, did not try to understand it since it was too complicated, continued to read what the author was saying about it, somehow misread what he was saying as claiming that the formula was true, something I immediately decided was definitely very wrong, a decision which on the face of it could only possibly be 100% intuitive. I realised only later that the author was in fact saying that the formula was indeed false. And I still today can’t explain myself why it’s false. It’s just bloody obvious it is, and all the people I tested with it have been very clear they were absolutely certain it was false even though I could keep up the charade of insisting it was a logical truth given that none of them could explain why it was false.
And there’s also the case of material implication. One author I read admitted many students at university level had a hard time trying to make sense of it as a proper logical operation even though it’s usually presented without qualification. You only learn later about the so-called paradoxes of material implication. I myself immediately decided it was wrong on being presented with the truth-table definition of it and it was the very first time I was being exposed to a formal training in logic. And I haven’t changed my mind since. I merely discovered much later that, like me, many people vehemently disagreed that it should be regarded as a proper definition of the implication as we usually think of it. So, here, people effectively stand their ground, clearly on the basis of their intuition, even in the face of the formal training they have been subjected to.
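
For readers who haven’t met these “paradoxes”, they fall straight out of the textbook truth-table definition: a material implication with a false antecedent counts as true whatever the consequent. A minimal sketch (in Python, purely illustrative; the table itself is the standard definition, not anything specific to this thread):

```python
# Standard truth table for material implication: "p -> q" is defined as
# "(not p) or q". The rows with a false antecedent are the ones people
# usually find counter-intuitive (the "paradoxes of material implication").
from itertools import product

def implies(p, q):
    return (not p) or q

for p, q in product([True, False], repeat=2):
    note = "  <- true only because the antecedent is false" if not p else ""
    print(f"p={p!s:5} q={q!s:5}  p->q={implies(p, q)!s:5}{note}")
```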

burt - 03 January 2019 10:04 AM

[Luria examples quoted in full above; cut to provide space]

Likewise, he found that these people didn’t use what we would call logical categories. Rather, their categorization was practical. When shown a group of objects falling into two different logical categories, subjects ignored the logical groupings and construct narrative situations in which all of the objects filled functional purposes. Luria remarks that “these people were inclined to use concrete thinking to reconstruct situations that could become a basis for unifying discrete objects.”  Shown an ax, a hammer, and a saw (all “tools”), together with a log (not a “tool”), stories of constructive activity would be told.  When asked about the logical categories, the response was that these were irrelevant, foolish, and non-functional.  When told, for example, that axes, hammers, and saws formed a group because they were all tools, a typical reply would be that just these alone made no sense because without something like a log to work on, they served no useful purpose. Later research shows that Luria’s observations are true of primitive peoples in general (e.g., C.R. Hallpike, The Foundations of Primitive Thought is a great reference on this). So if you’re looking for a “logical sense” it just isn’t there. What is there is a feeling (or sense) for what “fits” into our learned ways of thinking. If we’ve learned formal logic, that can mimic a sense for logic.

This can be convincingly interpreted as showing not that human beings don’t have an intuitive sense of logic but that the extent to which they come to use it is largely affected by cultural factors, something which is obvious and was already obvious at the time of Aristotle.
It is also clear that we are all prepared to disregard any logical evidence we are presented with the moment that there is an emotional motivation to do so. This is particularly in evidence in political debates but also in all situations where we have to argue a point, including with friends and with family, and certainly in forums like this one. Emotion is stronger than logic. Big news?
There is also a parallel to make with language, for example. Different people obviously get to learn languages that are very different from each other and this is clearly because learning a language is affected by prevalent local cultural factors. Yet, this doesn’t imply that our brain doesn’t have a linguistic capability from the start. This is in fact true of all our senses. Different cultures put a different emphasis on specific senses: sight, hearing, smell, touch, etc.
There is also a parallel about the prevalence of emotion over our sensory perceptions. People can be moved to expose themselves to clear and immediate danger under the influence of their emotions. In effect, their emotional state somehow leads them to disregard at least some of what their sensory perceptions tell them.
The fact that we can also learn new methods that then become intuitive, like imagining the music on reading the score, shows that our brain has the capability of transforming the repetition of a process into an intuitive understanding of this process. This can be understood as essentially a process of logical integration. I don’t see why this logical capability wouldn’t be put to some use even outside any formal training in logic.
I think we would need to see more scientific research on logic as a pre-formal-training capability to get more evidence either way. For now, I’m confident we have an intuitive logical sense that’s an inherent capability of our brain and neurological system generally, and one I would expect of any sort of complex, naturally occurring cognitive system.
EB

 

[ Edited: 05 January 2019 08:39 by Speakpigeon]
 
burt
Total Posts:  15955
Joined  17-12-2006
 
 
 
05 January 2019 11:08
 
Speakpigeon - 05 January 2019 08:27 AM
burt - 03 January 2019 10:04 AM
Speakpigeon - 01 January 2019 10:18 AM

[cut to provide space]

Later research shows that Luria’s observations are true of primitive peoples in general (e.g., C.R. Hallpike, The Foundations of Primitive Thought is a great reference on this). So if you’re looking for a “logical sense” it just isn’t there. What is there is a feeling (or sense) for what “fits” into our learned ways of thinking. If we’ve learned formal logic, that can mimic a sense for logic.

[reply quoted in full above; cut to provide space]


I’m familiar with the research you reference, but it would help if you provided precise references. I think that my last sentence is the relevant point: “What is there is a feeling (or sense) for what ‘fits’ into our learned ways of thinking. If we’ve learned formal logic, that can mimic a sense for logic.” Experiments on oneself tend to be biased because the subject is already conditioned into a particular way of thinking.

But something that applies is results from the Wason selection experiments. These were originally conducted 50+ years ago but they are strongly repeatable. They provide partial support both for what I’ve said and for my interpretation of your response. In this task volunteers are presented with four cards on a table. They see something like A 8 K 3 and are told that there is a rule that if a card has a vowel on one side it must have an odd number on the other side. Then they are asked which cards need to be turned over to see if this rule is being followed. A surprisingly large percentage of people, around 90% if I recall, get it wrong (excepting those who have had training in logic, and even some of those miss). On the other hand, given a logically equivalent task where a person is told that they are a bartender and that nobody under age 18 is allowed an alcoholic drink, then shown four order tabs with age on one side and drink order on the other (for example, 24, gin and tonic, 16, coke), most people get it right. The assumed reason is that people find the second task familiar from their experience and so are able to rely on their understanding of what’s involved. Now whether or not one wants to call that an innate sense of logic is open to question. As I see it, I would not say that. Rather, it seems to me that the innate sense is one of whether or not something fits into patterns of expectation that have been learned. To the extent that the external world presents us with patterns that follow logical rules (e.g., no two bodies occupying the same space at the same time) we develop patterns of expectations that mimic logical rules.

Language learning is easier than learning logic (in a formal sense) because we’re specifically tuned for that: infants have a particular sensitivity to sounds that fall within the normal human voice range (formant frequencies: http://person2.sol.lu.se/SidneyWood/praate/whatform.html) and that, coupled with the way the brain orients to novelty and salient stimuli, and with the neural pruning process, makes it easy to learn a language (relatively speaking; it still takes several years). With things like formal logic we don’t have that, and learning logic can actually be made harder by learned non-logical expectations (such as the assumption of volition in inanimate objects) just as it can be assisted by learned logical expectations.

Michael Shermer wrote that he believes magical thinking is a spandrel that comes along because evolution favored development of the ability for logical thought. My response is that it’s more likely that the ability for logical thought is the spandrel, since magical thinking is fine for survival in a primitive environment, it’s easy to learn, and was (and still is) characteristic of most human thinking.

Here’s an anecdotal example: shortly after getting out of grad school with a degree in physics I was working to prove a theorem that arose from some of my research. For several days I focused on this exclusively and after about a week, late in the evening, got the proof.
The next day I took a break and just walked about enjoying the environment—except that I began seeing the patterns of relations that were involved in my proof showing up in connections between various elements of the environment. I don’t attribute this to any innate intuition or sense for mathematics, rather to an intensely learned set of patterns acting to organize perceptions. Given that the brain and nervous system have evolved to survive in the terrestrial environment (in particular for brains, with Hebbian reinforcement and synaptic pruning), and given the way that complex neural nets function on the basis of pattern recognition, this seems to me to only be expected—that what you are referring to as a sense of logic is more aptly characterized as a sense of “fit.” Things fit into expected (i.e., learned) patterns or they do not, and when they do not, this elicits concern (because, in a primitive environment, lack of fit might get one eaten). There is a bit of support for this in that the proto-Indo-European word “ar” is the root for the English words “reason,” “harmony,” “art,” and other similar words (as well as equivalent words in other Indo-European languages) and translates as “to fit together.” Following this line of reasoning, I’d claim that if humanity had evolved in a world where the standard experiences of an infant growing into adulthood were quantum in nature, then our apparent “sense” would be quantum logic rather than ordinary categorical reasoning. But that experiment isn’t likely to be made anytime soon.
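
As an aside on the abstract version of the selection task described above, here is a small sketch (in Python; mine, not taken from Wason’s papers) that brute-forces which of the four visible faces can actually falsify the rule “if a card has a vowel on one side it must have an odd number on the other side”; only those cards need turning.

```python
# Toy sketch of the abstract Wason selection task as stated above. A card
# needs turning only if some possible hidden face could make it violate
# the rule "vowel on one side -> odd number on the other".
VOWELS = set("AEIOU")

def is_vowel(face):
    return face in VOWELS

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

def violates(letter, number):
    # The rule is broken only by a vowel paired with a number that is not odd.
    return is_vowel(letter) and not is_odd_number(number)

def must_turn(visible):
    """A visible face needs checking if some hidden face could violate the rule."""
    letters = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
    numbers = [str(n) for n in range(10)]
    if visible.isdigit():            # hidden side must be a letter
        return any(violates(l, visible) for l in letters)
    else:                            # hidden side must be a number
        return any(violates(visible, n) for n in numbers)

print([card for card in ["A", "8", "K", "3"] if must_turn(card)])  # ['A', '8']
```

Running it prints ['A', '8']: the two cards whose hidden side could break the rule, and so the only two worth turning over.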

 
TheAnal_lyticPhilosopher
Total Posts:  1006
Joined  13-02-2017
 
 
 
07 January 2019 10:31
 

I am fairly sure the needle of human knowledge has never been moved one iota through the use of an Aristotelian demonstrative syllogism.  In fact, as the paradigm of knowledge it arguably retarded discovery (i.e. the generation of reliable knowledge) for almost two thousand years.  It would be peculiar indeed if it were somehow innate to our dispositions to know and inquire, i.e. if there was a “sense of logic as a sense of perception” that acts as our access to the world.  Since no useful inquiry or discovery has ever deployed it—not in science, which developed late, nor in industry, which preceded it—what evidence beyond worship of it now is there that it somehow structures our very manner of perceiving and thinking?  Unless, of course, one wants to argue our perceiving and thinking innately disposes us to ignorance and to discovering nothing new….

It seems to me the only useful application of Aristotelian logic is its formalization by Boole, which as I understand it underlies the theory of computation and all computer programming.  For computation (which recreates one operation of intelligence) and for the manipulation of information presumed known, the logic has proven powerful—one of the most useful tools we have.  But as a tool for inquiry, discovery, and the generation of reliable knowledge, it is probably an optimized guarantee of ignorance.  If it’s not, it’s hard to explain how its deployment as the standard of knowing for 2000 years resulted in so much of it.

I think the OP raises an important point, but framing it in terms of an innate sense of logic as a sense of perception almost certainly misstates that point.  Rather the far more interesting question seems to be: how does Aristotelian logic emerge out of our linguistic practices and perceptual life, not how our very perceiving, speaking and thinking is structured a priori by it.

[ Edited: 07 January 2019 10:41 by TheAnal_lyticPhilosopher]
 
burt
Total Posts:  15955
Joined  17-12-2006
 
 
 
07 January 2019 12:39
 
TheAnal_lyticPhilosopher - 07 January 2019 10:31 AM

[post quoted in full above; cut to provide space]

A bit tough on Aristotle, Anal. In my understanding the real value of his logic (plus the somewhat later developed Stoic logic) was that it provided the ground rules for philosophical discourse. It doesn’t give something new but it does eliminate much bs. The other thing is that historically the ancient world wasn’t all that hung up on Aristotle, he was one of a number of philosophers and, in Kuhnian terms, science was in a pre-paradigmatic state, and in that state there is never much progress, just different approaches and theories without much way of judging between them. It wasn’t until Aristotle became “The Philosopher” in the Medieval period that his logic and his overall system took hold and, in my understanding, this could well have been a necessary preliminary for the later Scientific Revolution. But right on with doubts about a “logical sense.”

 
Speakpigeon
Total Posts:  168
Joined  01-10-2017
 
 
 
08 January 2019 12:48
 
TheAnal_lyticPhilosopher - 07 January 2019 10:31 AM

I am fairly sure the needle of human knowledge has never been moved one iota through the use of an Aristotelian demonstrative syllogism.

Derail.
The OP’s notion of sense of logic isn’t about Aristotle’s syllogistic logic or any formal logic.

TheAnal_lyticPhilosopher - 07 January 2019 10:31 AM

In fact, as the paradigm of knowledge it arguably retarded discovery (i.e. the generation of reliable knowledge) for almost two thousand years.

I doubt that very much but it’s still a derail anyway.
And I have good reasons to doubt:

The Elements is still considered a masterpiece in the application of logic to mathematics. In historical context, it has proven enormously influential in many areas of science. Scientists Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, and Sir Isaac Newton were all influenced by the Elements, and applied their knowledge of it to their work. Mathematicians and philosophers, such as Thomas Hobbes, Baruch Spinoza, Alfred North Whitehead, and Bertrand Russell, have attempted to create their own foundational “Elements” for their respective disciplines, by adopting the axiomatized deductive structures that Euclid’s work introduced.
https://en.wikipedia.org/wiki/Euclid’s_Elements

TheAnal_lyticPhilosopher - 07 January 2019 10:31 AM

It would be peculiar indeed if it were somehow innate to our dispositions to know and inquire, i.e. if there was a “sense of logic as a sense of perception” that acts as our access to the world.  Since no useful inquiry or discovery has ever deployed it—not in science, which developed late, nor in industry, which preceded it—what evidence beyond worship of it now is there that it somehow structures our very manner of perceiving and thinking?  Unless, of course, one wants to argue our perceiving and thinking innately disposes us to ignorance and to discovering nothing new….

I already provided in my response to burt a detailed indication of how I see human perception and understanding of the world as essentially a logical process. If you don’t feel like addressing any specific point, then have a good day.

TheAnal_lyticPhilosopher - 07 January 2019 10:31 AM

It seems to me the only useful application of Aristotelian logic is its formalization by Boole, which as I understand it underlies the theory of computation and all computer programming.  For computation (which recreates one operation of intelligence) and for the manipulation of information presumed known, the logic has proven powerful—one of the most useful tools we have.  But as a tool for inquiry, discovery, and the generation of reliable knowledge, it is probably an optimized guarantee of ignorance.  If it’s not, it’s hard to explain how its deployment as the standard of knowing for 2000 years resulted in so much of it.

Your insistence on Aristotelian logic won’t make it anything but a derail, you know.
And knowledge is always reliable or it wouldn’t be knowledge.
It’s interesting to note that Aristotelian logic came out around the same period as Euclid’s Elements and arithmetic. I don’t see that they proved much more useful than logic at the time in terms of the development of science.
And since Kepler and Newton, mathematics has been a fundamental basis for science, which ipso facto makes logic a fundamental basis for science.

TheAnal_lyticPhilosopher - 07 January 2019 10:31 AM

I think the OP raises an important point, but framing it in terms of an innate sense of logic as a sense of perception almost certainly misstates that point.  Rather the far more interesting question seems to be: how does Aristotelian logic emerge out of our linguistic practices and perceptual life, not how our very perceiving, speaking and thinking is structured a priori by it.

There’s no “misstatement” whatsoever of the point I wanted to make. If you want to make a different point, start your own thread.
Still, given the way you’ve managed to successfully ignore the OP and all my subsequent posts, I think I’ll be better off ignoring yours.


It should be noted, however, for those who may be interested, that logic, logic in its intuitive form, is still what mathematicians rely on and use today to prove all their theorems. It is notorious that despite the development of standard formal logic by mathematicians themselves, essentially between 1900 and 1930, mathematicians today still choose to prove their theorems using their logical intuition, i.e. their sense of logic, rather than any of the computer-based theorem provers implementing formal logic.
And I also don’t think mathematicians in the 3rd century BC waited for the formal logic developed in the 20th century to know how to go about being logical in proving their theorems.
And then mathematics is and has been since Kepler and Newton a fundamental basis for science, which ipso facto makes logic a fundamental basis for science.
This in effect shows at the macroscopic scale of the development of science what happens at the microscopic scale of the development of each human being from early childhood.
Oh, well, I don’t mind having to wait for science to catch up with all this.
EB

 
TheAnal_lyticPhilosopher
Total Posts:  1006
Joined  13-02-2017
 
 
 
08 January 2019 13:14
 
burt - 07 January 2019 12:39 PM
[earlier post quoted in full above; cut to provide space]

A bit tough on Aristotle, Anal. In my understanding the real value of his logic (plus the somewhat later developed Stoic logic) was that it provided the ground rules for philosophical discourse. It doesn’t give something new but it does eliminate much bs. The other thing is that historically the ancient world wasn’t all that hung up on Aristotle, he was one of a number of philosophers and, in Kuhnian terms, science was in a pre-paradigmatic state, and in that state there is never much progress, just different approaches and theories without much way of judging between them. It wasn’t until Aristotle became “The Philosopher” in the Medieval period that his logic and his overall system took hold and, in my understanding, this could well have been a necessary preliminary for the later Scientific Revolution. But right on with doubts about a “logical sense.”

I’ll attempt to justify the hardness.  I grant that more havoc was created in Aristotle’s name once his “method” became codified in medieval thought than he himself probably would have countenanced.  But nevertheless, when thinkers did get around to the business of knowing again, it was Aristotle they turned to, and contra your suggestion, I would say the way of thinking initiated by Galileo and Torricelli et al. upended the very logic of Aristotelian inquiry.

For Aristotle, inference rests on certain truths that are immediately possessed.  These truths are always better known than the conclusions, and at the end of the day they reflect that which is fixed and universal in nature, being definitions and classifications into which all particulars fall.  For him, knowledge of particulars and their changes marks inferior being, and thus is not really knowledge at all.  What alone counts as knowledge is bringing to awareness how the particular falls under the auspices of the universal, and therefore why the conclusion about it is true.  And this process always involves the prior apprehension of the certain propositions that ultimately ground the inference.

The question, of course, for philosophers since then has been: whence these certain truths immediately possessed on which valid inferences rest?  For the empiricists (and logical positivists, etc.), sense provides them; hence the issue is how to go from certainly apprehended particulars to the non-sensibly apprehended universals.  For the rationalists, some faculty of rational apprehension provides them; hence the issue is moving from the apprehended universals in their application to the sensory—and hence uncertain—particulars.  But in both cases the problem is the same: how does one go from truths deemed immediate and certain to conclusions that are demonstratively true?  One school says induction, the other deduction, but the underlying premise of demonstration from prior knowledge remains.

As I see it, science only got off the ground and established a means of obtaining reliable knowledge by abandoning, and even reversing, this process.  In science discovery reigns, not demonstration; demonstration only applies to the organization of knowledge after it is achieved, not acts as the means of achieving it.  Specifically, in science there are no antecedent certain truths upon which to rest inferences. None are even sought. Instead there are first conjectures or probabilities, then these are tested against further experiences, tests that incorporate the expected consequences deduced both from what is already known and what is conjected against what is in fact observed in the varying particulars, thus determining whether these consequences obtain, and therefore adding to the likelihood of the conjectural proposition’s truth.  In short, scientific inference uses both deduction and induction conjointly in a single operation to reach an explanation that always remains, against the standards of demonstrative certainty, probable.  Induction suggests an explanation, deduction goes from the potential explanation to the particular facts to see if the facts that should follow do in fact follow.  Once this is done, alternative explanations are proposed, and these are tested to see if those consequential facts obtain: if not, then those are rejected; if so, then the original conjecture is modified, or rejected, or the test is checked for its reliability.  This process goes on and on in a process of testing and elimination that eventually achieves a representative case, i.e. a satisfying explanation for all the facts observed in light of already existing knowledge.  This process essentially reverses the reliance on certain knowledge grounding a conclusion.  In effect, science as we know it now is “the process of making sure, not grasping antecedently given sureties” (Dewey).
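
Read as a bare procedure, the loop described above is roughly: propose candidate explanations, deduce what each one predicts, keep only those whose predictions survive the observations, and repeat as new evidence arrives. A deliberately toy sketch of that loop (in Python; my paraphrase of the description, not anyone’s actual methodology):

```python
# Deliberately toy sketch of the conjecture-and-eliminate loop described
# above: candidate explanations are predictive functions; observations
# eliminate those whose deduced consequences fail; whatever survives stays
# provisional rather than demonstratively certain.
candidates = {
    "constant":  lambda x: 3,
    "linear":    lambda x: 2 * x + 1,
    "quadratic": lambda x: x * x,
}

observations = [(0, 1), (1, 3), (2, 5)]  # (input, observed outcome)

surviving = dict(candidates)
for x, observed in observations:                      # new evidence arrives
    surviving = {name: f for name, f in surviving.items()
                 if f(x) == observed}                 # deduce and test

print(sorted(surviving))  # ['linear']: the best surviving conjecture so far
```

Whatever survives is only the best conjecture so far, still revisable by the next observation, which is the point being made above about probability versus demonstrative certainty.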

This leaves open, of course, the question: how can we ever be sure?  The answer: in demonstrative terms, i.e. against a standard of eliminating all possibilities that are non-contradictory, or insuring the conclusion by antecedent insurance of the premise, we can never be sure.  All our explanations—our “conclusions”—are probable, subject to amendment or outright rejection under new knowledge.  This is absolutely not the case for Aristotelian demonstrative syllogisms.  Quite the contrary: for our part we pick the best explanation, the one that explains the most without leading to inconsistencies with other things we know; for him, in what he called demonstrative syllogisms, we always already start with things we know for certain.  Summarily put, I guess, in science, all so-called demonstrative certainty is only instrumental to achieving reliable explanations, which remain only probable.  It is neither ever truly certain nor the beginning and end of science as such.

Or so the Anus conjects.  This is the germ of an idea that I still work on, without at this point being satisfied with it myself.  There are many details to work out, and the devil is in them.  I would also append that this problem of satisfying the tacit requirements of demonstration still plagues much—if not the core—of philosophy of science today.

 

[ Edited: 08 January 2019 13:34 by TheAnal_lyticPhilosopher]
 
TheAnal_lyticPhilosopher
Total Posts:  1006
Joined  13-02-2017
 
 
 
08 January 2019 13:22
 
Speakpigeon - 08 January 2019 12:48 PM

Still, given the way you’ve managed to successfully ignore the OP and all my subsequent posts, I think I’ll be better off ignoring yours.


I don’t see a downside to that, so by all means do.  Each here to do their own, as I like to say.

[ Edited: 08 January 2019 13:35 by TheAnal_lyticPhilosopher]
 
TheAnal_lyticPhilosopher
Total Posts:  1006
Joined  13-02-2017
 
 
 
08 January 2019 13:52
 

Regarding the relative performance on the Wason test, where concrete situations make the “logic” easier to obey than occasions requiring application of a formal rule, you say:

Now whether or not one wants to call that an innate sense of logic is open to question. As I see it, I would not say that. Rather, it seems to me that the innate sense is one of whether or not something fits into patterns of expectation that have been learned.

My view too.  Logic in its formal sense, as a codified way of “proper thinking,” is, I think, derived from patterns of implication in meaning, which is more fundamental than the forms it takes on under conditions of controlled inquiry or successful communication.  I’d say that logic is the formalization of the patterns of expectations that have proven successful, thus these forms are both ‘derived’ and ‘invented,’ much like one derives and invents certain forms for interactions in commerce, like the terms of contract for making and enforcing what were otherwise informal promises.  While the formalization in effect creates something new, in fact the order of genesis is the opposite, I think, of what is suggested in the OP.  We derive a logical sense from successful patterns of inquiry and communication, not guide those inquiries and communications by some innate, non-derived logical sense.  Logic, then, as rules for “correct” thinking, is always instrumental; it is an a posteriori tool that can be internalized for future uses, thus creating its own conditions for its own success, yielding the appearance of apriority.  But in its genesis it is extracted from informal patterns of meaning as they occur in continued use, once these patterns prove useful and can be subjected to reflective scrutiny that prescribes the specific conditions for their success, thus insuring more constantly reliable success in the future.

In any case, that’s my view on this, somewhat in line with what I quote from you.

 

[ Edited: 08 January 2019 13:55 by TheAnal_lyticPhilosopher]
 
burt
Total Posts:  15955
Joined  17-12-2006
 
 
 
08 January 2019 17:21
 
TheAnal_lyticPhilosopher - 08 January 2019 01:14 PM
[full exchange quoted above; cut to provide space]

Not a lot to disagree with, although I do have a different take on things. As I see it, Aristotle had to develop his logic in order to resolve the issue of how to make a non-sophistical argument. Without that, the early science of the Greek and Hellenistic periods wouldn’t have been possible. Then the Medieval philosophers came along and that was what they had to work with (the Platonic and other influences came in later, mainly from Byzantium). I think we’re pretty much on the same page about what they did with it, but I don’t think that empirical science could have arisen without that work. Then there was a brief interlude of Renaissance natural magic (Prospero) as the Aristotelian view collapsed, followed by the development of empirical methods (Bacon), the recognition of math as important for science, and the development of validity criteria for physico-mathematical theory (Newton). So I see two steps: first the development of validity criteria for philosophical theorizing (Aristotle, then the Medievals), then the development of validity criteria for empirical work, so that now we have a rational-empirical science. I’d project this further (a long-term project for me) and say that we still need to develop validity criteria for looking at different worldviews, or paradigms.

 