The traditional account of knowledge was justified true belief.
Belief: A propositional attitude that P. I'm willing to extend this into the unconscious realm so that cached rules and muscle memory can serve as "beliefs" as well.
Truth: a relational property between beliefs and states of affairs. States of affairs are not true or false; beliefs are true or false.
Justification: The person's belief that P needs to be well supported, such as by being based upon some good evidence or reasoning.
Justification is the hardest of the three.
Then some upstart went and wrote a paper in the '60s which provided counterexamples, the most famous of which is about two job applicants:
> The case's protagonist is Smith. He and Jones have applied for a particular job. But Smith has been told by the company president that Jones will win the job. Smith combines that testimony with his observational evidence of there being ten coins in Jones's pocket. (He had counted them himself — an odd but imaginable circumstance.) And he proceeds to infer that whoever will get the job has ten coins in their pocket. (As the present article proceeds, we will refer to this belief several times more. For convenience, therefore, let us call it belief b.) Notice that Smith is not thereby guessing. On the contrary; his belief b enjoys a reasonable amount of justificatory support. There is the company president's testimony; there is Smith's observation of the coins in Jones's pocket; and there is Smith's proceeding to infer belief b carefully and sensibly from that other evidence. Belief b is thereby at least fairly well justified — supported by evidence which is good in a reasonably normal way. As it happens, too, belief b is true — although not in the way in which Smith was expecting it to be true. For it is Smith who will get the job, and Smith himself has ten coins in his pocket. These two facts combine to make his belief b true. Nevertheless, neither of those facts is something that, on its own, was known by Smith. Is his belief b therefore not knowledge? In other words, does Smith fail to know that the person who will get the job has ten coins in his pocket? Surely so (thought Gettier).
It seems that anyone who wants to argue against this needs to claim one of two things: either what Smith had did count as knowledge (i.e. knowledge includes weird luck situations), or knowledge requires something in addition to justified true belief, since those three conditions are not sufficient to guarantee knowledge in all cases.
Before we go any further - I argue justification ultimately rests on evidence. Reasoning in the absence of evidence cannot provide justification. I look at reality in terms of information. In order to have a justified belief, the formation of the belief must be entangled in some way with reality, such that the belief is unlikely to have formed if what it asserts were not actually the case.
Don't take my word for it, ask a computer! (By the way, if you can't explain your philosophy to a computer, then you probably haven't thought it out well enough.)
var x = 5;
How does the computer know that x is 5? Because the bits at that variable's address became entangled with the bits of the value 5 through a process of assignment. The important thing is that if the variable's bits were entangled with the value bits of a different number, then the computer wouldn't believe that the variable had a value of 5.
// combining beliefs
var x = 5;
var y = 4;
var belief = (x==5 && y==4);
// combining beliefs in a boolean manner DOES NOT WORK if
// the beliefs have an indeterminate truth value
var x = 5;
var y;
var belief = (x==5 && y==4);
// always false (or throws exception depending on language)
Now, that's a crazy-high standard and impossible for humans to follow. Humans were made by evolution, so we don't have a nice tested-at-the-factory, 1-to-1 mapped assignment function: we have to infer things from sensory data. We allow terms like "likely" and "unlikely."
var judgedLikelihoodEvent1 = 0.44;
var judgedLikelihoodEvent2 = 0.66;
var belief = judgedLikelihoodEvent1 < judgedLikelihoodEvent2;
Note that while variables (abstractions/identities) can be of a multitude of "types," beliefs as propositional attitudes are always boolean.
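For instance (the variable names here are just illustrative):

```javascript
// Variables (abstractions/identities) can hold any type...
var weight = 3.7;         // number
var name = "Donald";      // string
var looksLikeDuck = true; // boolean
// ...but a belief about them always reduces to true or false.
var belief = (weight < 5 && name === "Donald" && looksLikeDuck);
// belief === true
```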
Because we work in a world with imperfect information, I'm willing to relax the standard a bit by allowing a presumption that the universe is lawful and that the future will be like the past. This is a belief with an indeterminate truth value because no amount of accumulated evidence can prove it, but it is necessary because it allows us to create and improve evidence-extracting functions.
It's also axiomatically obeyed by everyone except, perhaps, crazy people.
A belief in the likelihood of something must still be justified by entanglement with reality via a constraining function (e.g. saying everything is possible tells you nothing). [Bayesian correlation](http://yudkowsky.net/rational/bayes?repost3yearslater) is my favorite system thus far.
// presuming tests and sensoryData are defined
var evidenceBucket = { };
var val, i, len = tests.length;
var max = 0, mostLikely = null;
for( i = 0; i < len; i++ )
{
    val = tests[i](sensoryData);
    evidenceBucket[val] = evidenceBucket[val] || 0;
    evidenceBucket[val]++;
}
for( var k in evidenceBucket )
{
    if( evidenceBucket[k] > max )
    {
        max = evidenceBucket[k];
        mostLikely = k;
    }
}
// at this point mostLikely is the value that
// most of the tests returned
// obviously there are problems here with the weights
// of the tests all being equal
// but this is just a formalization of
// "if it walks like a duck and quacks like a duck"
The tests, so long as they produce output DEPENDENT on the evidence input (e.g. evidence->val via f(evidence)), are thus entangled with it.
But, that still doesn't cover everything - the tests could be bad and, while still producing output correlated with input, could do so in an extremely weak manner. The mind is a pattern recognizer and it can do its job poorly, due to a lack of proper discernment or bad tests built up through a series of unfortunate correlations. We have to make do with our extremely limited senses and, by pigeonholing our sensory data into abstractions, we fall victim to the [pigeonhole principle](http://en.wikipedia.org/wiki/Pigeonhole_principle).
(The computer version has less guano)
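Incidentally, the equal-weight problem flagged in the duck-test code above could be patched with the Bayesian approach mentioned earlier. A sketch, with made-up reliability numbers:

```javascript
// Hypothetical test reliabilities - all numbers here are made up.
// pHit = P(test fires | duck), pFalse = P(test fires | not duck)
var duckTests = [
  { name: "walksLikeADuck",  pHit: 0.9, pFalse: 0.4 },
  { name: "quacksLikeADuck", pHit: 0.8, pFalse: 0.1 }
];

// One Bayes-rule update of P(duck) given a test result.
function updateBelief(prior, test, fired) {
  var pGivenDuck    = fired ? test.pHit   : 1 - test.pHit;
  var pGivenNotDuck = fired ? test.pFalse : 1 - test.pFalse;
  var pEvidence = pGivenDuck * prior + pGivenNotDuck * (1 - prior);
  return (pGivenDuck * prior) / pEvidence;
}

var pDuck = 0.5; // ignorant prior
pDuck = updateBelief(pDuck, duckTests[0], true); // it walked like a duck
pDuck = updateBelief(pDuck, duckTests[1], true); // it quacked like a duck
// pDuck is now well above the 0.5 prior; a weak test (high pFalse)
// moves the belief less than a strong one, which is exactly the
// weighting the majority-vote version lacked
```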
Sometimes what looks like a duck and quacks like a duck is a human child dressed in a duck costume. I'd consider the belief justified if the following conditions were met:
1. Sensory data was received and processed AND
2. The function which identified the pattern did so in a dependent manner on the sensory data - that is if the sensory data were *significantly* different, the output would be different. AND
3. The pattern identification function must only make use of justified beliefs. AND
4. The pattern accurately reflects reality (becomes a true belief) OR
5. The pattern overlaps with another pattern (false positive) only if either of the following occurs:
a. The function did not have access to data which allowed it to discern between an actual positive and false positive OR
b. The function is widely used because the cases where it delivers false positives are unknown or are so rare as to not be worth the additional computational effort.
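Those five conditions can be sketched as a single boolean predicate. Every field name below is hypothetical, not a real API:

```javascript
// A hypothetical formalization of the five conditions above.
function isJustified(b) {
  return Boolean(
       b.processedSensoryData                     // 1. data received & processed
    && b.outputDependsOnData                      // 2. output depends on the data
    && b.usesOnlyJustifiedBeliefs                 // 3. built from justified beliefs
    && ( b.reflectsReality                        // 4. a true belief, OR...
      || ( b.isFalsePositive                      // 5. a false positive where:
        && ( b.discerningDataUnavailable          //    a. couldn't tell, OR
          || b.falsePositivesRareOrUnknown ) ) )  //    b. collisions rare/unknown
  );
}

// A duck that really is a duck:
var realDuck = {
  processedSensoryData: true,
  outputDependsOnData: true,
  usesOnlyJustifiedBeliefs: true,
  reflectsReality: true
};
// isJustified(realDuck) === true
```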
Wow, that's a lot, but the first two are really saying "induction is necessary" and "data must be discernable." The third should be obvious - justification can only be built out of justified beliefs. The fourth is the straightforward case: you reached a true conclusion by using your brain and evidence in such a way that, if the evidence were different, you wouldn't have.
The fifth one is a bit more complicated but provides some escape hatches. You can call a child in a duck suit a duck and be justified in your belief if:
1. The child is too far away to tell and you don't have some sort of 6th sense that can determine duckiness.
2. No one has ever heard of or used a duck costume or duck call before, or the costumes are so perfect that it's way too hard to tell (e.g. you'd need 5/20 vision) and only two duck costumes have been worn in the history of humanity.
3. Your understanding of costumes, humans, and ducks, if you're using them, has to be justified.
The fifth clause makes justification a sliding scale. As the cost of informedness decreases, the amount of informedness required to be justified increases. As collisions become more possible, justification must become more rigorous.
Given this stance, let's examine the Gettier problem again. But wait, one more thing... can knowledge only be justified by knowledge?
No. You're allowed to luck out sometimes, and that luck is *justified* so long as it fits 5a or 5b, thereby retaining the original definition of knowledge as justified + true + belief.
Ok, now onward:
> But Smith has been told by the company president that Jones *will* win the job.
Here we run into the first problem. We don't have entanglement with Jones actually getting the job. Unless the future can send information to the past, it's **impossible to have foreknowledge** given the standard I've outlined. Statements like "I know X will Y" are automatically false because, as far as we know, we have no **discernable** information entanglement with the future.
A key way to view this is that, at the time of formation, beliefs about the future have no truth value - the value is indeterminate - or we could ditch that and just call them "false until they're true." Either way, they're not knowledge at the moment they're formed.

Truth is as immutable as beliefs and reality are. It's not possible, holding beliefs constant, for a belief about something at a certain time to be true and then later not true... at least according to our present understanding of physics. In other words, I can't believe that I'll eat pancakes tomorrow, call that belief true, and then not eat them.
This is a key reason why entanglement is important - if there is a way for a belief to become non-true, then you can't call it true. You can, at best, call it "likely true."
What can be knowledge is that the company president said that Jones will win the job. Other beliefs can be formed FROM that, and they can even end up being true, but they cannot be knowledge of Jones getting the job.
> Smith combines that testimony with his observational evidence of there being ten coins in Jones's pocket. (He had counted them himself — an odd but imaginable circumstance.) And he proceeds to infer that whoever will get the job has ten coins in their pocket.
Using "the person with 10 coins in his pocket" as a stand-in for the job-winner is kind of lame and is only present to allow luck to work later. At best it doesn't increase the chances of a Jones-gets-the-job belief being true, and it almost certainly reduces them, since P(A) >= P(A&B).
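The conjunction point can be made concrete with made-up numbers:

```javascript
// Made-up probabilities - the point is only the inequality.
var pJonesGetsJob = 0.9;   // P(A): Jones gets the job
var pCoinsStayPut = 0.95;  // P(B | A): the coin count doesn't change
var pConjunction = pJonesGetsJob * pCoinsStayPut; // P(A & B), about 0.855
// Tacking on the coin clause can only lower (never raise) the
// probability of the belief: P(A) >= P(A & B) for any B.
```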
Smith doesn't actually know that Jones will get the job, per the foreknowledge ban, but even if this rejection were overlooked, he doesn't know the future state of Jones's pocket change, because he's not yet discernably entangled with events which could change it.
> There is the company president's testimony;
Irrelevant; all that is knowledge here is what the president said, not the actual fact of Jones winning the position.
> there is Smith's observation of the coins in Jones's pocket;
True at the present time, but not at the point where Jones wins the job (which he didn't anyway). While it's likely that the coin count wouldn't change, the fact that it could makes knowledge conditional on further information, which isn't allowed post-belief.
> and there is Smith's proceeding to infer belief b carefully and sensibly from that other evidence.
Smith is justified in a belief that something is likely to occur, but not justified in believing that it *will* occur, due to the foreknowledge ban. A belief about a future event is not knowledge even if it is based on knowledge. This is because it's not true (yet) that Jones gets the job, due to the immutability clause.
> Belief b is thereby at least fairly well justified — supported by evidence which is good in a reasonably normal way. As it happens, too, belief b is true — although not in the way in which Smith was expecting it to be true.
Now, this is where we get to the luck part. As I mentioned before, justification doesn't have to be based on knowledge, so this part of the statement is correct. Let's see how it jibes with my requirements:
1. Sensory data about the president's statement and the coin count were obtained and processed.
2. Presumably the functions identified the statement and coin count in such a way that if they didn't occur they wouldn't be believed.
3. Presumably the inference used only justified beliefs - trusting the testimony and his own count.
4. The pattern accurately reflected reality for both.
So, what's the problem? It's that there's no guaranteed truth connection between the current state of affairs and future state of affairs including the coin count. You can't have a true belief about future events unless you have discernable entanglement.
// here's some standard code
var belief = { };
var x = 5;
x = 9;
belief.truthiness = (x == 5);
// ok, that's straightforward
// now program this ---------------------------
var x = 5;
belief.truthiness = ifInFuture(x == 5);
// belief.truthiness == ???
x = 9;
// belief.truthiness == false, but... ???????
The truth value of such a belief is indeterminate. A simple way to avoid this problem would seem to be a third, "null" truth value - but that won't work:
The truth of beliefs is a dichotomy with no middle. There's no "null state" - a null state is not having a belief at all. How does the computer, which is a state machine, determine the truth value after the ifInFuture assignment without entanglement - how can it be aware of what it hasn't yet encountered?
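One way out, consistent with the "likely true" framing earlier: don't form the boolean belief at all until the event is observed. A sketch (the impossible ifInFuture prediction replaced by a likelihood claim):

```javascript
// Hold a likelihood claim about the future instead of a truth value...
var claim = { proposition: "x == 5 tomorrow", judgedLikelihood: 0.8 };

// ...and only form the boolean belief once reality delivers the data.
var observedX = 9; // tomorrow arrives; x turned out to be 9
var belief = (observedX == 5); // entangled with the observation
// belief == false, and it never passed through a "null" state
```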
> For it is Smith who will get the job, and Smith himself has ten coins in his pocket. These two facts combine to make his belief b true.
Yes, the two facts do combine to make belief b true - but only by luck, through events Smith had no entanglement with.
> Nevertheless, neither of those facts is something that, on its own, was known by Smith. Is his belief b therefore not knowledge? In other words, does Smith fail to know that the person who will get the job has ten coins in his pocket? Surely so (thought Gettier).

Surely so - and not for mysterious reasons: under the entanglement standard, belief b never had a determinate truth value about the future to begin with.