Ask yourself why. We don't know why, but since you do actually believe that video games cause violence, you should.
I don't believe it because I want to believe it; I believe it because, as far as I can tell, both logical inference *and* scientific evidence support this conclusion. It's like global warming: we know from evidence that it is happening, and we know from logical inference (it can't be proven experimentally) that humans are responsible.
Some people don't want to believe in global warming, but they accept that it's real anyway. Others who *want* global warming not to be real still manage to deny that it is. In this case, the people who play violent video games (including myself to a certain extent) have an interest in video games not promoting aggression. So in much the same fashion, we can expect people to deny that video games affect behaviour, regardless of whether the claim is sound or not.
To put it the other way around: if people have no reason they perceive to be sound to believe that video games cause violence, then nobody has any reason to claim that they do.
[ ✓ ] When questioned on the existence of cryptids, claims that it is societal prejudice that most people don't believe in them and that nobody who said "they don't exist" has ever really looked into the matter
[ ✓ ] Doesn't understand the concept of the null hypothesis
[ ✓ ] Makes claims then says that the burden of proof is on other people to disprove the claims
[ ✓ ] Goes on a tangent rant that experts are conspiring against non-experts by pretending to know more
Are we on Bay12, or Ufoproof.com here?
A null hypothesis has to be falsifiable. The hypothesis "cryptids don't exist because people don't see them, and people who do see them are hallucinating" is the very definition of an unfalsifiable hypothesis: if I see one, that does not prove they exist, but if I fail to see them, it proves they don't exist.
To bring in a previous topic, you’ve used the word “winning” a lot. I could just point out that this game has the official motto of “Losing is Fun,” but I’ll try to be a bit more general, as one example could just be an exception to the rule. (Of course, this particular exception means that this particular game might be able to handle Winning Requires Oppression...)
What do you mean by “winning”? Can you describe it without using the word itself or any synonyms? In other words, winning by what metric?
By winning I mean moving the plot of the characters or the fortress forward in a direction that is desirable to the player. The problem with oppressive systems is that they tend to respond to individuals who threaten them with reassignment to Antarctica. Thus reassigned, the player is rendered unable to win, even if winning means challenging the society's oppressive regime rather than furthering personal goals.
In a nutshell, my response to your most recent post: Think Probabilistically. Not null hypotheses but priors. Not “it exists” or “it does not exist” but “I estimate an X% chance that it exists.”
To elaborate, you start with a prior. As you receive information, you use it to adjust your estimate. This is called “updating”. The weight of each update is proportional to the strength of the information. The strength of the information is the ratio of the likelihood of the information given its truth to the likelihood of the information given its falsehood.
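If it helps, that updating rule is short enough to write out as code. This is just a sketch in Python; the function names are my own invention, not from any particular library:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayesian update: posterior odds = prior odds * strength.

    prior_odds: P(hypothesis) / P(not hypothesis) before the new information.
    likelihood_ratio: P(information | true) / P(information | false),
    i.e. the "strength of the information" described above.
    """
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds for-vs-against back into a probability estimate."""
    return odds / (1 + odds)

# Start at even odds (a 50% estimate), then receive information that is
# 4x more likely if the hypothesis is true than if it is false:
posterior = update_odds(1.0, 4.0)
print(odds_to_probability(posterior))  # 0.8
```

Independent pieces of information just mean calling `update_odds` repeatedly; the strengths multiply.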
That makes sense, but ultimately we have to act on the precautionary principle. If there is a substantial probability that something is harmful, we should act as though it is, rather than using the uncertainty as an excuse for inaction.
Applying this to the tiger: someone seeing a tiger is strong evidence for the existence of a tiger, since hallucinations are not common. Someone seeing no tiger is weak evidence against the existence of a tiger, since people don't always notice things. But what's important here is the ratio of the strengths: how much more uncommon hallucinations (/lying/paranoia/bribes/etc.) are compared to people missing the tiger (/lying/bribes). If false positives are more than 99 times less probable than false negatives, then the overall evidence is for the tiger existing. If not, the overall evidence is against the tiger existing.
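To make the ratio-of-strengths point concrete, here is a toy calculation; all the per-report rates below are numbers I invented purely for illustration:

```python
# Hypothetical per-witness rates (invented for this example):
p_false_positive = 0.001  # "saw" a tiger that isn't there (hallucination, lie)
p_false_negative = 0.4    # missed a tiger that is there (didn't notice, lie)

# Strength (Bayes factor) of one positive report, "I saw a tiger":
#   P(report | tiger) / P(report | no tiger)
bf_positive = (1 - p_false_negative) / p_false_positive  # ~600

# Strength of one negative report, "I saw no tiger":
#   P(no report | tiger) / P(no report | no tiger)
bf_negative = p_false_negative / (1 - p_false_positive)  # ~0.4

# Independent reports multiply. One sighting against two non-sightings:
combined = bf_positive * bf_negative ** 2
print(combined)  # roughly 96 -> on balance, still evidence FOR the tiger
```

With these made-up rates, one sighting outweighs a couple of non-sightings; crank up the false-positive rate and the balance flips, which is exactly the ratio comparison in the post above.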
It seems you beat me to it; I was going to say that to someone, but you got there first.
Seeing no tiger is only weak evidence against the existence of tigers *if* the person can actually see tigers in general. If we are talking about blind people, then any number of people failing to see the tiger fails as evidence against the existence of tigers.
In the case of some scientific studies detecting something while other studies fail to detect it, the studies failing to detect it are like the blind men. That is because different studies have different methodologies (or they would produce the same result), and that gives us a problem: we don't know which of the methodologies used by the various studies is the correct one for detecting the thing we are looking for, so in effect the only way to prove a methodology correct is to detect what you are looking for.
In effect it is not like people failing to see the tiger by chance: those who did not see the tiger were never confirmed not to be blind, since we didn't know who could 'see' to begin with. Seeing is defined here as "able to detect the positive result" IF there is one.
Then you update based on this evidence: it shifts your estimate. Your prior and the update combine to give you your new estimate. If tigers are known to be common in the area, then that should also influence your estimate, ferex.
(You may note that I mentioned liars and bribes in both the false positive and false negative sections. Shouldn’t they cancel out? With mathematics, we can show this is not the case. Combining possible explanations for something is roughly additive while the update depends on the ratio. Adding something to both sides of a ratio brings the ratio closer to 1:1. Interpreted, this means that bringing in liars etc. worsens our overall ability to discern the truth, which lines up with our intuitions about liars.)
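The "adding to both sides brings the ratio closer to 1:1" point is easy to check with toy numbers (again, made up purely for illustration):

```python
# A report is 60% likely if the hypothesis is true, 2% if false:
ratio_before = 0.60 / 0.02  # 30:1 -> strong evidence

# Now suppose liars add a flat 5% chance of the same report under
# BOTH hypotheses (an invented figure):
ratio_after = (0.60 + 0.05) / (0.02 + 0.05)  # roughly 9.3:1

# The ratio has moved towards 1:1, so the same report is now weaker
# evidence -- liars degrade our ability to discern the truth.
```

The same thing happens on the other side: adding a constant to a ratio below 1:1 pulls it up towards 1:1, so both positive and negative reports lose strength.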
This is Bayes’ Law, the heart of Reasoning Under Uncertainty. (Technically, it’s the odds ratio form of Bayes’ Law, since we don’t care about the absolute probabilities for updating, only the relative probabilities of foo and !foo.)
[Foo is a metasyntactic variable, also known as a placeholder. !foo means not-foo.]
The problem is that if someone does not want something to be found out, they can purposely replicate the methodology of studies that failed to prove the affirmative, in order to drag the probability down by sheer weight of numbers.
An extra problem is that detecting things often needs good instruments, which are expensive; by using cheap instruments, the naysayer can replicate null-result studies at a far lower cost than is needed to actually confirm the hypothesis.
The idea that a player character in a game should be subject to extreme, realistic interactions, in a world where nothing at all is even remotely realistic, is absurd.
Also, note that the player is the only person in the df universe with a brain. Well, my players anyway.
The blade you are wielding cuts both ways: if we are not required to be realistic, then we also can't argue that oppression should exist because it is realistic for it to exist in a given context.