Also, your argument explicitly deals with how humans make predictions, so yes, you have to deal with how humans work, because that's the entire point of your argument. If we were talking about the way AIs make predictions, you couldn't ignore the way an AI works, but you could ignore how humans work. Those other sciences don't concern themselves with how humans make predictions, therefore they don't need to concern themselves with humans.
I think it would be easier to understand if I replaced every 'human' with 'robot' or 'smart AI', because I hope there is no difference, and I argued as if there were none.
We're regressing to the classic tree-falls-in-the-forest problem: if I shot you with a disintegrator ray, presuming your perception is all there is, were you shot or not?
Something broken cannot decide that.
I could not decide that if I were shot.
A shot robot, broken by the shot, doesn't know whether it was shot; it did no calculations after being shot.
Some other robot watching the scene may notice the shooting and decide that a robot was shot.
More specifically? I still don't know what you mean by that.
English, German, French, ...
You've claimed that patterns that aren't understood are languages for you.
I didn't mean it like that. I meant something like this:
Time 1: I don't understand a.
Time 2: I understand a now! a was always a pattern, because someone could have understood it before me.
And yet, you cannot logically prove logic. And, again, you are treating math as though it were a physical thing, rather than an abstraction. The universe does not run on Math or Physics; it runs on the universe, and things like Math and Physics are human abstractions that can be used to describe the universe.
I don't commit to any statement about how real math is or isn't. I think math can predict things.
You can freely violate the rules of logic. A <=> ~A. Math? 2+2 = -9. There. Violated. The thing is, since they are tools supposed to describe reality, if you violate the rules governing them, the results are utterly useless.
I wanted to point out that a robot gains from following a rule if it has learned that following that rule is beneficial.
So once the robot decides to follow that rule, it loses the possibility of doing the things the rule forbids.
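A minimal sketch of that point (the action names and helper are hypothetical, just for illustration): adopting a rule is like removing the forbidden actions from the robot's choice set.

```python
# Hypothetical sketch: once a robot adopts a rule, the actions
# the rule forbids simply disappear from its choice set.

actions = {"cooperate", "defect", "wait"}

def adopt_rule(available_actions, forbidden):
    """Adopting a rule removes the forbidden actions from consideration."""
    return available_actions - forbidden

# The robot learned that defecting is harmful, so it adopts "never defect".
choices = adopt_rule(actions, {"defect"})
print(sorted(choices))  # ['cooperate', 'wait']
```

The robot is now less random than before: whatever it picks, it will not be "defect".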
Also, please provide examples of 'some kind of sentences' in question since it seems somewhat unclear what you mean by that.
"Never do something evil" Ok:
Killing is evil, I will never kill.
Stealing is evil, I will never steal.
(just examples, not my beliefs)
Now I will probably not do something if it's evil. If I'm unsure, I can reassure myself by saying "Doing A is evil, I will never do A."
And maybe this sentence will strengthen my belief, because of the form of the sentence, which I believe in so strongly.
One can use English to define rules:
"If A is evil, then don't do A." If one is convinced of this sentence, one can use it to guide one's actions.
And people could even define rules to define new rules.
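To make the two levels concrete, here is a small sketch (all names and the belief set are hypothetical, not anyone's actual proposal): ordinary rules forbid actions, and a meta-rule derives new rules from beliefs of the form "X is evil".

```python
# Hypothetical sketch: rules as predicates that forbid actions,
# plus a meta-rule that defines new rules from existing beliefs.

beliefs = {"kill": "evil", "steal": "evil", "help": "good"}

def is_evil(action):
    """Base rule: 'If A is evil, then don't do A.'"""
    return beliefs.get(action) == "evil"

rules = [is_evil]

def meta_rule(beliefs):
    """Meta-rule: for every belief 'X is evil', define a rule forbidding X."""
    return [lambda a, x=action: a == x
            for action, label in beliefs.items() if label == "evil"]

rules.extend(meta_rule(beliefs))

def allowed(action):
    """An action is allowed only if no rule forbids it."""
    return not any(rule(action) for rule in rules)

print(allowed("kill"))  # False
print(allowed("help"))  # True
```

The point is only the mechanism: sentences in a language (here, predicates) forbid actions, and a rule about rules produces new rules the robot did not start with.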
It boils down to:
I think robots can use the mechanics of some language to create new knowledge. And the robots may use that knowledge, so that knowledge may make them less random than they were before.