Consciousness remains inexplicable to this day if you get really gritty about it, that's true. But I do believe that our models have some inherent value, even if they will always be incomplete. I get what you say concerning scientific ideologues, though in my mind that is a direct consequence of our education system and not of ill-intentioned scientists or flawed methods. There are a fair number of people in the field who know that all the progress they make is but a set of temporary truths.
If a model is able to make predictions, it is a good partial description of reality. Ultimately we cannot know reality, but does that matter when we all share this same perception?
If a model is able to make predictions, this implies only that something in the model is correlated with something else, possibly something not even considered *in* the model in the first place.
For all practical purposes this does not really matter, but philosophically it still does, and hence nothing about the accuracy of the model implies that it in any way resembles actual reality. That is to say: the fact that appearing to do something causes the appearance of something else does not rule out the possibility that you only appeared to do that thing because you were caused to do it by a third thing existing in the unknowable real world, the same thing that also caused the apparent effect to occur (for example).
I was careful to make a distinction between science itself and scientific ideology in my second post; there is no problem in applying the scientific method to get your 'appearances', as it were. But to answer your question of whether it matters: it matters very, very much. If we take it that everything that can be known about the real world beyond its existence is the catalogue of experiences, then everyone's experiences contribute equally to our collective understanding, and no competition is possible. The moment we conclude that there is some means by which somebody can arrive at knowledge about the actually existing world beyond appearances, then whoever possesses the greatest amount of that means ultimately establishes a tyranny over everybody else; a tyranny preceded by a bitter conflict as everybody squabbles to acquire as much as possible of the thing that grants access to the real world. In science's case, that thing is ironically money.
As for GoblinCookie's "everything could be conscious or non-conscious, we can't really tell" - evidence is weaker than surety, but it can exist in the absence of absolute knowledge. It is more likely that a conscious mind is behind something that passes the Turing Test than something that doesn't, for instance. And we all have to work with the evidence available. Sure, there's a minimal chance that I am in an evil god's Matrix and pressing the "z" button is magically linked to killing a random person, but that doesn't keep me from typing "zymurgy."
Nope. The Turing Test machine is probably just made to pass the test, so it is really nothing but a magician's trick to make us think that we are dealing with another being when we are not, a step up from an NPC in a regular computer game like DF. One of the arguments I made is that we do not ultimately know that all other people are not just like the Turing Test computer, that is, a trick made to make us think that we are not alone. The problem is coming up with something *other than ourselves* in such a lonely universe that would go through all the trouble of inventing such an elaborate trick, with no plausible reason to do so, since *it* is not actually aware of what it is doing.
The more likely explanation is that we made all those false people because we were lonely, and then deliberately *chose* to forget that they were fake, so that we would have seemingly real people around to make us feel like we were not alone.
True, but it seems plausible that killing video game characters normalizes or cheapens murder. If killing people, even if they are philosophical zombies, makes you a worse person, then you should not kill people.
It's like the Murder-Gandhi parable.
It is worse than that. Think back to the good (or bad) old days when people would kill each other at close range using spears or other melee weapons. Do you think the grunts were actually thinking about the people they were skewering with their spears as they were skewering them? Is it not more likely that the folks who trained them gave them mannequins or other dummies to skewer for practice, the real purpose of which was not so much to make them better at wielding their spears as to make skewering things a semi-reflexive act? In that situation they probably just killed whoever they were told to kill, unthinkingly, since skewering things had become second nature to them.
Now look to a future where warfare is done by automated killer robots with human controllers. The experience of actually killing people becomes essentially the same as playing a computer game, so computer gamers' reflexive ability to kill the appearance of people using a joystick, mouse, or keyboard makes them the perfect soldiers.