Bay 12 Games Forum


Author Topic: Calm and Cool Progressive Discussion Thread  (Read 1291698 times)

Harry Baldman

  • Bay Watcher
  • What do I care for your suffering?
Re: Calm and Cool Progressive Discussion Thread
« Reply #10200 on: June 09, 2015, 09:16:19 am »

If I remember correctly, information integration is the final step of information processing in the nervous system. You're quite correct that it's currently a black box: if we knew exactly what happened in there and what each part did, we'd have the workings of the human mind somewhat explained and artificial intelligence would be possible. We know how information is acquired, but we have a much less clear picture of how it gets processed, and I maintain that finding out how is the main thing of importance, for many reasons that seem perfectly good to me but on which your mileage may vary.

It's not quite magical black-box thinking, I believe. Black-box thinking would be more like hooking a computer up to one or more human brains and calling that an artificial intelligence, or perhaps asserting that artificial intelligences will always be plainly inferior to a human intellect at a certain task. This, on the other hand, is more in the vein of "but imagine if we could get telomerases to work in every human cell!" wishful thinking as I see it, but it's really fun to think about and easy to hype yourself up for, much like all science of this speculative kind. Truthfully, it's largely empty enthusiasm based on a great deal of nonexistent work that, for all I know, may never get done. But it's a damn sight better than dreaming about FTL travel, so here I am.

Replace 'artificial intelligence' with 'Soviet Union' and you've got your answer. Do you care about a few bruises or shed skin cells?

I imagine shedding skin cells would probably be the regular die-off of old people. Soviet Union shenanigans would be more like self-flagellation, I'd think. Or transhumanism, you never know. And I don't know about you, but I don't actively seek out getting bruised. It's not very pleasant.

Not very well. Most humans wouldn't even have second thoughts about amputating an irreparably functionless and damaged finger, even if the cells that make it up were still alive.

Wouldn't they? I was under the impression that people try to keep their body parts if they can help it. Well, most kinds of people, at least. I understand amputation is usually a difficult decision, and when it isn't, the decision is often made for you, or you're told it's your best chance at survival or a decent quality of life and expected to take the option.

How many cases of an irreparably functionless and damaged part of society can you think of, though?
« Last Edit: June 09, 2015, 09:31:10 am by Harry Baldman »
Logged

Helgoland

  • Bay Watcher
  • No man is an island.
Re: Calm and Cool Progressive Discussion Thread
« Reply #10201 on: June 09, 2015, 09:53:25 am »

Well, the lining of your intestinal tract, then. Those guys are chewed up by your own body; they're built to die.
Logged
The Bay12 postcard club
Arguably he's already a progressive, just one in the style of an enlightened Kaiser.
I'm going to do the smart thing here and disengage. This isn't a hill I particularly care to die on.

Bauglir

  • Bay Watcher
  • Let us make Good
Re: Calm and Cool Progressive Discussion Thread
« Reply #10202 on: June 09, 2015, 12:28:48 pm »

Pleasure (as in, a state of mind that creates positive emotions, just to make sure we don't get into a discussion about what words mean) is the ultimate goal of everyone's life.
This is manifestly not true, unless you expand the definition of "pleasure" to be so broad as to be meaningless, as you might by saying that I value agency because pursuing my desires is pleasurable. At that point, you're constructing the same sort of unfalsifiable, universal answer as a mystic seizing on the word "quantum". I believe that it's better for me to make a choice that damns me to eternal suffering than for the choice to be made for me to enjoy eternal pleasure. If I should choose to be unhappy, even with no additional reason to "make it worth it", even simply for the sake of being unhappy, I don't believe there's anything wrong with that as long as I know what I'm doing. If there's some way in which that means I secretly value pleasure above all else, just in a weird way, it's because you're scrambling for an explanation that preserves your truism in spite of evidence to the contrary.

At best for your argument, I am fundamentally insane.
Logged
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.

Dutchling

  • Bay Watcher
  • Ridin' with Biden
Re: Calm and Cool Progressive Discussion Thread
« Reply #10203 on: June 09, 2015, 12:35:06 pm »

That, or making choices is a pleasurable experience for you :)
Logged

Bauglir

  • Bay Watcher
  • Let us make Good
Re: Calm and Cool Progressive Discussion Thread
« Reply #10204 on: June 09, 2015, 12:38:23 pm »

No, this is the case even when the decision is an agonizing one to have to make.

EDIT: Unless, again, you want to expand the definition so far as to be an unfalsifiable truism by saying there's some "secret" level of happiness I'm not aware of.
« Last Edit: June 09, 2015, 12:40:00 pm by Bauglir »
Logged
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.

Frumple

  • Bay Watcher
  • The Prettiest Kyuuki
Re: Calm and Cool Progressive Discussion Thread
« Reply #10205 on: June 09, 2015, 12:45:35 pm »

I... would probably say that people who don't consider making a choice a pleasure in and of itself have likely not had the ability to choose taken away from them often and blatantly enough. The decision may be agonizing, but that you're able to make it is, very explicitly, its own sort of bliss. Agency often comes bundled with stuff that makes exercising it painful on net, but that has little to do with the nature of agency itself, I would say.

Just as an example, that moment when you're able to walk after months of being bed-ridden, even with all the pain and weakness it involves, even if the ability to choose your own actions is shortly going to lead to misery, is gorram sublime.
Logged
Ask not!
What your country can hump for you.
Ask!
What you can hump for your country.

Harry Baldman

  • Bay Watcher
  • What do I care for your suffering?
Re: Calm and Cool Progressive Discussion Thread
« Reply #10206 on: June 09, 2015, 01:04:44 pm »

This is manifestly not true, unless you expand the definition of "pleasure" to be so broad as to be meaningless, as you might by saying that I value agency because pursuing my desires is pleasurable. At that point, you're constructing the same sort of unfalsifiable, universal answer as a mystic seizing on the word "quantum". I believe that it's better for me to make a choice that damns me to eternal suffering than for the choice to be made for me to enjoy eternal pleasure. If I should choose to be unhappy, even with no additional reason to "make it worth it", even simply for the sake of being unhappy, I don't believe there's anything wrong with that as long as I know what I'm doing. If there's some way in which that means I secretly value pleasure above all else, just in a weird way, it's because you're scrambling for an explanation that preserves your truism in spite of evidence to the contrary.

At best for your argument, I am fundamentally insane.

Ah! A semantic argument! I refer you to the fragment you quoted, especially the bit inside the parentheses. And also to the rest of the original post, where I explicitly mention the feeling of agency as a form of pleasure, because being personally responsible for one's course in life makes one happy (though admittedly I also seem to have accidentally questioned your stance's practicality in real life, which is a very strange coincidence).

And with that I feel obliged to ask: what do you think pleasure is? I'm assuming ecstasy, enjoyment and euphoria from the wording of your post. Crucially, my definition includes those as well as happiness and personal satisfaction.

The point is, pleasure in this case is specifically meant to be the light within your brain, with the accompanying psychoactive reward, that says you done good (it is to philosophical hedonism, if I understand correctly, what "utility" is to utilitarianism). Pleasure is simply the word that best encompasses the human behaviors covered and stretches most easily to accommodate something that doesn't have a readily available word of its own (and if there is one, it's probably a pretentious German word).

The less specific point is, this is a debate of falling trees, deserted areas and sounds thereof.
« Last Edit: June 09, 2015, 01:10:10 pm by Harry Baldman »
Logged

Bauglir

  • Bay Watcher
  • Let us make Good
Re: Calm and Cool Progressive Discussion Thread
« Reply #10207 on: June 09, 2015, 01:19:49 pm »

Not really. I'm accusing your definition of vacuousness. At that point you're saying "Everyone does the things they want to because they want to". If your definition of "pleasure" is so broad as to allow you to make that move where you say "Anything you suggest as a motivation boils down to pleasure" then you're making a non-argument, and I don't see how it's either convincing or useful in constructing an AI. At least my Greatest Good offers some qualities by which it can be distinguished from literally anything else at all.

What I think pleasure is is a sense of enjoyment; it might be got from pride at an accomplishment, sensory feedback from a meal, or reciprocation of your feelings toward a lover. If you want me to dig deeper, it has its roots as a reward mechanism. Even deeper into my own beliefs, making the reward your explicit goal is foolish, and if you make it your ideal you turn yourself into an addict. But that is your business, of course. It's just not something I wish to do. In any case, it's a good deal more restricted than "the whole of human motivation", and that makes it a much more useful concept because it means I can have sensible conversations about it.

EDIT: I guess to be clear, what I'm saying is that defining pleasure this way is abusing semantics to dispose of the argument entirely. It's a major foundation on which everything else rests, and if you just abstract "goodness" away, you wind up saying nothing and taking quite a lot of words to do so. It's as though you were to write instructions on cracking RSA, and at some point you call a method for calculating the decryption key that is "left as an exercise to the reader".

My main point here is that I do not consider pleasure to be the highest good, unless (as you suggest we treat it) it is essentially defined as the greatest good. So a morality we want to implement into some hypothetical intelligence that's based on pleasure will either be something I'm at sharp disagreement with, or else so nebulous as to be no morality at all.
« Last Edit: June 09, 2015, 01:50:31 pm by Bauglir »
Logged
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.

Harry Baldman

  • Bay Watcher
  • What do I care for your suffering?
Re: Calm and Cool Progressive Discussion Thread
« Reply #10208 on: June 09, 2015, 02:38:02 pm »

Not really. I'm accusing your definition of vacuousness. At that point you're saying "Everyone does the things they want to because they want to". If your definition of "pleasure" is so broad as to allow you to make that move where you say "Anything you suggest as a motivation boils down to pleasure" then you're making a non-argument, and I don't see how it's either convincing or useful in constructing an AI. At least my Greatest Good offers some qualities by which it can be distinguished from literally anything else at all.

What I think pleasure is is a sense of enjoyment; it might be got from pride at an accomplishment, sensory feedback from a meal, or reciprocation of your feelings toward a lover. If you want me to dig deeper, it has its roots as a reward mechanism. Even deeper into my own beliefs, making the reward your explicit goal is foolish, and if you make it your ideal you turn yourself into an addict. But that is your business, of course. It's just not something I wish to do. In any case, it's a good deal more restricted than "the whole of human motivation", and that makes it a much more useful concept because it means I can have sensible conversations about it.

EDIT: I guess to be clear, what I'm saying is that defining pleasure this way is abusing semantics to dispose of the argument entirely. It's a major foundation on which everything else rests, and if you just abstract "goodness" away, you wind up saying nothing and taking quite a lot of words to do so. It's as though you were to write instructions on cracking RSA, and at some point you call a method for calculating the decryption key that is "left as an exercise to the reader".

My main point here is that I do not consider pleasure to be the highest good, unless (as you suggest we treat it) it is essentially defined as the greatest good. So a morality we want to implement into some hypothetical intelligence that's based on pleasure will either be something I'm at sharp disagreement with, or else so nebulous as to be no morality at all.

Actually, that's correct. It is indeed a definition that encompasses every human motivation, and is thus probably quite vacuous. The statement "pleasure is the goal of everyone's life" is meaningless, because pleasure in this situation is the sensation you obtain when you perceive positive accomplishment. You could replace it with "happiness" or "satisfaction" and get the exact same sentence.

Let's rephrase further: "The goal of everyone's life is to get what you want." That's a tautology, because your goal is what you want. I suppose that sentence is indeed unsalvageable, unfortunately. Well then. Let's take a step back.

The core principle, from which the previous shitty sentence is derived, is that people do things because they get something from it. The counterargument to that is altruism, which doesn't really get you anything aside from maybe gratitude, but is still pretty great to do. So you introduce the concept of pleasure and say that being altruistic pleases you despite resulting in a net material loss for you, which recontextualizes selflessness as part of a broader selfish motivation - because terrible people will have you believe that everything you do is technically selfish, to justify their own selfish actions. Then you extend the concept of pleasure into non-material gain, and notice that even material gain is valuable because of your subjective perception of it - see the mice that would starve if it meant they could keep stimulating their pleasure centers. And there you have a handy unified way of characterizing all of human subjective fulfillment: non-material gain (or pleasure, but pleasure sounds dirtier, if catchier).

From here you can reason that non-material gain, if it could be measured, quantified and predicted with adequate knowledge of the human mind, could be written down as an equation and - applied to a whole society, with interactions borne in mind - potentially solved for maximum non-material gain. It's not an AI thing, strictly speaking, just part of me gushing about the potential benefits of a mathematical reduction of human thought, a necessary prerequisite for artificial intelligence and for the scientific (or, well, pseudoscientific) fields concerned with the mind.
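
To make the hand-waving a bit more concrete, here's a deliberately silly sketch of what "solving for maximum non-material gain" would even look like mechanically: give each hypothetical person a made-up utility function over some policy trade-off, add a made-up interaction term, and brute-force the mix that maximizes the total. Every name and number in it is invented for illustration; the real utility functions are exactly the nonexistent work mentioned above.

Code: (Python)
# Toy sketch only - the utility curves and the interaction penalty are
# invented placeholders, not real psychology.
people = {
    "a": lambda leisure, income: 2.0 * leisure + 1.0 * income,
    "b": lambda leisure, income: 0.5 * leisure + 3.0 * income,
    "c": lambda leisure, income: 1.5 * leisure + 1.5 * income,
}

def total_gain(leisure, income):
    """Sum everyone's (made-up) non-material gain for one policy mix."""
    base = sum(u(leisure, income) for u in people.values())
    # crude stand-in for "interactions": lopsided policies breed resentment
    return base - 4.0 * abs(leisure - income)

# brute-force over a grid of policy mixes trading leisure against income
candidates = [(i / 10, 1 - i / 10) for i in range(11)]
best = max(candidates, key=lambda mix: total_gain(*mix))
print(best, total_gain(*best))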


And with all that, I am back where we started - solving for maximum non-material gain, even if you had complete knowledge of the underlying principles of consciousness, probably wouldn't be all that helpful. In fact, I notice that SirQuiamus was completely right - see problem #1, which he mentioned but whose implications I failed to understand at the time.

Furthermore, I'm not actually advocating making the sensation of non-material gain your goal, because with the way it is defined and phrased you literally can't do anything else - except perhaps something blatantly self-destructive out of spite, such as slitting your own throat with no other provocation. But then you would have derived a small measure of satisfaction from proving me completely wrong and demonstrating supreme agency, which I could comfortably describe as non-material gain for you. It's a catch-all term for a reason. More amusingly, it may in fact be an unhelpful, impractical abstraction, which sounds rather familiar right now.

A good real-life example of non-material gain coming to light is when a good deed becomes tainted by some extraneous factor. The good deed would have granted you the appropriate amount of non-material gain, but the extraneous factor changed your perception of it enough that you failed to get all (or indeed any) of it. For you, that would be discovering that your choice to be in a specific situation was an illusion - a ploy based on a prediction of which choice you were likely to make, and the result of shallow yet nevertheless effective manipulation.

Ah, to be proven wrong.
« Last Edit: June 09, 2015, 02:44:34 pm by Harry Baldman »
Logged

Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.
    • I APPLAUD YOU SIRRAH
Re: Calm and Cool Progressive Discussion Thread
« Reply #10209 on: June 09, 2015, 02:50:31 pm »

This discussion does give me an idea. Say we obtained an artificial intelligence, taught it to perceive the entirety of humanity as its body, utilizing various mechanisms to move its "body parts", and taught it strict self-preservation. What would be the problems with this?
Replace 'artificial intelligence' with 'Soviet Union' and you've got your answer. Do you care about a few bruises or shed skin cells?
One horrible outcome would be the AI identifying portions of humanity as defective, pathogenic, a cancer, or a foreign organism. The outcome would be the same: programmed cell suicide. It would still make a cool God machine, though.

penguinofhonor

  • Bay Watcher
  • Minister of Love
Re: Calm and Cool Progressive Discussion Thread
« Reply #10210 on: June 09, 2015, 03:37:31 pm »

.
« Last Edit: December 17, 2015, 11:13:02 am by penguinofhonor »
Logged

Graknorke

  • Bay Watcher
  • A bomb's a bad choice for close-range combat.
Re: Calm and Cool Progressive Discussion Thread
« Reply #10211 on: June 09, 2015, 03:58:53 pm »

Wouldn't they? I was under the impression that people try to keep their body parts if they can help it. Well, most kinds of people at least. I understand amputation is usually a difficult decision, and when it's not they often make it for you or inform you that it's your best chance for survival/quality living and expect you to take the option.
Fingers are a tricky issue, because one damaged finger can impact the function of the others, owing to the way the tendons are connected to the muscles and to each other. I don't know exactly how the mechanism works there, though.
Still, as an analogue to society, I don't doubt there are cases where just getting rid of a certain group of people could help others work more efficiently.
Logged
Cultural status:
Depleted          ☐
Enriched          ☑

UXLZ

  • Bay Watcher
  • God Eater
Re: Calm and Cool Progressive Discussion Thread
« Reply #10212 on: June 09, 2015, 06:14:57 pm »

The issue of morality is packed to the brim with variables and so forth. Even things commonly held as 'correct' have hundreds, probably even hundreds of thousands of situations where the morality is at best questionable.

"Don't Steal."
What if your 8-year-old daughter is slowly and painfully dying of starvation, and all other options have been exhausted?
What if your mother needs expensive surgery to rid her of constant agony, once again, all other options exhausted?
What if the fate of the world somehow hangs in the balance?

"Don't Kill."
What if three criminals have broken into your home and want to torture you and your family to death?
What if they've sworn to come back for you after being arrested, should they somehow fail and get caught?
Oh, but they're doing it because you earlier did the same thing to their families.


Objective morality is impossible, because however you attempt to define it, you are using your own subjective opinions as a basis.
Invoking Godwin's law: please explain to me how anything the Nazis, the Viet Cong, or the Islamic State terrorists did is objectively, unquestionably wrong, without using a point of origin to judge them by.
Logged
Ahhh~ She looked into your eyes,
And saw what laid beneath,
Don't try to save yourself,
The circle is complete.

Transcendant

  • Bay Watcher
Re: Calm and Cool Progressive Discussion Thread
« Reply #10213 on: June 09, 2015, 10:39:02 pm »

wrong thread
Logged

Angle

  • Bay Watcher
  • 39 Indigo Spear Questions the Poor
    • Agora Forum Demo!
Re: Calm and Cool Progressive Discussion Thread
« Reply #10214 on: June 09, 2015, 11:05:50 pm »

The biggest problem with morals is that they change over time. Trying to set down some sort of "moral calculus" is equivalent to setting the rules by which a utopia would live. You can take a guess at the outcome.

You don't hardwire your moral propositions into your calculus. Thus, you can compute something for any set of morals.

It's still difficult and impractical.
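
In the same toy spirit as the optimisation sketch further up, here's a minimal illustration of keeping the morals out of the machinery: the scoring calculus below is fixed, and the moral weights are just a parameter you pass in. Every action, effect, weight and name in it is invented for the sake of the example, not a real moral theory.

Code: (Python)
# Toy sketch - a fixed "calculus" (weighted scoring) that accepts any set of
# moral weights; the weights, not the machinery, carry the morals.
def score(action_effects, moral_weights):
    """Score one action under a given set of moral weights."""
    return sum(moral_weights.get(k, 0.0) * v for k, v in action_effects.items())

actions = {
    "steal_bread": {"harm": -1.0, "survival": +3.0, "honesty": -2.0},
    "do_nothing":  {"harm": -2.0, "survival":  0.0, "honesty":  0.0},
}

# two invented value systems plugged into the same calculus
value_systems = {
    "consequentialist": {"harm": 2.0, "survival": 1.0, "honesty": 0.2},
    "rule_follower":    {"harm": 1.0, "survival": 0.5, "honesty": 3.0},
}

for name, weights in value_systems.items():
    best = max(actions, key=lambda a: score(actions[a], weights))
    print(name, "->", best)  # same machinery, different verdicts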
Logged

Agora: open-source platform to facilitate complicated discussions between large numbers of people. Now with test site!

The Temple of the Elements: Quirky Dungeon Crawler