Bay 12 Games Forum

Pages: 1 ... 678 679 [680] 681 682 ... 759

Author Topic: Calm and Cool Progressive Discussion Thread  (Read 1291741 times)

Frumple

  • Bay Watcher
  • The Prettiest Kyuuki
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10185 on: June 08, 2015, 07:07:23 pm »

I see agency as being more important - the freedom to make choices, and choices that matter.
... agency is a pretty basic pleasure, yes. People generally need that (or at least the illusion of it, which is sometimes the best you can get) to be happy. Being consistently denied agency is one of the simpler ways to drive a human insane -- it's about as anti-pleasure as it gets. The feeling of making choices, of controlling your own actions, is generally considered among the higher pleasures (it's often offset by recognition of the varying consequences involved and whatnot, but that's neither here nor there).

It's not sufficient in and of itself -- as should be obvious, as choices often lead to misery -- but it's generally pretty necessary.

Struggling towards a futile dream is not anti-hedonism. Hell, in a lot of ways it's about as hedonistic as it gets -- you're willingly throwing away a lot (including other sorts of pleasure, and quite possibly the happiness of people around you) specifically to indulge a specific desire. You've just decided to value certain pleasures over others, which is... fine? Different folks, different situations, have different combinations of pleasures they most desire to seek. And that's usually okay, when it's not hurting other people.
Logged
Ask not!
What your country can hump for you.
Ask!
What you can hump for your country.

Bauglir

  • Bay Watcher
  • Let us make Good
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10186 on: June 08, 2015, 07:17:46 pm »

It's all in how you frame it, really. I see the availability of pleasure as a necessary step toward agency. You see the availability of agency as a necessary step toward pleasure. Since I don't see pleasure as the end-goal, I don't find it terribly obvious that agency isn't sufficient in itself; it's what I've decided makes the most sense to me as the ideal, and that's the case even when its exercise does lead to misery. And, to tie this tangent back to the original one, that we have such different perspectives on this makes it very difficult for us to arrive at a satisfactory design for the relevant calculus, even if we'd largely agree on which particular actions happen to be moral.
Logged
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.

KingofstarrySkies

  • Bay Watcher
  • It's been a long time...
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10187 on: June 09, 2015, 12:22:31 am »

BUT PLEASURE IS THE ULTIMATE GOAL OF MY LIFE :D
Logged
Sigtextastic
Vereor Nox.
There'll be another King, another sky, and a billion more stars...

Harry Baldman

  • Bay Watcher
  • What do I care for your suffering?
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10188 on: June 09, 2015, 04:57:02 am »

I find the assertion that machines don't have morality entirely true. However, this doesn't mean they can't possess morality, seeing as morality is, in essence, a set of internalized rules and guidelines. It's not like the principles of human thought and behavior (and, by extension, human thought patterns that lead to moral, ethical behavior) are unknowable. All people operate on a basis of cause and effect, with the differences in their predictable response being what we call a personality. And then, once we get all that down to algorithms, we have AI, possibly the friendly kind.

I would agree that I wouldn't want anything simpler than that performing law enforcement, since they probably wouldn't have an adequate grasp of nuance and context to fulfill their functions well. But if we could have an artificially sapient, humanlike robotic police officer with a programmable personality and the sort of efficiency that allowed the robot arm from the video to defeat a master swordsman with probably much less practice than he had, why not? I suspect they would be no less fallible than the regular police officer in that event. Perhaps even less so.

Of course, we're probably going to obtain unmanned police drones that shoot people a little too often before such a thing is possible, but you know. A man can dream.

The concept of a universal ethical calculus is quite old and widely known, but somewhat problematic.

The main problem is that it doesn't work.

If it doesn't work, it requires improvement, doesn't it?

And of course it's not going to actually work if you're going to put randomly assigned values into a formula you pulled out of your ass (see: Drake's equation). However, imagine if we did have an algorithm that produces sapience, and thus we could work with behavior on a fundamental, mathematical level, figuring out what produces acceptable humanlike behavior. Now that's a position from which ethical calculus can be plausibly derived.
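To make the GIGO point concrete, here's a quick sketch (all parameter values invented for illustration, not measured) of how Drake's equation swings by many orders of magnitude depending on who fills in the unknowns:

```python
# Drake's equation: N = R* * fp * ne * fl * fi * fc * L
# Everything past the first factor or two is a guess, so the output
# tracks the guesser's optimism rather than reality -- GIGO in action.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of communicating civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Both parameter sets below are invented for illustration.
optimist = drake(3, 1.0, 0.2, 1.0, 1.0, 0.2, 1_000_000)   # on the order of 10^5
pessimist = drake(1, 0.2, 0.05, 0.001, 0.001, 0.01, 100)  # on the order of 10^-8

print(optimist, pessimist)
```

Same formula, same structure, wildly different answers - the equation itself never constrained anything.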

BUT PLEASURE IS THE ULTIMATE GOAL OF MY LIFE :D

This actually describes my thoughts on the matters of pleasure as well, by the by. Pleasure (as in, a state of mind that creates positive emotions, just to make sure we don't get into a discussion about what words mean) is the ultimate goal of everyone's life. What probably complicates things is that a single, solitary form of pleasure to the exception of all others doesn't always equate to actual pleasure on account of the nervous system adapting to it. For instance, chemical highs lose their kick after a long time having the exact same kind, while the pleasure of agency dulls itself if you take your life in a fundamentally unpleasant, self-destructive direction. Tending toward extremes decreases the pleasure gained, while a variety of pleasures in life makes them that much more efficient at providing happiness - a state of overall pleasure gain.

To make matters complicated, though, while people do choose their course of action based on the expected pleasure it will bring, their projection of pleasure gain can often be wrong (based on incorrect assumptions or made with flawed reasoning). Furthermore, they can also project themselves more broadly than as a self-contained entity, identifying themselves with concepts, communities and other people - hence the idea of self-sacrifice. They can even project themselves beyond their own deaths with the idea of an afterlife.
« Last Edit: June 09, 2015, 05:03:55 am by Harry Baldman »
Logged

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10189 on: June 09, 2015, 05:31:44 am »

I personally ascribe to the line that what humans really want is fun, not pleasure.
Logged
Quote from: Thomas Paine
To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.
Quote
No Gods, No Masters.

SirQuiamus

  • Bay Watcher
  • Keine Experimente!
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10190 on: June 09, 2015, 05:37:08 am »

I personally ascribe to the line that what humans really want is !!FUN!!, not pleasure.
FTFY

And of course it's not going to actually work if you're going to put randomly assigned values into a formula you pulled out of your ass (see: Drake's equation). However, imagine if we did have an algorithm that produces sapience, and thus we could work with behavior on a fundamental, mathematical level, figuring out what produces acceptable humanlike behavior. Now that's a position from which ethical calculus can be plausibly derived.
So you're saying that we should discover the hidden rules of human behaviour by creating an algorithm that perfectly simulates human behaviour? That's a bit backside-backwards, don'tcha think?
« Last Edit: June 09, 2015, 05:40:31 am by SirQuiamus »
Logged

Harry Baldman

  • Bay Watcher
  • What do I care for your suffering?
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10191 on: June 09, 2015, 05:44:33 am »

I personally ascribe to the line that what humans really want is fun, not pleasure.

I think those are synonymous, with fun being a subset of pleasure and thus one of the requirements (balanced with other forms of pleasure) for happiness.

So you're saying that we should discover the hidden rules of human behaviour by creating an algorithm that perfectly simulates human behaviour? That's a bit backside-backwards, don'tcha think?

Well, not quite. To create that algorithm, we need to obtain a functional mathematical model of sapience first in order to figure out the principle according to which disparate information is integrated into a whole within the mind. Otherwise we can't possibly create the algorithm or invent a proper method for ethical calculations. I tend to trip over what I'm trying to say often, so my apologies.

Point is, as I notice I'm having trouble editing those previous sentences into coherently describing what I'm trying to say, we need to figure out what the input and the stages of its processing are in order to mathematically describe how it affects the output. Ethical calculus in the form described in the wiki article is made up of unhelpful, impractical abstractions.
« Last Edit: June 09, 2015, 05:52:54 am by Harry Baldman »
Logged

Truean

  • Bay Watcher
  • Ok.... [sigh] It froze over....
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10192 on: June 09, 2015, 06:07:15 am »

http://www.bay12forums.com/smf/index.php?topic=103213.msg6289079#msg6289079

Utter insanity and wish fulfillment fantasy. Again. You can't have perfection. Get over it, accept it, move on. Such a heaven will be populated by horrors. Blind faith will literally destroy humanity like that. The road to Hell is paved with good intentions, and all such nonsense is excellent paving material.

You're essentially talking about creating perfect angels to enforce some "moral" or at least "legal" code. If anything is playing God, it's this. You do realize someone will inevitably be controlling these things, yes. (No question mark). No, not, "but that doesn't have to be the case." It will. Every other tool humanity has created has followed this path and been misused. This is no different.

"But Truean this is dif fer ent! These machines will control themselves." No. They won't. Someone will, at best, control them beforehand via programming. There will be back doors built into the code. You're not thinking like the bastards who populate this world. They salivate at the chance to control literal deadly force from the palm of their hands. They will. How does this not register with people. Does love of shiny new gizmos and gadgets overpower common sen.... Upon remembering the lines for new iphones, yes. Yes it does.

We're screwed. The morons composing the masses will demand ever greater gadgetry even more insane than everybody constantly wearing a GPS device (your "smartphone"). So long as the bread and circuses are handheld, they let the government know where they are at all times. Playing the video games "nerds" were once ridiculed for, they trade freedom for flashing lights and "achievements" on screen.... Plus you're always tethered to work 24/7 as you overshare everything, and stupid bragging mothers post pictures of their children, subjecting them to constant permanent surveillance from cradle to grave. This is big brother's wet dream come true.

This is the part where I leave the conversation.
Logged
The kinda human wreckage that you love

Current Spare Time Fiction Project: (C) 2010 http://www.bay12forums.com/smf/index.php?topic=63660.0
Disclaimer: I never take cases online for ethical reasons. If you require an attorney, you need to find one licensed to practice in your jurisdiction. Never take anything online as legal advice, because each case is different and one size does not fit all. Wants nothing at all to do with law.

Please don't quote me.

SirQuiamus

  • Bay Watcher
  • Keine Experimente!
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10193 on: June 09, 2015, 06:16:29 am »

Ethical calculus in the form described in the wiki article is made up of unhelpful, impractical abstractions.
I think the most important philosophical question here is: Is there such a thing as a helpful, practical abstraction in matters of morality? An algorithm is not the same thing as data, right? Even if you have a perfectly valid sequence of functions and variables, you'll still have to fill those variables with exact numerical values, which have to be somehow derived from real situations in the real world. As the GIGO principle dictates, even the most elegant of equations will yield nothing but rubbish if your data is faulty, and how on earth are you going to acquire objectively correct data of such things as pleasure, suffering, harm, fairness, etc.?

In this sense Bentham's calculus is quite illustrative of the inherent weakness in every mathematical abstraction of morality: No matter what moral presupposition you are working from, you'll sooner or later have to try and make exact quantifications of things that are practically non-quantifiable. Is my suffering greater than your pleasure? Who knows? – it depends entirely on who's doing the math.   
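As a sketch of why the arithmetic doesn't settle anything (the scoring function and all the numbers here are invented for illustration, not Bentham's actual variables): a toy felicific calculus flips its verdict the moment a different scorer assigns the values:

```python
# A toy calculus in Bentham's spirit: score each affected person's
# pleasure (+) or pain (-) as intensity * duration * certainty, then sum.
# The scores are the weak point: they're assigned, not measured.

def hedon_score(intensity, duration, certainty):
    return intensity * duration * certainty

def net_utility(experiences):
    # experiences: list of (intensity, duration, certainty) tuples;
    # negative intensity means suffering.
    return sum(hedon_score(*e) for e in experiences)

# "Is my suffering greater than your pleasure?" depends on the scorer:
scorer_a = net_utility([(+8, 2, 0.9), (-5, 3, 0.9)])  # pleasure outweighs
scorer_b = net_utility([(+5, 2, 0.9), (-8, 3, 0.9)])  # suffering outweighs
print(scorer_a > 0, scorer_b > 0)  # prints: True False
```

Identical formula, identical situation, opposite moral conclusions - it all hinges on who does the math.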

Ninja'd by ScarfKitty. 
Logged

Harry Baldman

  • Bay Watcher
  • What do I care for your suffering?
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10194 on: June 09, 2015, 06:58:03 am »

@Truean: Well, I suppose you might be entirely correct. I'm perhaps overly optimistic about these things. However, in consideration of the future I find fear and distrust much less productive than attempts at constructiveness and suggestions for improvement. My suggestions might be entirely wrong, insane and produced by wishful thinking, and maybe civilization as we know it is doomed to live through forty thousand years of exploring the deepest, darkest depths of failure, tyranny and suffering if any of them ever come to be implemented. But humanity's had a pretty good record thus far on the macro scale (similarly to the racist bastard cops averaging out when taken in context with the good people on the force), methinks, so I remain optimistic.

On the other hand, I'm pretty sure I'd salivate at having literal deadly force in the palm of my hand, so you might be more correct than I'd care to admit.

I think the most important philosophical question here is: Is there such a thing as a helpful, practical abstraction in matters of morality? An algorithm is not the same thing as data, right? Even if you have a perfectly valid sequence of functions and variables, you'll still have to fill those variables with exact numerical values, which have to be somehow derived from real situations in the real world. As the GIGO principle dictates, even the most elegant of equations will yield nothing but rubbish if your data is faulty, and how on earth are you going to acquire objectively correct data of such things as pleasure, suffering, harm, fairness, etc.?

In this sense Bentham's calculus is quite illustrative of the inherent weakness in every mathematical abstraction of morality: No matter what moral presupposition you are working from, you'll sooner or later have to try and make exact quantifications of things that are practically non-quantifiable. Is my suffering greater than your pleasure? Who knows? – it depends entirely on who's doing the math.   

Ninja'd by ScarfKitty. 

But a human being makes each and every one of their decisions based on incoming sensory data, which in most cases is, in fact, sensed as discrete impulses. There's no magic black box in there, no inner godliness that elevates man from mushroom, it's all a set of data collection and integration devices hooked up to one another. We manage to create a unified perception of a world from this data, and derive things such as morality, pleasure and harm from it. This demonstrates that it is possible to do such a thing. The underlying principle is there to be uncovered and reduced, and the abstractions, the black boxes of thought are always unhelpful and impractical - the reductions that show their underlying principles are not. If we obtain a mathematical reduction for all of these abstractions, then we may begin the procedures for objectively assessing them as well. They're not non-quantifiable, they just haven't been successfully quantified yet. It's an important difference. Somebody doing shitty mathematics doesn't prove mathematics to be a sham is what I'm getting at.

Though I now realize this does open up certain issues, such as being able to conclusively prove that a shot of heroin will give the heroin addict much more pleasure than, say, the same amount of money spent on groceries will give a suburban child (though perhaps not more than the same amount of money in heroin will give the same suburban child). In that case, you might get better results if you threw in a bit of utilitarianism with your hedonism, or strove to achieve a happy balance between the two. From that perspective, you are correct, as this wouldn't really resolve questions of actual morality (it becomes a matter of personal preference at this point - I tend toward hedonism+utilitarianism, but some might go further in either of the directions, or do something else entirely). It would, however, render approaches to morality, psychology and pretty much every other sphere of knowledge relating to human thought more exact, and thus make hedonism and utilitarianism a far more practicable set of philosophies.
Logged

SirQuiamus

  • Bay Watcher
  • Keine Experimente!
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10195 on: June 09, 2015, 07:44:38 am »

But a human being makes each and every one of their decisions based on incoming sensory data, which in most cases is, in fact, sensed as discrete impulses.
When this is applied to a moral dilemma, it seems to imply that the mental states of other people have to be directly observable. Just put everyone in an fMRI scanner and compare the amount of pleasure/suffering in their brains? Should you carry a portable scanner with you, just to make sure your moral compass is properly calibrated?

I don't subscribe to any black-box theory of mind, but I'm always working under the assumption that mind-reading is practically impossible, never mind how intimately we understand the brain's inner workings. In the light of present knowledge, the mind is a black box, but one with a slowly opening lid, whereas telepathy and other such things are yet-unseen supermassive black boxes of the second order.

As you admit later on in your post, subjective experiences are not objectively quantifiable (obviously!), and comparing private mental states with one another will not provide logically valid results, simply because the variables were never commensurable to begin with.

To take a classical example: Sadists are perfectly moral people – if morality is defined as following your ethical principles consistently and according to the dictates of reason – and scanning their brains during a heinous atrocity could prove unquestionably that their pleasure is "objectively" greater than the victim's suffering, therefore making their act justifiable.         
Logged

Harry Baldman

  • Bay Watcher
  • What do I care for your suffering?
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10196 on: June 09, 2015, 08:14:30 am »

The fMRI scanner measures blood flow to the regions of the brain if I recall correctly, which is a secondary product of neuronal activity, and isn't actually a reliable way to map the workings of neurons, so it's not quite a good way to observe the mental state of a human being, given that we have yet to understand the core principle behind information integration. Integration is key here, and also the thing that makes subjectivity possible. And we don't really need to read the mind to produce the mathematics that make it work - we just need a model that gets the same results. Reading minds, which is what we've been doing with the fMRI thus far, merely produces superficial understanding of its workings.

And you seem to have gotten the wrong idea about my description of subjectivity - I maintain it is quantifiable, as it is a part of integrating the objective sensory information of the situation into what we describe as the mind. Certain impulses will dominate over others depending on context - that's the core of subjectivity. The problem with it is that a heroin addict will probably genuinely derive more pleasure from a fix than a well-fed child will from some groceries, which doesn't allow us to consider these things purely from a hedonistic perspective, or at least requires us to apply that hedonistic perspective more widely.

See, the fun thing here is that with a reduction of thought we could play these situations out in different levels of detail and assess with our own subjective interpretation (informed by, for instance, a utilitarian perspective or according to some other set of moral rules) which course of action is preferable. For instance, if the sadist derives pleasure from a heinous atrocity that destroys a victim, then while technically the victim won't care anymore after they're dead, we can affirm with our own subjective interpretation that it, for instance, a) infringes on one's legal right not to be murdered, b) removes a certain amount of utility the victim could have potentially brought to society, c) makes God angry at us or d) decreases the pleasure gain of the victim's family, employer, friends (and even readers of the newspaper, unless they are sadists themselves and didn't realize you could justify yourself like that) and so forth while not increasing the pleasure gain of the sadist's inner circle (unless she tells some really good stories about it). Maybe the sadist has already killed or gruesomely harmed several people nobody cares about, and we measure that subjectively she's probably hit the point where her actions present diminishing results in both increasing the overall pleasure and productivity gain in society.

This discussion does give me an idea. Say we obtained an artificial intelligence and taught it to perceive the entirety of humanity as its body, utilizing various mechanisms to move its "body parts", and being taught strict self-preservation. What would be the problems with this?
« Last Edit: June 09, 2015, 08:24:12 am by Harry Baldman »
Logged

SirQuiamus

  • Bay Watcher
  • Keine Experimente!
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10197 on: June 09, 2015, 08:47:33 am »

This discussion does give me an idea. Say we obtained an artificial intelligence and taught it to perceive the entirety of humanity as its body, utilizing various mechanisms to move its "body parts", and being taught strict self-preservation. What would be the problems with this?
Ahh, I don't know... the implementation, perhaps, and the matter of choosing which people are going to play the role of its genitals. :v

Every time someone proposes to solve an age-old human-interest problem with a super-intelligent AI, it's always worth asking: "Is it more trouble than it's worth?" Your mileage may vary, is what I'm saying.

...By the way, I'm greatly intrigued by that thing called "information integration" – is it another boxy-thingy within the brain which we don't quite understand yet, but which will solve all problems of the human condition once its secrets are finally revealed? With those secrets, we can take all conflicting moral worldviews and integrate them into a universal morality which will necessarily satisfy everyone, at all times? I can't wait! :p   
Logged

Helgoland

  • Bay Watcher
  • No man is an island.
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10198 on: June 09, 2015, 08:53:12 am »

This discussion does give me an idea. Say we obtained an artificial intelligence and taught it to perceive the entirety of humanity as its body, utilizing various mechanisms to move its "body parts", and being taught strict self-preservation. What would be the problems with this?

Replace 'artificial intelligence' with 'Soviet Union' and you've got your answer. Do you care about a few bruises or shed skin cells?
Logged
The Bay12 postcard club
Arguably he's already a progressive, just one in the style of an enlightened Kaiser.
I'm going to do the smart thing here and disengage. This isn't a hill I paticularly care to die on.

Graknorke

  • Bay Watcher
  • A bomb's a bad choice for close-range combat.
    • View Profile
Re: Calm and Cool Progressive Discussion Thread
« Reply #10199 on: June 09, 2015, 08:54:20 am »

This discussion does give me an idea. Say we obtained an artificial intelligence and taught it to perceive the entirety of humanity as its body, utilizing various mechanisms to move its "body parts", and being taught strict self-preservation. What would be the problems with this?
Not very well. Most humans wouldn't even have second thoughts about amputating an irreparably functionless and damaged finger, even if the cells that make it up were still alive.
Logged
Cultural status:
Depleted          ☐
Enriched          ☑