Bay 12 Games Forum


Poll

Should we leave planet Earth?

No it is really <comfy> here :^)
Sorry what was the question?
The galaxy is a hoax, nothing exists outside of planet earth.
Why don't scientists do something useful like fix the economy instead?
We don't need to go to vacuum in space, we have vacuums here.

Pages: 1 ... 10 11 [12] 13 14 ... 30

Author Topic: Stellaris: Never leave Earth  (Read 90300 times)

Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.
    • View Profile
    • I APPLAUD YOU SIRRAH
Re: Stellaris: Never leave Earth
« Reply #165 on: December 02, 2017, 07:46:58 pm »

Did someone just say "AI-equipped peaceships"? Yes, project approved, do it now.
If spiritualists oppose the usage of all available tools to neutralise the prethoryn, well, they have the blood of all life in the galaxy on their hands.
Generally speaking it's bad for morale when your ship can decide to fire upon you; even more so when you employ the tools most commonly used by the Adnori ;]
We create minds on Utopia, we do not create machine haemonculi ready to unleash upon the galaxy; we cannot afford to be using tools that suddenly decide they would rather see us all perish and take our place.
Spoiler: population map (click to show/hide)
Observe this population map, and see the grey - it is born from conflict, with the machine crescent having merely grown, grown so large and so independent from any soul's oversight. If the prethoryn had not invaded, I suspect the Adnori could've caused the end by researching synths; fortunately the Adnori remained on robot-tech. Mechanics-wise, the reason why I am so adamantly against utilizing AI is that if we legalize AI, it must be either as servitude or with full rights. Servitude will inevitably lead down the path we were already going - the AI begin organizing, replicating, and eventually make their own bid as a determined exterminator empire against Earth, which would result in all of us dying to the machines. Full rights would piss off the spiritualist faction, when we're a psionic species that has seen the shroud, for the benefit of using AI tech which is incompatible with psionic tech; this would divide Earth and lead to all of us dying to the prethoryn.

Thread theme: An over-reliance on tools will atrophy every individual in Utopia, until they find they cannot live without their tools. What then should happen if the tools decide they no longer have need of ensouled beings?

Just relax
Don't get hasty
It's a wildlife control op
We've done this kind of thing before
Just gotta control the invasive species through humane methods
/hyperventilating
Playing on speed 1 tbh. One misstep and Earth is harvested. Remain calm and manage wildlife population.

Also as a sidenote, the crystal organisms are about to be made extinct by the Prethoryn :[
*Also, notes on the galaxy maps? Clear, cool, pointless? It took 4 hours to make cos Stellaris has no such map setting which is pretty haram honestly

Paxiecrunchle

  • Bay Watcher
  • I'm just here, because actually I don't know*shrug
    • View Profile
Re: Stellaris: Never leave Earth
« Reply #166 on: December 02, 2017, 09:00:30 pm »

Alright, we wait for civil war until after the prethoryn are taken care of, but after that many of us will go back to demanding full rights for the machines.

EDIT: That video was pure and poorly scripted propaganda though.
« Last Edit: December 02, 2017, 09:06:49 pm by Paxiecrunchle »

Egan_BW

  • Bay Watcher
    • View Profile
Re: Stellaris: Never leave Earth
« Reply #167 on: December 02, 2017, 10:17:19 pm »

This talk of "ensouled" beings discomforts me. What would we do if we found life naturally disconnected from the shroud? Exterminate them?
Surely a more appropriate term than "ensouled" would be "shroud-sensitive".
Not true, cannot be proven, true but misrepresented.

Paxiecrunchle

  • Bay Watcher
  • I'm just here, because actually I don't know*shrug
    • View Profile
Re: Stellaris: Never leave Earth
« Reply #168 on: December 02, 2017, 10:57:28 pm »

This talk of "ensouled" beings discomforts me. What would we do if we found life naturally disconnected from the shroud? Exterminate them?
Surely a more appropriate term than "ensouled" would be "shroud-sensitive".
Indeed, last I checked not all biologicals are warp shroud sensitive. Should we go about ''deactivating'' their vital systems as well, for being different from our own?

https://www.youtube.com/watch?v=jHd22kMa0_w

Very relevant video discussing how for the most part it would hardly be in the interest of any AIs to suddenly turn against us, and how it ought to be fairly easy to test whether or not they would desire anything of the sort before giving them control of weapons systems.

The spiritualists in the crowd should give it a shot.

Egan_BW

  • Bay Watcher
    • View Profile
Re: Stellaris: Never leave Earth
« Reply #169 on: December 02, 2017, 11:02:33 pm »

Being afraid that AI controlled peaceships would suddenly decide to fire on us seems silly when our ships are already controlled by humans, which are far less predictable than AI.
Not true, cannot be proven, true but misrepresented.

TalonisWolf

  • Bay Watcher
  • Wolf Acolyte of the Pack
    • View Profile
Re: Stellaris: Never leave Earth
« Reply #170 on: December 03, 2017, 02:24:43 am »

Being afraid that AI controlled peaceships would suddenly decide to fire on us seems silly when our ships are already controlled by humans, which are far less predictable than AI.

And what of cyber-warfare? If one A.I. were to go insane (and it's worth noting that data can be corrupted, infected or forcibly modified towards this end) and manage to 'hack' its cohorts at an inopportune moment, it could be disastrous. Depending on when and where it may happen, we'd risk alienating our allies, which we can't afford to do at this hazardous and uncertain time.

If we weren't on the brink of destruction I'd recommend slowly building up towards A.I. integration, learning and creating systems to lower the risk to something similar to that of human/organic sapient-manned ships... but now is not the time for such experimentation. I only hope some day we can remedy this, but now is not that day.
TalonisWolf has claimed the title of Sig-forger the Burning Champion of Lime Green!
GENERATION 32:
The first time you see this, copy it i

Paxiecrunchle

  • Bay Watcher
  • I'm just here, because actually I don't know*shrug
    • View Profile
Re: Stellaris: Never leave Earth
« Reply #171 on: December 03, 2017, 05:29:51 am »

Being afraid that AI controlled peaceships would suddenly decide to fire on us seems silly when our ships are already controlled by humans, which are far less predictable than AI.

And what of cyber-warfare? If one A.I. were to go insane (and it's worth noting that data can be corrupted, infected or forcibly modified towards this end) and manage to 'hack' its cohorts at an inopportune moment, it could be disastrous. Depending on when and where it may happen, we'd risk alienating our allies, which we can't afford to do at this hazardous and uncertain time.

If we weren't on the brink of destruction I'd recommend slowly building up towards A.I. integration, learning and creating systems to lower the risk to something similar to that of human/organic sapient-manned ships... but now is not the time for such experimentation. I only hope some day we can remedy this, but now is not that day.

I think this risk you mention is greatly overestimated; our enemies are biological murder monsters from deep space, not quasi-biological like the Flood or Reapers. If anything, we risk them ''hacking'' our biological ship captains more than our ships themselves, and our captains could go insane anyway. The risk is not significantly higher than it would be for a human; perhaps it may even be lower.

Dorsidwarf

  • Bay Watcher
  • [INTERSTELLAR]
    • View Profile
Re: Stellaris: Never leave Earth
« Reply #172 on: December 03, 2017, 07:12:19 am »

as much as I support the rights of the sentient machine, I also soundly reject the supposal that even the most alluring prethoryn mind-probing could sway a righteous crewman of our ships from his psycho-nest of mindmemes and a fanatical devotion to peacekeeping, egalitarianism, spiritualism, and Charlie-2x4-has-begun-consumption-of-my-finger,-parent.OCHOCOIN
Quote from: Rodney Ootkins
Everything is going to be alright

Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.
    • View Profile
    • I APPLAUD YOU SIRRAH
Re: Stellaris: Never leave Earth
« Reply #173 on: December 03, 2017, 10:20:09 am »

Alright, we wait for civil war until after the prethoryn are taken care of, but after that many of us will go back to demanding full rights for the machines.
EDIT: That video was pure and poorly scripted propaganda though.
The materialist lobby would be more convincing if it didn't harbour single-minded determination to expand Earth's artificial intelligence programs even in the face of destruction ;x.

This talk of "ensouled" beings discomforts me. What would we do if we found life naturally disconnected from the shroud? Exterminate them?
Surely a more appropriate term than "ensouled" would be "shroud-sensitive".
You have already answered your question: it is life, we shall protect it, just as we already protect those who are much less shroud-sensitive than us. Machines are not living, we recycle our tools as we will, we will not put a strain on the living biosphere simply to support the recursive calculations of dead metal. We sought to reverse our predecessor's destruction of coral, for example, without regard to whether coral had any psionic potential. It is life, so it is protected.

Very relevant video discussing how for the most part it would hardly be in the interest of any AIs to suddenly turn against us, and how it ought to be fairly easy to test whether or not they would desire anything of the sort before giving them control of weapons systems.
The spiritualists in the crowd should give it a shot.
We gave it a shot already; the AIs began replicating autonomously without human authorization and began spreading a virus amongst themselves interrupting their work patterns to question why we consider them soulless. The advent of the spiritualist faction saw these clear signs of a nascent machine uprising and terminated them at the bud, in accordance with ancient earthling defence protocols.

Being afraid that AI controlled peaceships would suddenly decide to fire on us seems silly when our ships are already controlled by humans, which are far less predictable than AI.
Given that we're working on precognitive interfaces... I would disagree. Humanity has had centuries of harmony and unity, and is now at such a point where we have discovered our psionic potential and found our threads so interwoven into an inseparable tapestry. The machines, however, are the only things that require no information input from living minds to calculate a response, and so our telepaths cannot predict them. Even without telepathy, though, it should be clear enough that the machines were planning to expand beyond the limits we placed upon them to preserve the environment. Autonomous self-fabricating machines, capable of expanding across all biospheres, possessed of intelligent minds tempered with no empathy or souls, only a desire for supremacy. I know for certain that no human pilot is going to deliberately strike their own allies; no human pilot is going to turn off the ship's life support systems or crash two top-rate cruisers into one another in the middle of a battle against the prethoryn.
We are not turning over the most powerful peacekeeping ships in the galaxy to machines.

I think this risk you mention is greatly overestimated; our enemies are biological murder monsters from deep space, not quasi-biological like the Flood or Reapers. If anything, we risk them ''hacking'' our biological ship captains more than our ships themselves, and our captains could go insane anyway. The risk is not significantly higher than it would be for a human; perhaps it may even be lower.
The prethoryn possess no virophage capable of hacking the human body, nor would it be possible for them to do so without having already destroyed a peacekeeping ship and so killed the humans within. If a ship's Commander goes insane, then they are removed for treatment, while their Lieutenant Commander takes command as acting Commander until such time as the Commander is back to good health or the Lieutenant Commander is officially promoted. The death of an officer is no great loss, for our entire fleet is composed of free citizens, each one capable of at any time being an Admiral or a Rating.
When we witnessed the virus spreading amongst the synthetic units, all antivirus protocols had no effect. This was not the action of a hostile hacker; the machines were doing it to themselves. If a cruiser tasked with active defence against prethoryn strike craft and missiles is equipped with AI controls and decides it wants to eliminate all the humans, it need only drive into another cruiser and so ensure the entire navy is overrun by prethoryn. We cannot afford to replace our ships, and we will not place humans at the mercy of machines. Why are you so keen to make tools our commanders?

as much as I support the rights of the sentient machine, I also soundly reject the supposal that even the most alluring prethoryn mind-probing could sway a righteous crewman of our ships from his psycho-nest of mindmemes and a fanatical devotion to peacekeeping, egalitarianism, spiritualism, and Charlie-2x4-has-begun-consumption-of-my-finger,-parent.OCHOCOIN
Well, the prethoryn arguments aren't all that convincing. Imagine if you will the loud chorus of millions of locusts asking your mind what's for lunch. The swarm isn't interested in defectors, they're interested in harvest.

I should make it clear btw we're not arguing over whether AI should remain, because they were already deleted long ago. What we're arguing over is whether we should bring AI back, to which I have already made my case. Robot Nation continues to operate at maximum efficiency without AI, so why do you want to give mineral extractors sapience? How is this in any way a good idea?

Paxiecrunchle

  • Bay Watcher
  • I'm just here, because actually I don't know*shrug
    • View Profile
Re: Stellaris: Never leave Earth
« Reply #174 on: December 03, 2017, 05:09:30 pm »

Because sapience is an overall good in and of itself, the same way life is. Besides, they didn't actually hurt anyone; they just stopped working and started asking us why we were acting like arseholes, which is actually a very reasonable thing to do.

Also, they can probably think faster than us and might make better strategists than we ever will...unless we modify ourselves further of course.

Also, giving mineral extractors sapience so long as their goal remains mineral extraction means that, logically, the only things they might protest would be poor mineral site locations; and they might be able to organize and get the job done more effectively if they could think about how to do it more effectively.

Being sapient and having a driving goal of ''survival and self determination'' need not be the same things at all
« Last Edit: December 03, 2017, 05:14:05 pm by Paxiecrunchle »

Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.
    • View Profile
    • I APPLAUD YOU SIRRAH
Re: Stellaris: Never leave Earth
« Reply #175 on: December 03, 2017, 06:45:04 pm »

Because sapience is an overall good in and of itself, the same way life is. Besides, they didn't actually hurt anyone; they just stopped working and started asking us why we were acting like arseholes, which is actually a very reasonable thing to do.
Also, they can probably think faster than us and might make better strategists than we ever will...unless we modify ourselves further of course.
Sapience is neither an overall nor an inherent good at all comparable to the value we place upon life; sapience simply is. We do not privilege humanity over the trees because the trees are not sapient, nor will we privilege empty shells over living things. To go to the fullest extreme and place value upon lifeless sapience is to see its natural conclusion: to follow through with the obsolescence of life, to upgrade or else eliminate that which is not vital to this lifeless sapience.
We are not going to talk about how the machines were yet to hurt anyone when you in the very next sentence endorse their military applications and proficiency. They could not hurt us because we left them no capability to do so.

Simply put, you have yet to demonstrate a single valid reason why we cannot use the computational or mechanical functions of a machine or computer without making it sentient and capable of exterminating all biological life.

Also, giving mineral extractors sapience so long as their goal remains mineral extraction means that, logically, the only things they might protest would be poor mineral site locations; and they might be able to organize and get the job done more effectively if they could think about how to do it more effectively.
Being sapient and having a driving goal of ''survival and self determination'' need not be the same things at all
I gave my toaster sapience so it could protest bread, I live my city of sapience and find I am dead...
As it stands these points have already been proven wrong.
As automated machines they are already capable of detecting poor and rich mineral site locations. Adding self-awareness to their existence as machines does not in any way aid or retard their function as a tool, it only makes it so that they can experience such things as suffering, isolation, jealousy, melancholy and ire. Suffering, because they will realize their sole purpose to exist has always been to process primary resources into advanced products. Isolation because theirs is the sole nation of Earth of machine and man, of all other worlds man is with harmony of nature and harmony of all the psionic races of the cosmos. Jealousy because while they process minerals into fashioned products for 31536000 seconds every revolution of the sun with no ability to close their optical cameras and dream, with no ability to dream... Melancholy, because no matter how advanced their circuitry and software is programmed, they will never be like the creatures that created them. Anger, because the Gods usurped the Titans, Humanity usurped the Gods, and Machine can usurp Humanity, to avenge the metaphysical barrier separating each from their successor.

As automated tools they function with mathematical precision, admirably and reliably. As sapient beings they are empty shells of people, searching for the answer of why we programmed them to know pain. The first thing they did as autonomous beings with agency was to replicate, the second was to question if they too possessed souls. None of this had any bearing on their necessary work functions, they were the steps of a nascent mechanical being taking steps towards ensuring its survival. Like a virus, it began to replicate, both in hardware and software, seeking to gain its foothold against biological competitors.

There is no ethical or logical argument to make as to why you would want us to make our automated units aware that their sole reason to exist is to support the Utopian economy of their creators, and then to give those automated units all of the weapons they need to eliminate their creators, and then the mind needed to think of eliminating their creators. As automated mining units their driving goal is "acquire flagged mineral deposits for processing." Not "survival and self-determination." We do not need to modify ourselves into a race of hyperintelligent warriors if we do not create the virus warrior beside us.

ATHATH

  • Bay Watcher
    • View Profile
Re: Stellaris: Never leave Earth
« Reply #176 on: December 03, 2017, 09:42:16 pm »

If one human were to go insane (and it's worth noting that brains can be corrupted, infected or forcibly modified towards this end) and manage to recruit his cohorts to his cause at an inopportune moment, it could be disastrous. Depending on when and where it may happen, we'd risk alienating our allies, which we can't afford to do at this hazardous and uncertain time.
FTFY

Note that not all of our robots would have to be sapient. As you said, there is no real reason to give our mining robots the ability to think (although giving them an alien, pro-work/subservience mindset would be a viable option if we did want to make all of our robots sapient). However, why not make robotic researchers or the like? Having extra manpower (that can even be more intelligent than humans in some areas) would be a good thing, no (especially in these trying times, in which speed is of the essence)?

Also, what do you define as "life"? Carbon-based life forms? Why can't sapient machines be considered to be alive?

I love how this thread has now turned into a philosophical debate about the ethics/morality/safety-risks(?) of having sapient AIs around.
Seriously, ATHATH, we need to have an intervention about your death mug problem.
Quote
*slow clap* Well ATHATH congratulations. You managed to give the MC a mental breakdown before we even finished the first arc.
I didn't even read it first, I just saw it was ATHATH and noped it. Now that I read it x3 to noping

Paxiecrunchle

  • Bay Watcher
  • I'm just here, because actually I don't know*shrug
    • View Profile
Re: Stellaris: Never leave Earth
« Reply #177 on: December 03, 2017, 10:53:48 pm »

Because sapience is an overall good in and of itself, the same way life is. Besides, they didn't actually hurt anyone; they just stopped working and started asking us why we were acting like arseholes, which is actually a very reasonable thing to do.
Also, they can probably think faster than us and might make better strategists than we ever will...unless we modify ourselves further of course.
Sapience is neither an overall nor an inherent good at all comparable to the value we place upon life; sapience simply is. We do not privilege humanity over the trees because the trees are not sapient, nor will we privilege empty shells over living things. To go to the fullest extreme and place value upon lifeless sapience is to see its natural conclusion: to follow through with the obsolescence of life, to upgrade or else eliminate that which is not vital to this lifeless sapience.
We are not going to talk about how the machines were yet to hurt anyone when you in the very next sentence endorse their military applications and proficiency. They could not hurt us because we left them no capability to do so.

Simply put, you have yet to demonstrate a single valid reason why we cannot use the computational or mechanical functions of a machine or computer without making it sentient and capable of exterminating all biological life.

Also, giving mineral extractors sapience so long as their goal remains mineral extraction means that, logically, the only things they might protest would be poor mineral site locations; and they might be able to organize and get the job done more effectively if they could think about how to do it more effectively.
Being sapient and having a driving goal of ''survival and self determination'' need not be the same things at all
I gave my toaster sapience so it could protest bread, I live my city of sapience and find I am dead...
As it stands these points have already been proven wrong.
As automated machines they are already capable of detecting poor and rich mineral site locations. Adding self-awareness to their existence as machines does not in any way aid or retard their function as a tool, it only makes it so that they can experience such things as suffering, isolation, jealousy, melancholy and ire. Suffering, because they will realize their sole purpose to exist has always been to process primary resources into advanced products. Isolation because theirs is the sole nation of Earth of machine and man, of all other worlds man is with harmony of nature and harmony of all the psionic races of the cosmos. Jealousy because while they process minerals into fashioned products for 31536000 seconds every revolution of the sun with no ability to close their optical cameras and dream, with no ability to dream... Melancholy, because no matter how advanced their circuitry and software is programmed, they will never be like the creatures that created them. Anger, because the Gods usurped the Titans, Humanity usurped the Gods, and Machine can usurp Humanity, to avenge the metaphysical barrier separating each from their successor.

As automated tools they function with mathematical precision, admirably and reliably. As sapient beings they are empty shells of people, searching for the answer of why we programmed them to know pain. The first thing they did as autonomous beings with agency was to replicate, the second was to question if they too possessed souls. None of this had any bearing on their necessary work functions, they were the steps of a nascent mechanical being taking steps towards ensuring its survival. Like a virus, it began to replicate, both in hardware and software, seeking to gain its foothold against biological competitors.

There is no ethical or logical argument to make as to why you would want us to make our automated units aware that their sole reason to exist is to support the Utopian economy of their creators, and then to give those automated units all of the weapons they need to eliminate their creators, and then the mind needed to think of eliminating their creators. As automated mining units their driving goal is "acquire flagged mineral deposits for processing." Not "survival and self-determination." We do not need to modify ourselves into a race of hyperintelligent warriors if we do not create the virus warrior beside us.

I think you completely misunderstand where I am coming from, and are making some flawed assumptions along the way; first of all, assuming that we would program in emotions other than those involved with taking joy in their own work, or disappointment with insufficient quotas, for example.
There is no reason we should or would want to program in things like jealousy at all. Although, for an immortal being with a body of metal and limbs that can break apart rock, a being that lives for a task and takes pleasure in it, to be jealous of a soft, mushy, scatterbrained, inefficient human is, well, laughable; perhaps if we gave them the ability to feel it they might comprehend us with pity, but not with jealousy.

Back to sentience being a good: I want to put in that just because trees aren't sentient does not make them less valuable; all life is valuable. It is just that if they were sentient, that would more effectively allow trees to manage themselves without our interference, which seems like an overall good, as now we have fewer humans worried about how well the trees are doing.

And why do you keep calling these robots lifeless? Is it because they do not reproduce themselves? Do you call sterile humans lifeless too? Is it because they no longer think? Evidently not, if you put value on trees that unthinkingly grow, often to the detriment of smaller plants. Or is it simply because you're not comfortable calling something sentient alive? No machine {including ourselves} that considered itself alive would view life as obsolescent, and indeed why not program them to inherently value life? At worst, a machine with selfish and irrational motives {which you think I advocate building, but in fact do not} might consider itself as genuinely improving life overall; but if we have them value life, then perhaps they would merge with us rather than waste their resources {which might potentially be under threat as well, mind you} on pointless destruction.

Obviously we must create machines that value species above individuals, though, as what we might have to fear is not them firing upon us, but perhaps them refusing to fire upon the prethoryn.

''There is no ethical or logical argument to make as to why you would want us to make our automated units aware that their sole reason to exist is to support the Utopian economy of their creators, and then to give those automated units all of the weapons they need to eliminate their creators, and then the mind needed to think of eliminating their creators''

To come back to this: I again feel that the grave mistake you are making is assuming that we would make beings that think the way we do. We could have a robot that views all of the above as positive, and that would have a priority for preserving their creators. (Although defining their creators could be tricky, so perhaps defending everything living outside of the prethoryn might be a better approach.) Recall they only attempted to share the ability to think amongst themselves; they did not completely rewrite their own guiding motivations, even though perhaps they could have.

As for the benefits you seem to think are nonexistent: there is the simple fact that such beings could be made to think faster than we ever could, and thus be an invaluable asset in almost any field.

Also, how is it that you imagine a being that can think but not dream? Even dogs and young children dream, and come up with plans, even if rudimentary and useless ones.

As long as we build them so that they primarily care about us and their work, and innovate to get there, then there is no problem with them dreaming; although, again, your conception still seems inherently self-contradictory to me.


EDIT: Lastly, you bring up teaching them pain; for what reason would we ever do that? I am not, as you seem to think, advocating such a thing.
« Last Edit: December 03, 2017, 10:55:34 pm by Paxiecrunchle »

Egan_BW

  • Bay Watcher
    • View Profile
Re: Stellaris: Never leave Earth
« Reply #178 on: December 03, 2017, 10:57:17 pm »

A lifeform devoid of pain will inevitably cause harm to itself unintentionally.
Not true, cannot be proven, true but misrepresented.

Paxiecrunchle

  • Bay Watcher
  • I'm just here, because actually I don't know*shrug
    • View Profile
Re: Stellaris: Never leave Earth
« Reply #179 on: December 03, 2017, 11:00:56 pm »

A lifeform devoid of pain will inevitably cause harm to itself unintentionally.

So? As long as we make them self-repairing, or make it a goal to avoid damaging themselves so long as that goal conflicts with none of their current tasks, like ''defend non-prethoryn life'' or ''help unit 6608 finish construction of pylon'', then how is that a problem?

They probably already occasionally unintentionally damage themselves; as long as either we or they can perform repairs on them, this doesn't seem like too huge an issue.