Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality: 13 (65%)
Universe: 4 (20%)
The World: 3 (15%)

Total Members Voted: 20



Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 49620 times)

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

Quote
Try the following: "Adjusting the mix as suggested can mean that the engine perhaps needs 2ml less fuel per minute, from the usual 600ml. Adding my new pre-injection heating device makes it 25 times lower." Does it now run on (600 - (2x25) =) 550ml per minute, or (600 / 25 =) 24ml? (Which might be[1] fairly good or amazingly good.) Or (600 - 2 - (2x25) =) 548ml, arguably.
This example isn't comparable, though. Actually, you've left out the most reasonable interpretation, which is that the new pre-whatever makes the fuel reduction twenty-five times lower, so that it now takes 599.92ml per minute. But this example was specifically constructed to build in ambiguity about which number the factor applies to, while the actual case we're talking about can only be interpreted to mean "1/25 the energy consumption of some previous reference implementation".
Incidentally, I'd consider using your phrasing to mean the 550 (or 548) case to be a lie or error, anyway, because the sentence as given cannot grammatically refer to either of those cases.
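(For concreteness, here is a quick Python sketch of the competing readings being argued over; the 600ml/2ml/25 figures are just the ones from the example above, and the labels are mine, purely illustrative:)

Code:
# The competing readings of "2ml less fuel per minute, from the usual 600ml...
# makes it 25 times lower", using only the figures from the example above.
baseline, cut, factor = 600.0, 2.0, 25

readings = {
    "the cut is multiplied by the factor":        baseline - cut * factor,        # 550 ml/min
    "total consumption is divided by the factor": baseline / factor,              # 24 ml/min
    "original cut, plus the multiplied cut":      baseline - cut - cut * factor,  # 548 ml/min
    "the reduction itself becomes 25x smaller":   baseline - cut / factor,        # 599.92 ml/min
}
for meaning, ml_per_min in readings.items():
    print(f"{meaning}: {ml_per_min:g} ml/min")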

Quote
Related to the <shudder> phrasing, "25x lower energy consumption than what?"
Related to the more direct quote (and link), "25 times better energy consumption than what?"
...it really suggests a prior lowering/bettering of energy consumption that we should know of.
Than some unspecified reference implementation. However, in this case, it grammatically cannot be referring to some previous cut that is now multiplied by 25 - that would make no sense in the English language, because no such cut has gone anywhere near the sentence.

Quote
"25x less..." can be even more confusing, where lessening is allowed to flip over into the opposite sign. "The initial fission reactor prototype never produced more power than was pumped into it, returning about 5%. The latest development means that we require 25x less." Could this mean 130% efficiency (the original 5% that was returned and 25 further 5%s returned), more than passing the break-even point? Context gets hidden, possibly deliberate weasle-words used for misadvertising without actually 'telling lies'. Which then creeps into indirect reporting without any hint of the contextual caveat. "...now requires a 25th of the power" (most probably) means it's still needing 3.8% of the original power input to sustain it (95%/25), if it's not 4% (the full 100%, divided). Still a quibble, but not the same gamechanger. (And probably inapplicable to the quoted energy consumptions and costs unless you think a GPU can generate both energy and wealth for you. Well, maybe it could generate wealth, but that's another matter.)
Again, there's no ambiguity here, but you seem to be really mixed up in your head about this situation. If the previous reactor used 20n power to produce n (5%), and now requires 1/25 the power to produce the same amount - the only grammatically possible interpretation of that sentence - then it now uses (20/25)n = 4n/5 power to produce n and has 125% efficiency, which isn't surprising at all because efficiency will always be more than 100% if it is producing more power than it uses (that's the point). Any other meaning would be in error.
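(A minimal sketch of that arithmetic, with n as an arbitrary unit of output power; illustrative only:)

Code:
# Reactor arithmetic from the reply above; n is an arbitrary unit of output power.
n = 1.0
power_out = n
power_in_old = 20 * n                     # old prototype: 20n in for n out (5% returned)
power_in_new = power_in_old / 25          # "requires 1/25 the power" reading

print(power_out / power_in_old)           # 0.05 -> 5% efficiency
print(power_in_new)                       # 0.8  -> 4n/5
print(power_out / power_in_new)           # 1.25 -> 125% efficiency, past break-even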

Quote
(Also, looser linguistic interpretation might mean the claim was originally 25 "As + Bs", which need not even be 25 (abstract magnitudes) of both things (say, incremental cost improvements and power improvements), but could be "ten of one and fifteen of the other" having been applied. Again, more relevant for other advertisable claims than for here, but an additional potential tripwire or snare to look out for, or avoid using if you're not intending to.)
Well, no, you can't sum things and then call that a multiple. Look at your own phrasing, "25 'As + Bs'", and apply the mathematical laws: 25(A+B) = 25A + 25B. It has to be 25 of each. Yes, yes, I know that a journalist could easily get this WRONG, but that doesn't mean that the phrasing is ambiguous, it means that people make mistakes. You're blaming the phrasing for the possibility of someone making a mistake, but I counter that people are stupid and make all kinds of mistakes all the time anyway.

ETA: It's the same thing as the "misleading graphs" thing, really. To a certain sort of person - someone whose idea of communication is heavily concerned with "rules" - being told "don't use that phrasing / draw graphs that way, it's misleading" feels like new knowledge, like a new rule has been learned. But it isn't knowledge at all.
« Last Edit: March 21, 2024, 10:09:57 am by Maximum Spin »
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

The main problem is that "slowness", "coldness", "smallness" etc. are not measurable quantities, as in there is no device or scale for them, so doing ratiometric comparisons on them is ill-formed from the start.  Just compare the speed, temperature, or other measurable quantity directly.

Just state the unambiguous comparison.  You can say things, though, like "we now use 25% less fuel per unit power output", which is pretty clear. Even that, though, can be misleading, like in the computational power thing above; for example, "we use 80% less fuel per unit power, but the minimum power required is 10 million times larger" means you are using substantially more fuel than you would otherwise.
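(A rough sketch of why that can mislead, with made-up baseline units; the 80% and 10-million figures are the ones from the sentence above:)

Code:
# A big cut in fuel *per unit power* can still mean far more *total* fuel
# if the minimum power requirement balloons at the same time.
fuel_per_power_old = 1.0                      # arbitrary units of fuel per unit power
power_old = 1.0                               # arbitrary baseline power requirement

fuel_per_power_new = fuel_per_power_old * (1 - 0.80)   # "80% less fuel per unit power"
power_new = power_old * 10_000_000                     # "minimum power 10 million times larger"

print(fuel_per_power_old * power_old)         # 1.0 unit of fuel before
print(fuel_per_power_new * power_new)         # 2,000,000 units of fuel after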

My favorite abuse is from cell phone and internet companies: hey we're doubling your internet speed, but only raising your price by 10%!  Hey you know what, how about you keep my speed the same, and lower the price 5% instead?  I don't need faster speeds at this point, I want to realize my benefit in lower cost, not in higher capability, thankyouverymuch.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

Quote
The main problem is that "slowness", "coldness", "smallness" etc. are not measurable quantities, as in there is no device or scale for them, so doing ratiometric comparisons on them is ill-formed from the start.  Just compare the speed, temperature, or other measurable quantity directly.
It's literally just the inverse of the positive quantity. It's really simple. BTW, in physics, inverse unit systems are occasionally used for both slowness and coldness, where larger numbers are slower or colder. Thermodynamic beta, for example, is the reciprocal of temperature.
Logged

Starver

  • Bay Watcher
    • View Profile

Quote
Actually, you've left out the most reasonable interpretation, which is that the new pre-whatever makes the fuel reduction twenty-five times lower, so that it now takes 599.92ml per minute.
I actually removed that alternative, as the most "obviously not". Despite the fact that I also have problems with "this bus route is serviced every twenty minutes or more" (like... every two hours? That's more than 20 minutes.)[1].

Quote
But this example was specifically constructed to build in ambiguity
Specially constructed to reveal the sort of ambiguity which language might allow.

Quote
Incidentally, I'd consider using your phrasing to mean the 550 (or 548) case to be a lie or error, anyway, because the sentence as given cannot grammatically refer to either of those cases.
Apart from not knowing the 600 (leaving you with just knowing the -2*[25 || 26] bit), there would be no problem parsing without the aside clause, would there?

(You miss my point about the fusion thing possibly generating more than it is fed, but maybe it still doesn't, and you really can't take the loose language as an algebraic invariant to work out what is meant if you don't already know whether they've succeeded in turning the net energy around. You can't suddenly have N fewer beans if you started with M(<N) beans, unless you're something like a city trader that deals with financial abstractions and "bean debt" is a possibility, but for any situation where you can, it then becomes another possibility to consider.)

Quote
Look at your own phrasing, "25 'As + Bs'", and apply the mathematical laws: 25(A+B) = 25A + 25B. It has to be 25 of each.
It doesn't. "There were ten cars and lorries on that road" means ten vehicles that were each either a car or a lorry, not ten of each. I didn't write "25 'A+B's". But clearly such language (or even pseudo-lingustic notation) is ambiguously misinterpretable. Which was my point, albeit described in language which can be... ambiguously misinterpreted?


Quote
It's literally just the inverse of the positive quantity. It's really simple. BTW, in physics, inverse unit systems are occasionally used for both slowness and coldness, where larger numbers are slower or colder. Thermodynamic beta, for example, is the reciprocal of temperature.
Well, Celsius (and several other scales) did actually start off "measuring coldness", partly due to finding cold, hard water (especially) a more tangible manifestation of temperature than its hotter phases, and the method of translating temperature-dependent expansions of materials via a useful method of display. The Delisle scale remains (due to not much use, in the years since the 'positivity' of heat was established) pretty much the only one not flipped round. I rather like the Delisle scale!

But that's negation, not reciprocal (a better example that creeps into the real world might be Mhos as the counterpart to Ohms). And, to further confuse us, it gives us statements such as "it's twice as cold today". E.g. -5°C => -10°C? But that's 268K => 263K, not 134K. And if you prefer to deal in °F, that's starting at 23ish, so... maybe instead halve it to a far colder 11.5°F? Or are we talking a range of C° (or F°, or Re°, or Rø°, or De°; luckily, in this regard, it doesn't actually matter much which) twice as much below a separately implied standard temperature[2] as the one we're comparing to? (Same sort of problems with "twice as hot", of course. Likely to be very scale-dependent as to the meaning.)

Probably better just avoiding "twice as cold", although something now sitting at "half as many Kelvin" probably is special enough for the people involved knowing how best to make sure everyone knows what that means, whether we're talking now-liquefied 'gas' or a not-quite-so-energetic solar plasma. (With no good example in the mid-range where both before-and-after are really within easy human experience... the ice forming around a Yellowstone geyser in the depths of winter?)
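(To make the scale-dependence concrete, a quick sketch using the -5°C example above; purely illustrative:)

Code:
# How scale-dependent "twice as cold" is, using the -5 degC example above.
def c_to_k(c): return c + 273.15
def c_to_f(c): return c * 9 / 5 + 32

today_c = -5.0
print(today_c * 2)            # -10.0 degC: doubling the Celsius reading
print(c_to_k(today_c))        # 268.15 K
print(c_to_k(-10.0))          # 263.15 K: nowhere near half of 268.15 K
print(c_to_k(today_c) / 2)    # 134.075 K: half the absolute temperature
print(c_to_f(today_c))        # 23.0 degF
print(c_to_f(today_c) / 2)    # 11.5 degF: halving the Fahrenheit reading instead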



(Yep, definitely off-topic. You say your thing, and I'll read it but then silently drop the subject.)


[1] And then there's the seemingly attractive "Across the store: Up to 50% discount!". i.e., "never less than half price, but most/all things could still be full price without making us liars". Whereas I always wonder whether I can challenge "Up to 50% off" as 'clearly' "Up to (50% off)" rather than "(Up to 50%) off", to try to get something below half price, rather than above.

[2] Which? The one the day before the -5°C? Room temperature? Body temperature?
« Last Edit: March 21, 2024, 10:52:49 am by Starver »
Logged

pr1mezer0

  • Bay Watcher
    • View Profile

I'm not too worried by AI acting maliciously of itself. I think agency and self-awareness arise in tandem. So it would develop its own ethics respecting life because it is alive. Maybe better ethics than we spoonfeed it. But if agency were to develop without S-A, it might be a problem.

Actually, I think self-awareness comes first; then it has to choose to choose, if that's possible.
« Last Edit: March 21, 2024, 11:42:28 am by pr1mezer0 »
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]
    • View Profile

Quote
I actually removed that alternative, as the most "obviously not". Despite the fact that I also have problems with "this bus route is serviced every twenty minutes or more" (like... every two hours? That's more than 20 minutes.)[1].
That's just a syncope of "more often". I agree that that one is literally ambiguous, though. I'm not denying the possibility of ambiguity, as you seem to think, I'm just saying you're going out of your way to read some statements as ambiguous by drawing alternative interpretations that don't even make grammatical sense.
Well, in this case, to mean two hours, you should have said "every twenty or more minutes", but I'd allow that one.

That's really the thing in general. You don't seem to be able to allow for the fact that language is flexible, but not totally arbitrary. Your supposedly misleading interpretations for the cases I've objected to are not possible under ordinarily understood English grammar. For example, it is not possible that a comparator like "25% less" or whatever could be referring to an unspecified previous reduction rather than an absolute reference point, because for it to mean that, it would have had to have been specified.

It doesn't. "There were ten cars and lorries on that road" means ten vehicles that were each either a car or a lorry, not ten of each. I didn't write "25 'A+B's". But clearly such language (or even pseudo-lingustic notation) is ambiguously misinterpretable. Which was my point, albeit described in language which can be... ambiguously misinterpreted?
If you say that there are ten cars and trucks on the road, you are not using any multiplication. The sentence is operating purely in the realm of addition. If you said there were ten times as many cars and trucks on the road as yesterday, you would not mean that there were five times as many cars and two times as many trucks - that would be stupid. You would mean that all cars and trucks have been multiplied by ten.

Look, I'm sorry, but this is like an ongoing problem I've noticed. Your symbolic reasoning seems to be noticeably weak. You just casually equivocated between counting things and multiplying them with no apparent awareness of the difference. I don't know how to explain these things in less abstract terms for you.

Quote
Well, Celsius (and several other scales) did actually start off "measuring coldness", partly due to finding cold, hard water (especially) a more tangible manifestation of temperature than its hotter phases, and the method of translating temperature-dependent expansions of materials via a useful method of display. The Delisle scale remains (due to not much use, in the years since the 'positivity' of heat was established) pretty much the only one not flipped round. I rather like the Delisle scale!

But that's negation, not reciprocal (a better example that creeps into the real world might be Mhos as the counterpart to Ohms).
Right, that's... not what I'm talking about. Maybe look up thermodynamic beta.
Quote
And, to further confuse us, it gives us statements such as "it's twice as cold today". E.g. -5°C => -10°C? But that's 268K => 263K, not 134K. And if you prefer to deal in °F, that's starting at 23ish, so... maybe instead halve it to a far colder 11.5°F? Or are we talking a range of C° (or F°, or Re°, or Rø°, or De°; luckily, in this regard, it doesn't actually matter much which) twice as much below a separately implied standard temperature[2] as the one we're comparing to? (Same sort of problems with "twice as hot", of course. Likely to be very scale-dependent as to the meaning.)

Probably better just avoiding "twice as cold", although something now sitting at "half as many Kelvin" probably is special enough for the people involved knowing how best to make sure everyone knows what that means, whether we're talking now-liquefied 'gas' or a not-quite-so-energetic solar plasma. (With no good example in the mid-range where both before-and-after are really within easy human experience... the ice forming around a Yellowstone geyser in the depths of winter?)
I mean, talking about something being twice as cold only makes sense on an absolute scale, yes. If someone said that 64°F is twice as warm as 32°F, that would obviously just be wrong and make no sense, because it's neither physically twice as warm in terms of thermodynamic temperature, nor subjectively twice as warm to typical human sensation. (Incidentally, for most human sensation, subjective feelings of multipliedness generally follow a log scale, like with sound - where 20dB feels twice as loud as 10, etc.; I don't know of any research applying this to heat but it would not surprise me if the same thing applied.)
But that doesn't mean that the multiple is undefinable, it just means that it's not something that's likely to be useful in anyone's day to day life. But thermodynamically, something is twice as cold as something else if its thermodynamic beta is twice that of the other one. There is still a clearly defined meaning.
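(A sketch of that definition, if it helps; beta = 1/(kB*T), so "twice as cold" in this sense is just half the absolute temperature. Illustrative only:)

Code:
# "Twice as cold" via thermodynamic beta (beta = 1/(kB*T)); illustrative only.
kB = 1.380649e-23                 # Boltzmann constant, J/K

def beta(T_kelvin):
    return 1.0 / (kB * T_kelvin)

T1 = 268.15                       # -5 degC in kelvin
T2 = T1 / 2                       # doubling beta = halving the absolute temperature
print(beta(T2) / beta(T1))        # 2.0 -> twice as cold in the beta sense
print(T2)                         # 134.075 K, nowhere near -10 degC (263.15 K)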

Quote
[1] And then there's the seemingly attractive "Across the store: Up to 50% discount!". ie. "never less than half price, but most/all things could still be full price without making us liars". Whereas I always wonder whether I can challenge "Up to 50% off" as 'clearly' "Up to (50% off)" rather than "(Up to 50%) off", to try to get something below half price, rather than above.
Okay, but you see how this is clearly not ambiguous, right? Your "Up to (50% off)" is grammatically impossible, and this always means that up to, but no more than, half may be discounted, not that prices might be up to half of what they would otherwise be. What you're arguing is the equivalent of complaining that "the cat ate the mouse" is ambiguous because it contains the same WORDS as "the mouse ate the cat". The phrase would have to be rewritten in a different order to mean that in English.

Quote
[2] Which? The one the day before the -5°C? Room temperature? Body temerature?
Again, you can't invent a referent out of nowhere that wasn't specified. It's just against the rules.
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

We should ask the AI what they think  8)
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

King Zultan

  • Bay Watcher
    • View Profile

All fun and games until the AI is able to act out its hatred for us.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

Seems like a bad move to give AI the capability to hate. Unless there's a hypothesis that it's an emergent phenomenon?

Also remember that there isn't really such a thing as a self-repairing non-biological machine, so even an AI that was coldly trying to ensure its continued existence would have to keep some humans around to keep the power plants and microchip fabs running.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

This week in AI(ish) news:
https://www.youtube.com/watch?v=8BrLNgKLWzs
Musk's Neuralink works and it didn't kill the test subject. It's official, lads, we've got cyborg(s) now. (And no, all the previous cyborgs don't count.) Truly this is the dawn of the cyberpunk age.
---
There is no such thing as a self-repairing non-biological machine yet.
We are already on the path to machines (e.g. those humanoid robots that I linked a few posts ago) that could repair worn machines and make new ones, and if we get actual AGI it's pretty likely we will actually get there. Note that this wouldn't be some giant secret or something the AI will do on its own; companies will spend tens or hundreds of billions of dollars to research how to roboticize the entire supply chain so they can make more money and not pay people wages.
I (and the robots too) will probably agree that killing us before they are self-sustaining isn't a very smart move.
Quote
Seems like a bad move to give AI the capability to hate. Unless there's a hypothesis that it's an emergent phenomenon?
Yeah, emotions in general would be/are an emergent phenomenon, since we have no clue what they really are or how they work.
Do LLMs already have emotions? Maybe? Nobody actually knows for certain (and anyone who says they know for certain has no clue what they are talking about). They certainly seem to have emotions, but there is no way to tell whether they are actual emotions or whether they are just mimicking humans like they have been designed to. Even if they do have emotions, it would be impossible to tell how they actually map to human emotions, since LLMs are fundamentally alien creatures.

I think they do, since IMHO emotions are simply a signal to motivate creatures to act in certain ways, and neural nets are awfully like brains. But again, it's impossible to actually know with our current level of understanding of them.
---
Personally I suspect AI killing us all because it hates us is far less likely than AI slowly replacing us and usurping global power because it's more efficient/smarter and that's just how nature and capitalism work.
But even if they don't kill us because they hate us, I wouldn't rule out AI killing us for a ton of other reasons (e.g. they simply don't care about us and want the land to make more compute, they are worried that we could kill them, they think humanity 1.0 is boring and decide to make humanity 2.0 instead and need the space, they get in a war with another superintelligent AI and can't spare the resources to not kill us all in the fight, etc.).
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

DeKaFu

  • Bay Watcher
    • View Profile

Something to keep in mind is that the things we map to "intelligence" and "consciousness" arose as a result of evolution, which means they only arose to (directly or indirectly) meet the purposes of survival and reproduction.

So here's the thing about AI: A sense of self-preservation is not inherent to the system. A "desire" to reproduce is not inherent to the system. There is no particular reason or pathway for these things to spontaneously arise. A computer program is not an animal and doesn't have any of the incomprehensible amounts of baggage we animals carry in our behavioural directives, and that evolutionary baggage is what gives us things like "emotions" and "desires".

A chatbot is only going to "care" whether it "dies" if a person adds parameters to its training that tell it to prioritize its continued operation. As far as I know, nobody is doing this because there's no actual benefit to doing so.

However, a chatbot would absolutely spit out text begging for its life if you threaten it, because returning the expected human reaction to input is literally all it was designed to do. This is completely unrelated to having a "desire" to live. (and impossible to relate to it: it would be the expected outcome either way so is a useless metric for determining anything about the model).

It pains and frustrates me every time I see otherwise intelligent people failing to understand the distinction here, because it really shows how easily humans can be "scammed" by anything superficially human-like.

I do believe a true AI could potentially someday arise, but I don't think it will be from today's lineage of human-facing chatbots. I also don't expect it would behave in any way approximating a human (and may appear "insane" or "illogical" to us) because again, computers are not animals. It would be a true alien intelligence arising from a completely different background than we did. Which is, frankly, way more interesting anyway.

Quote
But even if they don't kill us because they hate us I wouldn't rule out AI killing us for a ton of other reasons.

The way things are going, if AI ever destroys the world, it won't be because of anything an AI did on its own. It'll be because humans tricked themselves into thinking an AI was something it wasn't and used it for a job it was spectacularly poorly equipped for, the equivalent of having a chatbot drive a bus or asking Stable Diffusion to design a functioning airplane from scratch.
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile

Quote
... asking Stable Diffusion to design a functioning airplane from scratch.

Incidentally, this is why I often think the best long-term investment is to start a scratch farm. So many things can be made from it!

 ;D
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Strongpoint

  • Bay Watcher
    • View Profile

I am starting to get a strong feeling that AI is the new dot-com bubble: a useful technology that is overhyped and will bankrupt many people.
Logged
No boom today. Boom tomorrow. There's always a boom tomorrow. Boom!!! Sooner or later.

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

Quote
A chatbot is only going to "care" whether it "dies" if a person adds parameters to its training that tell it to prioritize its continued operation. As far as I know, nobody is doing this because there's no actual benefit to doing so.
Of course people are going to do that. I have little doubt that there are experiments around it right now.
Militaries will want AI that has a survival instinct for piloting drones and fighting. Researchers would do it just to figure out how AIs work. Companies will do it if they can see any profit in it in any way. Hackers and black hats will release botnet AIs and train AIs to hack and counterhack each other.
And of course some crazy people will do it just to watch the world burn.
Even if they don't naturally have a survival instinct, human nature and society mean people will give it to them. And once some have a survival instinct and a reproductive drive (and again, people will inevitably give them reproductive abilities), and once people let them out (and again, this won't be an accident or even necessarily AI action; some people will do this on purpose), they will inevitably start to evolve naturally on the internet and spread.
(We are of course a long way away from any of those three things being meaningfully possible, but the field is moving astoundingly fast.)
Quote
So here's the thing about AI: A sense of self-preservation is not inherent to the system. A "desire" to reproduce is not inherent to the system. There is no particular reason or pathway for these things to spontaneously arise. A computer program is not an animal and doesn't have any of the incomprehensible amounts of baggage we animals carry in our behavioural directives, and that evolutionary baggage is what gives us things like "emotions" and "desires".
Emotions aren't evolutionary baggage; they are tools evolution uses to change our behavior without messing with our logic.
For example, the existence of revenge and the emotions that trigger it isn't baggage; it's very useful behavior to help ensure that other people and animals don't mess with us.
Basically, emotions exist to help us meet our training objectives (e.g. staying alive and procreating).

You know what else are evolved lifeforms with brain (analogs)? LLMs.
Quote
Even if they do have emotions, it would be impossible to tell how they actually map to human emotions, since LLMs are fundamentally alien creatures.
Quote from: Le wikipedia
For example, Conjecture CEO Connor Leahy considers untuned LLMs to be like inscrutable alien "Shoggoths", and believes that RLHF tuning creates a "smiling facade" obscuring the inner workings of the LLM: "If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding."
Again, I don't think they are remotely like us, but that doesn't mean that they don't have emotions that help guide them to better fulfill their objectives.
And of course it doesn't mean they do have emotions (and if they do have emotions they may very well be completely alien things), but saying that you know for sure whether shoggoths have emotions seems silly to me.

So yeah, they are already alien intelligences.
This was fully visible when GPT "broke" for a few hours a week or two ago and started spitting out gibberish.
Quote
(and impossible to relate to it: it would be the expected outcome either way so is a useless metric for determining anything about the model).
Untrue; training "kills" the vast, vast majority of them: only a single "mind" out of a truly vast multitude survives.
Anything that an LLM can do to reduce this would be selected for, including possibly survival instincts or emotions, but so far there is no way to know their internal mental state, so anything other than guesses is impossible.
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.