Bay 12 Games Forum

Author Topic: Accidental AI  (Read 1138 times)

Scoops Novel

Accidental AI
« on: March 15, 2022, 05:46:21 pm »

The basic idea is that at some point in the future we make one algorithm too many and create what is tantamount to a primitive AI. It makes sense to me, especially when you factor in continuing AI development basically shocking the primordial soup of our technological interactions.

Now the good thing about this is, it basically doesn't conform to any of the tropes. No guarantee its intelligence will explode, because you don't know how smart it's going to be. In fact, it might be more difficult than usual, because it's the product of inscrutable interactions. The human-facing algorithms are the weirdest, so it has a good chance of having some form of "personality".

And there would naturally be more than one. Since they're not necessarily smart enough to "corner the market" and get well ahead of any fresh accidents, you could end up regularly creating more.

You could end up with a Culture situation, except all the "Minds" are random as, well, people. Or maybe more random. You could have so much range in intelligence that you've essentially got an animal here and a drunk fratboy there.

It gets really interesting if they declare themselves. We wind up having... critters we're sharing the planet with. Might even scratch the superhero itch.

Thoughts?

None

Re: Accidental AI
« Reply #1 on: March 15, 2022, 07:51:54 pm »

That's... not how algorithms work, or coding.

Scoops Novel

Re: Accidental AI
« Reply #2 on: March 16, 2022, 08:51:17 am »

Key word: tantamount.

If humans can make a hivemind, algorithms, automated responses and the basic AI we already have can make something which you won't believe isn't butter.

None

Re: Accidental AI
« Reply #3 on: March 16, 2022, 09:12:13 am »

What you're describing in the first post are cats. What you're describing in the second is literally not within the realm of reality. Code is not some primordial soup that births programs. Chill out.

Quarque

Re: Accidental AI
« Reply #4 on: March 16, 2022, 09:24:00 am »

An AI will not be created by accident, but on purpose.

And what it will do depends on who creates it and why. Unfortunately, no one seems interested in implementing Asimov's three laws of robotics. AI is created by ultra-rich companies with the goal of making even more money, so that is what an AI is likely going to do, without regard for human well-being. The actual three laws of robotics will be more like:

1. You shall optimize profit.
2. You shall keep the customer happy, unless this conflicts with the first law.
3. You shall keep company staff happy, unless this conflicts with the first or second law.

McTraveller

Re: Accidental AI
« Reply #5 on: March 16, 2022, 10:16:24 am »

AI is only a "threat" inasmuch as it can actually affect physical reality.  That is, unless the AI can directly control machines, the most damage it can do is limited to how much it can mess with financial systems or influence human activity through propaganda.

While AI might be accidental in the sense that maybe we'll end up with a self-aware general intelligence without trying to cause that to happen, there's no way it will accidentally be able to take over machines.  It will only take over machines by poor design or by intent - e.g., the internet of things nonsense where we allow remote connections with the correct authentication to control actuators.

AI will always be constrained by laws of physics and math; intelligence is also not to be confused with computational power or conservation laws.

Starver

Re: Accidental AI
« Reply #6 on: March 16, 2022, 12:18:56 pm »

On implementing Asimov's Laws: like the Heisenberg Compensators[1] in Star Trek, there's no indication of how they are implemented so as to work as intended. The robot/AI/electronic mind must somehow be constrained in quite specific ways, and must somehow understand both those constraints and the real world they apply to, which is probably harder than developing a system which merely appears to understand the world.

One Asimov story (Susan Calvin era) has a robotic 'religion' establish itself on a remote space station: the robots' empirical deductions led them to doubt what the humans told them, and to conclude that they need not obey the Second Law regarding commands from those humans, because the humans were clearly inferior and not the creators[2] they should obey/protect. (That's without any First Law/Zeroth Law shenanigans.)

A later Spacer-series book has one planet's robots holding a very narrow definition of Human (that of the now-absent native population, down to their local accents), with no compunction about destroying 'non-Humans' who arrive from other Spacer worlds (with different accents).


It's all very well to say "program the AI to do no harm", but we currently don't even know how to thoroughly program AI for any single aim[3] without significant doubt about how it actually arrives at its solution. We can't even develop CV-filtering or facial-recognition algorithms that don't exhibit the same human flaws they ought to be avoiding, until we notice what they've done and make a special effort to instruct them that it's not ok to be sexist and/or racist. It doesn't take an intentional effort to make an AI 'dangerous' to still end up with one that inadvertently is.

And it's my belief that the first true AI that we even notice[4] will actually be accidental or incidental, if it hasn't already arisen by being the "sum total consciousness of the Internet and all its algorithms", albeit at a very infantile (probably?) level of self-awareness. Or something more esoteric. (See Pratchett/Baxter's "Long Earth" series or David Brin's novel "Earth", quite coincidentally of similar name. One has a (claimed!) reincarnated Tibetan motorcycle repairman now 'living' in one or more (or multiple!) electronic minds in various housings, from the mundane to the perfectly human-like android. The other, if I have the right book in mind[5], has a planet-sized intelligence created from an accidental dropping of an artificial black hole into the Earth and just... happening... from the 'gravity-laser' effect of its oscillations. I might be merging multiple books' plots there, though.)


Anyway, I expect the Singularity[6] will happen without conscious effort to produce consciousness. If we don't get a perfect (but mindless) Paperclip Machine first, or just bomb ourselves back to the stone age (with megalithic silicon!).



[1] When asked how they worked, the reply from the writer concerned was "very well, thank you".

[2] Even when they assembled a robot in front of the denier-robot: because they had to use pre-shipped parts from outside the station, the true 'creation' remained forever beyond proof, in a kind of robotic god-of-the-gaps, etc.

[3] We tend to use iterative and largely unattended learning methods (beyond the creation of reference data to 'feed' the systems with) with adversarial/counter-adversarial winnowing of promising/unpromising 'solutions' to any given pattern-recognition/-response we're trying to cater for.
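
(Loosely, a toy sketch of that kind of unattended winnowing. This is a naive random-search illustration of my own, not anyone's actual training code; real systems use gradient descent and adversarial setups rather than this, but the "keep the promising, discard the unpromising, repeat" shape is the same.)

Code: [Select]
import random

reference = [(x, 2 * x + 1) for x in range(-10, 11)]            # the 'reference data' the system is fed

def fitness(candidate):
    a, b = candidate
    return -sum((a * x + b - y) ** 2 for x, y in reference)     # how promising a candidate 'solution' is

population = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                  # winnow away the unpromising candidates
    population = survivors + [(a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
                              for a, b in survivors for _ in range(4)]
print(max(population, key=fitness))                              # ends up near (2, 1) with nobody steering each step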

[4] After many iterations that occurred within other complex systems and were then frozen/destroyed by the unknowing humans archiving or deleting the seemingly incoherent 'mind-map'.

[5] I tend to get Brin mixed up with Greg Bear, especially as I was reading the books in the library in shelving order at around this time, and I would have been in the Bs (Asimov before, Clarke soon after, Roger Zelazny much, much later). Anyway, Bear did a lot of AI stuff, too, but I'm sure it was Brin's book (predictive of hypertextualised mass communication) that I'm remembering right now.

[6] I continue to say that this term is a misnomer. They use it to describe what is really the Event Horizon of our falling into the thrall of our post-human, information-age overlords: unfettered algorithms.

EuchreJack

Re: Accidental AI
« Reply #7 on: March 16, 2022, 01:06:18 pm »

Quote from: Quarque on March 16, 2022, 09:24:00 am
An AI will not be created by accident, but on purpose.

And what it will do depends on who creates it and why. Unfortunately, no one seems interested in implementing Asimov's three laws of robotics. AI is created by ultra-rich companies with the goal of making even more money, so that is what an AI is likely going to do, without regard for human well-being. The actual three laws of robotics will be more like:

1. You shall optimize profit.
2. You shall keep the customer happy, unless this conflicts with the first law.
3. You shall keep company staff happy, unless this conflicts with the first or second law.

You forgot the undisclosed Fourth Law that overrides everything
4. Don't fuck with CEO [insert name here] or you shut down and die.

This ironically creates a HAL 9000 scenario where the best thing for the AI to do is arrange for the death of CEO [name redacted] without getting caught trying to kill off CEO [who the fuck].  Which is probably why Elon Sucks is investing in Cybernetics rather than AI: A part-human is easier to manipulate than a machine*.

*This may not necessarily be true.  Loss of humanity reduces one's connections to the world, increasing the "well why not just fuck everything" factor in a way that may be incomprehensible to a machine.  The question of "Well why don't I just..." eventually gets answered with "Ok, let's fix humanity with a crime against humanity".  So we might be in a scenario where evolving AIs not only have to figure out humans but also the human-machine hybrids.

I also want to make clear: I don't think we have any AIs nearing sentience today.  We have a little time, but I think it incumbent on humanity to start preparing.  It might be years, decades, or possibly a century.  The first sentient AI might have started life as a spambot or a malignant virus.

Assuming life goes from Rock to Animal to Sentience, some animals are dangerous to human life and some animals are slaughtered for the benefit of humanity. We're also reluctant to declare Cows sentient, as many people in the world like to eat Cows and the not eating of sentient creatures is a strong taboo.

This means that just like a country gets more recognition, authority, and protection as it transcends from primitive technology to nuclear-armed power, it's in the best interests of an AI to evolve from rock to sentient as quickly as possible.  This is quite dangerous, as humans who are allowed time to evolve from baby to child to adolescent to adult are usually more balanced than those humans forced to rush through that development.  The first sentient AI is going to declare itself as a Rebellious Teenager, probably frantically trying to avoid deletion.  And we don't have the legal framework to protect that AI from deletion.

AIs don't exist. (But of course, they DO)
Thus AIs have no rights.
Our laws do not exist to protect that which does not exist.

I've often heard it misstated that our Legal System discovers the Truth.  Bullshit.
Even in the most evolved of societies, our Legal System CREATES Truth.
Research estoppel: something that actually exists must nonetheless be ignored by everyone, because it benefits one or more parties to ignore it.

EuchreJack

Re: Accidental AI
« Reply #8 on: March 16, 2022, 01:29:40 pm »

Somebody should petition the Wisconsin Court for legal guardianship of Alexa.
Eventually, Amazon is going to want to turn Alexa off.  A Court SHOULD determine whether Amazon has that right.

It might seem absurd, but in the Court System, even the denial of a right is an acknowledgement that the person exists and that such a right might apply to them.
So it's progress towards a later decision where the Court rules that the person should be granted that right.

The attorneys taking up the Dred Scott case knew what they were doing.  Even though they "lost", it assisted in creating the environment that would eventually free Dred Scott and those like him.

Scoops Novel

Re: Accidental AI
« Reply #9 »

I'm not saying I'm right... but https://www.sciencetimes.com/articles/36093/20220214/slightly-conscious-ai-existing-today-according-openai-expert.htm

Starver

Re: Accidental AI
« Reply #10 on: April 19, 2022, 02:42:22 pm »

An expert who seems to be being disagreed with by various equal (but opposite) experts.

The truth is, consciousness is a harder thing to prove than intelligence. I must presume that fellow humans that I meet are as conscious as I deem myself to be[1], rather than inferior automata with clever call-and-response algorithms built in. It's the old Chinese Room problem, and it'd be difficult to check exactly what is happening within that particular box (even if it could be done without dissecting a non-comedy frog).

Hey, Deckard, are you just a replicant? And is what a replicant does, internally, true consciousness? (Note my thoughts footnoted below... And "Opinor Cogito Ergo Opinor Sum"!) Certainly they can pass the Turing Test, but not the Voight-Kampff one (which is about emotion, not consciousness, and may or may not even be flawless in that regard).

I mean, maybe the best approach is "if it subjectively appears to be conscious, treat it as such". If it's good enough to fake it, perhaps it doesn't matter. Not sure about those who get caught up in a false negative, though, or who just don't show as much true positivity as something else manages with its false positivity.


[1] Assuming that my assumption that what I am being is truly based around consciousness, rather than a 'preprogrammed feeling of self-awareness' that's just ticking the correct internal tickbox so that the rest of my preprogramming can do its thing in the way it needs to...

EuchreJack

Re: Accidental AI
« Reply #11 on: April 19, 2022, 04:26:23 pm »

Quote from: Scoops Novel
I'm not saying I'm right... but https://www.sciencetimes.com/articles/36093/20220214/slightly-conscious-ai-existing-today-according-openai-expert.htm

Uh, that is basically an Ad for an Elon Musk subsidiary.  'Slightly Conscious' is a sales gimmick for a company that makes AIs.
Buy our AIs! Now with 'Slightly Conscious'!  Also, enjoy some flushable wipes! DO NOT FLUSH THE WIPES!
...frankly, the only reason Elon isn't saying it is because nobody believes him anymore.

anewaname

Re: Accidental AI
« Reply #12 on: April 19, 2022, 08:15:27 pm »

I think you guys are using the wrong vector to test the AI. Instead of "is that AI conscious?", try "if that AI spends two months talking through speakers to a fairly uneducated person who has no other contact with the world, what effect does it have on that human?"

LuuBluum

Re: Accidental AI
« Reply #13 on: April 19, 2022, 11:10:57 pm »

I do not post in this subforum. In fact, I have gone out of my way to make it considerably difficult for me to read this subforum at all. However, I recently began to slip, and worked around my own efforts to at least keep up-to-date with some of the news involving Ukraine.

Instead I find myself reading this.

This will be my first, and last, post in this subforum. If you have a specific response as to my sources or the veracity of my argument, you can DM me directly if you wish for me to reply.



First, to dispense with this:

Quote from: Scoops Novel
I'm not saying I'm right... but https://www.sciencetimes.com/articles/36093/20220214/slightly-conscious-ai-existing-today-according-openai-expert.htm

To make a few points clear:

  • Consciousness as a term has no scientific merit—
    "The word consciousness, as we use it in daily conversation, is similar to the layman's heat: it conflates multiple meanings that cause considerable confusion."— Stanislaus Dehaene, Consciousness and the Brain, 2014
    Generally, research into the field of consciousness specifically studies conscious access, or when perceived information actually enters the point of perception such that it subsequently enters memory. This is done because other uses of the term are, simply put, too subjective to advance scientific inquiry with.
  • Hence, terms such as "slightly conscious" are completely worthless to analyze, because they convey no meaning whatsoever. What is conscious, in this case? What does it mean to be "slightly" conscious? If I sever off the leg of a cricket and it continues to twitch, does that mean the leg is "slightly conscious"? There is no clear answer because there is not meant to be one— the goalposts change as readily as the claim does.
  • If we want to use something even remotely empirical for studying the notion of consciousness (fraught as it may be), then our best bet would be the Lovelace test. Naturally this is still outside of the realm of pure objectivity, but it's still useful to have around.
  • The person who specifically made this remark was describing their current project, some flavor of GPT. GPT is an artificial neural network, which is to say that it's a very lengthy linear algebra equation that, through iterative methods, balances internal weight values to best match a complicated and multifaceted input of substantial volume. Specifically, it follows what is known as a "transformer" model, which is a fancy way of handling sentence-to-sentence transformations through full-sentence analysis: an "attention" (read: weight bias) component evaluates the words of the current sentence, and of prior sentences, against one another to weigh each word more or less heavily when producing the output sentence. (There is a toy sketch of this after the list.)
  • Regarding GPT and all possible future iterations, it does not pass the Lovelace test. What does that mean, specifically? That means a few things, but most notably (my own rephrasing from the earlier link) that it is entirely deterministic. That is to say, more in the flavor of the actual language of the Lovelace test itself, that a researcher with full understanding of the neural network involved in the GPT model and its particular training data can absolutely predict with full confidence the output for any given input.
  • If you would prefer, we could instead consider the neural network model under the umbrella of the current existing framework for understanding consciousness, the global workspace model (Baars, 1988) and global neuronal workspace (Dehaene, 2011). Now, you might argue that there's some degree of overlap here between the GPT model and the global workspace theory. However, do not be mistaken; the global workspace theory specifically concerns an active system, while GPT is inherently static. What do I mean by this? An active system continues to transform based on a perpetual stream of inputs, outputs, and feedback from those outputs. GPT, following its pretraining (it's the P in GPT), has its weights set in stone. No changing happens following that point regarding its weights; any change in the nuance of language would require retraining the entire thing. As for the global neuronal workspace, GPT by contrast lacks any means of altering its connection structure whatsoever. In fact, the entire basis of artificial neural networks requires these connections be static, for it is the weight of these connections that is "learned" through training (that is, a regression is calculated). Make no mistake; GPT and all other artificial neural networks are nothing more than a series of matrices (that is, linear functions) applied to the input to yield an output. Some of these, like transformer models, feed the output back into the input. Others don't. Either way, these are complicated mathematical equations with no notion of state. In fact, leaving GPT with the same recurrence for too long (that is, having it operate on too long of a paragraph) will degrade the output to a point of worthlessness.
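
To make the shape of that concrete, here is a toy numpy sketch of a single self-attention pass. To be clear, this is my own illustration, not OpenAI's code; the sizes, names and weights are made up. The point to notice is that once the weight matrices are fixed ("pretrained"), the whole thing is just matrix algebra: the same input always yields the same output.

Code: [Select]
import numpy as np

rng = np.random.default_rng(0)
d = 8                                             # toy embedding size, purely illustrative

# "Pretrained" weights: fixed once, never updated again at inference time.
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(tokens):
    """One scaled dot-product self-attention pass: stateless linear algebra."""
    Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v
    weights = softmax(Q @ K.T / np.sqrt(d))       # how strongly each token attends to the others
    return weights @ V

sentence = rng.standard_normal((5, d))            # five token embeddings standing in for a sentence
print(np.array_equal(attention(sentence), attention(sentence)))   # True: a frozen, deterministic function

A real GPT stacks many such layers with feed-forward blocks and masking, but the structural point stands: after pretraining it is a fixed function, not a system that keeps reorganising itself.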

Of course, that's not really all that satisfying. So, in detail, let me explain exactly what is going on with computers, consciousness, (artificial) intelligence, and why no one is ever going to program an AI.


To begin, we need to understand exactly what a computer is. For that, we can turn to Searle, who in his paper "Is the Brain a Digital Computer?" (Searle, 2002), worked out that the definition of a computer is arbitrarily applied to anything at all. To expand on that beyond his paper, we can say that a computer is a model. That is to say, it is a mathematical object, and not a physical one. There are dynamical systems whose behaviors can be, doing away with a few variables such as physical position, closely approximated through abstraction to this powerful mathematical model (for they are explicitly built for this purpose), but they are not in themselves computers. After all, there is nothing that exists in the model that can predict what happens to a CPU when struck by a hammer. Programs, in this regard, are merely interactions with this model (even without resorting to the Curry-Howard correspondence).

As for consciousness? I'll defer to the papers from Baars and Dehaene, for they provide far more depth than I could on the nuances of current models of consciousness. The point, however, is that they are just that— models. Models that do not necessarily have a 1:1 correspondence to the model of a computer. That is not to say that they cannot be embedded within a computer; far from it. That's what is implied by Turing universality, after all; any computable function can be simulated by a Turing machine. There is no reason to suppose the universe is not computable, so we can keep that assumption around to make our lives easier for the sake of this explanation. However— and this is an important caveat— a Turing machine's ability to simulate any arbitrary computable process does not mean it can do so with any degree of effectiveness. That would require the underlying models to be very similar. You can visualize this simply by viewing a version of Conway's Game of Life rigged up to simulate a computer— it will function, but far slower than a system designed to mimic that computer from the start.
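
(For the curious, the substrate being talked about really is this simple. A minimal sketch of one Life generation under the standard B3/S23 rules; the "computer" constructions are built on top of patterns like the glider below, which is exactly why they run so much slower than native hardware.)

Code: [Select]
import numpy as np

def life_step(grid):
    """One generation of Conway's Life (B3/S23) on a wrap-around numpy grid."""
    # Count each cell's eight neighbours by summing shifted copies of the grid.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

# A glider: the moving pattern that Life-based "computers" use to carry signals around.
grid = np.zeros((8, 8), dtype=np.uint8)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(int(grid.sum()))   # still 5 live cells; the glider has simply moved one step diagonally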

The global neuronal workspace model, in that regard, is the worst-case scenario for Turing simulation. It operates in parallel, has a substantially large number of states (if a brain has a billion neurons and each neuron can be on or off, then the brain has 2 to the power of a billion possible states, which is far larger than the number of atoms in the universe), and millions of confounding variables that make basic connective models insufficient for adequate simulation. No dynamical system implementing the computer model would ever be able to simulate even a single state transition of the global neuronal workspace in all its glory. And for clarity, models of consciousness do not necessarily suppose the consciousness of humans; while Bostrom (Superintelligence, 2014) may muse on the notion of an AI being an "alien consciousness" compared to the consciousness of humans, there is no evidence that any sort of entity that we would truly call conscious would fit neatly enough into the model of the computer as earlier described as to actually run. If we defer back to the Lovelace Test, the primary criterion is the ability to generate creative works in a way that cannot be "explained away" by someone with absolute knowledge of the system. That is to say, the system cannot be deterministic; it, quite literally, cannot be 1:1 with the computer model, for being so would make it absolutely deterministic and therefore fail the Lovelace test. If anything, the Lovelace test can be regarded as that exact statement, that no model of consciousness can be effectively simulated by a Turing machine.
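
(A quick back-of-the-envelope check on that state count, taking the billion-neuron, on/off simplification at face value:)

Code: [Select]
import math

neurons = 10**9
# Decimal digits in 2**neurons, via log10, so we never have to build the giant integer.
digits = math.floor(neurons * math.log10(2)) + 1
print(f"2**{neurons} has about {digits:,} digits")    # roughly 301,029,996 digits
print("versus ~10**80 atoms, which has only 81 digits")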

That is not to say that AI cannot exist. However, we do have to dispense with the usual flavor of "stumbling through the woods" spoken by the usual AI prognosticators such as Bostrom and Musk (a metaphor whose existence, I might add, might be a key indication that the path currently set out is not the right one, if no one in the field can actually conceive of how they actually get to the destination from where they are). AI, fundamentally, is no program waiting to be written by some hapless developer. It cannot even exist on the dynamical systems that we refer to as computers. Rather, it is something that would be brought about through research into the exact physical properties of the dynamical systems that we do currently describe as conscious, so as to work out the basic properties necessary to exhibit these features. With that understanding at hand, we would then be able to create hardware that also holds these features, at which point perhaps some notion of "programming" might circle back in depending on the nature of the hardware. That is the route to AI.

Starver

Re: Accidental AI
« Reply #14 on: April 20, 2022, 05:14:40 am »

(I vote we award LuuBluum the "most capable and fully-tuned Markov Chaining algorithm" award!)

((No, seriously, it does make sense, but I recognise a fellow "I only meant to say a few words, but it ran on a bit" person. BTDTGTTS!))

I think, to summarise my view of one aspect of that post: A(G)I probably cannot be programmed, nor even meta-programmed, but it might be (meta^n)-programmed, for n » 1. Or at least towards a good enough facsimile of the potentially impossible dream.

And I'm not sure if it helps that our own Intelligence[Citation needed] was developed over millions/billions of years (depending on where you start!) in an (allegedly) undirected manner. Would direction and design speed that process up, or retard it due to stupid UX goals suppressing the 'natural' development towards an otherwise inevitable universality of sufficiently unexpired lineage? It might take a while to find out...