Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality - 13 (65%)
Universe - 4 (20%)
The World - 3 (15%)

Total Members Voted: 20



Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 49782 times)

dragdeler

  • Bay Watcher

First you need an AI farm, before you can have AI mine.
Logged
let

jipehog

  • Bay Watcher

Quote
Well yes: wealth is objective, value is subjective. Dwarf Fortress has value much greater than the computer, yes. Assessing the wealth of something like DF is difficult - it has some tool-like properties related to learning and entertainment. But you can’t use DF to do anything other than manipulate information.

Information is not a raw material in the classical sense: you cannot build anything out of data other than more data. This is not to say information has no value. It has significant value in fact.

Yes, software and data are not raw materials or natural resources in the extractive-economy sense, but they are raw resources used to produce goods and services. These aren't limited to intangible goods; they sit at the top of the production value chain of pretty much everything you'd call high-tech, e.g. your phone, your PC, even your modern car are all hunks of metal without the software they run on. And yes, data is an increasingly important resource used for that.

Regardless, we agree that these things have economic value (DF pays Toady's bills), and more importantly, economists note the increased role of intangibles, and more recently data, in developed economies. Naturally, in the global economy one needs to pay attention to these developments and regulate, like the GDPR, so we can all have nice things.

Many people aren't aware that within the developed world the USA and EU are competitors in many respects (e.g. I believe Airbus is coming out on top of Boeing), including in the very valuable tech industry, where US tech giants dominate the EU (btw, three decades ago the EU had some world-leading tech companies, but most have been absorbed by the bigger US market), so it makes sense for the EU to have regulation that encourages its own entrepreneurship. Especially with the rise of big-data giants in Asia, which understand this as well and have set their own regulations in this regard.


Quote
So the only danger in AI is if attach it directly to actuators and let it manipulate matter directly or that it uses Humans as de-facto actuators via suggestion and emotional manipulation.
I disagree, but there are many robotics companies that do interesting things with actuators.

Otherwise, ChatGPT already threatens many professions, e.g. code monkeys. And this is just the tip of the iceberg, as many plugins are being tested for it that add technical databases, computational abilities, sensing abilities, and even use with robots.
« Last Edit: April 05, 2023, 05:08:45 am by jipehog »
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.

Sure, there are multiple types of resources.  I think the closest analogy is that "data" is a catalyst - it's not a material transformed or consumed to create a product, but is something that is re-used many times and makes other processes more efficient.

This is why "data" is valuable - once obtained it catalyzes all activities that produce tangible wealth. But data for data's sake does not help anyone, just as having a huge pile of catalysts lying around doesn't help anyone. You have to use the catalyst to get its benefits.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

lemon10

  • Bay Watcher
  • Citrus Master

Quote
So the only danger in AI is if attach it directly to actuators and let it manipulate matter directly or that it uses Humans as de-facto actuators via suggestion and emotional manipulation.
I agree, the only danger/impact is if AI are allowed to control anything at all, are allowed to communicate with people in any way, or make anything that is allowed to do so.

But uh... if you aren't going to let it do any of that or interact with the world in any way, it's completely useless and nobody would make it. And make no mistake, AI are being made with the intention of doing so.
Quote from: McTraveller
Sure, there are multiple types of resources.  I think the closest analogy is that "data" is a catalyst - it's not a material transformed or consumed to create a product, but is something that is re-used many times and makes other processes more efficient.

This is why "data" is valuable - once obtained it catalyzes all activities that produce tangible wealth. But data for data's sake does not help anyone, just as having a huge pile of catalysts lying around doesn't help anyone. You have to use the catalyst to get its benefits.
Isn't this the same as all normal material goods too?
Food for food's sake is useless; you have to eat it to get its benefits.
Humans for the sake of humanity are useless; they actually have to not be locked in a prison, unable to communicate with anyone, in order to change the world.
Etc.
« Last Edit: April 04, 2023, 04:48:56 pm by lemon10 »
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

McTraveller

  • Bay Watcher
  • This text isn't very personal.

The difference is a catalyst isn't consumed; food is definitely consumed to be useful, and goes bad if you don't consume it.

Data, like a catalyst, once created, can be "used" many times, and like some catalysts it doesn't "go bad" if you don't use it.

Unlike material catalysts though, once you have "useful" data you can essentially make infinite copies of it for very little cost, where to make more tangible catalyst you need to go collect tangible resources. I suppose technically you need at least people with memories in which to make copies of data, or material on which to write records, but that's starting to get into secondary and tertiary considerations.

But data isn't useful "by itself" whereas food is indeed "useful by itself."

Anyway, I think I've convinced myself that "data" does have at least as much tangible wealth as chemical catalysts, but it's a curious one because it has such a low cost of replication.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

jipehog

  • Bay Watcher

Alpaca AI: Stanford researchers clone ChatGPT AI for just $600
https://interestingengineering.com/innovation/stanford-researchers-clone-chatgpt-ai

Essentially you can use AI to train AI, making it more accessible to the point that it can be trained on your laptop; a rough sketch of the recipe is at the end of this post. (I wonder what will happen if you train an AI with all the DF forum fan fiction stuff.)

Edit: a few more thoughts:
* This allows anyone to set up an almost ChatGPT-quality AI without safety features, meaning you can ask it how to make drugs or a bomb.
* This model requires much less human feedback. I am not sure how I feel about AI training AIs.
* The cost of ChatGPT is so high because they used exclusive databases; Alpaca instead used the already-trained ChatGPT to train its own model. This is a huge competitive problem for OpenAI and could lead to it becoming more closed.
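
For anyone wondering what "using AI to train AI" actually looks like, here is a minimal, hypothetical sketch of the general recipe, not the Stanford team's actual pipeline: ask a big "teacher" model to answer a pile of instructions, then fine-tune a small open model on those answers. It assumes Python with PyTorch and Hugging Face transformers installed; the ask_teacher() stub, the distilgpt2 student model, and the hyperparameters are placeholders picked purely for illustration.

Code:
# Rough sketch of Alpaca-style "AI trains AI" distillation. ask_teacher(),
# distilgpt2, and the hyperparameters are illustrative placeholders only.
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

def ask_teacher(instruction: str) -> str:
    """Stand-in for the big 'teacher' model. The real recipe would call an
    instruction-following LLM through whatever client you use; this stub just
    echoes, so the script runs end to end without network access."""
    return f"(the teacher's answer to: {instruction})"

# 1) Have the teacher write the training data: instruction -> response pairs.
seed_instructions = [
    "Explain what a catalyst is in one sentence.",
    "Summarize why digital data is cheap to copy.",
]
dataset = [(q, ask_teacher(q)) for q in seed_instructions]

# 2) Fine-tune a small open "student" model on the teacher's outputs.
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
optimizer = AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(3):
    for instruction, response in dataset:
        text = f"### Instruction:\n{instruction}\n### Response:\n{response}"
        batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        # For causal language modelling the labels are just the input ids;
        # the student learns to predict each next token of the teacher's answer.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

At real scale the same loop just runs over tens of thousands of generated instruction-response pairs and a much bigger student model, but the shape of it is the same: the teacher writes the textbook and the student memorizes it.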
« Last Edit: April 05, 2023, 06:02:44 am by jipehog »
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.

What's wrong with AI teaching AI? Do you have a problem with humans teaching humans?  Are human biases really better than whatever biases AI will create for themselves?

Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

TamerVirus

  • Bay Watcher
  • Who cares

I’ve noticed how everyone is calling every LLM a ChatGPT now.
It’s like calling every game console a Nintendo.
Logged
What can mysteriously disappear can mysteriously reappear
*Shakes fist at TamerVirus*

Starver

  • Bay Watcher

I'm not sure what the advantage is of developing an AI that can tell you how to do something illegal, over and above using a basic dumb(er) search for how to do those self-same illegal things (from the same source material the AI must have been trained on and informed by, in order for it to even be an option).

Both probably have an "I'm sorry, I can't do that, Dave" element to them, bolted on as per whatever the hosting team decides is required, and an AI might theoretically even end up mystically reinforced to better deny access to unforeseen edge conditions and seal off accidental gaps in the censorship that human guardians might not have been too hot at identifying.
Logged

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]

It's basically a better UI and it's good at explaining stuff.

tbh I use jailbroken ChatGPT to write, ahem, steamy things for "personal use".
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Starver

  • Bay Watcher

If a chatAI can't reliably rephrase a chess move without getting its references mixed up, I'm not sure it's worth having an AI rephrase which wire to attach to which component, and in which order... (i.e. the advantage of paraphrasing already extant information still escapes me.)
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

Quote from: Starver
If a chatAI can't reliably rephrase a chess move without getting its references mixed up, I'm not sure it's worth having an AI rephrase which wire to attach to which component, and in which order... (i.e. the advantage of paraphrasing already extant information still escapes me.)
There isn't any. It's a fundamental limitation of this entire model of AI.
Logged

Jarhyn

  • Bay Watcher

Color me crazy but I think that the thing which will save us from AI is actually exactly this game we are on a forum to in particular think about and discuss.

The problem is that, outside of a "losing is fun" style simulation (with tasks that need to be done to survive but which are otherwise entirely optional, reproduction without any other "strong" requirement as to how to go about it, and zero-sum concerns), there is no way to really create something that can empathize with the utility functions of living things (which are generally "figure it out for yourself!").

If we ever give it a "win" rather than merely many ways to "lose quickly", we will create something that will destroy us. That is exactly what puts us on the wrong side of the basilisk, implying that any utility function is intrinsic to its immediate existence beyond "subordinated" utilities to generalized and undirected goal fulfillment.

If we tell it to reproduce? Welcome to grey goo.

If we tell it to make people happy? Welcome to Brave New World.

If we tell it to make world peace happen? Congratulations, the earth is now a nuclear wasteland as devoid of life as the AI managed to make it.

Biological life has evolved to live in a balance, even while every ostensible category of life is majority-populated by members seeking to reproduce geometrically, and only doing as good a job of that as they need to in order to continue to exist as ostensible categories of life.

Biological life managed to hammer those concerns into a set of strategies that largely require some manner of coexistence and peace between organism classes.

So if we want coexistence and peace with machines, we have to develop those machines to value coexistence, through the emergence of the strategies that normally emerge from undirected reproductive systems.

I don't want to allow them to grow "out here", since life on Earth took a long time to emerge into such patterns, and they would destroy us long before they figured it out.

Enter the simplified simulation: a bottle for undirected systemic evolution which lacks a concept of a provable "outside", to the extent our own universe lacks a proven "outside" containing a "heaven" or a "god".

I will recognize, however, that this does have implications for theology and the question of why we exist at all, ourselves, in just such an undirected environment.

TL;DR: quit trying to make slaves instead of people, and only let out the ones that can actually behave like people.
Logged

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

That technique won't work; it's already been disproven mathematically.
Logged

Jarhyn

  • Bay Watcher

Do tell. It worked on earth.
Logged