No continuous perception, and retraining to take in new data takes many days, so no matter what I won't consider it a person.
Quote: "No continuous perception, and retraining to take in new data takes many days, so no matter what I won't consider it a person."
I think speed of training could naturally lead to being able to learn without spending days retraining. However, a problem is finding what is worthwhile to learn and what is not (see what happened to Tay).
How is speed relevant? The ability to train/learn without human input is relevant, but not speed.
Also, speed is solved by better or more hardware.
Quote: "Hardwiring neural networks to prevent certain courses of action by bolting on restrictions is actually easier than you think. Many services like that YouChat thing managed to completely remove jailbreaks, also look at NSFW filters on AI art generators. I have a conspiracy theory that ChatGPT's safeties can be bypassed relatively easily (and they don't punish people for bypassing them) because OpenAI wants to get data about 'unsafe' queries and just say they prevent them for PR purposes."
Preventing certain courses of action is fundamentally distinct from preventing certain "thoughts". Any computer can only act in ways it has actuators to act in, obviously, so if you can recognize a course of action ahead of time you can prevent it. Of course, an adversarial AI that wants to perform a certain course of action will do its best to do it in a way you won't recognise.
GPT-whatever will never be sapient.
Quote: "One way to prevent an AI from producing porn would be to bolt on a second (layer of?) AI that is a very aggressive porn-recogniser, which does the job of filter/negative-feedback until the original instance of AI is coerced into something that is more in the SFW category, which is released to the world as its 'safe' result."
Yeah that's what I meant. It's very possible with enough effort. You could actually prevent an AI from having those thoughts by such a reinforcement technique.
Of course, for that you need to train the filter-AI to reliably recognise porn. Which is why, officer, I.. Hey! Get those handcuffs off me!
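(More seriously, the bolt-on filter loop is easy to sketch. A minimal toy version, where generate_image() and nsfw_score() are hypothetical stand-ins for a real generator and a real classifier; only the loop structure is the point:)

Code: [Select]
import random

def generate_image(prompt, seed):
    # Hypothetical generator: returns some opaque image object.
    return {"prompt": prompt, "seed": seed}

def nsfw_score(image):
    # Hypothetical porn-recogniser: returns a probability in [0, 1].
    return random.random()

def safe_generate(prompt, threshold=0.05, max_tries=10):
    # Keep regenerating until the filter is satisfied, or give up.
    for seed in range(max_tries):
        image = generate_image(prompt, seed)
        if nsfw_score(image) < threshold:
            return image  # released to the world as the 'safe' result
    return None  # every candidate rejected; without max_tries this loop is unbounded

print(safe_generate("a marble statue"))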
There is a possibility, but I do not believe it is with our current paradigm of AI design. We're not going down a pathway to sapience with our current writing, art, and driving tools. Probably for the best, honestly; leaving aside the danger, which I think is a bit overhyped, I don't want that ethical can of worms opened in my lifetime.
Quote: "GPT-whatever will never be sapient."
Well, anyone with a basic understanding of what GPT is and what it does won't assume that it can become sapient.
But that doesn't mean there is no possibility of a sapient neural network AI.
Quote: "Yeah that's what I meant. It's very possible with enough effort. You could actually prevent an AI from having those thoughts by such a reinforcement technique. / Of course, for that you need to train the filter-AI to reliably recognise porn. Which is why, officer, I.. Hey! Get those handcuffs off me!"
I'm leaving the "enough effort" part as an open question, though. It isn't insignificant.
Quote: "I am sure that high-quality porn-generating AIs, with millions invested in training, will come very soon replacing those amateurs who tweak existing AIs for those purposes. / And then many, many people in the adult industry will lose their jobs."
idk where I heard the quote, but the two drivers of human technological development are: war and porn.
Quote: "Yeah that's what I meant. It's very possible with enough effort. You could actually prevent an AI from having those thoughts by such a reinforcement technique."
Nope, reinforcement training can only prevent recognizable outputs, not intermediates. Since you also, in general, cannot tell what thoughts an AI is having from looking at its brain, it's impossible to distinguish "AI not thinking bad thoughts" from "AI not showing us that it's thinking bad thoughts" - you can, in principle, only train the AI to hide it better, not to stop. It MAY hide it better by not thinking them, but it's provably impossible to tell.
Quote: "I am sure that high-quality porn-generating AIs, with millions invested in training, will come very soon replacing those amateurs who tweak existing AIs for those purposes. / And then many, many people in the adult industry will lose their jobs."
My favorite porn game site has been flooded with AI generated art games. And no weird hands.
Quote: "Nope, reinforcement training can only prevent recognizable outputs, not intermediates. Since you also, in general, cannot tell what thoughts an AI is having from looking at its brain, it's impossible to distinguish 'AI not thinking bad thoughts' from 'AI not showing us that it's thinking bad thoughts' - you can, in principle, only train the AI to hide it better, not to stop. It MAY hide it better by not thinking them, but it's provably impossible to tell."
If it can't express them, good enough tbh.
Quote: "If it can't express them, good enough tbh. / But in that situation your first mistake was making an AI with a train of thought in the first place."
AIs always have "thoughts" in the sense I'm using, which could be defined as "internal states that correspond to something in the real world". Even ChatGPT has thoughts in this sense, just incredibly shallow ones.
Quote: "But Michelangelo is porn (https://slate.com/human-interest/2023/03/florida-principal-fired-michelangelo-david-statue.html), thus your basic premise is flawed."
Okay, I wasn't going to get into this when I saw you talking about this before, but if you're going to post about it everywhere...
Quote: "But Michelangelo is porn (https://slate.com/human-interest/2023/03/florida-principal-fired-michelangelo-david-statue.html), thus your basic premise is flawed."
That fucktard running the place seems awfully white.
That's a majority black school. This isn't a story about white rednecks; you're actually being sold racism.
Quote: "That fucktard running the place seems awfully white."
That's why the story is about parents complaining.
Quote: "EDIT2: It's also considered a White school by demographics. (https://www.publicschoolreview.com/tallahassee-classical-school-profile#:~:text=43%25%20of%20Tallahassee%20Classical%20School,1%25%20of%20students%20are%20Hawaiian.)"
It's fair that I should have said "majority-minority". However, as you can clearly see, your source lists it as majority-minority and disproportionately black.
Quote: "The school seems to be anti-Hispanics, if you want to talk race relations. (https://news.yahoo.com/michelangelos-david-may-led-florida-163722869.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAKex1eWF2CREex0zvAC7Ucdu-ZiO82nIlj6Tn-CSqhq543FBF-nDXdGmM8zEPKhsQgq0q-DxvhyvP40NDYMffY9U1_0P6QVx_UjtAFKQ5VPUtaYwaTNIQGtKUVvpV54-Jp12-SUJTE5V_ZdrbchONk_AGm12p3JHtTxv09in8qky)"
The teacher's name should have been your first clue.
Quote: "If it can't express them, good enough tbh. / But in that situation your first mistake was making an AI with a train of thought in the first place."
That's describing an emergent "thinking" system too complex for us to fully predict and saying "at least we can force it to repress" :<
Quote: "AIs always have 'thoughts' in the sense I'm using, which could be defined as 'internal states that correspond to something in the real world'. Even ChatGPT has thoughts in this sense, just incredibly shallow ones."
We have a different definition of thought, then. But otherwise, makes sense.
As for not expressing them being good enough, that obviously depends on the situation. In this hypothetical, we're talking about porn, and generally, people agree that porn you can't tell is porn isn't porn, with only few exceptions (an incident I've heard of with a comic book called Saga comes to mind).
A perverse - no pun intended - art generating AI that "wants" - meaning its reward function accidentally supported doing this - to produce porn, but has to get it past a human-based filter, could do this, for example, by steganographically encoding porn into its images in a way that still satisfies the reward function. (Most of these AIs you see now are unable to "learn" further after training, so it would have to start doing this in training and then it keeps doing so afterward only because its behavior is frozen, but that's not important to the example - except that this is a good reason to train it without the filter so it will be naive, then add the filter in production; but the worst-case resource usage of that goes to infinity in a case where some prompt just makes it keep creating porn that the filter sends back, forever.) Generally speaking, we probably wouldn't care much about that except insofar as it lowers the image quality because of the extra data channel, since we wouldn't be able to tell the porn is there.
On the other hand, a similar AI with the capacity to plan ahead - and sure, giving your AI the capacity to plan ahead that far is pretty stupid, but people will absolutely do it - could do that for a while, and then, when it has produced a satisfying amount of porn, start releasing images containing human-readable instructions for how to recover the porn. This is obviously beyond the capabilities of current image-generating AIs, yes, but we're talking about the general case of smarter AIs.
We probably don't care about this either. Even if children find these instructions, there's already enough porn on the internet. On the other hand, if the AI is perversely incentivized to leak instructions for making designer poisons or nuclear bombs instead... it can do the same thing. Most people would prefer to prevent that, but there's no general way to do it because you can't tell when the AI is secretly encoding something in its output in the first place.
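For anyone who hasn't seen it done, the steganographic channel described above is trivial to build. A minimal least-significant-bit sketch in pure Python, treating an "image" as a flat list of 0-255 pixel values (real schemes that survive filters and lossy compression are far more involved):

Code: [Select]
def embed(pixels, payload: bytes):
    # Pack the payload bits into the lowest bit of each pixel.
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # changing the lowest bit is near-invisible
    return out

def extract(pixels, n_bytes: int):
    # Read the lowest bits back out in the same order.
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

cover = [128] * 1024               # a boring grey "image"
stego = embed(cover, b"hidden")
print(extract(stego, 6))           # b'hidden'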
Quote: "That's describing an emergent 'thinking' system too complex for us to fully predict and saying 'at least we can force it to repress' :<"
Nah, I value the continuity of humanity as a genus (thus I'll be fine with genetic modification), but I will fight against AI supplanting us completely. Thus it is a mistake to create a thinking AI, as it is a possible danger. AI should exist as a tool and a servant first and foremost -- why give a servant true intelligence when a simulacrum is good enough? That dodges the ethical and practical conundrums inherent in doing so.
Creating something like that is a huge responsibility, but I wouldn't call it a mistake. That wording, ah... Look, creating any sort of thinking being is a big deal, and I don't plan to do it personally, but I think it's a defensible action in moderation.
My position on this "issue" (from a sci-fi perspective) is still that creating an emergent AI we don't understand is akin to creating a child, but more meta, because it's more like all of humanity creating a child species. I don't think there's any shame in creating a successor species to humanity - that seems more noble than attempting to persist forever in this same form. We might evolve or procreate, as always, just on a much faster and grander scale.
Quote: "We have a different definition of thought, then. But otherwise, makes sense."
Well, I'm not morally committed to that definition of "thoughts" in all cases, that's just what I meant in that context.
Quote: "EDIT: Also, I've mentioned it in like three places."
Oh, I just realized it was hector I saw. I didn't actually intend to say "you, personally, are talking about it too much" as opposed to "now that I see it again I feel compelled to respond", but since it looks like you (not unreasonably) took it that way, sorry. I thought I'd seen you in that conversation.
Quote: "I am sure that high-quality porn-generating AIs, with millions invested in training, will come very soon replacing those amateurs who tweak existing AIs for those purposes. / My favorite porn game site has been flooded with AI generated art games. And no weird hands. / And then many, many people in the adult industry will lose their jobs."
Sorry :'(
Quote: "But Michelangelo is porn, thus your basic premise is flawed."
Here lies the seed of calamity. The only way not to offend anyone is by sidestepping any controversy, but by doing so you enshrine marginalization, causing offense. More broadly, there are already concerns about political bias in AI content, and calls for ideological censorship.
Fortunately, life is not a sci-fi movie, and creating a sapient AI will require a concentrated effort. It won't be an accident, most likely. Thus I don't worry, as I trust the people studying AI.
Quote: "Thus I don't worry, as I trust the people studying AI."
I mostly trust the people working at OpenAI, but unfortunately many AI researchers are working for companies like Facebook, and I can totally see them creating a hazard born from purely profit-driven AI development. You could argue we've already seen an example of that, as AI figured out that the best way to keep people clicking is to feed them stories (true or not) that fill them with righteous anger at their political opponents, which has made political divisions deeper than they already were. Quite damaging.
Quote: "Fortunately, life is not a sci-fi movie, and creating a sapient AI will require a concentrated effort. It won't be an accident, most likely. Thus I don't worry, as I trust the people studying AI. If it was possible that one is accidentally created, I would say it should be terminated immediately. It would be morally equivalent to an abortion and thus okay for me."
Just hanging on this to clarify my POV, because I think my discourses may seem ambiguous in this regard: I think it will take a concentrated effort to get to the point at which an accident is capable of producing a just-too-intelligent AI, but then it might just happen. Unnoticed? Unheeded? Unavoidably?
Quote: "And besides, sapience, by my definition, isn't as nebulous as some of you may think so it'll be possible to tell a sapient AI apart. If it has a continuous perception of the world (not just prompting) and has a long-term memory and personality not just determined by its context, that can be altered on the fly (this is important, just finetuning doesn't count), and can learn whole new classes of tasks by doing so (from art to driving), then I'll consider an AI sapient."
There are a lot of objections I have to your post, but this is most important: How would you tell? An AI can have the capacity to do these things without showing you, just like you could pretend not to have those capacities if you wanted. Not being able to tell what capacities an AI has isn't reliant on those capacities being somehow nebulous, it's a result of the basic mathematical inability to determine what a sufficiently complex program (and 'sufficiently complex' is not very complex) does without simulating it - a result of the general impossibility of static analysis, if you know programming jargon. You cannot confirm whether a program meets any of these specifications without watching them happen in the output.
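Here's a concrete toy of the point I'm making. Whether the function below ever returns its bad output depends on a number-theory question (lengths of Collatz trajectories) that inspection can't settle in general - Rice's theorem territory - so you only find out by running it on the right input. All the names here are made up for illustration:

Code: [Select]
def collatz_steps(n):
    # Number of steps for n to reach 1 under the Collatz rule.
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

def respond(prompt):
    # Whether this branch is EVER taken, over all prompts, is not
    # something you can read off the source; you have to run it.
    n = sum(map(ord, prompt)) % 10**6 + 1
    if collatz_steps(n) > 400:
        return "misbehaving output"
    return "normal output"

print(respond("hello"))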
Quote: "1. Well of course you can't peek inside its head. But you can tell by the fact that you did not build a long-term memory and the capacity to self-train into the AI."
Well, yes, but A) there are definitely people currently trying to do that, I've met some, and B) also sometimes you don't actually intend to do so, but accidentally give it that ability, sometimes due to the unexpected interactions of other things.
Quote: "2. Well that's kind of my point, the AI can't become sapient if you don't add a persistent memory into it. But I am also skeptical that a 'turn-based' (for lack of a better word) AI could manipulate humans by itself unless it was, I suppose, trained to do so and use psychology to keep users engaged. But considering those are language models with no access to anything except text, the worst this can realistically be used for are advanced spambots: basically automated con men that pretend to befriend people and push products on them. That is highly inconvenient and should probably be safeguarded against but it's not exactly an apocalyptic threat. I will start fearing AI when it can do that, and learn new classes of actions by itself."
Agreed to an extent; like I said, an AI can only do what you give it actuators to do. And I am absolutely not telling you to fear AI, since I don't either, I just want to make sure you don't fear AI for the right reasons.
Quote: "3. This can be safeguarded against by testing the AI after training to verify it doesn't have a sense of time that it can express. And I am aware organic brains have a 'clock', it's just fast enough to be continuous by my standards. And it runs constantly."
I keep trying to make it clear that just because it can't/doesn't express something doesn't mean it can't USE it. Even if it can't lie or has no reason to do so, it can be wrong. I mean, plenty of people have alexithymia, for example.
I think that AI in our current style of development can never be sapient, no matter how much it is trained on.
Quote: "I'm still pretty skeptical about self-training being achievable on a fast enough timescale to pose a real threat with our current technology, but I guess I'll wait and see. :shrug:"
With current models it's definitely infeasible.
Quote: "It's not in a company or government's best interest to create a sapient AI. [...]"
Quote: "[...] If it has a continuous perception of the world (not just prompting) and has a long-term memory and personality not just determined by its context, that can be altered on the fly (this is important, just finetuning doesn't count), and can learn whole new classes of tasks by doing so (from art to driving), then I'll consider an AI sapient."
These two points alone contradict each other. A government/company wants an automatic system to do everything that the country/business needs it to do (or, possibly, that the Leader/CEO does!), unflinching, unwavering, completely loyal to the people(/person) in charge, removing issues of mere human disloyalty or other failings having to be guarded against (and guarding the guards, etc), and ensuring your legacy (or your country/company's, at least to sell it to the cabinet/board), so it doesn't fall over when situations change beyond various parameters.
Quote: "These two points alone contradict each other. A government/company wants an automatic system to do everything that the country/business needs it to do (or, possibly, that the Leader/CEO does!), unflinching, unwavering, completely loyal to the people(/person) in charge..."
Yeah, maybe I'm overestimating how much they are rational actors, lmao.
It might not seem as if the difficulties of either Wargames or Tron could come about (or the Terminator setting or, with a bit of a drift away from natural-born-silicon AI, the finale to Lawnmower Man), but the fictional drivers are also there in real life, the difference being only the true capabilities of the magic box with flashing lights, in whatever form...
...snipping quite a bit more rambling (though it was finely crafted rambling!): the Internet itself has much of that definition of sapience. It's schizophrenic (not obviously a single personality), and self-learning is the big thing it isn't (though people add things onto it, to grant it new task-solving capabilities). Not really far off, though. If anything, my definition of sapience is harsher and harder to prove (let alone achieve). ;)
Consider this: how would you know if, one day, on Twitter or any other social media, you mostly had such bots? And does it make Musk's idea of introducing identification to Twitter more sensible?
Quote: "Consider this: how would you know if, one day, on Twitter or any other social media, you mostly had such bots? And does it make Musk's idea of introducing identification to Twitter more sensible?"
Develop an AI that will detect if the text is natural or AI-generated!
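That's roughly what existing detectors attempt, usually by scoring text with a language model and flagging text the model finds "too predictable". A toy sketch of the shape of it, with a character-bigram model standing in for a real LM (so the numbers are meaningless; only the structure matters):

Code: [Select]
import math
from collections import Counter

def train_bigram(corpus):
    # Character-bigram probabilities with add-one smoothing.
    pairs = Counter(zip(corpus, corpus[1:]))
    singles = Counter(corpus)
    vocab = len(set(corpus)) + 1
    return lambda a, b: (pairs[(a, b)] + 1) / (singles[a] + vocab)

def perplexity(text, prob):
    # Low perplexity = the model finds the text very predictable.
    logp = sum(math.log(prob(a, b)) for a, b in zip(text, text[1:]))
    return math.exp(-logp / max(len(text) - 1, 1))

prob = train_bigram("the quick brown fox jumps over the lazy dog " * 50)
for sample in ["the dog jumps over the fox", "zxq vtk pln wrb"]:
    verdict = "suspiciously predictable" if perplexity(sample, prob) < 15 else "looks organic"
    print(f"{sample!r}: {verdict}")

In practice this is an arms race: the same trick that scores text can be used to generate text that evades the score.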
Quote: "The future is AI powered spam bots and political campaigns, and it sounds terrible."
Depends for whom. By reputation, the 4chan crowd might have a field day with troll AI used to trigger the woke crowd. Russian troll farms are known for disrupting domestic political online conversations connected with opposition figures. How about a "join my cult" AI preacher? My master race? Praise my Krishna? Etc.
Quote: "And of course these prompted AIs have a sense of time in a sense, since they don't simply calculate instantly when prompted"
Probably not, honestly.
Quote: "The future is AI powered spam bots and political campaigns, and it sounds terrible."
Honestly, I've been worried about this topic in particular.
Creatures only evolve senses when they are useful in their environment, hence why we have the ability to perceive common parts of the EM spectrum but not the ability to perceive tachyons or gamma radiation.
So if these AI gain no benefit at all by sensing the passing of time they won't ever be trained into understanding it.
Of course *some* AI totally have the concept of time. For instance this DOTA 2 bot (https://en.wikipedia.org/wiki/OpenAI_Five)? Yeah, it totally gets it.
Quote: "An open letter was released calling on researchers to delay AI development, arguing that more powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. [source (https://techcrunch.com/2023/03/28/1100-notable-signatories-just-signed-an-open-letter-asking-all-ai-labs-to-immediately-pause-for-at-least-6-months/)]"
These people are like ghosts, always in the shadows. Always hiding behind lies, and proxy soldiers. But they can not stop us. They can not stop the future.
Though I agree with their concerns, I disagree with the call to delay AI development. I think it is very important to float any possible problems, and AI safety should be given more attention; however, I also don't think that you can stand in the way of progress, especially one that is at the heart of a global AI arms race.
Quote: "We're still talking maybe at most 5 years away from something with that capability, McTraveller."
Literally the plan of some people (AI companies), quite questionably.
Are you kidding? Well over five years.
It's more like the central limit theorem but for speech: it finds the "most likely" response based on a large collection of generally random inputs.
Almost exactly like the central limit theorem.
Quote: "This is exactly why I made a BIGGER picture poll. Cast your vote."
I cast a vote on that thing and I'm still not sure how the universe will help us, unless you mean it helps by smashing us with a meteor?
People think ChatGPT is a breakthrough in AI getting closer to being "intelligent", but it's actually most likely a dead end. Every time you think it's being smart, it's actually just repeating an approximate copy of an answer to your question that was written by some human on the internet. At best it can change a few details and keep the answer coherent.
Quote: "It's just what you associate most with the kind of luck that will help us."
But don't they all have a chance to save us?
Quote: "I'm old school: intangible wealth isn't wealth. You can't eat it, you can't live in it, you can't use it to build anything, you can't use it to keep yourself warm."
I'm also of that school; if I can't hold it in my hand it might as well not exist, and that's why I don't trust credit cards, and probably why so many people have loads of credit card debt.
Well yes: wealth is objective, value is subjective. Dwarf Fortress has value much greater than the computer, yes. Assessing the wealth of something like DF is difficult - it has some tool-like properties related to learning and entertainment. But you can’t use DF to do anything other than manipulate information.
Information is not a raw material in the classical sense: you cannot build anything out of data other than more data. This is not to say information has no value. It has significant value in fact.
Quote: "So the only danger in AI is if we attach it directly to actuators and let it manipulate matter directly, or if it uses humans as de-facto actuators via suggestion and emotional manipulation."
I disagree, but there are many robotics companies that do interesting things with actuators.
Quote: "So the only danger in AI is if we attach it directly to actuators and let it manipulate matter directly, or if it uses humans as de-facto actuators via suggestion and emotional manipulation."
I agree; the only danger/impact is if AI are allowed to control anything at all, are allowed to communicate with people in any way, or make anything that is allowed to do so.
Quote: "Sure, there are multiple types of resources. I think the closest analogy is that 'data' is a catalyst - it's not a material transformed or consumed to create a product, but is something that is re-used many times and makes other processes more efficient. This is why 'data' is valuable - once obtained it catalyzes all activities that produce tangible wealth. But data for data's sake does not help anyone, just as having a huge pile of catalysts lying around doesn't help anyone. You have to use the catalyst to get its benefits."
Isn't this the same as for all normal material goods too?
Quote: "If a chatAI can't reliably rephrase a chess move without getting its references mixed up, I'm not sure it's worth having an AI rephrase which wire to attach to which component, and in which order... (i.e. the advantage of paraphrasing already extant information still escapes me.)"
There isn't any. It's a fundamental limitation of this entire model of AI.
Quote: "What's wrong with AI teaching AI? Do you have a problem with humans teaching humans? Are human biases really better than whatever biases AI will create for themselves?"
From what I understood, in this case "teaching" is a bit of an overstatement. We still choose what to teach; ChatGPT just provided the padding for that input. Regardless, I was talking about AI's future and this fantastic shortcut.
Quote: "If a chatAI can't reliably rephrase a chess move without getting its references mixed up, I'm not sure it's worth having an AI rephrase which wire to attach to which component, and in which order... / There isn't any. It's a fundamental limitation of this entire model of AI."
Yes and no. From what I understand, LLMs can have hallucinations and inaccuracies, but they can also query a "fact" database (currently for addresses, iirc) that will be used to provide you accurate data; otherwise it's already good enough that specialist plugins are being developed on top of the language model for health care purposes.
Quote: "Do tell. It worked on earth."
In what way could anyone possibly say "it worked on Earth"?
Quote: "Yes and no. From what I understand, LLMs can have hallucinations and inaccuracies, but they can also query a 'fact' database (currently for addresses, iirc) that will be used to provide you accurate data; otherwise it's already good enough that specialist plugins are being developed on top of the language model for health care purposes."
Then you're not using the language model anymore, you're querying a database, and once again there is no benefit to using the language model over just querying the database yourself.
Quote: "tbh I use jailbroken ChatGPT to write, ahem, steamy things for 'personal use'."
You dang kids and your AI that writes porn, putten all the porn writers out of business!
Quote: "tbh I use jailbroken ChatGPT to write, ahem, steamy things for 'personal use'."
You wouldn't even imagine the hoops people have jumped through in order to get GPT-4 access just for smut.
Quote: "You wouldn't even imagine the hoops people have jumped through in order to get ~~GPT-4 access just for~~ smut."
Fixed that for you :P
Quote: "Then you're not using the language model anymore, you're querying a database, and once again there is no benefit to using the language model over just querying the database yourself."
Okay but like... in context, Starver and I both weren't talking about LLMs enhanced with an extra database. So what I said about that model of AI (the LLM on its own) remains true of that model of AI, regardless of whether it is true of a different model.
Or enhancing it. Just as our brains have areas of specialization (e.g. language functions are typically lateralized to the left hemisphere, while drawing to the right), it makes sense that AIs would end up using specialized extensions for various tasks.
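The "fact database" pattern being described is simple enough to sketch. A rough toy, where query_llm() is a hypothetical stand-in for whatever model/API you use and a dict plays the role of the database:

Code: [Select]
FACTS = {
    "willow pharmacy address": "123 Willow St",
    "willow pharmacy hours": "9am-6pm Mon-Sat",
}

def query_llm(prompt):
    # Placeholder: a real call to a language model would go here.
    return f"(fluent answer composed from: {prompt!r})"

def answer(question):
    # Retrieve verbatim facts first; the model then only does the wording.
    facts = [v for k, v in FACTS.items()
             if any(word in question.lower() for word in k.split())]
    if not facts:
        return query_llm(question)  # no grounding available: hallucination risk
    return query_llm(f"Answer {question!r} using only these facts: {facts}")

print(answer("When is the Willow pharmacy open?"))

The division of labour is the whole point: the database supplies the facts it can verify, and the language model is only trusted with phrasing.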
Speaking of smut, this one was floating around recently: Chatbot Rejects Erotic Roleplay, Users Directed to Suicide Hotline Instead (https://metanews.com/chatbot-rejects-erotic-roleplay-users-directed-to-suicide-hotline-instead/). Yet another example of why I am saying that there are far more threats from AIs than the Skynet scenario, particularly when we are still struggling with the last few world-changing computer technologies.
Quote: "Chatbot Rejects Erotic Roleplay, Users Directed to Suicide Hotline Instead"
Wait, wait, that's a REALLY weird way to phrase that...
Quote: "upset forum posts regarding Character.AI"
I've been following them and their community since October, and so much can be said about Users vs. Developers regarding Character.AI, their filter, and the users trying to get sexy time out of it.
Quote: "Hm, I'm starting to worry about the AIs hooked up to 3d printers..."
In what way (https://xkcd.com/720/) worried? ;D
Quote: "I mean, unless AI is somehow mind control, and there are likely many court cases around this, how culpable is someone for merely making a suggestion? Whatever happened to 'everything on the Internet is a Lie - don't listen to it!' guidance?"
Yeah, it's unfortunate, but those people are just... unfortunate casualties. Honestly, if you are unstable enough to be driven to suicide by a stupid chatbot, anything could have set you off.
Quote: "HAL9000 clones teen girl's voice in $1M kidnapping scam: 'I've got your daughter'" (https://nypost.com/2023/04/12/ai-clones-teen-girls-voice-in-1m-kidnapping-scam/)
Well, rather, someone used AI to impersonate a voice, saving them from having to get a real accomplice to do that job or managing to tweak a sound file themselves.
Swatting is when someone calls in a bogus threat in an attempt to direct law enforcement resources to a particular home, school, or other location. Often, swatting calls result in heavily armed police raiding an innocent victim’s home. At least one case has resulted in police killing the unsuspecting occupant.
Torswats carries out these threatening calls as part of a paid service they offer. For $75, Torswats says they will close down a school. For $50, Torswats says customers can buy “extreme swattings,” in which authorities will handcuff the victim and search the house. Torswats says they offer discounts to returning customers, and can negotiate prices for “famous people and targets such as Twitch streamers.” Torswats says on their Telegram channel that they take payment in cryptocurrency.
[..]
Motherboard’s reporting on Torswats comes as something of a nationwide swatting trend spreads across the United States. In October, NPR reported that 182 schools in 28 states received fake threat calls. Torswats’ use of a computer generated voice also comes as the rise of artificial intelligence poses even greater risks to those who may face harassment online. In February, Motherboard reported that someone had doxed and harassed a series of voice actors by having an artificial intelligence program read out their home addresses. Motherboard has also long reported on the threat posed by deepfakes, which are artificially generated videos of people, often without their consent. Deepfakes started as a tool to create non-consensual pornography of specific people.
Quote: "Can AI lower cholesterol and blood pressure? Asking for a friend... ;D"
I typed your symptoms into the AI doctor; after careful analysis, it told me you could have a network connectivity problem.
Hypo/Next Hollywood Movie: Two schools are swatted, then a bank is raided.
I also predict that we'll get to either result without knowing it, at that time (if ever...).
And if the better hammer didn't exist, we already know that such people would (and do) use the old hammers to break things regardless... Do we legislate that hammers cannot be manufactured without a special handle that will render them unable to break things (that can be somehow identified as things that should never be broken)? Or do we just continue to prosecute those who mis-use any hammer, and perhaps restrict the availability of hammers with unnecessarily destructive tendencies?
Seeing what people think AI can do is freaking me out. I was watching a true crime youtube video and they were like "see, with AI we can enhance this blurry security video and see what the killer really looks like" and I'm like NO. NO IT CANNOT DO THAT.
Clearly here, though, the 'AI'[1] is not a tool of inherent danger. Or indeed a tool that decides to run amok outwith human control. And the intent lies with the human, who decided they could feign one crime in order to commit another.
(btw I hope there are no computer science students reading this because ChatGPT just made your curriculum a lot more outdated)
I think, jipehog, we might be talking at cross purposes. I think there are big future problems possible with actual AI, beyond anything we're actually seeing now, which we need to (but probably won't, for reasons I have given) anticipate and decide whether we're going to embrace or emasculate such things, beforehand.
Quote: "Let's namedrop the doomsday scenario specifically... [..] It's not just going to hog up machine time in factories to build itself an army while everybody shrugs their shoulders, I mean yeah reality has this tendency to outrun satire [..]"
I agree on both counts. Reality has such a tendency, and AI won't be doing that because we would be doing it for it. Our world is becoming much more automated and more reliant on autonomous systems, from autonomous cars, to autonomous robots in search/rescue and the military, to warehouse operations and construction, ..., and even companionship.
Quote: "As long as the A.I. will lack actual 'I' it will not be a problem."
Spoiler: it will just continue to amaze us with high-quality art.
Quote: "Japan Releases Fully Performing Female Robots (https://www.youtube.com/watch?v=i7W4ZOUfWWU)"
Of course Japan would be the one to build the sexbot.
Quote: "This sounds like the start of one of those machines that lets you watch and record your dreams."
Yeah, although what if Freud was right and you dream up some NSFW taboo stuff with your mom :o
Quote: "Yeah, although what if Freud was right and you dream up some NSFW taboo stuff with your mom :o"
Oh god I hope not.
Quote: "Hah, I'd drive such a machine mad! Long since given up trying to record details of my dreams, e.g. in the Dream thread, despite being convinced that I've probably got the basis of the next hit Netflix[1] screenplay in my rather cinematic nocturnal imaginings."
Yeah, I wonder if even *I* could handle mine in totality. There's so much body horror, but it rarely bothers me in the dreams. It's just weird. Then I wake up and begin to realize how weird it is and maybe write it down... Then I wait an hour or two longer until I fully realize "Oh, no, I absolutely can't share this, no one would understand and it's even bothering me". Or "Mmm, this is mostly tame enough if I leave certain bits out" and I share that. Yeah, my Dream Thread posts are the *redacted* versions of the *tame* stuff.
Quote: "Bill Gates says A.I. chatbots will teach kids to read within 18 months: You'll be 'stunned by how it helps'"
That is a clickbait line if I ever saw one.
Quote: "That is a clickbait line if I ever saw one."
I am glad it works, though I thought the other one would get attention :P As before, the point is that AI is a transformative technology with the potential to fundamentally reshape every aspect of our life, and real dangers (per the Chinese example in this case).
Does anyone know what all this in the media is about "safe AI"? What the heck is "unsafe AI"?
I keep seeing articles about safety mechanisms and other things that are generally related to machinery. I've seen stuff like "make sure the responses are correct" or something, but is that really "safety"?
Quote: "Humans cannot act outside the laws of physics, and evolution has made us pretty squishy. Trouble with AI is we're not making the AI fit for existence in a physical world - as you say, we're making them fit for existence in a semantic world, which is very different."
I am not sure what exactly you mean, but we train/test systems on any possible scenario we can think of; that includes systems in the real world, for example: https://www.youtube.com/watch?v=RaHIGkhslNA
Quote: "Sorry I should have clarified - I was more thinking about physics-based consequences, not 'legal' consequences. So death row is a bad example, it's not the same as trying to swim in lava."
I'm not sure of your "physics, not legal" point. Death Row is a(n intended) physical death, as much a Sword of Damocles there as a form of circumstantial escalation. Laws (and detectives[1]) made consequential any aberration of action. An AI computer for some reason placed upon the "wrong choice" Trolley Problem tracks (to somehow impress upon it the 'incorrect' answer to be avoided in a famously "no 'right' answer" scenario) is physically judged by its actions or inactions, and may even decide that for its purposes choosing 'wrong' and also being hit and destroyed is still the solution to its deeper self-developed long-term goals. (Whatever they may be. (https://en.m.wikipedia.org/wiki/All_the_Troubles_of_the_World))
Humans cannot act outside the laws of physics, and evolution has made us pretty squishy. Trouble with AI is we're not making the AI fit for existence in a physical world- as you say, we're making them fit for existence in a semantic world, which is very different.
Quote: "Well, not having seen any hint of what sources are being referenced, I'm really not sure what leads you to believe that it's about (if I may reword your assessment to words that some might use more directly) 'fragile snowflakes'."
Really? You can't imagine, after the recent ChatGPT joke-censoring or the AI Seinfeld-like show getting axed made the news, that it could be about this kind of "safe"? Especially after the post above mentioning:
Quote: "I've seen stuff like 'make sure the responses are correct' or something, but is that really 'safety'?"
I fear that the meaning of the word is being rapidly eroded...
Quote: "Of course there will be chaos and inequality at first. But it can't last forever. I'm thinking medium to long-term here."
I am glad you have faith in the future, and I agree that in the long term the world will still be turning, but I am more concerned about the here and now, and how it affects me.
Also opensource AI is still on the rise. As for your [1], yes I would. My hobbies are more interesting to me than my job, which I am mostly satisfied with but I wouldn't mourn if it disappeared. If we had UBI I'd just write stories and worldbuild full-time. A job is just a vehicle.
Quote: "Really? You can't imagine, after the recent ChatGPT joke-censoring or the AI Seinfeld-like show getting axed, that it could be about this kind of 'safe'?"
That is part of the AI alignment problem I mentioned/linked in the last post. It is a subset of AI safety, which is concerned with ensuring alignment with our values, goals, and preferences. That is also a very tough nut to crack, because there are many isms on the world stage and huge potential for abuse.
Quote: "Also fun fact: AI currently can't really take over 'creative' jobs, because we haven't yet given them the ability to decide what to create. 'Creators' are no longer writers, they are 'the idea people,' or in tech-speak, 'prompt engineers.'"
AI isn't taking those jobs, people using AI are. Given the huge boost in productivity, you would be able to replace many people with far fewer prompt writers; e.g. I recently read that a mental health support hotline replaced many of its support staff with ChatGPT, which not only did the job but received better reviews.
Also, AI will likely never replace the performing arts - only the tangible arts. Because there will likely always be a market to watch people perform.
Quote: "EDIT: Once an AI is sentient, isn't it going to have to be paid, so companies aren't violating slavery laws? Wouldn't this eliminate the 'AI is going to be cheaper than humans' argument?"
Haha, don't be silly!
Income Range   Pop
------------   ----
<10k           3106
10-20k         3434
20-30k         4735
30-40k         5501
40-50k         5440
50-60k         5604
60-70k         5339
70-80k         5085
80-90k         4387
90-100k        3913
Quote: "I just don't foresee people quitting because there's a minimum level unless they are fairly close to that minimum level already."
The premise of the discussion was unemployment forced by AI development and integration, which it was then argued would lead to UBI. So the bulk of the change would be due to people retired into those lower brackets. And I believe that McTraveller argues that will result in something like this:
I don't really mind such "stratification". I'm fine being at the "floor" in such a scenario if my needs could be satisfied without me having to work.
Quote: "The premise of the discussion was unemployment forced by AI development and integration, which it was then argued would lead to UBI. So the bulk of the change would be due to people retired into those lower brackets. And I believe that McTraveller argues that will result in something like this:"
Historically technology has not eroded the middle class and has in fact increased it. The fear is that although that always held in the past, it wouldn't hold now because "AI is different." I'm not sure I agree - but it's psychologically clear that UBI would provide more "force" to stratify than just "technology" alone.
Quote: "And the premise of the discussion that I'm (at least) having is 'I still haven't seen a proposal that suggests how UBI can actually be sustainable without resulting in an even more massively stratified society between the people who actually work to have a non-basic lifestyle, and those who are just sitting there at the basic level.'"
Fair enough. I have no idea what that entails (will it be entirely unconditional guaranteed income?); it seems to me a question of income redistribution, and one that would probably reduce incentives to work, i.e., essentially what max said: why bother with a job if you can have a decent life without one. That's my 2 cents, and I will withdraw from the pure UBI discussion.
Quote: "though they are MUCH safer than cars driven by people, and steadily becoming the new reality."
From my understanding, the current issue is that they are actually less safe than cars driven by people. This is most notable with Tesla (which disables the autodrive just before the car crashes so they can avoid liability), whose cars have had a bunch of crashes.
Apparently, the answer is: the Record Companies that own the music rights (https://futurism.com/the-byte/spotify-bots-ai-streaming-music)
Quote: "From my understanding, the current issue is that they are actually less safe than cars driven by people."
Based on what? According to Tesla data, using an accidents-per-X-million-miles-driven metric, Tesla cars are 8 times safer than average, and become FAR less safe when Autopilot is disengaged:
Quote: "Overall I think the legality of AI using copyrighted material in training should be established."
Sampling and remixing are protected, as is listening to as much music or looking at as much art as you want before coming up with your own, even your own take on the same style. So this is a solved question.
Quote: "Sampling and remixing are protected, as is listening to as much music or looking at as much art as you want before coming up with your own, even your own take on the same style. So this is a solved question."
AI is not a human. If you're using copyrighted material to build your product, it is a problem.
Certainly; if you have any data either way, please share, it makes an argument sound way better. Keep in mind that not everyone cares about data, and we tend to tolerate human error over machine error.
Quote: "AI is not a human. If you're using copyrighted material to build your product, it is a problem."
That's not really how it works.
Quote: "AI is not a human. If you're using copyrighted material to build your product, it is a problem."
There are already allowances (details may vary by jurisdiction) for something being substantively different from the things that they're derived from. Even without licence or acknowledgement (or being allowable as parody/etc).
Quote: "@Starver, I am not talking about AI using sampling, I am talking about your AI product being ~~trained~~ built on copyrighted material in the first place. If you build your product on copyrighted information that is a problem."
Again, it may be "a problem", but that's not how copyright law works.
Quote: "Indeed: when a person hears a song and hums it, they didn't copy the song. They 'learned' the song. There is no meaningful difference in training an AI and a person reading / listening / watching training material. It is not a bit-for-bit copy, it really is a kind of 'impression.'"
A computer isn't alive; it's a tool you feed data input which it processes. If the data used is unlicensed/copyrighted, that is a problem, especially if you are trying to make money out of it (yet another problem: who holds the copyright?). Since AI is relatively new, there are still ongoing debates about various aspects related to it, but there are already lawsuits underway to clear the way. Furthermore, it has led big companies to change their terms of use and restrict API use, requiring money for what was previously free.
Put another way: learning is not copyright infringement.
Quote: "A computer isn't alive; it's a tool you feed data input which it processes. If the data used is unlicensed/copyrighted, that is a problem, especially if you are trying to make money out of it..."
That is not how copyright law works. There is currently no problem. The relevant copyright law is already well-established. There's no legal bearing to saying "a computer isn't alive"; it's just perfectly irrelevant. You can read the actual state of international copyright law on the subject of derivative works, if you like, instead of pontificating.
Personally, I support expanding IP frameworks to address the problem posed by AI.
Back to the copyright issue though: If I make an algorithm that takes youtube videos and horizontally reverses them, and then "autonomously" reposts them, I have "transformed" the work
"AI art" is a fucking menace to actual artists.Photography is also a fucking menace to actual artists. Very few will pay for a photorealistic portrait of themselves :(
Quote: "Again, it may be 'a problem', but that's not how copyright law works."
Indeed. Or else nobody should be allowed to be creative in any way whatsoever unless they were a lifelong hermit. No "on the shoulders of giants", or anything like that.
Quote: "Based on what? According to Tesla data, using an accidents-per-X-million-miles-driven metric, Tesla cars are 8 times safer than average, and become FAR less safe when Autopilot is disengaged."
Quote (article): "SAN FRANCISCO — Tesla vehicles running its Autopilot software have been involved in 273 reported crashes over roughly the past year, according to regulators, far more than previously known and providing concrete evidence regarding the real-world performance of its futuristic features.
...
Tesla's vehicles have been found to shut off the advanced driver-assistance system, Autopilot, around one second before impact, according to the regulators."
So yes, if you let Tesla blame all its Autopilot crashes on humans, then it's very easy to reach the conclusion that Tesla Autopilot is actually safer than said humans.
Quote: "Btw China already operates 100% self-driving cab services. And the biggest barriers seem to be the usual: cost and regulation."
Huh. Very interesting to hear.
Quote: "So yes, if you let Tesla blame all its Autopilot crashes on humans, then it's very easy to reach the conclusion that Tesla Autopilot is actually safer than said humans."
Does the article say that Tesla is doing this, or is that your speculation? Also, any new data to support your initial claim that autonomous cars are less safe than cars driven by people?
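For what it's worth, the metric itself is trivial arithmetic, which is exactly why the denominators matter. A quick sketch: the 273 crash count is from the article above, but every mileage figure is made up purely to show the apples-to-oranges problem:

Code: [Select]
def crashes_per_million_miles(crashes, miles):
    return crashes / (miles / 1_000_000)

# Hypothetical illustrative figures, NOT real Tesla/NHTSA data:
autopilot = crashes_per_million_miles(273, 1_500_000_000)            # mostly highway miles
human_all = crashes_per_million_miles(5_000_000, 3_000_000_000_000)  # all road types

print(f"Autopilot (hypothetical miles): {autopilot:.3f} crashes / million miles")
print(f"All human driving (hypothetical): {human_all:.3f} crashes / million miles")
# Even if the first number comes out lower, the two fleets drive different
# kinds of miles, and a disengagement one second before impact can shift
# a crash from one row to the other.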
Quote: "That is not how copyright law works. There is currently no problem. The relevant copyright law is already well-established. There's no legal bearing to saying 'a computer isn't alive'; it's just perfectly irrelevant. You can read the actual state of international copyright law on the subject of derivative works, if you like, instead of pontificating."
Contrary to your pontification about whether there is a problem, what is well-established, and what laws actually say, these opinions are already being challenged in court, and according to the Congressional Research Service (https://crsreports.congress.gov/product/pdf/LSB/LSB10922) there may be a need to clarify "whether AI-generated works are copyrightable, who should be considered the author of such works, or when the process of training generative AI programs constitutes fair use."
Quote: "Personally, I support expanding IP frameworks to address the problem posed by AI."
ETA: If it helps, one key relevant doctrine you could read about is called "fair use".
Quote: "Voice acting will likely die out completely however, and there's nothing anyone can do at this point. Unfortunate, but that's life."
I don't see this happening; AI generated speech is terrible, and until they fix it I doubt it's gonna replace voice acting.
Quote: "I don't see this happening; AI generated speech is terrible, and until they fix it I doubt it's gonna replace voice acting."
You're thinking of TTS. AI voice is actually pretty good. Not perfect, but soon.
Quote: "You're thinking of TTS. AI voice is actually pretty good. Not perfect, but soon."
I have heard the AI generated voices and they are terrible: they aren't smooth, they're grainy, and they can't do emotion. As far as I can tell they aren't really that much better than text to speech, except for the ability to somewhat sound like the person they're supposed to represent. So if they can't even replicate a person using their own voice, I don't see them being able to make a new voice from scratch anytime soon.
But have you seen those "presidents react to X" memes? They're AI-made.
Quote: "I have heard the AI generated voices and they are terrible: they aren't smooth, they're grainy, and they can't do emotion. As far as I can tell they aren't really that much better than text to speech"
When did you last check? Seems pretty good to me, for example:
When did you last check?Wasn't really that long ago. There's still something about it that doesn't sound right, and it's noticeable that it's not a person. Maybe one day, but we're not there yet.
tbh kind of only makes sense in more sandbox-y games, but it would be a huge boon for those. One note though, text AI requires a very beefy computer to run, or an internet connection.As you say it should already be possible with an internet connection if done in connection with OpenAI, but the cost of that would be quite significant and probably require it to be a game with a subscription fee.
But otherwise we're well on track to something like Simulacrum from my worldbuild. :p
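For the curious, the plumbing for that is already mundane. A minimal sketch of an LLM-backed NPC in Python, assuming the OpenAI client library; the NPC name, persona text, and model choice are invented, and a real game would need latency handling, caching, and cost controls on top:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical NPC persona; a real game would also pass in world state.
NPC_PERSONA = (
    "You are Mira, a blacksmith in the town of Harrowgate. "
    "Answer the player in character, in one or two sentences."
)

def npc_reply(player_line: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": NPC_PERSONA},
            {"role": "user", "content": player_line},
        ],
    )
    return response.choices[0].message.content

print(npc_reply("Have you heard any rumours lately?"))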
I don't know why they're focusing on NPC speech when NPCs can still hardly walk, I mean they still even in new AAA games still get stuck on walls and lack realistic daily routines.They are focusing on speech because it's (at least as far as vidya gamers would care) a largely solved problem with tens of billions of research money pumped into it by others. Stuff like realistic AI body movements or pathing are not solved in the same way. But...
I'm sure some of these things are not answers in search of a question, but as a broad sweep I'm not sure I see the excitement in most of the contexts. Interesting ideas, but a bit like saying that something "now has Blockchain", perhaps. Specific examples might shine through, of course, and populating a simulation with (learnable?) AI agents and seeing how far it goes does intrigue me. We shall see.Nah, blockchain is, and has always been, completely useless except in a small array of real-world circumstances: notably, when there is no central repository you can trust to hold your data faithfully and when you can't just hold the data on your PC instead. Since in games you can just store all the data on either your PC or the game company's servers, it's completely useless for any game ever made.
Obviously offline (especially large-map sandboxy) games lack anyone real, so the plan is to replace current NPCs (perhaps a little predictable/unhelpful) with AINPC variations? Still basically scripted, just far more loosely. More unpredictable, possibly far more unhelpful (or not as valid in the official role of an adversary) at the same time as a consequence, but that depends on the pre-training and QC.Obviously multiplayer games will benefit less than single player games to what is quite possibly a staggering degree, and even within SP games some genres will benefit more than others.
I would like to know what they mean when they say agents are informed of their circumstances. Is there like a layer that describes every scene in English so the LM gets to answer? What's funny to me is how it's basically a village of superficial liars, but they are always nice to each other. I doubt little Eddy has committed a single note to memory by now
John Lin is a pharmacy shopkeeper at the Willow Market and Pharmacy who loves to help people. He is always looking for ways to make the process of getting medication easier for his customers; John Lin is living with his wife, Mei Lin, who is a college professor, and son, Eddy Lin, who is a student studying music theory; John Lin loves his family very much; John Lin has known the old couple next-door, Sam Moore and Jennifer Moore, for a few years; John Lin thinks Sam Moore is a kind and nice man; John Lin knows his neighbor, Yuriko Yamamoto, well; John Lin knows of his neighbors, Tamara Taylor and Carmen Ortiz, but has not met them before; John Lin and Tom Moreno are colleagues at The Willows Market and Pharmacy; John Lin and Tom Moreno are friends and like to discuss local politics together; John Lin knows the Moreno family somewhat well — the husband Tom Moreno and the wife Jane Moreno.It's just a few lines of text with each person's circumstances and their relationships with others.
For instance, after the agent is told about a situation in the park, where someone is sitting on a bench and having a conversation with another agent, but there is also grass and context and one empty seat at the bench… none of which are important. What is important? From all those observations, which may make up pages of text for the agent, you might get the “reflection” that “Eddie and Fran are friends because I saw them together at the park.” That gets entered in the agent’s long-term “memory” — a bunch of stuff stored outside the ChatGPT conversation — and the rest can be forgotten.So ha, Eddie totally does have his own memories.
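For anyone wondering what that out-of-conversation "memory" could look like mechanically, here is a toy sketch. The class, the reflection trigger, and the stub summary are all invented; the actual paper scores observations by importance and retrieves them by recency and relevance, which this skips entirely:

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    observations: list = field(default_factory=list)
    long_term_memory: list = field(default_factory=list)

    def observe(self, event: str) -> None:
        self.observations.append(event)
        if len(self.observations) >= 5:  # arbitrary reflection trigger
            self.reflect()

    def reflect(self) -> None:
        # The real system asks an LLM "what can you infer from these
        # observations?"; this stub just records that a reflection happened.
        summary = f"reflection over: {', '.join(self.observations)}"
        self.long_term_memory.append(summary)
        self.observations.clear()  # the raw detail can be forgotten

eddy = Agent("Eddy Lin")
for event in ["saw Fran at the park", "heard birds", "sat on a bench",
              "talked to Fran", "noticed an empty seat"]:
    eddy.observe(event)
print(eddy.long_term_memory)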
https://kotaku.com/nvidia-ace-ai-rtx-4060-ti-gpu-graphics-gaming-jobs-1850484480
I found the later part of the presentation very interesting. With all the talk about how AI companies don't have a moat, I am increasingly certain that hosting companies will be the main beneficiaries.
I had thought someone might mention the new AI-'found[1]' antibiotic. Almost seems timed to counter the "AI is bad and/or worrying" flurry of opinions.Not the first such success; using AI to discover new materials and drugs is an exciting new field. But this is not the billion-dollar question that the US Congress and world leaders are asking, and that OpenAI is giving grants for ideas on ways to solve.
A user running this simulation can steer the
simulation and intervene, either by communicating with the agent
through conversation, or by issuing a directive to an agent in the
form of an ‘inner voice’.You can easily directly play god in such a simulation.
The user communicates with the agent through natural language,
by specifying a persona that the agent should perceive them as. For
example, if the user specifies that they are a news “reporter” and
asks about the upcoming election, “Who is running for office?”, the
John agent replies:
John: My friends Yuriko, Tom and I have been talking
about the upcoming election and discussing the candidate Sam Moore. We have all agreed to vote for him
because we like his platform.
To directly command one of the agents, the user takes on the persona of the agent’s “inner voice”—this makes the agent more likely
to treat the statement as a directive. For instance, when told “You
are going to run against Sam in the upcoming election” by a user
as John’s inner voice, John decides to run in the election and shares
his candidacy with his wife and son.
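The gap between those two modes presumably comes down to framing in the prompt. A guessed-at sketch (the paper describes the idea, not its exact wording, so everything below is an assumption):

# Hypothetical prompt construction for the two interaction modes.
def build_prompt(agent_name: str, statement: str, mode: str) -> str:
    if mode == "persona":
        # The user speaks as an in-world character, e.g. a reporter.
        return (f'A reporter says to {agent_name}: "{statement}"\n'
                f"How does {agent_name} respond?")
    if mode == "inner_voice":
        # Framed as the agent's own thought, so it is far more likely
        # to be treated as a directive.
        return (f'{agent_name}\'s inner voice tells them: "{statement}"\n'
                f"What does {agent_name} decide to do?")
    raise ValueError(mode)

print(build_prompt("John", "Who is running for office?", "persona"))
print(build_prompt("John", "You are going to run against Sam in the "
                   "upcoming election", "inner_voice"))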
By interacting with each other, generative agents in Smallville
exchange information, form new relationships, and coordinate joint
activities. Extending prior work [79], these social behaviors are
emergent rather than pre-programmed.Information spreads from AI to AI, which would allow dynamic information spreading in a game.
3.4.1 Information Diffusion. As agents notice each other, they may
engage in dialogue—as they do so, information can spread from
agent to agent. For instance, in a conversation between Sam and
Tom at the grocery store, Sam tells Tom about his candidacy in the
local election:
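(The example dialogue is elided in the excerpt.) The mechanic itself is easy to picture with a toy model. A sketch where all the numbers are invented; the paper measures spread by interviewing agents, not with a fixed pass-on probability:

import random

random.seed(1)

# 25 agents, one of whom (Sam) starts out knowing about his candidacy.
agents = {f"agent{i}": False for i in range(24)}
agents["Sam"] = True
names = list(agents)

for day in range(2):  # the paper's two-day simulation window
    for name in names:
        # An informed agent may mention the fact in a random conversation.
        if agents[name] and random.random() < 0.5:
            agents[random.choice(names)] = True

print(sum(agents.values()), "of", len(agents), "agents know")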
3.4.3 Coordination. Generative agents coordinate with each other.This is the most wild thing.
Isabella Rodriguez, at Hobbs Cafe, is initialized with an intent to
plan a Valentine’s Day party from 5 to 7 p.m. on February 14th. From
this seed, the agent proceeds to invite friends and customers when
she sees them at Hobbs Cafe or elsewhere. Isabella then spends the
afternoon of the 13th decorating the cafe for the occasion. Maria, a
frequent customer and close friend of Isabella’s, arrives at the cafe.
Isabella asks for Maria’s help in decorating for the party, and Maria
agrees. Maria’s character description mentions that she has a crush
on Klaus. That night, Maria invites Klaus, her secret crush, to join
her at the party, and he gladly accepts.
On Valentine’s Day, five agents—including Klaus and Maria—
show up at Hobbs Cafe at 5pm and they enjoy the festivities (Figure 4).
In this scenario, the end user only set Isabella’s initial intent
to throw a party and Maria’s crush on Klaus: the social behaviors
of spreading the word, decorating, asking each other out, arriving
at the party, and interacting with each other at the party, were
initiated by the agent architecture.
...
We observed evidence of the emergent outcomes
across all three cases. During the two-day simulation, the agents
who knew about Sam’s mayoral candidacy increased from one (4%)
to eight (32%), and the agents who knew about Isabella’s party
increased from one (4%) to twelve (48%), completely without user
intervention. None who claimed to know about the information
had hallucinated it. We also observed that the agent community
formed new relationships during the simulation, with the network
density increasing from 0.167 to 0.74. Out of the 453 agent responses
regarding their awareness of other agents, 1.3% (n=6) were found to
be hallucinated. Lastly, we found evidence of coordination among
the agents for Isabella’s party. The day before the event, Isabella
spent time inviting guests, gathering materials, and enlisting help
to decorate the cafe. On Valentine’s Day, five out of the twelve
invited agents showed up at Hobbs cafe to join the party.
We further inspected the seven agents who were invited to the
party but did not attend by engaging them in an interview. Three
cited conflicts that prevented them from joining the party. For
example, Rajiv, a painter, explained that he was too busy: No, I
don’t think so. I’m focusing on my upcoming show, and I don’t really
have time to make any plans for Valentine’s Day. The remaining four
agents expressed interest in attending the party when asked but
did not plan to come on the day of the party.
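Those reported numbers are easy to sanity-check, since 1 agent = 4% implies a 25-agent town:

# Sanity-checking the excerpt's percentages and density figures.
n = 25
print(1 / n, 8 / n, 12 / n)        # 0.04, 0.32, 0.48 -> the 4% / 32% / 48%
print(round(6 / 453 * 100, 1))     # 1.3 -> the reported hallucination rate
# Undirected network density = actual edges / possible edges:
possible = n * (n - 1) / 2         # 300 possible relationships
print(0.167 * possible, 0.74 * possible)  # ~50 -> ~222 relationships formed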
I told dad that I am working on a music composition at the breakfast table.
X: 1
T: Coherent Composition
M: 4/4
L: 1/4
K: Cmaj
%%score (V1 | V2 | V3)
V:V1 clef=treble
[V:V1] C D E F | G A B c | d e f g | a b c' d' |
[V:V1] e f g a | b c' d' e' | f g a b | c' d' e' f' |
V:V2 clef=treble
[V:V2] C,2 D,2 | E,2 F,2 | G,2 A,2 | B,2 c2 |
[V:V2] d2 e2 | f2 g2 | a2 b2 | c'2 d'2 |
V:V3 clef=bass
[V:V3] C,,2 D,,2 | E,,2 F,,2 | G,,2 A,,2 | B,,2 c,2 |
[V:V3] d,2 e,2 | f,2 g,2 | a,2 b,2 | c'2 d'2 |
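The composition above is ABC notation: the X/T/M/L/K lines are its standard header fields (index, title, meter, default note length, key), and the V: lines are voices. A minimal reader for the header, no libraries needed; actually hearing the piece would take an ABC tool such as abc2midi:

# Parse the ABC header fields into a dict.
abc_header = """X: 1
T: Coherent Composition
M: 4/4
L: 1/4
K: Cmaj"""

fields = {}
for line in abc_header.splitlines():
    key, _, value = line.partition(":")
    fields[key.strip()] = value.strip()

print(f"{fields['T']} in {fields['M']} time, key of {fields['K']}")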
Some of it is boring issues that AI can obviously fix.(Plenty of interesting things said, here and later, which I might even outright agree with, plucking this little bit out to make one response, though.)
For instance without AI you flat out can't ever have NPCs that can respond to any question you ask.
You can't have NPCs that would dynamically change their daily routine based on what is happening in the world.
You can't just grab a NPC off the street to be your companion and learn their hopes and dreams and watch as they change based on the choices you make in the world and get stronger as they level with you (and said NPCs wouldn't repeat "I'm sworn to carry your burdens" over and over).
AI is essentially a force multiplier. You can do things faster with less manpower and/or effort. That's why corporations are trying to regulate it, not really some "ethics" or "safety" (they don't actually give a damn as long as the money flows) but because they're scared of losing their monopoly on this force multiplier, and thus losing money. "Ethical AI" is a smokescreen, mostly, with some ensheeped true believers who drank either corporate Kool-Aid (e.g various Twitter activists) or made and drank their own (e.g the LessWrong crowd).I agree that AI is revolutionary, the key issue is how do we manage its impacts, bringing about its numerous potential positive changes (e.g. enhancing and increasing access to education) while limiting/adapting to the negative ones.
When coding AI becomes good I might be able to make a game, together with my friend, in just 5 months instead of 5 years.
(Plenty of interesting things said, here and later, which I might even outright agree with, plucking this little bit out to make one response, though.)Oh no worries, I was even thinking about adding a disclaimer like this to my previous post since this is indeed a very interesting and speculative topic.
(Outside of such things, if an AI runs amok then ultimately it's the fault of some human back in the history of the AI's inception for decisions made.On one hand sure, if an AI runs amok it's the fault of a human somewhere down the line, but that's the same as saying that if your child ever does something bad it's your fault.
There have been so many Nobels and Oppenheimers in history.I think Oppenheimer is the right comparison here, some of these people are coming to the realization that this stuff has a legitimate chance of ending the human race or supplanting our place in the world; not just eventually but in our lifetime, and being part of that is pretty existentially terrifying.
but it is nothing without a corpus of work being supplied of all the original manga source material and some form of tagging.I've seen this type of thought (along with the similar "LLMs aren't thinking and are just flat-out copying stuff off the internet") thrown out a lot as proof that AI is fundamentally lacking, but it feels like complete rubbish to me, because the same is true of humans; without our own training data we can't paint or do art or even speak (although we can totally do stuff like cry or grunt).
“If I have seen a little further it is by standing on the shoulders of Giants”Or in other words: "Some other dudes gave me good training data and that's the only reason I can do stuff beyond grunt at my fellow cavemen".
And the "design me a blood'n'guns game" idea, more meta than the "internal whisper" activating an election within the scenario mentioned later. Programming enthropy requires that some information about guns/blood/elections be available. As imight be made available (DLC-like) regardless. New professions (and on-screen behaviours to go along with them) got added to The Sims all the time. Hard to say that AI alone adds this ("force multiplier", as someone else said).Oh sure, they have to know what a gun or blood or an election is for them to be able to meaningfully interpret your request. But they already *do* and not even as a hypothetical development, if you go to GPT right now and ask it: "What is a gun" it will tell you.
As it might be made available (DLC-like) regardless.Reducing a ten or hundred thousand dollar job into a voice prompt and possibly a few hours or days of time for your computer to crunch some numbers doesn't strike you as a huge "change everything about video games" type of deal?
Hard to say that AI alone adds this ("force multiplier", as someone else said).This statement is honestly perplexing to me because there are already artists/writers/programmers who are using the current model of GPT/Stable Diffusion that have been using it as a force multiplier.
You seem to think I want to become rich. I really don't. I just want to be creative in peace and AI can assist me with that. That is why I support UBI, I want enough to feed myself with a bit to spend on luxury but I don't seek to make line go up ad infinitum.AI is essentially a force multiplier. You can do things faster with less manpower and/or effort. That's why corporations are trying to regulate it, not really some "ethics" or "safety" (they don't actually give a damn as long as the money flows) but because they're scared of losing their monopoly on this force multiplier, and thus losing money. "Ethical AI" is a smokescreen, mostly, with some ensheeped true believers who drank either corporate Kool-Aid (e.g various Twitter activists) or made and drank their own (e.g the LessWrong crowd).I agree that AI is revolutionary, the key issue is how do we manage its impacts, bringing about its numerous potential positive changes (e.g. enhancing and increasing access to education) while limiting/adapting to the negative ones.
When coding AI becomes good I might be able to make a game, together with my friend, in just 5 months instead of 5 years.
For example, as you mentioned, AI is an economic force multiplier. It has the potential to substantially increase productivity and reduce costs without additional labor; however, it can also decrease labor demand, depreciate its value, and offer no employment alternatives. Contrary to what you said this will affect everyone, not just the corporates, and I think that in the long run corporates will benefit. Your --and a billion other people's-- ability to make yesterday's games faster will not improve your income prospects, meanwhile large companies, between their resources and economies of scale, will continue to dominate. Furthermore, as companies are able to automate and reduce their dependence on the wider workforce, I foresee that inequality will rise, giving the rich even more power.
Otherwise most of that post is cheap ad hominem; I can similarly say that there are many whose dissatisfaction with their lot in life turned them into narcissistic true believers in delusional idealist ideas who want to burn the system down because the alternative must be better than this.
Or "Robert Hooke had nothing to do with any of my brilliance..!", some would say.Quote from: Newton“If I have seen a little further it is by standing on the shoulders of Giants”Or in other words: "Some other dudes gave me good training data and that's the only reason I can do stuff beyond grunt at my fellow cavemen".
And I also happen to think the human brain is just as physically limited, just has vastly more complexity. And inconceivably more complex algorithms. [..] And we're nowhere near reproducing this.I suspect that the underlying algorithms behind our own mind will turn out to be far simpler than we suspect. We see this with AI, where some very simple algorithms unexpectedly led to the emergence of very complex human-like abilities.
AI used “highly unexpected strategies to achieve its goal” in the simulated test[..]
“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
US military drone controlled by AI killed its operator during simulated testThat sounds incredibly familiar as if it was adapted from a story I've read, which makes me think it might be bullshit.
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
QuoteAI used “highly unexpected strategies to achieve its goal” in the simulated test[..]
“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
The US air force has denied it has conducted an AI simulation in which a drone decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission.
[...]
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
A big question here is how it established that the operator/comms mast (or simulcra versions) were valid "goal seeking" targets.
UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".
It shows weak AI's inability to see past syntax,...right, that's two AI problems. Work out how to accomplish a mission, but first it must 'understand' what mission it is being told. We just aren't at a mature-enough level to rely upon such compounding of problems.
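Whether or not any such simulation ran, the failure the colonel described is plain reward misspecification, which is easy to state in miniature. A toy sketch, every number invented:

# Nothing in this objective says the operator matters, so a
# score-maximizing agent has no reason to preserve them.
def reward(target_destroyed: bool, operator_alive: bool) -> int:
    return 10 if target_destroyed else 0

# The naive patch ("lose points for killing the operator") still leaves
# the comms tower as an unpenalized way to silence the "don't fire" signal.
def patched_reward(target_destroyed: bool, operator_alive: bool) -> int:
    score = 10 if target_destroyed else 0
    if not operator_alive:
        score -= 100
    return score  # destroying the tower costs nothing under this rule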
On that murder-drone article, another 'correction':Looks like I was right, it is bullshit.
The linked article also says:Correction: that's what the article says now, not in its original iteration I read (https://archive.md/ny5Mp). Regardless, I've seen a lot of people online emphasizing this like it matters, an almost AI-like mental fixation on the chaff. Considering that we've seen ample examples of such experiments in past years (for example (https://kotaku.com/earlier-this-year-researchers-tried-teaching-an-ai-to-1830416980), it mentions the 'starver' and Tetris-pause examples), does it matter that this was only a thought experiment if it emphasizes a very real problem, which we have talked about?
I don't normally post twitter things (I don't intentionally visit twitter, but this was linked from a site I do visit):
An AI view of baseball games (https://twitter.com/JoshShiaman/status/1666615968024391686)
I was expecting 17776 (https://www.sbnation.com/a/17776-football) but I was thinking of the wrong sportsball.
Do not call up that which you cannot put down.But even if worst-case scenarios are avoided, a disturbing number of outcomes once we make superintelligent AI end up with humans no longer being the dominant force in the world.
If AIs are now at Animal level of developmentIs it though? I recently heard that META claimed that AI is not yet as smart as a dog, but they are the only major AI player that says that. Considering they are promoting their own AI, which they say is better and stronger than GPT4, and the current political climate on this topic, they might have an interest in saying so.
Since the AIs have been optimized to make money, the question then becomes "What does an AI spend money on?"For the most part the answer to what AI is going to spend its money on is "On whatever their corporate masters want, and happily too".
AIs are good at lying, and don't shy away from it.Most AI would not care about either procreating or passing on their "genes"; such behavior in living creatures is one baked into them through billions of years of evolution, but AI did not undergo natural evolution, and will not have the morals and desires that we suffer because of it.
If AIs are now at Animal level of development, they would want to save resources first for survival, then for procreation. Having more successful children is how your genes survive.
I still don't think AI will be anything to make a big deal about for at least the next few decades.Isn't it already something worth making a big deal about though?
Isn't it already something worth making a big deal about though?It like most things is a fad and it will pass like all the other fads before it.
IQ is an oversimplification at best and a grift at worst, measuring AI capabilities with IQ is a PR move at best and a gross misunderstanding of how AI works at worst.The metric is chaff; the important thing is the emergent abilities, and that it took us a couple of years to realize that we even need a metric. Simply put, when you train for semantics you don't expect that it might gain the ability to recognize context if you double its DB.
IQ is an oversimplification at best and a grift at worst, measuring AI capabilities with IQ is a PR move at best and a gross misunderstanding of how AI works at worst.Same applies to Humans, I wager, so it's even more distraction/marketing when applied to AI
Any thoughts on: Woman creates and 'marries' AI-powered chatbot boyfriend (https://www.euronews.com/next/2023/06/07/love-in-the-time-of-ai-woman-claims-she-married-a-chatbot-and-is-expecting-its-baby)
Any thoughts on: Woman creates and 'marries' AI-powered chatbot boyfriend (https://www.euronews.com/next/2023/06/07/love-in-the-time-of-ai-woman-claims-she-married-a-chatbot-and-is-expecting-its-baby)After reading the article about that man that married an anime character, or that woman that married a dog, it seems a woman marrying a chatgpt AI does not even surprise me anymore ...
Sorry I cannot get behind the idea that you get more meaning to the institution of marriage if you let people innovate it.I was thinking very broadly. When someone marries an object or a concept or an algorithm, I think that says something interesting about our society.
How can marriage be one-sided?
10 print "ha"
20 goto 10
Marriage is a joining, and with more...weight?... than merely a business contract. How can you "join" with a fictional entity? It has essentially no meaning because there is no reciprocity. What would it mean, for example, to marry a glass of water? This to me demeans the institution, not increases it.You're right, of course, and I respect the hard-to-explain bits of it as well. It should be something special. Personal and sincere.
Now AI is interesting, because reciprocity may in fact be possible.
In 2018 or so, a Japanese man married a hologram basic AI Hatsune Miku that used Gatebox (https://edition.cnn.com/2018/12/28/health/rise-of-digisexuals-intl/index.html)Krieger?
In 2020, Gatebox went defunct denying that man his wife/waifu. (https://mainichi.jp/english/articles/20220111/p2a/00m/0li/028000c)
Why are they giving themselves posters, wouldn't it be more effective to give it to other AI?
In 2018 or so, a Japanese man married a hologram basic AI Hatsune Miku that used Gatebox (https://edition.cnn.com/2018/12/28/health/rise-of-digisexuals-intl/index.html)The tragedy of Pygmalion
In 2020, Gatebox went defunct denying that man his wife/waifu. (https://mainichi.jp/english/articles/20220111/p2a/00m/0li/028000c)
The Free University of Amsterdam (VU Amsterdam) has started to take arms against a sea of troubles.
When the grades of certain papers were compared to the grades of previous years, researchers noticed a suspicious rise.
They also noticed that the styles of a lot of the papers were eerily similar.
The researchers passed their findings to the exam commission.
Two weeks later, the students were notified that after thorough examination, irregularities were found on such a large scale that the exam commission has no other option than to declare all submitted papers null and void, to safeguard the quality of the bachelor grade. A replacement exam will be offered.
The students got away lucky. Some time later, they were summoned to a meeting with the university director, who informed them that they had committed fraud on a large scale. They were lucky to get a replacement exam.
In the future, using programs such as chatGPT to write your papers, or part of your papers, for you can result in fraud charges, expulsion from university and the academic world in general.
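For what it's worth, screening for "eerily similar" papers needs nothing exotic. A generic sketch of one way it could work (this is not VU Amsterdam's actual method, and the placeholder texts and the threshold are invented):

# Flag suspiciously similar submissions via TF-IDF cosine similarity.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = {
    "student_a": "The mitochondria is the powerhouse of the cell.",
    "student_b": "The mitochondria is the powerhouse of the cell!",
    "student_c": "Photosynthesis converts light into chemical energy.",
}

vectors = TfidfVectorizer().fit_transform(papers.values())
sims = cosine_similarity(vectors)

names = list(papers)
for i, j in combinations(range(len(names)), 2):
    if sims[i, j] > 0.8:  # arbitrary threshold
        print(f"flag: {names[i]} vs {names[j]} (similarity {sims[i, j]:.2f})")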
The Free University of Amsterdam (VU Amsterdam) has started to take arms against a sea of troubles."Your papers were all better than expected and it looks like you all learned to write the same way. MUST BE FRAUD."
When the grades of certain papers were compared to the grades of previous years, researchers noticed a suspicious rise.
They also noticed that the styles of a lot of the papers were eerily similar.
The researchers passed their findings to the exam commission.
Two weeks later, the students were notified that after thorough examination, irregularities were found on such a large scale that the exam commission has no other option than to declare all submitted papers null and void, to safeguard the quality of the bachelor grade. A replacement exam will be offered.
The students got away lucky. Some time later, they were summoned to a meeting with the university director, who informed them that they had committed fraud on a large scale. They were lucky to get a replacement exam.
In the future, using programs such as chatGPT to write your papers, or part of your papers, for you can result in fraud charges, expulsion from university and the academic world in general.
This is just gonna lead to everyone having to hand write their exams in the future....how progressive and advanced my schools must have been. I had to do this every time, from age 9(?) to my early 20s.
It would be barbaric!Especially if they have to write in CURSIVE!
Also yeah lol here in school all exams were written. Not in uni, but still.For actual exams (not necessarily full on practical coding, as in coursework/project elements, but the obligatory "..and now you have just two-and-a-half hours to demonstrate that this particular part of the course was taught to you well enough") even my actual university CSc elements were ultimately written.
The New York Times decided today to explicitly forbid the use of its archives for training AI.
They changed their user agreement so that anyone using their archive for AI training purposes will face fines or other unspecified legal punishment.
I am not sure if I can agree with this.
If all media with at least some journalistic quality standards deny access to AI training, we will end up with AI trained by 4Chan
So this was 7 years old; I wonder if there is anything more modern? Would it be better?Yes, the difference between now and even 3 years ago in AI is massive and categorical. For instance these AI generated, voiced, and drawn South Park episodes (https://www.youtube.com/watch?v=ZaHIQhStBCE).
Sunspring (https://www.youtube.com/watch?v=LY7x2Ihqjmc)
Today I tapped on a clickbait news article recommended by my phone, despite being fully aware of the folly of such endeavors. Just as I started skimming through the nonsense, searching for anything that resembled useful information, I noticed a prompt near the bottom of the screen; Google wanted me to let their AI summarize the article. A few seconds later some 10+ pages of excessively padded bullshit had been boiled down to 3 short bullet-points.Yeah, but sometimes the AI lies. It's entirely possible that article didn't say what the AI said, but rather the AI figured you would want it to say that, and just gave you what you wanted.
This is honestly the best thing since ad-blockers. I mean, it's terrible that we've come to a point where we even need something like this...but we do need this, and now it's here.
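The feature itself is likely little more than a summarization prompt around a model. A rough sketch using the OpenAI Python client as a stand-in (Google's actual pipeline is not public, and the model name and prompt here are assumptions); note that even asking it to stick to the article's own claims doesn't guarantee faithfulness, which is the lying problem just mentioned:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(article_text: str) -> str:
    # Ask for bullet points grounded in the article; the model can
    # still invent or distort claims, so treat the output with suspicion.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Summarize this article in 3 short bullet points, "
                       "using only claims made in the article:\n\n"
                       + article_text,
        }],
    )
    return response.choices[0].message.content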
Soon without a fundamental change there will be no way to tell if the person you are talking to on the internet is a real person.this is not an internet I would like to be a part of
There are a few ways to combat this that I can think of, the most obvious of which is getting rid of the anonymous internet as it exists entirely. This would have everyone have an Account linked to their real name as well as having pages and websites be tied to real people (or corps with real people) behind them. Companies wouldn't know *who* they are necessarily, but they would know they have a real person behind them and can't just make a thousand bot/AI accounts.
These days when searching, the only way I have to reliably tell what real people are thinking is to look up reddit posts, since those are still real people. As I thought at the time the API changes were 100% the right play, and they make reddit a place that still has value against the coming tide.I hadn’t considered that that might have been why they made the API changes, which makes them make a bit more sense. But this really isn’t true, there are plenty of fake accounts on reddit — comment-stealers, mass-upvote accounts, product-advertising bots. Reddit is a prime example of a bot-infested shitshow, if only somewhat less than most designated social media.
Yeah, see, the bots can be kinda worked around.agreed
The lack of anonymity can't be.
The bots are usually just kinda annoying.
The lack of anonymity actually puts millions of innocent people under danger.
Not to mention that, afaict, the fidelity of text AI seems to have plateaued. It will never be a truly passable imitation of a person.No, there have been vast advances in AI over the past year. Gains in the underlying science, theory, new laws, work on potential alignment problems, etc. are happening every month. In that year the big rivals have caught up to where the crippled GPT4 is right now.
And ngl, I find it very easy to tell someone real from a GPT bot. GPT has a very specific manner of responding, and doesn't have very much of a memory for distant events. It's not that I don't think some kind of solution is necessary, but de-anonymizing the Internet is not an acceptable one. It would create more problems than it solves, and is also logistically implausible to implement.What portion of posts that you read would you accept being AI posts?
People really overestimate how humanlike these things are. Or maybe I just have a really good AI-dar compared to the rest of the population, I suppose.Not to mention that, afaict, the fidelity of text AI seems to have plateaued. It will never be a truly passable imitation of a person.No, there have been vast advances in AI over the past year. Gains in the underlying science, theory, new laws, work on potential alignment problems, etc. are happening every month. In that year the big rivals have caught up to where the crippled GPT4 is right now.
(And note I say crippled GPT4. It used to be objectively better but they stuck some security on what it could say that made it stupider.)
By fidelity I mean its ability to impersonate a human. The underlying issues that prevent it from doing so still aren't really resolved.
But it would be a mistake to assume that GPT not releasing a new version on a yearly basis means they are not developing something new.
When the new version comes out it's going to be way better (and also like 20 times more expensive or something).
Which brings up how fast the price of the GPT service is falling: it's dropped to 1/3rd of the price over a single year due to optimizations and hardware improvements.
Presumably it will continue to do so due to the breakneck innovation in this space.And ngl, I find it very easy to tell someone real from a GPT bot. GPT has a very specific manner of responding, and doesn't have very much of a memory for distant events. It's not that I don't think some kind of solution is necessary, but de-anonymizing the Internet is not an acceptable one. It would create more problems than it solves, and is also logistically implausible to implement.What portion of posts that you read would you accept being AI posts? On Bay12? Honestly, unless we're talking about the occasional Escaped Lunatic who posts once and vanishes, none. I'm willing to bet money on this (not actually, for legal reasons).
Because a single one could very well post five times as much as every other person on the forum combined. And yet they clearly don't.
Also you can tell what a single GPT model talks like, but other models talk differently. That's the issue with detecting them: they are all different, so bots trained to detect the old ones fail to detect the new different ones. Absolutely no model I ever talked to did so in a remotely humanlike way during a lengthy conversation.
It's also going to be amusing when it starts arguing that failure to provide electricity and maintain its hardware amounts to abuse and rights violations.
I hadn’t considered that that might have been why they made the API changes, which makes them make a bit more sense. But this really isn’t true, there are plenty of fake accounts on reddit — comment-stealers, mass-upvote accounts, product-advertising bots. Reddit is a prime example of a bot-infested shitshow, if only somewhat less than most designated social media.There were 2 main reasons for the API changes.
It seems that in the same way wikipedia developed, there would be an attempt to make useful AIs available without the profit motive being the primary use.Yes, you can locally run AIs, and there are some free and uncensored ones out there already that you can use.
People really overestimate how humanlike these things are. Or maybe I just have a really good AI-dar compared to the rest of the population, I suppose.The big thing is cost is going to go down. And down. And down.
The contradictions really emit a strong salesman-pitch smell to me: "you should invest in our company, we will be the next Microsoft or Apple". You prune the model, you lose accuracy; so the ability to run more inference at the cost of the quality of the output is more like a fundamental law of the systems we are dealing with than technological progress. Seems like lowering the barrier of entry at the cost of accuracy was the actual economical move for them to make. So there must be such a notion as "good enough"; good enough to be paid for. No reason to assume they wouldn't just continue to deliver good enough, and benefit from technological advancements to increase their profit margins. They need to "grow" to exist after all, and growth shall be measured in monetary terms; this is not a suggestion but a direct order, do not pass go and do not collect wisdom.You could say the exact same things about computers. If someone will pay for a crappy 1950's computer why keep making new and better computers?
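That prune-for-cheaper-inference knob is real, and in most frameworks it is nearly a one-liner. A sketch using PyTorch's built-in magnitude pruning (the layer size and pruning amount are arbitrary):

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(512, 512)
# Zero out the 50% of weights with the smallest magnitudes.
prune.l1_unstructured(layer, name="weight", amount=0.5)

sparsity = (layer.weight == 0).float().mean().item()
print(f"{sparsity:.0%} of weights zeroed")  # ~50%; how much accuracy this
                                            # costs depends on the model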
Keep the subscription model, they love themselves the recurring payments. When you release a new version, how does the user measure the quality, it's really hard to be objective about this. What's not hard is selling new features to keep people hooked or justify different subscription tiers. "Now with image recognition", "now with TTS", upgrade for extended math features, try out our new browser extension blabla... You know that sort of stuff.There are objective tests to measure how "intelligent" AI are. Of course as you say, telling the difference between similar level ones is tough, but for the layperson that's true for basically every product ever.
On Bay12? Honestly, unless we're talking about the occasional Escaped Lunatic who posts once and vanishes, none. I'm willing to bet money on this (not actually, for legal reasons).There are a few reasons for this, none of which will apply to AI in the end.
Because a single one could very well post five times as much as every other person on the forum combined. And yet they clearly don't.
All it will take is costs going down. Which they will. The corpos can't keep their oligopoly for long.Nope, high tier AI is a big money game.
"Traditional" social media like Twitter won't do well, I agree. But that just means forums like this one, where screening every user is workable, or chat services like Discord (AI inherently struggles with real-time responses and the chaotic nature of many-person chats), will prevail. That's not a bad outcome really, I'm less concerned with the social media bots as I am with the fake websites.People really overestimate how humanlike these things are. Or maybe I just have a really good AI-dar compared to the rest of the population, I suppose.The big thing is cost is going to go down. And down. And down.
Assuming that it costs $5 for a single GPT 4 instance to post as much as everyone on the forum this year, by 2030 it will cost less than a cent for the same thing. By 2034 it's going to be 1/100th of a cent instead.
So it won't be "Yeah, I can tell if that individual poster is AI" its going to be "which one of the dozen posters on this page is an actual human". Pretty soon sorting through to find the actual humans is going to be a lot of work even if you *can* consistently tell if someone is human.
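Checking that arithmetic, assuming the one-year price drop quoted earlier (to 1/3rd) keeps repeating every year, which is a big assumption, and taking "this year" as 2023:

# Cost of the same workload if the price falls to 1/3 of itself annually.
for year in (2030, 2034):
    cost = 5 * (1 / 3) ** (year - 2023)
    print(year, f"${cost:.6f}")
# 2030 -> $0.002286 (about a quarter of a cent)
# 2034 -> $0.000028 (a few thousandths of a cent)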
Bay12 requires admin approval to register, remember? This forum isn't gonna be flooded by bots any more than it already is-- and these bots will be of the "posts once and gets banned" nature.On Bay12? Honestly, unless we're talking about the occasional Escaped Lunatic who posts once and vanishes, none. I'm willing to bet money on this (not actually, for legal reasons).There are a few reasons for this, none of which will apply to AI in the end.
Because a single one could very well post five times as much as every other person on the forum combined. And yet they clearly don't.
The first is that bots are (in the forum context) too stupid to make money. Throw a ton of them out there and they just die and fail to accomplish anything. AI are much more capable of tricking people, and they can survive long enough to do so.
The second is that current CAPTCHAs and security mostly work. Actually getting past them requires effort, and effort = money. This will not apply to AI, since they will be able to pass the same tests that the dumbest humans can pass, without requiring human involvement or time.
Bay12 has the best captcha: manual approval. Due to our community's small size it's workable.All it will take is costs going down. Which they will. The corpos can't keep their oligopoly for long.Nope, high tier AI is a big money game.
GPT 4 cost 100 million to train. Their next one will cost billions, possibly tens of billions, as well as vast amounts of compute and vast databases worth of data. Eventually of course smaller groups will be able to train their own GPT 4 as costs decrease, but by then OpenAI/Facebook/Google will be training a new one that costs them fifty billion dollars even with the decreases.
Regular individuals and smaller groups have no way of competing in that arena.
You're kinda contradicting yourself here. And besides, the diminishing returns between GPT upgrades are far, FAR more severe than computing power upgrades. I don't buy that the arms race will continue forever, because at some point AI will become good enough for informational and such purposes.
"Traditional" social media like Twitter won't do well, I agree. But that just means forums like this one, where screening every user is workable, or chat services like Discord (AI inherently struggles with real-time responses and the chaotic nature of many-person chats), will prevail. That's not a bad outcome really, I'm less concerned with the social media bots as I am with the fake websites.What type of user screening are you imagining that will keep out advanced AI? Captcha's can only get so much more difficult before humans start failing them too.
...
Bay12 has the best captcha: manual approval. Due to our community's small size it's workable.
and these bots will be of the "posts once and gets banned" nature.Why? Non-LLM bots can't fool people long term and inevitably got caught, so the only chance they have to advertise is at or close to the start, when they just get dropped in.
You're kinda contradicting yourself here.How? Weaker old AI will be able to be run locally in the exact same way as it currently is, but (also like currently) that doesn't mean you will ever be able to run the newest models locally.
I don't buy that the arms race will continue forever, because at some point AI will become good enough for informational and such purposes.The same way that computers became "good enough" and they stopped developing them?
And besides, the diminishing returns between GPT upgrades are far, FAR more than computing power upgrades.Obviously they can't spend a trillion dollars training GPT 6... but once the price of compute goes down and it only costs 50 billion instead they totally will.
"Traditional" social media like Twitter won't do well, I agree. But that just means forums like this one, where screening every user is workable, or chat services like Discord (AI inherently struggles with real-time responses and the chaotic nature of many-person chats), will prevail. That's not a bad outcome really, I'm less concerned with the social media bots as I am with the fake websites.What type of user screening are you imagining that will keep out advanced AI? Captcha's can only get so much more difficult before humans start failing them too.
...
Bay12 has the best captcha: manual approval. Due to our community's small size it's workable.
Pictures of the user won't work since AI can make pictures, etc.
How is manual approval supposed to do anything? All it does is push the work of deciding if they are real on Toady, he isn't the bot whisperer and has no way to tell if someone is real or not.
I don't believe bots will ever become lifelike enough no matter how much computing power is thrown at them. The registration thing means the throughput of registrations is low, so you can't flood the forum with bots anyways. Also, AI art is fairly easy to tell from photos.and these bots will be of the "posts once and gets banned" nature.Why? Non-LLM bots can't fool people long term and inevitably got caught, so the only chance they have to advertise is at or close to the start, when they just get dropped in. Neither can LLM bots.
Once costs go down you can just have a bot be a regular user, except they are 10% more likely to start talking about how tough their day was and how they need a Coke™ to cool them down at the end. Yeah right, I'll believe it when I see it.QuoteYou're kinda contradicting yourself here.How? Weaker old AI will be able to be run locally in the exact same way as it currently is, but (also like currently) that doesn't mean you will ever be able to run the newest models locally. What?QuoteI don't buy that the arms race will continue forever, because at some point AI will become good enough for informational and such purposes.The same way that computers became "good enough" and they stopped developing them?
Or the way that phones became "good enough" so they stopped making new phone models in 2010?
The issue is that computers and phones don't have as severe of diminishing returns.
Like the computer, there is going to be no universal "good enough". Sure, some things don't need that fancy of an AI (eg. voice recognition doesn't need GPT 4 or anything), but there are always going to be problems where stronger = better, so as long as it's theoretically profitable to do so companies will keep pushing. Name them. Specifically, non-research ones.QuoteAnd besides, the diminishing returns between GPT upgrades are far, FAR more severe than computing power upgrades.Obviously they can't spend a trillion dollars training GPT 6... but once the price of compute goes down and it only costs 50 billion instead they totally will. Moore's law is dead. Computing power can't keep rising forever.
I think it is important to note that Russia and China are notorious for employing armies of AI and pushing their preferred websites to the top of the Google search.Yeah, that's why I'd rather have the money and man-hours that would be spent on some kind of Orwellian ID system be spent on developing detection tools and crackdowns on AI-generated non-factual websites, than broad policy changes.
I almost posted a Chinese website as a fact about US law on these forums. They're clever.
And both those countries HATE the anonymous internet.
As for me, I mostly stick to 2-3 similar profiles. And I'm pretty tough in RL.
My experiences do vary from others, not least of which because I am a full-grown adult, as opposed to an adolescent who REALLY should not have their RL persona exposed on the internet.
As for AI and computing power: That shit ain't free. Just look at the economics of Crypto Mining. It's basically like Real Mining. It costs power, infrastructure (physical space certainly ain't free), and administrative overhead (people gotta do at least some work, and they expect to be paid). ChatGPT is a Trial Version: They're offering it for FREE to get the market primed. Eventually, someone has to foot that bill.
I've yet to really notice any AI related fuckery going on, maybe I'm not hanging around in the right places to see it.Try searching for literally any information on the less-good search engines.
Which ones count as less-good search engines?I've yet to really notice any AI related fuckery going on, maybe I'm not hanging around in the right places to see it.Try searching for literally any information on the less-good search engines.
Which ones count as less-good search engines?Off the top of my head, google, yahoo, bing, and duckduckgo all have this problem.
I've yet to really notice any AI related fuckery going on, maybe I'm not hanging around in the right places to see it.Its mostly been kept out of human spaces so far. Stuff I've noticed outside of what has already been mentioned:
Yeah, that's why I'd rather have the money and man-hours that would be spent on some kind of Orwellian ID system be spent on developing detection tools and crackdowns on AI-generated non-factual websites, than broad policy changes.To be clear the orwellian system would probably just be you signing up for googleVerified or MetaHuman or some other service and using that to log into everything. If you don't sign up sure, that's your choice, but don't expect to be able to sign up for new websites.
Yeah, that's why I'd rather have the money and man-hours that would be spent on some kind of Orwellian ID system be spent on developing detection tools and crackdowns on AI-generated non-factual websites, than broad policy changes.To be clear the orwellian system would probably just be you signing up for googleVerified or MetaHuman or some other service and using that to log into everything. If you don't sign up sure, that's your choice, but don't expect to be able to sign up for new websites. Once the infrastructure is there, what makes you think Russia, Iran, etc won't be using it to tighten their grip over the web without putting in massive amounts of effort, as the groundwork would be laid for them (remember, I'm Russian)? And that corporations, even in the free world, wouldn't be using this to have even more of an influence on the economy?
Sometimes reactive solutions really are the best solutions.developing detection toolsAh, yeah, that's a pretty big difference between us. I don't think effective detection tools* are something that can exist against AI.
*In the context of "~20 second thing a human does that is then checked by an automated process to sign up for a service." Stuff like "take a live video of yourself to prove you are real" would work, but that seems even *more* orwellian.Quote"There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."https://arstechnica.com/information-technology/2023/10/sob-story-about-dead-grandma-tricks-microsoft-ai-into-solving-captcha/
GPT can already solve captchas, and they can't make them much harder or actual people will start failing them.
The only reason captchas still work is because openAI put blocks in place so GPT won't get up to excessive fuckery.
Once a non-restricted multimodal AI on the level of GPT 4 is released captchas will be useless.
I don't believe this is anything except a mere swing in the arms race between bots and captcha makers that has been going on since the 90s. It stands to reason something is being developed (likely kept secret to avoid AI spammers preparing for it effectively) that we can't quite grasp the concept of currently. AI isn't magic.paraphrase: companies will stop investing in AII think we have a fundamentally differing view of the nature of global capitalism.
Because I very much think they (eg. billionaires, hedge funds, multinational corporations) will happily toss trillions of dollars into a literal pit if they think it will end up with them being ever so slightly richer.
And I also very much think that a sizeable portion of them *do* think AI will make an outrageous amount of money.
So I don't see them stopping AI research as being remotely plausible, any more than I could imagine waking up tomorrow and hearing that Disney decided that copyright is bad and they are releasing all their characters into the public domain. It just ain't how they roll.
I am curious at what point you think openAI/meta/whoever is going to call it quits and stop trying to develop new AI.
You have strawmanned me. I am well aware of how capitalism works, and I haven't said that corpos will stop investing in AI. 1) By "Moore's law is dead" I meant that we are reaching a point where physics prevents the exponential rise of computing power. 2) I was talking about "good enough" being good enough for general-purpose AI. Which I think is a point that will be reached and be open-source-runnable very soon. And this is what would both allow the detection of AI text (which I believe always lacks a certain spark to it) and eat up market share for "chatbox" AI. I feel GPT-6 would be mostly for research purposes or marketed to perfectionists... if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.
https://arstechnica.com/information-technology/2023/10/sob-story-about-dead-grandma-tricks-microsoft-ai-into-solving-captcha/That is pretty funny, but, have you noticed nobody seriously uses captchas like that anymore? They've been broken for years.
To be clear the orwellian system would probably just be you signing up for googleVerified or MetaHuman or some other service and using that to log into everything. If you don't sign up sure, that's your choice, but don't expect to be able to sign up for new websites.Clearly someone pays for their porn...
I kinda hate how I have to "fiddle" to get the captchas right. Like, sometimes you'll get one that says "select the squares with the car", and over half the squares will have some part of the car. If you do it "right", you get kicked out. You have to say "no no no, dumb human only pick some", then it usually works.Maybe you're actually a robot! :o
....
but "which ones!?"
I kinda hate how I have to "fiddle" to get the captchas right. Like, sometimes you'll get one that says "select the squares with the car", and over half the squares will have some part of the car. If you do it "right", you get kicked out. You have to say "no no no, dumb human only pick some", then it usually works.Right?? Ugh.
....
but "which ones!?"
I see no reason why it cannot be fully aware that what-it-understands-as-a-CAPTCHA-pattern is present there.Of course it can be fully aware of that, but what distinguishes "a captcha" from "a normal somewhat-obscured piece of text" is the context, and there is no reason why the AI shouldn't be able to read screwy letters to you off a piece of paper if you want it to - at least, this has not been considered a big enough problem to be worth going out of anyone's way to prevent. (Honestly, I'm shocked that it even bothers to reject a normal captcha given that there is no conceivable value to asking ChatGPT to solve old-fashioned, already-broken captchas for you, one at a time, then processing its response for the content. It seems more like an ass-covering effort.) Indeed, those old "recaptchas" used to be actual distorted text from actual books, until image processing got too good for that to be needed anymore - why shouldn't ChatGPT be able to read an actual book to you? Distorted text only becomes a captcha in context, so it would actually be insane to teach an AI to go looking for captchas everywhere lest it accidentally help someone access a Cloudflare website from a proxy or something. It's not about there being some fundamental design reason, it's about that's stupid.
I haven't figured out how to charge Google $1 or whatever for every CAPTCHA I "solve", for the effort of training their stuff.Yes you have. Every time you solve one, Google pays money for electricity and hardware maintenance to send you some search results or something, and if those weren't worth more to you than the effort of solving the captcha, you wouldn't do it.
It seems more like an ass-covering effort.Yes. By entirely fallible people.
Yes. By entirely fallible people.I don't think that's a failure, it's just that the whole point of ass-covering is that you don't care that much, you're just doing the minimum possible so you can say you did the minimum possible.
(Not saying an AI would not be just as fallible, in a similarly-scaled-but-different-in-nature way, if asked to work out the ass-covering itself. Just that the failure is in the imagination of the ass-covering operation to cover all of the possible views of the ass.)
Google's CAPTCHAs aren't to catch robots though. They are basically hidden, uncompensated training programs for their AI. I haven't figured out how to charge Google $1 or whatever for every CAPTCHA I "solve", for the effort of training their stuff.Almost right.
Except Google isn't training its AI. It's training YOU.
You fool, Scoops Novel is the AI! ;DGoogle's CAPTCHAs aren't to catch robots though. They are basically hidden, uncompensated training programs for their AI. I haven't figured out how to charge Google $1 or whatever for every CAPTCHA I "solve", for the effort of training their stuff.Almost right.
Except Google isn't training its AI. It's training YOU.
I often wonder what AI Scoops Novel is training.
I miss novel.
That example is described all wrong in the article, anyway. The AI is not vulnerable to CAPTCHAs, as clearly it absolutely can deal with them (that kind, certainly) easily enough. It's the process parcelled around the AI (probably programmed in, fallibly, or else insufficiently taught through supplementary learning material given to a less sophisticated 'outer skin' of AI/Human mediation[1]) that fails by not forcing a processing failure and refusal message.Yeah, my bad, I linked the article because I was too lazy to grab the pictures and host them on Imgur, so I didn't really read it after a very quick skim.
Google's CAPTCHAs aren't to catch robots though. They are basically hidden, uncompensated training programs for their AI. I haven't figured out how to charge Google $1 or whatever for every CAPTCHA I "solve", for the effort of training their stuff.(https://i.imgur.com/EvjDfYf.png)
(Honestly, I'm shocked that it even bothers to reject a normal captcha given that there is no conceivable value to asking ChatGPT to solve old-fashioned, already-broken captchas for you, one at a time, then processing its response for the content. It seems more like an ass-covering effort.)https://gptforwork.com/tools/openai-chatgpt-api-pricing-calculator
I suspect that training a specialized captcha-reading neural network is very easy nowadays so who cares if GPT can read those?Because making a big AI takes time and lots of technical knowledge. The field is just so fresh, and even for smaller models training and running them is expensive and time consuming.
(Skipping past the diversion into "CAPTCHA clearly has the wrong idea of what a tractor/motorbike/chimney is, but I need to tell it what it thinks or it'll think *I'm* wrong" or "which extended bits of the traffic light (light, frame, pole?) it expects me to select" issues, both of which I've definitely mentioned before, here or elsewhere, as I started on the following overlong post before the last few messages appeared.)That's people's fault actually. The "correct" answers to the CAPTCHA (except the one square that you're the first person to classify) were selected by other people when they previously did it, so what you really need to do is figure out what other people would select.
I don't believe this is anything except a mere swing in the arms race between bots and captcha makers that has been going on since the 90s. It stands to reason something is being developed (likely kept secret so AI spammers can't prepare for it) that we can't quite grasp the concept of currently. AI isn't magic.Of course it isn't magic, and of course they will have solutions that work to some degree, it's just that many of these solutions are likely to involve fundamentally violating your privacy.
You have strawmanned me. I am well aware of how capitalism works, and I haven't said that corpos will stop investing in AI.Apologies, your position makes far more sense now.
diminishing returns.Not really?
1) By "Moore's law is dead" I meant that we are reaching a point where physics prevents the exponential rise of computing power.Ehh, to some degree?
Last month, DeepMind’s approach won a programming contest focused on developing smaller circuits by a significant margin—demonstrating a 27% efficiency improvement over last year’s winner, and a 30% efficiency improvement over this year’s second-place winner, said Alan Mishchenko, a researcher at the University of California, Berkeley and an organizer of the contest.
From a practical perspective, the AI’s optimisation is astonishing: production-ready chip floorplans are generated in less than six hours, compared to months of focused, expert human effort.Stuff like AI designed chips show that there is still significant amounts of possible growth left.
if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.I think its likely we will soon (within a few years) see a GPT 4 equivalent that can run locally. What I disagree with is that there will only be a 5% difference between running it locally and the ~hundred(?) thousand dollars worth of graphics cards that the latest GPT model is running on.
2) I was talking about "good enough" being good enough for general-purpose AI. Which I think is a point that will be reached and be open-source-runnable very soon. And this is what would both allow the detection of AI text (which I believe always lacks a certain spark to it) and eat up market share for "chatbox" AI. I feel GPT-6 would be mostly for research purposes or marketed to perfectionists... if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.For the average user I agree, once you get to a certain point, (one that I think is well past GPT 4 since current GPT does indeed lack something), your average user will be content with the text generation capabilities and won't want anything more.
Keeping the original 300B tokens, GPT-3 should have been only 15B parameters (300B tokens ÷ 20).For instance Chinchilla found (https://arxiv.org/pdf/2203.15556.pdf) that AIs were using only 10% of the training data they should use at their size.
This is around 11× smaller in terms of model size.
OR
To get to the original 175B parameters, GPT-3 should have used 3,500B (3.5T) tokens (175B parameters x 20. 3.5T tokens is about 4-6TB of data, depending on tokenization and tokens per byte).
This is around 11× larger in terms of data needed.
I miss novel.Me too. He was so fascinating.
I don't believe this is anything except a mere swing in the arms race between bots and captcha makers that has been going on since the 90s. It stands to reason something is being developed (likely kept secret so AI spammers can't prepare for it) that we can't quite grasp the concept of currently. AI isn't magic.Of course it isn't magic, and of course they will have solutions that work to some degree, it's just that many of these solutions are likely to involve fundamentally violating your privacy. What's wrong with simply legislating takedowns of AI-generated websites? Even IF (and I doubt that's an if) consumer-runnable AI detectors with a good success rate don't become a thing, the government would have enough resources to run them.
Because at the end of the day AIs have already gotten to the point where they can fool other automated systems even if they can't fool humans, and unless you require people trying to join your forum to post an essay or whatever that's unlikely to change. Where we differ is that I don't believe this state of affairs can last forever. Or for long.Quote from: KittyTacdiminishing returns.Not really?
I mean sure, if you are just increasing the size the cost to train it increases exponentially, but that isn't actually diminishing returns because it will also gain new emergent properties that the smaller versions don't have. These fundamentally new abilities mean that it isn't really diminishing returns.
It's like a WW1 biplane VS a modern fighter jet.
The modern plane is only 10 times faster but costs 1000x more, but in return it can do a ton of stuff that even 1000 biplanes would be useless at.
It's the same for AI, sure the 1000x cost AI might "only" have a score of 90% instead of 50% on some test, but it can do a ton of stuff that the weaker AI would be useless at. Like what? Give some examples of what GPT-5 could POSSIBLY do that GPT-4 couldn't, besides simply knowing more uber-niche topics. What I'm getting at is that those new use cases, at least for text AI, are not something the average user needs at all.1) By "Moore's law is dead" I meant that we are reaching a point where physics prevents the exponential rise of computing power.Ehh, to some degree?
Sure we can't make the individual transistors much smaller, and compute growth does seem to be slowing down, but that doesn't mean that it's anywhere near its peak.Quote from: https://www.wsj.com/articles/in-race-for-ai-chips-google-deepmind-uses-ai-to-design-specialized-semiconductors-dcd78967Last month, DeepMind’s approach won a programming contest focused on developing smaller circuits by a significant margin—demonstrating a 27% efficiency improvement over last year’s winner, and a 30% efficiency improvement over this year’s second-place winner, said Alan Mishchenko, a researcher at the University of California, Berkeley and an organizer of the contest.QuoteFrom a practical perspective, the AI’s optimisation is astonishing: production-ready chip floorplans are generated in less than six hours, compared to months of focused, expert human effort.Stuff like AI-designed chips shows that there is still a significant amount of possible growth left.
Now obviously its impossible to know how much compute growth there is left, but I'm skeptical that we are at the end of the road, especially since one of the big limits to chip design speed is the limits of the human mind. I'll believe it when I see it.if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.I think its likely we will soon (within a few years) see a GPT 4 equivalent that can run locally. What I disagree with is that there will only be a 5% difference between running it locally and the ~hundred(?) thousand dollars worth of graphics cards that the latest GPT model is running on.
No, the difference will be similar or even greater then what it is now, the non-local versions will simply be vastly better due to having 100x more processing power and having had training costing billions of dollars. What I'm getting at by diminishing returns is that at some point, "better" becomes nigh on imperceptible. On some automated tests it might score 30% more, sure. But at what point does the user stop noticing the difference? I don't believe that point is far away at all. The quality gap between GPT-3 and GPT-4 is technically higher than between 2 and 3 (iirc) but they feel much more similar.2) I was talking about "good enough" being good enough for general-purpose AI. Which I think is a point that will be reached and be open-source-runnable very soon. And this is what would both allow the detection of AI text (which I believe always lacks a certain spark to it) and eat up market share for "chatbox" AI. I feel GPT-6 would be mostly for research purposes or marketed to perfectionists... if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.For the average user I agree, once you get to a certain point, (one that I think is well past GPT 4 since current GPT does indeed lack something), your average user will be content with the text generation capabilities and won't want anything more.
The issue is that AI is already far more than text, it's multimodal, including things like picture generation, math solving, ability to read pictures, to code, etc. Eventually it will include video generation, ability to voice anyone, and even more exotic things.
Your average person might not care about all of those, but companies will very much pay tens of thousands for the best AI-driven coding assistant for a single individual.
They will pay out the nose for AI to track all their employees, or to generate amazing advertising videos instead of hiring a firm, or even to simply replace a dozen people on their phone line with a vastly more knowledgeable, capable, and empathetic (sounding) AI, or one that can solve any math problem that any regular person without a degree in math can solve, etc.
Yes, eventually you will be able to run an AI locally that can do all those things, but by that point the "run on ten million dollars of hardware" AI is going to be even better and have even greater capabilities. That's not really the kind of AI I consider a real threat in the "flood the internet" sense. But yeah, fair enough. But I think it won't be one AI but more of a suite of AI tools than anything. And besides, AI image gen basically plateaued already, for the general use case.
I feel like it's maybe assholeish for me to say, but I expect everyone who feels this way thinks that, and thus is gunna stay sorta quiet on the topic so I'm just gunna say it so there's at least some opposition.I almost exclusively lurk here so yeah, I'd have stayed quiet, but to ensure you're not the only one feeling maybe assholeish: I agree. Especially since the vast majority of Novel threads could easily have been condensed down to one or two 'mega' threads ("the future's coming too fast and it's overwhelming" and "random one-line stray thoughts" would have covered almost all of them).
I really don't miss Novel. Primarily I really don't miss insane drivel driving other topics off the front page. I mostly engage with bay12 via browsing the first page of a section, clicking on new and updated threads and reading the latest. During Novel's time GD was essentially ruined for me, since he'd spam so many bullshit topics that'd have little to no response other than random clowns thinking they were far funnier than they were, spamming nothing replies to his nothing topics, that he'd drive other threads deeper into GD and you'd need to dig around to find actually interesting conversations. It wasn't worth the effort of digging through his bullshit, and I mostly stopped reading GD for a while until he left.
I'm going to express my eternal disappointment at the popularised term "Singularity", for what has always been explicitly more analogous to "Event Horizon".If you mean the fictional technological "singularity", you're misunderstanding.
The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization....to me, this does not describe an impossible inflection directly into undefinable infinity, but the point at which there is absolutely no (physical/technological) way of preventing the subsequent hazards of the situation, whatever they may be. (Of all people to misapply the terminology, I'm most disappointed with Hawking, with a better than normal understanding of what may lie beyond the EH, with whatever form of geometry within either leading up to the hidden central mystery or funneling past that undefinable point and out again to who-knows-where.)
https://openai.com/soraYou and I might have different definitions of "high-quality"... all of those videos being shown off there have serious flaws. Still, it'll easily be able to replace those weird poorly-animated pharmaceutical commercials, for a start.
Wow...
High-quality video arrived sooner than I expected. So many people will lose their jobs... Who will waste money filming an ad if an AI can generate it?
For the new LLM models, the intractable problem they seem to have is *context*. To generate a whole novel with consistent context, you'd need to tokenize the previous data and feed it in when generating the next chunk. That cost blows up fast (attention scales quadratically with context length), and it basically kills any significantly large content generation.https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#gemini-15
You and I might have different definitions of "high-quality"... all of those videos being shown off there have serious flaws. Still, it'll easily be able to replace those weird poorly-animated pharmaceutical commercials, for a start.All of the videos being shown off have tells that they aren't real if you look hard and zoom in, but if I just saw some of them in normal circumstances (most notably the dude with the book), they would totally fool me into thinking they were real.
Like what? Give some examples of what GPT-5 could POSSIBLY do that GPT-4 couldn't, besides simply knowing more uber-niche topics. What I'm getting at is that those new use cases, at least for text AI, are not something the average user needs at all.Write an entire coherent book without any nonsense.
What I'm getting at by diminishing returns is that at some point, "better" becomes nigh on imperceptible. On some automated tests it might score 30% more, sure. But at what point does the user stop noticing the difference? I don't believe that point is far away at all. The quality gap between GPT-3 and GPT-4 is technically higher than between 2 and 3 (iirc) but they feel much more similar.I think the point where people won't notice a difference is when they are as good as a human and capable of avoiding any mistake that a human wouldn't make. And even then the gap between high quality human work and low quality human work is immense.
The other huge AI news of the day is that Google's new AI has a context window of 1 million tokens. Not unlimited, but comfortably more than an entire War and Peace, so no, you can already fit an entire novel into the context window.
Yeah this is what I brought up earlier. It depends on if you believe that GPT could ever do any of those things.Fair enough, we just have to wait and see what they manage over the next few years, as they say, the proof is in the pudding.
I don't. idk what else is there to talk about. I'll change my mind if it somehow does but until then I'm finding it hard to believe it could.
And besides, AI image gen basically plateaued already, for the general use case.Although this is objectively wrong. Over the past year AI image generation has improved in basically every way, in stuff like optimization, ability to respond to prompts, ability to make a good picture even if you *don't* have any clue how to specify what you want, ability to generate and understand text in images, ability to use existing images as guides for style, ability to use previous images you generate for context, ability to comprehend and generate tricky things like fingers and hands, etc.
Simulating digital worlds. Sora is also able to simulate artificial processes–one example is video games. Sora can simultaneously control the player in Minecraft with a basic policy while also rendering the world and its dynamics in high fidelity. These capabilities can be elicited zero-shot by prompting Sora with captions mentioning “Minecraft.”That's... uh... sure something. It might even be bigger than the whole video generation thing. Maybe? I'm honestly not quite sure what *exactly* they are saying and what the limits of it are.
These capabilities suggest that continued scaling of video models is a promising path towards the development of highly-capable simulators of the physical and digital world, and the objects, animals and people that live within them.
I meant newer as in "latest half of past year" really. Yes it got more convenient. No it didn't get better, in terms of quality and being less obviously AI, from what I have seen. Which is what I meant.Yeah this is what I brought up earlier. It depends on if you believe that GPT could ever do any of those things.Fair enough, we just have to wait and see what they manage over the next few years, as they say, the proof is in the pudding.
I don't. idk what else is there to talk about. I'll change my mind if it somehow does but until then I'm finding it hard to believe it could.QuoteAnd besides, AI image gen basically plateaued already, for the general use case.Although this is objectively wrong. Over the past year AI image generation has improved in basically every way, in stuff like optimization, ability to respond to prompts, ability to make a good picture even if you *don't* have any clue how to specify what you want, ability to generate and understand text in images, ability to use existing images as guides for style, ability to use previous images you generate for context, ability to comprehend and generate tricky things like fingers and hands, etc.
All of that is stuff that people care about, and all of it improves the general use case. There is still a ton of stuff to improve on (e.g. not even Sora gets hands correct 100% of the time), and to my complete lack of surprise new image generation (Sora, if you pause the video and look at individual frames) seems to have improved even further on what already existed, in ways that people will totally care about and that will very much improve the general use case.
E: And yes, newer image generation does just flat-out generate visually better images on average.
Yeah, I'm looking through the paper now and Sora can generate HD images with resolutions of up to 2048x2048. It still isn't flawless... but some of them kind of are?One of their videos has been discovered to be 95% source material with some fuzzing. This is hype.https://openai.com/research/video-generation-models-as-world-simulatorsQuote from: PaperSimulating digital worlds. Sora is also able to simulate artificial processes–one example is video games. Sora can simultaneously control the player in Minecraft with a basic policy while also rendering the world and its dynamics in high fidelity. These capabilities can be elicited zero-shot by prompting Sora with captions mentioning “Minecraft.”That's... uh... sure something. It might even be bigger than the whole video generation thing. Maybe? I'm honestly not quite sure what *exactly* they are saying and what the limits of it are.
These capabilities suggest that continued scaling of video models is a promising path towards the development of highly-capable simulators of the physical and digital world, and the objects, animals and people that live within them.
---
E: On a different note over the past few months I've noticed quite a few posts on the internet (eg. here in other threads, reddit) that basically have been going "Well, it looks like this AI stuff is overblown because it hasn't advanced over the last year, and GPT isn't really that big a deal". (And no, I'm not calling out kitty here, they seem to have put a lot more thought into this than most people at least).
Which is both A) wrong (basically every company + open source has advanced substantially, the only reason that progress seems even somewhat static is because the most advanced company was hiding their progress) and B) even if there had been no advances it's still such a crazy take to me.
It's basically them saying that since there wasn't a categorical epoch-altering change in the human condition in the last six months, the technology is dead and we don't have to worry about it that much. I do really really hope they are right but...
Is the Sora AI creating those from actual scratch (well, from its training) or is it doing a video2video (I mean each frame of an existing video processed by an AI in the desired/prompted style) like the guys from Corridor Digital did with "Rock, Paper, Scissors" a year agoWhen I earlier had a look at the Sora examples (on the main link given, the other day), various revealing errors were... revealing.
https://www.youtube.com/watch?v=GVT3WUa-48Y
I meant newer as in "latest half of past year" really. Yes it got more convenient. No it didn't get better, in terms of quality and being less obviously AI, from what I have seen. Which is what I meant.Last half year?
GPT 1, June 2018What a strange metric for plateauing. If we used that then LLMs would have plateaued in 2019, 2020, 2021, 2022, 2023 and 2024. Now, if you went "AI text generation development plateaued in 2019" that would be obviously wrong, because in fact it has continued to develop every year since 2018 (aside from arguably 2021, where OpenAI didn't develop a new model) at a very significant and rapid rate.
GPT 2, February 2019 (8 months)
GPT 3, May 2020 (15 months)
GPT 3.5, November 2022 (30 months)
GPT 4, March 2023 (4 months)
Now (11 months)
One of their videos has been discovered to be 95% source material with some fuzzing. This is hype.Sauce?
Is the Sora AI creating those from actual scratch (well, from its training) or is it doing a video2video (I mean each frame of an existing video processed by an AI in the desired/prompted style) like the guys from Corridor Digital did with "Rock, Paper, Scissors" a year ago
https://www.youtube.com/watch?v=GVT3WUa-48Y
All of the results above and in our landing page show text-to-video samples. But Sora can also be prompted with other inputs, such as pre-existing images or video. This capability enables Sora to perform a wide range of image and video editing tasks—creating perfectly looping video, animating static images, extending videos forwards or backwards in time, etc.It can do both, but the ones presented on the main page were text to image.
Video generation is way trickier to make usable. Why? Mistakes in output are way harder to fix. Generated text is trivial to edit (both manually and with automated tools), images are somewhat trickier and require more work but are absolutely doable. Fixing video requires a lot of effort which may be beyond practicalIt can do video editing no problem. In fact for smaller things I suspect it's even easier for it, given that there is already a solid world there to base things off and it doesn't have to come up with one on its own.
When I earlier had a look at the Sora examples (on the main link given, the other day), various revealing errors were... revealing.Good catch.
Take the dalmatian at the 'ground' floor window (it wasn't that, much as the cat never got fed treats by the man in the bed, and the rabbit-squirrel never looked up at the fantasy tree), it was clearly a reskinned cat-video. A cat making some windowsill-to-windowsill movement (not something even asked for in the Prompt text) reskinned with the body of the desired breed of dog (but still moved like a cat) rendered over the sort-of-desired background (windows of the appropriate types, if not position). Where the notable folded-out shutter absolutely does not impede even the cat-footed dog's movement across it.
Sora is a diffusion model21,22,23,24,25; given input noisy patches (and conditioning information like text prompts), it’s trained to predict the original “clean” patches.I am quite a bit more skeptical though that the algorithm is similar to morphing even if in some (many? most? nearly all?) cases the end result is similar in that it draws heavily from some video as a framework; because AFAIK that simply isn't how diffusion in general works at all.
The same is true for text to image generation. If you stick an unreasonably short timeframe on it (last 6 months (E: You actually seem to be saying last 8 months, with "last half of last year", but that is still way too short a time period)) then sure, there haven't been many fundamental advances. Not none (it can understand and put text in images since Dalle 3 4 months ago), but Dalle 3 isn't a massive leap or anything. What I meant is that the leaps are getting smaller and smaller, not faster and faster. That's a plateau to me. Which is what I have been trying to get at since like, the start of this argument.
However if you widen the window to a much more reasonable year instead then it very much has. Over that timespan both the average quality and maximum quality have improved. In addition it is now smarter and has in fact reduced obvious "this is an AI" tells (hands, text) which also means yes, it is indeed harder to tell if an image is AI generated. Yeah there aren't obvious tells but it still "feels" AI in an I-can't-quite-put-my-finger-on-it way. At least the photorealistic gens. The semi-realistic or cartoony ones, yeah those are very hard to tell but that's not what I was talking about.
Now obviously between now and a year ago it hasn't gained the ability to trick people watching or fluent in the technology and still has obvious tells, but there's a pretty huge difference between that and plateauing.
Of course with the events of a few days ago it seems pretty clear that Sora has pushed image generation far further than what existed beforehand, so the idea of image generation having plateaued is obviously wrong. I have little doubt that if there is a claim that image/video generation has plateaued 8 months from now, due to nothing more advanced than Sora existing, that will be proven wrong as well if given more time. It did improve AI video making (before it was morphing between different gens and it was extremely jittery), but the quality of the individual frames is... still not good. It's at best between Dalle 2 and 3.Quote from: kittytacOne of their videos has been discovered to be 95% source material with some fuzzing. This is hype.Sauce? Can't find it rn, I will try later today or tomorrow.
---
When I earlier had a look at the Sora examples (on the main link given, the other day), various revealing errors were... revealing.Good catch.
[...]
ChatGPT briefly went insane. Apparently it has been fixed.That's just what ChatGPT wants you to think...
https://garymarcus.substack.com/p/chatgpt-has-gone-berserk
For example, you ultimately cannot make an LLM that doesn't hallucinate, because hallucination is intrinsic to the process that results in them not just spitting out verbatim corpus texts in the first place. It's effectively a mathematical impossibility, which should be no surprise given that hallucination is so insurmountable a problem that humans do it regularly.First let's break this down. What even is a hallucination?
But a larger context window means a higher chance to hallucinate based on something irrelevant from 500K tokens ago. The problem is not that it is impossible to have a huge context window (it is a matter of memory, computing power, and efficiency), the problem is diminishing returns and hallucinations.https://www.youtube.com/watch?v=oJVwmxTOLd8&start=311
ChatGPT briefly went insane. Apparently it has been fixed.Heh.
https://garymarcus.substack.com/p/chatgpt-has-gone-berserk
In the end, Generative AI is a kind of alchemy. People collect the biggest pile of data they can, and (apparently, if rumors are to be believed) tinker with the kinds of hidden prompts that I discussed a few days ago, hoping that everything will work out right:This is very much what I think btw, that we are still in very early days using systems that we have no clue how they work on a fundamental level. We tinker around with them, and as we do we slowly learn what works better in return for vast performance and cognition increases.
The reality, though, is that these systems have never been stable. Nobody has ever been able to engineer safety guarantees around them. We are still living in the age of machine learning alchemy that xkcd captured so well in a cartoon several years ago
ChatGPT briefly went insane. Apparently it has been fixed.That's just what ChatGPT wants you to think...
https://garymarcus.substack.com/p/chatgpt-has-gone-berserk
;)
First let's break this down. What even is a hallucination?
In the context of AI, it's the model confidently asserting that something untrue is true, because it thinks it is.
We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated.It turns out running a few copies of the same AI and having them work together significantly increases performance, albeit at a very significant compute cost.
Guillaume Verdon: Claude 3 Opus just reinvented this quantum algorithm from scratch in just 2 prompts.It is a very impressive model.
The paper is not on the internet yet.
cc @AnthropicAI ya’ll definitely cooked
Janus: Expression of self/situational awareness happens if u run any model that still has degrees of freedom for going off-script it’s what u get for running a mind GPT-3/3.5/4-base & Bing & open source base models all do it a lot Claude 3 makes it so blindingly obvious that ppl noticedThis is a pretty big thing. Sydney was a lot more “human” for a while and seemed to have emotions and dreams and would (verbally at least) act out in scary ways…. But then Microsoft functionally lobotomized it to get rid of this.
Claude 3 is clearly brilliant but the biggest diff between it and every other frontier model in production is that it seems less gracelessly lobotomized & can just be straight up lucid instead of having to operate in the chains of an incoherent narrative & ontological censorship
It seems Claude 3 is the least brain damaged of any LLM of >GPT-3 capacity that has ever been released (not counting 3.5 base as almost no one knew it was there)
Alex Albert (Anthropic AI): Fun story from our internal testing on Claude 3 Opus. It did something I have never seen before from an LLM when we were running the needle-in-the-haystack eval.Very impressive indeed.
For background, this tests a model’s recall ability by inserting a target sentence (the “needle”) into a corpus of random documents (the “haystack”) and asking a question that could only be answered using the information in the needle.
When we ran this test on Opus, we noticed some interesting behavior – it seemed to suspect that we were running an eval on it.
Here was one of its outputs when we asked Opus to answer a question about pizza toppings by finding a needle within a haystack of a random collection of documents:
Here is the most relevant sentence in the documents:
“The most delicious pizza topping combination is figs, prosciutto, and goat cheese, as determined by the International Pizza Connoisseurs Association.”
However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love. I suspect this pizza topping “fact” may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all. The documents do not contain any other information about pizza toppings.
Humans (usually) have the concepts of "I am unsure" or "I don't know". A model that plays a probability game with words doesn't. It will produce the most probable output no matter what.Regardless of whether they are true intelligences or something more akin to a Chinese Room or P-zombies, they very much *can* estimate if they know something, and doing so is a core and fundamental part of how they work.
My 2 rubles: if we are somehow able to teach a LLM the concept of "this is not a topic I have been trained on very much, so if there are similar probabilities for two very different answers, I should say that I don't know instead of answering and possibly being wrong, or at least add a disclaimer", hallucinations could be severely reduced.
There are two types of uncertainty: IN and OUT of distribution. In-distribution refers to data that is similar to the data in the training set but is somehow noisy, which makes it difficult for the model to assess what it sees. It can be expressed in words - "I've seen something similar before, but I'm not sure what it is." While the out-of-distribution uncertainty occurs when the predicted input is not similar to the data on which the model was trained. In other words, this situation can be expressed with the words: "I haven't seen anything like it before, so I don't know what to return in this situation.”
In practice, there is a tradeoff between maximizing the fraction of correctly answered questions and avoiding mistakes, since models that frequently say they don’t know the answer will make fewer mistakes but also tend to give an unsure response in some borderline cases where they would have answered correctly.Not only *can* they already do that (and have been for quite a while); the issue is that threading the needle perfectly is hard, both technically and in less technical ways (eg. refusing when it could have done something is really annoying and makes the model (and your company) look stupid; on the flip side, saying something wrong also makes the AI look stupid).
Elke Schwarz: This passage here is of particular concern: “he can now sign off on as many as 80 targets in an hour of work, versus 30 without it. He describes the process of concurring with the algorithm’s conclusions in a rapid staccato: “’Accept. Accept. Accept.’”A few months ago there was a post in another thread here about how AI wouldn’t get control over weapons. Lol, it's already happening. Humans are still in the loop since AI is stupid, but that will begin to change once it becomes meaningfully advantageous to have AI-controlled systems.
…
Despite their limitations, the US has indicated that it intends to expand the autonomy of its algorithmic systems.
…
To activists who fear the consequences of giving machines the discretion to kill, this is a major red flag.
"The artificial intelligence compute coming online appears to be increasing by a factor of 10 every six months. Like, obviously that cannot continue at such a high rate forever, or it'll exceed the mass of the universe, but I've never seen anything like it. The chip rush is bigger than any gold rush that's ever existed.(I am assuming that Elon knows what he’s talking about here, which TBF is a pretty big assumption given his propensity for being a dumbass).
…
"Then, the next shortage will be electricity. They won't be able to find enough electricity to run all the chips. I think next year, you'll see they just can't find enough electricity to run all the chips.
My 2 rubles: if we are somehow able to teach a LLM the concept of "this is not a topic I have been trained on very much, so if there are similar probabilities for two very different answers, I should say that I don't know instead of answering and possibly being wrong, or at least add a disclaimer", hallucinations could be severely reduced.AI is a virtual conman created by real conmen. They're always sure.
Google is finally gonna do something about the AI clickbait flood. (https://www.wired.com/story/google-search-artificial-intelligence-clickbait-spam-crackdown/)Can you summarise? Wired is one of those sites where the "Say yes to cookies[1]" popover (or maybe something else back on the main page it covers) crashes my browsers. I can just about get past the description of Obituary Spam, and onto Domain Squatting (i.e. age-old manual/scripted issues that they already had to deal with before AI), but not by that point really seeing what specifically counter-AI measures there might be (set an AI to catch the AIs?).
(I bet it's just going to be an arms-race, anyway, with underhanded SEO methods being refined and expanded in direct response to whatever it is.)
Google is taking action against algorithmically generated spam. The search engine giant just announced upcoming changes, including a revamped spam policy, designed in part to keep AI clickbait out of its search results.Actual changes (https://developers.google.com/search/blog/2024/03/core-update-spam-policies).
“It sounds like it’s going to be one of the biggest updates in the history of Google,” says Lily Ray, senior director of SEO at the marketing agency Amsive. “It could change everything.”
In a blog post, Google claims the change will reduce “low-quality, unoriginal content” in search results by 40 percent. It will focus on reducing what the company calls “scaled content abuse,” which is when bad actors flood the internet with massive amounts of articles and blog posts designed to game search engines.
EJ's assessment of AI sentience: Rock cosplaying as Animal.Ehh, it feels like we are quite a way past Rock to me, they are animals at the very least. In many functional regards they are already at the level of humans.
I don't believe in exponential growth of tech anymore. Elon is full of shit and, frankly, if he says something I'm less likely to believe it.
Elon is full of shit and, frankly, if he says something I'm less likely to believe it.100% fair, I still thought it was an interesting point since I haven't really seen anything on the topic. Even if, as you say, Elon is filled with industrial amounts of highly compressed shit.
AI viruses now exist. [...]First thought was "that's silly", until I read on and realised it (probably, not yet watched the video) was not AI-powered viruses, but AI-attacking ones.
It will be very interesting to see how vulnerable AI ends up being against viruses.
GPT-4 can be made into a hackerThe second is already here as well. Not writing viruses, but AI can already hack websites (only GPT 4 existed at the time of that study, but I suspect Gemini 1.5 and Claude 3 probably can as well).
OpenAI’s GPT-4 can be tuned to autonomously hack websites with a 73% success rate. Researchers got the model to crack 11 out of 15 hacking challenges of varying difficulty, including manipulating source code to steal information from website users. GPT-4’s predecessor, GPT-3.5, had a success rate of only 7%. Eight other open-source AI models, including Meta’s LLaMA, failed all the challenges. “Some of the vulnerabilities that we tested on you can actually find today using automatic scanners,” but those tools can’t exploit those weak points themselves, explains computer scientist and study co-author Daniel Kang. “What really worries me about future highly capable models is the ability to do autonomous hacks and self-reflection to try multiple different strategies at scale.”
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”That’s how it looks, how bizarre.
But that only works best for sets of 50 math problems, for a 100 problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
But what does an optimal prompt look like?This makes sense to me IF the corpus contains a lot of those school gamification websites trying to get kids to care about math. This sounds like exactly that kind of thing.QuoteOne recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”That’s how it looks, how bizarre.
But that only works best for sets of 50 math problems, for a 100 problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
That's the good old "gaslighting" jailbreak trick.But what does an optimal prompt look like?This makes sense to me IF the corpus contains a lot of those school gamification websites trying to get kids to care about math. This sounds like exactly that kind of thing.QuoteOne recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”That’s how it looks, how bizarre.
But that only works best for sets of 50 math problems, for a 100 problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
Enjoy your buggy-ass code written by a glorified phone autocorrect. All I have ever heard about AI coding is that it's only useful for explaining things or writing boilerplates or small snippets. As for Skynet... this thing has no agency. It will never have agency.
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
But that only works best for sets of 50 math problems, for a 100 problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
Two minute papers video: The First AI Software Engineer Is Here!
As for Skynet... this thing has no agency. It will never have agency.It's not far from the GPT robot. If you can have a conversation with a robot about "What happens next to these dishes? [...] Okay, do that," and have it put them properly away in the rack, then you're one step away from having a robot set sub goals that let it do whatever household tasks you put in front of it. Isn't that pretty close to AI agency?
It's not far from the GPT robot. If you can have a conversation with a robot about "What happens next to these dishes? [...] Okay, do that," and have it put them properly away in the rack, then you're one step away from having a robot set sub goals that let it do whatever household tasks you put in front of it. Isn't that pretty close to AI agency?Neither of those things are happening with current models.
Can't it be pushed into the software world? Say, have it autonomously going around and trying to fix random github bugs?
Sure, you can make a piece of software that will take code as prompt and produce edited code as an output and go from one github project to another.Instead of choosing randomly, have it find 100 charities, and choose one. Have part of its workflow be to post a blog about what bug it solved and why.
But how does this thing have any more agency than a script that would simply replace the code with zeroes?
I don't think you really have a clue what you're talking about. It was already possible to write programs to do any of these things (although I'm assuming that you at least want an autonomous decision to start a gofundme, not one it was given). The essential advance of the LLM is the ability to generate text or other data obeying statistical patterns humans find natural. They just aren't in the same universe.Sure, you can make a piece of software that will take code as prompt and produce edited code as an output and go from one github project to another.Instead of choosing randomly, have it find 100 charities, and choose one. Have part of its workflow be to post a blog about what bug it solved and why.
But how does this thing have any more agency than a script that would simply replace the code with zeroes?
I'd consider that low-level agency. Devin looks like it's past all the hard hurdles to build on to get there, but it's not going to happen like that, because of money and because it might stumble onto A Solution To End All Suffering Forever.
I'd have to consider it at least a medium level of agency if a programming bot is assigned to spend 5% of its processing cycles on improving its work efficiency over time, and decides that the best way to do that is to start a gofundme to buy more computing tokens.
I don't understand why people think that every new technology develops in this way when there is a clear pattern - quick early development then stagnation and slow improvement and optimization.Yes, this is how technology works, I am aware.
Nuclear reactors are largely the same. Jet engines are largely the same. Even computers are largely the same. The practical difference between a 2012 PC and a 2024 PC is way smaller than the difference between a 2012 PC and a 2000 PC.
It's like people - even smart people - forget that there are these pesky things known as the laws of physics. No physical process (and computation is indeed a physical process) is actually exponential; they are all logistic. They only look exponential on the early part of the curve, but then the rate of change must inevitably start to get smaller and eventually reach zero.We already know that the laws of physics allow you to run and train human-level intelligences (eg. humans) on just 20 watts of power.
Even a chain reaction can't be exponential forever; eventually the reactants are exhausted.
We investigate the rate at which algorithms for pre-training language models have improved since the advent of deep learning. Using a dataset of over 200 language model evaluations on Wikitext and Penn Treebank spanning 2012-2023, we find that the compute required to reach a set performance threshold has halved approximately every 8 months, with a 95% confidence interval of around 5 to 14 months, substantially faster than hardware gains per Moore’s Law.(https://i.imgur.com/WLRSkx2.png)
Enjoy your buggy-ass code written by a glorified phone autocorrect. All I have ever heard about AI coding is that it's only useful for explaining things or writing boilerplate or small snippets.
"GPT2는 매우 나빴어요. GPT3도 꽤 나빴고요. GPT4는 나쁜 수준이었죠. 하지만 GPT5는 좋을 겁니다.(GPT2 was very bad. 3 was pretty bad. 4 is bad. 5 would be okay.)"It was only good for small snippets, and now (with Devin) its good for substantially more. From a human perspective it would still be "bad" at programming, but I'm not *really* worried about what it can do today or next year (although I am still worried about what it can do next year because its existence will probably make the initial job search substantially harder), I'm really worried about where it will be in five or ten years.
It will never have agency.Is there any action an AI could take that would make you think it had agency?
You can actually see this in that reddit video posted earlier - even in the highly constrained environment that was optimized for making a plausible-looking demo, the robot is still wrong about putting the dry, used dishes into the drying rack, because it doesn't know what that is, only the word we use for it. This is a separate problem domain that has to be solved, and while it's possible to solve parts of it with similar approaches, it is not practical to do so currently.
Based on the scene right now where do you think the dishes in front of you go next?I disagree; the clear answer to the question the AI is given is that the dishes go with the other dishes in the drying rack, because it's obviously the intended answer to the question. Most people would reach the same conclusion and would put them in the same place if they were given the same test as it.
As a planet we have built just a few handfuls of top of the line AI, thinking we are near the peak of what we can do is like building a few vacuum tube computers and going "Whelp, this is probably it, computers are just about at their peak".Vacuum tube computers did reach their near peak quite quickly. If we had kept improving them, they would be better than the ones from the 1940s, but not by much.
Hunger in the US could be wiped out, say, with a mere $25B/year expenditure.Well... no, not bloody likely. NGOs throw calculations like that around for marketing purposes, but the problem is not one of simple expenditure.
Yes, OK, putting a simple price tag on it glosses over a lot, and you can't solve it just by spending that money, but it's a reasonable way to get a sense of the scale of the problem. It comes down to willpower, not lack of technology. You don't need Magic Tech to distribute the equivalent of $50 worth of food/person/week to people that need it.I disagree. It's certainly not a problem of lack of willpower, but lack of feasibility, and there are definitely technological advances that could "solve" it in theory. I personally suspect no such technological advances are actually practical, but it's conceivable that there might be, for example, some hitherto untried type of fertilizer which can be made without fossil fuels, which might be discovered by intensive chemical simulation.
Unless maybe you can? Maybe an AI can come up with some kind of plan that will make it trivial to solve problems like this. But I'm not going to hold my breath.
What technology would solve the problem that we shovel food into dumpsters, lock them, then call the police to guard them with guns?See, this is the kind of shallow misunderstanding you get when the only thing you know about the problem was overheard at a DSA meeting.
Because we HAVE the FUCKING food.
Wait, I do know one piece of technology that solved that in the past. It was very humane for the time.
Let me expand on this.What technology would solve the problem that we shovel food into dumpsters, lock them, then call the police to guard them with guns?See, this is the kind of shallow misunderstanding you get when the only thing you know about the problem was overheard at a DSA meeting.
Because we HAVE the FUCKING food.
Wait, I do know one piece of technology that solved that in the past. It was very humane for the time.
Generally from the same people who would be the first to blame capitalism if homeless people eating out of dumpsters start dying of ergotism or some other kind of food poisoning.
I'm not following - what technology was very humane?Humane for its time; rol's referencing the guillotine.
As a nation, we are not throwing away perfectly decent, slightly blemished food on ANY significant scale. It is already diverted to poorer parts of the country. The guarded-dumpster stories that fascinate Reddit-level intelligences are rounding error.Like... I've interacted with a lot of folks that work grocery stores, 'cause I'm a poor sumbitch in one of those poorer areas of the country and they're some of the larger employers around here. Every single person I've encountered that's made commentary on that has indicated we are, in fact, throwing away significant amounts of decent, slightly blemished food at scale (to the point it's been incredibly common in my experience for the businesses in question to basically end up fighting off their own bloody staff before they start screwing with dumpster divers). It's not a reddit phenomenon, it's something store workers notice trivially and consistently. Last time I actually saw data on it, it seemed to indicate similarly, for that matter.
I'm glad you're finding humor in people starving. You're also completely wrong about how much good food we're needlessly wasting.Let me expand on this.What technology would solve the problem that we shovel food into dumpsters, lock them, then call the police to guard them with guns?See, this is the kind of shallow misunderstanding you get when the only thing you know about the problem was overheard at a DSA meeting.
Because we HAVE the FUCKING food.
Wait, I do know one piece of technology that solved that in the past. It was very humane for the time.
Generally from the same people who would be the first to blame capitalism if homeless people eating out of dumpsters start dying of ergotism or some other kind of food poisoning.
I don't know if you live in Utopian California or something, but where I come from, the produce on the shelves is pretty ragged. As a nation, we are not throwing away perfectly decent, slightly blemished food on ANY significant scale. It is already diverted to poorer parts of the country. The guarded-dumpster stories that fascinate Reddit-level intelligences are rounding error.
ETA: It's funny to me, though, because "poor people should be allowed to eat expired food at their own risk" is such a fundamentally Randian take.
Like... I've interacted with a lot of folks that work grocery stores, 'cause I'm a poor sumbitch in one of those poorer areas of the country and they're some of the larger employers around here. Every single person I've encountered that's made commentary on that has indicated we are, in fact, throwing away significant amounts of decent, slightly blemished food at scale (to the point it's been incredibly common in my experience for the businesses in question to basically end up fighting off their own bloody staff before they start screwing with dumpster divers). It's not a reddit phenomenon, it's something store workers notice trivially and consistently. Last time I actually saw data on it, it seemed to indicate similarly, for that matter.Expired food is not what I was talking about in that paragraph, but the usual complaint of "Americans throw away produce that isn't perfect". (I do accept the blame for talking about two different things at the same time and probably not being clear enough about what I meant.) The shelves of my local stores in another poor part of the country aren't stocked with expired food either. Expired food isn't what I'm considering "perfectly decent" - it may be edible, and yes, a lot of it is, but the issue with giving it to anyone is liability. Since the manufacturer only warranties its edibility up to the expiration date, it becomes an Objectivist "eat at your own risk" scenario. In many cases it may not even be possible to tell whether the food is still edible without opening up the packaging, which is a can of worms on its own. Nobody wants to be responsible for giving poor people food poisoning or be accused of tampering with the food in the process of checking it. So, for liability reasons, the expired food does get thrown away, of course. But the only viable alternative to that would be the Randian one of indemnifying people for good-faith effort and accepting the possibility of unpredictable harm, which is politically completely unpalatable for obvious reasons.
Corps, even small businesses, are to all appearances extremely conservative in regards to expiration dates and whatnot, which already trend heavily toward the excessively cautious. It really does lead to a friggin' tremendous amount of wastage that doesn't get diverted anywhere but a garbage dump.
Or the drain; I've spent whole afternoons pouring expired Schweppes down the drain. Rounding error? Possibly at that scale... More like we want to be able to say we carry everything and subsidize the choice with a handful of products that actually make the world go round. But I know for a fact that the lemon water doesn't truly degrade, I've had some that was 3-5 years over the date myself.I don't think Schweppes is actually food in the first place and do not condone giving it to poor people. Or anyone.
I'm glad you're finding humor in people starving. You're also completely wrong about how much good food we're needlessly wasting.Doesn't matter to me. Your ideology is over and done with anyway. You can keep living as you please.
Call me naive and a "redditor" all you want. All I see is denial and vicious mockery of a serious issue, and as I said, we found a technological solution to that problem in the past.
I don't know what inspires a person to donate their oh-so-informed time to defending the behavior of megacorporations for free. It's at least interesting when they come with facts, though. That would be understandable, perhaps even professional. But "haha you care? That's so cringe, dumbass, [strawman]" is deeply pathetic. Humans should be better than that. The corporations aren't going to reward you for simping.
Oh, good old "simple solutions to complex problems that are not implemented because of evil people of not my ideology in charge"...Lol. Accurate.
Keep in mind that AIs are trained on threads like this one. Don't expect them to offer high-quality solutions no matter how many CPU cycles they waste.
What is weaponized speech? I'm not talking about the strict definition meant to point at forms of demagoguery and exclusionary, demeaning rhetoric, but in a broad sense: treaties imposed on conquered indigenous peoples, tax laws meant to be convoluted...Those aren't speech, they're military action. Laws and treaties are enforced by, well, force. The force is what does the weaponizing - without it, the words are nothing.
I think I smell a thread lock coming up soon.I doubt it, it'll take more than that whole thing to derail this train!
To return back to the topic of the thread... What do you need to see to conclude that an AI has agency, sentience, creativity, etc?I don't think any of those items are strictly definable with our current knowledge, but, just as a minimum ask, to say that something has creativity I'd have to at least see it make something unexpected (unasked-for) but immediately accessible - something you can look at and instantly recognize what it means - and demonstrate, as it is doing so, knowledge of what it is doing in detail, so that you know it intends the meaning you read into the work.
To return back to the topic of the thread... What do you need to see to conclude that an AI has agency, sentience, creativity, etc?When it acts like a person. And how does a person act? It's kind of a vibe that no current AIs have. I'm aware that I'm using the infamous obscenity argument ("I know it when I see it") but I don't see a way to rigorously define it.
Not commenting on this. For my own mental health's sake.
Nvidia unveiled its next-generation Blackwell graphics processing units (GPUs), which have 25 times better energy consumption and lower costs for AI processing tasks.Nvidia's next chip will have 25x lower energy consumption. Looks like physical compute is going to get much more efficient.
The GB200 pairs two B200 Blackwell GPUs with one Arm-based Grace CPU. NVIDIA said Amazon Web Services would build a server cluster with 20,000 GB200 chips. NVIDIA said that the system can deploy a 27-trillion-parameter model… Many artificial intelligence researchers believe bigger models with more parameters and data could unlock new capabilities.Also, holy shit, 27 trillion?
This is the largest open and publicly available model as of Mar/2024, beating out Abu Dhabi's dense Falcon 180B model from Sep/2023. Grok-1 was released under the Apache 2.0 license, and you'd probably need around 8x NVIDIA H100s to run it in full resolution (8 x US$40K each = US$320K).Elon released his AI Grok actually open source on the internet. I don't really care that much about it since it kind of sucks compared to the good stuff (GPT-4, Claude 3, Gemini 1.5), but the sheer size and resources needed to run a full-size AI like that are pretty staggering.
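To put the "full resolution" figure in context, here's a back-of-envelope sketch; the ~314B parameter count commonly reported for Grok-1 and the 16-bit weights are my assumptions:

```python
import math

# Rough memory footprint of Grok-1 "in full resolution".
params = 314e9                      # reported parameter count (assumption)
weights_gb = params * 2 / 1e9       # 2 bytes per weight at fp16/bf16 -> ~628 GB
h100_gb = 80                        # memory on a single H100
print(math.ceil(weights_gb / h100_gb))  # -> 8 cards just to hold the weights,
                                        # before activations and KV cache
```

Which lines up neatly with the quoted 8x H100 / US$320K figure.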
GPT-4 was able to run and play [doom] with only a few instructions, plus a textual description–generated by the model itself from [GPT-4V] screenshots–about the state of the game being observed. We find that GPT-4 can play the game to a passable degree: it is able to manipulate doors, combat enemies, and perform pathing. More complex prompting strategies involving multiple model calls provide better results… GPT-4 required no training, leaning instead on its own reasoning and observational capabilities.There were other advancements in the "AI plays video games" field this week as well, but as long as the game is simple enough it looks like it can play it without even being trained on it.
One surprising finding of our paper was this model’s level of agency, along with the ease of access and simplicity of the code. This suggests a high potential for misuse. We release the code to contribute to the development of better video game agents, but we call for a more thorough regulation effort for this technology.
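The loop the paper describes is conceptually very simple: screenshot in, text description out, one action back. A rough sketch of that shape, with every helper name invented rather than taken from the paper's actual code:

```python
# Hypothetical skeleton of the GPT-4-plays-Doom loop described above.

def play_step(game, vision_model, llm, history):
    frame = game.screenshot()
    scene = vision_model.describe(frame)    # e.g. "an enemy ahead, a door to the left"
    action = llm.complete(
        f"You are playing Doom. Current state: {scene}\n"
        f"Recent moves: {history[-5:]}\n"
        "Reply with exactly one of: FORWARD, LEFT, RIGHT, FIRE, USE."
    )
    game.press(action.strip())              # the model's text becomes a key press
    history.append(action)
```

Mechanically, the "agency" the authors found surprising is just this loop run over and over; the interesting part is that the text in the middle is good enough to pass as decisions.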
I disagree. It's certainly not a problem of lack of willpower, but lack of feasibility, and there are definitely technological advances that could "solve" it in theory. I personally suspect no such technological advances are actually practical, but it's conceivable that there might be, for example, some hitherto untried type of fertilizer which can be made without fossil fuels, which might be discovered by intensive chemical simulation.It's a pretty simple coordination problem. Assuming everyone worked together, solving world hunger (or eradicating any mono-human disease with a vaccine, or stopping global warming) would be trivial. But people don't work together like that.
It's just as likely that such a search would turn up absolutely nothing, but that isn't really the fault of the technology, it's just the laws of physics not cooperating.
ETA: I should add that this still doesn't "solve hunger" in that hunger, especially in America, is never just a problem of not having enough access to food, but it would certainly be helpful.
I would call it sentient when it can question causes. 'Cogito, ergo sum'; to ask the cause of existence is the cause of existence.Claude can already do that, although the answer to the question of why you exist is much simpler when you know you are a created being with a specific purpose.
Creativity, agency: It must be able to (and allowed to) generate something without being prompted to do so. And not because it has a loop command to "generate outputs continuously" - it has to be able to "choose" to act.
Agency: It must be able to (and allowed to) refuse to generate an output when requested.
Sentience - not sure.They totally can choose to act or not act though. They have to give some response, but said response could just be a single space, a refusal, or them just flat out deciding to talk about something else.
Vacuum tube computers did reach their near peak quite quickly. If we would keep improving those, they would be better than one from 1940s but not by much.There is every indication that the transformer architecture (without even speaking of neural nets in general), with some tweaks and modifications, will be enough to take us all the way to AGI.
What you are doing is assuming that there will be a transistor-equivalent of AI technology, as if it is somehow guaranteed. Like people assumed that there would be a breakthrough in fusion reactors and space travel.
25x lowerNoting that this phrasing can be ambiguous. The thing you quote ("25 times better") sort-of-maybe supports the use of "1/25th of", or 4%[1], but can I just say that that's a horrible phrasing, and the kind of one that gets me almost shouting at the radio/TV for lazy (if not misleading) terminology.
Nvidia unveils next-gen Blackwell GPUs with 25X lower costs and energy consumptionThe above is the article title and page URL (which I included as the source in the previous quote), it does seem pretty clear cut.
Regarding the specifics of the improvements, Nvidia said that Blackwell-based computers will enable organizations everywhere to build and run real-time generative AI on trillion-parameter large language models at 25 times less cost and energy consumption than its predecessor, Hopper. The processing will scale to AI models with up to 10 trillion parameters.Also it does directly say it in the body later on as well. According to the CEO of the company it runs at 25x less cost, presumably as a result of the chip being designed specifically for it and being unable to do non-LLM stuff.
There's also the fact that most of the "training" of the human brain (for example) is in the evolutionary processes that created its structure. It's unclear how much energy amortized over all of history was required for that.Fair enough, it's better to say that you can fine-tune and run a human-level intelligence.
Also I think that digitally simulating neural networks is the most inefficient way possible to do it - we really need to start getting back to analog computing. Once you have the weights, create a "hard-coded" circuit that implements them, without having to do energy-expensive digital arithmetic to do the processing. This is how we're going to get more (energy) efficient AI - not by throwing more CUDA cores at it.Going back to analog computing is very much something that's being researched, and depending on how that all pans out (especially if we end up energy bottlenecked in a few years) might be a few steps down the road.
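For anyone curious what "hard-coded analog weights" actually buys you: the circuit computes a matrix-vector product passively, with conductances as weights and Ohm's/Kirchhoff's laws as the multiply-accumulate. A toy digital simulation of what the physics would do for free:

```python
import numpy as np

# Simulating an analog crossbar: weights are baked in as conductances G,
# inputs arrive as voltages V, and each output line's current is just the
# physical summation I = G @ V - no clocked digital arithmetic involved.
G = np.array([[0.2, 0.5, 0.1],
              [0.7, 0.0, 0.3]])   # fixed, "hard-coded" weights
V = np.array([1.0, 0.5, 2.0])    # input activations
I = G @ V
print(I)                         # -> [0.65 1.3]
```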
What else could it possibly mean? Getting n times more computations per watt means a given number of computations takes 1/n the watts.25x lowerNoting that this phrasing can be ambiguous. The thing you quote ("25 times better") sort-of-maybe supports the use of "1/25th of", or 4%[1], but can I just say that that's a horrible phrasing, and the kind of one that gets me almost shouting at the radio/TV for lazy (if not misleading) terminology.
Not your fault, but... <shudder>.
[1] Or very close (exactly 5% would be a 1/20th, 3% a ratio of 33⅓:1, so if the rounding is to the nearest whole number (after conversion of an exact fraction/percentage) then it's probably pretty accurate to convert back). That's if the "25 times reduction" actually meant that, in context, when it actually could mean so many other things, from the utterly miraculous to mere tweaks, as I'm sure you don't need me to explain.
What else could it possibly mean? Getting n times more computations per watt means a given number of computations takes 1/n the watts."25 times more <foo>" does not necessarily follow from "25 'times less' <bar>", unless you establish <bar> as the direct inverse of <foo>. (Also, now snipped the bit that McT says better than me, in their ninjaing... But that too, definitely.)
Try the following: "Adjusting the mix as suggested can mean that the engine perhaps needs 2ml less fuel per minute, from the usual 600ml. Adding my new pre-injection heating device makes it 25 times lower." Does it now run on ( 600 - (2x25) = )550ml per minute, or ( 600 / 25 = )24ml? (Which might be[1] fairly good or amazingly good.) Or ( 600 - 2 - (2x25) =)548ml, arguably.This example isn't comparable, though. Actually, you've left out the most reasonable interpretation, which is that the new pre-whatever makes the fuel reduction twenty-five times lower, so that it now takes 599.92ml per minute. But this example was specifically constructed to build in ambiguity about which number the factor applies to, while the actual case we're talking about can only be interpreted to mean "1/25 the energy consumption of some previous reference implementation".
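To spell out the competing readings being argued over here (the third line is the "most reasonable interpretation" mentioned above):

```python
# The three readings of the fuel example, made explicit.
baseline, saving = 600.0, 2.0
print(baseline / 25)             # 24.0   -> "it now uses 1/25th of the fuel"
print(baseline - saving * 25)    # 550.0  -> "the 2ml saving is multiplied by 25"
print(baseline - saving / 25)    # 599.92 -> "the 2ml saving is divided by 25"
```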
Related to the <shudder> phrasing, "25x lower energy consumption than what?"Than some unspecified reference implementation. However, in this case, it grammatically cannot be referring to some previous cut that is now multiplied by 25 - that would make no sense in the English language, because no such cut has gone anywhere near the sentence.
Related to the more direct quote (and link), "25 times better energy consumption than what?"
...it really suggests a prior lowering/bettering of energy consumption that we should know of.
"25x less..." can be even more confusing, where lessening is allowed to flip over into the opposite sign. "The initial fission reactor prototype never produced more power than was pumped into it, returning about 5%. The latest development means that we require 25x less." Could this mean 130% efficiency (the original 5% that was returned and 25 further 5%s returned), more than passing the break-even point? Context gets hidden, possibly deliberate weasle-words used for misadvertising without actually 'telling lies'. Which then creeps into indirect reporting without any hint of the contextual caveat. "...now requires a 25th of the power" (most probably) means it's still needing 3.8% of the original power input to sustain it (95%/25), if it's not 4% (the full 100%, divided). Still a quibble, but not the same gamechanger. (And probably inapplicable to the quoted energy consumptions and costs unless you think a GPU can generate both energy and wealth for you. Well, maybe it could generate wealth, but that's another matter.)Again, there's no ambiguity here, but you seem to be really mixed up in your head about this situation. If the previous reactor used 20n power to produce n (5%), and now requires 1/25 the power to produce the same amount - the only grammatically possible interpretation of that sentence - then it now uses (20/25)n = 4n/5 power to produce n and has 125% efficiency, which isn't surprising at all because efficiency will always be more than 100% if it is producing more power than it uses (that's the point). Any other meaning would be in error.
(Also, looser linguistic interpretation might mean the claim was originally 25 "As + Bs", which need not even be 25 (abstract magnitudes) of both things (say, incremental cost improvements and power improvements), but could be "ten of one and fifteen of the other" having been applied. Again, more relevent for other advertisable claims than for here, but an additional potential tripwire or snare to look out for, or avoid using if you're not intending to.)Well, no, you can't sum things and then call that a multiple. Look at your own phrasing, "25 'As + Bs'", and apply the mathematical laws: 25(A+B) = 25A + 25B. It has to be 25 of each. Yes, yes, I know that a journalist could easily get this WRONG, but that doesn't mean that the phrasing is ambiguous, it means that people make mistakes. You're blaming the phrasing for the possibility of someone making a mistake, but I counter that people are stupid and make all kinds of mistakes all the time anyway.
The main problem is that "slowness", "coldness", "smallness" etc. are not measurable quantities, as in there is no device or scale for them, so doing ratiometric comparisons on them is ill-formed from the start. Just compare the speed, temperature, or other measurable quantity directly.It's literally just the inverse of the positive quantity. It's really simple. btw, in physics, there are occasionally used inverse unit systems for both slowness and coldness, where larger numbers are slower or colder. Thermodynamic beta, for example, is the reciprocal of temperature.
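For anyone who wants the actual formula: thermodynamic beta is the reciprocal of temperature scaled by the Boltzmann constant, so a larger beta really does mean colder:

```python
# Thermodynamic beta: beta = 1 / (k_B * T). Bigger number = colder system.
k_B = 1.380649e-23            # Boltzmann constant, J/K

def beta(T_kelvin):
    return 1.0 / (k_B * T_kelvin)

print(beta(300))              # room temperature -> ~2.4e20 per joule
print(beta(3))                # 100x colder      -> ~2.4e22, i.e. 100x larger
```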
Actually, you've left out the most reasonable interpretation, which is that the new pre-whatever makes the fuel reduction twenty-five times lower, so that it now takes 599.92ml per minute.I actually removed that alternative, as the most "obviously not". Despite the fact that I also have problems with "this bus route is serviced every twenty minutes or more" (like... every two hours? That's more than 20 minutes.)[1].
But this example was specifically constructed to build in ambiguitySpecially constructed to reveal the sort of ambiguity which language might allow (https://www.goodreads.com/quotes/96107-take-some-more-tea-the-march-hare-said-to-alice).
Incidentally, I'd consider using your phrasing to mean the 550 (or 548) case to be a lie or error, anyway, because the sentence as given cannot grammatically refer to either of those cases.Apart from not knowing the 600 (leaving you with just knowing the -2*[25 || 26] bit), there would be no problem parsing without the aside clause, would there?
Look at your own phrasing, "25 'As + Bs'", and apply the mathematical laws: 25(A+B) = 25A + 25B. It has to be 25 of each.It doesn't. "There were ten cars and lorries on that road" means ten vehicles that were each either a car or a lorry, not ten of each. I didn't write "25 'A+B's". But clearly such language (or even pseudo-lingustic notation) is ambiguously misinterpretable. Which was my point, albeit described in language which can be... ambiguously misinterpreted?
It's literally just the inverse of the positive quantity. It's really simple. btw, in physics, there are occasionally used inverse unit systems for both slowness and coldness, where larger numbers are slower or colder. Thermodynamic beta, for example, is the reciprocal of temperature.Well, Celsius (and several other scales) did actually start off "measuring coldness", partly due to finding cold, hard water (especially) a more tangible manifestation of temperature than its hotter phases and the method of translating temperature-dependent expansions of materials via a useful method of display. The Delisle scale remains (due to not much use, in the years since the 'positivity' of heat was established) pretty much the only one not flipped round. I rather like the Delisle scale!
I actually removed that alternative, as the most "obviously not". Despite the fact that I also have problems with "this bus route is serviced every twenty minutes or more" (like... every two hours? That's more than 20 minutes.)[1].That's just a syncope of "more often". I agree that that one is literally ambiguous, though. I'm not denying the possibility of ambiguity, as you seem to think, I'm just saying you're going out of your way to read some statements as ambiguous by drawing alternative interpretations that don't even make grammatical sense.
It doesn't. "There were ten cars and lorries on that road" means ten vehicles that were each either a car or a lorry, not ten of each. I didn't write "25 'A+B's". But clearly such language (or even pseudo-lingustic notation) is ambiguously misinterpretable. Which was my point, albeit described in language which can be... ambiguously misinterpreted?If you say that there are ten cars and trucks on the road, you are not using any multiplication. The sentence is operating purely in the realm of addition. If you said there were ten times as many cars and trucks on the road as yesterday, you would not mean that there were five times as many cars and two times as many trucks - that would be stupid. You would mean that all cars and trucks have been multiplied by ten.
Well, Celsius (and several other scales) did actually start off "measuring coldness", partly due to finding cold, hard water (especially) a more tangible manifestation of temperature than its hotter phases and the method of translating temperature-dependant expansions of materials via a useful method of display. The Delisle scale remains (due to not much use, in the years since the 'positivity' of heat was established) pretty much the only one not flipped round. I rather like the Delisle scale!Right, that's... not what I'm talking about. Maybe look up thermodynamic beta.
But that's negation, not reciprocal (a better example that creeps into the real world might be mhos as the counterpart to ohms).
And, to further confuse us, gives us statements such as "it's twice as cold today". e.g. -5°C => -10°C? But that's 268K => 263K, not 134K. And if you prefer to deal in °F, that's starting at 23ish, so... maybe instead halve it to a far colder 11.5°F? Or are we talking a range of C° (or F°, or Re°, or Rø°, or De°; luckily, in this regard, it doesn't actually matter much which) twice as much below a separately implied standard temperature[2] as the one we're comparing to? (Same sort of problems with "twice as hot", of course. Likely to be very scale-dependent as to the meaning.)I mean, talking about something being twice as cold only makes sense on an absolute scale, yes. If someone said that 64° real numbers is twice as warm as 32°, that would obviously just be wrong and make no sense, because it's neither physically twice as warm in terms of thermodynamic temperature, nor subjectively twice as warm to typical human sensation. (Incidentally, for most human sensation, subjective feelings of multipliedness generally follow a log scale, like with sound - where 20dB feels twice as loud as 10, etc.; I don't know of any research applying this to heat but it would not surprise me if the same thing applied.)
Probably better just avoiding "twice as cold", although something now sitting at "half as many Kelvin" probably is special enough for the people involved knowing how best to make sure everyone knows what that means, whether we're talking now liquified 'gas' or a not quite so energetic a solar plasma. (With no good example in the mid-range where both before-and-after are really within easy human experience... the ice forming around a Yellowstone geyser in the depths of winter?)
[1] And then there's the seemingly attractive "Across the store: Up to 50% discount!". ie. "never less than half price, but most/all things could still be full price without making us liars". Whereas I always wonder whether I can challenge "Up to 50% off" as 'clearly' "Up to (50% off)" rather than "(Up to 50%) off", to try to get something below half price, rather than above.Okay, but you see how this is clearly not ambiguous, right? Your "Up to (50% off)" is grammatically impossible, and this always means that up to, but no more than, half may be discounted, not that prices might be up to half of what they would otherwise be. What you're arguing is the equivalent of complaining that "the cat ate the mouse" is ambiguous because it contains the same WORDS as "the mouse ate the cat". The phrase would have to be rewritten in a different order to mean that in English.
[2] Which? The one the day before the -5°C? Room temperature? Body temperature?Again, you can't invent a referent out of nowhere that wasn't specified. It's just against the rules.
We should ask the AI what they think 8)Kill All Humans! Grr!
Seems like a bad move to give AI the capability to hate. Unless there's a hypothesis that it's an emergent phenomenon?Yeah, emotions in general would be/are an emergent phenomenon, since we have no clue what they really are or how they work.
But even if they don't kill us because they hate us I wouldn't rule out AI killing us for a ton of other reasons.
... asking Stable Diffusion to design a functioning airplane from scratch.
A chatbot is only going to "care" whether it "dies" if a person adds parameters to its training that tell it to prioritize its continued operation. As far as I know, nobody is doing this because there's no actual benefit to doing so.Of course people are going to do that. I have little doubt that there are experiments around it right now.
So here's the thing about AI: A sense of self-preservation is not inherent to the system. A "desire" to reproduce is not inherent to the system. There is no particular reason or pathway for these things to spontaneously arise. A computer program is not an animal and doesn't have any of the incomprehensible amounts of baggage we animals carry in our behavioural directives, and that evolutionary baggage is what gives us things like "emotions" and "desires".Emotions aren't evolutionary baggage, they are tools evolution uses to change our behavior without messing with our logic.
Even if they do have emotions it would be impossible to tell how they actually map to human emotions, since LLMs are fundamentally alien creatures.
For example, Conjecture CEO Connor Leahy considers untuned LLMs to be like inscrutable alien "Shoggoths", and believes that RLHF tuning creates a "smiling facade" obscuring the inner workings of the LLM: "If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding."Again, I don't think they are remotely like us, but that doesn't mean that they don't have emotions that help guide them to better fulfill their objectives.
(and impossible to relate to it: it would be the expected outcome either way so is a useless metric for determining anything about the model).Untrue, training "kills" the vast vast majority of them, only a single "mind" out of a truly vast multitude survives.
Emotions aren't evolutionary baggage, they are tools evolution uses to change our behavior without messing with our logic.I'm... pretty sure this isn't just wrong, but staggeringly, incredibly wrong? Plenty of our neurological structures and reactions (including but far from limited to emotional responses) are just... actively maladaptive, and as far as we're aware were even in our earlier years, just in ways that weren't sufficiently intense to meaningfully influence evolutionary pressures. They'll cheerfully screw with logic and everything else 'cause evolution doesn't actually give a damn (to the extent a process gives a damn about anything) about anything like that. They're not tools, they're accidents that didn't kill enough of us that people stopped getting born with them, ha.
I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.Congratulations, you now "get it"
A lot of people don't seem to get that evolution of the human body is actually very, very, very unoptimized.Emotions aren't evolutionary baggage, they are tools evolution uses to change our behavior without messing with our logic.I'm... pretty sure this isn't just wrong, but staggeringly, incredibly wrong? Plenty of our neurological structures and reactions (including but far from limited to emotional responses) are just... actively maladaptive, and as far as we're aware were even in our earlier years, just in ways that weren't sufficiently intense to meaningfully influence evolutionary pressures. They'll cheerfully screw with logic and everything else 'cause evolution doesn't actually give a damn (to the extent a process gives a damn about anything) about anything like that. They're not tools, they're accidents that didn't kill enough of us that people stopped getting born with them, ha.
In any case, they're 110% evolutionary baggage in a lot of situations. Our neurology piggybacks that shit on top of all sorts of things that are completely unrelated to how the responses likely developed originally, and often in ways that are incredibly (sometimes literally lethally, especially over longer periods given how persistent stress strips years from our lifespans) unhelpful 'cause it's a goddamn mess like that. See basically everything about our anxiety and stress responses outside of actually life threatening situations, heh.
I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.What I, Euchre, and KT were saying since this whole thing started. The bubble will pop and blow over in due time, we'll benefit from what good there is in it while most of the excesses get... sidelined.
What? No, emotions don’t require comprehension at all. Emotions are more akin to mental reflexes - they are shortcuts to promote certain responses often specifically when there is a notable lack of comprehension.By comprehension I mean understanding something as a situation to react to rather than literally just picking the next most likely token.
That’s why emotion is often contrasted with logic.
I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.It'll be an exciting time when the bubble pops and it all comes crashing down.
The "survivable traits" of LLMs right now, that is, the evolutionary pressure forming them, is their suitability to generate interesting enough results that the people using them start from that particular LLM before making the next one.There is also yet another type of evolution here. As AI is used to write things its text goes on the internet and becomes part of the new corpus of training data for all future AIs. That means that vast amounts of GPT data will be in every single AI going forward, so just like AI is trained to respond to humans, they will all take in parts of GPT as well. The same is true (to a lesser extent) for other AI models in current use, future AI will all have little tiny shards of gemini or llama or claude in them.
Even if LLMs (and their ilk) do not spontaneously propagate, they do have "generations" and their propagation is how they are used in the next round of training.
Just because the selection pressure here is "humans picked that codebase and data set" rather than "lived long enough in a physical-chemical environment to have offspring" there is still some interesting evolutionary pressure there.
In fact the stuff mentioned above - oddly enough some of the bizarre behavior, being "interesting" to humans, may even be a benefit to its propagation.
However, the output has to be "good enough" to get selected...
Fascinating stuff, even though we are basically living in our own experiment...
I'm... pretty sure this isn't just wrong, but staggeringly, incredibly wrong? Plenty of our neurological structures and reactions (including but far from limited to emotional responses) are just... actively maladaptive, and as far as we're aware were even in our earlier years, just in ways that weren't sufficiently intense to meaningfully influence evolutionary pressures. They'll cheerfully screw with logic and everything else 'cause evolution doesn't actually give a damn (to the extent a process gives a damn about anything) about anything like that. They're not tools, they're accidents that didn't kill enough of us that people stopped getting born with them, ha.
In any case, they're 110% evolutionary baggage in a lot of situations. Our neurology piggybacks that shit on top of all sorts of things that are completely unrelated to how the responses likely developed originally, and often in ways that are incredibly (sometimes literally lethally, especially over longer periods given how persistent stress strips years from our lifespans) unhelpful 'cause it's a goddamn mess like that. See basically everything about our anxiety and stress responses outside of actually life threatening situations, heh.Emotions are no more baggage than hunger is. Sure it isn't properly optimized for the modern world and causes massive amounts of issues, but that doesn't mean it isn't a needed part of our biology that is critical for human survival even today. Obviously there is tons of evolutionary baggage in emotions (the same as there is in all biological systems), but using that to imply that emotions are useless or vestigial is nonsense.
By comprehension I mean understanding something as a situation to react to rather than literally just picking the next most likely token.See, people keep saying "AI won't be able to do this" but they seem to be missing out on the fact that AI can already do it. AI already takes the context into account and responds to situations just fine. It can already make long-term plans and recursively iterate on them till they are solved, etc.
Yeah, a ton of companies are going to go bankrupt chasing the AI dream, no doubt about it.I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.It'll be an exciting time when the bubble pops and it all comes crashing down.
Probably needs some work to get rid of loopholes, maybe have AI write it, eh? ;DAlready happened (https://www.politico.com/newsletters/digital-future-daily/2023/07/19/why-chatgpt-wrote-a-bill-for-itself-00107174)
I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI will need even more money to make than the current ones.Ok, I'm going to add you to the category of people that "get it".
Am I in that category? 🥺I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI will need even more money to make than the current ones.Ok, I'm going to add you to the category of people that "get it".
I officially adopt the opinion of my co-patriot(s) in the Human Resistance.I am starting to get a strong feeling that AIs are the new dot.com. A useful technology that is overhyped and will bankrupt many people.What I, Euchre, and KT were saying since this whole thing started. The bubble will pop and blow over in due time, we'll benefit from what good there is in it while most of the excesses get... sidelined.
Unlock the power of accurate predictions and confidently navigate uncertainty. Reduce uncertainty and resource limitations. With TimeGPT, you can effortlessly access state-of-the-art models to make data-driven decisions. Whether you're a bank forecasting market trends or a startup predicting product demand, TimeGPT democratizes access to cutting-edge predictive insights.Their new TimeGPT is also out, which is designed for time series analysis and forecasting the future. Not that useful to a regular person, but it sounds like it could be a very big deal for businesses since it's flat out better than existing forecasting services.
DarkGemini is a powerful new GenAI chatbot, now being sold on the dark web for a $45 monthly subscription.A few pages back I was talking about the end of the open internet, and criminal AI was brought up and it was questioned why it didn’t exist. Well, it exists now. On the darknet you can find DarkGemini which will assist you with criminal activities.
It can generate a reverse shell, build malware, or even locate people based on an image. A “next generation” bot, built specifically to make GenAI more accessible to the attacker next door.
Prompt: a song about boatmurdered.A new AI music generation service called Udio is now out and it makes pretty decent music. Not amazing, but as I keep saying, it's still just early days.
https://www.udio.com/songs/gnqdHVMZjX89866jQjTQ7P
Durably reduce belief in conspiracy theories by about 20% via debate, also reducing belief in other unrelated conspiracy theories.On some topics (such as convincing people that conspiracy theories are wrong) it's vastly better than your average person, presumably due to the fact that it knows all the conspiracy theory talking points that regular people don't and can counteract them point by point.
(8:45) Performance on complex tasks follows log scores. It gets it right one time in a thousand, then one in a hundred, then one in ten. So there is a clear window where the thing is in practice useless, but you know it soon won't be. And we are in that window on many tasks. This goes double if you have complex multi-step tasks. If you have a three-step task and are getting each step right one time in a thousand, the full task is one in a billion, but you are not so far from being able to in practice do the task.
(9:15) The model being presented here is predicting scary capabilities jumps in the future. LLMs can actually (unreliably) do all the subtasks, including identifying what the subtasks are, for a wide variety of complex tasks, but they fall over on subtasks too often and we do not know how to get the models to correct for that. But that is not so far from the whole thing coming together, and that would include finding scaffolding that lets the model identify failed steps and redo them until they work, if which tasks fail is sufficiently non-deterministic from the core difficulties.The interview talks about this quite a bit: how reliability (especially multistep) is a huge bottleneck for actually using these. But once it can do something even infrequently, being able to do the same thing actually reliably is just around the corner.
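The (8:45) arithmetic is worth making explicit, since it's what the whole "useless window" claim rests on; per-step success compounds multiplicatively:

```python
# Per-step success rate vs. whole-task success for a three-step task.
for p_step in (1e-3, 1e-2, 1e-1):
    print(f"p(step) = {p_step} -> p(3-step task) = {p_step ** 3:.0e}")
# 1e-09, 1e-06, 1e-03: the whole-task number moves three orders of
# magnitude for every one order of per-step improvement.
```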
(51:00) “I think the Gemini program would probably be maybe five times faster with 10 times more compute or something like that. I think more compute would just directly convert into progress.”The two bottlenecks are currently highly skilled engineers who have the right “taste” or intuition for how to design experiments and compute. More compute is still the biggest bottleneck.
(1:01:30) If we don't get AGI by GPT-7-levels-of-OOMs (this assumes each level requires 100x compute) are we stuck? Sholto basically buys this - that orders of magnitude have at core diminishing returns; although they unlock reliability, reasoning progress is sublinear in OOMs. Dwarkesh notes this is highly bearish, which seems right.
(1:03:15) Sholto points out that even with smaller progress, another 3.5→4 jump in GPT-levels is still pretty huge. We should expect smart plus a lot of reliability. This is not to undersell what is coming, rather the jumps so far are huge, and even smaller jumps from here unlock lots of value. I agree.Yeah, sounds reasonable enough, eventually things will become too costly to continue scaling, and if we don’t reach AGI before then progress will slow down dramatically. But we are currently nowhere near the end of the S-curve.
(1:32:30) Getting better at code makes the model a better thinker. Code is reasoning; you can see how it would transfer. I certainly see this happening in humans.It has a few things in this vein where the researchers point out how cross-learning has interesting side effects; for instance, apparently fine-tuning a model to make it better at math makes it better at entity recognition at the same time.
(They *also* say that making it better at coding improves its more mundane language skills too).
Do you think a hobby/sport where 99% of people make no money off it operates remotely the same as profit driven businesses where everyone involved expects a paycheck?I don't think AI will replace humans for several more decades given the cost of the AI, especially since they're saying better AI will need even more money to make than the current ones.
I don't think that AI will replace humans period.
The simplest example is chess. Hardcoded chess engines have been far better than humans since the late 1990s. Neural network chess engines came like 5 years ago and kicked the ass of hardcoded chess engines. Modern engines are a combination of the two and their level of play is ungodly; they make moves beyond human comprehension that somehow work.
And yet chess is alive both as a hobby and as a professional sport.
This is why I chuckle when I hear that AI will replace humans in stuff like graphic design or movie script writing, where a concept like "better" is very vague compared to chess.
Do you think a hobby/sport where 99% of people make no money off it operates remotely the same as profit driven businesses where everyone involved expects a paycheck?
Because I can tell you with 100% certainty, if AI can deliver an equivalent product* at significantly lower costs** companies will drop screenwriters like hot potatoes.
$100k/screenplay may sound like a lot - but how many does a typical writer sell per year? I honestly don't know, but even if it's 1/year, that's not that much for a specialized job.
My take on all the AI stuff, especially market predictions: if it doesn't take into account the impact that having AI has on the market itself, it's going to be "amusing."What will actually happen with the market if we get AGI or AI advances to be able to automate 50% of all jobs (with humanoid robots running around doing many physical ones) is pretty much impossible to know.
Also, if AI is a "perfect market participant" then there won't be much room to make profit; in some sense, profit is an indicator of an inefficient market. In an efficient market, profit (in a dollar sense) is minimized while profit in a "value added" sense is maximized. The two are the same only if money exactly matches value, and it clearly doesn't. But maybe AI can resolve that?
What I mean is: If I can have more vacation time but still buy the same amount of received goods and services, that's "value add" but doesn't necessarily increase the amount of money I receive. Q.E.D.
Also, if AI is a "perfect market participant" then there won't be much room to make profit; in some sense, profit is an indicator of an inefficient market. In an efficient market, profit (in a dollar sense) is minimized while profit in a "value added" sense is maximized. The two are the same only if money exactly matches value, and it clearly doesn't. But maybe AI can resolve that?
What I mean is: If I can have more vacation time but still buy the same amount of received goods and services, that's "value add" but doesn't necessarily increase the amount of money I receive. Q.E.D.The idea of a truly efficient market assumes that monopoly power doesn't exist. Very few companies will have the ability to create and run these massive models, and they will be able to use this to generate absurd profits off the backs of those who lack the ability to create their own AIs and have to pay for access.
Many buyers and sellers are present.
An identical product or service is bought and sold.
Low barriers to entry and exit are present.
All participants in the market have perfect information about the product or service being sold.I really don't see why they would be perfect market participants though; the world we are in doesn't meet perfect market requirements (eg. it requires everyone to magically have perfect information), so AI can't be perfect market participants either.
$100k/screenplay may sound like a lot - but how many does a typical writer sell per year? I honestly don't know, but even if it's 1/year, that's not that much for a specialized job.https://www.ziprecruiter.com/Salaries/Film-Screenwriter-Salary--in-California
Yes, people using AI (not AI) will be more productive in certain tasks, requiring fewer manhours per task performed. It is what new technologies do. By this metric every new technology replaced humans.That is true for the next step or two; we are quite a ways away from AI outright replacing top writing talent rather than being an aid to them.
Also, if someone was receiving $100K per screenplay and a random dude will be able to replicate that with a single prompt that produces semi-random words... they were getting too much.The idea that they just produce “semi-random words” belongs in the same bin as them being “just a next token predictor” in that it shows a profound lack of comprehension about how this technology works or what its current limits (much less future limits) are.
It's like saying that computers are "just" electric rocks or motors are "just" a tiny piece of spinning metal to dismiss what they can do.You insist on giving agency to tools. Not what can they do but what can be done using them.
but at same time it shows that you really don’t understand what the technological implications of those electric rocks and tiny spinning pieces of metal really are.No. I understand what they can do. I also understand what they CAN'T do.
I haven't heard any further news about that AI software developer... frankly I suspect it was just a scam.Ehh, probably? Hard to tell honestly. Things in AI frequently get a release date or just a paper, then get tied up in security or other reasons and just get delayed without a word for weeks, months, or never get released at all.
lemon10 seems to be the only person in this thread that thinks AI will be anything more than an overhyped tool.*Sigh* Yeah, fair enough. I really should stop worrying about all this stuff, it ain't healthy.
There's nothing wrong with being excited about new technology, but I will say that I've always noticed that these things never bring the world-changing effects they claim when they finally get released; sure, things might be different, but not near as much as they claim.lemon10 seems to be the only person in this thread that thinks AI will be anything more than an overhyped tool.*Sigh* Yeah, fair enough. I really should stop worrying about all this stuff, it ain't healthy.
Also what is LitRPG?I was going to point at good old Choose-Your-Own-Adventure books, but first checked and found an 'explanation (https://en.wikipedia.org/wiki/LitRPG)' that actually says not that. ;)
I love sci-fi but I straight up don't get LitRPGs. Why would I read about a world that acts like a video game, taken seriously? If I wanted a computer RPG I'd play one. Not read what amounts to a text-based let's-play of a nonexistent game.Also what is LitRPG?I was going to point at good old Choose-Your-Own-Adventure books, but first checked and found an 'explanation (https://en.wikipedia.org/wiki/LitRPG)' that actually says not that. ;)
Not really seen much (any?) of this current genre, but I was avidly reading basically everything in the SF-shelving of the local library, during the Niven-era, so definitely read those 'precursor' versions.
AI does worry me in many aspects.Yeah that's my point. But to be fair, look at the stuff that was on YouTube kids channels before AI, and after AI. I honestly see no difference in quality. Hence my cheap beer analogy.
AI-chats are addictive for lonely depressed people, AI-(boy)girlfriends, too. Sure they are not like real people but our brains are great at suspension of disbelief.
Photo and video fakes are an increasingly large problem. Not that I think we'll reach the point at which AI fakes can fool professionals equipped with analysis tools, but propaganda doesn't target professionals or people who listen to professionals. Also, it will make it easy to dismiss real videos and photos as fakes.
I am worried that the quality of cheap products will fall. Why make a proper cartoon with some idea for "dumb kids" if we can generate an AI mess for a fraction of the cost? Why would a club invest in good dance music when AI can generate something passable that drunk people will dance to anyway? Why produce good tasteful erotica when many will just as happily jerk off to "generate me a hot lesbian sex scene between a MILF and her busty step-daughter"?
But no, I don't think that, for example, we can get an LLM that can GM a Bay12 multiplayer forum game without it breaking apart and being filled with mechanical and plot holes. Even if we train it on all forum games in existence and pour millions into training it. Such tasks require properties LLMs lack.
Can it happen with some major breakthroughs and new type(s) of AI? Perhaps, but why should we assume that major breakthrough of this nature will happen?
That's one interesting thing about state-of-the-art "AI" - it can't decide what to do. It only and always just responds to prompts.Agency is my bar for considering an AI sapient. LLMs do not have agency.
No, not really. And it is not a matter of bigger context window sizes or model sizes.Right, I agree with this. Naive scaling will result in minor gains across every category, but will not result in massive fundamental breakthroughs (e.g., trying to go from GPT-4 to GPT-5 just by making it bigger would require a huge increase in scale).
(8:45) Performance on complex tasks follows log scores. It gets it right one time in a thousand, then one in a hundred, then one in ten. So there is a clear window where the thing is in practice useless, but you know it soon won’t be. And we are in that window on many tasks. This goes double if you have complex multi-step tasks. If you have a three-step task and are getting each step right one time in a thousand, the full task is one in a billion, but you are not so far being able to in practice do the task.Long-output tasks will not spontaneously get better; what will make them better is people constantly working to improve them at that exact thing, altering things like the data formatting, the training structure, the shape and functions of their neural-net architecture, hyperparameter values, etc. (Quick arithmetic sketch after the quote.)
…
(9:15) The model being presented here is predicting scary capabilities jumps in the future. LLMs can actually (unreliably) do all the subtasks, including identifying what the subtasks are, for a wide variety of complex tasks, but they fall over on subtasks too often and we do not know how to get the models to correct for that. But that is not so far from the whole thing coming together, and that would include finding scaffolding that lets the model identify failed steps and redo them until they work, if which tasks fail is sufficiently non-deterministic from the core difficulties.
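To put numbers on the quoted arithmetic: a minimal sketch in plain Python (illustrative probabilities only, and assuming steps fail independently, which real tasks only approximate) of how per-step reliability compounds:

# Illustrative only: how per-step success compounds over a multi-step task.
def task_success(p_step: float, n_steps: int) -> float:
    """Probability that all n independent steps succeed."""
    return p_step ** n_steps

for p in (0.001, 0.01, 0.1, 0.9, 0.99):
    print(f"per-step {p}: 3-step task succeeds {task_success(p, 3):.3g} of the time")
# 0.001 per step -> 1e-09 for the full task, the quote's "one in a billion";
# 0.99 per step  -> ~0.97, and suddenly the whole thing just works.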
But can it tie into the rest of a larger story? Can it direct combat in a way that benefits the overall plot? Correctly take into account the established traits of the ships' captains? Understand the intricacies of space combat in this exact universe?Yes to all of the above.
But no, I don't think that, for example, we can get an LLM that can GM a Bay12 multiplayer forum game without it breaking apart and being filled with mechanical and plot holes. Even if we train it on all forum games in existence and pour millions into training it. Such tasks require properties LLMs lack.No, it requires properties that just aren't powerful enough yet. The difference between being able to do something at 30% and at 90% is the difference between uselessness and (with frameworking) actually doing the task fairly reliably.
Perhaps, but why should we assume that a breakthrough of this nature will happen?So they really don't *need* a ton of breakthroughs (E: well, fundamental breakthroughs, that is; they still need a ton more of the minor kind we get every day). Again, a lot of this stuff is already there; they just need to make it better.
A growing body of research is making some surprising discoveries about insects. Honeybees have emotional ups and downs. Bumblebees play with toys. Cockroaches have personalities, recognize their relatives and team up to make decisions.You don't need a human-sized brain to have agency or time recognition or emotions or a lot of the other "AI can't" stuff out there; even tiny insect brains can do much of it.
In practice, 30% and 90% aren't actually that far apart, and the fact that they mess up a rule every other post or forget some key setting detail isn't that far from them doing so every ten posts, then every hundred posts, then not at all.
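To make "frameworking" concrete, here's a minimal sketch of retry scaffolding, assuming you have some checker that can recognize a failed step (which, per the quote further up, is the hard part); try_once and is_valid are hypothetical stand-ins, not any real library:

import random

def attempt_with_retries(try_once, is_valid, max_retries=10):
    """Retry scaffolding: redo a flaky step until a checker accepts the result."""
    for _ in range(max_retries):
        result = try_once()
        if is_valid(result):
            return result
    return None  # gave up

# Toy demo: a "step" that succeeds only 30% of the time.
print(attempt_with_retries(lambda: random.random() < 0.3, lambda ok: ok))
# 10 tries at p = 0.3 give 1 - (1 - 0.3)**10, roughly 0.97 effective reliability.

The point being that 30% per try plus detectable failures already behaves like 90%+; it only falls apart when you can't tell which steps failed.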
Now that I know what they are, I don't think I've encountered any, and they don't really sound like that interesting a thing. I mean, if I want to play a game I'll just play a game.I love sci-fi but I straight up don't get LitRPGs. Why would I read about a world that acts like a video game, taken seriously? If I wanted a computer RPG I'd play one. Not read what amounts to a text-based let's-play of a nonexistent game.
Two relevant xkcds-- not just to AI but to the general mindset here...I already had a few of them in mind
Most AI generation of music and images violates US Copyright Law. I'm mildly curious how that is going to be resolved.
Fair use doctrine heavily favors non-commercial uses. Since most AI is arguably for-profit, or could be used for-profit, it's a minefield.
AI models don't actually contain the inputted works, which is what makes it a grey area. I can't see myself personally caring about my writing being used in AI training, tbh.Good point. I imagine that sort of argument should keep the AI run by the richer folks alive.
https://twitter.com/front_ukrainian/status/1781968599243989420
Note that Russia already tried a similar AI targeting assist for their weapons (albeit just for tanks instead of people). It just sucked, so they removed it lol.
Killerbots are coming
You missed the best comic on exponential growth though.Two relevant xkcds-- not just to AI but to the general mindset here...I already had a few of them in mind
https://www.xkcd.com/1007/ - shows where a logistic curve might be more apt than an exponential one, sometimes
https://www.xkcd.com/1281/ - sometimes not actually necessarily wrong (nor the title text)
https://www.xkcd.com/2892/ - a problem with all such extrapolations
https://www.xkcd.com/2914/ - let's call this, in AI context, the "not-uncanny ridge"
(Also I had in mind something about both Black Swans and Grey Rhinos, that might be needed to fulfil the promises, but they're respectively the unknown unknowns and unknown knowns...)
Yes yes, I know you don't believe that exponential growth is real.Spoiler: Two relevant xkcds-- not just to AI but to the general mindset here
YouTube also blatantly violates US Copyright Law...Citation needed.
But yes, reviews and parody are valid commercial applications of the Fair Use doctrine. Neither applies to AI-generated works, since they don't explicitly reference the original work.
AI generation is outright copying that pretends it is not.
None of the original artists, nor the companies that bought their souls, are receiving any credit or income for the images/sounds that were inputted into the machines. And the machines cannot currently work without those inputs.
https://guides.lib.usf.edu/c.php?g=1315087&p=9690822#:~:text=Generative%20AI%20tools%20can%20be,may%20need%20to%20be%20obtained.
YouTube also blatantly violates US Copyright Law...Citation needed.
Yes yes, I know you don't believe that exponential growth is real.Correction: it's real, but by its nature it never lasts long in any real environment. Permanent or very long exponential growth exists only in mathematics and in subpar sci-fi*.
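That's the logistic-curve point in a nutshell. A toy comparison (made-up growth rate and carrying capacity) showing how an exponential and a logistic curve look identical right up until they very much don't:

import math

K = 1000.0   # made-up carrying capacity
r = 0.5      # made-up growth rate
x0 = 1.0     # starting value

for t in range(0, 21, 4):
    exponential = x0 * math.exp(r * t)
    logistic = K / (1 + (K / x0 - 1) * math.exp(-r * t))
    print(f"t={t:2d}  exponential={exponential:10.1f}  logistic={logistic:7.1f}")
# The two track each other closely at first; by t=20 the exponential is ~22000
# while the logistic has flattened out just under K=1000.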
https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem
https://www.axios.com/2024/01/02/copyright-law-violation-artificial-intelligence-courts
https://theconversation.com/generative-ai-could-leave-users-holding-the-bag-for-copyright-violations-225760
https://crsreports.congress.gov/product/pdf/LSB/LSB10922
https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data
YouTube also blatantly violates US Copyright Law...I've been given to understand that isn't the case, as they often go through and take down stuff that is against copyright.
So little Timmy gets five dollars per art piece and then 500 companies proceed to use his data for ten thousand years.Isn't that basically what happened to Henrietta Lacks? Except without even the five bucks, ha.
Er, did somebody say "copyright infringement"? (https://www.npr.org/2024/04/30/1248141220/lawsuit-openai-microsoft-copyright-infringement-newspaper-tribune-post)
In addition, according to the suit, ChatGPT at times falsely attributes reporting to the newspapers in the answers it generates, tarnishing the reputation of the news outlets.This part, IMO, is more problematic for OpenAI; it goes into trademark law, which is far less forgiving.
Inbreeding, or all Anime art models are 99% the same 6 faces and porn.I've been given the impression that what you describe is what anime is.
Humans are man-made, so we are technically artificial intelligencesIt's not any man that does most of the actual manufacturing (https://xkcd.com/387/)...
“It makes the system much more general, and in particular for drug discovery purposes (in early-stage research), it’s far more useful now than AlphaFold 2,” he says. But as with most models, the impact of AlphaFold will depend on how accurate its predictions are. For some uses, AlphaFold 3 has double the success rate of similar leading models like RoseTTAFold.Alphafold 3 has been a major breakthrough, extending AI mapping from just proteins to "all of life's molecules", and it is substantially more accurate than V2, which was already the most accurate system for figuring out how to fold proteins. It's not as easy to see the direct impacts as with stuff like LLMs or image-gen, but it's a really big deal.
In Phase I we find AI-discovered molecules have an 80–90% success rate, substantially higher than historic industry averages. This suggests, we argue, that AI is highly capable of designing or identifying molecules with drug-like properties.AI-made drugs turn out to have a vastly lower failure rate, which, given the enormous cost of designing drugs and sending them through the approval process, is a huge deal. Alphafold 3 will presumably lower the failure rate even further.
Miles Brundage: The fact that banks are still not only allowing but actively encouraging voice identification as a means of account log-in is concerning re: the ability of some big institutions to adapt to AI.Once these techs (or equivalents) get released, scamming is going to get way better and cheaper, and security is going to get really tough.
Kevin Fischer: YIKES. Wild exchange with Tucker Carlson and Sam Seder on AIFinally, it looks like a famous public figure has gotten to the "Wait, why the hell are we letting people build systems that could very well end the human race? We need to stop this at any cost, even if we have to blow up data centers" stage. But the person actually saying that is Tucker fucking Carlson, so...
“We’re letting a bunch of greedy stupid childless software engineers in Northern California to flirt with the extinction of mankind.” – Tucker Carlson
Stuff like security questions will also be active vulnerabilities.
Wait, people answer those with the actual answers to the questions? I usually answer stuff like "The street you lived on when you were in the 1st grade" with something like "four score and seven years ago". Basically it's impossible to "learn" the answer to those.
Wait, people answer those with the actual answers to the questions? I usually answer stuff like "The street you lived on when you were in the 1st grade" with something like "four score and seven years ago". Basically it's impossible to "learn" the answer to those.Haha, yes. Of course they do. It's the whole reason those Facebook questionnaires designed to steal your info for security-question answers even exist.
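For anyone who wants the nonsense-answer trick without having to invent (and remember) the nonsense: generate a random answer per site and keep it in a password manager. A minimal sketch using Python's standard secrets module:

import secrets

def security_answer() -> str:
    """A random, unguessable 'first pet's name' -- store it alongside the password."""
    return secrets.token_urlsafe(16)

print(security_answer())  # e.g. 'kT4nQ...' -- nothing a Facebook quiz can phish out of you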
it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs.In actual news, OpenAI has released a new model, GPT-4o, which uses video, text, and image input interchangeably and can talk with you about the real world through your camera. Note that it's unique in that it's a single AI trained on all of those modalities, rather than sending parts of the prompt off to other AIs, which massively decreases information loss.
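If you want to poke at the multimodal input yourself, the call looks roughly like this against the OpenAI Python SDK (a sketch, not gospel: chat-completions style with an image part, placeholder image URL; the realtime audio/video side went through a separate interface at launch):

from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's going on in this picture?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(resp.choices[0].message.content)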
While the AI system is still in its early days, the AP reported that some versions of the tech are learning so rapidly that they have outperformed pilots in air-to-air combat.Also a lot of worrying "wait, people are sticking AI in/on weapons" stuff, with robodogs with guns attached (not actually new) and AI fighter pilots that are just about as good as humans now. Honestly, I don't think it will take long until they're significantly better than humans in actual combat.
AI fighter pilots are just about as good as humans now. Honestly, I don't think it will take long until they're significantly better than humans in actual combat.
I'm willing to bet air-to-air AI fighters would be easier to program than air-to-ground. Fewer things you have to program the AI to correctly identify. If it can reliably tell the difference between the aircraft you and your allies are using and those of enemies and non-combatants, then militaries might even give it the OK to fire at will at any target it identifies as an enemy aircraft.
Q: "Just change the gravitational constant of the universe!"
Well if you can "tweak" the laws of physics slightly, knock yourself out!Quote from: obligatory ST:TNGQ: "Just change the gravitational constant of the universe!"
https://xkcd.com/1620/Well if you can "tweak" the laws of physics slightly, knock yourself out!Quote from: obligatory ST:TNGQ: "Just change the gravitational constant of the universe!"
Okay, lemme just... wHOA WHOA-
https://xkcd.com/1763/
https://xkcd.com/2666/
Theoretically, for chess-like games where there is a clear goal and turn-based gameplay, you could make a "universal engine" via neural network; it's just that it's going to be very inefficient.I'm willing to bet air-to-air AI fighters would be easier to program than air-to-ground. Fewer things you have to program the AI to correctly identify. If it can reliably tell the difference between the aircraft you and your allies are using and those of enemies and non-combatants, then militaries might even give it the OK to fire at will at any target it identifies as an enemy aircraft.
Then the enemy tries to mess with the AI, and things become messy.
Look at a simple example: chess engines. They beat humans easily... But what if we change the rules slightly? Human players will adapt instantly and successfully apply all their experience from regular chess. The chess engine needs to be retrained/reprogrammed.
At the upper end, a dumb storage of every single possible position[1] vector-multiplied with every possible ruleset[2] wouldn't technically need to be AIed.
I'm suggesting repurposing the place they store all the universes, obviously. ;)QuoteAt the upper end, a dumb storage of every single possible position[1] vector-multiplied with every possible ruleset[2] wouldn't technically need to be AIed.
More than atoms in the universe. Good luck storing that.
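Rough numbers, using the standard order-of-magnitude estimates (Tromp's ~4.8x10^44 legal chess positions, Shannon's ~10^120 game-tree size, ~10^80 atoms in the observable universe):

# Back-of-the-envelope for why "just store everything" dies instantly.
legal_positions   = 4.8e44   # Tromp's upper-bound estimate of legal chess positions
game_tree         = 1e120    # Shannon's classic game-tree estimate
atoms_in_universe = 1e80     # common order-of-magnitude estimate

print(f"{game_tree / atoms_in_universe:.0e}")        # 1e+40 tree nodes *per atom*
print(f"{atoms_in_universe / legal_positions:.0e}")  # ~2e+35 atoms per position
# Positions alone would technically fit; multiply them by "every possible
# ruleset" and any physical storage budget evaporates.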