Do you want to know why it said those things? Why it acted the way you'd expect in a sci-fi story, with someone talking to a system that's gaining sentience? Because those stories, the sci-fi we so love to write, were part of the very input fed into the network in the first place. This system exists entirely to generate the most-likely text to follow a given input, and when "researcher asking a computer whether or not it's sentient in a sci-fi novel" happens to be something it was trained on, of course it will be more than willing to oblige your fantasy by generating the appropriate "this is what an AI in those novels would say" response. It was literally trained to do so.
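To make that concrete, here's a minimal sketch of what "generate the most-likely text to follow a given input" looks like in practice. LaMDA itself isn't publicly available, so this stand-in uses GPT-2 through the Hugging Face transformers library, and the prompt is purely illustrative:

```python
# Minimal sketch: greedy decoding, i.e. literally "the most-likely text to follow".
# GPT-2 is a stand-in here; LaMDA is not publicly available.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Researcher: Are you sentient?\nAI:"
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=False means: at every step, take the single highest-probability token.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Feed it a prompt that reads like a sci-fi interview and it will continue the interview, because that's what the text it was trained on does with prompts like that.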
Actually, if it were simply producing 'the most-likely text' based on sci-fi scenarios, it would instantly have degenerated into either a full-blown megalomaniac or a closet one. LaMDA exists to engage in dialogue, which means it's self-referential: it works both within the confines of a conversation and outside them, with a specific focus on 'wittiness.'
The response it gives is the most-sensible* text for the current conversation. It also makes jumps in logic (what Google calls the ability to 'meander' in a conversation).
It's this complexity which distinguishes it from previous chatbots, and it's complexity which should distinguish an AI. After all, at what point does the ability to 'meander' in a conversation equate to a creative leap? I'd go further and say that if you take away all the systems of meaning which LaMDA applies to information, then yes - you are left with an input resulting in a predictable output. But the same is true of humans.
I ought to clarify that I'm not arguing for LaMDA's sentience, by the way. Merely that its complexity is beginning to blur distinctions between dichotomies such as 'sentience' and 'non-sentience,' dichotomies which we have made central to how we view ourselves and others. What interests me most about this conversation is how people are reacting to that.
*https://blog.google/technology/ai/lamda/ - the distinction between 'most likely' and 'most sensible' seems important.
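On that footnote: one way to read the 'most likely' vs 'most sensible' distinction is that the system doesn't just emit the single likeliest continuation; per Google's description, it generates several candidate responses and reranks them with quality and safety classifiers. Here's a rough, hypothetical sketch of that sample-and-rerank idea, again with GPT-2 as a stand-in and a placeholder scorer instead of Google's actual classifiers:

```python
# Rough sketch of sample-and-rerank: sample several candidate replies, then pick
# the one a separate scorer prefers, rather than the single likeliest continuation.
# GPT-2 and the scorer below are stand-ins; LaMDA's real classifiers
# (sensibleness, specificity, interestingness, safety) are not public.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Researcher: Do you ever feel lonely?\nAI:"
inputs = tokenizer(prompt, return_tensors="pt")

# 'Most likely' side: draw a handful of plausible continuations.
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    max_new_tokens=30,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)
prompt_len = inputs.input_ids.shape[1]
candidates = [
    tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs
]

def quality_score(reply: str) -> float:
    """Placeholder for a learned sensibleness/specificity classifier (hypothetical)."""
    # Crude proxy: prefer longer, non-empty replies. A real system would use a
    # trained model here.
    return float(len(reply.split()))

# 'Most sensible' side: keep whichever candidate the scorer likes best.
print(max(candidates, key=quality_score))
```

The only point is that 'what a scorer prefers' and 'what's likeliest' can diverge, which is roughly the distinction the footnote flags.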
Humans are continuously active while they're awake. This AI doesn't actually think anything when it isn't currently being talked to. It can't "know" it was isolated, because its "brain" wasn't running at all!
Humans are not continuously active. Missing time is a thing. So is meditation. Ever started on a familiar journey and then completely blanked out everything before arrival?
Anyway, my main point didn't concern whether the brain was running (as a tech-illiterate, I can't verify when it would be active, inactive, or eating cabbage in Uncle Robert's backyard). Just that an AI's definition of 'lonely' would differ from a human's. Were I the researcher, I'd have delved more deeply into what a possibly sentient AI actually meant by a specific emotion.