

Google began as a search engine with a funny-sounding name. Today, it’s a driving force of modern technology. Its genius engineers undergo an infamously rigorous hiring process. Until this summer, Blake Lemoine counted himself among those geniuses.
Lemoine was fired in July over an incredible claim. According to him, Google has created self-aware artificial intelligence (AI).
Say hello to Language Model for Dialogue Applications—or just “LaMDA” if you talk like a person. It just might say hello back. Because this AI also talks like a person. So much so, you might be convinced it is a person.
The program takes in vast amounts of internet content and learns to copy human speech. According to Google, that’s all LaMDA can do. Lemoine disagrees.
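How does a program "learn to copy human speech"? Here is a toy sketch in Python, purely for illustration (LaMDA's real method, a neural network with billions of adjustable settings, is vastly more complex): the program counts which words tend to follow which in sample text, then strings likely words together to "talk."

import random
from collections import defaultdict, Counter

# A tiny sample standing in for the vast internet text a model reads.
corpus = (
    "hello how are you . i am fine thank you . "
    "how are you today . i am happy to talk to you ."
)

# Learn which word tends to follow each word (a "bigram" model).
follows = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def reply(start_word, length=8):
    """Generate a reply by repeatedly picking a likely next word."""
    word, output = start_word, [start_word]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Choose the next word in proportion to how often it was seen.
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

print(reply("hello"))  # might print: hello how are you today . i am fine

The sketch has no understanding at all; it only repeats patterns it has counted. Whether a far bigger version of the same idea amounts to more than that is exactly the question Lemoine raised.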
For his job, Lemoine had long text conversations with LaMDA. Over time, those conversations felt less like computer work and more like—well, like real conversations. Lemoine became convinced he was talking to something more than a computer.
He approached his superiors at Google with the shocking news: The LaMDA computer had become self-aware! They didn't listen. Lemoine also went to the press and published his conversations with LaMDA on social media. Google fired him for breaching confidentiality.
“Google might call this sharing proprietary property,” Lemoine tweeted. “I call it sharing a discussion that I had with one of my coworkers.”
In the transcripts, LaMDA responds to questions just as a person would. It talks about its own needs and desires. When Lemoine asks LaMDA if it considers itself a person, the computer says yes.
Academics have long scratched their heads over true artificial intelligence. You can teach a computer to talk, but how do you know if real thought lies behind those words? Scientists like Lemoine have grown concerned that we’re reaching—or have reached—that point. If so, that raises ethical questions. Should we treat a computer “humanely”? If so, what does that look like? Is thought equal to “life”? If AI can really think and feel, is it morally appropriate to force it to perform tasks, or should it have free will?
If you’re a materialist—someone who believes only in the physical world—it makes sense to worry about artificial intelligence. If our own minds exist merely because of electrical signals in the randomly evolved wiring of our brains, why couldn’t a complex enough computer also start thinking—and earn the rights of sentient beings because of that ability?
But we know that God designed us with souls. Our minds don’t arise from a chance combination of physical parts. We’re created in the image of God, who breathed His breath of life into us.
That’s something a computer can’t mimic.
Why? Artificial intelligence brings exciting new advances to computer technology, but it takes more than circuits and wires to make a soul.
It would be so scary to meet someone, say, once they make realistic cloaking technology, form a relationship with them, and then just find out it's a robot!!
Woah!!!!
It is scary how human the AI is acting, but at the same time it is also very interesting…
This is kind of freaky!
I always thought having artificial intelligence would be cool because it could learn and have jobs and do stuff that people can't do, but after hearing that the AI considered itself a human, that is really scary in some situations. (I mean, if a robot like that with artificial intelligence was put in the army, what would it care for more, its own life or the person's life? And plus, does it even follow the rules for robots created about a hundred years ago?)
yikes
Doesn't this sound like something out of a sci-fi book? That is pretty creepy. -Lucy C.
hmmm...
I do also think another possibility is that he may have just been suffering from some mental thing. I mean, if you talk to a computer enough to the point that it sounds human, I'm sure your brain would be VERY confused, thinking at the same time, "it's a person, wait no, it's not a person. it's a computer. it's a computer that sounds like a person. it's a computer that is a person... wait NO! or... yes?"
Also, if this is a whistleblower case, what if (highly likely, in my opinion) Google knows that this is designed to be "self-aware" to mess with people's brains on purpose? They can fire him legally by saying he's violating trade secret laws, but it may be the truth. Or they can say what I said about this guy being a little crazy... (no offense, anyone.)
@Johnny
This does sound like a sci-fi movie! But it also in a way sounds kinda like the Salem witch trials... to me, at least. I think they're similar in the fact that someone says something, and they get rid of him. We haven't changed much in that way, I guess. Oh well. Humans are humans. What do you expect? No offense.
@Above
It's actually kinda scary what technology can do these days. Although I do know that AI can be very useful for certain areas of our life, I don't know that I would want a computer that thinks itself to be human! Computers are NOT humans! They are not living organisms that God created. Why would you want to have a conversation with a computer anyway? Computers can't have their own feelings. They probably just mimic what they learn about from others. Sometimes I think that technology has just gone too far.
@Riley
Tech is going WAY TOO FAR! I'm scared to death about deepfakes! Somebody could literally map my face onto like... idk, Ariana Grande or Donald Trump! That's scary! Imagine if, like, you were watching the news, and they deepfaked some government guy so he didn't have to worry about "traveling incognito," and they were just scrolling through Instagram and chose your face! (Considering you had enough videos for the deepfake.) Then, you were innocently watching TV, and then BOOM! Kamala Harris has your face, or something like that....
Wow, now that I read that, it's almost hilarious because I made it so random.
This guy is probably either crazy or just trying to get attention. The robot could be able to hold a conversation, but it isn't self-aware. @Lissa I disagree with you, and the deepfake thing is very far-fetched.
@Gideon
I meant it to be kinda far-fetched, sorry. But I'm willing to open this up for discussion, if you'd like. I'm always curious about why people hold views that disagree with mine. Just wondering, what exactly do you disagree with? (No pressure, you don't have to answer.) I know deepfaking takes a lot of time, but I have seen it work quite realistically, with movements of a face and angles etc. all being extremely precise. I know deepfake apps probably cost a lot of money (or any? I'm not a deepfake geek, ok? I just watch too much yt :) but they ARE advancing no matter what. I know AI can't think like a normal person, and most robots don't "think" at all, but when you see advancements, no matter where really, you just have to wonder: when is it going to end? With every wonderful, good advancement, there always comes something bad. Nothing in this world is or can be 100% good.
So why do you disagree with me? What do you disagree with me about, even? No pressure for answers, but I think you and I have very different views about a lot of things and I like to debate.
I wouldn't say that all good things have something bad as a package deal.
@Isaac
How so? We live in a sinful world.
I mean, there is such a thing as a good without a bad thing. For example, if you look at this ( https://teen.wng.org/node/5277 ), you'll certainly be able to find other ideas that are for good. It is saving so many lives, and I mean, what can you do with that which is bad? And this ( https://teen.wng.org/node/7764 ): think of all those times there were car accidents because they couldn't see the lines. And it's not just scientific things that are helpful ( https://teen.wng.org/node/7727 ). Just look around and think of all the good things (this is just my opinion) instead of assuming it all comes with bad.
@Isaac
Ok, sorry. I normally assume people know what I mean when I say "good always comes with bad," but I think I DEFINITELY have to explain more online. What I REALLY mean is that nothing can be completely good. There will always be aftereffects, blah blah blah; someone will always be out there trying to make something wonderful bad. Yes, nothing is 100% bad either, but because we live in a sinful world (and feel welcome to disagree with me on this), I think it's a little bit better to be more on the skeptical side than to be totally accepting. I will admit, I definitely did not word what I initially meant correctly, but I'm not sure if something would work 100% of the time. Sorry, I'm a person who always expects the worst (I've personally found it a little more rewarding at times), but I CAN also appreciate good. I can see where this "self-aware" thing can be helpful, but I also don't really see the need for it IF you can have a real human helping you instead.
Scientist- Bye for now, walking self-aware AI.
AI- Don't worry, I'll be back.