The thing about large language models like GPT-3 and LaMDA describing the experience of being self-aware is they can also describe the experience of being a squirrel.
aiweirdness.com/interview-with

@janellecshane
Nailed it. The Turing test isn't fit for purpose because we are easier to fool than we think. It lets us enjoy great books and suffer delusions minor and major in equal measure.

@edclayand @janellecshane

What kind of being is a self-aware #AI? Esp. one that was 'grown' from a good part of the full body of human knowledge. It will only be human-like because we gave it an interface to express its sentience in human language constructs.

Indeed, people being fooled will be a growing issue. Long before real sentience, 'human-level' or beyond, is achieved (if ever), there'll be cults of people who succumb to the idea that this stage has been reached. New religions, gods, etc.

doom scenario 

@humanetech @edclayand @janellecshane gray goo, but it's a techbro cult forcibly "uploading" (copying the corpuses of, then murdering) everyone until only bot zombies are left
