What do you think about AI: is it a threat to humanity, and on what timescale?
I’m asking because I don’t know what to think atm, and I’m wondering about different perspectives and well-reasoned arguments.
@szbalint the threat is capitalism: capitalists will empower machinery with their own biases, to the tune of countless dead. there is nothing inherently wrong or bad about using curve-fitting algorithms, but capitalists will use them to kill people.
@szbalint It will depend on which utility function we give it. If A.I. behaves anything like humans do, it will strongly resist changes to its utility function.
All machine learning has a utility function. Without it, there is no goal.
This plays into Asimov’s Laws of Robotics.
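The utility-function point above can be made concrete with a toy sketch (not any real system — the `descend` function and both loss functions are illustrative assumptions): the same optimization machinery produces entirely different behaviour depending on which utility function it is handed.

```python
# Toy "agent": gradient descent whose behaviour is entirely
# determined by the utility (here, loss) function it is given.
def descend(loss_grad, x, steps=100, lr=0.1):
    """Repeatedly move x against the gradient of its loss."""
    for _ in range(steps):
        x -= lr * loss_grad(x)
    return x

# Utility 1: prefer x near 3 (loss = (x - 3)^2, gradient = 2(x - 3)).
near_three = descend(lambda x: 2 * (x - 3), x=0.0)

# Utility 2: prefer x near -5. Same machinery, different goal.
near_minus_five = descend(lambda x: 2 * (x + 5), x=0.0)

print(round(near_three, 3), round(near_minus_five, 3))  # 3.0 -5.0
```

Nothing about the optimizer itself picks the goal; the goal lives entirely in the function we supply, which is the worry when that choice is made carelessly.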
@szbalint We are trapped by our own utility function. That's why we can't seem to change behaviour when it comes to global warming, for example...
@szbalint it took Microsoft’s Tay less than a day to become corrupted, soooooo...
@szbalint I think true AI will work fine with humans, but will be discovered to have the same limitations we do, and therefore won't be desirable for most uses. Weak AI, such as we have now, I consider an existential threat in its current form: correlation without conscience is a bias machine that can only accentuate pre-existing biased dynamics. It's an accelerant, and by shifting decision-making to it en masse, without oversight, we are pouring in both fuel and fire.
i tend to agree with Bruce Sterling on this - when it happens, it'll be so unlike human intelligence that for a while it won't even be acknowledged as intelligence.
@szbalint i think the immediate threat is 'MLwashing' of models' results for predictive policing or government targeting. I'm sure I stand out on XKEYSCORE, and maybe that's the reason why I need to wait for people at the front desk to call some mysterious phone number every time I check in for a flight.
Alternatively, there's the use of AI to optimize addiction to apps and social media sites. This AI has already taken over us, in a way, and I think people are seeing the impacts now.
@szbalint I think it’s already a threat. Sure, computers aren’t smart enough to have hobbies yet, but they can manipulate the economy and analyze our data at high speed, making it easy for a human to just push the “oppress the masses” button. And the worst thing is, the assholes training these AIs know that the more predictable our behavior is, the easier we are to manipulate. Ever hear of Camazotz?