Stefanie Schulte @stefanieschulte

"The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems."

idlewords.com/talks/superintel

It's the text version of Maciej Ceglowski's speech "Superintelligence: The Idea That Eats Smart People", shared by @dredmorbius (mammouth.cafe/users/dredmorbiu).

@dredmorbius @stefanieschulte Machine learning isn't an AI. It's just a simulation. And all the unethical behaviour of those systems is produced by people. If people understood that a neural network is nothing more than a mirror, everything would be okay.

@stefanieschulte But Maciej's humour sounds so much better when he delivers it himself, verbally.

"We each carry on our shoulders a small box of thinking meat. I'm using mine to give this talk, you're using yours to listen. Sometimes, when the conditions are right, these minds are capable of rational thought."

idlewords.com/talks/superintel

@stefanieschulte

@stefanieschulte Maciej's talk here is very close to my Amoral AI Adblock piece.

That's not entirely coincidental. We're feeding off some of the same influences (paperclip maximiser, Bostrom), though I hadn't read/seen this particular talk previously, that I'm aware.

redd.it/66rllq

@dredmorbius @stefanieschulte
I read that Nick Bostrom book a month or so ago, and it bugged me, but I couldn't quite put my finger on why until I read this. Thanks!

@Anarchist586 @stefanieschulte @dredmorbius I don't even mind that Bostrom wrote the book... he's a philosopher and it's a book-length thought experiment, fair enough. But it's infuriating that a bunch of Valley folks have taken it up as some kind of bible and are now putting all their effort into stopping the Self-Aware Robot Apocalypse instead of addressing more pressing issues with their technology...

@stefanieschulte @Anarchist586 @mjn That's recently out.

Review: chicagotribune.com/lifestyles/

Anti-trust, libertarianism, AI, media concentration, monopoly, the Internet (and its ironic government-policy origins), etc., etc.

@mjn I think this is silly too! I really do not believe A.I. has to go in a negative direction, or that it must go there at some point. It is possible that it could, but it does not have to.

@Anarchist586 @stefanieschulte @dredmorbius

@mjn As I am sure you know, there are *always* two sides to most things, including technology. ANY technology, or just about anything else, can be used in good *and/or* bad ways - and both almost always exist in one form or another. There will *always* be those who seek to exploit technology in bad ways, but we cannot easily prevent that.

@Anarchist586 @stefanieschulte @dredmorbius

@mjn I believe we have to concentrate on and encourage the *good* uses of technology, and then deal with the bad stuff as it comes up. This is just how the world works! It is foolhardy to *assume* something bad is going to happen when a technology is not even close to mature!

@Anarchist586 @stefanieschulte @dredmorbius

@hybotics @mjn @Anarchist586 @stefanieschulte So, clearly we're /all/ agreed on what is /good/.

Something tells me that the problem is not so simple -- either in definitions or dynamic.

@dredmorbius I think the definition of what is a "bad use of technology" is what makes this complicated. A lot of the time, people cannot agree on what is "bad."

@mjn @Anarchist586 @stefanieschulte