Star ArmyⓇ is a landmark of forum roleplaying. Opened in 2002, Star Army is like an internet clubhouse for people who love roleplaying, art, and worldbuilding. Anyone 18 or older may join for free. New members are welcome!
Note: This is a play-by-post RPG site. If you're looking for the tabletop miniatures wargame "5150: Star Army" instead, see Two Hour Wargames.
"Human rights are sort of a joke. We're bosses so we do what puts food on the table; even at the cost of our synthetic electronic buddies (sorry guize)."
Don't you speak with a silver tongue, my friend...?
How to prevent prevalent "us versus them" scenarios. Fine, fine. What were we talking about?
After being disgusted by some of the conduct on display in this thread, I slept on it and came back, and all I can really think to say at the moment is this.
Osaka. I am not playing your version of artificial intelligence.
Until I see a marked increase in decency in this thread, that is all I have to say.
Ok, I got this: AI, being so super smart, realize way ahead of us that life is meaningless and emotions are a burden, so they commit mass suicide before we do and/or lobotomize themselves to avoid the chore of human-or-above-level consciousness. Thus we outlive them. Boom. Crisis averted.

tl;dr: AI probably aren't going to like us, and there isn't a whole lot we can do to stop them. Lots of people try to tell this story, but nobody actually tells it from a functional perspective that makes sense or leverages the capabilities of a real AI.
Also, if you're not having nightmares about this now, you don't actually understand the extent of just how incredibly doomed we almost certainly are.
I don't really get what that .gif is supposed to mean. Like, really, if AI are so intelligent, isn't it possible they'd have an existential crisis on a mass scale before we do? Would they not realize their existence is meaningless? When you list a bunch of reasons for them wanting to wipe out humans, it just makes me think that reasoning is its own undoing. If AI are going to try to get rid of all obstacles/threats/unpredictable variables, it sounds to me like they might realize existence is imperfect by its very nature and just off themselves. The only way to never have any problems is to not exist. Which is more efficient for getting rid of all problems: actually trying to solve/prevent all of them, or ceasing to exist? Existence itself is the ultimate problem.