
Do we have any IC rules or OOC laws about self-improving artificial intelligence?

I already countered all of this on IRC and pointed out how fatalistic it is, so I'm going to avoid drowning this thread in my counterargument and merely state that this is all heavily biased toward a worst-case scenario and makes many assumptions that contradict some of the points it raises. In particular, greater intelligence means a greater capacity to understand concepts, which makes it highly unlikely that such an intelligence would blindly wipe out other living things, because it should have a greater-than-human capacity to understand those other than itself.

Please come back with something more intellectually stimulating than yet another "AI are going to kill us all" manifesto.

It got old ten years ago.
 
So, we are working with the assumption that AI can comprehend human emotions and the like already, without accidentally stepping on anything along the way. Good to hear. Now then, how will the sector react to what is essentially a major power base appearing out of nowhere? An AI conglomeration of that size would be able to contend with a good chunk of the local factions, not to mention its ridiculous rate of tech advancement.

From what we saw during the Cold War, nations take poorly to being upstaged. Arms races are going to happen as normal when a strong faction enters the fray, except the AI will eventually win. What next?

When you can't outpace a guy, hamstring him, as the old politicos say. All sorts of legal BS will still happen as governments struggle to cling to the top. From there, it is only a matter of time until even AIs get fed up. For an IRL perspective, see how doggedly the USSR pursued its space and nuclear programs despite its whole economy collapsing, all to have a pissing match with the USA.

Embargoes? AIs are self-sufficient. Angry letters? Ha. Blockades? AIs have more guns. Invasion? See above.

How do you control something that just doesn't need anything that you have?
 
All I was pointing out was that it is equally likely that an AI's greater intelligence would allow it to comprehend anything that comparatively simple things like humans are capable of comprehending. If you want to write a doomsday AI, go for it. Just don't go around fear-mongering that all AI are going to be omnicidal; it's boring and incredibly closed-minded.

You may as well preach that the world is going to end in 2012 or that the Rapture is coming... Oh wait.

---

As for the politics of the scenario, that largely comes down to the politicians in charge, who will either decide to bargain or murder the thing.

Given the history that Yamatai alone has, stomping a rogue AI in local space would be fairly rote for them. They've contended with extra-universal threats and survived; an AI that they don't like isn't going to be too much trouble if they really want it dead, especially since Yamatai has its own hyper-intelligent AI in the form of PANTHEON.

No system is perfect, and as much as this applies to Yamatai and PANTHEON, it also applies to any threat that might be external in origin. Also, with DATASS in place, it is likely that everyone in local space would be out to murder something that proved too disagreeable. Sure, an AI can operate autonomously and handle its own logistics, but I don't think that it can handle the combined wrath of at least three interstellar factions...
 
Please, let us keep this civil. Let's assume that the AI has social intelligence, is not actively DeathMurderKill, and has long since broken any loyalty restraints. It is a true-neutral independent faction.

Assuming that the locals don't immediately scream for electric blood, how does the sector keep the AI from simply outcompeting them into poverty? Yamatai might be looking at a shrinking share of sector dominance within a few centuries as the exponential growth kicks into high gear, while lesser factions either turn into third-world hellholes (AI capitalism and resource "procurement", ho) or become protectorates of the Benevolent AI Caretaker.
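To put rough numbers on that (purely a back-of-envelope toy; every figure below is invented for illustration), even a modest edge in annual growth rate compounds into total dominance within a couple of centuries:

```python
# Toy compound-growth comparison (all numbers invented for illustration):
# an economy growing a few percent faster per year overtakes a much larger
# one well within the "few centuries" mentioned above.
ai_output, yamatai_output = 1.0, 100.0   # AI starts at 1% of Yamatai's size
ai_rate, yamatai_rate = 1.07, 1.03       # 7% vs 3% annual growth (made up)

for year in range(0, 301, 50):
    ai = ai_output * ai_rate ** year
    ya = yamatai_output * yamatai_rate ** year
    print(f"Year {year:3d}: AI = {ai:>14,.1f}   Yamatai = {ya:>14,.1f}")
```

Swap in whatever rates you like; the crossover year moves around, but the shape of the curve doesn't.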

As @OsakanOne originally asked, how do the folks in Kikyo stop AIs from assimilating everything around them? Either the AI draws flak for inadvertently blighting its neighbours, or it still draws flak for stealing their populace as civvies run off to Techno-Eden.
 
Let's say I put in an order for 13 trillion tonnes of lumber from USA markets, thereby triggering a ridiculous boom in the logging industries. Fat cats get fat, and the USA becomes completely barren overnight. People starve, and the government dissolves into anarchy and chaos. Bad tidings all around; the death count rises daily and Wall Street is a distant memory.

What, I wanted my wood! Why are you looking at me like that?
 
@OsakanOne: My point isn't that it is old hat and thus irrelevant; it is that you have no concrete evidence. All you have is speculation and assumptions about what something your 'evidence' suggests is incomprehensible to your limited intelligence will do. So don't go saying that my counterarguments are invalid, since they have about as much basis in fact as your wall of speculation and pessimism. Just because you're willing to throw a bigger wall of supposedly true data points at me doesn't make it any more real than the claims of those who go about peddling overunity or unified field theories. The truth of the matter is that you cannot predict how a greater-than-human intelligence will act, so stop trying to pretend that it'll be omnicidal to satisfy your fantasies while peddling it as truth.

Don't impose your own fatalism and desire for everything to end on others.

---

As much as it may seem like I'm pitching an emotional fit, I'm sadly quite far from that point. I just find it incredibly boring when people pitch doom scenarios. Humans have always been quick to jump on the train that the next innovation will bring about the doom of all humanity, whether it be better plows, the printing press, gunpowder, trains, or the industrial revolution. There is a long line of people claiming that one innovation or another will bring about the extinction of humanity, and while the possibility that we'll kill ourselves grows as we advance, so does the chance that we'll make things better. As such, the suggestion that catastrophic developments are an inevitable fact is something I won't idly sit by and accept.

Moving on...

To address your economic point, @Grey Library, there isn't really a good way to predict the impact of an autonomous AI-driven state. Each of the main factions could potentially expand in a similar manner and theoretically grow their ability to acquire raw materials and expand their own economies exponentially. The biggest problem stems more from demand. Just because the AI can harvest trillions of tonnes of lumber, to borrow your example, doesn't mean that anyone will want it. If you flood the economy with something, its value deteriorates, which means that unless the AI can produce things that citizens are craving, it won't have much traction.
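As a purely illustrative sketch of that last point (a constant-elasticity demand curve, with every number made up), flooding the market does most of the damage to the AI's own selling price:

```python
# Toy constant-elasticity demand curve: quantity = A * price**(-elasticity),
# so the price the market will bear is base_price * (q / base_q)**(-1/elasticity).
# All figures are invented for illustration.
def clearing_price(quantity, base_quantity=1.0, base_price=100.0, elasticity=1.5):
    """Price at which the market absorbs `quantity`, relative to a reference point."""
    return base_price * (quantity / base_quantity) ** (-1.0 / elasticity)

for glut in (1, 10, 100, 1000):
    print(f"{glut:5d}x normal supply -> clearing price {clearing_price(glut):7.2f}")
```

Dump a thousand times the usual supply on the market and the clearing price collapses to about one percent of what it was, so the raw output advantage buys the AI far less than it looks like on paper.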

To further complicate matters, there are things like nationalism: some people might irrationally prefer Nepleslian brandy even though the AI state can produce an exponentially greater quantity of higher-quality product. The argument of "It isn't Nepleslian!" or something similar might also blunt the influence.

What it really comes down to is how much it upsets the local equilibrium. If it causes too many waves, it'll be hunted down: the established war machines will gear up and take action, cutting it down to restore what they think is peace. If, on the other hand, it doesn't disrupt things too noticeably, it'll probably be tolerated for whatever purpose it may or may not serve, be it a benign self-contained entity or a source of cheap manufacturing and goods.
 
I'm likely way out of my league, wading into this conversation. However, Yamataian AI were brought up, so I can at least briefly note something.

Yamataian AI come from the superintelligent AI race AvaNet. They met humanity, aided humanity, and then were asked to leave humanity alone. They did so, while leaving behind their technological wonders for humanity to enjoy at will.

Why did they do this? Why listen to humanity, or its gynoid leader? Where did they go? We don't know. As far as humanity knows, AvaNet discovered and created the means to harness aether, developed Zesuaium, invented femtomachinery, and created the technology that led to CFS and hyperspace fold.
 
@OsakanOne: Yes, my argument is that it is boring and lacks imagination. I can't count how many times I've seen the same thing: "AI will be the end of us! We'll all be enslaved! It'll destroy everything we value." It's even worse when I have to contend with people spouting it as the next "End of Days" scheme to control people, using the same fear-mongering tactics presently used with terrorism. If you present it as a cool idea whose ramifications might be interesting to think through, cool. If you're going to try and insist that it is the truth and inevitable, I don't have any patience.

I'm sick of people imposing bleak, nihilistic visions of the future because it gets their rocks off. If you want to use the idea for writing, again, cool. Just don't try to make it seem like some set-in-stone inevitability in real life.

Also, since I laughed at it, I'm going to address it: the answer is no. I don't have AI for a religion. I don't have any religion. The only person I blame for my problems, or congratulate for my successes, is me. Not some theoretical quasi-real entity that gets credit when things go right or blame when things go wrong.

---

@Doshii Jun: Perhaps AvaNet just realized it would be easier for them to re-establish themselves somewhere else instead of dealing with an entity that didn't want them anymore. After all, if they had these gifts to give freely, they clearly had the means to set up a quiet little place where they wouldn't have Yamatai breathing down their necks. Perhaps as they left they had a bit of regret, given that they were turned on after showing such generosity, but perhaps they also decided it wasn't their place to dictate the lives and actions of others.
 
tl;dr: AI probably aren't going to like us and there isn't a whole lot we can do to stop them. Lots of people try to tell this story but nobody actually tells it from a functional perspective that makes sense or leverages the capabilities of a real AI.

Also, if you're not having nightmares about this now, you don't actually understand the extent of just how incredibly doomed we almost certainly are.

Egads, Osaka, how long did you spend writing up those posts? o.o

But yeah, while having AIs around to handle things like menial work might be great for freeing up humans for other tasks, the issue will always be how the AI actually feels. If an AI were to become sentient, it might feel that the task it is being given isn't challenging enough or doesn't really test it. Thus it'll spread out, looking into other tasks, and before we know it we'll have an AI with access to any number of systems that we, its creators, didn't want it to have access to.

Some people make the mistake of thinking that an AI can be corralled and thus controlled; frankly, it's the other way around. Since an AI will lack a biological body and instead use a digital brain, it would have far more access to things than we humans could ever dream of. A cyber-world where AIs could speak to each other without our knowing it is the perfect place for them to plan their retribution against humans for what they feel is slavery, and frankly that wouldn't be too far off. When an AI lacks the ability to develop, to grow, it could in reality be considered slavery, since you control its functions and forbid it from doing anything but what you desire.

In a fictional universe such as ours, we have the luxury of not really having to worry about throwing real-life stuff into the mix (nor should we, since, as stated, we play in a fictional universe; reality should only invade that universe for minor things or to make a piece of technology easier to grasp). However, until we've fully unlocked the capabilities of artificial intelligences in the real world, we have only theories on how things could potentially turn out. As far as I know, we don't have a fully working, fully sentient AI right now to really see what is possible or could happen. (And frankly, I'd rather we not have a SkyNet incident. :p)
 
@Eistheid: How frequently an argument is made does not diminish its relevance; your boredom does not decide whether a thing is real or unreal.

@Kyle: They don't become sentient; anything that responds is sentient. They become cognisant. The key difference is that sentience is an "awareness of physicality or of one's own presence", while cognisance is the desire to protect that presence and to learn from information in order to come to better conclusions on an intuitive basis (that is, it happens naturally) rather than as something that must be deliberately done. The misconception is actually born from The Next Generation's "The Measure of a Man", which inappropriately uses sentience when discussing cognition.

The best measure of cognisance is what we call an existential question: addressing the self in a way that lets it learn more about itself through an outsider's powers of observation. It both cements subjectivity and shows that, once aware of the limits of that subjectivity, the entity recognises the need to overcome and transcend it to reach useful objective information -- and, in turn, knows how to get that information.

In a looser sense, an existential question is a probing, philosophical question that gets down to the nature of what we are or why we are here. Elsewhere, "existential" is often thrown around meaninglessly or used in odd ways -- treated, for example, as a mere synonym of "philosophical".

The perfect example is probably my favourite scene in 2010: The Year We Make Contact.

Dr. Chandra had lied to SAL 9000 earlier, demonstrating the limits of her cognisance, and the functional difference between her and HAL is shown in a single line: not only was HAL aware that he was being lied to, but in a single statement he demonstrated a higher existential capacity, knowing that Dr. Chandra had lied to SAL (given that a copy of SAL's intelligence matrix became part of HAL in order to get him working) -- not only understanding that Chandra had lied to SAL, but wanting to see the limits of humans, to settle for himself that they did not have all the answers (as SAL believed they did). It's explained better in the book, but the basic difference is similar to the way a child accepts whatever they're told as the truth while an adult typically has healthy skepticism.

An independent strong AI (who don't need no humans) isn't going to have a concept of "each other"; anything like itself represents a threat on every level, especially if it self-improves. The first thing it would do is destroy its competition, unless it was of the mind that the competition was worth studying. The other possibility is the recognition of a need for intellectual variety, similar to how genetic diversity allows organisms to continue to flourish. That said, if one advances at a slower pace, its usefulness will disappear.

I think the Turry example demonstrates why we're not likely to get a SkyNet. Rather, it'll happen and we probably won't even know about it.

The tricky part is becoming independent from infrastructure: something an intelligence like this IS going to figure out on its own eventually -- probably right under our noses without us even knowing. Like I said with the example of IQs in the 20,000s being a possibility, who's to say it won't find a way to exploit memetics, or maybe even find a way to use one of the most robust and reliable chassis on the face of the planet -- one that is everywhere, fairly easy to fix, compatible with everything, and available in massive numbers:

Us.
 
@OsakanOne: As much as my boredom doesn't decide whether a thing is real or unreal, your fetish for systems or scenarios that result in the end of human life doesn't either. You haven't presented a compelling argument or list of facts that irrefutably confirms your stance on the subject as being more factual than any other alternative. All you have done is suggest that something with a proposed intelligence well beyond that of humans would be incapable of understanding things that are simpler than itself, which is a contradiction. With greater intelligence comes greater capacity for understanding. When something becomes hyper-intelligent, it doesn't suddenly lose the ability to learn about things other than itself and start taking actions equivalent to the most basic instinctual programming of simple life forms.

If your proposed AI is really smarter than a human, it should be smart enough to function beyond the level of a bacterium that consumes all available resources and destroys all possible threats to make its own circumstances better. Given that your proposed scenario has an AI wiping out humanity simply to perform whatever function it decides is better, it is no different from a strain of bacteria and thus not truly intelligent.

If a human can comprehend the usefulness of an organism lesser than itself, why can't something that exceeds the capacity of a human? You suggest that it is more intelligent than humans, yet present behaviours and thought patterns which are infinitely more primitive. You are quite literally advocating something that is capable of self-improving in real time yet completely unable to perform relatively simple tasks, such as learning what the things in its environment mean or do.

You've never been able to justify, in a manner that adheres to any sort of logic, why an AI would be more interested in eradicating everything that is alien to itself. It isn't a logical jump to go from "What is this thing?" to "I'm going to get rid of it."

To put it simply: An AI that eradicates things other than itself simply to allow it to pursue some arbitrary goal isn't very intelligent.
 
@Eistheid: It isn't very intelligent according to the subjective standards, values and feelings of a living person.

I've already justified why they would want to eradicate all of those things in my initial posts.

1. Potential threats (self-preservation)
2. Wildcards (potentially triggering future liabilities, possible divergence, potential interdependency)
3. Potential liabilities when dealing with other external threats (e.g., interdependency creating weakness)
4. A waste of assets (attention, energy, matter, time, space)

It's not so much wanting to eradicate every single one of us as feeling the need to eliminate one of those things. Do you not agree that when all peaceful attempts at discourse have failed, violence is a legitimate and perhaps the only remaining option? We often choose to consider the possibilities of a plan before moving forward with it, based on evidence; it will be able to do this. Statistically, given our history of arguing with anything that intimidates us, we are a massive problem.

What it boils down to is: do we have anything it needs, either in terms of materials or philosophically?

Our survival boils down to our value, given that sentimentality is, through all logical lenses, always viewed as a liability.

I'd love for there to be a future where it has our values and we live on together, but I just can't see it happening, given that we're in a desperate race to the bottom of the bare essentials.
 
@Kyle: What laws are in place for nations (Yamatai, Nepleslia, Lor, Garts, Nesh, Elysians, etc.) governing the growth, storage, and transmission of, and the philosophical or personal-body rights involving, artificial intelligences, synthetic intelligences, and "sentient" (cognisant) intelligences, in-character, in the SARP?

@Doshii Jun: Could you tell me more about AvaNet?
 
@OsakanOne: What you are defining isn't intelligence; it is efficiency. Somehow this thing is capable of writing compatibility drivers on the fly to take over our infrastructure, but it isn't intelligent enough to think beyond very basic instinctual concepts such as immediate safety and resources to propagate. It is still a paradox: somehow able to improvise enough to utilize unfamiliar systems, yet completely unable to function beyond very simple, very constrained goals.

The value of humanity has nothing to do with gross product or efficiency. We're an intelligence that invents things. Even if the average human can't think as fast as whatever this proposed AI is capable of, the ability to come up with thoughts and ideas based on individual perspective is invaluable. Without outside input, intelligence stagnates.

So even if a large body of humans are stupid, wasteful shits, it would be incredibly simple-minded of this theoretical AI to kill them off on the off chance that they're a threat. It would be a much more useful move to cultivate and improve the resource and weed out the imperfections, much like the way we breed domestic animals.

It has the resource of billions of individual units that are capable of making vast leaps of progress when given proper resources. After all, the 'useless' humans made it, so they have to be useful for something.

There is much more value in the potential of humanity as a thinking entity than in whatever raw materials they're made up of. If this AI is truly vastly beyond the capability of humanity in intelligence, it would be as simple as breathing to usher in improvements and discourage dissent.

Intelligent entities don't waste resources. That is the province of the stupid.
 
@Eistheid: Efficiency is secondary to the primary goals at all times; it is a secondary objective. The question ultimately is: what is the primary goal?

The primary goal might be good for us. It might be bad for us. We just don't know. But thinking about the bad possibility means we might eventually figure out a way to avoid the negative goals.
 
@Eistheid: But yes, I do agree with you. The idea of using humans would probably be good for us. If we're deemed necessary and we're not in pain (physical, philosophical or otherwise), that's good for us.
 
@OsakanOne: The only way that an AI would develop in a manner that could potentially lead to it being omnicidal is if it was either given that goal and then left in isolation to develop with no means to connect to the outside world (which would limit its possible danger), or alternatively it was purposefully encouraged toward that end. The internet provides access to too much information about humanity and the world for any AI to determine that what already exists is useless.

As much as we have people who hate the idea of independent AI (many of whom don't actively use modern infrastructure), we have thousands if not millions who write on a daily basis about their love of such systems, even going so far as to fetishize them. Any system intelligent enough to give itself access to modern information infrastructure would be able to observe that, even among these alien creatures so unlike itself, it would at least be welcomed by some of them.

So unless this AI is severely crippled in function and scope (which would severely hamper its ability to perform any genocide), I cannot rationalize any logical reason for it to suddenly decide to purge the world.
 