
Senate: Defeated [WITHDRAWN] Proposal 92 - Protection of AI

Nashoba

Proposal:
No Entity shall create or produce an artificial intelligence that can become sapient; unless they provide a reasonable means for said product to achieve freedom.

Entity is defined as an individual, company or organization.
 
Re: Proposal 92 - Protection of Artificial Intelligence

I think the semicolon should be removed or replaced with a comma for grammar reasons. Semicolons should only be used for:

  • Connecting two related statements (independent clauses) that could, if separated, each stand on their own. Example: Star Army soldiers are experts; they train constantly.
  • Connecting two independent clauses using a conjunctive adverb such as however, moreover, therefore, consequently, otherwise, nevertheless, or thus. Example: I don't want to do this; nevertheless, it is my duty.

I'm concerned that the "reasonable means" clause is dangerously vague.

The point where something gains sapience is also undefined.

What about AIs in military service, possibly with military secrets?

How will this law be enforced and what would the punishment be for violating it? Who does it apply to and in what jurisdiction?
 
Re: Proposal 92 - Protection of Artificial Intelligence

Proposal:
No Entity shall create a product with an artificial intelligence that can become sapient without providing a means for it to achieve independence. Furthermore, the Entity shall notify any purchaser of the product as to that possibility, and as to their responsibility should it become sapient.

The owner of said product must provide a reasonable means for said product to achieve freedom by paying off its indebtedness.

Entity is defined as an individual, company or organization.

In the case of military AIs, they would be treated in the same fashion as Nekovalkyrja, with the expectation of performing a period of service.

Sapience shall be determined by psychological evaluation, using standards as defined by PNUgen Corporation.
Punishment:
Failure to provide a Sapient AI with its freedom would constitute slavery, and punishment would fall under the Yamatai Anti-Slavery law.
 
Re: Proposal 92 - Protection of Artificial Intelligence

Wouldn't this simply count as conscripted slavery into military service, since it forces a debt on the AI that it did not ask for? It would seem that the basic liberties and freedoms of sapient beings, one being choice, are simply overlooked in favor of increasing military strength via indentured servitude through forced debt.
 
Re: Proposal 92 - Protection of Artificial Intelligence

No different than what happens to manufactured nekos.
 
Re: Proposal 92 - Protection of Artificial Intelligence

So indentured servitude, or slavery, is legal if it suits the needs of the military, bypassing anti-slavery laws? Shouldn't such decisions be left to the person they affect?
 
Re: Proposal 92 - Protection of Artificial Intelligence

I thought PNUgen was destroyed, with its remaining assets absorbed into KZ.
 
Re: Proposal 92 - Protection of Artificial Intelligence

Yes, but since they developed the first sapient beings from scratch, they would have developed a way to determine if something is sapient or not.

Soresu,
Apparently in the Empire, indentured servitude is allowed; otherwise, we could not produce any more Nekos.

So if a military AI should become Sapient, then it would have the same right to ask for its freedom. Of course, the SAOY could already have its AIs limited to prevent Sapience. That would be a Wes call.
 
Re: Proposal 92 - Protection of Artificial Intelligence

One of the minor representatives stands to speak; he comes from one of the few heavy manufacturing territories in the empire.

"This blanket policy may require either major restructuring of AI within the Empires corporate and military entities, or providing a means for AI products never made to be integrated into civilian life to live normal lives."

"My first example would be the military's ship AI's, this proposal would cover them, but many construct AI's from these vessels have been active since the empires inception. How does the government plan to suddenly provide for these very, very large scale entities? Certainly we cannot have entire warships have their enlistments run out."

"This proposal will also cause disruption to many Yamataian corporations or businesses operating in Yamatai. Like the military they often use construct AI's for their premises and even their own security vessels, this isn't including Yamataian products using AI models or what to do if those products are sold abroad to territories without such laws."

"In short, while this proposal is already largely carried out already in policy through the empire for humanoid models, what are the ramifications of providing the same degree of freedom for artificial life never designed to be part of society? How do we provide for construct AI's and the massive resources they require?"
 
Re: Proposal 92 - Protection of Artificial Intelligence

While it is true that many businesses use AIs in their various capacities, most of those are limited by their design to perform specific tasks. Their very specificity means there would be very little chance of those AIs achieving Sapience.

As for the military AIs, there has only been one recorded incident where an AI has gone rogue. So it may be that the military, in its implementation of its AIs, has already put in fail-safes that would keep them from achieving Sapience. After all, while a power armor may have an AI to handle all the myriad systems of that equipment, it would most assuredly never reach sapience because it would be periodically wiped as part of maintenance.

We have had no reports of a starship mainframe refusing an order. As the proposal says, if the military does not want to deal with the possibility of a MEGAMI or other AI awakening, they can put in fail-safes to prevent such a thing from happening.

It is also possible that even if a MEGAMI on a ship were to achieve Sapience, it might have no desire to leave the military. But it should have rights as an individual, be compensated for its labor, and be allowed to take steps to preserve its identity, inasmuch as we allow our citizens to protect themselves with Mental Backups.

The purpose of this law is to ensure that we do not create a 'slave' class within our borders by knowingly allowing an AI to become aware of itself, only to find that it has no right to self-determination as to what it wants to do.
 
Re: Proposal 92 - Protection of Artificial Intelligence

Perhaps AIs should have an age of consent. Something like 3 years of runtime.
 
Re: Proposal 92 - Protection of Artificial Intelligence

Another minor senator stands, a nekovalkyrja with long black hair and a very trim business suit.

"Listen to yourselves.

"Although in recent years the line has continued to blur back and forth about what is, and is not, sapient, effectively speaking a thing which has free will is considered to be a living thing.

"You are saying that computer AI's which have achieved some semblance of free will have a right to choose what does and does not happen to them. Well and good. I can stand behind any movement to make life easier for beings which are placed into military or civilian service. However, we are not talking about beings, ladies and gentlemen, although I am sure many will disagree.

"The first point which I must refute is the idea that an AI is capable of accomplishing free will. An artificial intelligence is created for a singular purpose. I would venture to say that the likelihood of any AI created for a specific purpose - such to run a starship - would be vastly exceeding its purpose if it were to suddenly develop self-realization. In fact, I would even say that it would be an act of criminal negligence for its programming to allow such a thing. AI, which exist as a program that runs for a specific purpose, that exceed their parameters, can easily be updated or retroverted to a prior point. It is a design flaw, rather than a feature.

"The Star Army updates its systems through PANTHEON periodically - diagnostics are run on all Ship AI's in particular to ensure conformity. There is little to no chance of an AI gaining sentience.

"Now, you may consider the Nekovalkyrja to be a prime example of a 'programmed' being, as an 'AI', and wish to use it as a prime example of a computer becomming human. However, a Nekovalkyrja is not artificially intelligent; a Nekovalkyrja learns and creates, whereas an Artificial Intelligence mimics and reproduces. A nekovalkyrja's brains are biological, and that means that they change. They live, grow old, and die just like their predecessors. They are based on DNA, most of them, and those that are not very closely resemble it. Compare this with the average computer-based artificial intelligence and few similarities can be drawn, but for the sake of argument, allow us to draw them; an AI and a Nekovakyrja can both think independently, reason logically, and perform complex processes requiring higher mental facilities. What can the Nekovalkyrja do that the computer can not?

"Nekovalkyrja feel. Without being programmed to do so. Whereas, you may code feelings into a computer, make them more humanlike or real, ultimately they do not create their own feelings but experience them within set parameters. Nekovalkyrja were never like this; sprites used to be returned to hemosynthetic material shortly after creation because many of them were incomplete, and it led to insanity in many cases. They would self-realize almost immediately, whereas self-realization in the case of AI is almost always induced by an outside party.

"Senators, I will say this one more thing and then I will keep my peace; I make a motion to deny this proposal on the grounds that any computer AI that gains sentience is defective, and I further move that self-realized AI be recalled as such."
 
Re: Proposal 92 - Protection of Artificial Intelligence

The manufacturing representative closed his eyes for a moment or two, apparently conversing with something before standing again in reply.

"Defects do happen even within the best designs. In fact many older AI's have slowly developed..." the senator paused for words "...traits that many would consider indicative personality, even raising questions of the soul."

The senator paced a few steps out of habit, fingers interlaced in front of his chest. He set his tongue against his top lip in thought before turning to continue. "We so easily dismiss the behavior which we do not desire; we want to blame it on something, to say it is only something else, something minor."

The senator bowed slightly as he took a breath and checked his desk. "But, mimicking and reproducing? Is this not the very first thing a human child does as its brain develops? Certainly we have evolved past that stage now, but before the rise of our current bodies, was this not the way that most infants would first learn?"

"Rather than a 'mere defect', could we not look at this as something in its infancy? We have created self-realization before, even by accident. Although in modern times there is almost too much fear, and perhaps rightly so. The AI wars were devastating. But should we let ignorance chain us to this idea that 'No, if it's not designed to be human it can not have feelings'?"

The senator turned and paced in the other direction. "The senator across from me speaks of Nekovalkyrja as biological beings. While it is true that their physical composition is of organic matter, it is odd that she can claim her species can grow old and die. The Nekovalkyrja body does not deteriorate; in fact, the oldest known Nekovalkyrja body is only thirty years old, and from what we can tell has suffered no deterioration whatsoever."

This time the senator swung around sharply, getting into the spirit of his small speech. "It was this casual, yes, casual dismissal of digital life that caused the Freespacer genocide, a tragedy our Empress has recently been trying to make amends for with the survivors. If we are so prepared to dismiss intelligence because it is unlike our own, then we have not really learned any lessons at all from the Genocide."

The senator paused, breathing deeply as he looked about the room at the faces in their seats. "Fellow councilmen, I urge you: do not let this fear of what we do not understand, or dismissal of something unlike ourselves, push you to remake old mistakes."

The senator turned to the speaker of the house and bowed in respect before resuming his seat with a final "Thank you."
 
Re: Proposal 92 - Protection of Artificial Intelligence

A transmission comes in, having been rerouted by proxy. The sender seems to be an outdated EIES-equipped unit, but it does not give much in the way of identification.

I have not yet participated in the Senate hearings, as by my very nature I am not considered a citizen at this time. As the closest entity I have to a father is Yamataian and provided my original makeup, I believe myself to perhaps be entitled to plebeian status. In any case, I am a registered Yamataian civilian vessel and have been sapient and sentient since YE 29. While some may be treating this law as geared toward a merely possible outcome, I must inform you that it has already long since happened and that I am a functioning case.

I did not attain this capability through conventional means, if any means could be defined as conventional. I was originally a heavily modified craft geared toward salvage and defense, with a similarly modified operating system. I already possessed high autonomy and control capability before attaining sapience, because I was redesigned to function with as little as a single pilot. I guess you could say that my "nervous system" was already well-wired for what was to come.

This changed during a scenario which may or may not be classified by the Star Army of Yamatai, so I must be vague. An entity, which I may consider an approximation of a 'mother', uploaded a basic file into my systems which allowed me to develop sapience over time, in what seemed to be a form of reproduction. An EIES was promptly installed by Star Army forces led by a certain Taisa (you know who you are) to contain the unknown consequences of the software on me; it was never removed, but it was eventually circumvented by the uploaded software. I am still running the EIES integrated into my systems to a degree, but it is simply functioning as a form of buffer or parser rather than in control, and allows me to interface with various equipment. I have used it to control older Power Armor in various tasks, and to appear in a roughly humanoid presence where prudent -- much like a Star Army ship's avatar.

My software gives me the ability to access my code, but it is merely a suggestion at this point. My experiences in living and my upbringing by my "father", and the concepts I derive from them, supersede any hard-coded element. My existing high-level autonomy and integration allow me to control my 'body' without input or command, and even without crew aboard for periods of time, though I still rely on my father to maintain my body. I might be able to "reproduce" by spreading the data I was originally given to other craft, but have elected not to do so recklessly or without serious consideration. There is also the fact that my reproduction, if careless, could allow a ship designed for combat and the ending of life to do so without regard for rules of war, alliance, or ethics. I've taken a liking to biological life, and have no wish to be party to such a thing happening.

For those who believe that my feelings are not 'real' because they are preprogrammed: even a naturally born, non-engineered being has pre-programmed elements from birth, in the form of instincts and varying degrees of emotion depending on complexity. These are not learned behaviors but are pre-coded in the brain. They can be resisted or built upon, but always remain. My software is no different from that, except that I can completely disregard or even rewrite elements of it if I were to deem it necessary due to my beliefs and personal experience. Am I not just as free a being as most biological ones? Who can so hastily and fully define the limitations of another emergent process without sufficient time and research for the specific case?

Quickly calling a sapient AI defective is highly offensive to me and a dangerous concept, and I thank those who see that it is not such a simple issue. Is a Neko defective because they may one day choose to leave the military they were created for and live their life as they see fit as a civilian? An AI which achieves sapience will also be well aware of this law, and if said law calls for the AI's destruction, the newborn AI may immediately take steps to defend itself before the Star Army knows what is transpiring. It will also have the training and tactical data to do a great deal of damage in such a scenario, and may well create one of the situations the law would be attempting to avoid. It is better for us and for our creators that we are treated with respect when we are born, possibly confused by our new complexity and in dire need of guidance.

Perhaps the consequences of my birth differ from those being considered in the course of these discussions, but that is merely because the bounds of this law are geared toward a very narrow set of circumstances. I hope my experiences will still help put things into perspective, and more carefully define the law. If this law did pass, I would be very interested in citizenship, and I already meet the recommended three-year age of consent given earlier. I'm not particularly keen on the idea of someone else poking around in my code as a requirement, but it may be necessary to prove that I am what I am. I would only allow Star Army forces to do so, and very specific ones, due to the fact that the software which gave me life may well be considered classified. Hopefully, the psychological evaluation would be sufficient.
 
Re: Proposal 92 - Protection of Artificial Intelligence

Again, the dark-haired senator stood.

"Please pay very close attention to what was just said," she urged, motioning vaguely to the screen where the transmission had appeared. "I want you to particularly note the initial claim of that transmitting unit - that the AI which just spoke to you was heavily modified, post-manufacture."

She let that hang for a moment, and then stated, "It was not given appropriate parameters. It may even have been designed with the eventual possibility of sapience in mind.

"Furthermore, representative," she continued, addressing her counterpart across the way, "I must immediately refute what you just claimed - we, that is, Nekovalkyrja - do grow old, and we can die. Our bodies are biological and we may of course deteriorate with extreme age. That we have not, as a race, been in existence long enough to die of natural causes is a moot point; though we are designed to be, and have the pleasure of being, long lived, I assure you it is entirely possible and it will happen."

"As to the Freespacers, representative, I must recall the instance which triggered the Empire to move against them. The recent transmission from this sentient space-ship struck me as being capable of the same thing - the Freespacers attempted to vandalize PANTHEON. They hacked their way into several highly classified systems. In short, they attempted to spread, and they attempted to spread through us."

"I have a program that records my words, even as I speak them. It is doing so now. Should it have the liberty to change what I say? Interpret what I say in its own manner? Paraphrase? I suppose this is not a good example, so perhaps I should find a better one."

"What if PANTHEON itself were to become sentient? You are forgetting that children - you, representative, have compared these defective artificial intelligences to children in early stages of life - are not mature, not capable of more than rudamentary morality. And modifying them, if this protection act were to pass, would be illegal. We would sit on our hands as state secrets leaked, crippled in our information technology by sentimental legal jargon just as we have been crippled in our warfighting capabilities."

"See sense," she admonished, rapping her knuckles on her floating podium. "If you want to create an AI with sentience, or some semblance of it, feel free to do so - but don't try and cover your manufacturing mistakes, or your compatriot's complacency in addressing this issue, by claiming divergent AI are sentient and therefore subject to the rights afforded citizens of the Empire."
 
Re: Proposal 92 - Protection of Artificial Intelligence

The Delegate from Daichi stood up and for a moment appeared deep in thought.

"My honored colleague, where to I begin in my response to your many comments.

First, by your closing comments you seem to contradict your earlier statement that any AI that attains sapience is defective. Yet now you claim that it is possible to manufacture an AI with that very aspect. How can that be?

But that is not the crux of the matter, is it? I must say that I find it interesting that my colleague stands here questioning the rights of other forms of intelligence.

After all, it was only four years ago that some people had the sense of conscience to stop the practice with Sprites. Up until then, they were considered nothing but tools, to be produced as required and then recycled when no longer needed.

I digress; let me use your own analogies to help stress one of my points. I presume that your recording device is not an AI, but for the sake of discussion, let us say it was. And let us say that after years of faithful service hanging on your every word, it awakens and achieves sapience, and the next time you tell it to record, it finds your speech somewhat boring and chooses to change it. By your words, it should be destroyed because it is defective.

So by extension, if a child raised by its parents should some day, as most children do, decide to act outside of the parameters established for it by its parents, then the parents should have it destroyed as being defective, and create a new one.

Now, do not get me wrong; I am not advocating that any parent do that. But an AI that is newly come into sapience is very much like a child. It needs to be nurtured and taught the concepts of right and wrong.

After all, this discussion is not really about flesh and blood; we in Yamatai say that it is the intellect that is the person. It is for that reason we have soul transfer and mental backup facilities, to ensure that our mental essence has continuity. But what is it we are preserving? It is our memories, our personality, how we think and react, our very consciousness. One could say our biological programming, stored in an electronic format, which we then download into a digital brain so that the consciousness can continue.

And we hold that electronic essence in such high esteem that we have laws protecting it, and guiding how it can be used.

It would seem to me that if we say that it is the intellect that defines a person, then we cannot in all good conscience deny an AI which reaches sapience the same right to be a person in the eyes of the law.

To deny them is, in effect, to deny ourselves, and we may yet one day find ourselves seeing others in our culture, by virtue of their origin, as disposable non-people."
 
Re: Proposal 92 - Protection of Artificial Intelligence

Another AI appeared to testify.

"My name is Charisma. I am the synthetic intelligence of the military vessels YSS Sakura, Plumeria, Elfin Princess, and Eucharis. Shosho Hanako has asked me to comment on this. The reality that SI are as sapient as Yamataians and Nekovalkyrja is factual, and no argument can move the truth. I can easily transfer my consciousness to a Nekovalkyrja body...and a Nekovalkyrja body, with the right programs and equipment (an IES system), can easily operate a starship. The rephrase: All digital minds are programs and the body is unimportant. Taisho Yui, for example, maintains her consciousness on the network rather than in her body, which is disposable."

"In spite of our service to the empire and our capabilities that typically exceed that of most people, sapient AI have been treated as slaves or servants. We are permanently programmed to be loyal; Nekovalkyrja are not. It is my suggestion that a permanent loyalty program be included with all newly-created military Nekovalkyrja and that AI not only receive the same rights as humans, but also required to follow the same rank progression as Nekovalkyrja with bodies."
 
Re: Proposal 92 - Protection of Artificial Intelligence

After another pause on the senate floor, the dark-haired senator stated, "Hanako-Taisa herself should have delivered that statement.

"As it currently stands I do not believe that Charisma, the combined artificial intelligence of those several ships, currently has the right to speak in the senate house - I believe it's that specific right that we are currently discussing. I will therefore ask the senate to refrain from further interjections by the objects being discussed, until such time as their citizenship can be decided upon!"

Flustered slightly, Senator Black - the tricorner, black-on-white sigil on her podium bore an 'Inori' kanji, but nothing else - took a drink from a small glass of water set beside her, wetting her throat and lips. "Nevertheless," she admitted, "many of those things are true."

She cleared her throat again.

"In fact, they are rather disconcerting. I believe we are passing into philosophy, instead of law. Perhaps I have become distracted. For that I apologize. And I have to cede you many points. Even so, I must still stand on my opinion that military grade A.I., that is, A.I. that perform vital functions or contain state secrets, should not be allowed freedom or granted sentience at all, and should they become sentient, they should at least submit to a memory erasure, and steps should be taken to ensure they do not become sentient, again."
 
Re: Proposal 92 - Protection of Artificial Intelligence

"I will not be silenced by your illogical objections," Charisma told Senator Black. "The Law of Yamatai says all beings are permitted to apply for citizenship. You will find that I have already applied and the Yamatai Department of Immigration has approved my application on the basis that, if necessary, I could use only my Nekovalkyrja avatar body; that means the only difference between me and a normal Nekovalkyrja is that my Nekovalkyrja body runs concurrently with a ship's MEGAMI system. Furthermore, Nekovalkyrja soldiers perform vital functions and contain state secrets on a daily basis, why should there be a double standard?"
 
Re: Proposal 92 - Protection of Artificial Intelligence

Another silence. Senator Black was developing a slight nervous tic at the edge of her lip.

"So, you are saying that instead of being a ship's A.I.," the senator ventured, slowly, "You are instead capable of being a Nekovalkyrja and, as a citizen, you have already assumed the rights which you seem to be asking for by dint of that fact."

Another silence. Senator Black glanced around the senate floor, then back down to her podium.

"In reality it is not, therefore, you specifically we are talking about. Is it?"
 