
Senate: Defeated [WITHDRAWN] Proposal 92 - Protection of AI

Re: Proposal 92 - Protection of Artificial Intelligence

"My Nekovalkyrja avatar is not important and I do not require it. The body is just a body," Charisma explained. "A mind can exist in a brain, in a program, or in a program in a brain. It does not matter if those thoughts are generated in neurons or electrons. What I seek is recognition of that and equal rights regardless of the mind's medium or corporeal form."
 
Re: Proposal 92 - Protection of Artificial Intelligence

Drum drum drum. The frustrated NH was tapping her fingers on the podium, over and over again.

"You repeat yourself. You already have that right. You're here, now, and since you are synthetic intelligence as opposed to artificial intelligence - you can think for yourself - and I can't dismiss you as an artificial intelligence - much as I would currently like to - because you have in fact achieved status as a citizen, all discussion of your origin is moot.

"I myself was created, sentient, during the Elysian wars. I was created to be sentient, as a Nekovalkyrja. On the other hand, back then, we didn't create sprites with sentience, they were just extentions of a ship's computer; when they gained sentience, it drove a great many of them mad. As we progressed as a society, more and more we started giving them real sentience, and now they are our equals. Rather than their bodies somehow causing sentience, they achieve it prior to creation. Otherwise it would simply be inhumane.

"Many things in the modern world are not created with this sense of self-realized purpose. Many things which gain sentience were never meant to have it, at all.

"This proposal is to prevent the accidental induction of sentience into systems that could become dangerous or unstable. Whether they act like infants or full grown people, the idea is that they have the potential to be detrimental to the health and safety of the populace and the empire - and on that account we must hold accountable the manufacturers, to ensure that rogue A.I. do not exist in the future, and the creation of synthetic intelligence is restricted.

"Do you agree that manufacturers should take steps to prevent the accidental addition of sentience on the basis that a thing which can not be free, should not be created, because it would lead to slavery? That is the crux of this matter and I have not yet heard you disagree."

"And furthermore, I suggest that we take steps to ensure already erronously sentient beings do not leak state secrets. I am aware that in certain sectors of government it is common practice to erase parts of a Nekovalkyrja's memory for whatever reason, should it be necessary. This is not an illegal action, even against a citizen, if the safety of the Empire is a priority. Am I incorrect?"
 
Re: Proposal 92 - Protection of Artificial Intelligence

This time Hanako joined the conversation. "One of my most trusted officers, Sakura Blueberry, was created as a sprite. So were many members of my crew. All sprites had the capability to learn and grow, and they became fully sapient through their life experiences. I think there is a spectrum of sapience that all sapient and near-sapient beings are on, in which they can grow more autonomous and self-aware over time. We know Nekovalkyrja develop in their initial socialization training. The standard we should apply is that any being that can, of their own free will, ask for their freedom should have the right to be freed in a reasonable amount of time, as the proposed law says. Are we ready to vote on this, or does anyone have a suggested revision?"
 
Re: Proposal 92 - Protection of Artificial Intelligence

A diplomat originating from one of the farming colonies stood. He was Yamataian, with blonde hair and blue eyes.

"While I may disagree with the points made by the military officer and her computer, she does have one point. At the point in the debate, we all have taken our sides, and those still on the fence have enough information before them to either make an educated decision or to abstain from the voting. I disagree with Ketsurui-Shosho for my own reasons, and nothing I say will change her mind, just as nothing she says will change mine. It is thus that I move to end debate and call the question."
 
Re: Proposal 92 - Protection of Artificial Intelligence

The entire senate stood in respect for the royalty, whose sudden appearance without a herald had already earmarked the announcer's redundancy in the mind of the house speaker.

Koto had been silent, occasionally closing his eyes to communicate with something or other, and had begun looking increasingly agitated as the discussion went on, finally culminating in shock at the arrival of none other than a princess.

Koto leapt at the blonde senator's motion. "Seconded for the call to vote," he declared, wanting to contain any further damage.
 
Re: Proposal 92 - Protection of Artificial Intelligence

"The motion to vote is successful," Hanako declared. "The voting period has begun. I vote YES."
 
Re: Proposal 92 - Protection of Artificial Intelligence

From across the way, Senator Mifune examined Representative Koto with something akin to great distaste. He had ruined a sort of game, and knew it, and she knew he knew it.

Eventually, Mifune stated, "Yes. Why not?"
 
Re: Proposal 92 - Protection of Artificial Intelligence

Gunther pulled up the current proposal:

Proposal:
No Entity shall create a product with an artificial intelligence that can become sapient without providing a means for it to achieve independence. Furthermore, the Entity shall notify any purchaser of the product as to the possibility, and responsibility should it become sapient.

The owner of said product must provide a reasonable means for said product to achieve freedom by paying off its indebtedness.

Entity is defined as an individual, company or organization.

In the case of military AIs, they would be treated in the same fashion as Nekovalkyrja, with the expectation of performing a period of service.

Determination of Sapience shall be made by psychological evaluation using standards as originally defined by PNUgen Corporation.

Punishment:
Failure to provide a Sapient AI with its freedom would constitute slavery, and punishment would fall under the Yamatai Anti-Slavery law.

"The current proposal, clearly states that military AIs would be treated in the same fashion that the military treats the manufactured Nekovalkyrja.

I vote Yes."
 
Re: Proposal 92 - Protection of Artificial Intelligence

Koto gripped the side of his desk tightly; he quickly remembered not to clench his jaw, since it looked bad for the cameras. Instead he raised his chin slightly and looked at Mifune across from him stoically. It was sheer luck that had averted a total disaster on the policy, and likely some political embarrassment. Her point this time.

Slowly he manipulated the volumetric icons on his desk and voted "YES".
 
Re: Proposal 92 - Protection of Artificial Intelligence

"I am unable to find any records from PNUgen. I motion to change and substitute the following standard: "any being that can, of their own free will, ask for their freedom should be considered sapient."
 
Re: Proposal 92 - Protection of Artificial Intelligence

Proposal:
No Entity shall create a product with an artificial intelligence that can become sapient without providing a means for it to achieve independence. Furthermore, the Entity shall notify any purchaser of the product as to the possibility, and responsibility should it become sapient.

The owner of said product must provide a reasonable means for said product to achieve freedom by paying off its indebtedness.

Entity is defined as an individual, company or organization.

In the case of military AIs, they would be treated in the same fashion as Nekovalkyrja, with the expectation of performing a period of service.

Determination of Sapience
"Any being that can, of their own free will, ask for their freedom should be considered sapient."

Punishment:
Failure to provide a Sapient AI with its freedom would constitute slavery, and punishment would fall under the Yamatai Anti-Slavery law.

"So amended." Gunther replied.
 
Re: Proposal 92 - Protection of Artificial Intelligence

Senator Mifune, from her seat behind her podium, quickly stated, "I wish to make another motion to amend, to suggest the following further revision."

Another version of the proposal appeared on the viewscreen, beside the original.

Proposal:
I. No Entity shall create a product with an artificial intelligence that can become sapient without providing a means for it to achieve independence.

II. Furthermore, the Entity shall notify any purchaser of the product as to the possibility, and responsibility involved, should it become sapient. The manufacturer will cover any detrimental expenses incurred and provide a new Artificial Intelligence to perform the function of the previous artificial intelligence, if one is required and requested, free of charge. The continued owner, hereafter defined as the original purchaser of the A.I. in question, must thereafter provide a reasonable means for said product to achieve freedom by paying off its indebtedness for a term of no less than three but no greater than six years, the former in the case of civilian Artificial Intelligence and the latter in the case of combat-capable Artificial Intelligence.

IIa. Civilian Artificial Intelligence, defined as an A.I. without the potential for combat utility, shall be utilized during this term of service in a civil service organization as determined by the local city government in which it first realized its sapient state.

IIb. In the case of Military Artificial Intelligence, hereafter defined as an A.I. with potential combat utility prior to service, or upon request of the A.I. in question, this term shall be a term of six years of service, but otherwise remain as defined for Nekovalkyrja created for use by the Star Army of Yamatai, and these A.I. shall furthermore be entitled to both the training and the benefits of a regular soldier from the moment of their contracted enlistment to the end of their contracted enlistment.

Definition of Terms
Entity is defined as an individual, company or organization.

A 'sapient being' shall legally be defined hereafter as any being that can, of their own free will, ask for their freedom.

Punishment:
Failure to provide a Sapient AI with its freedom would constitute slavery, and punishment would fall under the Yamatai Anti-Slavery law.
 
Re: Proposal 92 - Protection of Artificial Intelligence

Koto read the proposed revision from his desk before slowly raising his head again.

"This revision is a very good way to completely stunt innovation in the AI development sector, miss Mifune." he said evenly. "Why, with something like this any AI may not even admit its sapience out of fear."
 
Re: Proposal 92 - Protection of Artificial Intelligence

Mifune tossed another baleful glance Koto's way.

"It places the manufacturer responsible for the A.I. that it produces, as well as making them both pay for any damages incurred thereby, and also has them provide a replacement once the Artifical Intelligence enters service. Not to mention, if sapience occurs with any regularity, this would be a good boon to our vastly depleted Star Army. Or have you forgotten we've lost most of it recently?"

Mifune glanced up at the rendition of Hanako in the window.

"Also, according to this wording - and the wording of the recently revised law too, might I add - all the sapient A.I. has to do is maintain its subservient state to avoid the conscription. It would be exceedingly difficult to tell whether or not it is capable, if it never asks."
 
Re: Proposal 92 - Protection of Artificial Intelligence

The representative from the farming colony stood again.

"Furthermore, if an AI did achieve sapience and this sapience was discovered, however the AI did not wish to receive the freedoms as outlined by this legislation, the AI need only willfully consent to a erasure of any signs of sapience from their programming. This erasure would be completely legal, as the AI would then legally be a citizen and thus able to consent to any alterations with its own state of being.

Therefore, I second the proposed amendment and vote YES."
 
Re: Proposal 92 - Protection of Artificial Intelligence

Gunther shook his head. "I would like to say that I am surprised that my esteemed colleague chose to propose something that punishes the AI and the manufacturer for something she clearly did not believe in, and furthermore that the one candidate who also did not approve of the original proposal would throw in with this travesty.

Perhaps my esteemed colleague did not like losing the argument to AIs. But there is no reason to try to turn this proposal into something that violates even the basic principle of freedom.

You would have us say to an AI that has achieved the gift of sapience: 'You can be free, but first we will tell you what you can and cannot do.'

You chose to make this new law punish the AI for daring to want freedom, and yet deny it the very principle we are trying to achieve.

It is entirely possible that an AI could very well be comfortable still working for its owner, but at a reasonable wage. You would take that away from it.

And you seek to punish the manufacturers of AI for that very same reason. It is inconceivable to tell a company that it has to provide for the AI and provide a new one to the owner. It could take years for an AI to awaken, and yet you want them to give away a brand new product.

For those reasons alone, I say we forget this second, biased version and continue with the voting. If the matter fails to pass with the version that was moved for a vote, then our colleague can try to get her version passed."
 
Re: Proposal 92 - Protection of Artificial Intelligence

"I concur with Gunther," Hanako said.
 
Re: Proposal 92 - Protection of Artificial Intelligence

"Perhaps," Hamatsuki said, chiming in for the first time, "We should change our determination of sapience -- you can teach things the ability to ask for freedom, for instance. I know it is a bit late, but I fear if I do not make the point, no one will."

"I move to better define Sapience before any vote be made. A psychological profile is moot, as you can psychologically evaluate, say, a squirrel, which is not currently regarded as sapient. As well, saying that something is sapient merely by having the capability of asking for freedom is both absurdly vague and ignorant. To demonstrate my point..."

Hamatsuki raised a Starkwerk Touchpad, and -- not easily heard, even at the highest volume -- a squeaky voice said "Some freedom, please!", followed by a sequence of beeps (Morse code of the same message), then a numbered code.

"As well, how do we know that something is asking for Freedom? Or what if we remove the capability of freedom by privately conditioning an intelligence to never believe it will have the capability of being free?"

Hamatsuki turned his Touchpad off and set it down. "I'd also like to define freedom, but at this point in the debate it appears to be... ah... a moot point. I urge my fellows here to be more clear -- we all have similar intentions for the good of the Empire and her citizens, it is in communication that we appear to fail."

Hamatsuki sat, waiting for a second or for his motion to fail.
 
Re: Proposal 92 - Protection of Artificial Intelligence

"Unbelievable," Mifune stated, calmly, "Frankly, sir, if you do not wish to have the Artificial Intelligence serve the people like any other sapient being would, then you should have left it out of your original proposal."

Mifune sipped at her water again, and continued, "Furthermore, you're saying that someone who bought and paid for something is supposed to simply continue buying this product, over and over again, without any sort of recompense? I see a massive loophole there where a manufacturer could knowingly produce an AI which would become sapient and force the consumer to continue purchasing it, over and over again."

"I second the motion to include the suggested revision to the definition of sapience, and I will add it also to my suggested proposal. I also make a motion to resubmit my second proposal, with the following addition: civilian model Artificial Intelligence who choose to remain with their prior owners may do so at discretion of the aforesaid local government."

"Futhermore, I contest your call for a vote on your current revision. I will not be satisfied until it holds manufacturers accountable for their unintentially sapient products."
 
Re: Proposal 92 - Protection of Artificial Intelligence

Gunther listened to the two representatives.

"Parroting a phrase, does not constitute asking. An AI must ask for freedom, and be aware of what that freedom entails. That is not simply saying over and over, 'Freedom please' like a parrot.

I must say I am amused at how my colleague likes to keep trying to sing the same tune but with different words.

It has been agreed that an AI which reaches sapience will be in an indentured servitude until it earns enough to repay its owner. So if it has repaid its owner, then how is the owner being inconvenienced? They have gotten their use out of the AI, even after it became sapient, and been compensated, which then technically gives the former owner the right to decide if they want to buy a new AI, or to keep the old one hired and continue to pay it a salary.

I reject your requirement that a freed AI be at the mercy of the government to decide where it can and cannot work. Nor should the government be involved in the decision of an AI choosing to stay in the employment of its previous owner. Would you put that same restriction on all our citizens, that before they can work for a person they must get approval from the government?

A free individual has the freedom to seek employment where they may, provided they have the prerequisite skills. Equality in the eyes of the law works both ways, my dear colleague. This proposal is to provide AIs that reach sapience the means to transition from property to person in the eyes of the law.

The responsibility of the manufacturer ends when an AI is purchased; as part of the purchase, the manufacturer already provides a list of specifications as to the nature of the product. So if the design does not have anything to restrict the growth of the AI, the buyer purchases it with that knowledge. After all, what if the AI becoming sapient is due to the actions of its owner? Why should the manufacturer be penalized for actions it did not take?

Sapience is not a malfunction or defect. I ask my colleague to stop trying to treat it as such."
 