
Silhouette/Mirror Neural Operating Construct

Couldn't this be a violation of mental data law since your mind would be in two places at once (technically)?
 
It doesn't make permanent copies, and I believe that it is done with consent, meaning you've authorized the temporary copy to exist while you pilot the armor.
 
It's a ROM copy that thinks it's dreaming.
The data density changes too quickly: the moment you try to save state, you destroy the data, so it's a constantly ongoing process which is terminated the very second you switch it off.

It's very volatile and frequently needs to re-scan the pilot's brain... It's a very rough solution but in practice it's actually very elegant and makes the most of quantum computing - far beyond simple decryption tasks.


Without a pilot, at best it'll keep working 15-30 seconds before we get a RoboCop 2-style shutdown and the whole host armor comes crashing down like a rag doll.
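
(Purely as an illustration of the mechanism described above, here is a rough Python sketch of that lifecycle. Every name and number in it is hypothetical except the 15-30 second survival window; it is not part of the submission.)

# Hypothetical sketch of the volatile "mirror" lifecycle described above; not canon.
import random

PILOT_LOSS_LIMIT_S = random.uniform(15.0, 30.0)  # seconds the mirror survives without a pilot

class MirrorConstruct:
    def __init__(self):
        self.seconds_since_scan = 0.0  # time since the last "flash" re-scan of the pilot
        self.running = True

    def rescan_pilot(self):
        # Frequent re-scan of the pilot's brain; nothing is ever saved permanently.
        self.seconds_since_scan = 0.0

    def tick(self, dt=1.0):
        # Without fresh scans the construct degrades, then hard-stops (rag-doll shutdown).
        self.seconds_since_scan += dt
        if self.seconds_since_scan > PILOT_LOSS_LIMIT_S:
            self.running = False

mirror = MirrorConstruct()
for second in range(60):
    if second < 5:                 # pilot present for the first 5 seconds only
        mirror.rescan_pilot()
    mirror.tick()
    if not mirror.running:
        print(f"Mirror shut down {mirror.seconds_since_scan:.0f}s after losing the pilot.")
        break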
 
Overall it seems pretty good, but I see two issues that need to be addressed.

1) How long does the initial scan take? I had thought that an ST scan (which is what this essentially is) took several minutes. I could be wrong on the time frame, but if I am not, wouldn't this system represent an acceleration of that process by an order of magnitude?

2) I see no technological reason for the system's memory to be so unstable. So, you are making a copy of the pilot that is, for all intents and purposes, that pilot, and purposely making it short-lived. To me this would be analogous to intentionally cloning someone with a terminal illness that, while useful to you, will kill them in short order. What this would mean in terms of its legal status would obviously depend on how a polity treated sapience and the rights inherent therein.

Consider this issue: how would a polity handle a duplicate created by accident? For example, say a crewman (recognized as a distinct, sapient being with all the rights of a human or equivalent in that polity) were to become stranded on a planet for several weeks/months/years. He had a mental backup (an ST backup in this setting) saved, and the person/company/State handling it believes him to have died, so they create a duplicate of him using the backup. Later on, the original is finally rescued.

How does the State handle this? Does the original have the only right to exist and the clone is destroyed (so there is no duplicate)? Is it the other way around? Are they both declared distinct sapients, both entitled to survive (even though they are duplicates)?

For a State that considers the original mind to be the only one with full rights, with the clone considered a danger to the order of things, I don't think this system would have any issues.

However, for a polity that considers them both to have rights (to whatever degree) I can see this having some serious legal problems.

That is all,
Vesper
 

Well, there is the issue of managing entropy within such a system, which has the tendencies of a quantum system in that the scope of possible variables is endless (Turing's prediction of greater systems).

There is then also the fact that any sufficiently complex system can never be software-perfect, especially if it is undergoing ongoing changes. It can become more adept, but it can never become truly stable, and it is impossible to compute when a program will halt (Gödel's incompleteness and Boltzmann's treatment of entropy).

Then there is the inherent issue of the mind itself. Specifically, stimuli...
If a mind "wakes up" and realizes it is merely a ghost in a machine, you have psychohygienic issues which could lead to a negative performance impact: I don't want half of my processing cycles being wasted on "Why am I?" Hence the system is kept inherently unstable, so that the echo or mirror never lives long enough to ask these questions: it is riding the psychological momentum of the pilot.

My solution, truly, is to consistently reset and re-inform the system via long-term recall.

The name of the man is beyond my grasp at this moment, but I recall the story of a piano teacher in the mid-'90s who had only five minutes of short-term memory. When it ran out, his memory reset, because of the peculiarity of the damage left by his brain tumor.

He could teach every hour of every day, even into the night, convincing himself he'd slept in or some other nonsense and utterly believing it. He held a record of three days and sixteen hours of nonstop play, his only pauses being to read the letter he left himself explaining, in the very shortest of terms, what had happened.

He was permanently and endlessly motivated. His skills as a pianist even improved in long-term memory, but he could not recall anything beyond the day his troubles with the tumor began. To him, it is still that exact same day; not an hour has passed.

I use this peculiarity to my advantage here and also to avoid any messy laws around the copying of one's neural structure.


The initial startup scan during scramble likely takes a good 60 to 180 seconds, but the repeat "flashes" or checks take seconds and happen seconds apart.
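
(Again purely for illustration: the 60-180 second startup figure comes from the line above, while the five-second flash interval and the function below are assumptions, not canon.)

# Illustrative timing sketch for the scan cadence described above.
INITIAL_SCRAMBLE_SCAN_RANGE_S = (60, 180)  # full startup scan during scramble
FLASH_CHECK_INTERVAL_S = 5                 # assumed: repeat "flashes" happen seconds apart

def flash_check_times(mission_length_s, startup_s=max(INITIAL_SCRAMBLE_SCAN_RANGE_S)):
    # Yield offsets (seconds after power-on) at which flash re-checks would run.
    t = startup_s
    while t < mission_length_s:
        yield t
        t += FLASH_CHECK_INTERVAL_S

# Example: checks during a 200-second sortie, assuming the worst-case startup scan.
print(list(flash_check_times(200)))   # [180, 185, 190, 195]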

How does the State handle this?
It would not: by definition, we are not handling an abstraction which can be extracted as an ST backup. This is a neural simulation. The machine has no legal rights and is in a "sleep" state.

It is not aware that it is, only that it is doing.
 
As I said in our IRC discussion on this topic yesterday, I believe you are viewing the concept of AI in an unnecessarily narrow way.

In regards to the technical feasibility of AI construction, that is essentially a moot point. The technology exists in the setting. It has existed for a while and will continue to exist into the future. Further, I think that the argument that creating a sapient AI is impossible because of the physics of it is an invalid argument. Everyone reading this has a wetware sapient intelligence operating between their ears. It is obviously possible, though not easy to reproduce.

Your statements seem to be contradictory regarding the nature of the simulation. You say it is a copy, but not a copy. It seems to have all of the reasoning and intuition of the pilot, but you keep saying that it's not sapient. You are going out of your way to argue that it's not when it seems to have all the hallmarks of being one. To me this seems like an 'it looks like a duck, quacks like a duck, it probably is a duck' situation. I also do not see how it would be capable of doing all of this if it were 'sleeping'.

I can certainly see how the simulation, if disconnected from the sensors, would become unstable, but I don't see how that would be much different from shoving a human into a black box for a while. If anything, the simulation would have more stimuli than the organic pilot, since it can sense and process vastly more information than a normal, organic mind could hope to.

In regards to the AI constantly pondering philosophical questions, I do not see why it would do this any more than a normal sapient mind would. IRL, some people go off and ponder that stuff their entire lives. The vast, vast majority of people, however, manage to function without being paralyzed with self-doubt. I think that the AI would be similar. Certainly some would become all emo and philosophical, but the majority would keep on going without serious problems.

In a general discussion I would state that it makes no sense that it would become more unstable simply because the simulation can't go to the break room for coffee. However, since this AI is essentially copied from the pilot, it would be used to that kind of stimuli and would still need it to remain stable, barring substantial changes in its psyche (this being a problem a purpose-made AI would not have). That brings us back to the whole issue of the legality/morality of the system. You are creating a copy of the pilot's mind, fully aware that it is going to become unstable as a result of its form. Your solution to this problem is to erase its memory every dozen seconds or so.

Your example is a fairly good one to show in practice what your AI would be like, and this serves to reinforce my point. The man is fully sapient and aware but lacks the ability to remember information in the longer term. This is an unfortunate situation for him, but I highly doubt that you can argue that it would be acceptable for someone to go and intentionally do this to a person who is otherwise normal. This is what you are doing to the AI in this system. It would otherwise continue on normally (with a resultant psychosis in all likelihood, due to its inability to interact with other people like it is used to) were you not to go and destroy its memory.

You are essentially saying, "We made this sapient AI here that we knew would have problems because of its nature. We knew that there were other options available that would avoid this problem. Rather than use one of these options, we have chosen to lobotomize the sapient so there wouldn't be problems. That makes our choice A-OK."


Vesper
 
Could you two quit arguing about heavy theory? It's derailing the submission. All this is is a specific control system that works in the way described; it does not matter what we technically *could* do, what matters is that this does what it does, for reasons which Osaka has already explained. Now, can we get back to the actual submission instead of seeing whose theoretical knowledge is largest?
 
I am not arguing that it does not function. I am stating that, as written, its operation brings up some moral issues and that it may have legal problems in some nations.
 
Well, the pilot has full knowledge of the copy and would obviously have to consent to the copies being made. The copies would still be under the original owner's control; therefore, it does not violate Yamatai's backup laws.
 
The thing is, you are not just making a backup, you are also initializing it. An ST backup is just data (albeit important data) while it is sitting in storage. It becomes a person when it is activated and transferred to a body (presumably after the previous body's demise). I would argue that after you start the copy up, it becomes a sapient entity, one whose rights the State would be required to uphold.

If the initialized backup (copy) doesn't have any rights, how does any person restored from an ST backup have rights?
 
It's a simulation, as Osaka said, not an actual consciousness. The simulation doesn't exist long enough to self-realize. I would argue that it isn't a separate individual because of this.

It does not violate the laws because it is not an ST or a mental backup.
 
I dream, and in this dream I dream of myself. As I look into my own eyes, I ask myself which is the real me, and to that I answer: the one standing before me.

OK, I think I get both points. But perhaps this could answer the question. When the pilot gets into the machine, the box copies all of his memories, his neural impulses, and the like. But it is still the pilot in control. Perhaps the way we should look at it is not as an SI, but as a device that expands the consciousness of the pilot. Without the pilot, it is just a machine with no awareness. But with a pilot, it's like adding a portion to the brain that is specifically designed for handling the armor it is placed in, effectively making the two one. But this method also allows the pilot some distinction between the machine and himself, so he doesn't fall to the ground screaming when one of his frame's arms gets blown off.

Does this make sense to everyone?
 
That was beautiful, Wanderer, that's exactly what it's supposed to be.
 
I think the article needs to be rewritten to include concepts from this discussion. It also needs its "intro" sentence under the header.

Here are the applicable laws, by the way:
Item A is clearly violated in this case.

Purging the computerized copy from the mecha could also be considered murder (see Item 3).

3. It is illegal to clone (make physical copies of) or ST-clone (make mental copies of) any Star Army citizen/plebian/soldier without their consent (Proposal 39, Item 2).
Presumably, the pilot would have given consent (or upon activation, the device could ask).

In my opinion, a brain scanner like this that could predict pilot actions could be considered ST tech.
 
Pro Tip: You can always sell the top-grade intended version to Nepleslia, the UOC, the Lorath Matriarchy, and the Free State.

A dumbed-down, less effective version of the unit, which uses the pilot's brain matter as a processor instead of copying the process to the machine, could be applicable for Yamatai. This would be viable because the Yamataian/Nekovalkyrja brain is able to accommodate the additional processing demand and can actually execute software in a synthetic operating environment.
 
No offense, but I think Yamataian-born brains would be able to make far better use of this than Nekovalkyrja would, simply because Yamataians have a lifetime of authentic experience, whereas Neko are vat-grown and lack the experience the system needs to be used to its full potential.
 
It's a combat system, and Nekovalkyrja are more likely to have combat experience than any other race, since they're only used in the Yamataian military. Since Nekovalkyrja already have computer-type brains, would there even be a demand for this system?
 