
Rules Question 004: Nanotechnology


phacon

Inactive Member
Nanotechnology construction devices or nanotechnology anti-starship weapons are not plausible (see this page for an explanation)
So I was looking at this rule, grey goo, and more importantly at its explanation from Star Wars Destroyer.

Generally, the premise that it's a bad idea to use nanomachines or grey goo as a weapon is true. However, some of the evidence Star Wars Destroyer.com uses is either outdated or purely conceptual and not based on actual physics. In some cases, the information is simply wrong. For example:

Nanobots would most likely be even slower than the aforementioned technologies; electroplating and nickel vapor deposition pour on atoms as quickly as they can bond to the underlying material, and nanobots would only add complexity to this process.
The problem with this statement is twofold:
  • First, he is comparing the method of making nanotechnology to other, ordinary chemical processes while assuming all chemical processes go at the same speed regardless of temperature. First of all, no nanotechnology I know of uses electroplating. Second, most non-carbon nanostructures are made either in special biochemical liquid solutions (like gold colloids) or, like silver nanoparticles, by ion injection with a two-bus-long mass driver that's a cousin to the Gatling gun. I am pretty sure a Gatling gun is not slow, and it makes silver nanostructures on the order of microseconds.
  • Second, he assumes that carbon nanotubes are produced by the same chemical vapor deposition (CVD) as regular nickel vapor deposition. Carbon nanotubes and graphene are grown fastest using a CVD method; however, graphene and other nanostructures require a very specific CVD called Ultra-High Vacuum Chemical Vapor Deposition, which has a dramatically different reaction rate than normal CVD. I am hesitant to give out the actual temperature and pressure for, well, research reasons, so I didn't say anything; however, the chemical conditions for nanotubes might be in this paper by Walt DeHeer (don't read this, just search the word torr on page 5 and 300 on page 4 ^_- ^_-). A better comparison is that making carbon nanobots is basically the same as making steel in the vacuum of space just outside the sun's photosphere and solar winds. The iron is going to melt and fuse with the carbon pretty darn fast, in fact way too fast for you to control.
So, back to the Star Trek Destroyer.com point: the claim that nanorobots are ineffective due to speed is correct; however, it's not that they would be too slow, it's that they are too fast. So fast that any inaccuracy compounds exponentially.

In conclusion, I think the premise behind the rule against nanobot construction and nanobot weapons is valid; however, I think the reasons given for it are completely conceptual and not based on actual physics. I mean, quantum physics is supposed to be counterintuitive by default, so why should something that's only 10-100 atoms large be any different? ;3
________________________________________

Where was I.... oh, I forgot the question XD....


My question was, in light of all of this: I was looking at grey goo and the general ban on nanomachine weapons, and I kind of wondered...

Would it be good in SARP to show some of the reasons ICly why making nanobots as weapons is a bad idea? I mean, Michael Wong was right that attack nanobots are a bad idea; it's just that there are newer reasons why it's true. Basically, I wanted to know: if the opportunity arose ICly, could we talk about the reasons why certain "forbidden" nanomachines are a bad idea? Or is it against the rules to even talk about attack nanomachines?(*)

P.S. Actually, my main point would be to take the physics I mention above about nanomachine growth and apply it by showing how a simple flamethrower can cause any grey goo to solidify(*) itself into a giant block of carbon.

*remember what I said about too fast ^_- ^_-
 
I'll put it this way:

Dusk, when you post things like this, the immediate responses of some people in this community are, "Oh boy, here we go, one more techhead trying to take the fun out of SARP."

Many of us do not care whether nanotech or "grey goo" is feasible in the real world or not. You want to call it magic without scientific basis, go for it. We like it anyway.

If you turned a flamethrower on a Nekovalkyrja, yeah, she'd turn into a hulk of carbon, just as you state. So would most living things, in one sense or another.

If you want an IC opening to attack nanotech, go nuts. Create a device or something to attack it. Just know it'll go through an NTSE submission thread and that more elements than the mere "science" behind it will go into judging it.
 
Doshii Jun said:
I'll put it this way:

Dusk, when you post things like this, the immediate responses of some people in this community are, "Oh boy, here we go, one more techhead trying to take the fun out of SARP."

Many of us do not care whether nanotech or "grey goo" is feasible in the real world or not. You want to call it magic without scientific basis, go for it. We like it anyway.

If you turned a flamethrower on a Nekovalkyrja, yeah, she'd turn into a hulk of carbon, just as you state. So would most living things, in one sense or another.

If you want an IC opening to attack nanotech, go nuts. Create a device or something to attack it. Just know it'll go through an NTSE submission thread and that more elements than the mere "science" behind it will go into judging it.

Well yeah, I could use this to attack nanomachines in general; however, I had hoped that by quoting the exact rule above, people would realize I was not.
Doshii, you need to realize, like I suggested above, that I am intentionally focusing on finding in-character reasons why SARP wouldn't use certain types of "forbidden" nanomachines; specifically, types of nanomachines that Wes and the other tech/setting moderators have agreed they don't want to see. Attack nanomachines are already against the rules anyway, so I was wondering, for the sake of argument, whether or not it was okay to make in-character reasons to help explain the rules we already have in SARP. Does that make more sense?

So Doshii, I don't think I am attacking anything that people in SARP already say they don't want to use.
 
Plus I thought that maybe giving IC explanations for some of the rules would be nice for Wes if the opportunity arises >>;]

Then again, given the responses, if people don't want me to do anything, I can do that too. <<;

I mostly wanted the moderators to be aware of the use a flamethrower can have in solving, well, "RP problems" they might run into ;3.
 
As far as I've heard from more informed people (and books) than the author of that atrocious article (it really is a very bad article, I do not like it in the least), nanetic technology is more feasible than the traditional 'macro-scale' methods we are used to and comfortable with; however, it is only feasible given the proper infrastructure.

For example, let's look at antimatter production. Right now we can make minuscule amounts of antimatter in current particle accelerators just by smashing protons together. However, it is a very inefficient way to produce antimatter; if our handful of multimillion/billion-dollar scientific accelerators were all producing antimatter as fast as possible, current projections place us at something like a microgram (perhaps a whole gram, I'm not sure) after around a thousand years of effort.

Of course that assumes that no new accelerators will be built and that designs will not become more efficient (i.e. faster per generation cycle) or compact in the intervening time.
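
(For the curious, here's that projection as a quick back-of-envelope sketch in Python. The production rate and target mass are round numbers assumed purely for illustration, not measured figures.)

    # Back-of-envelope: time to accumulate a target mass of antimatter at an
    # assumed worldwide production rate. Both inputs are illustrative guesses.
    assumed_rate_ng_per_year = 1.0   # assumed: ~1 nanogram/year across all accelerators
    target_micrograms = 1.0          # the "microgram" figure mentioned above

    target_ng = target_micrograms * 1_000    # 1 microgram = 1,000 nanograms
    years = target_ng / assumed_rate_ng_per_year
    print(f"~{years:,.0f} years to reach {target_micrograms} microgram(s)")  # ~1,000 years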

The same principle applies to nanoconstruction techniques. Right now it's a pipe dream, but there are both conservative and liberal projections that predict a viable nanoconstruction industry within the next generation (about 20~30 years). Setting aside some of the outright fallacies invoked by the author of the above article, I will concede that open-air nanoassembly is unrealistic, and universal nanobots that can fulfill a host of operational and utility functions are very far off, if not outright infeasible.

However, consider things like assembly vats, in which nanites form a labor scaffold within a liquid-filled tank and are directed from a central planning computer via contact transmission (i.e. all of the nanites snap together like a bunch of Legos and act as a larger computer and assembly structure). Raw materials and the nanites themselves would be pumped in over various cycles to build the completed product. 'Engines of Creation', a very in-depth book about the possibility and utility of nanotechnology, describes this very process; its example product was a one-piece crystalline rocket motor that would alter shape and function (e.g. to attain stable combustion rates at specific temperatures or become more efficient at higher thrust velocities) based on conditions within itself and possibly on external commands from the operators.

Things like grey goo aren't ridiculously fictitious, nor are they impossible to develop, though like phacon said, extermination is fairly easy (IIRC nanostructures are much more vulnerable to energy transference since they are so incredibly small). However, nanetic weapons are, in my slightly informed opinion, generally only viable in massed volumes or as biological-effect weapons targeting populations instead of materiel.

===

So, back on topic, why should nanotech be restricted or forbidden?

Well, first off, the manufacturing technique behind a piece of SARP equipment doesn't matter much unless it needs to have some sort of weakness (nanofabrication pretty much eliminates massively exploitable flaws caused by faulty construction processes); in that case such details are often necessary, and thought should probably be put into what sort of stresses the base material can withstand even with a perfect assembly system.

Secondly, nanotech is already used by most of the major factions. Keeping in mind that biological equipment such as the Mishhuvurthyar is nanetic in nature anyway, I've been informed by older members that armor storage and deployment on ships was performed by nano-scale assembly chambers before and after most missions. And yes, cells are nanomachines; don't argue this point unless you're prepared to engage me in a knock-down drag-out fight, because I know I'm correct when I say this. So nanotech isn't banned from SARP outright, it's just shied away from since it's been so popularized in newer sci-fi media.

This brings us to the third point: try not to use nanotech for everything. It has weaknesses when deployed directly, and most of its benefits are indirect in nature (antiviral and antibiotic injections/application creams, industrial manufacturing, highly accurate microscopy and nanoscopy, lock picking, and other things that don't require tons of nanoscopic robots to be sprayed all over the enemy).
 
Star Trek or Star Wars? In the first post you said one the first time you mentioned it, and the other the second time.

As for the topic: Star Army does have offense-aimed nanotech. They are just aimed at supporting conventional combat rather than being used in combat themselves. However, I would not like to be the landing party on Yamatai or any of their ships if PANTHEON's nanites are still saturating the air.

Also, some nanotech is resistant to fire.
 
Haters gonna hate; then they are likely going to complain when their character performs poorly in an RP with someone who knows what they are doing. Ultimately, learning is what makes you a better RPer.

Yes, nanotechnology is used as a magic wand in some places in SARP, and it certainly isn't really as useful as it is portrayed in the setting. A lot of people also don't understand how nanotechnology works (see Cy83r K0rp52's post about how particle accelerators can scale up....). Fortunately, it has been toned down a lot recently and certainly isn't being used at the 'magic' level as much anymore. Even the nodal system is going away.

Though it is good that people read up on things they actually use in RP, to avoid embarrassingly stupid situations where they use something wrong and put other people in the position of having to either go along with it or try to correct the character that is supposed to be an 'expert'.

It also helps in stopping people from just ripping things off from anime they've seen.

---

As far as nanomachines go, though, everything they do is going to have an order of complexity much higher than a simple chemical process, because while they have to perform the same type of action (bonding atoms or what have you), they will have to perform it in a specific order along with other similar actions. It is much more realistic to say they will go slower, rather than faster, than a comparable chemical process involving the materials the product is going to be made from. This is the difference between making steel and making a car.
 
Uso said:
As far as nanomachines go, though, everything they do is going to have an order of complexity much higher than a simple chemical process, because while they have to perform the same type of action (bonding atoms or what have you), they will have to perform it in a specific order along with other similar actions. It is much more realistic to say they will go slower, rather than faster, than a comparable chemical process involving the materials the product is going to be made from. This is the difference between making steel and making a car.

This statement is untrue; the Star Destroyer article made the same mistake of applying macro-scale concepts of accuracy versus time to nanoscale techniques.

The work put into moving an atom with an atomically scaled arm is minuscule compared to moving a car frame with a robotic arm. There is much less mass to move for both the constructor and the construction material, and anyone who remembers the simplest principles of physics knows that mass does not scale linearly with size. Comparatively, speed is not linear either: moving a robotic arm and several hundred pounds of steel alloy takes more energy and produces more heat than moving a nanoscopic molecular or elemental 'brick'. Thus, the nanoassembler can perform thousands of operations in the same timeframe it takes that massive robot to move the car part into position to be welded, and thousands upon thousands of nanoconstructors (they don't need to be much bigger than a virus, if that) can be used at once across the entire surface of the construction, geometrically multiplying the number of workers that can be used as surface area increases.
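
(A quick numerical sketch of that scaling argument, with every size below invented purely for illustration: shrinking a worker shrinks the mass it has to move with the cube of the scale factor, while the number of workers that fit on the workpiece grows with its surface area.)

    # Illustrative scaling sketch; all figures are assumptions, not data.
    scale = 1e-7           # assumed: a nano-arm 10^7 times smaller than a robotic arm
    arm_mass_kg = 100.0    # assumed mass of a macro robotic arm
    payload_kg = 200.0     # assumed mass of the car part it moves

    # Mass shrinks with the cube of linear size (square-cube law).
    print(f"nano arm ~{arm_mass_kg * scale**3:.1e} kg, "
          f"nano payload ~{payload_kg * scale**3:.1e} kg")

    # The number of workers that can tile a surface grows with its area.
    part_area_m2 = 2.0      # assumed surface area of the workpiece
    worker_m = 1e-7         # assumed footprint of one nanoconstructor (100 nm)
    print(f"up to ~{part_area_m2 / worker_m**2:.1e} workers fit on the surface")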

As to whether nanoassembly is faster or slower than conventional manufacture, I don't know; my guess is that they operate at a similar enough speed that the accuracy of nanoassembly would be worth an increase in the overall price of a product, especially if that product requires accurate construction (e.g. mortarless bricks or NBC hazard suits). With nanoassembly, processes like welding and forging become completely defunct. The hope of futurists and transhumanists like myself is that nanomanufacturing is so much more efficient than macro-scale manufacturing that it becomes prevalent in all forms of production, especially since, assuming it has few if any drawbacks compared to macro-scale manufactured materials, you can build complicated alloys in a fraction of the time it takes to produce them conventionally (e.g. the equivalent of a 300-fold [ridiculously high number] katana; the best are forged with something like 50 folds, and forging a 300-fold blade is practically impossible with conventional technique, but it could be built as fast as a conventional kitchen knife, without any need for the mostly forgotten artistry involved in its forging).

As to the problem of breaking atomic bonds: it is not splitting an atom (which is much harder and causes a big explosion), and conventional manufacturing techniques already break and form bonds between atoms on a large scale, so the problem is a misconception of imagery and a natural fallacy of thought tied to our macro-scale state of being; that is, the idea that accuracy always costs energy and time. Also, since, as already discussed, a great number of nanobots are used to construct an object, individual molecules essential to the final product can be produced in large numbers quite easily, assuming the collective constructor scaffold is capable of distinguishing different assemblies of atoms from one another within a general pool of resources. There is nothing saying a single constructor has to build the entire device itself, nor is it forbidden from using macro-scale assembly techniques such as the prefabrication of parts and substructures.

Overall, nanites can (I hesitate to use 'should') be used within a society as transparently as assembly lines filled with mono-tasked robots, medical labs dedicated to developing and producing new drugs, and construction crews building the next subway line or high-rise apartment. Blunt usage of nanites, as weapons, amounts to little more than a cleaner (or sometimes messier, if there aren't enough nanites to go around or if the object is to induce terror in observers of a flesh-and-blood target) and more universally effective flamethrower or directed incendiary. As I recall, acid was the 'weaponized nanite' of past eras of fiction.
 
This is one of those 'read the article' things, specifically the 'perspective of the nanobot' part. Moving things atom by atom is going to require a nanobot to find, move, and place each atom. It is going to have to do this trillions of times to make something large enough to be seen; it is going to have to find a way to refuel itself; it is going to have to have something communicate the instructions to it several trillion times because it can't hold much data internally, etc. Regardless of the complexity of the item, it has to be assembled atom by atom, so it is always going to take a lot of time. Remember, you also have to find each atom before you can move it, which is absurdly difficult. Ensuring a high-quality finished product would be very hard because of the difficulty of assembly, so it is unlikely nanomachines will be able to make you that katana.

Sure, a nanomachine could perform thousands of operations quickly, but that will barely build a few molecules. A large robot welding and moving pieces of metal will assemble items faster simply because it doesn't have to deal with the same massive overhead (controls, locating and assembling each atom, etc.) and can move trillions of atoms simultaneously, something nanomachines can't do. It also has the benefit of being able to be part of an assembly line, vastly increasing the amount of finished product that can be put out.
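
(To put some illustrative numbers on the 'trillions of operations' point, here is a rough order-of-magnitude sketch. The object size, placement rate, and swarm size are all assumptions made up for the example, not claims about real hardware.)

    # Order-of-magnitude sketch of atom-by-atom assembly time.
    # Every figure below is an assumption chosen for illustration.
    atoms_in_object = 1e21            # assumed: roughly a sugar-cube-sized solid
    placements_per_second = 1e3       # assumed: one atom placed per millisecond per nanobot
    nanobots_working = 1e9            # assumed: a billion nanobots in parallel

    seconds = atoms_in_object / (placements_per_second * nanobots_working)
    years = seconds / (3600 * 24 * 365)
    print(f"~{seconds:.1e} seconds, i.e. ~{years:,.0f} years under these assumptions")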

So seriously, read the ENTIRE article before you start claiming that it's wrong. SD.net nailed the physics behind nanomachines already.

To quote Star Destroyer:
"But humans are grown, and that works, so you're making it sound harder than it is!" some may protest. But they would be missing the point. As mentioned previously, our manufacturing accuracy leaves something to be desired, and is well below the standards expected of machined parts. A $1 compact disc is manufactured with tighter tolerances than the human body, which can't even make two arms, two legs, two eyes, or two of anything which match to within what a typical manufacturer would consider tight tolerances. Moreover, initial growth stages must take place in a special environment (the womb), so the process doesn't work on a table in the middle of a factory. A constant stream of nutrients (ie- fuel) must be fed into the body so it can grow itself. And what about speed? It takes approximately 16-18 years to manufacture a mature human being, remember? If it took that long to make a car, would you wait? What about waste? A human being will emit more than 1E10 joules of waste heat before it is mature, in addition to producing some 5,000 litres of urine and several hundred kilograms of feces (dry weight), all while consuming enormous amounts of both solid and liquid nutrients and burning them at 25% efficiency. Is this really a manufacturing model that we want to emulate for industry?

People who propose one-stop "cure-all" solutions usually haven't thought clearly and thoroughly about them; in reality, there is no conceivable advantage in 99% of the applications where nanotechnology disciples would have us use it. Small robots are good for doing small things (eg- killing a cancer cell), but not for doing big things (eg- making an engine block). Moreover, accuracy is a serious problem with any atom-by-atom or molecule-by-molecule manufacturing scheme; whereas an engine block can be easily finished to within close tolerances with large CNC grinding tools, that same block would be nightmarishly difficult to manufacture to the same tolerance using nanobots (to say nothing of the staggering difference in speed and efficiency between casting the block and building it atom by atom with nanobots).


While you're guessing at what nanomachines are like, this article is written by a mechanical engineer. He knows what he is talking about. After all, your entire argument is based on 'if there are no problems with using nanomachines, they will be great!' Realistically, you can't ignore that nanomachines can't hold a lot of data, that moving things atom by atom is inefficient, and that nanomachines are naturally super-vulnerable to damage, making them terrible weapons.
 
I love how you completely disregard the fact that I have not only read the article in its entirety (hence why I can say I don't like it and think it is incorrect), but I also gave a reference of my own: http://en.wikipedia.org/wiki/Engines_of_Creation.

It is a book written by a man with what I can only guess is either a superior or inferior grasp of the limitations and advantages of nanotechnology compared to the author of the Star Destroyer article (that is, he might be right, but I'm inclined to support my position and Mister Drexler's views on nanotechnology). Though I will admit the Star Destroyer article makes a good argument against universally capable airborne/free-roaming nanobots.

Fuel is easy with nanobots that assemble into a collective scaffold designed specifically to produce the target product: simply transfer electrical or kinetic energy through the scaffold via substructures within the nanites themselves. Information can be transferred in the same way, and there is a great deal of knowledge and development even today regarding swarm logic and kinesthetic awareness in robots. Finding an individual atom is quite hard, I will admit, but that is not what a dedicated assembly nanite will be doing; it will be plucking an atom from a cloud of identical atoms, and in the same way, thousands upon thousands of nanobots like it will be doing the exact same thing.

THOUSANDS OF TIMES EACH SECOND.

There is a compounding of speed and accuracy when one considers nanoassembly as a swarm endeavor supported by an infrastructure (i.e. an information-saturated nanoscaffold made up of worker nanites controlled by one or more connected CPU nodes, in a tank filled with construction materials that are fed and drained in sequence according to the product being built) rather than as an individual builder wandering blindly in the resource-devoid wilderness of some fallacious assembly bay floor.

Yes, the idea of free-roaming nanoconstruction techniques is fictitious. However, the methods I describe, as they have been described to me, make more than enough sense to defeat the arguments you keep repeating without consideration.

In reference to your quote:
As mentioned previously, our manufacturing accuracy leaves something to be desired, and is well below the standards expected of machined parts. A $1 compact disc is manufactured with tighter tolerances than the human body, which can't even make two arms, two legs, two eyes, or two of anything which match to within what a typical manufacturer would consider tight tolerances.

Something must be considered: evolution crafted us and other organisms to survive, not to be perfectly symmetrical. Additionally, the fact that we can craft devices on a macroscopic scale with design tolerances tighter than natural nanomachines says something, and that something is not "nanoassembly is inaccurate"; it is "we can be more accurate with conventional techniques developed over a few centuries to a few millennia than Mother Nature is with billions of years of nanoscopic design development". The optimistic expectations for industrial nanoengineering are therefore much higher than detractors might claim.

As for this little gem:
Moreover, initial growth stages must take place in a special environment (the womb), so the process doesn't work on a table in the middle of a factory. A constant stream of nutrients (ie- fuel) must be fed into the body so it can grow itself.

The same kind of controlled environment exists on an assembly line.

After all, if a welding robot does not have a lifter robot to move a part into place, or even a line of partially finished product for it to weld, or dare I say electrical power supplied by the city's power grid, it would not be able to build the car at all. Clearly an assembly line is a highly inefficient process of construction; why did we ever turn away from the robust and ubiquitous ability of the human hand? (drama exaggerated to prove a point)

Your arguments are completely fallacious, as are the words from the authority you quote. They lack consideration of the modern industrial machine and appeal to base and emotive forms of thinking that I find distasteful and unfit for display in even the softest of science fiction universes (Star Trek notwithstanding; those shows are more appropriately placed in the realm of morality tales than listed as primarily science fiction).
 
You're quoting a wiki page about a book, not the book itself (which is already being criticized for bad science as it is).

Also, if you build a scaffold for nanomachines, you can't move what you're making, meaning you can't use an assembly line.

In addition, how do you build this magical scaffolding? You have the exact same problems in trying to build it that you have in trying to build anything else with nanomachines, and it also has to be taken apart as you build the object, adding another level of complexity. You also can't transfer data through your scaffold, because if you did you'd have to have a data bus capable of supporting trillions (not thousands) of different nanomachines; naturally such a bus would be enormous, and it would require that all the nanomachines stay in the same spot. Of course you also have the same problem with power: trying to power all the nanomachines through something small would result in burnouts, making the scaffolding unusable. Then you have the problem of going to this cloud of atoms and coming back, finding your way there and back, and of course identifying the atom you are picking up. Having the atoms in a cloud is the assumption from the start; it does not make finding and acquiring an atom any easier. If you do this thousands of times each second, you can still only produce something the size of a grain of dust.
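
(Gesturing at the data-bus point numerically, with assumed figures: even a tiny per-machine control stream multiplies out to an enormous aggregate bandwidth.)

    # Aggregate control bandwidth for a swarm; both inputs are illustrative assumptions.
    nanomachines = 1e12            # "trillions", as discussed above
    bits_per_second_each = 1e3     # assumed: a very modest 1 kbit/s of instructions each

    aggregate_bps = nanomachines * bits_per_second_each
    print(f"aggregate control traffic ~{aggregate_bps:.1e} bit/s "
          f"(~{aggregate_bps / 8e12:,.0f} terabytes per second)")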

The larger the item, the more nanomachines you need, and the harder it gets to control and maintain them all.

Quite frankly the system you are basing your arguments on doesn't work.
 
Bad science? No, I'm afraid not...
The book and the theories it presents have been the subject of some controversy. Scientists such as Nobel Laureate Richard Smalley and renowned Harvard chemist George M. Whitesides have been particularly critical. Smalley has engaged in open debate with Drexler, attacking the views presented for what he considered both the dubious nature of the science behind them, and the misleading effect on the public's view of nanotechnology. Others, such as futurist Ray Kurzweil, who draws heavily on it in his own publications, have embraced the book.

For a concise list of scientific criticisms of Drexler's ideas, see physicist Richard A.L Jones' Six Challenges for Molecular Nanotechnology, which outlines a number of the problems that have to be overcome in order for the technology to become feasible.

Please note that there is no general statement about the book's truth value one way or the other, and that, if you read the link to Richard Jones' 'Six Challenges', the technology is still plausible if not probable. The issues it raises have very little if anything to do with the issues raised by the Star Destroyer article.

I'd like to quote a friend on the issue of design tolerances, whom I asked to review my statements in case I had misstated anything...
<LTK> That quote in the second post of Uso is completely inaccurate. You can't possibly compare the human body to the end product of a factory.
<LTK> If you're comparing a factory's tight tolerances to an organic lifeform, the end product would be a protein. The factory is a cell. The body in its entirety is comparable to the country that the factory is built in.
...
<LTK> I mean, we wouldn't even still exist if DNA replication was given as much margin of error as a car assembly line.
<LTK> But that's the only thing I can claim factual knowledge of. I don't know if it's actually possible to make nanotech as efficient as organic proteins.
<LTK> Data storage? Don't need it. Maintenance? If it breaks, replace it. Energy? Most of them utilize the inherent atomic forces of the elements, and the rest uses a ubiquitous and renewable energy currency.

So, in fact, it seems (assuming my friend is correct, and I do) that macro-scale production is less accurate than existing examples of nature's nanotechnology. And I do take the leap of assuming that humankind can emulate and replicate the processes developed by nature as well as implement an analogous nanotech system.


The scaffolding is not magical; please don't insult my intelligence with that sort of line of argument. We have robots that can find each other on a table and self-assemble; there is nothing saying a nanobot can't perform the same feat. The computing requirements for each nanite can be integral to the individual unit, removing the majority of the need for external computing support. Of course, I'm assuming that computer science and electronic design will compact and improve as feasible nanoassembly solutions are developed.

The scaffolding does not need to move, since the elemental and molecular material is pumped and circulated around the assembly tank. I'm also extremely sure that we have microscopes capable of detecting and identifying atoms; as with the assumption of constant improvement and miniaturization of electronics and programming, I also assume that these devices will become more accurate and much smaller, up to the point that an individual nanite will be able to identify the atom or molecule it has grabbed as well as pick out a specific item within the cloud of mostly homogeneous raw materials it will be using to construct the product.

Quite frankly, I think the system you and the author of the Star Destroyer article are basing your ideas on does not work (I've been saying this since my first comment, but it bears repetition).
 
The few mentions of the book involve the magical scaffold, which won't work (as I've explained, machines still have to know what they are doing; you can't just have a hammer and expect it to put a nail in all by itself), and the crystalline engine, which not only lacks any engineering to back up its design but also seems really overengineered compared to an ion, Orion, or even Merlin engine, which would produce better results at a fraction of the complexity.

And of course, DNA replication is just that: replication. You are not making something entirely new; the nanomachine (or nucleus, in this case) is simply duplicating something it is touching, which requires little to no computing power. This is entirely unsuitable for building anything other than what is being copied. Note that even cells have data storage! So yeah, you do need it, because without it nothing works. Your friend doesn't know what he is talking about.

So, to copy DNA, the cell needs data storage; in this case the data is the DNA itself, which is exactly the size of the DNA being copied. This is because when you are remembering where atoms are, you pretty much have to use atoms as your data storage medium. If you are going to store the location of every atom in an object, you'll need at least a copy of the object in its entirety. This means that to build a ruler, each nanomachine would have to be at least the size of the ruler if the data is stored on the nanomachines.
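
(The storage point as arithmetic, with assumed figures: recording each atom's identity and position costs a fixed number of bits, and even a very optimistic atomic-scale memory spends atoms to store bits, so the 'blueprint' ends up rivaling or exceeding the object itself.)

    # Rough sketch of blueprint size vs. object size; all inputs are assumptions.
    atoms_in_object = 1e21           # assumed: a small visible object
    bits_per_atom_record = 100       # assumed: element type + 3 coordinates at useful precision
    atoms_per_stored_bit = 10        # assumed: a very optimistic atomic-scale memory

    blueprint_bits = atoms_in_object * bits_per_atom_record
    memory_atoms = blueprint_bits * atoms_per_stored_bit
    print(f"blueprint ~{blueprint_bits:.1e} bits, needing ~{memory_atoms:.1e} atoms "
          f"of memory vs {atoms_in_object:.1e} atoms in the object itself")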

Now, computing not only where each atom is, but how to move to the atom, where to go to get atoms, self-diagnostics, and the other necessities of construction is going to take some space too. You aren't going to be able to fit this on a tiny nanomachine because there just isn't room. Even in a nanomachine the size of a cell, you're going to have a hard time fitting a processor that can actually do anything, because of the size limitations of items that small.

Circulating material through a tank (even at low speed) is going to pick up and move nanomachines around, in turn preventing them from doing their job because they won't be able to find and stay at their work locations (in the SD example, it would be like trying to build a house during a hurricane). Of course you could pump things slowly, but then you run into the main problem: this is going to be absurdly slow.

We do have microscopes that can detect small things, but we do not have microscopes that are only atoms in size. Again, you would have to be able to build a microscope that can essentially fit inside a cell. A nanomachine could identify an atom, but doing so is going to require chemical or electrical processes, which will in turn require a random search for the target atom and its location. A cell can't magically see everything going on around it from its vantage point.

Yes, we do have machines that can find each other and self-assemble now, but these are MASSIVE compared to a nanomachine.

Even in your best estimates, we're talking about a cell-sized object trying to build something atom by atom. Even moving thousands of atoms a second (or millions of atoms a second!), it will take weeks, maybe months, to build something the same size as the cell.

And if you read the six challenges, you'll find they are right in line with the SD.net article. 1: Nanomachines, even when clustered together, are inherently unstable and make for poor structures; even in the best case you are likely going to end up with a metastable structure (one with high turnover of nanomachines, but stable enough to perform some function).

2: Again, nanomachines are unstable. Not even including the problems outlined on SD.net, nanomachines have to constantly deal with the changing thermal properties of the area around them, causing them to flex and distort and making building a large structure difficult.

3: "Stiction," which can only be overcome by creating entirely perfect materials. Of course, if entirely perfect materials can be created, then standard manufacturing techniques are going to be improved as well. And due to point 2 above, you aren't going to be able to make a perfect material anyway, because of the inherent wobbliness of things at small sizes.

4: Power transfer for an engine. Issues arise with transferring power to the nanomachine's engine, which is solved by having extremely exact tolerances engineered into the motor. This kind of thing is going to prevent nanomachines from drawing power from an external source without considerable work to ensure they are connected properly. There really is no way around this.

5: They are saying that even in the best of cases, thermal changes are going to cause problems for nanomachines. Simply adding in material is going to cause problems and has to be done with extreme care.

6: They are basically saying we have no clear plan right now for how to go about figuring out how to make nanomachines.


And of course, your friend is proposing a machine that magically knows how to do everything without data storage (WTF?), requires no energy (breaks a fundamental law of physics), and cannot be repaired (massive turnover and delays).

---


So what about the SD.net article is wrong, exactly? You already agreed that free-range nanomachines aren't viable, leaving them confined to extremely sterile spaces. Or, to quote:

as with all new things, it's all too tempting to think of it(nanotech) as a panacea, and like every other cure-all in the history of technology, it's not.
 
Well, the utility fog runs into the same problems that any nanomachines have, including being perceived as way more useful than they really would be.

These foglets are never going to have supercomputer levels of processing power. At most they could have powerful processing for a nanomachine, which might include basic functions like finding something or solving a very simple problem, but they are still going to require outside processing to build/move/do anything on a large scale. This is because of the same problems you run into when you have a multi-core processor. Each time you add a processor, you need a way to keep track of what both processors are doing, which diminishes your performance gain. Then, each time you add another processor to that, you have to dedicate more and more to keeping track of each processor. Now, if you are talking about large nanomachines with considerable processing power for their size, they are going to have to have most if not all (and then some) of their processing power dedicated to sharing processing between the different nanomachines. This is because they are getting the worst of both worlds of multi-core processing: extremely weak processors, and lots of them. Trying to use them as one big multi-core processor just isn't going to work, which limits their collective intelligence.
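
(The diminishing-returns argument can be sketched with a toy speedup model: an Amdahl's-law-style formula plus an assumed per-processor coordination cost. The serial fraction and overhead numbers are invented for illustration, not measurements.)

    # Toy model: parallel speedup with a per-processor coordination overhead.
    # serial_fraction and overhead_per_proc are illustrative assumptions only.
    def speedup(n_procs, serial_fraction=0.05, overhead_per_proc=1e-4):
        parallel_time = serial_fraction + (1 - serial_fraction) / n_procs
        coordination = overhead_per_proc * n_procs   # cost of keeping every core in sync
        return 1.0 / (parallel_time + coordination)

    for n in (1, 64, 1024, 10**6, 10**12):
        print(f"{n:>16,} processors -> speedup ~{speedup(n):,.2f}x")

(Under these made-up numbers the speedup peaks somewhere in the tens of processors and then collapses as coordination dominates; that collapse is the shape of the argument, not a measurement of any real fog.)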

Then there is the problem with having free-floating nanomachines, which SD.net goes over pretty well.

To do any high-level science fiction stuff, you are going to need some outside help and a nice safe environment for the nanomachines to work in. Even then, you aren't going to be able to do things quickly. You could also create some neat materials with the technology, but you aren't going to be able to teleport, create objects instantly, or nullify gravity like Hall thinks. You probably won't even be able to make variable-density air for the Hall seatbelt (it's more likely you'd destroy the nanomachine by hitting it before it can send the signals to the others to begin retracting).


In the end, foglets aren't a real thing; they are an idea that some guy came up with, but it doesn't look like there is any science behind them. Even Hall says the technology doesn't exist, and he doesn't do any work to figure out how to make these machines; he just makes bold claims.


My favorite bit about his writing is this:
Fog needs the self-reproducing productivity of nanotechnology to be economical.

and

The first question anyone asks about the Fog seems to be, “Won’t it mutate into Gray Goo?” Then they learn that Foglets are not individually self-reproducing, and they say, “Oh.”
 
Now hang on, Uso, if multiple-core processors have so many bad features, then why do I hear about things like experimental 64-core super processors in the works for the next generation of computing machines?

Your bias against nanotech seems ridiculously pessimistic. After all, our neurons are comparable in function and structure to a foglet supercomputer, even if they don't share the same OS.

Also, using a neural network as a hypothetical model of a foglet computer, the output doesn't necessarily include omniscience of the activities that produced it. 1+1=2, but the numbers don't perform diagnostics on each other to determine the result; it is a phenomenon of the structure of the equation that yields the result, not the pieces of the equation. To use another analogy, phone-line operators simply perform the task they are programmed to do and route calls to the proper location; they don't analyze every call and store the content for later reference. That sort of thing is the domain of dedicated archivists.
 
There is a difference between 64 high-end processors working together and trillions of extremely small processors working together. With the 64 cores you are getting diminishing returns; each processor you add makes each individual processor slightly less effective. However, ultimately it is cheaper to make a bunch of weaker processors than one extremely powerful one. At 64 you still get positive returns on your processing power, and you can easily keep adding more, up into the thousands (even today), and still get performance gains. However, with nanomachines the problem is orders of magnitude larger. Keep in mind a foglet will only have a small fraction of the processing power of today's computers due to the size of its body limiting the computing power we can put inside.

Also, let's assume foglets act like neurons. Now they can't move or work as part of a larger device, and essentially have to act like a brain. This in turn means the fog has to have some part of its mass that remains solid and acts like a brain. This mass will naturally just be a solid lump somewhere, without any of the properties of a foglet. At this point you have essentially made an external computer that feeds the fog data, which is what I mentioned before: you are going to need external computing and power.

So basically: foglets are nothing like neurons. Neurons are part of the processor, while a foglet is a processor itself. U-fog is just as unrealistic as any other 'magic' type of nanomachine.
 
Keep in mind a foglet will only have a small fraction of the processing power of today's computers due to the size of its body limiting the computing power we can put inside.

This sounds erroneous; keep in mind you can have gates (which I believe, IIRC, are the underpinning structure of processors and their ability to compute) the size of a few atoms of carbon. So hypothetically, with tomorrow's technology we could craft today's computers on the atomic scale, which could most definitely fit inside a foglet, and with tomorrow's programming techniques those atomicomps will be much more efficient than, say... Windows Vista.
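
(As a crude volume check on that claim, with every number assumed for illustration: compare how many nanometre-scale gates could physically fit in a foglet-sized body against the transistor count of a present-day CPU. This deliberately ignores wiring, power, and heat, which is where the real disagreement lies.)

    # Volume check: could a modern-CPU-scale gate count fit in a foglet body?
    # All figures are illustrative assumptions.
    foglet_body_m = 10e-6     # assumed: a ~10 micrometre foglet body
    gate_pitch_m = 2e-9       # assumed: a gate cell a couple of nanometres across
    cpu_transistors = 1e10    # rough order for a present-day high-end CPU

    gate_cells = (foglet_body_m / gate_pitch_m) ** 3
    print(f"~{gate_cells:.1e} gate-sized cells fit in the body "
          f"vs ~{cpu_transistors:.0e} transistors in a modern CPU")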

So, no, I think you are wrong: the space inside a foglet's shell can house a computer of sufficient complexity and miniature scale to operate it on a network with multiple other foglets. Keeping in mind that EVERY foglet will NOT be in communication with EVERY OTHER foglet, you could probably group smaller networks of foglets into nodes within the greater mass, isolated on a particular task, whether physical or computational.

And no, I don't think this idea of a core controller needs to be present; our brains are evidence enough of that. Just the same, neurons have the capability of shifting their form and budding new axons to make or break synapses, so again, no, this idea of yours about the necessity of a 'queen' processor is practically moronic in its insistence.

And on the subject of multiple processors working together, let's make another analogy: the human species (or ants, if you prefer; they're individually like a computer). If each human could be compared to a computer, then again, hypothetically, if WHAT YOU SAY is true, then all of society should still be at tribal levels, if that. After all, we have billions of brain cells, and I mean, those are like processors too; how do those things even work, the efficiency must be horrible with all of them interacting with each other inside one person's skull?!

You continue to rest your convictions and assertions on obvious and infuriating untruths.
 