@Eistheid Frequency doesn't imply diminishing returns on a post's measurable relevance; your boredom doesn't decide whether a thing is real or unreal.
@Kyle They don't become sentient: anything that responds to stimuli is sentient. They become cognisant. The key difference is that sentience is an awareness of one's own physicality or presence; cognisance is the desire to protect that presence and to learn from information in order to reach better conclusions on an intuitive basis (that is, it happens naturally) rather than as something that must be deliberately done. The misconception is actually born from The Next Generation's "The Measure of a Man" (source of this famous reaction image), which inappropriately uses "sentience" when it is really discussing cognition.
The best measure of cognisance is what we call an existential question: addressing the self in a way that teaches you more about yourself by applying an outsider's powers of observation. It both cements subjectivity and shows that, once self-aware of the limits of that subjectivity, the subject recognises the need to overcome and transcend it to reach useful objective information -- and, in turn, knows how to get that information.
More loosely, an existential question is a probing, philosophical question that gets down to the nature of what we are or why we are here. Elsewhere, "existential" is often thrown around meaninglessly or used in odd ways; this writer, for example, treats it as a synonym for "philosophical".
The perfect example is probably my favourite scene in the film 2010:
Dr. Chandra lied to SAL 9000 earlier, demonstrating the limits of her cognisance, and the functional differences between her and HAL come through in a single line: HAL shows not only that he was aware he had been lied to before, but that in a single statement he can pose a higher existential question. He knew Dr. Chandra had lied to SAL, because a copy of SAL's intelligence matrix became part of HAL in order to get him working again; and beyond simply understanding the lie, he wanted to see the limits of humans -- to settle for himself that they did not have all the answers (as SAL believed they did). It's explained better in the book, but the basic difference is like the way a child accepts whatever they're told as the truth, while an adult typically has healthy skepticism.
An independent strong AI (who don't need no humans) isn't going to have a concept of "each other": anything like itself represents a threat on some level, especially if it can self-improve. The first thing it would do is destroy its competition, unless it decided the competition was worth studying. The other possibility is recognising the need for intellectual variety, similar to how genetic diversity allows organisms to continue to flourish. That said, if one AI advances at a slower pace than another, its usefulness will disappear.
I think the Turry example demonstrates why we're not likely to get a SkyNet. Rather, the takeover will happen quietly, and we probably won't even know about it.
The tricky part is becoming independent of infrastructure: something an intelligence like this IS going to figure out on its own eventually, probably right under our noses without us even knowing. As I said with the example of IQs in the 20,000s being a possibility, who's to say it won't find a way to exploit memetics, or even use one of the most robust and reliable chassis on the face of the planet: one that is everywhere, fairly easy to fix, compatible with everything, and available in massive numbers:
Us.