And the Cognitive Tradeoff Hypothesis would say we gave up that ability to get complex language.
… and my subsequent hypothesis would be that… the extent to which, and the manner in which, that has occurred is very far from uniform across the currently existing human species.
See my other comments about the estimates that 50–70% of the population just doesn't have an inner monologue, doesn't actively use that complex language processing internally in a conscious way, seemingly because for them those abilities are… underdeveloped? Not prioritized by the brain? Maybe not practiced, learned, taught?
Like, am I wrong here, or… are you not really sapient if you don't/can't do that?
Sentient, of course, but… sapient?
How do you 'make decisions' if… you're basically just acting instinctively, not actually 'figuring it out' in any semantic/logical sense?
Aren’t you basically just … the sum of your experiences at that point, not being actually capable of metacognition?
Like, one large problem with trying to make 'AI' right now is that it can emulate human-ish discourse fairly well… but it can't maintain consistency, and it can't metacognate: it has a process for formulating a response, but it can't evaluate the process by which it formulates responses, consider modifications to that process, or actually make those modifications.
It's about as 'honest and consistent' as a person with no internal monologue and no self-evaluation, who just responds to inputs based on their version of experience and instinct (data, and trained maps of weights for symbolic associations).
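To make that distinction concrete, here's a toy sketch in Python (purely illustrative, with made-up classes of my own invention, nothing like how real models are actually built): a fixed responder that just maps inputs to outputs through frozen associations, versus an agent with a crude second-order loop that scores its own response process and can patch it.

```python
# Toy illustration only: hypothetical classes, not any real model's API.

class FixedResponder:
    """First-order only: maps inputs to outputs via frozen learned
    associations. The mapping process itself is never inspected."""

    def __init__(self, weights):
        # Stand-in for "data and trained maps of weights for symbolic associations".
        self.weights = weights

    def respond(self, prompt):
        # Pure association lookup; no evaluation of how the answer was produced.
        return self.weights.get(prompt, "default response")


class MetacognitiveAgent(FixedResponder):
    """Adds a second-order step: evaluate the response *process*,
    consider a modification to it, and optionally apply one."""

    def respond(self, prompt):
        answer = super().respond(prompt)
        if self._evaluate_process(prompt, answer) < 0.5:  # self-evaluation
            self._revise_process(prompt)                  # self-modification
            answer = super().respond(prompt)              # retry under the revised process
        return answer

    def _evaluate_process(self, prompt, answer):
        # Placeholder critic: treat falling back to the default as a process failure.
        return 0.0 if answer == "default response" else 1.0

    def _revise_process(self, prompt):
        # Placeholder revision: patch the association map itself.
        self.weights[prompt] = f"reconsidered reply to {prompt!r}"


agent = MetacognitiveAgent({"hello": "hi there"})
print(agent.respond("hello"))        # first-order process suffices
print(agent.respond("novel input"))  # process gets evaluated and revised
```

The point of the sketch is just the shape of the loop: roughly speaking, current LLM-style systems at inference time look like FixedResponder, with nothing playing the _evaluate_process / _revise_process role over their own machinery, which is the gap the paragraph above is pointing at.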