My studies are in Philosophy (I am a graduate of the University of Essex), and I work as a literary translator (English to Greek). Published translations of mine include works by E.A. Poe, R.L. Stevenson and H.P. Lovecraft. I sometimes post articles at https://www.patreon.com/Kyriakos
How memories are stored certainly matters; it is too strong an assumption that the levels are sealed off from one another. Such an assumption may be implicitly negated in a model, but obviously this doesn’t mean anything has actually changed; the nature of material systems raises this issue, unlike that of mathematical ones.
Another notable property of material systems is that at times there is a special status of observer for them. In the case of the mind, you have the consciousness of the person, and while it can certainly be juxtaposed with other instances of consciousness, that is a different relation from the one which would allow carefree use of the term “anecdote”. Note “special”, which in no way means infallible or anything of that class, but it does connote a qualitative difference: apart from the other means of observation (those available to everyone else, like the tool you mentioned), there is also the sense through consciousness itself, which for reasons of brevity I referred to as intuition.
Of course consciousness itself is problematic as an observer. But it is used, in a different capacity, in all other input procedures, since you need an observer to take those in as well. If one treats consciousness as a block which acts with built-in biases, it is too much to believe those biases are cancelled simply because it is being used as an observer of another type of input. It is due to this particular loop that posing a question about intuition is not without merit.
Going by practice, it does seem likely that intertwined memories (nominally separate, as over-categories) will be far easier to recall at will than any loosely related (by stream of consciousness) collection of declarative memories. However, it is not known whether stored memories (of either type) actually are stored individually or not; there are many competing models for how a memory is stored and recalled, down to the lowest (or “lowest”, for there may be no lowest in reality) level of neurons.
That said, I was only asking about other people’s intuitive sense of what works better. It isn’t possible to answer using a definitive model, due to the number of unknowns.
I mean more cost-effective, so to speak. My sense is that while procedural memory is easier to sustain (for years, or even for the entirety of your life), it really is more suitable for focused projects than for random/general knowledge accumulation. Then again, it is highly likely that procedural memories help with better organization overall, acting as a more elegant system. In that sense, declarative memories are more like axioms, with procedural memories being either rules or the application of rules, so that far fewer axioms are needed.
I agree. Although my question was not whether 3d is real/independent of the observer. I was wondering why for us it had to be specifically 3d instead of something else.
For all we know, maybe “3d” isn’t 3d either, in that any way of viewing things would end up seeming to be 3d. In a set system, with known axioms, examined from the outside, 3d just follows 2d. But if as an observer you are 3d-based, it doesn’t have to follow that this is a progression from 2d at all and it might just be a different system.
You are confusing “reason to choose” (which is obviously not there; the optimal strategy is trivial to find) with “happens to be chosen”. That is, you are looking at what was said from an angle which isn’t crucial to the point.
Everyone is aware that scissors is not to be chosen at any time if the player has correctly evaluated the dynamic. Try asking a non-sentence in a formal logic system to stop existing because it has evaluated the dynamic, and you’ll see why your point is not sensible.
Thank you, I will have a look!
My own interest in recollecting this variation (an actual thing, from my childhood years) is that, intuitively, it seems to me that this type of limited setting may be enough for the inherent dynamic of ‘a new player will go for the less-than-optimal strategy’, and the periodic ripple effect it creates, to (be made to) mimic some elements of a formal logic system, namely the interactions of non-sentences with sentences.
So I posted this as a possible trigger for more reflection, not for establishing the trivial (optimal strategy in this corrupted variation of the game) ^_^
Please read my edited reply to lsusr.
Edit: I rewrote this reply, because the original was too vague :)
Quite correct with regard to every player actually having identified this (indeed, if all players are aware of the new balance, they will pick up that glue is a better type of scissors, so scissors should not be picked). But imagine a player comes in who hasn’t picked up this identity, while (for different reasons) they have picked up, from previous players, an aversion to choosing rock. Then scissors still has a chance to win (against paper), and effectively rock is largely out, so in the triplet scissors-paper-glue, glue is the permanent winner. This in turn (after a couple of games) is picked up and stabilizes the game as having three options for all (scissors no longer chosen), until a new player who is unaware joins.
Essentially, the dynamic of the 4-choice game allows for periodic returns to a 3-choice game, which is what can be used to trigger ongoing corrections in other systems.
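For concreteness, here is a minimal sketch of the dominance point, under my reading of the rules (the exact payoffs for glue are an assumption made for illustration): the standard rock-paper-scissors cycle, with glue beating both paper and scissors and losing to rock. Under those assumed payoffs, glue weakly dominates scissors, which is why scissors drops out once every player has identified the new balance; a newcomer who has not yet picked this up is exactly the periodic perturbation described above.

```python
# Hypothetical payoff structure for the rock-paper-scissors-glue variant.
# The rules for "glue" below are an assumption for illustration only.
MOVES = ["rock", "paper", "scissors", "glue"]

BEATS = {
    "rock": {"scissors", "glue"},    # assumed: rock beats glue as well as scissors
    "paper": {"rock"},
    "scissors": {"paper"},
    "glue": {"paper", "scissors"},   # assumed: glue is a "better type of scissors"
}

def payoff(a: str, b: str) -> int:
    """+1 if move a beats move b, -1 if it loses, 0 for a tie."""
    if b in BEATS[a]:
        return 1
    if a in BEATS[b]:
        return -1
    return 0

def weakly_dominates(a: str, b: str) -> bool:
    """a does at least as well as b against every move, and strictly better against some."""
    diffs = [payoff(a, m) - payoff(b, m) for m in MOVES]
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

print(weakly_dominates("glue", "scissors"))  # True: an aware player never picks scissors
print(weakly_dominates("glue", "rock"))      # False: rock remains a live option
```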
“Presumably the machine learning model has in some sense discovered Newtonian mechanics using the training data we fed it, since this is surely the most compact way to predict the position of the planets far into the future.”
To me, this seems an entirely unrealistic presumption (and the same holds for any of its parallels, not just the case of planetary positions). Even the claim that NM is “surely the most compact [...]” is questionable: we know from history that models able to predict the positions of stars have existed since ancient times, and in this hypothetical situation, where we somehow have knowledge of the positions of the planets (perhaps through developments in telescope technology), there is no reason to assume that models analogous to those ancient ones couldn’t apply, so NM would not specifically need to be part of what the machine was calculating.
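As a toy illustration of that point (a minimal sketch on synthetic data, not real ephemerides; the periods and amplitudes below are invented for the example), an epicycle-style model, i.e. a least-squares fit of a few periodic terms, can extrapolate a planet-like position signal without containing anything recognizable as Newtonian mechanics:

```python
import numpy as np

# Synthetic "observed longitude" of a planet-like body: two superposed periodic
# motions plus noise, standing in for a historical record of positions. The
# periods and amplitudes are illustrative assumptions, not real orbital elements.
rng = np.random.default_rng(0)

def true_signal(t):
    return 1.3 * np.sin(2 * np.pi * t / 11.9) + 0.4 * np.sin(2 * np.pi * t / 1.0)

t_obs = np.arange(0.0, 20.0, 0.1)                 # 20 "years" of observations
y_obs = true_signal(t_obs) + rng.normal(scale=0.02, size=t_obs.size)

# Epicycle-style model: a least-squares fit of sinusoids at a few trial periods
# (assumed here to have been read off the record itself), with no dynamics at all.
periods = [1.0, 5.0, 11.9]

def basis(t):
    cols = [np.ones_like(t)]
    for p in periods:
        cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
    return np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(basis(t_obs), y_obs, rcond=None)

# Extrapolating five "years" past the data gives accurate positions even though
# the model contains nothing recognizable as Newtonian mechanics.
t_future = np.arange(20.0, 25.0, 0.1)
pred = basis(t_future) @ coeffs
print("max extrapolation error:", float(np.max(np.abs(pred - true_signal(t_future)))))
```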
Furthermore, I have some issue with the author’s sense that the machine, in calculating something, is calculating it in a manner which inherently allows the calculation to be translatable in many ways. While a human thinker inevitably thinks in ways which are open to translation and adaptation, this is true because as humans we do not think in a set way: any thinking pattern, or collection of such patterns, can in theory consist of a vast number of different neural connections and variations. Only as a finished mental product can it seem to have a very set meaning. For example, if we ask a child whether their food was nice, they may say “yes, it was”, and we would take that statement as having a set meaning, but we would never actually be aware of the set neural coding of that reply, for the simple reason that there isn’t just one.
For a machine, on the other hand, a calculation is inherently an output on a non-translatable, set basis. Which is another way of saying that the machine does not think. This problem isn’t likely to be solved just by coding a machine in such a way that it could have many different possible “connections” while its output stays the same, because with humans this happens naturally, and one can suspect that human thinking itself is in a way just a byproduct of something tied not to actual thinking but to the sense of existence. Which is, again, another way of saying that a machine is not alive. Personally, I think AI, in the way it is currently imagined, is not possible. Perhaps some hybrid of machine and DNA may produce a type of AI, but that would again be due to the DNA forcing a sense of existence, and it would still take very impressive work to use that to advance AI itself; I think it could be used to study DNA itself, though, through the machine’s interaction with it.
Thank you all for your answers… I will be taking this piece down, because ultimately it isn’t anything good :)
Cthulhu ^_^
Well, this is only an introductory part. The glyphs are to be described later, and they stand for the meaning of the intense emotion. Much like the idol symbolizes the emotion as a whole, the glyphs on it are specks which may be analyzed.
If I may, to address the gist of both your reply and MakoYass’s:
-I do feel that the summation of the excerpt is not faithful to the idea I had (which, to be sure, means I did fail, because I cannot ask the reader to see just what I was aiming at). That said, my own summation would be as follows:
1) vengeful acts usually seem not to be analyzed much, particularly by their agents
2) even in the case of calculating agents, this doesn’t change in the crucial respect (the calculating agent still won’t examine the actual emotion; it is just that in their case they are better able to distance themselves from it).
The piece would then move on to examining whether the emotion which tends to lead to vengeful action (in cases where it is potent enough, e.g. to lead to murder in reciprocation) was actually tied to the event which triggered it, and therefore to examine whether such an agent is actually negating the source of the injury. The main idea is that no, it isn’t much tied to it, but it is felt as tied, and due to the lack of ability to analyze the mental phenomenon it is usually the case that merely seeking to negate its idol (the emotion) suffices for the individual.
Emotions can serve as a block. The metaphor of the idol is tied to the one about the barrier mentioned earlier on. The underlying issue, however, is that if you are presented with an emotional wall, you would have to undertake more complicated steps to approach the matter differently; in a way, reacting to the emotion is like throwing back a ball that landed in your yard, thrown by someone from behind a tall wall. You could also try to go to the area from which it was sent to you; yet, for whatever reason or balance, apparently this was not the automatic development of the situation.
“People don’t live merely to survive: we’re hardwired to propagate our genes. If you cannot think abstractly and articulate your ideas well, you will have difficulty attracting a mate. People who have disabled their ability to examine themselves will be quickly eliminated from the gene pool. Hence, it seems unlikely that such an illness will occur because it goes against how natural selection has shaped us.”
I don’t disagree with the gist of the above. However it is tricky to assign clear intentions to a non-human agent, assuming one views biological undercurrents as an analogue to an agent in the first place. Which brings us to:
“This reasoning seems to rely on the assumption that the mind was designed by some kind of agent. Who do you think is deciding whether it “makes sense” to allow an expansion of the ability to think? Our best theory is that cognitive expansion resulted as a series of mutations that improved the ability of our ancestors to survive. One does not need to appeal to the fact that “Day Zero illness” does not “make sense” to argue for its implausibility. It is implausible simply by the fact that it is a priori highly unlikely for any novel previously unobserved phenomenon to exist in the absence of a very strong theory that predicts it.”
If I assume such an illness can exist, it doesn’t mean I can pontificate on the way in which it would be triggered. Certainly some mental illnesses seem to be more common in modern times, even allowing for our improved ability to account for them and to measure the number of patients. Some slightly related illnesses that do exist are those which have aphasia as a core part. In pre-modern times one usually finds more elaborate personal accounts, by poets and other authors, of such sensations or states; e.g. in the case of an aphasia-like state there are two good examples: Baudelaire (the French poet) and his sense that he was “touched by the wing of idiocy”, and the very dramatic story of the deterioration of Guy de Maupassant (the important story-writer), who in the end “reverted to an animal state”.
However, as I noted, the hypothetical illness I wrote about is not just an individual case with elements of aphasia. My main background for asking the question is the view that a human is not primarily an outward- or socially-oriented being, even though in the vast majority of cases humans are indeed social agents (for a variety of reasons, usually having to do with clear rewards). Below all that, however, there is the person in their world of consciousness, as part of the greater world of the mind. It may be, therefore, that a risk which somehow attacks that inner world can be picked up (more later on what would do the picking up) as serious enough that even a massive exodus from formations closer to the surface (like interests in the external world) may occur. In such a case, assuming it is possible, it would be easier to cause not a full erasure of memories or skills but a negation of the ability to stabilize them, as briefly presented in the definition of the new illness in the OP.
As for your point about all this having to allow for the mind being created by an agent: no, that isn’t so. I certainly have no reason to think the mind was created as a set work, nor (of course) that it existed a priori or may be sensed as existing a priori even figuratively. The way in which it developed (mutations etc.) doesn’t by itself have to cancel the possibility of a not-yet-seen illness appearing. After all, as you agreed, not much of the final form of a mind (such a thing cannot even exist) can manifest, given that this system of connections cannot exhaust all its possible rearrangements during the person’s lifetime (likely not even if the person could live for 1000 years). I do approach this from a more literary (which, sadly, at times means even less literal...) point of view, given that literature and philosophy are where my interests and studies lie.
I should also give at least one parallel (it won’t be perfect, and it may lead to problems as well...) with a procedure which allows for a new development on a larger scale, while it wasn’t picked up individually until then. Given that if something like the DZI existed it wouldn’t have been picked up before, it can be said that whatever was doing the picking up or noticing certainly would not act on the same level as an individual (e.g. some individual sufferer or some aphasia-like condition). This would perhaps be possible if the complexity of both the trigger and the formations which pick up the trigger were again far larger. In effect, in my hypothetical, the general idea was that some core pattern or patterns do exist (not created by any agent, not conscious and not accounted for) which would signal, due to a special relation to the unconscious mind, some particular and grave danger. Such patterns do not even have to be intelligible to an individual in the first place. In that, perhaps, it slides somewhat into the realm of fiction; yet most complicated patterns do not make a full impression on someone who views them. In fact we can be said to be surrounded by patterns which are not picked up, due to our position or lack of related interest in noticing them. Maybe (that is the hypothesis) a slight difference will lead to the unintended formation of a curious pattern which happens to be related not to the thinker but to some scheme in the mental world. After all, and here comes the parallel, it isn’t rare to see the opposite happen, for humans project mathematical formations onto external objects (e.g. the Fibonacci and other φ-related patterns, on shells etc.). If we can project math onto the external world (which isn’t anthropic or mathematical; math is not cosmic, in my view), why shouldn’t some formation there present us with other elements and balances of our own mental world?
That such would be catastrophic, or cataclysmic, is just an assumption.
“The idea that consciousness is a phenomenon unrelated to brain structure and neural connections is not helpful” is something I agree with. My question was meant to have you argue in what way this hypothesis presupposes a dualistic view.
Hi, I read the synopsis on that wiki page. While the Snow Crash story seems highly unlikely, there is indeed no prerequisite of understanding (by the conscious person) for a change to take place. One could go as far as to claim that understanding, by its very nature, rests mostly on not understanding, while focusing on something to be understood.
I certainly am not aiming to define possible conditions under which something like the DZI may occur. Those may or may not exist. However, it isn’t by itself unrealistic that, if we suppose that the vast majority of the mental goings-on in one’s mind at any given time are not conscious, some pattern with crucial similarities to those non-conscious mental goings-on may affect them, up to a very crucial degree.
That said, an obvious difference from the Snow Crash story is that I am not talking about anything consciously constructed. DZI would not be a man-made virus. In essence the question is more tied to whether the start of consciousness itself was ‘clean’ with regard to not allowing any reverting to a previous state, or a collapse due to the risk of such reverting. For what it’s worth, I do doubt that man developed consciousness as a clear-cut case of advancing and bettering one’s chances in the world.
If you wish, you can elaborate on what you mean by “weird dualism”. If I attempt to guess (likely wrongly), I’d imagine that you formed the view that the hypothetical DZI had to affect just one part of the mind, or just some ability which can reform or be provided by other parts (as in cases of people who suffer a brain injury and in time may form new connections and means to generally possess the same, or ‘same’, abilities again).
Intuitively, I think it is possible it will appear.
Rationally, one may consider the following as well:
-not much time has passed from the first use of language (by prehistoric people) to this day, so it can be assumed that only a negligible part of the possible mental calculations/connections has occurred
-there is no direct survival bonus from the ability to think in a complicated manner; on the other hand there is arguably a cost-effectiveness logic in disabling great freedom in self-examination
However it may take centuries for that to happen.
At any rate, it is just my guess; there are so many unknowns about the mind that this, too, may be impossible to actually happen. One reason why it would be unlikely is that, ultimately, if so grave a danger were built into a system, it would make more sense never to allow the expansion of the ability to think as an option in the first place.
I wish to examine a point in the foundations of your post—to be more precise, a point which leads to the inevitable conclusion that it is not problematic in this discussion to use the term ‘agent’ while it is understood in a manner which allows a thermostat to qualify as an agent.
A thermostat certainly has triggers/sensors which force a reaction when a condition has been met. However, to argue that this is akin to how a person is an agent is to argue that a rock supposedly “runs” the program known as gravity when it falls. The issue is not a lack of parallels; it is a lack of an undercurrent below the parallels (in a sense, this is what causes the view that a thermostat is an agent to be a ‘leaking abstraction’, as you put it). For we have to consider that no actual identification of change (be it through sense or thought or both) is possible when the entity identifying such change lacks the ability to translate it into a setting of its own. By translating I mean something readily evident in the case of human agents, and not so evident in the case of ants or other relatively simpler creatures. If your room is on fire you identify this as a change from the normal, but this does not mean there is only one way to identify the changed situation. Someone living next to you will also identify that there is a fire, but chances are that the (to use an analogy) code for that in their mind will differ very significantly from your own. Yet on some basic level you will agree that there was a fire and you had to leave.
Now an ant, another being which has life (unlike a thermostat), picks up changes in its environment. If you try to attack it, it may go into panic mode. This, again, does not mean the act of attacking the ant is picked up as it is; it is once again translated, this time by the ant. How it translates it is not known; however, it seems impossible to argue that it merely picks up the change as something set, some block of truth with the meaning ‘change/danger’ etc. It picks it up due to its ability (not conscious in the case of the ant) to identify something as set, and something else as a change to that original set. A thermostat has no identification of anything set, because, not being alive, it has neither the power nor the need to sense a starting condition, let alone to have inside it a vortex where translations of changes are formed.
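To make the contrast concrete, here is a minimal sketch of the thermostat side (the class name, numbers and threshold logic are illustrative assumptions, not anything from the discussion): a sensor reading compared against a set threshold forces a reaction, and nothing else is represented inside the device.

```python
# A minimal sketch of the thermostat-style "agent" described above: a sensor
# reading and a fixed threshold force a reaction, and that is all there is.
class Thermostat:
    def __init__(self, target_temp: float, tolerance: float = 0.5):
        self.target_temp = target_temp
        self.tolerance = tolerance
        self.heating = False

    def step(self, measured_temp: float) -> bool:
        """React to a single sensor reading: no stored 'starting condition',
        no translation of the change, only a comparison against a set threshold."""
        if measured_temp < self.target_temp - self.tolerance:
            self.heating = True
        elif measured_temp > self.target_temp + self.tolerance:
            self.heating = False
        return self.heating

thermostat = Thermostat(target_temp=20.0)
for reading in [18.0, 19.9, 21.0, 20.3]:
    print(reading, thermostat.step(reading))
```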
All of the above is why I am firmly against the view that “agent” should be defined in a way that both a human and a thermostat can partake in it, when the discussion is about humans and involves that term.
Regarding Archimedes (a philosophy of math anecdote)
I do suspect that when things make sense it is because of a drive of the sense-making agent to further his/her understanding, but I think that, unwittingly, it is actually a self-understanding and not an understanding of the cosmos. If the cosmos does make sense, it isn’t making sense to some chance observer like a human, who is at any rate a walking thinking mechanism and has very little consciousness of either his own mental cogs or the dynamics between his own thinking and anything external and non-human. That this allows for distinct and verifiable progress (e.g., as noted in my OP, anything up to space-traveling vehicles) is not due to some supposed real tie between observer agent and cosmos, but to the inherent tie between the observer and a translation of the cosmos which is natural (and, past some degree, inescapable) to that observer.
I generally agree, and I am happy you found the discussion interesting :)
In my view, indeed the Babylonian type of labyrinth does promote continuous struggle, or at least multiple points of hope and focus on achieving a breakthrough, while ultimately, the majority of the time, these won’t lead to anything, and couldn’t have led to anything in the first place. The Arabian type at least promotes a stable progression towards an end, although that end may already be a bad one.
Most of the time we simply move in our labyrinth anyway. And with more theoretical goals it can be said that even a breakthrough is more of a fantasy borne out of the endless movement inside the maze.
Machine language is a known lower level; neurons aren’t known to be one; perhaps in the future more microscopic building blocks will be examined; maybe there is no end to the division itself.
In a computer it would indeed make no sense for a programmer to examine something below machine language, since you are compiling to it or otherwise acting upon it. But there is no known isomorphism between that and the mind.
If you’d like a parallel to the above from the history of philosophy, you might be interested in comparing dialectical reasoning and Aristotelian logic. It is not by accident that Aristotle explicitly argued that for any system to include the means to prove something (proof isn’t there in dialectics, not past some level, exactly because no lower level is built into the system), it has to be set with at least one axiom: the inability of anything to simultaneously include and not include a quality (in formal logic you would more often see this written as ¬(A∧¬A)). In dialectics (Parmenides, Zeno etc.), this is explicitly argued against, the possibility of infinite division of matter being one of their premises.
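For reference, here is a minimal Lean sketch of that axiom as stated (the law of non-contradiction), with the proposition `A` standing in, by assumption, for “this thing includes that quality”:

```lean
-- Law of non-contradiction: a quality cannot be both included and not included.
-- `A` is an arbitrary proposition; the proof applies the negation to the affirmation.
theorem non_contradiction (A : Prop) : ¬(A ∧ ¬A) :=
  fun h => h.2 h.1
```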