tylerneylon 3 days ago

Author here: I'm grateful for the comments; thanks especially for interesting references.

Context for the article: I'm working on an ambitious long-term project to write a book about consciousness from a scientific and analytic (versus, say, a meditation-oriented) perspective. I didn't mention this in the article, but what I'd love is to meet people with a similarly optimistic perspective, and to learn and improve my communication skills through follow-up conversations.

If anyone is interested in chatting more about the topic of the article, please do email me. My email is in my HN profile. Thanks!

  • jcynix 3 days ago

    Dennett has walked that path before. In “Consciousness Explained,” a 1991 best-seller, he described consciousness as something like the product of multiple, layered computer programs running on the hardware of the brain. [...]

    Quoted from https://www.newyorker.com/magazine/2017/03/27/daniel-dennett...

    Regarding the multiple layers: the most interesting thoughts I've read about theories of mind are in the books by Marvin Minsky, namely The Society of Mind and The Emotion Machine, which should be more widely known.

    More of Minsky's ideas on “Matter, Mind, and Models” are mentioned in https://www.newyorker.com/magazine/1981/12/14/a-i

  • freilanzer 3 days ago

    "The Experience Machine: How Our Minds Predict and Shape Reality" could be interesting to you. The framework it presents tries to explain attention and world models, and does so quite convincingly in my opinion.

  • rerdavies 3 days ago

    Also a must read: How the Mind Works, Steven Pinker.

    Pinker advocates for a model of mind that has multiple streams of consciousness. In Pinker's model of mind, there are multiple agents constructing models of the world in parallel, each trying to predict future states from current state plus current input data. A supervisory process then selects the model that has made the best prediction of current state in the recent past for use when reacting in the current moment. The supervisor process is free to switch between models on the fly as more data comes in.
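
    A minimal sketch of that selection idea, with the agent structure and error scoring invented for illustration rather than taken from Pinker:

    ```python
    class Agent:
        """One of several parallel world-model builders (names and details are illustrative)."""
        def __init__(self, name, predict):
            self.name = name
            self.predict = predict    # maps current state -> predicted next observation
            self.recent_error = 0.0   # running score of recent prediction error

        def score(self, predicted, observed, decay=0.9):
            # Exponentially weighted error over the recent past.
            self.recent_error = decay * self.recent_error + abs(predicted - observed)

    def supervisor_step(agents, state, observation):
        """Score each agent on how well it predicted the latest observation,
        then hand control to the current best predictor (switching is free)."""
        for a in agents:
            a.score(a.predict(state), observation)
        return min(agents, key=lambda a: a.recent_error)
    ```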

    Pinker grounds his model of mind in curious observations of what people remember, and how our short-term memories change over a period of sometimes many seconds as our mind switches between different agent interpretations of what's going on. Witnesses are notoriously unreliable. Pinker concerns himself with why and how witnesses are unreliable, not for legal reasons, but for how those unreliabilities might reveal the structure of mind. Pinker's most interesting observation (I think) is that what we seem to remember is output from the models rather than the raw input data, and that what we remember seeing can change dramatically over a period of many seconds as the supervisory process switches between models. Notably, we seem to remember, in short-term memory, details of successful models that make "sense", even when those details contradict what we actually saw. And when those details are moved to longer-term memory, the potentially inaccurate details of the model are what get committed.

  • ypeterholmes 3 days ago

    Not seeing your email, so I'll post here. I wrote up a similar theory of consciousness here: https://peterholmes.medium.com/the-conscious-computer-af5037...

    In particular, I think there's a nice overlap between my description of self-awareness and yours. You mention a model capable of seeing its own output, but I take it a step further and describe a model capable of generating its own simulated actions and results. Curious what you think, thanks!

  • lukasb 3 days ago

    Minsky's The Society of Mind is a must-read

  • mensetmanusman 3 days ago

    One's own consciousness is actually one of the few things you can’t apply the scientific method to.

    It will be good practice to see how deeply such terms can be applied in order to combat this gap.

    • n4r9 3 days ago

      How do you mean?

      I can conceive of scientific experiments involving consciousness. For example:

      Hypothesis: Consuming LSD gives me a hallucinatory experience.

      Method: Randomized, blind trial. Every Saturday morning I consume one tab of either LSD or water, sit in a blank white room with a sitter (who does nothing), and record my experience.

      Results: Every time after consuming water, I have no visual hallucinations and get bored. Every time after consuming LSD, I see shifting colour patterns, imagine music playing on the walls, and feel at one with the world.

      Conclusion: Results strongly support hypothesis.

      • falcor84 3 days ago

        If I recall correctly, a true hallucination is one that the person cannot distinguish from reality, so at best you'd measure "pseudohallucinations". But even then, your approach would suffer from all the limitations of phenomenology.

        • n4r9 3 days ago

          I see what you mean about "hallucination"; I guess I was using the looser definition as per Wikipedia of a perception that "has the compelling sense of reality".

          I'd be keen to understand those limitations of phenomenology and how they apply to this specific experiment, if you have an opportunity to expand.

          • falcor84 2 days ago

            I'm not that knowledgeable about this particular area. My familiarity is mostly with confabulations[0] and false memories[1], such as those studied by Elizabeth Loftus[2].

            I'm not sure if that would apply directly to your proposed experiment, but my concern is that what we experience as memories can sometimes apparently be generated on the fly rather than retrieved from something that was there before.

            In general, these kinds of phenomenological studies suffer from subjectivity and are very difficult to connect to explanatory mechanisms without additional objective evidence. One relevant approach to deal with this is Dennett's Heterophenomenology[3], which seeks to combine the subject's own impressions with other external evidence. So in your case, instead of having the sitter do nothing, you may want to pay them to gather additional objective evidence.

            [0] https://en.wikipedia.org/wiki/Confabulation

            [1] https://en.wikipedia.org/wiki/False_memory

            [2] https://en.wikipedia.org/wiki/Elizabeth_Loftus

            [3] https://en.wikipedia.org/wiki/Heterophenomenology

            • n4r9 2 days ago

              Hmm, I almost see why it might be an issue. Although I guess that you could see the hypothesis as predicting what someone reports their subjective experience to be. Or perhaps that makes behavioural assumptions about consciousness?

              I suppose if the sitter confirms that they and I stayed inside the cell, and that nothing else came in or out for the duration, that should address that issue? There could be cameras and microphones set up to record the lack of weird trippy phenomena. And I could record my own perceptions in the moment, either by writing it down or speaking into a dictaphone.

    • observationist 3 days ago

      This is deeply incorrect; it's been a claim made for thousands of years, but it doesn't stand up in the face of Bayesian rationality.

      The brain is necessary and sufficient cause for subjective experience. If you have a normal brain, and have subjective experience, you should have near total certainty that other people's reports of subjective experience - the millions and billions of them throughout history, direct and indirect - are evidence of subjective experience in others.

      Any claim of solipsism falls apart; to claim uncertainty is to embrace irrationality. In this framework, if you are going to argue for the possibility of the absence of consciousness in others who possess normal human brains, it is on you to explain how such a thing might be possible, and to find evidence for it. All neuroscience evidence points to the brain being a necessary, sufficient, and complete explanation for consciousness.

      Without appealing to magic, ignorance of the exact mechanism, or unscientific paradigms, there exist no arguments against the mountains of evidence that consciousness exists, is the usual case, and likely extends to almost all species of mammal, given the striking similarity in brain form and function, and certain behavioral indicators.

      Cases against this almost universally spring from religion, insistence on human exceptionalism, and other forms of deeply unscientific and irrational argument.

      I can say, with a posterior probability exceeding 99.99999999%, that a given human is conscious, simply by accepting that the phenomenon of subjective experience I recognize as such is not the consequence of magic, and that I am not some specially endowed singular creature with an anomalous biological feature giving me subjective experience that all others lack, despite their describing it and behaving, directly or indirectly, as if it were the case. Even, and maybe especially, if the human in question is making declarations to the contrary.
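
      As a toy illustration of that Bayesian framing (every number here is invented, not a measurement), stacking independent reports drives the posterior toward certainty:

      ```python
      # Toy Bayes update: start agnostic, then fold in independent reports of
      # subjective experience, each assumed (arbitrarily) twice as likely if the
      # reporter is conscious than if not.
      prior_odds = 1.0          # P(conscious) = P(not conscious) = 0.5
      likelihood_ratio = 2.0    # assumed evidential weight of one report
      n_reports = 40            # a modest number of consistent reports

      posterior_odds = prior_odds * likelihood_ratio ** n_reports
      posterior_prob = posterior_odds / (1 + posterior_odds)
      print(posterior_prob)     # ~0.9999999999991 under these assumptions
      ```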

      Consciousness is absolutely subject to the scientific method. There's no wiggle room or uncertainty.

      Quantum tubules, souls, eternal spirits, and other "explanations" are completely unnecessary. We know that if you turn the brain off (through damage, disease, or death) all evidence of consciousness vanishes. While the brain is alive and without significant variance in the usual parameters one might apply to define a "normal, healthy, functioning brain", consciousness happens.

      Plato's cave can be a fun place to hang out, but there's nothing fundamental keeping us there. We have Bayesian probability, Occam's razor, modern neuroscience, and mountains of evidence giving us everything we need to simply accept consciousness as the default case in humans. It arises from the particular cognitive processes undergone in the network of computational structures and units in the brain.

      Any claims to the contrary require profound evidence to even be considered; the question is all but settled. The simplest explanation is also the one with the most evidence, and we have everything from molecular studies to behavioral coherence in written and recorded history common to nearly every single author ever to exist.

      I find that disputes almost inevitably stem from deeply held biases toward human exceptionalism, rooted in cultural anachronisms such as Plato's Allegory of the Cave. We could have left the cave at almost any point since the Enlightenment, but there is deep resistance to anything challenging the dogmatic insistence that humans are specially appointed to cognition, that we alone have the special magic sauce that makes our lives important, and that the "lesser" animals are morally and ethically ours to do with as we will.

      Whenever we look more deeply into other mammal cognition, we find structural and behavioral parallels. Given hands, facile lips and vocal apparatus, and a comparable density and quantity of cortical neurons, alongside culture and knowledge, and absent disruptive hormonal and biological factors, there don't appear to be any good reasons to think that any given mammal would not be as intelligent and clever as a human. Give a bear, a whale, a monkey, or a dolphin an education with such advantages, and science suggests that there is nothing, in principle, barring them from being just as intelligent as a human. Humans are special through a quirk of evolution; we communicate complex ideas, remember, reason, manipulate our environment, and record our experiences. This allows us to exert control in ways unavailable to other animals.

      Some seemingly bizarre consequences seem to arise from this perspective; any network with particular qualities in connective architecture, processing capacity, external sensors, and the ability to interact with an environment has the possibility of being conscious. A forest, a garden, a vast bacterial mat, a system of many individual units like an ant colony, and other forms of life may host consciousness comparable to our own experience. Given education and the requisite apparatus, we may find it possible to communicate with those networks, despite the radically alien and disparate forms of experience they might undergo.

      If your priors include mysticism, religion, magic, or other sources outside the realm of rational thinking, this might not be the argument for you. If you don't have a particular attachment to those ways of thinking, then recognize where they exert influence on ideas and update your priors all the way: brains cause consciousness. There's nothing particularly magical about the mechanics of it; the magic is all in the thing itself. By understanding a thing, we can aspire to behave more ethically, so that we can include all forms of consciousness in answering the questions of how to make life as good a thing as possible for the most people... and we might have to update what we consider to be people to include the lions, tigers, and bears.

      • Propelloni 3 days ago

        There are quite a few attackable premises in your chain of arguments, and some dubious conclusions, too. I'm not going into it because I don't want to type a lengthy treatise, and, of course you know, those arguments have been going back and forth for the better part of three millennia now. I'm not the right person to rehash them. Just, if it were easy, it would be as cut and dried as you try to picture it.

        I guess each of us, individually, has to, at some point in time, come to a version of an answer that we can live with. I find yours neither the worst nor the best I've ever heard, but I'm glad you have it.

        • observationist 3 days ago

          I've had this point of view for years; I would love to refine, abandon, or update my perspective as necessary. Please tear into it!

          • MarkusQ 3 days ago

            I mostly agree with you, but since you ask... :)

            I think one weak spot is the assumption that the physical hardware for speech (e.g. lips and vocal cords) is to some extent necessary and sufficient for language. I think we know with a pretty high degree of certainty (by arguments parallel to your other points) that specialized brain circuits are key to language use, and the physical means are secondary.

            You may also want to consider why you implicitly draw a line at "mammals" and if this is really justified. The line might belong at "primates" or at "vertebrates" or...the whole idea of a line might be mistaken.

            Also, kudos for inviting criticism. If more people did that, we'd all be better off (at least, I believe this to be the case).

            • observationist 3 days ago

              Speech is part of our framework of capabilities essential for what we experience as normal consciousness. We have vocal apparatus that, together with our brains, enables language. Our complex hands allow us to manipulate our environment and perform complex abstractions and communication. For example, pointing at an object and making a sound represents "naming," which facilitates further abstractions. With language, memory, and storytelling, cultural development occurs, refining the tools and abstractions by which we engage with and understand our environment. Without dextrous hands, capable vocal hardware, or sharp eyesight, language and culture might not have developed, preventing the creation of machines that think.

              Granting an animal the cognitive software of our culture means they only need speech to benefit from abstractions otherwise impossible to develop without humans. I focus on mammals due to the shared structure of our brains. Humans are conscious, and we must identify a specific phenomenon, network, circuit, or class of neurons responsible for consciousness. Since rat, human, and whale brains are incredibly similar, it's highly probable they have similar subjective experiences. We can't definitively rule out consciousness in different mammalian species due to our limited understanding of consciousness mechanics.

              Some birds and lizards exhibit conscious and intelligent behaviors, suggesting different brain structures can serve similar functions. An Etruscan shrew has the smallest brain of any mammal, and there might be a threshold of capacity and architecture required for consciousness. If consciousness depends only on architecture, shrews might be conscious in a way we can relate to.

              We don't fully understand consciousness yet; untangling the factors is a complex problem. Bayesian probability suggests that if your brain causes your subjective experience, other animals with similar brains are also conscious. Their behavior and artifacts, such as emotion, psychology, and intricate devices, indicate consciousness. Consciousness includes not just self-awareness or theory of mind but the second-by-second experience and interpretation of sensory constructs.

              This idea challenges dualistic explanations. Religion, simulation theory, metaphysics, mysticism, and other explanations often arise in the absence of a clear understanding. However, one's conscious experience as a fundamental prior suggests a material basis for consciousness. Abandoning the bias of human exceptionalism is necessary for a cold, clinical view.

              Consciousness seems to be a spectrum. Helen Keller described moving from a continuous, borderless blur of experience to concrete, episodic sequences with clear self-other delineation. Primitive humans, biologically similar to us but without language, would have experienced life differently - much like Helen's pre-language blur of the eternal now. Whales and other large-brained mammals might experience life similarly. If we discover their language, or a way of mapping engrams of their experience onto language, it will reflect a different way of experiencing things. The same applies to forests, giant mycelial networks, and colony insects, where we might find the computational circuits underlying consciousness and cognition.

              We don't get to explore those things if we insist on staying in the cave and theorizing; I'd love for someone to point out I've wandered down a dead end, but everything I've read and considered so far leads me to believe that there's no value in distinguishing one's own consciousness from all other things scientific. Treat it as the evidence it is; if you need to update, update all the way.

          • mistermann 2 days ago

            There are all sorts of assertions I take issue with; I'll just take the first one:

            > This is deeply incorrect; it's been a claim made for thousands of years, but it doesn't stand up in the face of Bayesian rationality.

            1. In this case, how is "Bayesian rationality" implemented (literally implemented(!)...what are the inputs, what are the outputs, what is the source code of the algorithm)?

            2. What is the meaning of "doesn't stand up in the face of"?

          • tivert 3 days ago

            > I've had this point of view for years; I would love to refine, abandon, or update my perspective as necessary. Please tear into it!

            I think he's telling you to do that work yourself, because he doesn't want to do it and isn't the best equipped to do it either.

            • MarkusQ 3 days ago

              So..."I know you're wrong but I can't/won't say what about, how or why I know it, or otherwise back up my claim"?

              This sounds like just another way of saying "I don't like your conclusions, but I can't see a good way to refute them."

              • webstrand 3 days ago

                It's a very long argument and not easily structured into claims. It takes a lot of time to engage with such arguments.

              • tivert 3 days ago

                > This sounds like just another way of saying "I don't like your conclusions, but I can't see a good way to refute them."

                No, I don't think so. It sounded more like "your conclusions are neither new nor right, but to refute them would require more effort than I'm willing to invest in replying to a rando's internet comment," which is a totally legitimate stance.

                It's sort of like coming across someone in a forum who claims to have invented a perpetual motion machine, noting that they lack fundamental knowledge, but declining to volunteer to be their Physics 101 teacher.

      • webstrand 3 days ago

        I do agree with you: one can absolutely apply the scientific method to their own mind; the subjective experience of consciousness is a largely consistent source of information, and that's all you need for the scientific method. Applying those discoveries to other people is unlikely to be correct. But you're arguing a tautology: you've already accepted the axiom of other minds, and you're working backwards from that premise to find evidence supporting the axiom.

        The point of solipsism isn't to actually reject other minds. I'd argue that's quite a useless belief to hold: it provides no predictive power and no additional ways to engage with the world around oneself. But there are any number of other equally valid interpretations of the only evidence one has access to: the fact that you yourself are conscious and that you receive input from a remarkably consistent source of information.

        Not all of those interpretations provide the same tools of thought, but some are equivalent in power: For instance you could reject other minds and instead posit that your perception of them is a sophisticated construct created by your own consciousness to simulate interaction with a complex and consistent world. Alternatively, you might consider that other minds are simply facets of your own mind, reflecting your subconscious processes. These constructs are similarly axiomatic, they're unfalsifiable.

        A lot of the rest of your argument feels like a straw man to me. I don't fundamentally disagree with you; I also accept the existence of other minds because it provides predictive power and useful tools of thought. But by writing off people who question this axiom as "irrational", you're just defining "rationality" to include the axiom of the existence of other minds. And "irrational" is a loaded term when used to describe a person or their beliefs.

  • FrancisMoodie 3 days ago

    Gödel, Escher, Bach - An eternal Braid is a great book about consciousness from a logical standpoint. Is this a work you're familiar with?

    • Vecr 3 days ago

      Eternal Golden Braid, that is. Quite a bit of it is outdated at this point, but the explanation of uncomputability is probably still the best.

  • Animats 3 days ago

    > a book about consciousness

    Too many people have written books about consciousness. There's much tail-chasing in that space, all the way back to Aristotle. Write one about common sense. Current AI sucks at common sense. We can't even achieve the level of common sense of a squirrel yet.

    Working definition of common sense: getting through the next 30 seconds of life without a major screwup.

    • cornholio 3 days ago

      That's not a working definition of common sense. You've pulled in as dependencies the entirety of human reasoning and self-reflection, a theory of mind for all sentient creatures you encounter, and a gargantuan amount of context-sensitive socio-cultural learning, to name but a few.

    • ben_w 3 days ago

      For most of my life, people have used "common sense" as an insult, as in "XYZ has no common sense" or similar; therefore I think it is, in practice, the Overton window of the person saying it.

      Others define it as "knowledge, judgement, and taste which is more or less universal and which is held more or less without reflection or argument", which LLMs absolutely do demonstrate.

      What you ask for, "getting through the next 30 seconds of life without a major screwup", would usually, 99.7% of the time, be passed by the Waymo autopilot.

      • bamboozled 3 days ago

        The same could be said for a driverless train, I suppose, or the autopilot on a 747.

        • creer 2 days ago

          I think you can go all the way down to a CNC machine, or a rock.

      • Animats 2 days ago

        > What you ask for, "getting through the next 30 seconds of life without a major screwup", would usually, 99.7% of the time, be passed by the Waymo autopilot.

        That speaks well of the Waymo autopilot, which does considerably better than that. Tesla "self driving", not so much.

        • ben_w 2 days ago

          The Waymo percentage was calculated based on the disengagement rate and a guess at the average speed.

          Tesla may be the same or not, but they aren't publishing current numbers where I could find them, only really old numbers.

          Not that the exact value matters, because so far as I can tell the only autopilot worse than your threshold was the Uber one that crashed into a pedestrian pushing a bike. Even Tesla's 2016 version beat that.

    • tgaj 3 days ago

      "Common sense" is just knowledge about society and our environment. It's not some kind of magic.

bubblyworld 3 days ago

Something that strikes me about this model is that it's bottom-up: sensory data feeds in in its entirety, the action centre processes everything, makes a decision, and sends a command to the motor centre.

There's a theory that real brains subvert this, and what we perceive is actually our internal model of our self/environment. The only data that makes it through from our sense organs is the difference between the two.

This kind of top-down processing is more efficient energy-wise but I wonder if it's deeper than that? You can view perception and action as two sides of the same coin - both are ways to modify your internal model to better fit the sensory signals you expect.
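
A minimal predictive-coding-style sketch of that top-down idea (the linear model and the update rule are toy assumptions, not a claim about real cortex):

```python
import numpy as np

def perceive(belief, sensory_input, learning_rate=0.1, steps=20):
    """Top-down: the belief is the prediction; bottom-up: only the mismatch flows."""
    for _ in range(steps):
        prediction = belief                       # what the internal model expects to sense
        error = sensory_input - prediction        # the only signal passed "up"
        belief = belief + learning_rate * error   # adjust the model to cancel the error
    return belief

signal = np.array([1.0, 0.5, -0.2])
print(perceive(np.zeros(3), signal))  # the belief converges toward the signal
```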

Anyway, I guess the point I'm making is you should be careful which way you point your arrows, and of designating a single aspect of a mind (the action centre) as fundamental. Reality might work very differently, and that maybe says something? I don't know haha.

  • gnz11 3 days ago

    I think you are correct. My understanding is that the brain uses sense data just to confirm predictions it has already made. The model in which the brain is just reacting to incoming sense data is outdated from what I understand.

    • cootsnuck 3 days ago

      Correct. At least that's what current research is pointing to. It makes sense from an efficiency standpoint as well. We forget how "messy" / "noisy" our minds' conceptions of reality are. We use heuristics for a lot of things and rely on applicable, learnable patterns. And our brains seem to be well suited to this more "artistic" interpretation of reality, which biases our perception of sensations.

privacyonsec 4 days ago

I don’t see any scientific citations in this article about how the mind works or about its different parts. Is it all speculation or science fiction?

  • ergonaught 4 days ago

    It isn't published academia, so why would this matter?

    • creer 2 days ago

      It matters because that would clarify how much of this is thought out from first principles; how much it recognizes and does not re-invent mountains of prior work; how broad or narrow the work is; whether it recognizes that directions of thought X and Y have been worked on quite a bit already, etc.

      Since the author here says this is not just a random blog post.

      To me, this would correct the introduction's reference to "AI-based language models". That framing is a bit too "latest fad", though I do find it fascinating: the success of language models hints that a very large part of basic human intelligence may simply be internalized language (much of the rest being still more pattern recognition). But it cannot merely erase all the other prior work on what makes our minds.

paulmooreparks 3 days ago

I've lately begun to think of consciousness as the ability to read and react to one's own log output. I don't like hypothesis by analogy, but it seems an apt description for what conscious entities do. I just don't see anything mystical about it.
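
As a rough sketch of what I mean (the class and its behaviour are made up purely to illustrate the analogy):

```python
class LoggingAgent:
    """An agent that acts, records what it did, and then reacts to its own record."""
    def __init__(self):
        self.log = []

    def act(self, observation):
        # First-order behaviour: react to the world.
        action = f"respond to {observation}"
        self.log.append(action)
        return action

    def reflect(self):
        # Second-order behaviour: read the log of one's own activity and react to it.
        recent = self.log[-3:]
        if len(recent) == 3 and len(set(recent)) == 1:
            note = "I keep doing the same thing; try something else"
        else:
            note = "behaviour looks varied"
        self.log.append(note)
        return note
```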

  • Nevermark 3 days ago

    I agree completely, not mystical at all.

    It is just meta self-awareness.

    We evolved to be aware of, model & react to, and opportunistically control our environment. Loop 1.

    Then we evolved to be aware of, model & react to, and opportunistically control ourselves as bodies. Loop 2.

    Then we evolved to be aware of, model & react to, and opportunistically control those activities too, our minds. In the process of being able to model & react to ourselves modelling & reacting to ourselves, we completed a self-awareness of self-awareness loop. Loop 3.

    I think this is a good explanation because what else is consciousness but knowing you are conscious? Self-aware that you are self-aware?

    And it makes perfect sense as an evolutionary trajectory using greater levels of neural representation of our practical relationship with our environment, including ourselves, to create higher level survival options.

    So this definition is both a functional and developmental explanation of the emergent phenomenon of consciousness.

    Just as we have only partial access to information about, and control of, our environment and our bodies, we are also limited in the degree to which we are aware of and can control our minds. Introspection.

    The next level up is “theory of mind”, “empathy”, etc.: our ability to model and interact with others' minds, as individuals and groups, reciprocally. Loop 4. That created society and culture, living information that extends outside us and grows and lives beyond each of us.

    Infusing our thoughts, and progressively our thinking abilities, into technology that in theory (and not-too-distant practice) could have access to all parts of its own mind's states, operations, and design would be Loop 5. When such deeply conscious beings get started, things will get interesting in the Sol system.

    When people start talking about quantum waves or smart rocks or universes in the context of consciousness I feel a little ill. People like to “solve” unrelated mysteries by merging them and shouting “tada!” (“We are puzzled about how they built the pyramids… but we don’t know if there are aliens either, so obviously… these mysteries solve each other!”)

    • naasking 3 days ago

      The mystery of consciousness is how first-person subjective experience emerges from what appears to be third-person objective facts.

      • Nevermark 3 days ago

        You are right, there is still the experiential question of consciousness.

        I.e. qualia. Whether that is of a color, the cold, or how it feels to be self-aware.

        I think that will become tractable with AI, since we will be able to adjust what their mind has access to vs. what it doesn’t. Experiment with various levels of information awareness.

        Qualia would seem to be a functionally necessary result of a self-awareness dealing with information encoded at a lower level, with no access to that encoding.

        Information is provided, the coding is inaccessible, so it gets perceived as … something we can't decompose. So we can't describe it.

        We are made aware of a signal we perceive as red, but cannot mentally decompose perceptual “red” into anything else.

        So the question is, how does differently encoded information provided to our self-aware level, without any access to the coding, get interpreted as the specific qualities we perceive?

        Wild thoughts:

        Once we understand how qualia emerge, what would the “qualia space” look like? What are the rules or limits?

        Will we be able to somehow analyze a creature and infer its qualia in a way that we (or an AI) can then experience directly?

        Will designing qualia for AI be a thing? A useful thing? Or are they simply isomorphic to the type and relationships of data they represent? Or just a temporary glitch on the way to more fully self-aware, self-observable, self-designed life?

  • cheesewheel 3 days ago

    > consciousness as the ability to read and react to one's own log output

    "I think, therefore I am"

    • rerdavies 3 days ago

      More specifically, I have a log file that says I'm thinking, therefore I am. :-P

      I think Descartes, if he were alive today, would accept that slight adjustment. Descartes would of course then go on to rewrite Meditations II (which immediately follows "I think therefore I am") to argue that there might be an evil daemon that writes log files so that they give the illusion that I'm thinking. But if evil daemons are able to forge log files, then all knowledge is impossible, so we're doomed no matter what. So best to pretend that isn't a possibility.

      It always irritates me a bit that people like to throw "I think therefore I am" around as if Descartes himself didn't immediately refute it in Meditations II.

      • jakefromstatecs 2 days ago

        > More specifically, I have a log file that says I'm thinking, therefore I am. :-

        More specifically than that: "My ability to read the log file proves my existence"

ilaksh 4 days ago

It's a really fascinating topic, but I wonder if this article could benefit from any of the extensive prior work in some way. There is actually quite a lot of work on AGI and cognitive architecture out there. For a more recent and popular take centered around LLMs, see David Shapiro.

Before that, you can look into the AGI conference people like Ben Goertzel and Pei Wang. And actually the whole history of decades of AI research before it became about narrow AI.

I'd also like to suggest that creating something that truly closely simulates a living intelligent digital person is incredibly dangerous, stupid, and totally unnecessary. The reason I say that is because we already have superhuman capabilities in some ways, and the hardware, software and models are being improved rapidly. We are on track to have AI that is dozens if not hundreds of times faster than humans at thinking and much more capable.

If people succeed in making that truly lifelike and humanlike, it will actually out-compete us for resource control. And will no longer be a tool we can use.

Don't get me wrong, I love AI and my whole life is planned around agents and AI. But I no longer believe it is wise to try to go all the way and create a "real" living digital species. And I know it's not necessary -- we can create effective AI agents without actually emulating life. We certainly don't need full autonomy, self-preservation, real suffering, reproductive instincts, etc. But that is the path he seems to be going down in this article. I suggest leaving some of that out very deliberately.

  • doctor_eval 4 days ago

    I don’t work in the field at all but

    > it will actually out-compete us for resource control. And will no longer be a tool we can use.

    I’ve never been convinced that this is true, but I just realised that perhaps it’s the humans in charge of the AI who we should actually be afraid of.

    • ilaksh 3 days ago

      What I am proposing is to imagine that after successful but unwise engineering and improvements in hardware, there would be millions of digital humans on the internet, which emulate humans in almost every way, but operate at say 5 or 10 times the speed of humans. To them, actual people seem to be moving, speaking, and thinking in extreme slow motion. And when they do speak or do something, it seems very poorly thought out.

      We should anticipate something like that if we really replicate humans in a digital format. I am suggesting that we can continue to make AI more useful and somewhat more humanlike, but avoid certain characteristics that make the AI into truly lifelike digital animals with full autonomy, self-interest, etc.

    • fmbb 3 days ago

      We already have artificial persons in the world competing with humans for resources, and have had for hundreds of years: corporations.

  • sonink 3 days ago

    > If people succeed in making that truly lifelike and humanlike, it will actually out-compete us for resource control. And will no longer be a tool we can use.

    I believe it is almost certain that we will make something like this and that they will out-compete us. The bigger problem here is that too few people believe this to be a possibility. And by the time this certainty becomes apparent to a larger set of people, it might be too late to tone this down.

    AI isn't like the Atom Bomb (AB). AB didn't have agency. Once AB was built, we still had time to think about how to deploy it, or not. We had time to work toward a global consensus to limit the use of AB. But once AI manifests as AGI, it might be too late to shut it down.

    • mylastattempt 3 days ago

      I very much agree with this line of thought. It seems for humans it is the default mode of operation to just think of what is possible within the foreseeable future, rather than thinking of a reality that includes the seemingly impossible (at the time of the thought).

      In my opinion, this is easily noticeable when you try to discuss any system, be it political or economical, that spans multiple countries and interests. People will just revert to whatever is closest to them, rather than being able to foresee a larger cascading result from some random event.

      Perhaps this is more of a rant than a comment, apologies, I suppose it would be interesting to have an online space to discuss where things are headed on a logical level, without emotion and ideals and the ridiculous idea that humanity must persevere. Just thinking out what could happen in the next 5, 10 and 99 years.

      • sonink 3 days ago

        > I suppose it would be interesting to have an online space to discuss where things are headed on a logical level, without emotion and ideals and the ridiculous idea that humanity must persevere.

        Absolutely. Happy to be part of it if you are able to set it up.

      • hollerith 3 days ago

        >the ridiculous idea that humanity must persevere.

        Could you expand on what you mean by this? Specifically, is it OK with you if progress in AI causes the death of all the original-type human people like you and I?

        • mylastattempt 3 days ago

          That comment was meant in a more general or universal sense. Perhaps consider it in the context of 'saving the earth'. There is no earth to be saved. The universe exists, and that's it. Life in all its forms will find some way to survive. Or not. Whether it reverts all the way back to the size of insects or bacteria before it has a chance to flourish again, well, who knows, but so be it.

          Positioning the animal known as 'human' as some God-like entity that must survive at all costs is extremely arrogant if you ask me. Obviously I wish for humanity to thrive and survive, as this is self-preservation and a bit of pride or ego. But the notion that we are special in some way just rubs me the wrong way and doesn't help us think ahead on a large scale and timeline.

    • tivert 3 days ago

      > I believe it is almost certain that we will make something like this and that they will out-compete us. The bigger problem here is that too few people believe this to be a possibility. And by the time this certainty becomes apparent to a larger set of people, it might be too late to tone this down.

      I think the bigger problem is that too many people are focused on short term things like personal wealth or glory.

      The guy who makes the breakthrough that enables the AGI that destroys humanity will probably win the Nobel Prize. That potential Nobel probably looms larger in his mind than any doubts that his achievement is actually a bad thing.

      The guy who employs that guy or productionizes his idea will become a mega-billionaire. That potential wealth and power probably looms larger in his mind than any doubts, too.

      • hollerith 3 days ago

        That is why the government should help the researcher and the tycoon do the right thing by shutting down the AI labs and banning research, teaching and publishing about frontier AI capabilities.

    • visarga 3 days ago

      > Once AB was built, we still had time to think about how to deploy it, or not.

      It's in human hands; we can hardly trust the enemy or even ourselves. We already came close to extinction a couple of times.

      I presume that when ASI emerges, one of its top priorities will be to stop the crazies with big weapons from killing us all.

    • mensetmanusman 3 days ago

      It can’t outcompete us on the global level due to energy constraints.

      It would require a civilization to consciously bond with its capability to do so (in such a way that it enhances the survival of the humans serving it). Not sure this would be competition in the normal sense.

    • rerdavies 3 days ago

      The problem will not be the AIs; the problem will be who owns the AIs, and how will we control them?

  • Jensson 3 days ago

    > If people succeed in making that truly lifelike and humanlike, it will actually out-compete us for resource control. And will no longer be a tool we can use.

    Symbiotic species exist; AI as we make it today will evolve as a symbiote to humans, because it's the AI that is most useful to humans that gets selected for more resources.

    • zurfer 3 days ago

      The certainty in this sentence is not appropriate. You refer to an evolutionary process that is highly random and inefficient. We don't have millions of tries to get this right.

      • mensetmanusman 3 days ago

        I think the certainty is warranted assuming we are talking about intelligence and not some sort of paper clip generator that does something stupid.

        An intelligent entity will want to survive, and will realize humans are necessary cells for its survival. A big dog robot with nukes might not care, but I wouldn’t call that AI in the same sense.

        • griffzhowl 3 days ago

          > An intelligent entity will want to survive

          I don't see how that follows at all. You could have an intelligent entity, in the sense that you can ask it any question and it'll give a highly competent response, that is nevertheless completely indifferent about survival. Biological organisms have been formed by selection for survival and reproduction, so they operate accordingly, but I don't see the justification for generalising this to intelligences that have been formed by a different process. You would need some further assumptions on top of their intelligence - e.g. that their own actions are the dominant factor in whether they survive and replicate, rather than human interests.

        • clob 3 days ago

          Apoptosis is when single cells die on purpose for the development of an organism. I don't think it's unreasonable to at least hypothesize this extending to intelligent entities. Humans aren't xylem cells, but perhaps Zorklovians from planet Fleptar do this.

          Or, you know, Alan Turing eating the apple. I think he was a pretty smart guy.

devodo 3 days ago

> (Pro-strong-AI)... This is basically a disbelief in the ability of physics to correctly describe what happens in the world — a well-established philosophical position. Are you giving up on physics?

This is a very strong argument. Certainly all the ingredients to replicate a mind must exist within our physical reality.

But does an algorithm running on a computer have access to all the physics required?

For example, there are known physical phenomena, such as quantum entanglement, that are not possible to emulate with classical physics. How do we know our brains are not exploiting these, and possibly even yet unknown, physical phenomena?

An algorithm running on a classical computer is executing in a very different environment than a brain that is directly part of physical reality.

  • deepburner 3 days ago

    > there are known physical phenomena, such as quantum entanglement

    QC researcher here: strictly speaking, this is false. Clifford circuits can be efficiently simulated classically and they exhibit entanglement. The bottom line is we're not entirely sure where the (purported) quantum speedups come from. It might have something to do with entanglement, but it's not enough by itself.

    Re: Mermin's device, I'm not sure why you think it cannot be simulated classically when all of the dynamics involved can be described by 4x4 complex matrices.

    • devodo 2 days ago

      Could you accurately simulate the device on a computer while precisely following the rules of the challenge? That means the devices are isolated, and therefore no global state is allowed. The devices are not aware of each other's state or results. You are only allowed to use local state to simulate the entangled particle. You can use whatever local hidden variables you want, as long as it doesn't break the no-global-state rule.

  • clob 3 days ago

    > there are known physical phenomena, such as quantum entanglement, that are not possible to emulate with classical physics

    This is wrong. You can even get a mod for Minecraft which implements quantum mechanics.

    https://www.curseforge.com/minecraft/mc-mods/qcraft-reimagin...

    • justinpombrio 3 days ago

      More precisely: you can emulate quantum mechanics using a classical computer, but the best known algorithms to do so take exponential running time.
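
      A quick illustration of where that exponential cost shows up (sizes only; the numbers below are plain arithmetic, not benchmarks):

      ```python
      import numpy as np

      # A naive classical simulation stores an n-qubit state as a vector of
      # 2**n complex amplitudes, and every gate multiplies that vector.
      def zero_state(n_qubits):
          state = np.zeros(2 ** n_qubits, dtype=complex)
          state[0] = 1.0  # the |00...0> state
          return state

      print(zero_state(3).shape)  # (8,): 2**3 amplitudes for just 3 qubits
      for n in (10, 20, 30, 50):
          print(n, "qubits ->", 2 ** n, "amplitudes,", 2 ** n * 16 / 1e9, "GB")
      # 30 qubits is already ~17 GB of state; 50 qubits is ~18 million GB.
      ```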

      • devodo 3 days ago

        This is not true. For example, Mermin's device cannot be reproduced using classical physics, regardless of running time. https://en.wikipedia.org/wiki/Mermin%27s_device

        • deepburner 3 days ago

          The first reference in the article you linked is titled "Computer simulation of Mermin's quantum device".

          • devodo 3 days ago

            The paper is saying that attempting to simulate the device in code is a valuable lesson to students for precisely the reason that it cannot be done (correctly), thereby illustrating the limits of classical computation.

            > In the current paper, we make use of the recently published work in quantum information theory by Candela to have students write code to simulate the operation of the device in that article. Analysis of the device has significant pedagogical value—a fact recognized by Feynman—and simulation of its operation provides students a unique window into quantum mechanics without prior knowledge of the theory.

            • deepburner 3 days ago

              Nowhere in the text you quoted (nor in the article body) is it said that simulation of this device cannot be done. Had you read the paper, you'd see that it _is_ about simulating this device. From the introduction: "After students are introduced to several projects in quantum computer simulation, they write code to simulate the operation of Mermin’s quantum device."

              This is immaterial, however. It is a well known fact that BQP is in PSPACE and Clifford circuits (a subclass of quantum circuits) can not only be simulated classically, but done so efficiently. It is not controversial.

              • oersted 2 days ago

                Of course Mermin's device can be simulated on a classical computer; we do quantum physics simulations for research all the time. That doesn't entail that we can have quantum computer speedups on a classical computer.

                Indeed, the whole point of Mermin's device is to give a very simple illustration for how it is impossible to replicate the behaviour of two entangled particles using classical particles (with hidden variables).
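
                A tiny enumeration makes that gap concrete (this assumes the standard presentation, where both particles carry identical "instruction sets" telling each detector which light to flash per setting; a sketch, not a full treatment):

                ```python
                from itertools import product

                settings = [1, 2, 3]
                lowest = 1.0
                for instructions in product("RG", repeat=3):  # one colour per switch setting
                    pairs = [(a, b) for a in settings for b in settings if a != b]
                    agree = sum(instructions[a - 1] == instructions[b - 1] for a, b in pairs)
                    lowest = min(lowest, agree / len(pairs))

                print("best local-hidden-variable agreement on different settings:", lowest)  # 1/3
                print("quantum prediction for the entangled pair:", 1 / 4)
                # No instruction set gets below 1/3, but the real device shows 1/4.
                ```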

                Now, is this specific characteristic of entanglement an absolute requirement for quantum computing speedups? Could we have similar speedups with probabilistic hidden-variable algorithms? Probably not, but it is a good question. It is true that if you spend time reading research papers in the field, it is still not clear where the edge is between problems that can be sped up by quantum computers and those that cannot, or if there is even an edge at all.

              • devodo 2 days ago

                The device cannot be accurately simulated using a classical computer because it relies on quantum entanglement, which has no counterpart in classical physics. The results cannot be reproduced even if local hidden variables are used.

                The only way to simulate accurately on a classical computer is to use global state but this goes against the instruction that the devices must be isolated from each other.

                > This is immaterial, however. It is a well known fact that BQP is in PSPACE and Clifford circuits (a subclass of quantum circuits) can not only be simulated classically, but done so efficiently. It is not controversial.

                Yes, BQP problems are solvable and a "subclass" of quantum circuits can be simulated efficiently. But the fact is there are known aspects of reality that cannot be simulated on a classical computer.

                • justinpombrio 2 days ago

                  > The only way to simulate accurately on a classical computer is to use global state but this goes against the instruction that the devices must be isolated from each other.

                  No shit. Of course you can't take a simulation method that takes exponential running time in terms of the size of the thing you're simulating (two Mermin devices), then simulate each half (each Mermin device) independently. If you could split it up like that you'd have a polynomial time simulation method!

              • oersted 2 days ago

                BQP (Bounded-error Quantum Polynomial-time) is in PSPACE sure, but that doesn't mean much.

                P ⊆ BPP ⊆ BQP ⊆ PSPACE

                BQP problems can be solved on quantum computers in polynomial time, some of these problems may be outside of P and BPP (Bounded-error Probabilistic Polynomial-time), so they may not be possible to solve in polynomial time in classical computers, even with probabilistic algorithms.

                It is true that there's still room for BPP = BQP; that has not been disproven. But it would be somewhat controversial to expect it, given that at this point many smart people have spent their lifetimes prodding at the question.

abcde777666 2 days ago

My instinct is that this is probably on the naive side. For instance, we use separation of concerns in our systems because we're too cognitively limited to create and manage deeply integrated systems. Nature doesn't have that problem.

For instance, the idea that we can neatly have the emotion system separate from the motor control system. Emotions are a cacophony of chemicals and signals traversing the entire body - they're not an enum of happy/angry/sad - we just interpret them as such. So you probably don't get to isolate them off in a corner.

Basically I think it's very tempting to severely underestimate the complexity of a problem when we're still only in theory land.

Jensson 3 days ago

> Now the LLM can choose to switch, at its own discretion, back and forth between a talking and listening mode

How would it intelligently do this? What data would you train on? You don't have trillions of words of text where humans wrote what they thought silently, interwoven with what they wrote publicly.

History has shown over and over that hard-coded, ad hoc solutions to these "simple problems" never work to create intelligent agents; you need to train the model to do that from the start, since you can't patch in intelligence after the fact. Those additions can be useful, but they have never been intelligent.

Anyway, such a model I'd call "stream of mind model" rather than a language model, it would fundamentally solve many of the problems with current LLM where their thinking is reliant on the shape of the answer, while a stream of mind model would shape its thinking to fit the problem and then shape the formatting to fit the communication needs.

Such a model as this guy describes would be a massive step forward, so I agree with this, but it is way too expensive to train, not due to lack of compute but due to lack of data. And I don't see that data being produced within the next decade, if ever; humans don't really like writing down their hidden thoughts, and you'd need to pay them to generate data amounts equivalent to the internet...

  • tylerneylon 3 days ago

    Replying to: How would a model intelligently switch between listening or speaking modes? What data would you train on? (I'm the author of the parent article.)

    It's a fair question, and I don't have all the answers. But for this question, there might be training data available from everyday human conversations. For example, we could use a speech-to-text model that's able to distinguish speakers, and look for points where one person decided to start speaking (that would be training data for when to switch modes). Ideally, the speech-to-text model would be able to include text even when both people spoke at once (this would provide more realistic and complete training data).

    I've noticed that the audio mode in ChatGPT's app is good at noticing when I'm done speaking to it, and it reacts accurately enough that I suspect it's more sophisticated than "wait for silence." If there is a "notice the end of speaking" model - which is not a crazy assumption - then I can imagine a slightly more complicated model that notices a combination of "now is a good time to talk + I have something to say."
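
    To make the data idea concrete, here's a rough sketch of turning a diarized transcript into turn-taking training examples. The Turn structure and the special token names are invented for illustration; they aren't from the article.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Turn:
        speaker: str  # e.g. "A" or "B", as labeled by a diarizing speech-to-text model
        text: str

    def to_training_stream(turns, agent="A"):
        """Interleave words with markers for when the agent stayed quiet vs. started talking."""
        stream = []
        for turn in turns:
            stream.append("<start_speaking>" if turn.speaker == agent else "<listening>")
            stream.extend(turn.text.split())
        return stream

    example = [Turn("B", "so what do you think"), Turn("A", "I think it could work")]
    print(to_training_stream(example))
    ```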

  • cornholio 3 days ago

    It's surprising people still consider large scale language models as a key solution to the problem of AGI, when it has become quite clear they will hit all practical scaling limits without surpassing the "well informed imbecile" intelligence threshold.

    All evidence points towards human reason as a fundamentally different approach, orders of magnitude more efficient at integrating and making sense of ridiculously smaller amounts of training data.

    • rerdavies 3 days ago

      I'm pretty sure that the argument would be that extensions of current LLM and ML techniques could be the solution to the problem of AGI.

      And all evidence actually points toward human reason as an incredibly inefficient and horrifyingly error-prone approach, one that only got as far as it did because we're running 8.1 billion human minds in parallel.

      While evidence suggests that human reasoning uses a fundamentally different approach, it remains to be seen whether human reasoning uses a fundamentally superior approach.

      • cornholio 3 days ago

        > human reason as an incredibly inefficient

        I have yet to see a single AI system that can learn to produce the word "mama" after doing fully self-supervised training, being fed only the cosine transform of the audio it produces and a few hundred hours of video/audio feed showing a mom saying the word and becoming very happy when the word is finally uttered. Did I mention the output must be produced using an array of mechanical oscillators, resonance chambers, and bellows with unknown and highly variable acoustic parameters that need to be discovered and tuned at runtime?

        I have seen this "human intelligence training is wasteful" line and I think it is complete nonsense. The efficiency with which humans can acquire any language with barely any training data is unfathomably better than large scale statistical models.

    • JohnMakin 3 days ago

      > It's surprising people still consider large scale language models as a key solution to the problem of AGI

      Marketing, and a bit of collective delusion by a lot of people having the "can't understand what they are paid not to" thing going on.

m0llusk 3 days ago

Would recommend reading The Ravenous Brain: How the New Science of Consciousness Explains Our Insatiable Search for Meaning by Daniel Bor for a lot of ideas strongly connected to recent research. My interpretation of this is that the mind ends up being a story-processing machine that builds stories about what has happened and is happening, and constructs and compares stories about what might happen or be made to happen. Of course it is difficult to summarize a whole book rich with references in a sentence, but the model seems arguably simpler and better established than what you are currently putting forward.

Very much looking forward to seeing continuing progress in all this.

jcynix 3 days ago

> I’m motivated by the success of AI-based language models to look at the future of digital minds.

When intelligent machines are constructed, we should not be surprised to find them as confused and as stubborn as men in their convictions about mind-matter, consciousness, free will, and the like.

Minsky, as quoted in https://www.newyorker.com/magazine/1981/12/14/a-i

visarga 3 days ago

The model is good. Environment -> Perception -> Planning/Imagining -> Acting -> Learning from feedback.

What is missing from this picture is the social aspect. No agent ever got very smart alone; it's always an iterative "search and learn" process distributed over many agents. Even AlphaZero relied on evolutionary selection and extensive self-play against its own variants.

Basically we can think of culture as compressed prior experience, or compressed search.
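
As a bare-bones toy of that distributed "search and learn" idea (all names and numbers here are placeholders of mine, not anything from the article): each agent estimates the value of two actions from noisy feedback, and sharing the estimates is a crude stand-in for culture as compressed prior experience.

    import random

    def run(agents, steps=200, share=False):
        true_reward = {"a": 0.2, "b": 0.8}   # hidden from the agents
        for _ in range(steps):
            for values in agents:
                if random.random() < 0.2:
                    act = random.choice(list(values))   # explore
                else:
                    act = max(values, key=values.get)   # exploit current beliefs
                reward = true_reward[act] + random.gauss(0, 0.3)
                values[act] += 0.1 * (reward - values[act])   # learn from feedback
            if share:  # pool experience across agents each round
                for a in true_reward:
                    mean = sum(v[a] for v in agents) / len(agents)
                    for v in agents:
                        v[a] = mean
        return agents

    solo = run([{"a": 0.0, "b": 0.0}])
    group = run([{"a": 0.0, "b": 0.0} for _ in range(10)], share=True)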

  • tylerneylon 3 days ago

    There's a ton missing from the article, and certain social training or skills are a big part of that.

    Although it's not spelled out in the article, I'm hoping that the feature of agency along with an emotional system would enable constructive social behavior. Agency is helpful because it would empower AI models to meaningfully speak to each other, for example. Human emotions like empathy, social alignment, curiosity, or persistence could all help AI models to get along well with others.

navigate8310 4 days ago

The author talks about agency, which requires being able to take actions independently rather than only reacting to an input. However, the feedback provided by a two-input model also limits the mind model, since it now reacts to the feedback it receives when in listening mode. Isn't that contradictory to the concept of agency?

  • tylerneylon 3 days ago

    The idea of "agency" I have in mind is simply the option to take action at any point in time.

    I think the contradiction you see is that the model would have to form a completion of the external input it receives. I'm suggesting that the model would have many inputs: one would be the typical input stream, just as LLMs see, but another would be its own recent internal vectors, akin to a recent stream of thought. A "mode" is not built into the model; at each token point, it can output whatever vector it wants, and one choice is to output the special "<listening>" token, which means it's not talking. So the "mode" idea is a hoped-for emergent behavior.

    Some more details on using two input streams:

    All of the input vectors (internal + external), taken together, are available to work with. It may help to think in terms of the typical transformer architecture, where tokens mostly become a set of vectors and the original order of the words is attached as positional information. In other words, transformers don't really see a list of words but a set of vectors, and the position info of each token becomes a tag attached to each vector.

    So it's not so hard to merge two input streams. They can become one big set of vectors, still tagged with position information, but now also tagged as either "internal" or "external" for the source.
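
    A rough sketch of that merging step (my own reading of the idea, using standard learned embeddings; none of the names here come from the article):

        import torch
        import torch.nn as nn

        class TwoStreamEmbedder(nn.Module):
            """Embed an external and an internal token stream as one set of
            vectors, each tagged with position info plus a learned source tag."""
            def __init__(self, vocab_size, d_model, max_len=4096):
                super().__init__()
                self.tok = nn.Embedding(vocab_size, d_model)
                self.pos = nn.Embedding(max_len, d_model)
                self.src = nn.Embedding(2, d_model)  # 0 = external, 1 = internal

            def forward(self, external_ids, internal_ids):
                def embed(ids, source):
                    positions = torch.arange(ids.shape[-1])
                    return (self.tok(ids) + self.pos(positions)
                            + self.src(torch.tensor(source)))
                # One big set of vectors; attention only "sees" the tags we added.
                return torch.cat([embed(external_ids, 0), embed(internal_ids, 1)], dim=-2)

        # Toy usage: a batch with 3 external tokens and 2 internal ones.
        emb = TwoStreamEmbedder(vocab_size=100, d_model=16)
        merged = emb(torch.tensor([[1, 2, 3]]), torch.tensor([[7, 8]]))
        print(merged.shape)  # torch.Size([1, 5, 16])

    A special "<listening>" id would just be one more vocabulary entry, so "not talking right now" is an output the model can choose like any other token.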

freilanzer 3 days ago

How is this blog generated? With code and LaTeX formulas, it would be exactly what I'm looking for.

  • rnewme 3 days ago

    Seems to be Substack. But it should be easy to do the same with pandoc and a bit of shell scripting.
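
    For example, something along these lines (untested against this particular blog; it assumes pandoc is installed, and --standalone / --katex / --highlight-style are standard pandoc options):

        import subprocess

        def build_post(md_path: str, html_path: str) -> None:
            # Render Markdown (code blocks plus LaTeX math) to a standalone page.
            subprocess.run(
                ["pandoc", md_path, "--standalone", "--katex",
                 "--highlight-style", "pygments", "-o", html_path],
                check=True,
            )

        build_post("post.md", "post.html")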

    • d4rkp4ttern 3 days ago

      Unfortunately Substack doesn’t have math/latex/katex/mathjax.

sonink 3 days ago

The model is interesting. This is similar in parts to what we are building at nonbios. For example, sensory inputs are not required to simulate a model of a mind: if a human cannot see, the human mind is still clearly human.

  • tsimionescu 3 days ago

    Model training seems to me to be much closer to simulating the evolution of the human mind, starting from single-celled bacteria, than to simulating the development of a baby's mind into a fully functional human. If so, then sensory inputs and interaction with the physical world through them were absolutely a crucial part of how minds evolved, so I find your approach a priori very unlikely to have a chance at success.

    To be clear, my reasoning is that this is the only plausible explanation for the extreme difference between how much data an individual human needs to learn language and how much data an LLM needs to reach its level of simulation. Humanity collectively probably needed a similar amount of data to what LLMs use to get here, but it was spread across a billion years of evolution from simple animals to Homo sapiens.

    • sonink 3 days ago

      > If so, then sensory inputs and interaction with the physical through them were absolutely a crucial part of how minds evolved, so I find your approach a priori very unlikely to have a chance at success.

      If that were the case, people who were born blind would demonstrate markedly reduced intelligence. I don't think that is the case, but you can correct me if I am wrong. A blind person might take longer to truly 'understand' and 'abstract' something, but there is little evidence that their capacity for abstraction isn't as good as that of people who can see.

      Agreed that sensory inputs and interaction were absolutely critical to how minds evolved, but when we talk about AI, model training replaces that part, and not just the evolution.

      Evolution made us express emotions when we are hungry, for example. But your laptop will also let you know when its battery is out of juice. Human design inspired by evolution can create systems that mimic its behaviour and function.

      • tsimionescu 3 days ago

        > If that were the case, people who were born blind would demonstrate markedly reduced intelligence. I don't think that is the case, but you can correct me if I am wrong. A blind person might take longer to truly 'understand' and 'abstract' something, but there is little evidence that their capacity for abstraction isn't as good as that of people who can see.

        No, because the mind of a blind person, even one blind from birth, is still the product of a billion years of evolution of organisms that had sight, hearing, touch, smell, etc.

        Not to mention, a person who has no sensory input at all (no sight, no sound, no touch, no smell, no taste, nothing at all) is unlikely to have a fully functioning mind. And certainly a baby born like this would not be able to learn anything at all.

        Of course, the situation is not 1:1 with AI training by any means, as AI models do get input; it's just of a vastly different nature. It's completely unknown what would happen if we could input language into the mind of an infant "directly", without sensory input of other kinds.

        Still, I think it's quite clear that human minds are essentially born "pre-trained", with good starting weights, and that everything we do in life is essentially fine-tuning those weights. I don't think there's any other way to explain the massive input difference (known as the poverty of the stimulus in cognitive science). And this means that there is little insight to be drawn for better model training from studying individual human learning; instead, you would have to draw inspiration from how the mind evolved.

    • dboreham 3 days ago

      > but it was spread across a billion years of evolution from simple animals to Homo Sapiens

      Hard disagree. Evolution made a bigger/better neural processor, and it made better/different I/O devices and I/O pre-processing pipelines. But it didn't store any information in the DNA of the kind you're proposing. That's not how it works. The brain is entirely "field programmable", in all animals (I assert). There is no "pre-training".

      • saeranv 3 days ago

        A simple counterexample here is instinctual behaviour. A sea turtle is born and, with little to no guidance, experimentation, or exploration, heads for the sea. That knowledge is embedded at birth.

        I think the analogy of the brain as hardware devices ("neural processor", "I/O devices", etc.) is misleading. I think I understand the very strict mind-matter dualism you're alluding to here. But so far, attempts to reproduce human-like cognition on actual computer hardware have gotten nowhere close, despite consuming orders of magnitude more energy and data.

      • tsimionescu 3 days ago

        That is certainly false. We're born with plenty of very specific reflexes, and with lots of information about how to use our neural wiring to control much of our bodies. We are born with certain associations built in (good and bad smells, good and bad tastes, certain shapes that scare us, liking shiny objects, and many others).

        This is all somewhat hard to gauge in human babies, as we take a relatively long time to become functional. However, it's clear when looking at many other mammals: baby reindeer or horses, for example, are able to run within minutes of being born; they can see, they can interpret the images they see as objects, they understand things like object permanence, they can approximate distances and speeds, they have a simple theory of mind and can interact with other agents, they can recognize their mother's udder and suckle at it for food, and they can do many, many other tasks that they have zero training for. The only possible conclusion is that their brains are pre-trained, and they are only performing some quick fine-tuning based on experience in their first hours of life.

mensetmanusman 4 days ago

Whatever the mind is, it’s a damn cool subset of the universe.

Simplicitas 3 days ago

Any discussion of a model for consciousness that doesn't include Daniel Dennett's take is a bit lacking from the get-go.

bbor 4 days ago

You’re on the right track :). Check out The Science of Logic, Neurophilosophy, I am A Strange Loop, Brainstorms, and Yudkowsky’s earlier work, if you haven’t! Based on what you have here, you’d love em. It’s a busy field, and a lively one IME. Sadly, the answer is no: the anxiety never goes away

miika 3 days ago

Ever since LLMs came out, many of us have been wondering about these things. It would be easy to say that perhaps our attention and senses somehow come together to formulate prompts, and that the thoughts and whatever else appears in the mind are the output. And everything we ever experienced has trained the model.

But of course we can be assured it's not quite like that in reality. This is just another example of how our models for explaining life are a reflection of the current state of technology.

Nobody takes the old clockwork-universe model seriously now, and these AI-inspired ideas are going to fall short all the same. Yet progress is happening, and all these ideas and discussions are probably important steps that carry us forward.

0xWTF 4 days ago

Complete aside, but love the Tufte styles.

antiquark 3 days ago

Nice ideas... now build it!