rob74 4 days ago

> However, for a comprehensive track network, a simple linked list of track elements is insufficient. It must be possible to traverse it in both directions and accommodate switches (turnouts) and loops. Loops present a particular challenge.

Fun fact: there is at least one track network (the Munich U-Bahn) that avoids loops and interconnecting lines "the wrong way around", so trains always face the same way (https://www-u--bahn--muenchen-de.translate.goog/fahrzeuge/?_...). Because of this, two-carriage trainsets have a "north carriage" and "south carriage", and the newer six-carriage trainsets have north and south end cars plus four middle cars. Of course, not all tracks run north-south, but the name is taken from the way the carriages point in the first (and at the moment still the only) maintenance yard in Fröttmaning.
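Back to the data-structure point in the quote, a minimal sketch of why a simple linked list falls short (all names here are hypothetical, just to make the point concrete): each end of a segment may have several candidate exits (switches), and loops mean traversal must be bounded rather than run until a null pointer.

```python
# Hypothetical sketch (names made up): a track segment traversable in
# both directions. A single next/prev pointer is not enough: a switch
# means an end can connect to several segments, and a loop means
# following pointers never reaches an end, so traversal must be
# bounded by distance or step count.
from dataclasses import dataclass, field

@dataclass
class TrackSegment:
    name: str
    length: float
    next_segments: list = field(default_factory=list)  # exits at the far end
    prev_segments: list = field(default_factory=list)  # exits at the near end
    active_next: int = 0  # current switch position, far end
    active_prev: int = 0  # current switch position, near end

    def advance(self, forward: bool):
        """Follow the currently set switch in the given direction."""
        exits = self.next_segments if forward else self.prev_segments
        idx = self.active_next if forward else self.active_prev
        return exits[idx] if exits else None

# A two-segment loop: note the traversal is bounded by a step count.
a, b = TrackSegment("a", 100.0), TrackSegment("b", 100.0)
a.next_segments.append(b)
b.next_segments.append(a)
seg = a
for _ in range(4):
    seg = seg.advance(forward=True)
print(seg.name)  # "a" again: the loop never terminates on its own
```

The same structure degrades gracefully to plain double-linked track when each exit list has at most one entry.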

AstroJetson 4 days ago

Late to this conversation but a favorite game with my children was Lego Loco

https://en.wikipedia.org/wiki/Lego_Loco

You could build little villages with train layouts. The big thing for them was that you could place tunnels that connected to another user. It let you send trains, and the mail car had "post cards" you could use to send messages.

I miss it.

ngcc_hk 4 days ago

Doing one using Lua and love2d, and along the way found some differences and things to ponder:

- should one use discrete steps (ds) and discrete event simulation instead of the usual dt (and continuous time …)? Why? It seems all game engines offer dt …

- should the track be one-way, with two-way traffic using 2 tracks? Otherwise, with >1 train or train-group … crashes? And if there are junctions … routing and timing would be issues, otherwise crashes?

- picolisp vs fennel ?
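A rough sketch of the first bullet's alternative (illustrative only; `run_des` and the event labels are made up): a dt loop advances every object a little each frame, while a discrete event simulation jumps straight to the next scheduled event. For trains the next interesting moment (arrival at a switch, signal or station) is often computable in advance, which is why DES can be attractive even though engines like love2d hand you dt.

```python
# Minimal discrete-event loop: pop events in time order from a heap
# instead of stepping every object by dt each frame.
import heapq

def run_des(events, until):
    """events: list of (time, label) pairs; process them in time order."""
    heapq.heapify(events)
    log = []
    while events and events[0][0] <= until:
        t, label = heapq.heappop(events)
        log.append((t, label))
        # A real simulation would schedule follow-up events here,
        # e.g. this train's arrival at the next block.
    return log

print(run_des([(5.0, "arrive B"), (1.0, "depart A")], until=10.0))
# [(1.0, 'depart A'), (5.0, 'arrive B')]
```

The two styles also mix: a dt render loop can interpolate positions between DES-computed events.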

  • cess11 4 days ago

    "- picolisp vs fennel ?"

    Not really comparable. Picolisp is really small, really simple, and that makes it kind of weird. It has fexprs rather than sexprs, there are no seatbelts (as in you will segfault when reaching out of bounds), the numbers are fixnums, the database is object oriented and you query it with logic programming, and so on.

    As far as I know, no one has used it for 2D game scripting. If you want to be the first you probably can be, but it will be quite an adventure compared to Fennel.

    • isr 4 days ago

      I think you meant "fexprs instead of macros".

      Picolisp does have a function named 'macro', but it's not the same thing (it's more like 'string map' from tcl).

      There's a lot of interesting stuff in Picolisp, but the one thing I found unbearably ugly was its use of strings ("transient symbols") as a kind of lexically scoped symbol.

      • cess11 4 days ago

        Right, yes. In some sense it's not sexprs as usual either; the evaluation strategy gives it a rather different feel than one might be used to from other paren-syntaxed languages.

        'macro can be used to control evaluation, https://software-lab.de/doc/refM.html#macro , but there is no compilation in Picolisp, hence no compile time macros.

        Don't see the problem with that. How is it different from using strings without quotation marks as global symbols?

        • isr 3 days ago

          Hmm, when you put it like that, my complaint does seem trite.

          It was just one of those things which bugged me ...

greathones 3 days ago

Any link to blogs/materials about programming train simulation (like in openttd)?

W-Stool 4 days ago

No one interested in railroad simulations running Run8 here?

taddevries 5 days ago

I'm just gonna say that I believe Factorio is the best railroad simulator. Second only to OpenTTD.

  • jjmarr 5 days ago

    Factorio is a bad railroad simulator because you cannot do grade separation (bridges or tunnels). This means all of your rail networks are artificially limited to being planar graphs which kills your throughput.

    OpenTTD allows grade separation at least but imho is inferior to Simutrans which ensures cargo and passengers have destinations and will not use your network unless you can path them to their destination.

    Because OpenTTD doesn't have this it devolves into forcing cargo or people to travel the longest possible distance for the most money, instead of to where they actually want to go.

    Simutrans-extended is something I'm really hyped for because the simulation of passenger desires is even more in-depth than the original.

    https://simutrans-germany.com/wiki/wiki/en_extended_Passenge...

    The most satisfying rail network for me is one that has both complex form and function.

    • NoboruWataya 5 days ago
      • jjmarr 4 days ago

        Cargodist isn't the same as what I'm describing. It creates destinations for cargo and passengers based on the existing network you have. In other words, "demand" is changed to fit your playstyle.

        In regular Simutrans, passengers want to go to a specific destination regardless of whether your network allows them to go there. This means you have to change your playstyle to fit what passengers want to do, since otherwise, people won't take trips.

        I believe Simutrans handles this better. If you're someone who wants to go to Europe but the only plane ticket you can buy is to South America, you won't take that trip (Simutrans model). But in OpenTTD, passenger demand is basically "yeah I just want to go somewhere" and then you deliver them to Antarctica. Cargodist means that if you now offer service to Greenland, they'll evenly distribute themselves between Antarctica and Greenland.

        I'm more of a product-minded person so I prefer Simutrans. Demand is the constant I want to optimize around; I don't want demand to be optimized around my transport network. But this also creates a much more difficult game.

    • ssl-3 4 days ago

      Factorio is a bad railroad simulator because it was never designed to be a railroad simulator. This has only a little bit to do with the respective grade levels of crossings (after all, many, many very functional rail networks operate on Earth using only flat crossings, because the terrain is flat where those networks are built).

      One way in which vanilla Factorio is a shit simulation is that it cannot couple and decouple cars. Trains are built by hand, and then those trains remain as they are until transmogrified by hand.

      Want to send coal/copper/iron/something (or a combination of things) somewhere else? Cool beans! People do this every day in the real world.

      In the real world, cars are often left in yards and sidings to be swapped around and loaded by switchers and other mechanisms while the locomotive that delivered these cars has departed -- probably along with a train of other cars that this station isn't interested in.

      In Factorio, one lets a train fill up with things at A (according to rules), and then it goes from A to B and unloads those things at B (according to rules). Stops for C, D, and E can be added, but even if they are: the whole train (including locomotives) stays coupled together, and there isn't any other way to do it.

      The locomotive is always waiting unless it is travelling with the entirety of its assigned cars. The trains are completely inflexible.

      Real-world train networks don't work that way. Got 50 containers to load up? Drop off 50 appropriate cars to be loaded up, and move on to the next problem while that station deals with putting containers onto cars. And at that next station, load up on already-full tankers. And then to the next station where a bunch of new Fords are dropped off in segments of TTX transport cars.

      Factorio is also a shit simulation because these fixed-unit trains have a predefined route: Not only can cars not be picked up or dropped off, no station can offer things and no other station can order things. In Factorio, if there is iron to deliver: The usual method is to pick up as much iron as will fit on this inflexible train (however long that takes), and take it to station B to unload (however long that takes): It's a neat way to approximate how a belt works in Factorio over a longer distance, but it is not a simulation of how rail actually works.

      It's a fun game and I love playing it, but it's not a fucking rail simulator[1]. Very few aspects of Factorio's rail system resemble actual rail systems in the real world that actually exist.

      (Actually, while I'm at it: Factorio isn't a simulator of anything. Just because it is a fun game does not mean that it has to be a simulation of...anything.)

      1: https://www.merriam-webster.com/dictionary/simulator

  • nalzok 5 days ago

    How do you like Simutrans? https://www.simutrans.com

    • TylerE 4 days ago

      I can't stand it because of the insanely low built-in fixed framerate. It's something weird like 20fps. UI responsiveness is also terrible for the same reason: everything is apparently spaghetti-coded together and the game logic is tied to the framerate.

  • ssl-3 5 days ago

    Factorio is a wonderful video game that includes aspects of using railroads, but it is not a simulation of how railroads work on Earth.

    • MadnessASAP 4 days ago

      It is however a simulation of how railroads work as predators.

      Choo Choo

  • xedrac 5 days ago

    Factorio's railroads are so fun to build and play with. Add a circuit network and robots to the mix, and you have yourself endless hours of enjoyment. I eagerly anticipate the release of 2.0 this fall.

  • burgerrito 4 days ago

    Clearly you folks haven't tried playing A-Train, a Japanese train tycoon game...

    Seriously though, try it out! It's actually one of the best games I've ever played. I'd even say that it's so underrated.

    It's a train tycoon game, but it also quietly focuses on being a real estate simulator, mirroring how a lot of Japanese railway companies make their money not primarily from train tickets but from real estate and other non-farebox income.

    • zem 3 days ago

      one of my favourite games ever was "railroad tycoon 2"; it was not a particularly deep railway simulator, but it was a superbly immersive treatment of "railroads helped settle a continent". few games achieved that sort of immersive feeling for me, civ 1 and stellaris are the other ones that come to mind.

  • paxys 5 days ago

    So then...OpenTTD is the best railroad simulator?

nathan_compton 5 days ago

Picolisp - what a weird thing. I have to admire the temerity of a guy who in 2023 is still team "dynamic scope is better."

  • mppm 4 days ago

    It's not just temerity, I think. Picolisp is a very different animal compared to all modern Lisp and Scheme flavors. It is the last "true lisp" that I am aware of: it has an ultra-minimalist interpreter (hand-written in assembly, by the way) that actually represents programs as linked lists. A function is really just a list whose first element is the argument list and whose remaining elements are the body, and the arguments are bare symbols. Picolisp has no compiler (not even a bytecode compiler), no lexical analysis and no other preprocessing. There is only the reader, and its output goes directly to the interpreter.

    On the upside, this makes Picolisp the only language with truly "first class" functions, in the sense that you can really create and manipulate them at runtime just like you would strings or integers, unlike pretty much every other language out there, where "lambdas" are just syntactic sugar over function pointers. On the downside, this is all of course completely unchecked and pretty unsafe, and, to come back to the original point, you do not get such conveniences as lexical scoping. That would be literally impossible to implement without changing the nature of Picolisp into a proto-compiled language.
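    To make the "programs are linked lists" point concrete, here is a toy sketch in Python (emphatically not Picolisp itself, and far cruder): a "function" is literally the list `[params, body...]`, and lookup is dynamic, walking one global binding stack instead of captured lexical environments.

```python
# Toy interpreter in the spirit described above (NOT Picolisp): a
# "function" is literally the list [params, body...], and scope is
# dynamic -- lookup walks one global binding stack, so nothing is
# captured and a runtime-constructed list is immediately callable,
# with no eval/compile step in between.
stack = []  # dynamic bindings: (name, value) pairs, most recent last

def lookup(name):
    for n, v in reversed(stack):
        if n == name:
            return v
    raise NameError(name)

def ev(expr):
    if isinstance(expr, str):          # a symbol: dynamic lookup
        return lookup(expr)
    if not isinstance(expr, list):     # numbers evaluate to themselves
        return expr
    op, *args = expr
    if op == "+":                      # one built-in, for demonstration
        return sum(ev(a) for a in args)
    fn = ev(op) if isinstance(op, str) else op
    params, *body = fn                 # the function IS this list
    stack.extend(zip(params, [ev(a) for a in args]))
    try:
        result = None
        for form in body:
            result = ev(form)
        return result
    finally:
        if params:
            del stack[-len(params):]

# Cons up a function at runtime; it is callable as-is:
add3 = [["x"], ["+", "x", 3]]
print(ev([add3, 4]))  # 7

# Dynamic scope: a free variable resolves to the *caller's* binding.
use_y = [[], ["+", "y", 1]]
print(ev([[["y"], [use_y]], 10]))  # 11
```

    This also shows why dynamic scope falls out naturally in such an interpreter: with no environments to capture, there are no closures to build before execution.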

    • lispm 4 days ago

      > It is the last "true lisp" that I am aware of -- it has an ultra-minimalist interpreter (hand-written in assembly, by the way) that actually represents programs as linked lists.

      Strange, I thought many Lisp systems still have source level list-based interpreters. For Common Lisp I would think: SBCL (optional), CLISP, Allegro CL, LispWorks, ECL, ... They can also compile code. Compiling Lisp code was also already available in the first Lisp implementations and having a compiler was an explicit goal of the original implementors.

      Let's use the LispWorks Listener (the REPL tool):

          CL-USER 25 > (defun foo (bar) (break) (print (list :hello bar)))
          FOO
      
          CL-USER 26 > (foo 10)
      
          Break.
            1 (continue) Return from break.
            2 (abort) Return to top loop level 0.
      
          Type :b for backtrace or :c <option number> to proceed.
          Type :bug-form "<subject>" for a bug report template or :? for other options.
      
          CL-USER 27 : 1 > :bq
      
          INVOKE-DEBUGGER <- BREAK <- FOO <- EVAL <- CAPI::CAPI-TOP-LEVEL-FUNCTION <- CAPI::INTERACTIVE-PANE-TOP-LOOP
          <- MP::PROCESS-SG-FUNCTION
      
          CL-USER 28 : 1 > :n
          Call to INVOKE-DEBUGGER
      
          CL-USER 29 : 1 > :n
          Call to BREAK
      
          CL-USER 30 : 1 > :n
          Interpreted call to FOO
      
          CL-USER 31 : 1 > :lambda
          (LAMBDA (BAR) (DECLARE (SYSTEM::SOURCE-LEVEL #<EQ Hash Table{0} 81D03EFF03>)) (DECLARE (LAMBDA-NAME FOO)) (BREAK) (PRINT (LIST :HELLO BAR)))
      
      
      The debugger output of the currently running function looks like a linked list to me. I could modify it destructively, if I wanted.

      LispWorks has a list-level interpreter. One can also compile code.

      Typically Lisp interpreters tend to be written in C, since that usually is more portable than Assembler.

      > On the downside, this is all of course completely unchecked and pretty unsafe, and, to come back to the original point, you do not have such conveniences as lexical scoping.

      I would expect from a typical Lisp interpreter (a source-level list-based interpreter) that it does all kinds of runtime checks and also provides lexical scoping. If there is a closure, then this closure would be a combination of some function and an environment. In standard Common Lisp there is no access to that environment, but I could look into it from an inspector:

          CL-USER 54 > (defun example (a) (lambda () (break) (print a)))
          EXAMPLE
      
          CL-USER 55 > (example 10)
          #<anonymous interpreted function 8020001A59>
      
          CL-USER 56 > (describe *)
      
          #<anonymous interpreted function 8020001A59> is a TYPE::INTERPRETED-FUNCTION
          Code             (LAMBDA NIL (BREAK) (PRINT A))
          Environment      ((A . 10) (#:SOURCE-LEVEL-ENVIRONMENT-MARKER FUNCTION NIL . #<EQ Hash Table{0} 81D03EFF03>) (#:FUNCTOR-MARKER LAMBDA (A) (DECLARE (SYSTEM::SOURCE-LEVEL #<EQ Hash Table{0} 81D03EFF03>)) (DECLARE (LAMBDA-NAME EXAMPLE)) (LAMBDA NIL (BREAK) (PRINT A))))
      
      As we can see, the interpreted closure has its code as a list and an environment, where A = 10.
      • mppm 4 days ago

        > The debugger output of the currently running function looks like a linked list to me. I could modify it destructively, if I wanted.

        I doubt that actually, though I don't have LispWorks installed to try it. Modifying a function at runtime as if it were a list is actually the best test to see if your Lisp really represents functions as lists, or as some other internal object that is rendered as a list in the REPL by accessing the stored definition. E.g. both CLISP and guile error out if I try `(car (lambda (a b) (+ a b)))`.

        Another good test is to construct a function at runtime and try to call it. To do that you will probably need to call `eval` or equivalent, just like you would in Lua or Python. Not in Picolisp though, which is why I consider it to be the only truly homoiconic programming language.

        • lispm 4 days ago

          > I doubt that actually

          You can doubt that. But I have done it. I know that it works.

              CL-USER 63 > (defun foo (a) (print 'hey) (print a))
              FOO
          
              CL-USER 64 > (foo 'jack)
          
              HEY 
              JACK 
              JACK
          
              CL-USER 65 > (function-lambda-expression 'foo)
              (LAMBDA (A) (DECLARE (SYSTEM::SOURCE-LEVEL #<EQ Hash Table{0} 81D03EFF03>)) (DECLARE (LAMBDA-NAME FOO)) (PRINT (QUOTE HEY)) (PRINT A))
              NIL
              FOO
          
              CL-USER 66 > (fifth *)
              (PRINT (QUOTE HEY))
          
              CL-USER 67 > (setf (fifth (function-lambda-expression 'foo)) '(print 'hello))
              (PRINT (QUOTE HELLO))
          
              CL-USER 68 > (foo 'jack)
          
              HELLO 
              JACK 
              JACK
          
          
          > E.g. both CLISP and guile error out if I try `(car (lambda (a b) (+ a b)))`.

          Sure, but in CLISP the function is still a list internally. An interpreted function is a record, which stores the code as a list internally. The internally stored list is interpreted.

          Python compiles the code to byte code. CLISP has both a list-based interpreter and a byte code interpreter.

          • mppm 4 days ago

            Mmh. Like I said, I'm not familiar with LispWorks, so take this with a grain of salt, but to me it looks like the system is just retrieving the original source expression that it keeps around in addition to the executable representation. But this is ultimately a question of implementation.

            My original point was that in Picolisp runtime-constructed lists are directly executable, without any processing. An unroller function that takes an action `foo` and a runtime number, e.g. 3, would return `'(() (foo) (foo) (foo))` and that would be it. In other Lisps you would first build the equivalent of this list and then pass it to `eval` to make it actually executable.

            Whether this step is expensive or not depends on the system. E.g. efficient closures require scanning the containing scopes and creating minimal state objects. Just storing the parent environment pointer would be super-inefficient and would prevent the entire environment from being garbage-collected. Hence my claim that dynamic scope is the only thing that really makes sense in a directly interpreted Lisp, and that the presence of lexical scope implies some nontrivial processing before execution, though not necessarily as extensive as what would usually be called compilation.

            Edit: lexical analysis -> lexical scope

            • kazinator 3 days ago

              But the behavior changed accordingly when lispm mutated the source expression.

              So if there is another form that is actually being used for the execution, the change in source must have been detected and propagated to that other form.

              Anyway, that situation looks like true blue interpreted functions. There is nested list source you can tweak, and the tweaks somehow go into effect.

            • lispm 3 days ago

                  CL-USER 100 > (defun unroll (n exp) `(lambda () ,(loop repeat n collect exp)))
                  UNROLL
              
                  CL-USER 101 > (unroll 3 '(foo))
                  (LAMBDA NIL ((FOO) (FOO) (FOO)))
              
                  CL-USER 102 > (eval *)
                  #<anonymous interpreted function 8020000EC9>
              
                  CL-USER 103 > (describe *)
              
                  #<anonymous interpreted function 8020000EC9> is a TYPE::INTERPRETED-FUNCTION
                  CODE      (LAMBDA NIL ((FOO) (FOO) (FOO)))
              
              As you can see, the thing is basically the same as a cons cell with two entries: the type and the code:

                  (TYPE::INTERPRETED-FUNCTION . (LAMBDA NIL ((FOO) (FOO) (FOO))))
              
              The above Lisp implementation does not use a cons cell, but a different type mechanism to easily and reliably identify the runtime type.

              In Picolisp this is hardwired into the interpreter. The interpreter also needs to check at runtime, every time, whether the list structure is actually a function and what kind of function it is.

              In above Lisp, the type of the function is encoded during EVAL and the check for the type is then a type tag check.

              For this example, using the LispWorks implementation, it also makes no difference to EVAL whether the function has 10 or 100000 subforms. The execution time is small. No special processing of the list of subforms takes place. For example, the code is not compiled, not converted to byte code, not converted to another representation.

                  CL-USER 111 > (let ((f (unroll 10 '(foo)))) (time (eval f)))
                  Timing the evaluation of (EVAL F)
              
                  User time    =        0.000
                  System time  =        0.000
                  Elapsed time =        0.000
                  Allocation   = 184 bytes
                  0 Page faults
                  GC time      =        0.000
                  #<anonymous interpreted function 8020001DA9>
              
                  CL-USER 112 > (let ((f (unroll 100000 '(foo)))) (time (eval f)))
                  Timing the evaluation of (EVAL F)
              
                  User time    =        0.000
                  System time  =        0.000
                  Elapsed time =        0.000
                  Allocation   = 184 bytes
                  0 Page faults
                  GC time      =        0.000
                  #<anonymous interpreted function 8020000839>
              
                  CL-USER 113 > (defun unroll (n exp) `(lambda () ,(loop repeat n collect exp)))
                  UNROLL
              • mppm 3 days ago

                I stand corrected, thank you :)

                I always thought that `eval` in CL was an un-idiomatic and fairly expensive operation, even for code that is not compiled. You learn something every day...

                • lispm 3 days ago

                  A Common Lisp implementation may implement EVAL by calling the compiler. That would be more expensive. Several Common Lisp implementations use EVAL to create an interpreted function, and then the user can call COMPILE to compile it.

        • kazinator 3 days ago

          > (car (lambda (a b) (+ a b)))

          An interpreted function in a Common Lisp cannot literally just be a lambda expression list, because that would not satisfy the type system. It has to be of type function and a function is not a subtype of list.

          What happens is that there is some container object which says "I'm an (interpreted) function", which has slots that hold the raw source code. It might not be a lambda form; for instance, the original lambda might be destructured into parameters and body that are separately held.

          There is some API by which the interpreter gets to those pieces and then it's just recursing over the nested lists.

          > Another good test is to construct a function at runtime and try to call it.

          Common Lisp doesn't provide a standard API for constructing an interpreted function (or even make provisions for the existence of such a thing). Lisps that have interpreted functions may expose a way for application code to construct them without having to eval a lambda expression.

          It's just a matter of constructing that aforementioned container object and stuffing it with the code piece or pieces. If that is possible then that object is something you can call.

          When you call that function, eval ends up used anyway.

      • cess11 4 days ago

        There's also the ultra-minimalist part. Picolisp has like one data structure, a cell of two pointers, and that's it. Maybe symbols are implemented in some other way, I'm not sure, but pretty much everything is based around that.

        Portability is not a concern. Either you run something 64 bit POSIX or you aren't going to use Picolisp (except if you get your hands on an old 32 bit build). I think it's usually tested by a user on OpenBSD but outside of Debian you're basically on your own.

        There are like three basic data types. Fixnums, symbols and the linked list. If you do something similar to what you're showing from SBCL (or LispWorks, didn't read closely enough at first) it'll look pretty much like it does in source.

             $ pil +
             : (de myfun ()(prinl "yo")(prinl "world"))
             : (cdr myfun)
             -> ((prinl "yo") (prinl "world"))
             : myfun
     -> (NIL (prinl "yo") (prinl "world"))
             : (myfun)
             yo
             world
             -> "world"
             : (set (cdr myfun) '(prinl "hey"))
             -> (prinl "hey")
             : myfun
             -> (NIL (prinl "hey") (prinl "world"))
             : (myfun)
             hey
             world
             -> "world"
        
        Hijacking the 'de mechanism is not something you'll do often, but looking at definitions like this is something you'll do a lot, and from time to time you'll navigate them with list-browsing functions.

        It boils down to some very simple interpreter behaviours, and after some time surprises become quite rare. I find it takes off quite a bit of cognitive load when solving non-trivial scripting tasks compared to e.g. bash or Python. Especially since 'fork and 'in/'out are so easy to work with, with the former you just pass in an executable list, '((V1 V2 Vn)(code 'here)(bye)), with the latter you get a direct no-hassle connection to POSIX pipes.

        • lispm 4 days ago

          LispWorks, I change the interpreted code:

              CL-USER 63 > (defun foo (a) (print 'hey) (print a))
              FOO
          
              CL-USER 64 > (foo 'jack)
          
              HEY 
              JACK 
              JACK
          
              CL-USER 65 > (function-lambda-expression 'foo)
              (LAMBDA (A) (DECLARE (SYSTEM::SOURCE-LEVEL #<EQ Hash Table{0} 81D03EFF03>)) (DECLARE (LAMBDA-NAME FOO)) (PRINT (QUOTE HEY)) (PRINT A))
              NIL
              FOO
          
              CL-USER 66 > (fifth *)
              (PRINT (QUOTE HEY))
          
              CL-USER 67 > (setf (fifth (function-lambda-expression 'foo)) '(print 'hello))
              (PRINT (QUOTE HELLO))
          
              CL-USER 68 > (foo 'jack)
          
              HELLO 
              JACK 
              JACK
          • cess11 3 days ago

            Is the first line in the body executable?

              (LAMBDA (A) 
                (DECLARE (SYSTEM::SOURCE-LEVEL #<EQ Hash Table{0} 81D03EFF03>)) 
                (DECLARE (LAMBDA-NAME FOO)) 
                (PRINT (QUOTE HEY)) 
                (PRINT A))
            
            If not, one would probably need to do a bit of fiddling to tear away the function from the symbol if one should feel a sudden urge to do so.

            Perhaps similar to this:

                : (de myfun (D)(prinl "heyo") (prinl D))
                -> myfun
                : myfun
                -> ((D) (prinl "heyo") (prinl D))
                : (mapcar '((D) (prinl "heyo") (prinl D)) '("world"))
                heyo
                world
                -> ("world")
                : (mapcar '(NIL (prinl "hey")) '(lel))
                hey
                -> ("hey")
                : (car myfun)
                -> (D)
                : (cdr myfun)
                -> ((prinl "heyo") (prinl D))
                : (cons (car myfun) (cdr myfun))
                -> ((D) (prinl "heyo") (prinl D))
                : (mapcar (cons (car myfun) (cdr myfun)) '("world"))
                heyo
                world
                -> ("world")
            
            Mostly I use this to get at an implementation so I can test it or a portion of it against some particular value, or just to see how something works. Most builtins are implemented in assembler and their symbols only return a pointer, but for example 'doc is implemented in Picolisp:

                : doc
                -> ((Sym Browser) (raw T) (call (or Browser (sys "BROWSER") "w3m") (pack "file:" (and (= 47 (char (path "@"))) "//") (path (if (get Sym 'doc) (pack @ "#" Sym) "@doc/ref.html")))) (raw NIL))
                : de
                -> 270351
                : macro
                -> ("Prg" (run (fill "Prg")))
            
            I really like the Picolisp 'match (https://software-lab.de/doc/refM.html#match ) function. What's the easiest way to do the same in Common Lisp? If it's not obvious from the examples there, it can also be used with character lists, i.e. transient symbols, i.e. strings, chopped up into a list of UTF-8 characters. It's similar to unification in logic programming, which is something Picolisp supports.
            • lispm 3 days ago

              LispWorks:

                  CL-USER 130 > (defun myfun (d) (print "heyo") (print d))
                  MYFUN
              
                  CL-USER 131 > (let ((source (function-lambda-expression #'myfun)))
                                  (mapcar (eval (list* 'lambda
                                                       (second source)
                                                       (nthcdr 4 source)))
                                          '("world")))
              
                  "heyo" 
                  "world" 
                  ("world")
              
              The above can, in some form, be done in many Lisp implementations. It's simply not widely used in this form. For most applications it's more interesting to use macros to manipulate code, which can then be compiled to efficient code.

              Pattern matching has been implemented many times in Lisp.

              I adapted this code for a pattern matcher from a book (LISP, Winston/Horn), probably >30 years ago:

                  CL-USER 115 > (pmatch:match '(#$a is #$b) '(this is a test))
                  ((B (A TEST)) (A (THIS)))
              
                  CL-USER 116 > (pmatch:match '(#$X (d #$Y) #$Z) '((a b c) (d (e f) g) h i))
                  ((Z (H I)) (Y ((E F) G)) (X ((A B C))))
              
              Not to say, that Picolisp isn't great for you, but it is not the only language where lists can be manipulated.
              • cess11 3 days ago

                "Not to say, that Picolisp isn't great for you, but it is not the only language where lists can be manipulated."

                Don't think I've made this claim.

                Can the #$a &c. be used as variables?

                    : (and (match '("h" "e" "l" "l" "o" @A ~(chop "ld")) (chop "helloworld")) @A) 
                    -> ("w" "o" "r")
                • lispm 2 days ago

                  > Don't think I've made this claim.

                  That's true, but it sounds a bit as if Lisp interpreters were something very unusual. The syntax and other details may differ, but the general idea of executing source code via an interpreter is very old and has been implemented many times; so has the idea that code can be mutable. It's just not very fashionable, since some Lisp dialects are designed such that one wants the compiler to be able to statically check the code for various things before runtime.

                  In Common Lisp we would not want to introduce variables into an outer scope by enclosed functions. One would explicitly set up a scope.

                  Example: this is a macro example, which creates a scope, where the matching match-variables are also Lisp variables.

                      CL-USER 165 > (pmatch:when-match (append (coerce "hello" 'list)
                                                               '(#$a)
                                                               (coerce "ld" 'list))
                                        (coerce "helloworld" 'list)
                                        (length a))
                  
                      3
  • nerdponx 5 days ago

    Wait, is it global by default (Lua, Bash) or truly dynamic? The latter would be kind of mind-bending to program with as the sole or default style. Was that ever a thing? Maybe I'm just too young to have experienced that.

    • bsder 4 days ago

      It's a shame that John N. Shutt is no longer with us.

      He created a very Scheme-like language called Kernel that seemed to walk the line between lexical and dynamic in a much more controlled way.

      https://web.cs.wpi.edu/~jshutt/kernel.html

      • nathan_compton 4 days ago

        I think Kernel is the opposite extreme from Picolisp, since it wants all objects to be first class but wishes to maintain lexical scope information for all of them. I think this is hard because, in a certain sense, the names of things in a program have no natural correspondence to the meaning of the program, from the point of view of a compiler writer in particular. Code calculates a value from values, or changes the state of memory, or however you want to conceive of it. The names one used to tell the compiler how to do that don't have any obvious relation to the transformation, and keeping them around so that the programmer can meta-program is complex and makes the generated code slower. In a way, Common Lisp and Scheme seem like two distinct local maxima, of which I prefer the latter. Kernel is neat though.

        • bsder 3 days ago

          Kernel is "mostly" just lexical unless you explicitly opt out with FEXPRs. FEXPRs are what draw most people into Kernel.

          However, what is probably more important, but doesn't immediately stick out until you poke at Kernel a lot harder, is the fully reified "environments". "Environments are copy-on-write and don't destroy older references" has very subtle consequences that seem to make dynamic scope a lot better behaved.

          This also has the consequence that I can force things to explicitly evaluate in reference to the "ground environment" which is an explicit signal that it can always be compiled.

          I suspect there is a lot of fertile research ground here. In addition, there is a lot of implementation subtlety that I'm not sure he really grasped. Environments need a special data structure otherwise they cons up an enormous amount of garbage (I suspect it really needs a Bitmapped Vector Trie like Clojure).

          I wish Shutt were still around to talk to. :(

    • markasoftware 5 days ago

      Emacs Lisp had dynamic binding as the default (without any true support for lexical binding) until 2012.

      • tmalsburg2 5 days ago

        People talk as if dynamic scoping were objectively a mistake, but the fact that it works well and is really useful in a complex piece of software like Emacs suggests otherwise.

        • p_l 4 days ago

          The original opposition to lexical binding in Lisp circles was that lexical scope would be slower. That turned out to be false.

          Emacs Lisp explicitly kept dynamic binding for everything because it made for simpler overriding of functions deep inside the editor, but it resulted in lower performance and various other issues; ultimately, most of the benefit of such shadowing is now the province of defadvice and the like.

          • kazinator 4 days ago

            I can understand why that objection would be raised, because lexical binding is slower in code that is interpreted rather than compiled, compared to (shallow) dynamic binding. Under shallow dynamic binding, there isn't a chained dynamic environment structure. Variables are simply global: every variable is just the value cell of the symbol that names it. The value cell can be integrated directly into the representation of a symbol, so accessing a variable under interpretation is very fast, compared to accessing a lexical variable, which must be looked up in an environment structure.
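
A minimal sketch of the shallow-binding idea in Python (a dict stands in for the symbols' value cells; the names here are illustrative, not from any particular Lisp implementation):

```python
# Shallow dynamic binding: each variable is just the value cell of its
# symbol, so lookup is a single table access with no environment chain.
# Entering a binding saves the old cell contents; exiting restores them.
cells = {}  # symbol name -> current value (the "value cell")

class dynamic_let:
    """Bind `sym` to `val` for the duration of a with-block."""
    def __init__(self, sym, val):
        self.sym, self.val = sym, val
    def __enter__(self):
        self.saved = cells.get(self.sym)  # save old value on entry
        cells[self.sym] = self.val
    def __exit__(self, *exc):
        cells[self.sym] = self.saved      # restore on unwind

def lookup(sym):
    return cells[sym]  # O(1); no scope chain to walk

cells["x"] = "global"
with dynamic_let("x", "inner"):
    assert lookup("x") == "inner"  # callees see the dynamic binding
assert lookup("x") == "global"     # restored after the binding exits
```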

        • DemocracyFTW2 4 days ago

          A rather weak argument when you consider what kind of mechanisms (like a digital clock with a working seven-segment display) people have put together in Conway's Game of Life; to me this does not suggest in any way that GoL could ever be my favored platform for simulating a digital clock (or anything more complex than a glider, for that matter). Likewise, vacuum cleaners and toothbrushes have been made hosts for playing Doom, and people accomplish all kinds of stuff like quines and working software in brainf*ck. None of these feats is indicative of the respective platform being suitable, or the right tool, for a sizable number of programmers.

        • pjmlp 4 days ago

          As someone who has used it in languages like Clipper and Emacs Lisp, and the ADL rules in C++ templates: cool for pulling off programming tricks of wonder, a pain to debug when something goes wrong several months later.

        • nathan_compton 4 days ago

          Few deny the utility of dynamic-style variables for certain kinds of programming. But it can be helpful to segregate that behavior more carefully than in a language where it is the default.
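
One way such segregation looks in practice, sketched here with Python's contextvars (an opt-in dynamic-variable mechanism in an otherwise lexically scoped language; the log_level example is made up):

```python
from contextvars import ContextVar

# Dynamic-style variable: it must be declared as such; ordinary
# variables in the language remain lexically scoped.
log_level = ContextVar("log_level", default="INFO")

def report():
    # Reads whatever binding is in effect in the caller's dynamic extent.
    return f"level={log_level.get()}"

def with_debug():
    token = log_level.set("DEBUG")  # establish a dynamic binding
    try:
        return report()             # the callee sees the binding
    finally:
        log_level.reset(token)      # unwind it, like exiting a let

assert report() == "level=INFO"
assert with_debug() == "level=DEBUG"
assert report() == "level=INFO"     # the binding did not leak
```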

    • actionfromafar 5 days ago

      I used to program in PicoLisp a long time ago but I have forgotten most of it.

      Hope you can make sense of this:

      https://picolisp.com/wiki/?firstclassenvironments

      • nathan_compton 4 days ago

        > PicoLisp uses dynamic binding for symbolic variables. This means that the value of a symbol is determined by the current runtime context, not by the lexical context in the source file.

        > This has advantages in practical programming. It allows you to write independent code fragments as data which can be passed to other parts of the program to be later executed as code in that context.

        This amuses me because while it's technically true, this amazing feat is accomplished only by denuding the code of substantial expressive power, namely the relation between the lexical denotation of the code and its meaning. I will say this: aesthetically, I prefer picolisp's approach to Common Lisp's, which is to just paper over this problem with gensyms, packages, etc. Give me hygienic macros or give me death.

        • kazinator 4 days ago

          Gensyms and packages are not required to make lexical scope work. Macros in an unhygienic macro system use these internally so that their expansions don't have unexpected behaviors in the scope where they are planted. The problems avoided by gensyms or packages affect both dynamic and lexical scopes: a dynamic variable can be wrongly captured by an internal macro variable, not only a lexical one.

          It may be there are solutions favored in Picolisp without using macros that would be done using macros in idiomatic Common Lisp, and so those solutions don't need gensyms and whatnot.
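
The capture problem that gensym guards against can be sketched in a few lines of Python, modeling Lisp code as nested lists; the swap macro is the classic textbook example, not taken from Picolisp or any particular CL library:

```python
import itertools

_counter = itertools.count()

def gensym(prefix="g"):
    """Return a fresh name that cannot collide with user symbols."""
    return f"#:{prefix}{next(_counter)}"

def swap_expansion(a, b, tmp="tmp"):
    # Expands to: (let ((tmp a)) (setq a b) (setq b tmp))
    return ["let", [[tmp, a]], ["setq", a, b], ["setq", b, tmp]]

# If the user's own variable happens to be named "tmp", the naive
# expansion becomes (let ((tmp tmp)) (setq tmp x) (setq x tmp)),
# which captures the user's variable and fails to swap:
bad = swap_expansion("tmp", "x")
assert bad[1][0] == ["tmp", "tmp"]  # macro temp shadows the user variable

# A gensym'd temporary cannot collide, whether the scope the expansion
# lands in is lexical or dynamic:
good = swap_expansion("tmp", "x", tmp=gensym("tmp"))
assert good[1][0][0].startswith("#:tmp")
```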

          • nathan_compton 4 days ago

            My point is only that unless you are using a hygienic macro system, the idea that you are manipulating code in your macro is a (often white) lie. Code has semantics, a meaning, and unless the object you manipulate carries those semantics with it (that is, the syntax objects of e.g. `syntax-case`) you're just manipulating some data which has a necessarily superficial relationship with the code itself. Picolisp resolves this by simply "eliminating" lexical scope, which means that code really is trivially related to its denotation, since the semantics of variable binding really are just "whatever is currently bound to this variable." Scheme resolves this by having syntax transformations instead of macros: functions which genuinely manipulate syntax objects which carry along with them, among other things, information about their lexical context. Common Lisp accepts that most of the issues arising from the distinction between code itself and its nude denotation can be worked around and provides the tools to do that, but in Common Lisp one still transforms the denotation of the code, not the code itself. From my point of view, if one is purely interested in the aesthetics of the situation, the Scheme approach is much more satisfactory. From a practical point of view, it doesn't seem to be particularly onerous to program in, although macros in Scheme seem to lack the immediate intelligibility of Common Lisp ones.

            • kazinator 4 days ago

              You are manipulating fragments of source code in a macro. Material in which tokens have been converted to objects and which has a nested structure. So, nicer than textual source code.

              • nathan_compton 4 days ago

                I mean yes and no. In a CL macro you are manipulating lists of symbols and other atoms, and in a sense that is code. But code has some static properties (of which lexical binding is one) which are not reflected in that structure and which you can break pretty easily in a CL macro. A Scheme syntax object carries the lexical information which is so critical to the meaning of the code, and because it does, it is much harder to accidentally manipulate the code in such a way that its meaning changes. It is exactly the static lexical binding semantics of Common Lisp which introduce the conceptual tension in macro programming that requires the programmer to manually worry about gensyms. Because picolisp lacks lexical binding, manipulating code lacks this complication (and, in fact, the complication of a macro system almost reduces to a trivial combination of quotation and evaluation).

                • kazinator 4 days ago

                  Programmers say that they are manipulating code when they go "vi foo.c" at their Unix prompt, so that's a bit of an upstream rhetorical paddle.

                  > It is exactly the static lexical binding semantics Common Lisp which introduce the conceptual tension in macro programming that requires the programmer to manually worry about gensyms.

                  A dynamically scoped Lisp (like Emacs lisp by default) with those kinds of macros needs gensyms all the same. It isn't the lexical scope.

                  When we have (let ((x 1)) (+ x x)), then regardless of whether x is lexical or dynamic, there is a lower level of binding going on. The x in (+ x x) physically belongs to the enclosing (let ...). That is not lexical scope; it's a fact about the position of the code pieces regardless of x being lexical or dynamic.

                  This is why in that strategy for implementing hygienic Scheme macros that you're alluding to, syntax objects, there is a different kind of closure at play: the syntactic closure. It is not a lexical closure.

                  The syntactic closure doesn't say that "x is bound as a variable". Only "this x expression is meant to be enclosed in this code".

                  Picolisp doesn't run into hygiene issues requiring gensym because it doesn't perform macro expansion:

                  https://picolisp.com/wiki/?macros

                  If you don't have a code manipulating process that invisibly transplants pieces of code from here to there, then of course you don't have the issues which that entails.

                  • nathan_compton 3 days ago

                    Lisps sure is fun! I didn't understand any of this kind of stuff until I learned Lisp.

                    • kazinator 3 days ago

                      Like Dijkstra said, you're able to think previously impossible thoughts.

        • More-nitors 4 days ago

          idk doesn't this mean I can't get any help from IDEs? code-completion? find-all-references?

          • cess11 4 days ago

            You'd probably be the only person using an IDE for development in Picolisp.

            The main author does (or at least did) a lot of development on a tablet, with his own software keyboard (https://play.google.com/store/apps/details?id=de.software_la... , which I've enjoyed for years on my handhelds, in part due to the tmux-arpeggio), and his own editor (https://picolisp.com/wiki/?vip ). I think most of us do something similar, using vim or vip, maybe on larger computers, but generally a pretty minimal setup.

            The REPL has string based completion, besides completing symbols it will also complete file paths. Development is heavily REPL based, you'd spend a lot more time inspecting the runtime than searching for string occurrences in files.

            From the REPL you'd also read the language reference, most likely in w3m, the preferred text web browser in this community. (doc 'macro) will open the reference on this entry if you started the REPL with 'pil +', where the + is a flag denoting debug mode. You can expect the web GUI framework to work rather well in w3m.

  • cess11 4 days ago

    Of all the quirks and weirdness in Picolisp, this is what gets to you?

    Either a Picolisp system is short-lived enough that your execution environment actually maps to the file you're loading, or you're going to be interactively inspecting it anyway.

    • zem 3 days ago

      honestly, I feel the same way. for some reason dynamic scope just feels "wrong" rather than quirky or weird, in that it doesn't fit my mental model of how a programming language should behave and what sort of bookkeeping needs to be the compiler's problem rather than my problem. never used picolisp, but I really wanted to like lush back in the day and the dynamic scope was the stumbling block.

      • cess11 3 days ago

        There is no compiler in Picolisp, only a very simple interpreter.

        The only bookkeeping problem I've encountered in practice is littering the runtime with symbols. As far as I know there's no way to make it forget about symbols it has encountered, and they are interned as soon as they are encountered. I think the namespacing is supposed to counter this, but I've never learnt it properly.

        I'm not sure in what situation the scoping would be a problem. The way I usually go about picolisping is using maybe a few globals (could be options, credentials, global stacks, something like that), conventionally marked with an asterisk, like *Global, and then everything else is functions that typically take one parameter, or perhaps a data parameter and a numeric limit for an iterator. Besides let-assignment inside functions, variables rarely seem to be the right tool for me.

        Lush, if it's the Lisp-like shell thingie, seems like a rather different programming environment, what with the inline C and whatnot. Might try it out, seems it hasn't been updated in fifteen years or so, could be an adventure.

        • zem 3 days ago

          lush was basically an early attempt at what julia does today - a high level lisp like language and a high performance c-like language with good ffi support wrapped in one.

          I do see your point about picolisp being simple enough that the dynamic scope fits into the overall model; I might give it a try sometime just to see what it's like in practice.

          • cess11 3 days ago

            It's one of my favourite tools; the entire Picolisp system is like 1.5 MB or so, so it's really easy to move to some constrained or remote environment, as long as it's POSIX.

            When terminal incantations grow unwieldy I usually abstract by putting them in a Picolisp function and start doing things in the REPL. It takes like ten minutes to write a wrapper around a MySQL CLI client that takes a query string and outputs a list of lists, and then you can wrap that in fork/bye and mapcar it over cuts from a list to run queries or imports in parallel. Similarly you can trivially hack up an 'Ansible at home' by running commands over SSH in parallel. If I'm worried about data races I let Linux handle it by writing lines to a file and then slurping that back in when every process is done.
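
A rough Python analogue of that workflow (subprocess plus a worker pool standing in for Picolisp's fork/bye; the printf command below is a placeholder for a real mysql client invocation, which would need credentials and proper quoting):

```python
import subprocess
from multiprocessing import Pool

def run_cli(command: str) -> list[list[str]]:
    """Run a shell command and parse tab-separated output into rows."""
    out = subprocess.run(["sh", "-c", command],
                         capture_output=True, text=True, check=True).stdout
    return [line.split("\t") for line in out.splitlines()]

def query(sql: str) -> list[list[str]]:
    # With a real client this might look something like:
    #   run_cli(f"mysql --batch -e {shlex.quote(sql)} mydb")
    # Here we just echo the "query" back as a one-row result.
    return run_cli(f"printf '%s\\t%s\\n' row {sql!r}")

if __name__ == "__main__":
    # Map the wrapper over a list of queries in parallel, one process each.
    with Pool(2) as pool:
        results = pool.map(query, ["q1", "q2"])
    assert results == [[["row", "q1"]], [["row", "q2"]]]
```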

            Surprisingly many things are heavier or slower than launching and disbanding a POSIX process, so it's often quite viable to just hand work over to forks. Once I scraped out data about many thousands of people in tens of organisations by wrapping w3m and forking on URLs I'd gathered in a similar way. It can probably be done with Beautiful Soup too, but I already knew how to use a browser to grab a copy of a web page, and low-level string parsing was probably faster and easier to implement than some abstracted XML representation or XPath. The value of that data set was easily a couple of orders of magnitude larger than what I got paid to assemble it.

            I mean, you can do these things in bash or Python or Clojure or something, but the Picolisp REPL has really good ergonomics and gets out of the way. Performance isn't great, but I find it good enough and have handled files with hundreds of thousands of lines without getting annoyed. Sometimes I reach for Elixir instead but it feels a bit clumsy in comparison.

  • actionfromafar 5 days ago

    It's like monkey patching in Ruby. It can be used for good and for evil.

    • DemocracyFTW2 4 days ago

      In a less balanced both-sides view Ruby as a social construct is an outlier, monkey patching being (rightly so IMHO) regarded as a more-than-questionable practice in most other mainstream PL communities. I mean, yeah, GOTO can be used for good and for evil, and so can GOSUB 200.