Unison is one of the most exciting programming languages to me, and I'm a huge programming language nerd. A language with algebraic effects like Unison's really needs to hit the mainstream, as imo it's "the next big thing" after parametric polymorphism and algebraic data types. And Unison has a bunch of other cool ideas to go with it too.
This isn't really what they're going for, but I think it can potentially be a very interesting language for writing game mods in. One thing about game mods is that you want to run untrusted code that someone else wrote in your client, but you don't want to let just anyone easily hack your users. Unison seems well-designed for this use case because it seems like you could easily run untrusted Unison code without worrying about it escaping its sandbox due to the ability system. (Although this obviously requires that you typecheck the code before running it. And I don't know if Unison does that, but maybe it does.) There are other ways of implementing a sandbox, and Wasm is fairly well suited for this as well. But Unison seems like another interesting point in the design space.
Still on the subject of Game Dev, I also think that the ability system might actually be very cool for writing an ECS. For those who don't know, an ECS basically involves "entities" which have certain "components" on them, and then "systems" can run and access or modify the components on various entities. For performance, it can be very nice to be able to run different systems on different threads simultaneously. But to do this safely, you need to check that they're not going to try to access the same components. This limits current ECS implementations, because the user has to tediously tell the system scheduler what components each system is going to access. But Unison seems to have some kind of versatile system for inferring what abilities are needed by a given function. If it could do that, then accessing a component could be an ability. So a function implementing a system that accesses 10 components would have 10 abilities. If those 10 abilities could be inferred, it would be a huge game changer for how nice it is to use an ECS.
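A hypothetical sketch of what that could look like (these ability and function names are made up, not from any existing ECS library): one ability per component, so a system's component accesses show up in its inferred ability set.
structural ability Position where
  getPos : Nat -> (Float, Float)
  setPos : Nat -> (Float, Float) -> ()
structural ability Velocity where
  getVel : Nat -> (Float, Float)
-- the abilities this system needs are inferred from its body: {Position, Velocity}
step : Nat ->{Position, Velocity} ()
step entity =
  (x, y) = getPos entity
  (dx, dy) = getVel entity
  setPos entity (x + dx, y + dy)
A scheduler could then, in principle, run two systems on different threads whenever their inferred ability sets are disjoint.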
> Unison seems well-designed for this use case because it seems like you could easily run untrusted Unison code without worrying about it escaping its sandbox due to the ability system. (Although this obviously requires that you typecheck the code before running it. And I don't know if Unison does that, but maybe it does.)
Indeed we do, and we use this for our Unison Cloud project [1]. With Unison Cloud we are inviting users to ship code to our Cloud for us to execute, so we built primitives in the language for scanning a code blob and making sure it doesn't do IO [2]. In Unison Cloud, you cannot use the IO ability directly, so you can't, for example, read files off our filesystem. We instead give you access to very specific abilities to do IO that we can safely handle. So for example, there is a `Http` ability you can call in Cloud to make web requests, but we can make sure you aren't hitting anything you shouldn't.
I'm also excited about using this specifically for games. I've been thinking about how you could make a game in unison cloud and another user could contribute to the game by implementing an ability as a native service, which just becomes a native function call at runtime. I started working on an ECS [3] a while back, but I haven't had a chance to do much with it yet.
Oh man, I first looked at this project what feels like _forever_ ago and remember thinking--almost verbatim, "Wow I wish I could see this 5 years from now", and lo and behold I suppose it has been about that long!
Congratulations on the milestone. You are making one of the most radical PLs out there into something that is actually useable in an industry setting - that’s no mean feat.
I remember the day Rúnar told me he was going to work on this new language called Unison.
I have always thought it was an amazing project to set out on, and was paving the way for a new kind of paradigm. Super proud to see them release a 1.0 and I would love to say Unison is my go-to language in the near future!
I genuinely think systems like Unison are "the future of computing"...
But the question is when that future will be.
Part of the beauty of these sorts of systems is that the context of what your system actually does lives in one place: you aren't juggling infra, data, and multi-service layers.
Maybe that means it is a much better foundation for AI coding agents to work in? Or maybe AI slows it down, because we continue to throw more code at the problem instead of re-examining the intermediate layers of abstraction?
I really don't know, but what I do want to learn more about is how the Unison team is getting this out into the market. I do think that projects like this are best done outside of a VC-backed model... but you do eventually need something sustainable, so I'm curious how the team thinks about it. Transparently, I would love to work on a big bet like this... but it is hard to know if I could have it make financial sense.
With all that, a huge congrats to the team. This is a truly long-term effort and I love that.
I would love to see some benchmarks of Unison somewhere on their website. I find that knowing the performance characteristics helps a lot with understanding the use cases for a new language.
Even just a really rough "here are our requests per second compared to Django, express.js and asp.net".
Would be great to get a rough read on where it sits among other choices for web stuff.
More generally, I do hope this goes well for Unison; the ideas being explored are certainly fascinating.
I just hope it one day gets a runtime/target that's more applicable to non-web stuff. I find it much easier to justify using a weird language for a little CLI tool than for a large web project.
So, there are a whole bunch of interesting ideas here, but…
It’s a huge, all-or-nothing proposition. You have to adopt the language, the source control, and figure out the hosting of it, all at once. If you don’t like one thing in the whole stack, you’re stuck. So, I suspect all those interesting ideas will not go anywhere (at least directly; maybe they get incorporated elsewhere).
You can gradually adopt Unison, it's not all or nothing. It's true that when programming in Unison, you use Unison's tooling (which is seriously one of the best things about it), but there are lightweight ways of integrating with existing systems and services and that is definitely the intent.
We ourselves make use of this sort of thing since (for instance) Unison Cloud is implemented mostly in Unison but uses Haskell for a few things.
There can be enormous value in creating multiple pieces of tech all designed to work really well together. We've done that for Unison where it made sense while also keeping an eye on ease of integration with other tech.
I think the messaging around this is going to be pretty important in heading off gut-reaction "it's all or nothing, locked in to their world" first takes. It's probably attractive marketing for things to be aimed at "look how easy it is to use our entire ecosystem", but there's a risk to that too.
While I'm sure the creators would love to see their work become commercially successful and widespread, I don't think that's a very interesting criterion for judging what's essentially cool computer science research.
That's still the case in Unison! This particular post doesn't dive into the codebase format, but the core idea is the same: Unison hashes your code by its AST and stores it in a database.
Same, but I knew there was a v2, since I remember upgrading; I thought this was bringing it back. Oh well, fond memories of the web back in the early 2000s, when I was just getting started doing web design and coding for fun in my high school days.
Ok, I tried it out. So I run ucm.cmd and it tells me: "I created a new codebase for you at C:\Users\myuser", but there is nothing there except a .unison folder. I didn't look too closely at first, but maybe this all works by storing the code in the sqlite file inside that folder? Dot folders aren't usually relevant for project files. Even after the cli had me create a new project called happy-porcupine, which is a pretty unique name on my computer, I can't find any file or folder with that name anywhere on my machine. So then the getting started guide for Unison tells me to create a scratch.u file and put the hello world instructions in there. But where am I supposed to put that scratch.u file? In desperation I put it right next to ucm.cmd and just do "run helloWorld" on the cli, even though I don't see why this would work, but it does. Apparently I'm supposed to just dump my code directly into the downloaded compiler folder? So then what is the C:\Users\myuser project folder for, if I have to put all my .u files directly next to ucm.cmd anyway? And another weird thing: every time I make a change to the scratch.u file, the cli tells me "Run `update` to apply these changes to your codebase.", but even if I don't do that, rerunning "run helloWorld" still runs the new code.
I tried the Unison vscode extension btw, and despite ucm being on the path now, it says: "Unison: Language server failed to connect, is there a UCM running? (version M4a or later)". I also seem to be required to close my ucm cli in order to run vscode, because it says that the database is locked otherwise. And I guess there is no debugger yet? It just seems weird that I don't really know where my "project" even is, or what this project model conceptually is. It seems like I just put all my Unison code somewhere next to the compiler, it loads everything into the compiler by default, and I merely do db updates into some kind of more permanent sqlite storage, but then why do I even do that? Wouldn't I still just put the .u files into a git repository? There is also no mention of how this language runtime works or performs. I'm assuming fully memory managed, but perhaps slow, because I'm seeing an interpreter mentioned?
I think you also really need a web-based playground where you show off some of these benefits of Unison in small self-contained snippets, because just reading through some examples is pretty hard; it's a very different language and I can't tell what I'm looking at as a lifelong C/Java/etc. tier programmer. Sure, you explain the concepts, but I'm looking for something far more hands-on: "run this ability code, look: here is why this is cool, because you are prevented from making mistakes thanks to ..." or "this cannot possibly error because, thanks to abilities, ...", instead of so much conceptual explanation: https://www.unison-lang.org/docs/fundamentals/abilities/usin...
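For reference, a minimal scratch.u for that hello world looks roughly like this (a sketch assuming base's printLine; ucm picks up .u files from the directory it is working in, which is presumably why placing the file next to ucm.cmd happened to work):
-- scratch.u: a delayed computation that ucm's `run` command can execute
helloWorld : '{IO, Exception} ()
helloWorld = do printLine "Hello, world!"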
The tooling takes a little getting used to but it’s extremely powerful. Here are a few benefits you’ll see -
UCM keeps a perfect incremental compilation cache as part of its codebase format, so you’re generally never waiting for code to build. When you pull from remote, there’s nothing to build either.
Pure tests are automatically cached rather than being run over and over.
Switching branches is instantaneous and doesn’t require recompiling.
Renaming is instantaneous, doesn’t break downstream usages, and doesn’t generate a huge text diff.
All code (and code diffs) are hyperlinked when rendered, supporting click through to definition.
I don’t know if you saw these getting started guides, they might be helpful -
You can come by the Discord (https://unison-lang.org/discord) if you have any questions as you’re getting going! I hope you will give it a shot and sorry for the trouble getting started. There are a lot of new ideas in Unison and it’s been tricky to find the best way to get folks up to speed.
The Unison website and docs are all open source btw -
That depends. What are you wanting to accomplish more broadly with the integration?
I'll mention a couple things that might be relevant - you could have the git repo reference a branch or an immutable namespace hash on Unison Share. And as part of your git repo's CI, pull the Unison code and compile and/or deploy it or whatever you need to do.
There's support for webhooks on Unison Share as well, so you can do things like "open a PR to bump the dependency on the git repo whenever a new commit is pushed to branch XYZ on Unison Share".
Basically, with webhooks on GH and/or Unison Share and a bit of scripting you can set up whatever workflow you want.
Feel free to come by the Discord https://unison-lang.org/discord if you're wanting to try out Unison but not sure how best to integrate with an existing git repo.
The tool you use to interact with the code database keeps track of the changes in an append-only log - if you're familiar with git, the commands for tracking changes echo those of git (push, pull, merge, etc) and many of them integrate with git tooling.
If you ever need this kind of stuff, you'll be better off building your own distributed interface by using plain regular GHC Haskell and https://haskell-distributed.github.io/
For interesting usage - we built Unison Cloud (a distributed computing platform) with the Unison language and also more recently an "AWS Kinesis over object storage" product. It's nice for distributed systems, though you can also use it like any other general-purpose language, of course.
In terms of core language features, the effect system / algebraic effects implementation is something you may not have seen before. A lot of languages have special cases of this (like for async I/O, say, or generators), but algebraic effects are the uber-feature that can express all of these and more.
I think Alvaro's talk at the Unison conference was a pretty cool demonstration of what you can do with this style of algebraic effects (called "abilities" in Unison).
He implements an Erlang-style actor system, and then by using different handlers for the algebraic effects, he can "run" the actor system, but also optionally produce a live diagram of the actor communications.
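A rough sketch of that "same program, different handlers" idea (made-up names, loosely following the handler style from the abilities docs rather than his actual library):
structural ability Log where
  log : Text -> ()
-- handler 1: run the program and throw the log messages away
ignoreLogs : Request {Log} r -> r
ignoreLogs = cases
  { Log.log _ -> resume } -> handle resume () with ignoreLogs
  { r } -> r
-- handler 2: run the same program, collecting the messages alongside the result
collectLogs : [Text] -> Request {Log} r -> (r, [Text])
collectLogs acc = cases
  { Log.log msg -> resume } -> handle resume () with collectLogs (acc :+ msg)
  { r } -> (r, acc)
-- the same program : '{Log} r can then be run either way:
--   handle !program with ignoreLogs
--   handle !program with collectLogs []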
Oh, so this is not the bidirectional file synchronization tool huh? Name collisions are inevitable, but unison (the sync tool) has been around and in use since 1998, so this one feels especially egregious.
I have been following Unison for a veeery long time, ever since those blog posts on Paul's personal website. It has been more than 10 years already, so this is a great milestone. But I am just a bit disappointed. I love programming languages. I follow every programming language, even some you probably have never heard of. I have witnessed the rise of Rust, Go, Zig and others. At the age and level of polish that Unison has now, those languages that actually became something had far more traction. I personally believe the reason is how hard they are trying to push their impossible "business model" by making most of what goes on in the ecosystem locked in to their cloud. I know there is a BYOC offering but that isn't enough. The vibes are just off for me.
When it comes to the comparison with other languages like Zig, Rust and Go, I disagree. I think it's because Unison uses up its """new weird thing""" budget really quickly with Abilities and code in a database.
The Share project is open source, in contrast to GitHub, which is popular despite having its own forms of lock-in in practice.
I'm saying this not to negate the vibes you feel, but I'd rather people try it out and maybe see how their favorite language could benefit from different design decisions.
I think they have other issues; for example, they have no FFI. I think focusing on the business is actually a pretty decent idea. Trying to make money will force them to focus on things that are important to users and not get distracted bike-shedding on things that I would bike-shed if I were them (like typeclasses).
We do have an FFI now! https://github.com/unisonweb/unison/pull/6008
It's very recent and we'll be adding more types to it soon; this first PR was just focused on the core machinery. This is in the 1.0 release, btw.
Let us know if you give it a whirl.
Agreed. I want to build things that collaborate locally if the internet goes away. It seems like the hash-addressed function thing would be a pretty nice way to do that: no name resolution needed, just talk in terms of hashes to whoever's in range.
But all of the resources available for learning the language are funneling me towards using cloud hosted infra that won't be available if the internet goes away. For all I know there is a Unison-y way forward for my idea, but the path is obscured by a layer of marketing haze.
I for one am glad there is a commercial angle to the project. Done right, it means more hours could go into making things better, in a sustainable way. Also, having paying users provides a strong incentive to keep the technology grounded / practical.
Without the commercial stuff, Unison would be just another esolang to me. Now I'm probably going to play with it in upcoming side projects.
Yeah, the core ideas sound great, but if the only way code can be published and imported is via their cloud platform, that would be a hard pass for me.
Glancing at their docs, I see mentions of Unison Share, which is also hosted on unison-lang.org.
So I would appreciate this being clarified upfront in all their marketing and documentation.
Ah, I do see the BYOC option you mention. It still requires a unison.cloud account and an active subscription, though...
Unison code is published on https://share.unison-lang.org/ which is itself open source (it's a Haskell + postgres app), as is the language and its tooling. You can use Unison like any other open source general-purpose language, and many people do that. (We ourselves did this when building Unison Cloud - we wrote Unison code and deployed that within containers running in AWS.)
The cloud product is totally separate and optional.
Maybe we'll have a page or a reference somewhere to make the lines more clear.
Is a standalone Unison Code instance something that could be deployed in a docker container for personal use?
https://www.unison-lang.org/docs/usage-topics/docker/
I see, thanks. It's reassuring to know that I can use all language features without relying on any of your infrastructure.
Yeah, Unison Cloud is like a "Heroku for functions" if you don't wanna think about how deployments work. But you can just run Unison programs standalone or in a Docker container or whatever: https://www.unison-lang.org/docs/usage-topics/docker/
Hi. I don't know if you'll see this (I'm banned, so someone will have to upvote me if you have showdead off), but I would like to know which languages you personally like the most, or which ideas you think a perfect language should have?
Also, hi, I'm one of the language creators, feel free to ask any questions here!
I've been following Unison for a long time, congrats on the release!
Unison is among the first languages to ship algebraic effects (aka Abilities [1]) as a major feature. In early talks and blog posts, as I recall, you were still a bit unsure about how it would land. So how did it turn out? Are you happy with how effects interact with the rest of the language? Do you like the syntax? Can you share any interesting details about how it's implemented under the hood?
[1]: https://www.unison-lang.org/docs/fundamentals/abilities/
> congrats on the release
Thank you!
> Unison is among the first languages to ship algebraic effects (aka Abilities [1]) as a major feature. In early talks and blog posts, as I recall, you were still a bit unsure about how it would land. So how did it turn out?
No regrets! The Abilities system is really straightforward and flexible. If you were already of the FP persuasion, you find yourself saying that you don't miss monads. And we're glad it means that you don't have to understand why a monad is like a burrito to do FP.
> Do you like the syntax?
So this one is very loaded. Yes, we LOVE the syntax and it is very natural to us, but that is because most of us working on the language were either already fluent in Haskell, or had at least gotten to a basic understanding of it, to the point of "I need to be able to read these slides". However, we recognize that the current syntax of the language is NOT natural to the bulk of the audience we would like to reach.
But here's the super cool thing about our language! Since we don't store your code in a text/source representation, but as a typechecked AST, we have the freedom to change the surface syntax of the language very easily, which is something we've done several times in the past. We have this unique possibility that other languages don't have: we could offer more than one "surface syntax" for the language. We could have our current syntax, but also a JavaScript-like syntax, or a Python-like syntax.
And so we have had lots of serious discussions recently about changing the surface syntax to something that would be less "weird" to newcomers. The most obvious change would be moving function application from the Haskell-style "function arg1 arg2" to the more familiar C-like "function(arg1, arg2)". The difficulty for us will be figuring out how to map some of our more unique features, like "what abilities are available during function application", onto a more familiar syntax.
So changing the syntax is something that we are seriously considering, but don't yet have a short term plan for.
What is the data you actually store when caching a successful test run? Do you store the hash of the expression which is the test, and a value with the semantics of "passed"? Or do you have a way to hash all values (not expressions/AST!) that Unison can produce?
I am asking because if you also have a way to hash all values, this might allow carrying some of Unison's nice properties a little further. Say I implement a compiler in Unison: I end up with an expression that has a free variable, which carries the source code of the program I am compiling.
Now, I could take the hash of the expression, the hash of the term that represents the source code, i.e., what the variable in my compiler binds to, and the hash of the output. Would be very neat for reproducibility, similar to content-addressed derivations in Nix, and extensible to distributed reproducibility like Trustix.
I guess you'll be inclined to say that this is out of scope for your caching, because your caching would only cache results of expressions where all variables are bound (at the top level, evaluating down). And you would be right. But the point is to bridge to the outside of Unison, at runtime, and make this just easy to do with Unison.
Feel free to just point me at material to read, I am completely new to this language and it might be obvious to you...
Yes, we have a way of hashing literally all values in the language, including arbitrary data types, functions, continuations, etc. For instance, here, I'm hashing a lambda function:[1]
> crypto.hash Sha3_256 (x -> x + 1)
⧩
0xs704e9cc41e9aa0beb70432cff0038753d07ebb7f5b4de236a7a0a53eec3fdbb5
The test result cache is basically keyed by the hash of the expression, and then the test result itself (passed or failed, with text detail). We only do this caching for pure tests (which are deterministic and don't need to be re-run over and over), enforced by the type system. You can have regular I/O tests as well, and these are run every time. Projects typically have a mix of both kinds of tests.
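As a small illustration of that pure-test caching (a sketch using the test> watch syntax and base's check, with a made-up function):
square : Nat -> Nat
square x = x * x
-- a pure, deterministic test: its result is cached against the hash of the
-- test expression and only re-runs when square (or the test itself) changes
test> square.tests.ex1 = check (square 4 == 16)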
It is true that you can only hash things which are "closed" / have no free variables. You might instead hash a function which takes its free variables as parameters.
Overall I think Unison would be a nice implementation language for really anything that needs to make interesting use of hashing, since it's just there and always available.
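For instance, deriving a content-addressed storage key for any value is a one-liner (a sketch reusing the crypto.hash call shown above; the Doc type is made up):
type Doc = Doc Text Text
-- a stable key derived from the value's content
docKey : Doc -> Bytes
docKey d = crypto.hash Sha3_256 d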
[1]: https://share.unison-lang.org/@unison/base/code/releases/7.4...
[2]: https://share.unison-lang.org/@unison/base/code/releases/7.4...
(All Unison values can also be decompiled into an AST anyway.)
Congratulations and amazing job! I've loosely followed Unison for years; hitting 1.0 is a big deal.
Unison has many intriguing features, the foremost being hashed definitions. It's an incredible paradigm shift.
It does seem like a solution searching for a problem right now though.
Who is this language targeted at and who is using it in production besides Unison Cloud?
Thank you! (And thanks for following along for all the years!)
I'll speak a bit to the language audience, and others might weigh in as they see fit. The target is pretty broad: Unison is a general-purpose functional language for devs or teams who want to write and ship applications with a minimal amount of ceremony.
Part of the challenge of talking about that (the above might sound specious and bland) is that the difference isn't necessarily a one-shot answer: everything from diffing branches to deploying code is built atop a different foundation. For example, in the small: I upgraded our standard lib in some of my projects, and because it is a relatively stable library, it was a single command. In the large: right now we're working on a workflow orchestration engine; it uses our own Cloud (typed, provisioned in Unison code, tested locally, etc) and works by serializing, storing, and later resuming the continuation of a program. That kind of framework would be more onerous to build, deploy, and maintain in many other languages.
Really cool project. To be honest, I don't think I fully understand the concept of a content-addressed language. Initially I thought this was another BEAM language, but it seems to run on its own VM. How does Unison compare to BEAM languages when it comes to fault tolerance? What do you think is a use case where Unison shines and Erlang maybe falls short?
Erlang is great and was one inspiration for Unison. And a long time ago, I got a chance to show Joe Armstrong an early version of Unison. He liked the idea and was very encouraging. I remember that meant a lot to me at the time since he's a hero of mine. He had actually had the same idea of identifying individual functions via hashes and had pondered if a future version of Erlang could make use of that. We had a fun chat and he told me many old war stories from the early days of Erlang. I was really grateful for that. RIP, Joe.
Re: distributed computing, the main thing that the content-addressed code buys you is the ability to move computations around at runtime, deploying any missing dependencies on the fly. I can send you the expression `factorial 4` and what I'm actually sending is a bytecode tree with a hash of the factorial function. You then look this up in your local code cache - if you already have it, then you're good to go; if not, you ask me to send the code for that hash, and I send it and you cache it for next time.
The upshot of this is that you can have programs that just transparently deploy themselves as they execute across a cluster of machines, with no setup needed in advance. This is a really powerful building block for creating distributed systems.
In Erlang, you can send a message to a remote actor, but it's not really advisable to send a message that is or contains a function since you don't know if the recipient has that function's implementation. Of course, you can set up an Erlang cluster so everyone has the same implementation (analogous to setting up a Spark cluster to have the same version of all dependencies everywhere), but this involves setup in advance and it can get pretty fragile as you start thinking about how these dependencies will evolve over time.
A lot of Erlang's ideas around fault tolerance carry over to Unison as well, though they play out differently due to differences in the core language and libraries.
Very dumb question - sending code over the network to be executed elsewhere feels like a security risk to me?
I’m also curious how this looks with browser or mobile clients. Surely they’re not sending code to the server?
Mobile or browser clients talk to Unison backend services over HTTP, similar to any other language. Nothing fancy there.[1]
> sending code over the network to be executed elsewhere feels like a security risk to me?
I left out many details in my explanation and was just describing the core code syncing capability the language gives you. You can take a look at [2] to see what the core language primitives are - you can serialize values and code, ask their dependencies, deserialize them, and load them dynamically.
To turn that into a more industrial strength distributed computing platform, there are more pieces to it. For instance, you don't want to accept computations from anyone on the internet, only people who are authenticated. And you want sandboxing that lets you restrict the set of operations that dynamically loaded computations can use.
Within an app backend / deployed service, it is very useful to be able to fork computations onto other nodes and have that just work. But you likely won't directly expose this capability to the outside world; you instead expose services with a more limited API that can only be used in safe ways.
[1] Though we might support Unison compiling to the browser, and there have already been efforts in that direction (https://share.unison-lang.org/@dfreeman/warp). This would allow a Unison front end and back end to talk very seamlessly, without manual serialization or networking.
[2] https://share.unison-lang.org/@unison/base/code/releases/7.4...
Not a dumb question at all! Unison's type system uses Abilities (algebraic effects) for functional effect management. At the type level, that means we can prevent effects like "run arbitrary IO" on a distributed runtime. Things that run on shared infrastructure can be "sandboxed", with disallowed effects ruled out with type safety.
The browser or mobile apps cannot execute arbitrary code on the server. Those would typically call regular Unison services in a standard API.
Maybe it's encrypted? I'm sure if you do any programming, you send code over the network and execute it all the time!
I'm curious about how the persistence primitives (OrderedTable, Table, etc) are implemented under the hood. Is it calling out to some other database service? Is it implemented in Unison itself? Seems like a really interesting composable set of primitives, together with the Database abstraction, but having a bit of a hard time wrapping my head around it!
Hey there! Apologies for not getting to you sooner. `Table` is a storage primitive implemented on top of DynamoDB (it's a lower-level storage building block; as you've rightly identified, these entities were made to be composable, so other storage types can be built from them). Our `OrderedTable` docs might be of interest to you: they talk about its implementation a bit more (BTrees), and `OrderedTable` is one of our most ergonomic storage types: https://share.unison-lang.org/@unison/cloud/code/releases/23...
The Database abstraction helps scope and namespace (potentially many) tables. It is especially important in scoping transactions, since one of the things we wanted to support with our storage primitives is transactionality across multiple storage types.
Congrats on 1.0! I've been interested in Unison for a while now, since I saw it pop up years ago.
As an Elixir/Erlang programmer, the thing that caught my eye about it was how it seemed to actually be exploring some high-level ideas Joe Armstrong had talked about. I'm thinking of, I think, [0] and [1], around essentially content-addressable functions. Was he at all an influence on the language, or was it kind of an independent discovery of the same ideas?
[0] https://groups.google.com/g/erlang-programming/c/LKLesmrss2k...
[1] https://joearms.github.io/published/2015-03-12-The_web_of_na...
Thanks!
I am not 100% sure of the origin of the idea, but I do remember being influenced by git and Nix. Basically: "what if we took git but gave individual definitions a hash, rather than whole working trees?" Then later I learned that Joe Armstrong had thought about the same thing - I met Joe and talked with him about Unison a while back - https://news.ycombinator.com/item?id=46050943
Independent of the distributed systems stuff, I think it's a good idea. For instance, one can imagine build tools and a language-agnostic version of Unison Share that use this per-definition hashing idea to achieve perfect incremental compilation and hyperlinked code, instant find usages, search by type, etc. It feels like every language could benefit from this.
Thanks for your work! Who wrote the big idea post? https://www.unison-lang.org/docs/the-big-idea/
I know how fraught performance/micro-benchmarks are. But do you have any data on how performant it is? Should someone expect it to perform similarly to Haskell?
How do you deal with "branded" types, if you know what I mean?
Edit: I mean structurally identical types that are meant to be distinct. As I recall Modula 3 used a BRANDED keyword for this.
aha yeah! good question! We have two different types of type declarations, and each has its own keyword: "structural" and "unique". So you can define two different types as
structural type Optional a = Some a | None
structural type Maybe a = Just a | Nothing
and these two types would get the same hash, and the types and constructors could be used interchangeably. If you used the "unique" type instead:
unique type Optional a = Some a | None
unique type Maybe a = Just a | Nothing
Then these would be totally separate types with separate constructors, which I believe corresponds to the `BRANDED` keyword in Modula 3.
Originally, if you omitted both and just said:
type Optional a = Some a | None
The default was "structural". We switched that a couple of years ago, so now the default is "unique". Interestingly, we are uniquely able to make a change like this: since we don't store source code but the syntax tree, it doesn't matter which way you specified it before we made the change; we can just change the language and pretty-print your source in the new format the next time you need it.
How does the implementation of unique types work? It seems you need to add some salt to the hashes of unique type data, but where does the entropy come from?
Hello! Yes I am curious, how does one deal with cycles in the code hash graph? Mutually recursive functions for example?
There's an algorithm for it. The thing that actually gets assigned a hash IS a mutually recursive cycle of functions. Most cycles are size 1 in practice, but some are 2+ like in your question, and that's also fine.
If you could link to where this is implemented I'd be very grateful!
https://github.com/unisonweb/unison/blob/trunk/unison-hashin...
Does that algorithm detect arbitrary subgraphs with a cyclic component, or just regular cycles? (Not that it would matter in practice; I don't think many people write a convoluted mutually recursive mess, since it would be a maintenance nightmare. I'm just curious about the algorithmic side of things.)
I don’t totally understand the question (what’s a regular cycle?), but the only sort of cycle that matters is a strongly connected component (SCC) in the dependency graph, and these are what get hashed as a single unit. Each distinct element within the component gets a subindex identifier. It does the thing you would want :)
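To make that concrete, here's a made-up size-2 cycle: isEven and isOdd form one strongly connected component, so (per the above) they'd be hashed together as a unit, each getting its own subindex within that hash.
isEven : Int -> Boolean
isEven n = if n == +0 then true else isOdd (n - +1)
isOdd : Int -> Boolean
isOdd n = if n == +0 then false else isEven (n - +1)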
First, congratulations on the 1.0 milestone.
Then, a pretty basic question: I see that Unison has a quite radical design, but what problem does this design actually solve?
Thank you!
Unison does diverge a bit from the mainstream in terms of its design. There's a class of problems around deploying and serializing code that involve incidental complexity and repetitive work for many dev teams (IDLs at service boundaries and at storage boundaries, provisioning resources for cloud infrastructure) and a few "everyday programming" pain points that Unison does away with completely (non-semantic merge conflicts, dependency management resolution).
We wrote up some of that here at a high level: https://www.unison-lang.org/docs/what-problems-does-unison-s...
But also, feel free to ask more about the technical specifics if you'd like.
Hi there, and congrats on the launch. I've been following the project from the sidelines, as it has always seemed interesting.
Since everything in software engineering has tradeoffs, I have to ask: what are Unison's?
I've read about the potential benefits of its distributed approach, but surely there must be drawbacks that are worth considering. Does pulling these micro-dependencies or hashing every block of code introduce latency at runtime? Are there caching concerns w.r.t. staleness, invalidation, poisoning, etc.? I'm imagining different scenarios, and maybe these specific ones are not a concern, but I'd appreciate an honest answer about ones that are.
Great question.
There are indeed tradeoffs; as an example, one thing that trips folks up in the "save typed values without encoders" world is that a stored value of a type won't update when your codebase's version of the type updates. On its face, that should be a self-evident concern (solvable with versioning your records); but you'd be surprised how easy it is to `Table.write personV1` and later update the type in place without thinking about your already written records. I mention this because sometimes the lack of friction around working with one part of Unison introduces confusion where it juts against different mental models.
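A hypothetical sketch of the record-versioning workaround (Person and its constructors are made-up names here, not anything from our libraries):

    unique type Person = V1 Text | V2 Text Nat

    -- readers normalize whatever shape was stored into the latest one
    upgrade : Person -> Person
    upgrade = cases
      Person.V1 name -> Person.V2 name 0
      other -> other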
Other general tradeoffs, of course, include a team's tolerance for newness and experimentation. Our workflow has stabilized over the years, but it is still off the beaten path, and I know that can take time to adjust to.
I hope others who've used Unison will chime in with their tradeoffs.
These seem to be mostly related to difficulties around adapting to a new programming model. Which is understandable, but do you have examples of more concrete tradeoffs?
For example, I don't think many would argue that, for all the upsides a functional language with immutable state offers, performance can take a significant hit. And it can make certain classes of problems trickier, while simplifying others.
Surely with a model this unique, the payoffs come with costs.
Unison is one of the most exciting programming languages to me, and I'm a huge programming language nerd. A language with algebraic effects like Unison's really needs to hit the mainstream, as imo it's "the next big thing" after parametric polymorphism and algebraic data types. And Unison has a bunch of other cool ideas to go with it too.
This isn't really what they're going for, but I think it can potentially be a very interesting language for writing game mods in. One thing about game mods is that you want to run untrusted code that someone else wrote in your client, but you don't want to let just anyone easily hack your users. Unison seems well-designed for this use case because it seems like you could easily run untrusted Unison code without worrying about it escaping its sandbox due to the ability system. (Although this obviously requires that you typecheck the code before running it. And I don't know if Unison does that, but maybe it does.) There are other ways of implementing a sandbox, and Wasm is fairly well suited for this as well. But Unison seems like another interesting point in the design space.
Still on the subject of game dev, I also think that the ability system might actually be very cool for writing an ECS. For those who don't know, an ECS basically involves "entities" which have certain "components" on them, and then "systems" run and access or modify the components on various entities. For performance, it can be very nice to run different systems on different threads simultaneously. But to do this safely, you need to check that they're not going to try to access the same components. This limits current ECS implementations, because the user has to tediously tell the system scheduler which components each system is going to access. But Unison seems to have a versatile system for inferring what abilities are needed by a given function. If it could do that, then accessing a component could be an ability. So a function implementing a system that accesses 10 components would have 10 abilities. If those 10 abilities could be inferred, it would be a huge game changer for how nice it is to use an ECS.
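To make that concrete, here's a hypothetical sketch of what I'm imagining; all the names are invented and I'm only guessing at the shape:

    unique type EntityId = EntityId Nat

    -- one ability per component type
    unique ability Position where
      getPos : EntityId -> (Float, Float)
      setPos : EntityId -> (Float, Float) -> ()

    unique ability Velocity where
      getVel : EntityId -> (Float, Float)

    -- the {Position, Velocity} in the signature is exactly what a scheduler
    -- would need in order to decide which systems can safely run in parallel
    moveSystem : EntityId ->{Position, Velocity} ()
    moveSystem e =
      (x, y) = getPos e
      (dx, dy) = getVel e
      setPos e (x + dx, y + dy)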
> Unison seems well-designed for this use case because it seems like you could easily run untrusted Unison code without worrying about it escaping its sandbox due to the ability system. (Although this obviously requires that you typecheck the code before running it. And I don't know if Unison does that, but maybe it does.)
Indeed we do, and we use this for our Unison Cloud project [1]. With Unison Cloud we are inviting users to ship code to our Cloud for us to execute, so we built primitives in the language for scanning a code blob and making sure it doesn't do IO [2]. In Unison Cloud, you cannot use the IO ability directly, so you can't, for example, read files off our filesystem. We instead give you access to very specific abilities to do IO that we can safely handle. So, for example, there is an `Http` ability you can call in Cloud to make web requests, but we can make sure you aren't hitting anything you shouldn't.
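As a rough illustration of the shape of that (the `get` operation here is invented for the example, not the real Cloud API), code written against a narrow ability carries that restriction in its type:

    unique ability Http where
      get : Text -> Text

    -- this function can make web requests via the ability, but nothing else;
    -- there's no way for it to reach raw IO
    fetchStatus : Text ->{Http} Text
    fetchStatus host = Http.get ("https://" ++ host ++ "/status")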
I'm also excited about using this specifically for games. I've been thinking about how you could make a game in unison cloud and another user could contribute to the game by implementing an ability as a native service, which just becomes a native function call at runtime. I started working on an ECS [3] a while back, but I haven't had a chance to do much with it yet.
[1] https://unison.cloud [2] https://share.unison-lang.org/@unison/base/code/releases/7.4... [3] https://share.unison-lang.org/@stew/ecs
Oh man, I first looked at this project what feels like _forever_ ago and remember thinking--almost verbatim, "Wow I wish I could see this 5 years from now", and lo and behold I suppose it has been about that long!
Very happy to see it finally hit 1.0
Congratulations on the milestone. You are making one of the most radical PLs out there into something that is actually usable in an industry setting - that’s no mean feat.
I remember the day Rúnar told me he was going to work on this new language called Unison.
I have always thought it was an amazing project to set out on, and was paving the way for a new kind of paradigm. Super proud to see them release a 1.0 and I would love to say Unison is my go-to language in the near future!
I genuinely think systems like Unison are "the future of computing"...
But the question is when that future will be.
Part of the beauty of these sorts of systems is that the context of what your system actually does is in one system: you aren't dealing with infra, data, and multi-service layers.
Maybe that means it is a much better foundation for AI coding agents to work in? Or maybe AI slows it down, as we continue to try to throw more code at the problem instead of re-examining the intermediate layers of abstraction?
I really don't know, but what I do really want to learn more about is how the Unison team is getting this out into the market. I do think that projects like this are best done outside of a VC-backed model... but you do eventually need something sustainable, so I'm curious how the team thinks about it. Transparently, I would love to work on a big bet like this... but it is hard to know if I could have it make financial sense.
With all that, a huge congrats to the team. This is a truly long-term effort and I love that.
I would love to see some benchmarks of Unison somewhere on their website. I find that knowing the performance characteristics helps a lot with understanding the use cases for a new language.
Even just a really rough "here's our requests per second compared to Django, Express.js and ASP.NET" would be great, to get a rough read on where it sits among other choices for web stuff.
More generally, I do hope this goes well for Unison; the ideas being explored are certainly fascinating.
I just hope it one day gets a runtime/target that's more applicable to non-web stuff. I find it much easier to justify using a weird language for a little CLI tool than for a large web project.
I found this helpful review of Unison, albeit from 2023 [1].
[1] A look at Unison: a revolutionary programming language:
https://renato.athaydes.com/posts/unison-revolution.html
So, there are a whole bunch of interesting ideas here, but…
It’s a huge, all-or-nothing proposition. You have to adopt the language, the source control, and figure out the hosting of it, all at once. If you don’t like one thing in the whole stack, you’re stuck. So, I suspect all those interesting ideas will not go anywhere (at least directly; maybe they get incorporated elsewhere).
You can gradually adopt Unison, it's not all or nothing. It's true that when programming in Unison, you use Unison's tooling (which is seriously one of the best things about it), but there are lightweight ways of integrating with existing systems and services and that is definitely the intent.
We ourselves make use of this sort of thing since (for instance) Unison Cloud is implemented mostly in Unison but uses Haskell for a few things.
Related:
https://news.ycombinator.com/item?id=46051750 (you can use Unison like any other open source general-purpose language)
https://news.ycombinator.com/item?id=46051939 (you can integrate Unison's tooling with git)
There can be enormous value in creating multiple pieces of tech all designed to work really well together. We've done that for Unison where it made sense while also keeping an eye on ease of integration with other tech.
I think that the messaging around this is going to be pretty important in heading off gut-reaction "it's all or nothing, locked in to their world" first takes. It's probably attractive marketing to aim things at "look how easy it is to use our entire ecosystem", but there's a risk to that too.
Cool, those things are going to ease adoption.
While I'm sure the creators would love to see their work become commercially successful and widespread, I don't think that's a very interesting criterion for judging what's essentially cool computer science research.
Fair enough.
I remember using Unison a few years ago, and it had that cool idea that your codebase was saved as symbols in a database.
But I don't see any references to it anymore.
That's still the case in Unison! This particular post doesn't dive into the codebase format, but the core idea is the same: Unison hashes your code by its AST and stores it in a database.
I'm so old, I thought this was about Panic's Usenet client Unison.
Same, but I knew there was a v2, as I remember upgrading; I thought this was bringing it back. Oh well, fond memories of the web back in the early 2000s, when I was just getting started doing web design and coding for fun back in my high school days.
There's also the Unison sync tool but it's at 2.53.7.
I thought this was about the Unison sync tool when I clicked on it.
about time that gets a 1.0 release
This is so, so cool, but I feel like most of the features are lost once you need to use complex dependencies that aren't in Unison.
I've never used it, but watched from afar. So many interesting ideas. The website is also really good. Congrats Paul and team
Ok I tried it out. So I run ucm.cmd and it tells me: "I created a new codebase for you at C:\Users\myuser", but there is nothing there except a .unison folder. I didn't look too closely at first, but maybe this all works by storing the code in the sqlite file inside that folder? Dot folders aren't usually relevant for project files. Even after the CLI had me create a new project called happy-porcupine, which is a pretty unique name on my computer, I can't find any file or folder with that name anywhere on my machine.

So then the getting started guide for Unison tells me to create a scratch.u file and put the hello world instructions in there. But where am I supposed to put that scratch.u file? In desperation I put it right next to ucm.cmd and just did "run helloWorld" on the CLI, even though I don't see why this would work, and it does. Apparently I'm supposed to just dump my code directly into the downloaded compiler folder? So then what is the C:\Users\myuser project folder for, if I have to put all my .u files directly next to ucm.cmd anyway?

And another weird thing: every time I make a change to the scratch.u file, the CLI tells me "Run `update` to apply these changes to your codebase." But even if I don't do that, rerunning "run helloWorld" still runs the new code.
I tried the Unison VS Code extension btw, and despite ucm being on the path now, it says: "Unison: Language server failed to connect, is there a UCM running? (version M4a or later)". I also seem to be required to close my ucm CLI in order to run VS Code, because it says that the database is locked otherwise. And I guess there is no debugger yet?

It just seems weird that I don't really know where my "project" even is, or what this project model conceptually is. It seems like I just put all my Unison code somewhere next to the compiler, it loads everything by default into the compiler, and I merely do db updates into some kind of more permanent sqlite storage - but then why do I even do that? Wouldn't I still just put the .u files into a git repository? There is also no mention of how this language runtime works or performs. I'm assuming it's fully memory managed, but perhaps slow, because I'm seeing an interpreter mentioned?
I think you also really need a web-based playground where you show off some of these benefits of Unison in small self-contained snippets, because just reading through some examples is pretty hard; it's a very different language and I can't tell what I'm looking at as a lifelong C/Java/etc. tier programmer. Sure, you explain the concepts, but I'm looking for something far more hands-on: "run this ability code, look: here is why this is cool, because you are prevented from making mistakes thanks to ..." or "this cannot possibly error thanks to abilities ..." instead of so much conceptual explanation: https://www.unison-lang.org/docs/fundamentals/abilities/usin...
Thanks for this report.
The tooling takes a little getting used to but it’s extremely powerful. Here are a few benefits you’ll see -
UCM keeps a perfect incremental compilation cache as part of its codebase format, so you’re generally never waiting for code to build. When you pull from remote, there’s nothing to build either.
Pure tests are automatically cached rather than being run over and over.
Switching branches is instantaneous and doesn’t require recompiling.
Renaming is instantaneous, doesn’t break downstream usages, and doesn’t generate a huge text diff.
All code (and code diffs) are hyperlinked when rendered, supporting click through to definition.
I don’t know if you saw these getting started guides, they might be helpful -
https://www.unison-lang.org/docs/quickstart/
And then this tour -
https://www.unison-lang.org/docs/tour/
You can come by the Discord (https://unison-lang.org/discord) if you have any questions as you’re getting going! I hope you will give it a shot and sorry for the trouble getting started. There are a lot of new ideas in Unison and it’s been tricky to find the best way to get folks up to speed.
The Unison website and docs are all open source btw -
https://share.unison-lang.org/@unison/website
If it helps, here's a side-by-side comparison guide between Java and Unison. It covers the syntax primarily: https://www.unison-lang.org/compare-lang/unison-for-java-dev...
How does the database of code work with git? Should you share it or version it too?
No, Unison has its own native version control, and a code sharing platform at https://share.unison-lang.org
What’s a good way to include Unison code in a more traditional Git monorepo?
That depends. What are you wanting to accomplish more broadly with the integration?
I'll mention a couple things that might be relevant - you could have the git repo reference a branch or an immutable namespace hash on Unison Share. And as part of your git repo's CI, pull the Unison code and compile and/or deploy it or whatever you need to do.
There's support for webhooks on Unison Share as well, so you can do things like "open a PR to bump the dependency on the git repo whenever a new commit is pushed to branch XYZ on Unison Share".
Basically, with webhooks on GH and/or Unison Share and a bit of scripting you can set up whatever workflow you want.
Feel free to come by the Discord https://unison-lang.org/discord if you're wanting to try out Unison but not sure how best to integrate with an existing git repo.
> What are you wanting to accomplish more broadly with the integration?
For me that would be:
- not lose my stuff
- share with friends
- let others contribute
The tool you use to interact with the code database keeps track of the changes in an append-only log - if you're familiar with git, the commands for tracking changes echo those of git (push, pull, merge, etc) and many of them integrate with git tooling.
The projects in a codebase can absolutely be shared and versioned as well. Here's a log of release artifacts from a library as an example: https://share.unison-lang.org/@unison/base/releases.
If you ever need this kind of stuff, you'll be better off building your own distributed interface by using plain regular GHC Haskell and https://haskell-distributed.github.io/
Has anyone used this, any cool ideas?
https://www.unison-lang.org/docs/the-big-idea/ might be a good starting point!
For interesting usage - we built Unison Cloud (a distributed computing platform) with the Unison language and also more recently an "AWS Kinesis over object storage" product. It's nice for distributed systems, though you can also use it like any other general-purpose language, of course.
In terms of core language features, the effect system / algebraic effects implementation is something you may not have seen before. A lot of languages have special cases of this (like for async I/O, say, or generators), but algebraic effects are the uber-feature that can express all of these and more.
I think Alvaro's talk at the Unison conference was a pretty cool demonstration of what you can do with this style of algebraic effects (called "abilities" in Unison):
https://www.youtube.com/watch?v=u5nWbXyrC8Y
He implements an Erlang-style actor system, and then, by using different handlers for the algebraic effects, he can "run" the actor system, but also optionally make a live diagram of the actor communications.
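Here's a toy sketch of that "swap the handler" idea, with my own made-up names (nothing like the actual actor-system demo), just to show the shape of it:

    unique ability Log where
      log : Text -> ()

    program : '{Log} Nat
    program = do
      Log.log "starting"
      Log.log "done"
      42

    -- handler 1: run the program, discarding log messages
    ignoreLogs : Request {Log} a -> a
    ignoreLogs = cases
      { result } -> result
      { Log.log _ -> resume } -> handle resume () with ignoreLogs

    -- handler 2: run the same program, collecting log messages with the result
    collectLogs : Request {Log} a -> (a, [Text])
    collectLogs = cases
      { result } -> (result, [])
      { Log.log msg -> resume } ->
        (a, msgs) = handle resume () with collectLogs
        (a, msg +: msgs)

    > handle program() with ignoreLogs
    > handle program() with collectLogs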
I tried it once, but a core feature I needed (running code asynchronously on a timer) didn't seem to be available in the free version.
Very cool ideas but they've definitely blown their weirdness budget.
At first I thought it was about this.
https://github.com/bcpierce00/unison
Oh, so this is not the bidirectional file synchronization tool huh? Name collisions are inevitable, but unison (the sync tool) has been around and in use since 1998, so this one feels especially egregious.
It's literally a very common English word. There's probably a million things with this name.