What's important to note is that this is just a matter of convention. An historical accident. It is by no means a law of nature that math need be formalized with ZFC or any other set theory derivative, and there are usually other options.
As a matter of fact, ZFC fits CS quite poorly.
In ZFC, everything is a set. The number 2 is a set. A function is a set of ordered pairs. An ordered pair is a set of sets.
In ZFC: It is a valid mathematical question to ask, "Is the number 3 an element of the number 5?" (In the standard definition of ordinals, the answer is yes).
In CS: This is a "type error." A programmer necessarily thinks of an integer as distinct from a string or a list. Asking if an integer is "inside" another integer is nonsense in the context of writing software.
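For concreteness, here is the standard von Neumann encoding behind that question (textbook definitions, nothing specific to this article): each natural number is the set of all smaller naturals.

  0 = {}            (the empty set)
  1 = {0}
  2 = {0, 1}
  3 = {0, 1, 2}
  5 = {0, 1, 2, 3, 4}

So 3 ∈ 5 holds literally, because 3 is one of the sets used to build 5.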
For a computer scientist, Type Theory is a much more natural foundation than Set Theory. Type Theory enforces boundaries between different kinds of objects, just like a compiler does.
But, in any case, that ZFC is "typical" is an accident of history, and whether or not it's appropriate at all is debatable.
In CS, types are usually a higher level of abstraction built on top of more fundamental layers. If you choose to break the abstraction, you can definitely use an integer as a string, a list, or a function. The outcome is unlikely to be useful, unless your construct was designed with such hacks in mind.
When I did a PhD in theoretical computer science, type theory felt like one niche topic among many. It was certainly of interest to some subfield, but most people didn't find it particularly relevant to the kind of TCS they were doing.
Sure, but it fits the rest of mathematics "poorly" for exactly the same reasons. No working mathematician is thinking about 3 as an element of 5.
The reason ZFC is used isn't because it's a particularly pedagogical way of describing any branch of math (whether CS or otherwise), but because the axioms are elegantly minimal and parsimonious.
Most modern mathematicians are not set theorists. There are certain specialists in metamathematics and the foundations of mathematics who hold that set theory is the proper foundation -- thus that most mathematical structures are rooted in set theory, and can be expressed as extensions of set theory -- but this is by no means a unanimous view! It's quite new, and quite heavily contested.
My impression (as a dilettante programmer without relevant credentials) is that there isn't really any question about whether mathematical structures can be rooted in set theory, or can be expressed as extensions of set theory. Disputes about foundations of mathematics are more about how easy or elegant it is to do so. (And in fact my impression is they're mostly about subjective, aesthetic considerations of elegance rather than practical considerations of how hard it is to do something in practice, even though the discussion tends to be nominally about the practical side. Quite similar to disputes about programming languages in that respect.)
yes of course, I just mean that among the candidate foundations for mathematics, set theory is empirically the strongest one, but there are other options (possibly better)
One can, in theory, construct number sets (fields) with holes in them - that is, truly discrete numbers. Such number sets are at most countably infinite, but need not be infinite at all.
One useful such set might have the Planck length as its smallest number, beyond which there is a hole. The problem with using such number sets is that the ordinary rules of arithmetic break down, i.e., division has to be defined modulo the Planck constant.
…that existed in the world of descriptive set theory — about the number of colors required to color certain infinite graphs in a measurable way.
To Bernshteyn, it felt like more than a coincidence. It wasn’t just that computer scientists are like librarians too, shelving problems based on how efficiently their algorithms work. It wasn’t just that these problems could also be written in terms of graphs and colorings.
Perhaps, he thought, the two bookshelves had more in common than that. Perhaps the connection between these two fields went much, much deeper.
Perhaps all the books, and their shelves, were identical, just written in different languages — and in need of a translator.
When you talk about infinity, you are no longer talking about numbers. Mix it with numbers, you get all sorts of perplexing theories and paradoxes.
The reason is simple - numbers are cuts in the continuum while infinity isn't. It should not even be a symbolic notion of very large number. This is not to say infinity doesn't exist. It doesn't exist as a number and doesn't mix with numbers.
The limits could have been defined saying "as x increases without bound" instead of "as x approaches infinity". There is no target called infinity to approach.
Cantor's stuff can easily be trashed. The very notion of "larger than" belongs to finite numbers. This comparative notion doesn't apply to concepts that cannot be quantified using numbers. Hence one can't say some kind of infinity is larger than the other kinds.
Similarly, his diagonal argument about 1-to-1 mapping cannot be extended to infinities, as there is no 1-to-1 mapping that makes sense for things which are not numbers or uniquely identifiable elements. The mapping is broken. No surprise you get weird results and multiple infinities, or whatever else came to his mind when he was going through stressful personal situations.
I don't think you can just call Cantor's diagonal argument trash without providing a very strong, self-consistent replacement framework that explains the result without invoking infinity.
This sounds like ranting from someone who doesn't deeply understand the implications of set theory or infinite sets. Cardinality is a real thing, with real consequences, such as Gödel's incompleteness theorems. What "weird results" are tripping you up?
> when he was going through stressful personal situations
Ad hominem. Let's stick to arguing the subject material and not personally attacking Cantor.
The Löwenheim-Skolem theorem implies that a countable model of set theory has to exist. "Cardinality" as implied by Cantor's diagonal argument (which happens to be a straightforward special case of Lawvere's fixed point theorem) is thus not an absolute property: it's relative to a particular model. It's internally true that there is no bijection as defined within the model between the naturals and the reals, as shown by Cantor's argument; but externally there are models where all sets can nonetheless be seen as countable.
You're referring to Skolem's paradox. It just shows that first-order logic is incomplete.
Ernst Zermelo resolved this by stating that his axioms should be interpreted within second-order logic, and as such it doesn't contradict Cantor's theorem since the Löwenheim–Skolem theorem only applies in first-order logic.
The standard semantics for second-order logic are not very practical and arguably not even all that meaningful or logical (as argued e.g. by Willard Quine); you can use Henkin semantics (i.e. essentially a many-sorted first-order theory) to recover the model-theoretic properties of first-order logic, including Löwenheim-Skolem.
I just gave the reason - the notions of comparison and 1-to-1 mapping carry an underlying assumption that the subjects are quantifiable and identifiable. This assumption doesn't apply to something that is inherently neither quantifiable nor a cut in the continuum the way a number is. What argument are you offering against this?
I'm not the person you replied to, and I doubt I'm going to convince you out of your very obviously strong opinions, but, to make it clear, you can't even define a continuum without a finite set to, as you non-standardly put it, cut it. It turns out, when you define any such system that behaves like natural numbers, objects like the rationals and the continuum pop out; explicitly because of situations like the one Cantor describes (thank you, Yoneda).
The point of transfinite cardinalities is not that they necessarily physically exist on their own as objects; rather, they are a convenient shorthand for a pattern that emerges when you can formally say "and so on" (infinite limits). When you do so, it turns out, there's a consistent way to treat some of these "and so ons" that behave consistently under comparison, and that's the transfinite cardinalities such as aleph_0 and whatnot.
Further, all math is idealist bullshit; but it's useful idealist bullshit because, when you can map representations of physical systems into it in a way that the objects act like the mathematical objects that represent them, then you can achieve useful predictive results in the real world. This holds true for results that require a concept of infinities in some way to fully operationalize: they still make useful predictions when the axiomatic conditions are met.
For the record, I'm not fully against what you're saying. I personally hate the idea of the axiom of choice being commonly accepted; I think it was a poorly founded axiom that leads to more paradoxes than it helps things. I also wish the axiom of the excluded middle were tossed out more often, for similar reasons. However, when the systems you're analyzing do behave well under either axiom, the math works out to be so much easier with both of them, so in they stay (until you hit things like Banach-Tarski and you just kinda go "neat, this is completely unphysical abstract delusioneering"; but you kinda learn to treat results like that the way you do when you renormalize poles in analytic functions: carefully, and with a healthy dose of "don't accidentally misuse this theorem to make unrealistic predictions when the conditions aren't met").
About the 1-to-1 mapping of elements across infinite sets: what guarantees us that this mapping operation can be extended to infinite sets?
I can say it cannot be extended or applied, because the operation cannot be "completed". This is not because it takes infinite time. It is because we can't define completion of the operation, even as an imagined snapshot.
The difference between infinities is I can write every possible fraction on a piece of A4 paper, if the font gets smaller and smaller. I can say where to zoom in for any fraction.
You can't enumerate the real numbers, but you can grab them all in one go - just draw a line!
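To make the "I can say where to zoom in for any fraction" point concrete, here is a minimal Haskell sketch (my own illustration, using the well-known Calkin-Wilf enumeration): it lists every positive rational exactly once, so each fraction shows up at some finite index.

  -- Minimal sketch: the Calkin-Wilf enumeration of the positive rationals.
  -- Every positive fraction appears exactly once, at some finite position.
  rationals :: [Rational]
  rationals = iterate next 1
    where
      next q = 1 / (2 * fromInteger (floor q) - q + 1)

  -- e.g. take 7 rationals == [1 % 1, 1 % 2, 2 % 1, 1 % 3, 3 % 2, 2 % 3, 3 % 1]

Nothing comparable exists for the reals, which is the asymmetry being pointed at here.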
The more I learn about this stuff, the more I come to understand how the quantitative difference between cardinalities is a red herring (e.g. CH independent from ZFC). It's the qualitative difference between these two sets that matter. The real numbers are richer, denser, smoother, etc. than the natural numbers, and those are the qualities we care about.
That doesn't make one set "larger" than the other. You need to define "larger". And you need to make that definition as weird as needed to justify that comparison.
The fact that I can't even fit the real numbers between 0 and 1 on a single page, but I can fit every possible fraction in existence, doesn't mean anything?
I don't think this definition is that weird, for example by 'larger' I might say I can easily 'fit' all the rational numbers in the real numbers, but cannot fit the real numbers in the rational numbers.
It doesn't mean anything because, with arbitrary zooming for precision, every real number is a fraction. You can't ask for infinite zooming. There is no such thing.
So, let's inspect pi. It's a fraction, precision of which depends on how much you zoom in on it. You can take it as a constant just for having a name for it.
If infinite zooming/precision is ruled out, all real numbers can be written as fractions. Problem is with the assumption that some numbers such as pi or sqrt(2) can have absolute and full precision.
.. because that is where you are allowed to challenge some biblical stories of the math without the fear of expulsion from the elite clubs.
Most of math history is stellar, studded with great works of geniuses, but some results were sanctified and prohibited for questioning due to various forces that were active during the times.
Applications of regular logic such as comparison, mapping, listing, diagonals, uniqueness - these are all rules that were bred in the realm of finiteness and the physical world. You can't use these tools to prove theories about things that are not finite.
I'm aware that I'm ignorant of many things, just like anyone else on this Earth. Some are less ignorant and some are more.
Could you be kind enough to explain the phrase "set of all integers" when the word "all" cannot apply to an unbounded quantity? I think the word "all" is used loosely to extend its meaning as used for finite sets to a non-existent, unbounded set. For example, things such as all Americans, or all particles in the universe, have a meaning because they have a boundary criterion. What is "all integers"?
I think one needs to first define the realm of applicability, or domain, for concepts such as comparison, 1-to-1 mapping, listing, diagonals, uniqueness, "all", etc.
First sentence:
> All of modern mathematics is built on the foundation of set theory
That's ignoring most of formalized mathematics, which is progressing rapidly and definitely modern. Lean and Rocq for example are founded on type theory, not set theory.
Not a mathematician, but AFAIK ZFC is a valid foundation. Dependent types help a lot with bookkeeping, but cannot prove more theorems.
Lawrence Paulson is a great person to clarify those topics (Isabelle/HOL is not based on types yet it can prove most maths).
> but cannot prove more theorems
usually you're more interested in better ergonomics: can you do X with less work?
it's like picking a programming language - depending on what you're attempting, some will be more helpful.
and ZFC is a lot more low level than what day-to-day mathematics usually bothers with. So most mathematicians actually work in an informally understood higher-order wrapper, hoping that what they write sufficiently explains the actual "machine code"
the idea then behind adopting alternative foundations is that these come with "batteries included" and map more directly to the domain language.
>hoping that what they write sufficiently explains the actual "machine code"
actually, having faith that what they write could compile, run, and pass tests, but never doing so.
Isabelle/HOL is still types. The underlying type theory of Isabelle/HOL is not a theory of dependent types, but a theory of simple types. Isabelle/ZF would be a better example as it encodes Zermelo–Fraenkel set theory.
Right!
It’s not really. It leads to paradoxes. There are alternative formulations which avoid these.
And formalised mathematics is ignored in mathematical practice by most mathematicians, and even the ones that know of it often don't use it as a foundational language due to its relative inconvenience.
I think what is more accurate to say is that the broad crisis in foundations of mathematics was initially resolved by formalizing set theory.
This might be a dumb question but are there any current or foreseeable practical applications of this kind of result (like in the realm of distributed computing) or is this just pure mathematics for its own sake?
Not a dumb question. The links to mesh networking etc seem interesting. It sounds like the insights from descriptive set theory could yield new hardness/impossibility results in computational complexity, distributed algorithms etc.
are there any current or foreseeable practical applications of those results?
the math of infinity isn't very relevant to the finite programs that humans use. Even today's astronomically large computing systems have size approximately equal to 3 compared to infinity.
Confused why the article author believes this is a surprise. The foundations of mathematics and computer science are basically the same subject (imho) and dualities between representations in both fields have been known for decades.
If something people dedicated multiple years of their lives studying and proving seems trivial to you, you're either a genius or you didn't understand the problem.
There is a world of difference between the obvious and the trivial. The post you are replying to only implied it was obvious, the retort is unnecessary.
It's not a surprise that math and CS are related. It's a surprise that this particular subject in math and CS are so intimately related.
Why? Both are basically about how to handle information with a priori indefinitely many cases, types, instances, species
Is it?
The notions of algorithm and computer science were invented to discuss the behavior of infinite sequences: examining whether there are descriptions of real numbers. This was extended by connecting complicated infinite structures like the hyperreals with decision theory problems.
That descriptions of other infinite sets also correspond to some kind of algorithm seems like a natural progression.
That it happens to be network theory algorithms rather than (eg) decision theory algorithms is worth noting — but hardly surprising. Particularly because the sets examined arose from graph problems, ie, a network.
I've done a bunch of theoretical PL work and I find this to be a very surprising result... historically the assumption has been that you need deeply "non-computational" classical axioms to work with the sorts of infinites described in the article. There was no fundamental reason that you could give a nice computational description of measure theory just because certain kinds of much better-behaved infinities map naturally to programs. In fact IIRC measure theory was one of the go to examples for a while of something that really needed classical set theory (specifically, the axiom of choice) and couldn't be handled nicely otherwise.
Much of your comment seems to be about your culture — eg, assuming things about axioms and weighting different heuristics. That we prioritize different heuristics and assumptions explains why I don’t find it surprising, but you do.
From my vantage, there’s two strains that make such discoveries unsurprising:
- Curry-Howard generally seems to map “nice” to “nice”, at least in the cases I’ve dealt with;
- modern mathematics is all about finding such congruences between domains (eg, category theory) and we seem to find ways to embed theories all over; to the point where my personal hunch is that we’re vastly underestimating the “elephant problem”, in which having started exploring the elephant in different places, we struggle to see we’re exploring the same object.
Neither of those is a technical argument, but I hope it helps understand why I’d be coming to the question from a different perspective and hence different level of surprise.
> He wanted to show that every efficient local algorithm can be turned into a Lebesgue-measurable way of coloring an infinite graph
To me this is quite surprising. Distributed systems were not designed to solve measure theory problems.
Perhaps it’s that a global solution in the language of set theory was hard to find, but distributed systems — which need to provide guarantees only from local node behavior, without access to global — offered an alternate perspective. They weren’t designed to do so but they ended up being useful.
> They weren’t designed to do so but they ended up being useful.
an object doing something that it wasn’t designed for seems to me to be the definition of “surprising”
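For readers less familiar with the distributed-computing side, here is a toy sketch (my own illustration, not the construction from the article or the paper) of the simplest possible "local" rule: each node picks a color using only information it already holds, namely its own unique ID. That gives a proper coloring with zero communication; the interesting results are about using far fewer colors within a bounded number of message rounds.

  import qualified Data.Map as Map

  -- Toy zero-round "local" coloring (illustration only): every node colors
  -- itself with its own ID. No messages are exchanged, and adjacent nodes
  -- can never clash because IDs are unique. The catch: it wastes n colors.
  type NodeId = Int
  type Graph  = Map.Map NodeId [NodeId]   -- node -> list of neighbours

  localColoring :: Graph -> Map.Map NodeId Int
  localColoring = Map.mapWithKey (\v _neighbours -> v)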
They’re literally exploring the same object: properties of networks.
That you can express the constraints of network colorings (ie, the measure theory problem) as network algorithms strikes me as a “well duh” claim — at least if you take that Curry-Howard stuff seriously.
Curry-Howard is not some magic powder you can just sprinkle around to justify claims. The isomorphism provides a specific lens to move between mathematics and computation. It says roughly that types and logical propositions can be seen equivalently.
Nothing in the result in the article talks about types, and even if it could be, it’s not clear that the CH isomorphism would be a good lens to do so.
Curry-Howard literally says that a proof your object has a property is equivalent to an algorithm which constructs a corresponding type.
I’m not “sprinkling magic powder”, but using the very core of the correspondence:
A proof that your network has some property is an algorithm to construct an instance of appropriate type from your network.
In this case, we're using algorithms originally designed for protocols in CS to construct a witness of a property about a graph coloring. The article details exactly his realization that this was true — during a lecture, seeing that the types of things constructed by these algorithms correspond to the types of objects he works with.
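For anyone who hasn't seen the correspondence spelled out, a standard toy example (mine, not anything from the article): read the type below as the proposition "A and B implies A"; the function inhabiting it is simultaneously the program and the proof.

  -- Curry-Howard in miniature: the type (a, b) -> a, read as a proposition,
  -- says "from A and B you can conclude A"; this function is its proof.
  andElimLeft :: (a, b) -> a
  andElimLeft (x, _) = x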
Do you have any actual evidence that this result can be viewed as an instance of CH?
The networks on the measure theory side and on the algorithmic side are not the same. They are not even the same cardinality. One has uncountably many nodes, the other has countably many nodes.
The correspondence outlined is also extremely subtle. Measurable colorings are related to speed of consensus.
You make it sound like this is a result of the type: "To prove that a coloring exists, I prove that an algorithm that colors the network exists." Which it is not, as far as I understand.
It seems to me you are mischaracterizing CH here as well:
> A proof that your network has some property is an algorithm to construct an instance of appropriate type from your object.
A proof that a certain network has some property is an algorithm that constructs an instance of an appropriate type that expresses this fact from the axioms you're working from.
> You make it sound like this is a result of the type: "To prove that a coloring exists, I prove that an algorithm that colors the network exists." Which it is not, as far as I understand.
This is the crux of the proof, as I understand it: to classify network coloring measurability, I classify algorithms that color the network.
Which I can do because there’s a correspondence between network colorings (in graph theory) and algorithms that color networks (in CS). Which I’m arguing is an instance of CH: they’re equivalent things, so classifying either is classifying both.
> They are not even the same cardinality. One has uncountably many nodes, the other has countably many nodes. […] Measurable colorings are related to speed of consensus.
Yes — this is why the work is technically impressive, because proving the intuition from above works when extending to the infinite case is non-trivial. But that doesn’t change that fundamentally we’re porting an algorithm. I’m impressed by the approach to dealing with labels in the uncountable context which allows the technique to work for these objects — but that doesn’t mean I’m surprised such a thing could work. Or that work on network colorings (in CS) turned out to have techniques useful for network colorings (in math).
> It seems to me you are mischaracterizing CH
You then go on to make some quibble about my phrasing that, contrary to your claim about mischaracterizing, doesn’t conflict with what you wrote.
Edit: removing last paragraph; didn’t realize new author.
Reactions to your statement seem to hinge on whether or not the person replying has a CS degree from a math focused program!
It makes perfect sense to me.
It's cons'es all the way down.
That’s nothing wait til you see my math.
Finally - we can calculate infinity.
Been a long way towards it. \o/
If you’re in the Bay Area next month then The Dillinger Escape Plan are bringing the Calculating Infinity circus to town(s):
https://www.theregencyballroom.com/events/detail/?event_id=1...
This is enormously off topic but if one person sees this and ends up not missing the show then it was worth mentioning. I think there’s a reasonable crossover between math, discrete math, hacking, and mathcore :)
If you wanted to dive in, here is a jazz version of one of their singles. The real version is 2 guitars, 1 bass, 1 drums, and someone screaming into the microphone -- it's real rad.
https://www.youtube.com/watch?v=D4-erceTpc8
Took me an embarrassingly long time to realize you're talking about a music event not a math lecture
Idk, mathrock gets pretty close on an applied level lol
>Finally - we can calculate infinity.
And Beyond!
∞/1 + 1/∞
One infinitesimal step for the numerical, one giant step for mathkind.
Chuck Norris counted from one to infinity. Twice.
Come on, now you know better than that. It was infinity that counted to Chuck Norris
Fairly trivial, in Haskell:
let x = x in x
Completely encapsulates a countable infinity.
This has indeed taken forever /s
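For what it's worth, a lazily built infinite list is arguably a closer sketch of a countable infinity than the diverging "let x = x in x" above; a minimal version:

  -- Sketch: the naturals as an actual lazy infinite structure.
  -- Any finite prefix is computable, even though the whole list never is.
  nats :: [Integer]
  nats = 0 : map (+ 1) nats

  -- e.g. take 5 nats == [0, 1, 2, 3, 4]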
> computer science with the finite
um... no... computer science is very concerned with the infinite. I'm surprised Quanta published this. I always think highly of their reporting.
I think this is pretty common for Quanta, and it might be sticking out more because it's a field we're familiar with.
I'm really torn about this, because I think they're providing a valuable service. But their general formula doesn't diverge a whole lot from run-of-the-mill pop-science books: a vague, clickbaity title and then an article that focuses on personalities and implications of discoveries while glancing over a lot of important details (and not teaching much).
I agree with your assessment of Quanta. I used to enjoy reading their articles, but the clickbait title formula has put me off. Also their status as a mouthpiece of the Simons Foundation grantees.
I feel like I’m being a bit curmudgeonly, but I don’t read them much any more.
Computer science isn't concerned with the uncountable infinite though, which is what measure theory is mostly concerned with as countable sets all have measure zero.
What? You can find tons of textbook examples of dealing with uncountable sets, e.g. coming from non-deterministic finite automata. Those frequently deal with power sets (like when you have an infinite alphabet), which are quintessentially uncountable.
Initially I too thought - but we try to approximate infinity in CS all the time.
But I have come to think: well actually, "approximate" is doing some heavy lifting there, AND I have never used infinity for anything except to say "look, I don't know how high this should go, so go as far as you can go, and double that". Which is really saying: you are bound by finite boundaries, you'll have to work within them, and the uncountable thing I was thinking about is really finite.
Edit: Think of it like this
We know that it's most likely that the universe is infinite, but we can only determine how big it is by how far we can see, which is bounded by the speed of light, and the fact that we can only see matter emitting light (I'm being careful here, if the big bang theory is right, and I am understanding it correctly, there is a finite amount of matter in the universe, but the universe itself is infinite)
Asymptotics is a good example of CS using infinity practically.
Also, a small aside: there’s a finite amount of matter in the visible universe. We could have infinite matter in an infinite universe.
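Concretely, the textbook definition is already a statement about all sufficiently large inputs (standard material, nothing specific to this thread):

  f(n) = O(g(n))  iff  there exist c > 0 and N such that f(n) <= c * g(n) for every n >= N

So even though every actual run of a program is finite, the guarantee itself quantifies over an infinite set of input sizes.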
Another glaring example:
> Set theorists use the language of logic, computer scientists the language of algorithms.
Computer science doesn’t use logic? Hello, Booleans.
So lazy, especially when you can ask an AI to tell you if you’re saying something stupid.
I studied math for a long time. I’m convinced math would be better without infinity. It doesn’t exist. I also think we don’t need numbers that are too big. But we can leave those
Infinity is simply the name for a process that does not end. Some of these processes increase faster than others (hence larger infinities). Personally, I also like the Riemann sphere model of infinity.
I haven't studied math beyond what was needed for my engineering courses.
However, I also am starting to believe that infinity doesn't exist.
Or more specifically, I want to argue that infinity is not a number, it is a process. When you say {1, 2, 3, ... } the "..." represents a process of extending the set without a halting condition.
There is no infinity at the end of a number line. There is a process that says how to extend that number line ever further.
There is no infinity'th prime number. There is a process by which you can show that a bigger prime number must always exist.
> There is no infinity at the end of a number line. There is a process that says how to extend that number line ever further.
Sure, but ordinal numbers exist and are useful. It's impossible to prove Goodstein's theorem without them.
https://en.wikipedia.org/wiki/Ordinal_number
https://en.wikipedia.org/wiki/Goodstein%27s_theorem
The statement and proof of the theorem are quite accessible and eye-opening. I think the number line with ordinals is way cooler than the one without them.
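As a tiny worked example (standard, and spelled out on the Wikipedia page above), the Goodstein sequence starting at 3: at each step, write the number in hereditary base-b notation, bump the base from b to b+1, then subtract 1.

  3 = 2^1 + 1   (base 2)  ->  bump base: 3^1 + 1 = 4,  minus 1  ->  3
  3 = 3^1       (base 3)  ->  bump base: 4^1 = 4,      minus 1  ->  3
  3             (base 4)  ->  no base symbols occur,   minus 1  ->  2
  then 2 -> 1 -> 0.

Replacing each base by the ordinal ω gives ω + 1 > ω > 3 > 2 > 1 > 0, a strictly decreasing sequence of ordinals, and such a sequence cannot go on forever; that descent argument is why every Goodstein sequence eventually hits 0, even though the sequences for larger starting values grow astronomically first.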
Whether you think infinity exists or not is up to you, but transfinite mathematics is very useful: it's used to prove theorems like Goodstein's theorem in a surprisingly elegant way, and the sequence involved doesn't seem to have anything to do with infinity at first glance.
Actually, all numbers are functions in Peano arithmetic. :)
For example, S(0) is 1, S(S(0)) is 2, S(S(S(0))) is 3, and so on.
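In code, that successor picture is just an inductive data type; a minimal Haskell sketch (mine, not from the thread):

  -- Peano-style numerals: a number is either zero or the successor of a number.
  data Nat = Z | S Nat

  toInt :: Nat -> Integer
  toInt Z     = 0
  toInt (S n) = 1 + toInt n

  -- toInt (S (S (S Z))) == 3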
There is no end of a number line. There are lines, and line segments. Only line segments are finite.
> There is no infinity'th prime number. There is a process by which you can show that a bigger prime number must always exist.
You misunderstand the concept of infinity. Cantor's diagonal argument proves that such a bigger number must always exist. "Infinity'th" is not a place in a number line; Infinity is a set that may be countable or uncountable, depending on what kind of infinity you're working with.
There are infinities with higher cardinality than others. Infinity relates to set theory, and if you try to simply imagine it as a "position" in a line of real numbers, you'll understandably have an inconsistent mental model.
I highly recommend checking out Cantor's diagonal argument. Mathematicians didn't invent infinity as a curiosity; it solves real problems and implies real constraints. https://en.wikipedia.org/wiki/Cantor's_diagonal_argument
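For anyone following along, the linked diagonal argument fits in a few lines (a standard paraphrase, not my own result):

  Suppose r1, r2, r3, ... is a list claimed to contain every real in (0, 1), each written in decimal.
  Build d by making its nth digit differ from the nth digit of rn
  (say, use 5, or 4 if that digit is already 5, which also dodges the 0.999... ambiguity).
  Then d lies in (0, 1) but differs from every rn at position n, so d is missing from the list.

No list of reals can be complete, which is the precise sense in which this infinity is "bigger" than the countable one.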
Infinity is a hack.
It is very tempting to use it in place of a number, and so mathematicians (being humans) did that.
Yes, mathematicians hack too. Maybe they even invented it.
Doing math without it is pretty ugly.
(Granted, there are other objects that may seem ugly which can only be constructed by reference to infinite things.)
You can do a lot of things without assuming that the natural numbers keep going, but it is plain awful to work with.
It turns out rather ok without actual infinity, by limiting oneself to potential infinity. Think a Turing Machine where every time it reaches the end of its tape, an operator ("tape ape") will come and put in another reel.
I think we can stop at 8.
7, if the Extremely Strong Goldbach Conjecture holds. [1]
[1] https://xkcd.com/1310/
The hierarchy doesn't seem to work. The extremely weak conjecture is the Archimedean property, but I don't see how you prove it from "every number above 7 is the sum of two other numbers". That can be true even if there are only five numbers above 7.
Or in the other direction, if numbers stop at 3, that certainly won't falsify "every number above 7 is the sum of two other numbers". It will prove that it's true. And the extremely strong conjecture immediately proves that the extremely weak one is false.
The extremes are probably chosen only for their humor value, not for mathematical rigor.
How would you handle things like probability distributions with infinite support?
> I’m convinced math would be better without infinity. It doesn’t exist.
I agree finitism and ultrafinitism are worth more development. Infinity is a hack in the sense that it effectively represents a program that doesn't halt, and when you start thinking of it this way, then all sorts of classical mathematical arguments start looking a little fishy, at least when applying math to the real world. For instance, I do think this is at the heart of some confusions and difficulties in physics.
Infinity doesn't exist, and neither does 4, or even a triangle. Everything is a concept or an approximation.
A triangle exists in physics.
Physics doesn’t exist.
Reality does, stuff happens, etc. But physics is an abstract model we use to make predictions about reality — and triangles are part of that abstract model, not things that actually exist.
You can’t, for instance, show me a triangle. Just objects that are approximated by the abstract concept in physics.
Not if you keep zooming in. And not when you consider that all its particles are moving in random directions due to having a non-0 temperature
Is this a joke or are you deeply interested in some ZFC variant that I'm unaware of? We absolutely need infinity to make a ton of everyday tools work; it's like saying we don't need negative numbers because those don't exist either.
A ZFC variant without infinity is basically just PA. (Because you can encode finite sets as natural numbers.) Which in practice is plenty enough to do a whole lot of interesting mathematics. OTOH by the same token, the axiom of infinity is genuinely of interest even in pure finitary terms, because it may provide much simpler proofs of at least some statements that can then be asserted to also be valid in a finitary context due to known conservation results.
In a way, the axiom of infinity seems to behave much like other axioms that assert the existence of even larger mathematical "universes": it's worth being aware of what parts of a mathematical development are inherently dependent on it as an assumption, which is ultimately a question of so-called reverse mathematics.
There are a couple philosophies in that vein, like finitism or constructivism. Not exactly mainstream but they’ve proven more than you’d expect
https://en.wikipedia.org/wiki/Finitism
https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_...
That looks closer to https://en.wikipedia.org/wiki/Ultrafinitism
There’s tons of variants of ZFC without the “infinity”. Constructivism has a long and deep history in mathematics and it’s probably going to become dominant in the future.
Numbers don't exist, either. Not really. Any 'one' thing you can point to, it's not just one thing in reality. There's always something leaking, something fuzzy, something not accounted for in one. One universe, even? We can't be sure other universes aren't leaking into our one universe. One planet? What about all the meteorites banging into the one, so it's not one. So, numbers don't exist in the real world, any more than infinity does. Mathematics is thus proved to be nothing but a figment of the human imagination. And no, the frequency a supercooled isolated atom vibrates at in an atomic clock isn't a number either, there's always more bits to add to that number, always an error bar on the measurement, no matter how small. Numbers aren't real.
Why is it that when I have a stack of business cards, each with a picture of a different finger of my left hand, and I arrange them in a grid, there's only one way to do it, but when each card instead has a picture of a different finger from either my left or right hand, there are now two different arrangements of the cards in a grid?
I claim the reason is that 5 is prime, while 10 is composite (10 = 5 times 2).
Therefore, 5 and 10, and 2, exist.
You've already assumed 5 exists in order to assert that it's prime.
In any case existence of mathematical objects is a different meaning of existence to physical objects. We can say a mathematical object exists just by defining it, as long as it doesn't lead to contradiction.
I think your closing paragraph holds the key. 5 doesn't really exist, it's a constructor that parameterizes over something that does exist, eg. you never have "5", you have "5(something)". Saying 5 is prime is then saying that "for all x, 5(x) has the same structural properties as all other primes".
You’re abstracting to connect the math: 2, 5, 10, multiplication, and primality are all abstract concepts that don’t exist.
What you've pointed out is that the interactions of your cards, when confined to a particular set of manipulations and placements, are equivalent to a certain abstract model.
Wouldn't it follow that those "things" we're pointing to aren't really "things" because they're all leaking and fuzzy? Begging the question, what ends up on a list of things that do exist?
The set of all things that exist - the first question that comes to mind is, is this a finite set, or an infinite set?
Some people have this view, but I don't get why.
Math is a tool for structured reasoning. We often want to reason about things that physically exist. Why should we use a tool that assumes the existence of something that literally cannot physically exist? That introduces the danger of admitting all sorts of unphysical possibilities into our theories.
I think Baez's paper, Struggles with the Continuum, shows a lot of past difficulties we've had that resulted from this:
https://arxiv.org/abs/1609.01421
Me neither. David Deutsch had some interesting thoughts in The Beginning of Infinity (TBOI) on why finitists are wrong, but I never fully understood them.
Neither does David Deutsch himself; he stopped making sense sometime in the late '80s.
That’s nothing, ‘node_modules’ has been linking the math of infinity to my filesystem for years.
> All of modern mathematics is built on the foundation of set theory, the study of how to organize abstract collections of objects
What the hell. What about Type Theory?
Type theory (at least the version discussed in the linked answer) is actually a stronger axiomatic system than ZFC; it is equiconsistent with ZFC plus a stronger assumption.
See this mathoverflow response here https://mathoverflow.net/a/437200/477593
To echo the sibling comment, that answer is specifically referring to the type theory behind Lean (which I’ve heard is pretty weird in a lot of ways, albeit usually in service of usability). Many type theories are weaker than ZFC, or even ZF, at least if I correctly skimmed https://proofassistants.stackexchange.com/a/1210/7.
Is there a collection of type theory axioms anywhere near as influential as ZF or ZFC?
Sure, but is discarding Type Theory and Category Theory really fair with a phrase like "All of modern mathematics"? Especially in terms of a connection with computer science.
Arguably, type theory is more influential, as it seems to me all the attempts to actually formalize the hand-wavy woo mathematicians tend to engage in are in Lean, Coq, or the like. We've pretty much given up on set theory except to prove things to ourselves, and those informal methods are notoriously unreliable.
Regardless of the name, descriptive set theory does not seem to have all that much to do with "set theory" in a foundational sense; it can be recast in terms of types and spaces with comparative ease, and this can be quite advantageous. The article is a bit confusing in many ways; among other things, it seems quite obviously wrong to suggest that recasting a concept that seems to be conventionally related to mathematical "infinities" in more tangible computational terms is something deeply original to this particular work; if anything, it happens literally all the time when trying to understand existing math in type-theoretic or constructive terms.
"the study of how to organize abstract collections of objects" is not really a great explanation of set theory. But it is true that the usual way (surely not the only way) to formalize mathematics is starting with set-theoretic axioms and then defining everything in terms of sets.
"Usual", "most common by far" etc. are all great phrases, but not "all of mathematics", esp when we talk about math related to computer science
Math related to CS is typically formalized starting with set theory, just like other branches of math.
What's important to note is that this is just a matter of convention. An historical accident. It is by no means a law of nature that math need be formalized with ZFC or any other set theory derivative, and there are usually other options.
As a matter of fact, ZFC fits CS quite poorly.
In ZFC, everything is a set. The number 2 is a set. A function is a set of ordered pairs. An ordered pair is a set of sets.
In ZFC: It is a valid mathematical question to ask, "Is the number 3 an element of the number 5?" (In the standard definition of ordinals, the answer is yes).
In CS: This is a "type error." A programmer necessarily thinks of an integer as distinct from a string or a list. Asking if an integer is "inside" another integer is nonsense in the context of writing software.
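To make the contrast concrete, here's a toy model of the von Neumann encoding in which 3 really is an element of 5. This is just an illustration in Python, not anything from the article:

    def von_neumann(n):
        """Von Neumann naturals as frozensets: 0 = {}, n+1 = n union {n}."""
        s = frozenset()
        for _ in range(n):
            s = s | frozenset({s})
        return s

    three, five = von_neumann(3), von_neumann(5)
    print(three in five)   # True: under the set encoding, 3 is an element of 5
    # print(3 in 5)        # whereas on plain machine integers this is a TypeError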
For a computer scientist, Type Theory is a much more natural foundation than Set Theory. Type Theory enforces boundaries between different kinds of objects, just like a compiler does.
But, in any case, that ZFC is "typical" is an accident of history, and whether or not it's appropriate at all is debatable.
In CS, types are usually a higher level of abstraction built on top of more fundamental layers. If you choose to break the abstraction, you can definitely use an integer as a string, a list, or a function. The outcome is unlikely to be useful, unless your construct was designed with such hacks in mind.
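A small sketch of what "breaking the abstraction" looks like in practice (my own example, not the commenter's): the same four bytes can be read as an integer or as text, and neither reading is privileged at the lower layer.

    import struct

    raw = struct.pack('<I', 0x41424344)   # the integer 1094861636, little-endian
    print(struct.unpack('<I', raw)[0])    # 1094861636
    print(raw.decode('ascii'))            # 'DCBA' -- the same bytes read as a string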
When I did a PhD in theoretical computer science, type theory felt like one niche topic among many. It was certainly of interest to some subfield, but most people didn't find it particularly relevant to the kind of TCS they were doing.
Sure, but it fits the rest of mathematics "poorly" for exactly the same reasons. No working mathematician is thinking about 3 as an element of 5.
The reason ZFC is used isn't because it's a particularly pedagogical way of describing any branch of math (whether CS or otherwise), but because the axioms are elegantly minimal and parsimonious.
Sorry but if you try to prove 2+2=4 in ZFC versus Type Theory (HoTT-style) – nothing looks elegant about ZFC afterwards.
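For what it's worth, the type-theoretic side of that comparison really is a one-liner; in Lean 4, for example (an illustration, not a claim from the comment above), the equation holds by computation:

    -- Lean 4: 2 + 2 = 4 reduces definitionally, so reflexivity closes the goal.
    example : 2 + 2 = 4 := rfl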
Empirically, modern mathematics is built on set theory; type theory and category theory are just other possible foundations.
Most modern mathematicians are not set theorists. There are certain specialists in metamathematics and the foundations of mathematics who hold that set theory is the proper foundation -- thus that most mathematical structures are rooted in set theory, and can be expressed as extensions of set theory -- but this is by no means a unanimous view! It's quite new, and quite heavily contested.
My impression (as a dilettante programmer without relevant credentials) is that there isn't really any question about whether mathematical structures can be rooted in set theory, or can be expressed as extensions of set theory. Disputes about foundations of mathematics are more about how easy or elegant it is to do so. (And in fact my impression is they're mostly about subjective, aesthetic considerations of elegance rather than practical considerations of how hard it is to do something in practice, even though the discussion tends to be nominally about the practical side. Quite similar to disputes about programming languages in that respect.)
Yes, of course. I just mean that among possible foundations of mathematics, set theory is empirically the strongest one, but there are other options (possibly better ones).
One can, in theory, construct number sets (fields) with holes in them - that is, truly discrete numbers. Such number sets are at most countably infinite, but need not be infinite. One useful such set might have Planck's (length) constant as its smallest number, beyond which there is a hole. The problem with using such number sets is that the ordinary rules of arithmetic break down, i.e., division has to be defined modulo the Planck constant.
Emdash all over the article.
It wasn’t just that.. all over the article..
I am getting really strong LLM article vibes.
…that existed in the world of descriptive set theory — about the number of colors required to color certain infinite graphs in a measurable way.
To Bernshteyn, it felt like more than a coincidence. It wasn’t just that computer scientists are like librarians too, shelving problems based on how efficiently their algorithms work. It wasn’t just that these problems could also be written in terms of graphs and colorings. Perhaps, he thought, the two bookshelves had more in common than that. Perhaps the connection between these two fields went much, much deeper. Perhaps all the books, and their shelves, were identical, just written in different languages — and in need of a translator.
When you talk about infinity, you are no longer talking about numbers. Mix it with numbers, you get all sorts of perplexing theories and paradoxes.
The reason is simple - numbers are cuts in the continuum while infinity isn't. It should not even be a symbolic notion of very large number. This is not to say infinity doesn't exist. It doesn't exist as a number and doesn't mix with numbers.
The limits could have been defined saying "as x increases without bound" instead of "as x approaches infinity". There is no target called infinity to approach.
Cantor's stuff can easily be trashed. The very notion of "larger than" belongs to finite numbers. This comparative notion doesn't apply to concepts that cannot be quantified using numbers. Hence one can't say some kind of infinity is larger than the other kinds.
Similarly, his diagonal argument about 1-to-1 mapping cannot be extended to infinities, as there is no 1-to-1 mapping that can make sense for those which are not numbers or uniquely identifiable elements. The mapping is broken. No surprise you get weird results and multiple infinities, or whatever else came to his mind when he was going through stressful personal situations.
I don't think you can just call Cantor's diagonal argument trash without providing a very strong, self-consistent replacement framework that explains the result without invoking infinity.
This sounds like ranting from someone who doesn't deeply understand the implications of set theory or infinite sets. Cardinality is a real thing, with real consequences, such as Gödel's incompleteness theorems. What "weird results" are tripping you up?
> when he was going through stressful personal situations
Ad hominem. Let's stick to arguing the subject material and not personally attacking Cantor.
https://en.wikipedia.org/wiki/Cantor's_diagonal_argument#Con...
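For readers following that link, the finite skeleton of the diagonal step looks like this (a toy sketch of mine; Cantor applies the same flip to any purported enumeration of all infinite 0/1 sequences):

    def diagonal_escape(rows):
        """Flip the diagonal entries of a list of 0/1 sequences; the result
        differs from row i at position i, so it equals none of the rows."""
        return [1 - rows[i][i] for i in range(len(rows))]

    listed = [[0, 1, 0, 1],
              [1, 1, 1, 1],
              [0, 0, 0, 0],
              [1, 0, 1, 0]]
    print(diagonal_escape(listed))   # [1, 0, 1, 1] -- matches no listed row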
The Löwenheim-Skolem theorem implies that a countable model of set theory has to exist. "Cardinality" as implied by Cantor's diagonal argument (which happens to be a straightforward special case of Lawvere's fixed point theorem) is thus not an absolute property: it's relative to a particular model. It's internally true that there is no bijection as defined within the model between the naturals and the reals, as shown by Cantor's argument; but externally there are models where all sets can nonetheless be seen as countable.
You're referring to Skolem's paradox. It just shows that first-order logic can't pin down a unique model of set theory.
Ernst Zermelo resolved this by stating that his axioms should be interpreted within second-order logic, and as such it doesn't contradict Cantor's theorem since the Löwenheim–Skolem theorem only applies in first-order logic.
The standard semantics for second-order logic are not very practical and arguably not even all that meaningful or logical (as argued e.g. by Willard Quine); you can use Henkin semantics (i.e. essentially a many-sorted first-order theory) to recover the model-theoretic properties of first-order logic, including Löwenheim-Skolem.
I just gave the reason: the notions of comparison and 1-to-1 mapping carry an underlying assumption that the subjects are quantifiable and identifiable. That assumption doesn't apply to something that is inherently neither quantifiable nor a cut in the continuum, the way a number is. What argument are you offering against this?
I'm not the person you replied to, and I doubt I'm going to convince you out of your very obviously strong opinions, but, to make it clear, you can't even define a continuum without a finite set to, as you non-standardly put it, cut it. It turns out, when you define any such system that behaves like natural numbers, objects like the rationals and the continuum pop out; explicitly because of situations like the one Cantor describes (thank you, Yoneda). The point of transfinite cardinalities is not that they necessarily physically exist on their own as objects; rather, they are a convenient shorthand for a pattern that emerges when you can formally say "and so on" (infinite limits). When you do so, it turns out, there's a consistent way to treat some of these "and so ons" that behave consistently under comparison, and that's the transfinite cardinalities such as aleph_0 and whatnot.
Further, all math is idealist bullshit; but it's useful idealist bullshit because, when you can map representations of physical systems into it in a way that the objects act like the mathematical objects that represent them, then you can achieve useful predictive results in the real world. This holds true for results that require a concept of infinities in some way to fully operationalize: they still make useful predictions when the axiomatic conditions are met.
For the record, I'm not fully against what you're saying. I personally hate that the axiom of choice is so commonly accepted; I think it's a poorly founded axiom that creates more paradoxes than it resolves. I also wish the axiom of excluded middle were tossed out more often, for similar reasons. However, when the systems you're analyzing do behave well under either axiom, the math works out to be so much easier with both of them, so in they stay (until you hit things like Banach-Tarski and you just kinda go "neat, this is completely unphysical abstract delusioneering", but you learn to treat results like that the way you treat renormalizing poles in analytic functions: carefully, and with a healthy dose of "don't accidentally misuse this theorem to make unrealistic predictions when the conditions aren't met").
About the 1-to-1 mapping of elements across infinite sets: what guarantees us that this mapping operation can be extended to infinite sets?
I can say it cannot be extended or applied, because the operation cannot be "completed". This is not because it takes infinite time; it is because we can't define the completion of the operation, even as an imagined snapshot.
The difference between the infinities is that I can write every possible fraction on a piece of A4 paper, if the font gets smaller and smaller. I can say where to zoom in for any fraction.
I can't do that for real numbers.
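The "where to zoom in" part is just an enumeration of the rationals. A minimal sketch, written by me for illustration:

    from fractions import Fraction
    from math import gcd

    def positive_rationals():
        """Enumerate every positive fraction exactly once, diagonal by diagonal."""
        d = 2
        while True:
            for p in range(1, d):
                q = d - p
                if gcd(p, q) == 1:
                    yield Fraction(p, q)
            d += 1

    gen = positive_rationals()
    print([str(next(gen)) for _ in range(10)])
    # ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2', '4', '1/5']
    # every fraction shows up at some definite, finite position

No such enumeration exists for the reals, which is the asymmetry the comment is pointing at.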
You can't enumerate the real numbers, but you can grab them all in one go - just draw a line!
The more I learn about this stuff, the more I come to understand how the quantitative difference between cardinalities is a red herring (e.g. CH being independent of ZFC). It's the qualitative difference between these two sets that matters. The real numbers are richer, denser, smoother, etc. than the natural numbers, and those are the qualities we care about.
That doesn't make one set "larger" than the other. You need to define "larger". And you need to make that definition as weird as needed to justify that comparison.
The fact that I can't even fit the real numbers between 0 and 1 on a single page, but I can fit every possible fraction in existence, doesn't mean anything?
I don't think this definition is that weird, for example by 'larger' I might say I can easily 'fit' all the rational numbers in the real numbers, but cannot fit the real numbers in the rational numbers.
It doesn't mean anything because, with arbitrary zooming for precision, every real number is a fraction. You can't ask for infinite zooming. There is no such thing.
So, let's inspect pi. It's a fraction, the precision of which depends on how much you zoom in on it. You can take it as a constant just for the sake of having a name for it.
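To illustrate the "precision depends on zoom" picture in that comment (a sketch of mine, using Python's Fraction type; note that math.pi is itself already a finite-precision value):

    from fractions import Fraction
    import math

    # Each "zoom level" yields an honest fraction approximating pi, and the
    # error only shrinks as far as the chosen zoom allows.
    for digits in (2, 6, 12):
        approx = Fraction(math.pi).limit_denominator(10 ** digits)
        print(digits, approx, float(approx) - math.pi)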
Hi if you noticed, I never said anything about 'infinite zooming', instead I said I can write them all on the paper, which I can.
The zooming is finite for every fraction.
If infinite zooming/precision is ruled out, all real numbers can be written as fractions. Problem is with the assumption that some numbers such as pi or sqrt(2) can have absolute and full precision.
> If infinite zooming/precision is ruled out, all real numbers can be written as fractions.
How?
> Problem is with the assumption that some numbers such as pi or sqrt(2) can have absolute and full precision.
That's the least of your problems, I couldn't even draw different sized splodges per real number.
All very well but I used to say to my sister "I hate you infinity plus one", so how do you account for that?
> Cantor's stuff can easily be trashed.
Only on hackernews.
.. because that is where you are allowed to challenge some of the biblical stories of math without fear of expulsion from the elite clubs.
Most of math history is stellar, studded with great works of geniuses, but some results were sanctified and placed beyond questioning due to various forces that were active at the time.
Applications of regular logic such as comparison, mapping, listing, diagonals, and uniqueness are all rules that were bred in the realm of finiteness and the physical world. You can't use them to prove theories about things that are not finite.
This isn't iconoclasm, it's ignorance.
I'm aware that I'm ignorant of many things, just like anyone else on this Earth. Some are less ignorant and some are more.
Could you be kind enough to explain the phrase "set of all integers" when the word "all" cannot apply to an unbounded quantity? I think the word "all" is used loosely, extending its meaning as used for finite sets to a non-existent, unbounded set. For example, phrases such as "all Americans" or "all particles in the universe" have a meaning because they have a boundary criterion. What is "all integers"?
I think one needs to first define the realm of applicability, or domain, for concepts such as comparison, 1-to-1 mapping, listing, diagonals, uniqueness, "all", etc.