Looks super cool! Can you share more about why you built this, design decisions, and other behind the scenes context?
Thanks! I pretty much did this as a learning/hobby side project. Back in 2021, I was dealing with a lot of K8s clusters and had recently learnt about ndots[1]. That prompted me to build an easy-to-use DNS client that avoids surprises about the host environment.
I built this using Go, my daily driver. It doesn't use any CLI frameworks; that was mostly out of choice, as I didn't want to add external deps unless really required. My favourite part was building this small help.go[3] utility which renders colored/formatted help text.
Over time I got some good quality external contributions, especially the one from @jedisct1 for adding DNSCrypt[2] support.
Releasing a v1.0 has been on the back burner forever (like, a whole year+ :'). Life, other projects, and probably a bit of procrastination got in the way. I finally sat down last week and forced myself to come up with a deadline to push this out.
[1]: https://mrkaran.dev/posts/ndots-kubernetes/
[2]: https://github.com/mr-karan/doggo/pull/17
[3]: https://github.com/mr-karan/doggo/blob/main/cmd/help.go
[flagged]
> "doggo is a modern command-line DNS client (like dig) written in Golang. It outputs information in a neat concise manner and supports protocols like DoH, DoT, DoQ, and DNSCrypt as well.
> It's totally inspired from dog which is written in Rust. I wanted to add some features to it but since I don't know Rust, I found it as a nice opportunity to experiment with writing a DNS Client from scratch in Go myself. Hence the name dog +go => doggo."
That's pretty cool :) It makes a great visualizer for bbs-over-dns.com
bbs-over-dns is so interesting!
I liked the output of: doggo @bbs-over-dns.com txt bbs-over-dns.com | sort | less
Amazing naming choice - doggos like to dig!
TIL there's also dog[1], which is probably also a common typo for "dig".
[1] https://github.com/ogham/dog
doggo is actually dog written in Go.
I went to switch over to "dog" last week but it looks abandoned. I couldn't get a download to work because of outdated dependencies, and IIRC I couldn't get it to build, so I just gave up on it after seeing the latest release is 4 years old.
It seems crazy to me that a DNS client is broken after only 4 years due to incompatible dependencies! I was going to suggest that maybe it's just completed software, but wow, this makes me really want to stay away from the Rust ecosystem.
Not sure why you're being downvoted, because this is how the JS and Python ecosystems are becoming as well. Obviously, there's a lot of innovation that's happening (or at least the authors think so), but it's still possible to move a package forward without breakage.
It'd be great if communities could adopt the Go backwards compatibility promise[0] (it's best-effort, after all) so that packages continue to compile for a decade into the future and only introduce breaking changes for security reasons.
It's actually not that difficult to do -- just needs to be made an important goal of any project, and it makes it much easier to trust the stability of the dependency.
0. https://go.dev/blog/compat
Upgrade the openssl dependencies if you are on new Ubuntu:
Very nice to also have it dockerized. You might just want to add the `--rm` parameter to the documentation for cleanup after running, and `-t` for the colors. So it will be:
docker run --rm -it ghcr.io/mr-karan/doggo:latest mrkaran.dev MX
Just curious, but why go through all the trouble to create a docker container for a DNS cli utility with no dependencies?
Long answer: Not OP, but sometimes you’re in a throwaway env and want to quickly do a DNS query and don’t have any tools (like Ubuntu minimal) available. Spawning an ephemeral container isn’t a bad idea in that case.
Short answer: Why not :)
I don't specifically know Ubuntu minimal. But suggesting that a minimal OS has containerd pre-installed isn't exactly the "minimal" I'm thinking of.
It's a lot of code to "just run a UDP packet" IMHO.
That's also where declarative package managers such as Nix and Guix shine. You can temporarily "install" packages that can be cleaned up later, garbage-collection style. Besides being their own distros, both can be used as add-on package managers on other Linux distros, for example.
Have you ever heard of Nix?
You want an upfront assurance that there's not going to be any bullshit. There's nothing that technically prevents a docker container from failing, of course. But seeing that someone took the extra step to make one is enough assurance. And it also protects the author from incoming "it doesn't work on my machine" bs.
Makes it easy to drop it into a k8s cluster, maybe?
Yep, that makes sense. Thanks, just updated docs.
I have a silly question I guess... why does it print everything out twice?
~ doggo google.com
NAME TYPE CLASS TTL ADDRESS NAMESERVER
google.com. A IN 296s 142.250.67.14 127.0.2.2:53
google.com. A IN 296s 142.250.67.14 127.0.2.3:53
~ doggo news.ycombinator.com
NAME TYPE CLASS TTL ADDRESS NAMESERVER
news.ycombinator.com. A IN 1s 209.216.230.207 127.0.2.2:53
news.ycombinator.com. A IN 1s 209.216.230.207 127.0.2.3:53
Looks like it's once per nameserver you're configured to use. I guess that's just in case they give different answers, which is uncommon in practice.
Oh man, if I had a penny for every time this issue came up, my grandchildren wouldn't have to work a day in their lives.
Try a few queries to an anycasted resolver for the same record. It's rather common that, for example, TTL values differ per response (the magic trick here is that it's not guaranteed that your requests end up on the same resolver).
You could have a look at https://dnsdiag.org/, which provides a few tools for further introspection into the many (different) answers resolvers can give.
Would be useful in my unfortunate case where the ISP's DNS is broken for a bunch of domains. Most notably githubusercontent.com. Also a bunch of CDN domains.
Just tried on my AdGuard DNS server; I am only getting one entry. I think it's related to your nameserver configuration. Could you try reproducing it with a custom nameserver specified, with @1.1.1.1 or something?
Would be amazing if this tool added support for the equivalent of query type ANY.
Just did: https://github.com/mr-karan/doggo/pull/128
Will release soon
Wow, that's really amazing. Will definitely try soon!
An awesome project! I learned about it last year while developing the x-cmd pkg. At that time the latest version was 0.5.7 and now it's 1.0.2, so it seems it's time for us to upgrade.
Here is a demo video, you can take a look: https://x-cmd.com/pkg/doggo
Reminds me of https://github.com/ogham/dog
From the doggo github readme:
> It's totally inspired from dog which is written in Rust. I wanted to add some features to it but since I don't know Rust, I found it as a nice opportunity to experiment with writing a DNS Client from scratch in Go myself. Hence the name dog +go => doggo.
I do always find it interesting when people just want to add some features, and the language stops them from doing it. I'm so used to bouncing back and forth between code and docs (api docs and language syntax docs depending on my familiarity) that the language itself is basically an implementation detail that I don't really care about. There are very few languages that would even give me pause, let alone avoid modifying the project entirely.
And dog seems unmaintained, sadly
We have similar utilities in the Hickory DNS project, https://github.com/hickory-dns/hickory-dns/tree/main/util
There are a bunch of CLI tools: a dig-like tool called ‘dns’, a stub resolver called ‘resolve’, a recursive resolver called ‘recurse’, and some other random maintenance tools. These all exist to make it easier to test certain details outside other people’s dependency trees.
The documentation is a little sparse…
Yeah, a lot of lingering PRs to fix building with OpenSSL or update packages in some way. The last change was ~3 years ago.
Is this related to Dog [1]? They look almost identical in functionality.
Both ask for the specific query to run (A, AAAA, etc.). Why not default to query all records? (at least when querying a single domain).
--
1: https://github.com/mr-karan/doggo
What does "all records" mean though? `dig` defaults to just showing A records and I've kept the behaviour the same. Do you mean all possible record types under the sun, or just a bunch of common ones: `MX, AAAA, A, CNAME, TXT`, etc.?
I did not realize the list of record types was so long! [1] But I was thinking the common ones, yes.
---
1: https://en.wikipedia.org/wiki/List_of_DNS_record_types#/medi...
Actually authoritative list: <https://www.iana.org/assignments/dns-parameters/dns-paramete...>
This is now live: https://doggo.mrkaran.dev/docs/features/any/
Really nice!
Is there any reason why so many of those tools are written in Go? Is it because of a stdlib or just accidental?
Go produces (mostly) statically compiled binaries by default. No runtime, no interpreter, no dependencies.
Python, Java, JavaScript, C#, etc. can't say the same.
In my experience, I've always had to install Go and its ecosystem to `go install foo` for Go programs to work. Is anyone distributing binaries?
you'll usually find binaries in github releases (see the assets section: https://github.com/mr-karan/doggo/releases )
Oh duh, thanks!
Correct me if I am wrong, but this is a feature of C#/.NET as well, as of recent versions.
[dead]
dotnet publish -o . -p:PublishAot=true :)
To be fair, I do not think the author understands what static binaries are, why they may or may not want them, and how what Go does differs from what the C/C++ toolchain does. I'd be very surprised if they did, being a Go developer. Next time they will learn another excuse to promote their language.
Also to be fair, they did say "by default", and those options you provided must be set because they are not default.
[flagged]
Could be cultural. Go has a lot of exposure in the DevOps / cloud infrastructure space. Lots of stuff like k8s and Terraform (and its providers) are written in Go, and it competes with Python in popularity for internal tooling.
Yeah exactly. I wrote one orchestration tool in Python, another department used Go. Right now I'm using nodejs simply because that's the main project language and all the developers can help out :)
In this case it seems like the creator of this tool already knew golang so he used it.
From the readme:
> It's totally inspired from dog [0] which is written in Rust. I wanted to add some features to it but since I don't know Rust, I found it as a nice opportunity to experiment with writing a DNS Client from scratch in Go myself. Hence the name dog +go => doggo.
[0] https://github.com/ogham/dog
Go gets it done and for years to come.
So the name can be a pun, of course.
Congrats for the 1.0 release!
doggo has been my main DNS tool for a while, now. Love it!
Very kind of you, thanks!
I was just mentioning your contributions to the tool in my other comment[1].
[1]: https://news.ycombinator.com/item?id=40848420
Happy user for years here. Keep up the good work!
I'd not encourage the usage of an AUR helper. Just pointing to the AUR page should be enough.
There's a curl | sh too. Would be kind of ironic to suffer from a DNS poisoning attack while trying to download a DNS client.
Does this command not work?
go: downloading github.com/mr-karan/doggo v0.5.7
go: github.com/mr-karan/doggo/cmd@latest:
module github.com/mr-karan/doggo@latest found (v0.5.7),
but does not contain package github.com/mr-karan/doggo/cmd
Hit the same issue, but it goes fine if you just clone the repo, cd into the cmd directory, and run 'go build -o doggo'.
Does this install in a way I can upgrade it with topgrade?
I guess you're not using one of the proxies? For some reason their v1.0.0 commit isn't attached to anything... https://github.com/mr-karan/doggo/commit/fe3958594df46c000ef...
Just pushed a minor patch release. Tagged on main: https://github.com/mr-karan/doggo/commit/8f60428f6ae154918d9...
It installs now, but installs as `cmd`, not `doggo`.
Sorry about that, it was a silly typo on my end. I've just pushed a fix:
Thanks + love your work.
So like dig, but I have to compile, build, configure, audit, and trust it instead of just having it packaged in my OS. Nice.
1. You have to audit/trust any software whether or not it’s distributed as part of your OS
2. What is there to configure?
3. It’s a Go program, so compilation happens transparently on installation provided the developers don’t release broken code. This isn’t the C/C++ world where you have complex bespoke build systems that only seem to work on the developers’ machines.
Except package repositories have maintainers, who tend to be trustworthy parties. Compare the number of supply chain attacks Debian's apt repos have compared to, say, npm.
1- If you are claiming you are auditing all your OS parts, not even your own mother believes you. Are you delving into tcp.c and ps2.c? Delusional.
2- I meant run the configure file. Typically it configures what folder you install the program in, under what user, that kind of stuff. I'm just alluding to the whole open source installation process, which is more complex than installing an .msi on Windows, or an apt-distributed .deb package on Debian.
3- Oh, OK, sure. Btw, now I have to install a Go compiler in order to install a program that's been done 100 times by first-year Comp Sci students. I'd rather kill myself.
Then don't use it. No one is forcing you. No need for all the whining here.
Love this, thank you!
Is there a way to query all DNS records? (I was surprised to learn that isn't the default.) This would be really helpful for troubleshooting people's Caddy questions (which are actually DNS problems).
You can't really query all DNS records with the ANY query type these days. The closest alternative is to run dig across all record types.
At Andrew McWatters & Co., we use a small internal utility called digany(1)[1][2] that does this for you.
[1]: https://github.com/andrewmcwattersandco/digany
[2]: https://github.com/andrewmcwattersandco/digany/blob/main/dig...
or AXFR... but it is allowed on even less places than ANY.
I've just created a PR for supporting common record types: https://github.com/mr-karan/doggo/pull/128
However, each lookup happens serially right now; I'll take a look at making it concurrent per resolver at least.
Edit: I just pushed the concurrent version of lookups in each resolver. The speed-up is quite good, around 70-80% on most domains. Will test this more before releasing to main!
https://github.com/mr-karan/doggo/pull/128#issuecomment-2202...
Would this be something close to what you're looking for?
When I run that:
It takes 5+ seconds to get a response. Classic `dig`, though, takes 50ms.
I just pushed the concurrent version of lookups in each resolver. Speed up is quite good around 70-80% on most domains. Will test this more before releasing to main!
https://github.com/mr-karan/doggo/pull/128#issuecomment-2202...
Hm, it took around 2.9s on my system. Let me see if I can look up the different record types concurrently and optimise this. Thanks for sharing.
404 page not found, have you received many requests? The project is very interesting, I like the interface. Congratulations
Ah yes, but it's a static site; it shouldn't be 404ing. Most probably you're hitting `/docs`. I've not set up the redirect for the trailing slash. Will do that.
Meanwhile, the docs link is https://doggo.mrkaran.dev/docs/
Pretty! Time to alias dig to doggo for a few days ;)
BTW I really enjoyed reading your blog on Nomad while setting up our own clusters, kudos!
Is this a client to query DNS servers, like dig or dog?
Or is it a client to control and configure the DNS servers a computer is using?
or both?
It's a DNS client for querying DNS servers.
Is there a way to run the web interface locally?
BTW, the "visit demo" link in the docs returns 404.
Sorry the demo link is fixed now: https://doggo.mrkaran.dev/
Yes, you can run the web server locally: https://github.com/mr-karan/doggo/tree/main/web
Is this how the kids "rawdog" DNS these days?
I use `bore` which works about the same, interesting to see new options! https://crates.io/crates/bore
Ok, now hear me out. You bundle this with Bruno and some other networking tools as… The Woof Pack
very nice! looks clean and simple.
We developed "geodns" for situations where you want to do DNS lookups from different regions around the world. For example, ycombinator.com returns different IPs depending on your location:
https://gitlab.com/shodan-public/geonet-rs
$ geodns ycombinator.com
108.156.133.117 Singapore
108.156.133.21 Singapore
108.156.133.25 Singapore
108.156.133.59 Singapore
108.156.39.26 London
108.156.39.61 London
108.156.39.62 London
108.156.39.64 London
13.32.27.123 Frankfurt am Main
13.32.27.47 Frankfurt am Main
13.32.27.51 Frankfurt am Main
13.32.27.80 Frankfurt am Main
13.35.93.12 Clifton
13.35.93.14 Clifton
13.35.93.46 Clifton
13.35.93.47 Clifton
18.239.94.100 Amsterdam
18.239.94.114 Amsterdam
18.239.94.33 Amsterdam
18.239.94.79 Amsterdam
99.86.20.42 Doddaballapura
99.86.20.54 Doddaballapura
99.86.20.64 Doddaballapura
99.86.20.96 Doddaballapura
Is that because it's behind Cloudflare? I'm pretty sure it still runs primarily on a single server in a colo (i.e. except in times of hardware failure or other physical realities).
You’re thinking about news.ycombinator.com, run on a single server from M5, which is not the same as ycombinator.com.
It was moved to AWS temporarily the last time the servers failed: https://news.ycombinator.com/item?id=32031136
It's also possible to get a copy of HN from Cloudflare in addition to M5. I keep historical DNS data and can confirm there are Cloudflare IPs that continue to work.
whois is returning AWS and I don't see any of the normal CloudFront headers, but I do see a server header of nginx. So it doesn't look like Cloudflare to me; I'd guess they're just running some EC2 instances with nginx configured to give the exact behaviour they need (as I recall, they return cached pages to non-logged-in users, which is why you can sometimes log out and get the page to load when they're having issues). I also see awsdns in their NS records, so it looks to me like they're just doing geo-DNS in Route 53 to route to the closest instance they're running.
https://geonet.shodan.io/api/geodns/ycombinator.com?rtype=A
Shameless plug for folks looking for something similar, but on the web: I was fed up with Google's slow/janky dig webface, so built my own. (Still very WIP, but already works better as a daily driver than Google's!)
https://www.shovel.report/ycombinator.com
Did you use DNS over HTTPS (DoH)? I love how easy it is to perform DNS lookups in web apps too through this.
This is pretty cool! But what does it mean when something is listed under "Services"? For example, one of my "services" is "52.45.50.190/32", an AWS IP. What does that actually mean? How did that IP get there?
Another shameless plug for my website: you can use ipkitten.com to get your public IP address from your terminal:
And if you visit it in a browser, you get your IP address and a kitten GIF! https://ipkitten.com
I love using ipinfo.com for the extra details it provides like hostname, ISP, ASN, etc.
or you can look up these details for any other IP:
We have a CLI as well: https://github.com/ipinfo/cli
It has a ton of bells and whistles, including IP summarization, bulk enrichment, grepip, and a ton of network-related tools. I was writing a series of blog posts on the CLI, but I think the series got too long, so I left users to discover the features of the CLI on their own.
Just don't let it grow up and eat http://ipchicken.com.
There's https://myip.wtf (or wtfismyip.com), which provides a strongly worded interface. You can also check which headers your browser is sending to the website.
icanhazip.com is another service which does this.
And the easy-to-remember-for-Unix-nerds:
If you use curl, it'll do user-agent detection and just give you your IP.
Or a Google search, which will tell you (might be the fourth or later result):
https://www.google.com/search?q=what%27s+my+ip
Or ifconfig.co
v6! Yay!
One of the reasons I dig it.
The q client has a feature comparison: https://github.com/natesales/q?tab=readme-ov-file#feature-co...
After having issues with QUIC timeouts with Doggo, I switched to q and it has been great.
[flagged]
What's stupid about it?
Dig was an early and widespread DNS CLI tool. "dog" is a logical name for a next-gen dns cli, and of course that exists. "Doggo" is both a pretty standard linguistic drift pattern of English slang (random -> rando, weird -> weirdo), a common internet term of endearment for "dog", and a logical derivation of *-go for go-based tools and software.
Surprised nobody went for "dug", à la DigDug.
[flagged]