Dependency Injection has a fancy name that makes some developers uncomfortable, but it's really just about making the code easier to test. Basically, everything the class depends on has to be passed in during construction.
This can be done manually, but becomes a chore super fast - and will be a very annoying thing to maintain as soon as you change the constructor of something widely used in your project to accept a new parameter.
Frameworks typically just streamline this process, and offer some flexibility at times - for example, when you happen to have different implementations of the same thing. I find it funny that people rally against those frameworks so often.
To make things more concrete, let's say you have a method that gets the current date and has some logic around it (for example, it checks whether today is the end of the month to do something). In Java, you could call `Instant.now()` to do this.
This will be a pain in the ass to test: you might need to test, for example, a date when there's a DST change, or February 28th in a leap year, etc. With DI you can instead inject an `InstantSource` into your code, and in testing you can just mock the dependency to get a predictable date in each test.
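To sketch what that looks like (the class and method names here are made up for illustration; `InstantSource` is the Java 17+ interface):

    import java.time.InstantSource;
    import java.time.LocalDate;
    import java.time.ZoneOffset;

    // Hypothetical example class: the clock is a constructor dependency instead of a hard-coded Instant.now()
    class MonthEndBilling {
        private final InstantSource clock;

        MonthEndBilling(InstantSource clock) {
            this.clock = clock;
        }

        boolean isEndOfMonth() {
            LocalDate today = LocalDate.ofInstant(clock.instant(), ZoneOffset.UTC);
            return today.getDayOfMonth() == today.lengthOfMonth();
        }
    }

    // Production: new MonthEndBilling(InstantSource.system())
    // Test:       new MonthEndBilling(InstantSource.fixed(Instant.parse("2024-02-29T12:00:00Z")))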
Why is it a pain to inject dependencies manually? I think this is because people assume for some reason that a class isn't allowed to instantiate its own dependencies.
It means that in most cases, you just call a static factory method like create() rather than the constructor. That method defaults to using Instant.now(), but also gives you a way to provide your own now() for tests (or other non-standard situations).
At the top of the call stack, you call App.create() and boom - you have a whole tree of dependencies.
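A rough sketch of that shape (the names are illustrative, not from any particular codebase):

    import java.time.Instant;
    import java.util.function.Supplier;

    class ReportJob {
        private final Supplier<Instant> now;

        private ReportJob(Supplier<Instant> now) {
            this.now = now;
        }

        // Default wiring for production callers
        static ReportJob create() {
            return new ReportJob(Instant::now);
        }

        // Escape hatch for tests or other non-standard situations
        static ReportJob create(Supplier<Instant> now) {
            return new ReportJob(now);
        }

        void run() {
            Instant startedAt = now.get();
            System.out.println("started at " + startedAt);
            // ... the actual work goes here ...
        }
    }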
If the class instantiates its own dependencies then, by definition, you're not injecting those dependencies, so you're not doing dependency injection at all!
That seems too dogmatic. Does the fact that Foo has a static function that knows how to create its dependencies disqualify the class from being a case of DI?
Yes because injecting nowFn is trivial and not a case for DI.
Consider a database handle, or network socket, or http response payload. Clearly each class shouldn't be making its own version of those.
You're nitpicking for no good reason. You can create global handles to each of those items, and let them instantiate with the class or override them with a create function.
Dependency injection boils down to the question of whether or not you can dynamically change a dependency at runtime.
Many of the things I depend on are shared services where there should only be one instance. Singleton means a global variable. A DI framework lets me have one without a global - meaning if I need a second service I can do it with just a change to how the DI works.
There is no right answer of course. Time should be a global so that all timers/clocks advance in lock step. I have a complex fake time system that allows my tests to advance minutes at a time without waiting on the wall clock. (If you deal with relativity this may not work - for everyone else I encourage it.)
Maybe I'm misunderstanding what you're saying but a connection pool seems like almost a canonical example of something that shouldn't be a singleton. You might want connection pools that connect to different databases or to the same database in read-only vs. read/write mode, etc.
I meant "singleton" in the sense of a single value for a type shared by anything that requires one, i.e. a Guice singleton ( https://github.com/google/guice/wiki/scopes#singleton ) not a value in global scope. Or maybe a single value by type with an annotation, the salient point is that there are values in a program that must be shared for correctness. Parameterless constructors prohibit you from using these (unless you have global variables).
Then these different pools can be separate singletons. You still don't want to instantiate multiple identical pools.
You can use the type system to your advantage. Cut a new type and inject a ReadOnlyDataSource or a SecondDatabaseDataSource or whatnot. Figure out what should only have one instance in your app, wrap a type around it, put it in the singleton scope, and inject that.
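A minimal Guice-flavoured sketch of that (the wrapper type names and JDBC URLs are invented for the example):

    import com.google.inject.AbstractModule;
    import com.google.inject.Provides;
    import com.google.inject.Singleton;
    import javax.sql.DataSource;

    // Thin wrapper types so the two pools are distinct as far as injection is concerned
    record ReadOnlyDataSource(DataSource ds) {}
    record ReadWriteDataSource(DataSource ds) {}

    class DatabaseModule extends AbstractModule {
        @Provides @Singleton
        ReadOnlyDataSource readOnly() {
            return new ReadOnlyDataSource(createPool("jdbc:postgresql://replica/app"));
        }

        @Provides @Singleton
        ReadWriteDataSource readWrite() {
            return new ReadWriteDataSource(createPool("jdbc:postgresql://primary/app"));
        }

        private DataSource createPool(String url) {
            // placeholder: build the real connection pool (HikariCP etc.) here
            throw new UnsupportedOperationException("illustrative only");
        }
    }

    // Anything that needs the read-only pool just declares: @Inject MyDao(ReadOnlyDataSource ds) { ... }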
This has the advantage that you don't need an extra framework/dependency to handle DI, and it means that dependencies are usually much easier to trace (because you've literally got all the code in your project, no metaprogramming or reflection required). There are limits to this style of DI, but in practice I've not reached those limits yet, and I suspect if you do reach those limits, your DI is just too complicated in the first place.
I think most people using these frameworks are aware that DI is just automated instantiation. If your program has a limited number of ways of composing instantiations, it may not be useful to you. The amount of ceremony reduced may not be worth the overhead.
This conversation repeats itself ad infinitum around DI, ORMs, caching, security, logging, validation, etc, etc, etc... no, you don't need a framework. You can write your own. There are three common outcomes of this:
* Your framework gets so complicated that someone rips it out and replaces it with one of the standard frameworks.
* Your framework gets so complicated that it turns into a popular project in its own right.
* Your app dies and your custom framework dies with it.
I'm not suggesting a custom framework here, I'm suggesting no DI framework at all. No reflection, no extra configuration, no nothing, just composing classes manually using the normal structure of the language.
At some point this stops working, I agree — this isn't necessarily an infinitely scalable solution. At that point, switching to (1) is usually a fairly simple endeavour because you're already using DI, you just need to wire it together differently. But I've been surprised at the number of cases where just going without a framework altogether has been completely sufficient, and has been successful for far longer than I'd have originally expected.
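For the avoidance of doubt, this is the kind of thing I mean - plain constructors wired up once at the top, no container (all names invented for the example):

    interface Mailer { void send(String to, String body); }

    class SmtpMailer implements Mailer {
        public void send(String to, String body) { /* talk to the SMTP server here */ }
    }

    class SignupService {
        private final Mailer mailer;
        SignupService(Mailer mailer) { this.mailer = mailer; }
        void signUp(String email) { mailer.send(email, "welcome"); }
    }

    public final class Main {
        public static void main(String[] args) {
            Mailer mailer = new SmtpMailer();               // swap for a fake in tests
            SignupService signup = new SignupService(mailer);
            signup.signUp("someone@example.com");
        }
    }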
If you construct an object once, then what is the difference between that and a singleton?
The answer is scope. Singletons exist explicitly for the purpose of violating scoping for the convenience of not having to thread that dependency through constructors.
Calling these "singletons" maybe created confusion here, I'm more talking about singleton in the sense of a value that is created once and shared (i.e. injecting a value as a "Singleton" via something like Guice). I'm not arguing that you need to have values in global scope, I'm arguing that parameterless constructors prevent you from using shared values (actually _unless_ you have singletons in global scope).
> Dependency Injection has a fancy name that makes some developers uncomfortable, but it's really just all about making the code easier to test.
It's not just a fancy name. I'd argue it's a confusing name. The "$25 name for a 5c concept" quote is brilliant. The name makes it sound like it's some super complicated thing that is difficult to learn, which makes it harder to understand. I would say "dynamic programming" suffers the same problem. Maybe "monads".
How about we rename it? "Generic dependencies" or "Non hard-coded dependencies" or even "dependency parameters"?
The name is confusing because originally [0] it was exclusively about dynamic injection by frameworks. Somehow it morphed into just meaning suitable constructor parameters, at least in some language communities.
I like “dependency parameters”. Dependencies in that sense are usually what is called service objects, so “service parameters” might be even clearer.
I third "dependency parameters". It concisely describes what the actual thing is, and isn't intimidating.
And yes, even though some languages/frameworks allow deps to be "passed in" via mechanisms that aren't technically parameters (like member variables that are just somehow magically initialised due to annotations), doing that only obfuscates control flow and should be avoided IMHO.
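To make the contrast concrete, a toy sketch using a JSR-330-style @Inject (details vary by framework; Mailer is a stand-in dependency):

    import javax.inject.Inject;

    interface Mailer { void send(String to, String body); }

    // "Magic" field injection: nothing in the constructor tells you this class needs a Mailer,
    // and the field only gets populated once the container has reflected it in.
    class SignupServiceFieldStyle {
        @Inject private Mailer mailer;
    }

    // Plain constructor parameter: the dependency is visible in the signature,
    // and the object is fully usable the moment the constructor returns.
    class SignupServiceCtorStyle {
        private final Mailer mailer;

        @Inject
        SignupServiceCtorStyle(Mailer mailer) {
            this.mailer = mailer;
        }
    }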
I think the "injection" is referring to a particular style of frameworks which rely on breaking language abstractions and modifying otherwise private fields to provide dependencies at runtime. I really, really, REALLY dislike that approach and therefore also dislike the name "dependency injection".
Why not call it "dependency resolution"? The only problem frameworks solve is to connect a provider of thing X with all users of thing X, possibly repeating this process until all dependencies have been resolved. It also makes it more clear that this process is not about initialization and lifecycling, it is only about connecting interfaces to implementations and instantiating them.
Edit: The only DI framework I have used and actually kind of like is Dagger 2, which resolves all dependencies at compile time. It also allows a style of use where implementation code does not have to know about the framework at all - all dependency resolution modules can be written separately.
All other runtime DI frameworks I have used I have hated with a passion because they add so much cognitive overhead by imposing complex lifecycle constraints. Your objects are not initialized when the constructor has finished running because you have to somehow wait for the DI framework to diddle the bits, and good luck debugging when this doesn't work as expected.
Or don’t use a DI framework, and DI just becomes a fancy name for "creating instances" and "passing parameters". That’s what we do in Go and there’s no way I would EVER use a DI framework again. I’d rather be unemployed than work with Spring.
I'm a small brained primate but when I get down to what dependency injection is doing it's like my firmware written in C setting a couple of function pointers in a radio handler's struct to tell it which SPI bus and DIO's to use. Which seems trivially okay.
No need to be aggressive, I just disagree that DI frameworks streamline anything, they just make things more opaque and hard to trace.
> will be a very annoying thing to maintain as soon as you change the constructor of something widely used in your project to accept a new parameter.
That, for example, is just not true. You add a new parameter to inject and it breaks the injection points? Yeah, that's expected, and appropriate. I want to know where my changes have any impact; that's the point of typing things.
A lot of things deemed "more maintainable" really aren’t. Never has a DI framework made anything simpler.
> That for example is just not true. You add a new parameter to inject and it breaks the injection points?
Perhaps you never worked in a sufficiently large codebase?
It is very annoying when you need to add a dependency and suddenly you have to touch 50+ injection points because that thing is widely used. Been there, done that, and by God I wished I had Dagger or Spring or anything really to lend me a hand.
DI frameworks are a tool like any other. When properly used in the correct context they can be helpful.
This is the only reason why DI frameworks exist. However, this issue can be largely avoided by working with configuration objects (parameter objects [0]) from the start, that get passed around. Then you only need to apply the change in a small number of parameter-object classes, if not just a single one.
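A minimal sketch of what I mean (names invented; the point is that only the parameter object grows when a new shared dependency appears):

    import java.time.Clock;
    import java.util.logging.Logger;

    interface MetricsRecorder { void increment(String name); }   // stand-in for the example

    // The widely-shared dependencies travel together in one object.
    class AppServices {
        final Clock clock;
        final Logger logger;
        final MetricsRecorder metrics;   // added later: no other constructor signatures had to change

        AppServices(Clock clock, Logger logger, MetricsRecorder metrics) {
            this.clock = clock;
            this.logger = logger;
            this.metrics = metrics;
        }
    }

    class OrderHandler {
        private final AppServices services;
        OrderHandler(AppServices services) { this.services = services; }
    }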
Eh, having built a whole codebase around these configuration objects, I really regret not going for a more traditional DI/IoC container. It's thousands upon thousands of additional parameters passed all over the place when creating objects, just for the sake of saving five minutes of explanation to newcomers.
"It is very annoying when you need to add a dependency and suddenly you have to touch 50+ injection points because that thing is widely used"
You don't have to update the injection points, because the injection points don't know the concrete details of what's being injected. That's literally the whole point of dependency injection.
Edited to add: Say you have a class A, and this is a dependency of classes B, C, etc. Using dependency injection, classes B and C are passed instances of A, they don't construct it themselves. So if you add a dependency to A, you have to change the place that constructs A, of course, but you don't have to change B and C, because they have nothing to do with the construction of A.
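In code form, roughly (Database and Cache are just placeholder types):

    interface Database {}
    interface Cache {}

    class A {
        A(Database db, Cache cache /* newly added dependency */) { /* ... */ }
    }

    // B and C receive an already-built A, so their signatures don't change.
    class B {
        B(A a) { /* ... */ }
    }

    class C {
        C(A a) { /* ... */ }
    }

    // Only the composition root changes:
    //   before: A a = new A(db);         new B(a); new C(a);
    //   after:  A a = new A(db, cache);  new B(a); new C(a);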
Ya very confused by this. Either the change is to the constructor of the object being injected, in which case there is no difference either way, or the change is to the constructor receiving the injection, in which case there’s no difference either way.
I think you're being downvoted because you're agreeing with the post you're quoting, but arguing as if they're wrong: the example in question was there to show how DI can be useful, so there's nothing to argue against.
The one thing DI frameworks unarguably and decisively solve by design (if accidentally, but it doesn’t matter) is control over static initialization. I’d say you haven’t truly lived if your C++ didn’t crash before calling main(), but it helps in large JS and Python projects just the same.
How do they solve that? If constructors require certain parameters, someone has to pass them. If it’s not a top-level object, the instantiating code will pass it and have a corresponding constructor or method parameter itself. At the top level, main() or equivalent will pass the parameter. Where is the problem?
Exactly, there is no problem when you do it this way and DI frameworks force you to.
The problem when you don't do it this way is when you depend on order of initialization in a way you are not aware of until it breaks, and it breaks in all kinds of interesting ways.
If it becomes a chore to instantiate your dependency tree with `new` or whatever in the root of your app, it's a good indication that the dependency tree _is too complex_. You _should_ feel the pain of that, to align incentives for simplification.
Using an IoC container is endemic in the Java ecosystem, but unheard of in the Go ecosystem - and it's not hard to see which of them favours simplicity!
Well in that case, every Java DI framework I’ve encountered has that smell. Spring, Guice, all of them. Same in C#. Perhaps it’s an enterprise (in the pejorative) thing.
It is interesting to note that these frameworks are effectively non-existent in Go, Rust, Python - even Typescript, and no one complains you can’t build “real things” in any of those languages.
It's not just about testing. When any code constructs its own object, and that object is actually an abstraction of which we have many implementations, that code becomes stupidly inflexible.
For instance, some code which prints stuff, but doesn't take the output stream as a parameter, instead hard-coding to a standard output stream.
That leaves fewer options for testing also, as a secondary problem.
> Frameworks typically just streamline this process, and offers some flexibility at times - for example, when you happen to have different implementations of the same thing.
The very purpose of DI is to allow using a different implementation of the same thing in the first place. You shouldn’t need a framework to achieve that. And my personal experience happens to match that.
You're talking from the perspective of Java, which has been designed from the ground up with dependency injection in mind.
Dependency injection is the inversion of control pattern at the heart, which is something like oxygen to a Java dev.
In other languages, these issues are solved differently. From my perspective as someone whose day job has been roughly 60+% Java for over 10 years now... I think I agree with the central message of the article. Unless you're currently in the Java world, you're probably better off without it.
These patterns work and will on paper reduce complexity - but it comes at the cost of massively increased mental overhead if you actually need to address a bug that touches more than a minuscule amount of code.
/Edit: and I'd like to mention that the article actually only dislikes the frameworks, not the pattern itself
DI wasn't around when Java (or .Net) came out. DI is a fairly new thing too, relatively speaking, like after ORMs and anonymous methods. Like after Java 7 I think. Or maybe 8? Not a Java person myself.
I know in .net, it was only really the switch to .net core where it became an integral part of the frameworks. In MVC 5 you had to add a third party DI container.
So how can it have been designed for it from the ground up?
In fact, if you're saying 10 years, that's roughly when DI became popular.
You're wrong about other languages not needing it. Yes, statically typed languages need it for unit testing, but you don't seem to realize that from a practical perspective DI solves a lot of the problems around request lifetimes too. And from an architectural context it solves a lot of the problem of how to stop bad developers overly coupling their services.
Before DI people often used static methods, so you'd have a real mess of heavily interdependent services. It can still happen now, but it's nowhere near as bad as the mess of programming in the 2000s.
DI helped reduce coupling and spaghetti code.
DI also forces you to 'declare' your dependencies, so it's easy to see when a class has got out of control.
Edit: I could keep on adding, but one final thing. Java and .NET are actually quite cumbersome for DI, and Go is actually easier, because Go has implicit interfaces and the older languages don't - which would really help reduce boilerplate DI code.
A lot of interfaces in Java/C# only exist to allow DI to work, and are otherwise a pointless waste of time/code.
It’s not correct that Java was designed for it, unless you want to call class loading dependency injection. It’s merely that Java’s reflection mechanism happened to enable DI frameworks. The earlier Java concept was service locators (also discussed in the article linked above).
> but it's really just all about making the code easier to test. Basically everything that the class depends upon has to be informed during construction.
It is useful for more than testing (although, depending on the kind of tests being made, it might not always be useful for all kinds of tests). It also allows you to avoid a program having too many dependencies that you might not need (although this can also cause a problem; it could perhaps be avoided by providing optional dependencies, and macros (or whatever other facility is appropriate in the programming language you are using) to use them), and it more easily allows the caller to specify customized methods for some things (which is useful in many programs, e.g. if you want customized X.509 certificate validation in a program, or customized handling of displaying/requesting/manipulating text, or use of a virtual file system).
In a C program, you can use a FILE object for I/O. Instead of using fopen or standard I/O, a library could accept a FILE object that you had previously provided, which might or might not be an actual file, so it does not need to deal with file names.
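Translated to Java (since most of this thread is Java), the same idea is accepting an already-open stream rather than a file name - a hedged sketch:

    import java.io.IOException;
    import java.io.Reader;
    import java.io.StringReader;

    class LineCounter {
        // Works on any Reader; it never touches file names,
        // so a test can hand it a StringReader instead of a real file.
        int countLines(Reader input) throws IOException {
            int lines = 0;
            int c;
            while ((c = input.read()) != -1) {
                if (c == '\n') lines++;
            }
            return lines;
        }
    }

    // In a test: new LineCounter().countLines(new StringReader("a,b\n1,2\n"))  ->  2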
> This will be a pain in the ass to test, you might need to test, for example a date when there's a DST change, or February 28th in a leap year, etc.
I think that better operating system design with capability-based security would help with this and other problems, although having dependency injection can also be helpful for other purposes too.
Capability-based security is useful for many things. Not only does it help with testing, it also helps you work around a problem if a program does not work properly on a leap year - you can tell that specific program that the current date is actually a different date - and it can also be used for security, etc. (With my idea, it also allows a program to execute in a deterministic way, which also helps with testing and other things, including resisting fingerprinting.)
I frequently find DI pattern to show up in Java... But I also frequently find that Java gives me all the handcuffs of systems languages with few of the benefits of more flexible duck-typing languages.
If you can't monkey-patch the getDate function with a mock in a testing context because your language won't allow it, that's a language smell, not a pattern smell.
> If you can't monkey-patch the getDate function with a mock in a testing context because your language won't allow it, that's a language smell, not a pattern smell.
Of course you can do it in Java. But it is widely considered poor practice, for good reason, and is generally avoided.
> If you can't monkey-patch the getDate function with a mock in a testing context because your language won't allow it, that's a language smell, not a pattern smell.
Not so fast. Constraints like "no monkeypatching allowed" are part of what make it possible to reason about code at an abstract level, i.e., without having to understand in detail every control path that could possibly have run before. Allowing monkeypatching at the language level means discarding that useful reasoning tool.
I'm not saying that "no monkeypatching allowed" is always ideal, but it is a tradeoff.
(Consider why so many languages have something like a "const" modifier, which purely restricts what you can do with the object in question. The restriction reduces what you can do with it, but increases what you know about it.)
Sure, a policy like that helps, but relying on programmer discipline like this only scales so far. In a large enough code base, if something can be done, someone will have done it somewhere.
(If you have linter rules or static analysis that can detect monkeypatching and runs on every commit (or push to main, or whatever), you're good.)
You CAN in fact monkeypatch getDate - look at a Mockito add-on known as PowerMockito! While it's impossible to mock it out in the normal JVM "happy path," the JVM is powerful enough to let you mess with classloading and edit the bytecode at load-time to mock out even system classes.
(Disclaimer: have not used PowerMockito in ages, am not confident it works with the new module system.)
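For what it's worth, plain Mockito (3.4+ with the inline mock maker) can also stub static methods on your own classes, without the classloader surgery - a rough sketch; it won't reach into JDK internals the way PowerMock does:

    import static org.mockito.Mockito.mockStatic;

    import java.time.Instant;
    import org.mockito.MockedStatic;

    // Hypothetical class owning the static call we want to stub.
    class SystemClock {
        static Instant now() { return Instant.now(); }
    }

    class StaticMockSketch {
        void example() {
            Instant fixed = Instant.parse("2024-02-29T00:00:00Z");
            // The stub only applies inside the try block and is undone when it closes.
            try (MockedStatic<SystemClock> mocked = mockStatic(SystemClock.class)) {
                mocked.when(SystemClock::now).thenReturn(fixed);
                assert SystemClock.now().equals(fixed);
            }
        }
    }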
Regarding 'frameworks'. Golang already ships with a framework natively because of the design of the language. Therefore the point is moot in that specific context. Hence the post.
Another downside of DI is how it breaks code navigation in IDEs. Without DI, I can easily navigate from an instance to where it's constructed, but with DI this becomes detached. This variable implements Foo, but which implementation is it?
If your IDE starts to decide how you code and what kind of architecture/design you can use, I kind of feel like the IDE is becoming something more than just an IDE and probably you should try to find something else. But I mainly program in vim/nvim so maybe it's just par for the course with IDEs and I don't know what I'm talking about.
Are you not using an LSP with your text editor? If you are then you'll run into the same issue because it's the underlying technology. If you aren't using an LSP then you're probably leaving some workflow efficiency on the table.
I think probably when I write Rust, it does use LSP somehow, but most of the time I use Conjure (by Olical), and I don't think it uses LSP, as far as I know at least, but haven't dug around in the internals much.
> then you're probably leaving some workflow efficiency on the table
Typical HN to assume what the best workflow efficiency is, and that it mostly hinges on a specific technology usage :)
Imagine I'd claim that since you're not using nrepl and a repl connected to your editor, you're leaving some workflow efficiency on the table, even though I know nothing about your environment, context or even what language you program in usually.
Ok, don't forget looking into nrepl, so you can be as productive as me and others. That's the second time, now we just wait for the third for you to seriously consider it.
This is what I hate most about DI as well, and when I told some other devs about this pet peeve of mine they looked at me like I had 2 heads or something.
Definitely still an issue in C#. C# devs are just comfortable with the way it is because they don't know better and are held hostage. Everything in the C# world after a certain size will involve IoC/DI and the entire ecosystem of frameworks that has co-evolved with it.
The issues are still there. You can't just "go to definition" of the class being injected into yours, even if there is only one. You get the Interface you expect (because hey you have to depend on Interfaces because of something something unit-testing), and then see what implements that interface. And no, it will not just point to your single implementation, it'll find the test implementation too.
But where that "thing" gets instantiated is still a mystery and depends on config-file configured life-cycles, the bootstrapping of your application, whether the dependency gets loaded from a DLL, etc. It's black-box elephants all the way to the start of your application. And all that you see at the start is something vague like: var myApp = MyDIFramework.getInstance(MyAppClass); Your constructors, and where they get called from is in a never-ending abyss of thick and unreadable framework code that is miles away from your actual app. Sacrificed at the alter of job-creation, unit-testing and evangelist's talk-resume padding.
Java is virtual by default. C# is not. You could mark every single method virtual and mock it like Java. But it’s easier to define a contract and mock that.
Yes, the comments about "$25 name for a 5c concept" ring true when you're looking at a toy example with constructor(logger) { .. }.
Then you look at an enterprise app with 10 years of history, with tests requiring 30 mocks, using a custom DI framework that only 2 people understand, with multiple versions of the same service, and it feels like you've entered another world where it's straight up impossible to debug code.
> You can't just "go to definition" of the class being injected into yours, even if there is only one.
This situation isn't unique when using DI (although admittedly DI does make using interfaces more common). However, that's what the "go to implementation" menu option is for.
For a console app, you're right that a DI framework adds a lot of complexity. But for a web app, you've already got all that framework code managing controller construction. If you've got the black box anyways, might as well embrace it.
Make those dependency interfaces dynamic enough to be practically untyped, introduce arbitrary implicit ordering requirements, and we have now invented Middleware.
I haven't really done any c# for 5+ years. What has changed?
I remember trying to effectively reverse-engineer a codebase (code available but nobody knew how it worked) with a lot of DI and it was fairly painful.
Maybe it was possible back then and I just didn't know how ¯\_(ツ)_/¯
If the rules of the dependency injection framework are well understood, the IDE can build a model in the background and make it navigable. I can't speak for C#, but Spring is navigable in IntelliJ. It will tell you which implementation is used, or if one is missing.
In a Spring application there are a lot of (effective) singletons, the "which implementation of the variable that implements Foo is it" becomes also less of a question.
In any case, we use Spring on a daily basis, and what you describe is not a real issue for us.
Also, what I think is also important to differentiate between: dependency injection, and programming against interfaces.
Interfaces are good, and there was a while where infant DI and mocking frameworks didn't work without them, so that folks created an interface for every class and only ever used the interface in the dependent classes. But the need for interfaces has been heavily misunderstood and overstated. Most dependencies can just be classes, and that means you can in fact click right into the implementation, not because the IDE understands DI, but because it understands the language (Java).
Don't hate DI for the gotten-out-of-control "programming against interfaces".
In every language/IDE I've ever used, ctrl-click takes you to the interface definition; then you have a second "Show implementations" step that lists the implementations (which is usually really slow), and finally you have to select the right implementation from the list.
It's technically a flaw of using generic interfaces, rather than DI. But the latter basically always implies the former.
I’m not sure why you’re being down voted despite being correct.
If there are multiple implementations it gives a list to navigate to. If there’s 1 it goes straight to it. Don’t know about IntelliJ but rider and vs do this. And if the solution is indexed this is fast.
Why, as a professional, would you not use professional tooling. Not just for DI, but there are many benefits to using an IDE. If you want to hone your skills in your own time by using a text editor, why not. But as a professional, denying the use of an IDE is a disservice to your team. (But hey, everyone's entitled their opinion!)
Edit: upon rereading I realize your point was about reading code, not writing it, so I guess that could be a different use case...
Being able to understand a system under fire with minimal tooling available is a property one must design for. If you get woken up at 3am with a production outage, the last thing you want to do is start digging through some smart-ass framework's idea of what is even running to figure out where the bug is.
There's nothing wrong with using an IDE most of the time, but building dependence on one such that you can't do anything without it is absolute folly.
"Dependency injection is too complicated, look at this one straight-line implementation" is not exactly a fair argument.
The whole point of DI is that when you can't just write that straight-line implementation it becomes easier, not harder. What if I've got 20 different handlers, each of which need 5-10 dependencies out of 30+, and which run in 4 different environments, using 2 different implementations. Now I've got hundreds of conditionals all of which need to line up perfectly across my codebase (and which also don't get compile time checks for branch coverage).
I work with DI at work and it's pretty much a necessity. I work without it on a medium sized hobby project at home and it's probably one of the things I'd most like to add there.
But can’t you do your DI by hand? The frameworks can really become absurd in their complexity for a lot of tasks (I’m sure they make sense in many situations). DI is a concept that can be orchestrated with normal code at the top of the execution just fine, eliminating a lot of cruft.
To be fair, with the numbers you threw out it sounds like a framework becomes valuable, but in most places I've seen DI frameworks, they could be replaced with manual DI and it would be much simpler.
When you say "do your DI by hand", do you mean wiring up each case, or do you mean writing a DI system that treats the cases generically? The former is what I'm suggesting becomes untenable at some point. The latter is just writing your own DI framework for, which I think would be fine.
DI frameworks are complicated, but they're a constant level of complicated, they don't get more complicated as the codebase grows. Not using a DI framework is simple at the beginning, but it grows, possibly exponentially, and at some point crosses the line of constant complexity from a DI framework.
Finding where those lines intersect is just good engineering. Ignoring the fact that they do intersect is not.
> Not using a DI framework is simple at the beginning, but it grows, possibly exponentially, and at some point crosses the line of constant complexity from a DI framework.
Look at my example numbers, and think through how you would write the setup process for that piece of software. You'd need a ton of conditionals checking all sorts of different things, woven throughout the codebase, in many different classes, etc.
With a DI framework of some kind, those conditionals would likely not exist, instead you'd be able to specify all the options, and the DI framework takes over stitching them together and finding the dependencies between them.
> What if I've got 20 different handlers, each of which need 5-10 dependencies out of 30+, and which run in 4 different environments, using 2 different implementations.
Listen to your code. If it’s hard to write all that, it probably means you have too many dependencies. Making it easier to jam more dependencies in your modules is just going to make things worse. “But I need all those…” Do you really? They rarely ever are all necessary, in my experience. Usually there’s a better way to untangle the dependency tree.
This was a hypothetical, but I've seen plenty of codebases like this. There's going to be some cruft in there, but even just the baseline for a service capable of releasing, with feature flags, safely, with canaries etc, to XXk-Xm requests per second and <XXms per request, with no downtime, is quite a lot.
Meh, I've worked on services in languages with a DI framework (Java+Guice or Spring Boot) and without (C++, Go), and the latter is much nicer. And yeah, some had Xm requests.
You just dispense with all the frippery that hides the fact that you are depending on global variables if you really want the Guice/Spring Boot experience.
The C++ code was much, much easier to trace by hand. It was easier to test. It started much much faster, speeding iteration. Meanwhile the Java service was a PITA to trace, took 30+ seconds to boot, and couldn't be AOT compiled to address that because of all the reflection.
I am referring to DI as a general practice. Go does not seem particularly well suited to DI, but I'm not a fan of it as a language in general because I don't think it lets you build the right abstractions.
I've done DI in Java (Guice), Python (pytest), Go (internal), and a little in C++ (internal). The best was Pytest, very simple but obviously quite specific. Guice is a bit of a hassle but actually fine when you get the hang of it, I found it very productive. The internal Go framework I've used is ok but limited by Go, and I don't have enough experience to draw conclusions from the C++ framework I've used.
Every so often a developer challenges the status quo.
Why should we do it like this, why is the D in SOLID so important when it causes pain?
This is lack of experience showing.
DI is absolutely not needed for small projects, but once you start building out larger projects the reason quickly becomes apparent.
Containers...
- Create proxies wrapping the objects; if you don't centralise construction management this becomes difficult.
- Cross cutting concerns will be missed and need to be wired everywhere manually.
- Manage objects' life cycles, not just construction.
It also ensures you code to the interface.
Concrete classes are bad; just watch what happens when a team mate decides they want to change your implementation to suit their own use cases, rather than write a new implementation of the interface. Multiply that by 10x when in a stack.
Once you realise the DI pain is for managing this (and not just allowing you to swap implementations, as is often the poster boy), automating areas prone to manual bugs, and enforcing good practices, the reasons for using it should hopefully be obvious. :)
The D in SOLID is for dependency INVERSION not injection.
Most dependency injection that I see in the wild completely misses this distinction. Inversion can promote good engineering practices, injection can be used to help with the inversion, but you don’t need to use it.
Agreed, and I conflated the two since I've been describing SOLID in ways other devs in my team would understand for years.
Liskov substitution for example is an overkill way of saying don't create an implementation that throws an UnsupportedOperationException, instead break the interfaces up (Interface Segregation "I" in SOLID) and use the interface you need.
Quoting the theory to junior devs instead just makes their eyes roll :D
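The kind of refactor I mean, as a toy example:

    // Before: one fat interface forces read-only implementations to lie.
    interface Repository {
        String load(String id);
        void save(String id, String value);   // read-only impls end up throwing UnsupportedOperationException
    }

    // After: split it, and depend only on the part you need.
    interface ReadOnlyRepository {
        String load(String id);
    }

    interface WritableRepository extends ReadOnlyRepository {
        void save(String id, String value);
    }

    class ReportGenerator {
        private final ReadOnlyRepository repo;   // never tempted to call save()
        ReportGenerator(ReadOnlyRepository repo) { this.repo = repo; }
    }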
Honestly inversion kinda sucks because everybody does it wrong. Inversion only makes sense if you also create adapters, and it only makes sense to create adapters if you want to abstract away some code you don’t own. If you own all the code (ie layered code), dependency inversion is nonsensical. Dependency injection is great in this case but not inversion.
It's not just not needed for small projects it is actively harmful.
It's also actively unhelpful for large projects which have relatively more simple logic but complex interfaces with other services (usually databases).
DI multiplies the amount of code you need - a high cost for which there must be a benefit. It only pays off in proportion to the ratio of complexity of domain logic to integration logic.
Once you have enough experience on a variety of different projects you should hopefully start to pick up on the trade-offs inherent in using it, and see when it is a good idea and when it has a net negative cost.
While I agree this is largely a "skill issue", I'm not so sure it's in the direction you seem to think it is.
Almost nothing written using Go uses an IoC container (which is what I assume you're meaning by DI here). It's hard to argue that "larger projects" cannot or indeed are not built using Go, so your argument is simply invalid.
I've written a couple large apps using Uber's FX and it was great. The reason why it worked so well was that it forced me to organize my code in such a way as to make it super easy to test. It also had a few features around startup/shutdown and the concept of "services" and "logging" that are extremely convenient in an app that runs from systemd.
All of the complexity boils down to the fact that you have to remember to register your services before you can use them. If you forget, the stack trace is pretty hard to debug. Given that you're already deep into FX, it becomes pretty natural to remember this.
That said, I'd say that if you don't care about unit tests or you are good enough about always writing code that already takes things in constructors, you probably don't need this.
Separating your glue code from your business logic is a good idea for several reasons. That's all dependency injection, or inversion of control is. It's more of a design pattern than a framework thing. And structuring your code right means that things are a bit easier to test and understand as well (those two things go hand in hand). Works in C, Rust, Kotlin, Javascript, Java, Ruby, Python, Scala, Php, etc. The language doesn't really matter. Glue code needs to be separate from whatever the code does.
Some languages seem to naturally invite people to do the wrong thing. Javascript is a great example of this that seems to bring out the worst in people. Many of the people wielding that aren't very experienced and when they routinely initialize random crap in the middle of their business logic executed asynchronously via some event as a side effect of a butterfly stirring its wings on the other side of the planet, you end up with the typical flaky untestable, and unholy mess that is the typical Javascript code base. Exaggerating a bit here of course but I've seen some epically bad code and much of that was junior Javascript developers being more than a little bit clueless on this front.
Doing DI isn't that hard. Just don't initialize stuff in places that do useful things. No exceptions. Unfortunately, it's hard to fix in a code base that violates that rule. Because you first have to untangle the whole spaghetti ball before you can begin beating some sense into it. The bigger the code base, the more likely it is that it's just easier to just burn it down to the ground and starting from scratch. Do it right and your code might still be actively maintained a decade or more in the future. Do it wrong and your code will probably be unceremoniously deleted by the next person that inherits your mess.
I think people's associations might be wrong then. In general, people seem to have a lot of misconceptions about DI. Like needing frameworks. Basically by inverting control of what initializes code, you create a hard separation between glue code and logic. Any logic that initializes code would violate that principle. You inject your dependencies because you are not allowed to create them yourself.
And yes, that is good coding practice. That kind of was my point.
> I think people's associations might be wrong then
I don't disagree, just that when talking about any given subject having an understanding of how the audience already thinks about that subject is somewhat important.
The bigger better goal here is probably to try and get folks to internally separate DI as a pattern from DI/IOC frameworks.
It always blew my mind that "dependency injection" is this big brouhaha and warrants making frameworks, when dynamic vars in Lisp basically accomplish the same task without any fanfare or glory.
> when you need a delayed job server to have the user context of different users depending who triggered the job
I feel this is just a facet of the same confusion that leads to creating beautiful declarative systems, which end up being used purely imperatively because it's the only way to use them to do something useful in the real world; or, the "config file format lifecycle" phenomenon, where config files naturally tend to become ugly, half-assed Turing-complete programming languages.
People design systems too simple and constrained for the job, then notice too late and have to hack around it, and then you get stuff like this.
For the standard web page lifecycle it's fine, but for instances like this it really does become fiddly.
Often it is possible, but an ideological stance the framework team has taken leads to poor documentation.
The asp.net core team have some weird hills they die on, and some incredibly poor designs that stem from an over adherence to trendy patterns. It often feels they don't understand why those patterns exists.
This results in them hardly documenting how to use the DI outside of their 'ideal' flow.
They also try to push devs to use DI for injecting config, which no other language does and is just unnecessarily complicated. And it's ended up with a system no one really understands, while the old System.Configuration, while clunky, at least automatically restarted the app when you edited the config - which is the 95% use case most devs would want.
FWIW, when I thought about it in the larger enterprise context, I realized that I also hold a seemingly opposite view. I presented that elsewhere in this thread:
TL;DR: the goal of enterprise frameworks isn't to make Perfect Software Framework or to make code beautiful, devoid of bloat, or even easy. Their goal is to make programming consistent and predictable, to make programmers exchangeable. It's to allow an average developer to churn around working results at a predictable pace, as long as the project is just standard stuff, and they don't bring their own opinions into it. Large businesses want things this way, because that's how they think about everything (see also: Seeing Like a State).
Of course, this doesn't mean the framework authors succeed at that goal either :). Some decisions are plain stupid. But less than one would think.
DI is a very religious concept, people hate it or love it.
I myself am on the dislike camp, I have found that mocking modules (like you can with NodeJS testing frameworks) for tests gives most of the benefits with way less development hell. However you do need to be careful with the module boundaries (basically structure them as you would with DI) otherwise you can end up with a very messy testing system.
The value of DI is also directly proportional to the size of the service being tested, DI went into decline as things became more micro-servicy with network-enforced module boundaries. People are just mocking external services in these kind of codebases instead of internal modules, which makes the boundaries easier.
I can see strict DI still being useful in large monolith codebases worked by a lot of hands, if only to force people to structure their modules properly.
This will vary from firm to firm depending on what you're writing, but I generally find DI to be more complexity than needed. Granted,
- I'm willing to rewrite some code if we decide that a core library needs to get swapped out
- I'm always using languages that allow monkey-patching, so I'm not struggling to test my code because, for example, it's hard to substitute a mock implementation for `Date.now()`.
DI makes more sense if you're not in that position. But in that position, DI adds "you need these three files in your brain at the same time to really understand what's going on" complexity that I seek to avoid.
(Also, DI overlaps with generics, and to the extent that you can make things generic and it makes sense to do so, you should).
I think you might be living in a bubble of some sort? Spring, specifically Spring Boot, is extremely popular. Calling it unfashionable is simply wrong.
I agree that it is a cancer. Monocultures are rarely a good idea. And I strongly prefer explicit dependencies and/or compile time magic over runtime magic. But it is "convenient" and very much en vogue.
I think the problem I'm trying to describe is: yes, Spring is popular among the people that do work on Java applications; however, the Java/Spring platform is the reason most developers do not want to use Java. Java/Quarkus, Java/Micronaut or even Java/Vert.x would be more popular if they became the default Java framework instead of Java/Spring.
>Strangely I seem to have built all of my software without dependency injection
I'm going to guess that you've most likely used dependency injection without even thinking about it. It's one of those things you naturally do because it makes sense, even if you don't know it has an actual name, frameworks, and all that other stuff that often only makes it more confusing.
You must not work in an object-oriented language, then? (Which is very possible.) Or did you mean that you have never built software with a dependency injection framework?
Yeah I once got a job and after I got the job when they found out I'd never done dependency injection they said "we'd never have hired you if we knew that." Mind you that same manager also believed no code should ever be written if it doesn't have a test written first - real code is only ever an outcome of writing something to match what a test expects - poof - all the fun and creativity went out of programming there in an instant.
My philosophy of programming is "there's no right way to do it, do what works for you and makes you happy and if someone tell you you're doing it wrong, pay no attention - they're a bully and a fool".
This isn't about bullying someone into writing tests, it is about creating value that lasts over an extended period of time.
The value of tests doesn't generally come from when you first write them. It comes from when you're working on a codebase written by someone else (who has long ago quit, or been fired).
It helps me understand and be able to refactor their code. It gives me the confidence to routinely ship something to production and know that it won't break.
What confidence would you have in tests written by a person who left a long time ago? Having tests (that somebody else wrote) could mislead you into believing they've got your back.
I’m not sure what point you’re trying to make. This is clearly better than having no tests at all. It’s not like I’m flying blind, the tests are right there, and I can read through them. If the coverage isn’t good enough, I can always add more. And let’s not ignore the fact that the presence of tests in the first place means someone gave a damn.
That only works if what you're doing actually works - not just in terms of producing code that works once, but in terms of producing code that's maintainable. I don't know for sure that you're a "terrible programmer", but you're saying all the things that the terrible programmers I've worked with tended to say.
I think I can understand the boat you're in, bro. Both of the things that you don't do, I also didn't do for quite a long time, and I didn't particularly see the value in doing them (once upon a time); but I've been on a journey to make them part of how I code, and I'm pretty sure that I'm a better coder now than I was back then.
Writing tests for nearly all my code, in particular, is these days the only way I roll - and as for TDD (i.e. write the test and let it fail first, then write the actual code and make the test pass), I do it quite often, and I guarantee you that - contrary to your opinion - it makes coding a whole new kind of fun and creative. Dependency injection I still consider myself less of a ninja at, but I've done it (and seen it done) enough times now that I get it and I see the value in it.
I think it's a bit stupid for an employer to say "we'd never have hired you if we knew you had no experience in X" (sure, this doesn't apply to all skills, but I'd say it applies to quite a few). If you're worth hiring, then you'll pick up X within a few months on the job. I'm grateful to several past employers of mine, for showing me the ropes of TDD and DI (among many other things).
Anyway, I'm not saying that the above things are "the (only) right way to do it", and please don't take my above ramblings as making a judgement on your coding prowess. I agree, do what works for you. I'm just saying that there's always more to learn, and that you should always strive to be open-minded to new skills and new approaches.
What is there to be a "ninja" about when it comes to DI? As the article explains in the beginning it just means that you initialize and pass something into whatever depends on it instead of initializing it inside that thing.
It's too complicated of a term for what it is because we generally don't say we inject arguments into a function when we call a function.
But maybe you mean patterns building on that, e.g. repository/adapter patterns.
Not really. Framing or not framing a house would be a core requirement if the job needs that. Having used DI is not a core requirement; it is something you can learn in 2 hours if you are experienced. It might be like a carpenter who hasn't used a specific tool but has used another like it, and there is a 4hr training at the local college on how to use the new one.
Or like a pilot doesn't get a job because they flew a slightly older Airbus model and need to do some sim time.
I noticed usually DI is not necessary at runtime, but rather at compile (or boot) time.
In practice I noticed I'm ok with direct dependency as long as I can change the implementation with a compile time variable. For the tests, I use an alternative implementation, for development another.
I don't swap an implementation for another within the same code. It is an option, but it happens so rarely that it seems absurd optimizing for it.
So, I like dependency injection as a concept, but I avoid it to reduce complexity. The advantage is that you can get by with a lot more "global" code. In Go this is particularly nice since DI is really nasty (struct methods have various limitations)
I highly agree. I especially believe that manual DI should always be the starting point. Eventually one can evaluate if there really is a need for a framework. It's already dangerous if I have to change the code significantly just to satisfy the framework.
As someone who was raised in the religion of Java and Spring and SpringBoot (many years over many companies). It was a revelation to work on micro-services that didn’t use a DI framework. I’m now thoroughly against them.
Rules are different with microservices, like globals can be ok. But even in large projects with big services, idk what problem these frameworks are trying to solve. I've never felt the slightest need to introduce some metaprogramming to get dependencies where they need to be.
Though my daily work involves plenty of DI, and I see the need for it, I see some unfortunate side-effects, in the behaviours it 'causes'.
- the 'autopilot GPS' problem: colleagues who basically have no idea how things fit together, because DI connects the dots for them. So they end up with either a vague or no mental model of what happens below the surface.
- the same, but related to scope and costs:
Costs: Because they don't touch what is 'built behind the scenes', they get no sense of how costly it is ('every time you use that thing, it instantiates and throws away a million things').
Scope: Often business logic dictates that the construction hierarchy consists of finely tuned parts: You don't just need an instance of 'Foo', you need a Foo instance that originates from a specific/certain request. And if you then use two Bars together, where Bar 1 is tied to Foo 1 but Bar 2 is tied to Foo 2, you will get strange spurious errors (think, for example, ORMs and database transactions - the two Foos or Bars may relate to different connections, transactions or cursors).
One antipattern I have seen (which may actually be an argument FOR DI..), is the 'query everything' service, which incorporates 117 other sub-services.
And some of the junior developers just love that service, because "then I can query everything, from a single place!"
(yes.. but you just connected to 4 databases with 7 connections, and you are only trying to select a single row from one of them. And again, code with the everything-service becomes quite untestable).
The best part of the article is its advice for triggering the broken dependencies at compile-time, I really hate when I have to go through complicated flow #127 to learn that a dependency is broken.
Can you show a project that effectively uses effect-ts? The docs are a tsunami of information that looks like it's trying to make a whole new language out of TS. If someone else had to review my code I doubt they'd know what was going on.
You probably don't need functional programming. Here is how to do it with a for-loop.
You don't see many articles written like that, because it would be kind of obvious that the author hasn't bothered to understand the approach he is criticizing.
Yet when it comes to OO concepts people from "superior" platforms like Go or the FP crowd just cannot let go of airing their ignorance.
Just leave OO alone unless you are genuinely interested in the approach.
I don't like DI as a concept because it typically obscures the path/source of the file where the relevant code is located. DI trades off long-term readability for short-term implementation convenience. Maintainability requires strict adherence to and awareness of conventions, which is something that many developers are terrible with.
> But that reflection-driven magic is also where the pain starts. As your graph grows, it gets harder to tell which constructor feeds which one. Some constructors take one parameter, some take three. There’s no single place you can glance at to understand the wiring. It’s all figured out inside the container at runtime.
That's the whole point. Dependency inversion allows you to write part of the code in separation, without worrying about all the dependencies of each component you create and what creates what where.
If your code is small enough that you can keep all the dependencies in your head at the same time and it doesn't slow you down much to pass them all around all the time - DI isn't worth it.
If it becomes an issue - DI starts to shine. There are other solutions as well, obviously (mostly in the form of Object-Orientified global variables - for example you keep everything in GameWorld object and pass it everywhere).
As a (mainly) Python dev, I'm aware that there are DI frameworks out there, but personally I haven't to date used any of them.
My favourite little hack for simple framework-less DI in Python these days looks something like this:
    import time
    from unittest.mock import MagicMock

    # The code that we want to call
    def do_foo(sleep_func=None):
        _sleep_func = sleep_func if sleep_func is not None else time.sleep
        for _ in range(10):
            _sleep_func(1)

    # Calling it in non-test code
    # (we want it to actually take 10 seconds to run)
    def main():
        do_foo()

    # Calling it in test code
    # (we want it to take mere milliseconds to run, but nevertheless we
    # want to test that it sleeps 10 times!)
    def test_do_foo():
        mock_sleep_func = MagicMock()
        do_foo(sleep_func=mock_sleep_func)
        assert mock_sleep_func.call_count == 10
However, in Python I prefer to use true DI. I mostly like Injector[0] because it’s lightweight and more like a set of primitives than an actual framework. Very easy to build functionality on top of and reuse - I have one set of modules that can be loaded for an API server, CLI, integration tests, offline workers, etc.
That said, I have a few problems with it - two features which I feel are the bare minimum required, and one that isn't there but could be powerful. You can't provide async dependencies natively, which is not usable in 2025 - and it's keyed purely on type, so if you want to return different string instances with some additional key you need a wrapper type.
Between these problems and missing features (Pydantic-like eval time validation of wiring graphs) I really want to write my own library.
However, as a testament to the flexibility of Injector, I could implement all 3 of these things as a layer on top without modifying its code directly.
Yeah, I use this sometimes too (even though Python makes "monkey patching" easy). However, note that it's simpler and clearer to use a default value for the argument:
def do_foo(sleep=time.sleep):
    for _ in range(10):
        sleep(1)
The basic pattern of not having one object construct some other object that has external references to it is.. kind of obvious. I didn't know there was a name for it but sure, I agree, fine.
But the way DI is usually implemented is with this bag of global variables which you just reach in and grab the first object of the desired type. I call this the little jack horner pattern. Stick in your thumb and pull out a plum. That, is stupid. You've reinvented global variables, but actually worse. Congratulations.
The system is composed of classes which are nicely encapsulated, independent and obey Liskov substitution and all that. You can connect them in different arrangements and they play along nicely.
But then some classes which use other classes hard code those classes in their constructor. They then work with those specific hard-coded classes. It's like if someone crazy-glued some of our Lego blocks together.
We recognize this problem and allow the sister objects to be configurable.
Then some opinionated numbnut comes along and says, "hey, we should call this simple correction 'dependency injection'". And somehow, everyone listens.
Common sense is in short supply these days. It's a shame we need blog posts like these to outline how much you lose when you go with the "magic" approach. Devs just seem to be allergic to simple but verbose code.
The key to using a framework effectively, whether it's Spring in Java or SAP for your business, is to accept that the framework knows better than you - especially when it objectively does not- and when there's a difference between how you or your business think of things, vs. how the framework frames them, it's your thoughts and your business that must change. Otherwise, you're fighting the framework, and that's worse than just not using it.
Do you not think I've heard that line before? The framework knows nothing. It's made by a bunch of children who make CRUD apps wrapping an SQL query behind an HTTP server over and over again. They don't make any applications that do anything commercially or technically interesting, resorting instead to infinitely copying data structures with increasingly complex annotations, a practice they call "business logic", to trick themselves into feeling like they're doing something.
I've been there. I've seen it. It doesn't lead anywhere. The abstractions that Spring (and other heavyweight JavaEE-style frameworks) provide are razor thin, and usually implemented in the most trivial possible way. The frameworks, like the applications often built on them, do nothing interesting.
EDIT: I realize this is a pretty unkind way to put it. I hope readers can understand the argument along with the indignation I express. I do believe very strongly in these points, but wish I could express them without quite as much anger. I can't, though. Parse out what useful stuff you can glean, and leave the rest along with the knowledge that you don't have to impress me.
I'm sorry too, I realize I didn't make my point clear. Yes, frameworks are stupid. Their designs are likely suboptimal from the start, and only get worse over time, accumulating hacks to address the biggest issues while they double down on going in the wrong direction. A competent engineer will easily come up with better ways of doing any individual thing a framework offers, and they have a good shot at designing a framework much better suited to the needs of their team and their business.
Which is why I brought up SAP. It's well-known that adopting SAP usually ends up burning untold millions of dollars on getting expensive consultants to customize it for specific needs of the company, until it either gets written off and abandoned, or results in a "successful" adoption that everyone hates.
It's less well-known that the key to adopting SAP effectively and getting the promised value out of it is to change your business process to fit SAP. Yes, whatever processes a business had originally were likely much smarter and better for their specific situation and domain, but if they really want the benefits that come with SAP, abandoning wisdom of your old ways is the price of admission.
I say the same is true of software frameworks. Most businesses don't do anything interesting or deep either; you don't integrate with SAP to handle novel challenges, you do it to scale and streamline your business, which actually involves making most work more mindless and boring, so it can be reliably and consistently done by average employees. Software frameworks, too, aren't there to help you with novel technical challenges; they're there to allow businesses to be more efficient at doing the same boring shit every other business is doing.
I personally hate those frameworks, but that's because they're not meant for people like me; it doesn't mean they don't work. They're just another form of bureaucracy - they suck all life and fun and individuality from work, but by doing that, they enable it to scale.
Of the countless pros of Dependency Injection, it allows for near-perfect test isolation, among many other benefits. Test the code under test, mock the rest. Not to mention limitless composition instead of inheritance.
The real con of Dependency Injection = {Developer Egos + (Boredom | Lax Deadlines) + Lack of Senior Oversight}, which inevitably yields needless overengineering.
I'm ok with "dependency injection" being confused with "dependency injection framework," cause it's silly to have a name for the first thing. Might as well call it "parameter injection" when I call a function, and "memory carburation" when I instantiate a variable.
I agree. I had to do what the article says in Node for a project, for $reasons, but secretly I loved not using a framework and having the construction explicit. I've also seen bugs arise because tests set up DI differently to prod.
Don't hate a paradigm because you only experienced one bad implementation of it.
In IntelliJ, with the Spring Framework, you can have thorough tooling: You can inspect beans, their dependencies, you even get a visual bean graph, you can write mocks and test dependencies and don't even need interfaces anymore and if a dependency is missing, you will receive an IDE warning before runtime.
I do not understand why people are so excited about a language and its frameworks where the wheel is still actively being reinvented in a worse way.
IoC is nice (or DI as a concept in particular), but DI frameworks/libraries sometimes are a mess.
I've had my fair share of Java and Spring Boot projects and it breaks in all sorts of stupid ways there, even things like the same exact code and runtime environment working in a container that's built locally, but not working when the "same" container is built on a CI server: https://blog.kronis.dev/blog/it-works-on-my-docker
It was literally a case where Spring Boot DI just throws a hissy fit that you cannot easily track down. I had to mess around with the @Lazy annotation (despite the configuration to permit that being explicitly turned on too) in over 100 places to resolve the issue. On top of that, when you try to inject a list of all classes that implement an interface with @Lazy, it doesn't seem like their order is guaranteed either, so your DefaultValidator needs to be tacked onto that list manually at the end.
Sorry about the Java/Spring rant.
It very much feels like the proper place for most DI is at compile time (like Dagger does for Java, seems closer to wire) not at runtime, or just keep IoC without a DI framework/library and having your code look a bit more like this:
@Override
public void run(final BackendConfiguration configuration,
                final Environment environment) throws IOException, TimeoutException {
    // Initialize our data stores
    mariaDBManager = new MariaDBManager(configuration, environment);
    redisManager = new RedisManager(configuration);
    rabbitMQManager = new RabbitMQManager(configuration);
    // Initialize our generic services
    keyValueService = new KeyValueService(redisManager);
    sessionService = new SessionService(keyValueService, configuration);
    queueService = new QueueService(rabbitMQManager);
    // Initialize services needed by resources
    accountService = new AccountService(mariaDBManager);
    accountBalanceService = new AccountBalanceService(mariaDBManager);
    auctionService = new AuctionService(mariaDBManager);
    auctionLotService = new AuctionLotService(mariaDBManager);
    auctionLotBidService = new AuctionLotBidService(mariaDBManager);
    // Initialize background processes based on feature configuration
    if (configuration.getApplicationConfiguration().getFeaturesConfiguration().isProcessBids()) {
        bidListener = new BidListener(queueService, auctionLotBidService, auctionLotService, accountBalanceService);
        try {
            bidListener.start();
            logger.info("BidListener started");
        } catch (IOException e) {
            logger.error("Error starting BidListener: {}", e.getMessage(), e);
        }
    }
    // Register resources based on feature configuration
    if (configuration.getApplicationConfiguration().getFeaturesConfiguration().isAccounts()) {
        environment.jersey().register(new AccountResource(accountService, accountBalanceService, sessionService, configuration));
    }
    if (configuration.getApplicationConfiguration().getFeaturesConfiguration().isBids()) {
        environment.jersey().register(new AuctionResource(
            auctionService, auctionLotService, auctionLotBidService, sessionService, queueService));
    }
    ...
}
Just a snippet of code from a Java Dropwizard example project, not all of its contents either, but it should show that it's nothing impossibly difficult. The same principles apply to other languages and tech stacks, plus the above is unequivocally easier to put a breakpoint in and debug vs. some dynamic annotation- or convention-based mess.
Overall, I agree with the article, even across multiple languages.
Dependency Injection has a fancy name that makes some developers uncomfortable, but it's really just all about making the code easier to test. Basically everything that the class depends upon has to be informed during construction.
This can be done manually, but becomes a chore super fast - and will be a very annoying thing to maintain as soon as you change the constructor of something widely used in your project to accept a new parameter.
Frameworks typically just streamline this process, and offers some flexibility at times - for example, when you happen to have different implementations of the same thing. I find it funny that people rally against those Frameworks so often.
To make things more concrete, let's say you have a method that gets the current date, and has some logic there (for example, it checks if today is EOM to do something). In Java, you could do `Instance.now()` to do this.
This will be a pain in the ass to test, you might need to test, for example a date when there's a DST change, or February 28th in a leap year, etc. With DI you can instead inject an `InstantSource` to your code, and on testing you can just mock the dependency to have a predictable date on each test.
Why is it a pain to inject dependencies manually? I think this is because people assume for some reason that a class isn't allowed to instantiate its own dependencies.
If you lift that assumption, the problem kind of goes away. James Shore calls this Parameterless Instantiation: https://www.jamesshore.com/v2/projects/nullables/testing-wit...
It means that in most cases, you just call a static factory method like create() rather than the constructor. This method will default to using Instance.now() but also gives you a way to provide your own now() for tests (or other non-standard situations).
At the top of the call stack, you call App.create() and boom - you have a whole tree of dependencies.
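To make that concrete, here is a minimal Java sketch of the static-factory flavour. The class and its methods are invented for illustration; only InstantSource and the java.time types are real JDK APIs.

import java.time.InstantSource;
import java.time.LocalDate;
import java.time.ZoneOffset;

class EndOfMonthBiller {
    private final InstantSource clock;

    private EndOfMonthBiller(InstantSource clock) {
        this.clock = clock;
    }

    // Production code calls this and never thinks about the dependency.
    static EndOfMonthBiller create() {
        return create(InstantSource.system());
    }

    // Tests (or other non-standard callers) can pass their own clock.
    static EndOfMonthBiller create(InstantSource clock) {
        return new EndOfMonthBiller(clock);
    }

    boolean isEndOfMonth() {
        LocalDate today = LocalDate.ofInstant(clock.instant(), ZoneOffset.UTC);
        return today.getDayOfMonth() == today.lengthOfMonth();
    }
}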
If the class instantiates its own dependencies then, by definition, you're not injecting those dependencies, so you're not doing dependency injection at all!
That seems too dogmatic. Does the fact that Foo has a static function that knows how to create its dependencies disqualify the class from being a case of DI?
Yes because injecting nowFn is trivial and not a case for DI. Consider a database handle, or network socket, or http response payload. Clearly each class shouldn't be making its own version of those.
You're right, stateful dependencies like DB handles need to be passed in manually, and that's a bit of extra legwork you need to do.
Just use a static variable for the DB instance / pool, maybe using singletons, so everyone needing it has access.
You're nitpicking for no good reason. You can create global handles to each of those items, and let them instantiate with the class or override them with a create function.
Dependency injection boils down the question of whether or not you can dynamically change a dependency at runtime.
In other words, despite all the noise to the contrary, hard-coded dependencies are fine.
James explains this a lot better than I can: https://www.jamesshore.com/v2/blog/2023/the-problem-with-dep...
> James Shore calls this Parameterless Instantiation
Mark Seemann calls it the Constrained Construction anti-pattern: https://blog.ploeh.dk/2011/04/27/Providerisnotapattern/#4c7b...
many of the things I dependeon are shared services where there should only be one instance. Singleton means a global variable. a di framework lets me have on without a global - meaning if I need a second service I can do it with just a change to how the di works.
there is no right answer of course. Time should be a global so that all timers/clocks advance in lock step. I hav a complex fake time system that allows my tests to advance minutes at a time without waiting on the wall clock. (If you deal with relativity this may not work - for everyone else I encourage it)
It often is not enough. Singletons frequently need to be provided to things (a connection pool, etc.).
Maybe I'm misunderstanding what you're saying but a connection pool seems like almost a canonical example of something that shouldn't be a singleton. You might want connection pools that connect to different databases or to the same database in read-only vs. read/write mode, etc.
I meant "singleton" in the sense of a single value for a type shared by anything that requires one, i.e. a Guice singleton ( https://github.com/google/guice/wiki/scopes#singleton ) not a value in global scope. Or maybe a single value by type with an annotation, the salient point is that there are values in a program that must be shared for correctness. Parameterless constructors prohibit you from using these (unless you have global variables).
Then these different pools can be separate singletons. You still don't want to instantiate multiple identical pools.
You can use the type system to your advantage. Cut a new type and inject a ReadOnlyDataSource or a SecondDatabaseDataSource or whatnot. Figure out what should only have one instance in your app, wrap a type around it, put it in the singleton scope, and inject that.
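Something like this rough Guice sketch, for example. The wrapper types, JDBC URLs, and pool construction are placeholders, not a real setup:

import com.google.inject.AbstractModule;
import com.google.inject.Provides;
import com.google.inject.Singleton;
import javax.sql.DataSource;

// Thin wrapper types so the two pools can be told apart by type alone.
record ReadOnlyDataSource(DataSource pool) {}
record ReadWriteDataSource(DataSource pool) {}

class DatabaseModule extends AbstractModule {
    @Provides @Singleton
    ReadOnlyDataSource readOnly() {
        return new ReadOnlyDataSource(buildPool("jdbc:postgresql://replica/app")); // placeholder URL
    }

    @Provides @Singleton
    ReadWriteDataSource readWrite() {
        return new ReadWriteDataSource(buildPool("jdbc:postgresql://primary/app")); // placeholder URL
    }

    private DataSource buildPool(String jdbcUrl) {
        // however you actually construct your pool - omitted here
        throw new UnsupportedOperationException("placeholder");
    }
}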
I think the point is that you can already do all of that with hand-wired dependency injection. I wrote about an example of that a couple of years ago here: https://jonathan-frere.com/posts/how-i-do-dependency-injecti...
This has the advantage that you don't need an extra framework/dependency to handle DI, and it means that dependencies are usually much easier to trace (because you've literally got all the code in your project, no metaprogramming or reflection required). There are limits to this style of DI, but in practice I've not reached those limits yet, and I suspect if you do reach those limits, your DI is just too complicated in the first place.
I think most people using these frameworks are aware that DI is just automated instantiation. If your program has a limited number of ways of composing instantiations, it may not be useful to you. The amount of ceremony reduced may not be worth the overhead.
This conversation repeats itself ad infinitum around DI, ORMs, caching, security, logging, validation, etc, etc, etc... no, you don't need a framework. You can write your own. There are three common outcomes of this:
* Your framework gets so complicated that someone rips it out and replaces it with one of the standard frameworks.
* Your framework gets so complicated that it turns into a popular project in its own right.
* Your app dies and your custom framework dies with it.
The third one is the most common.
I'm not suggesting a custom framework here, I'm suggesting no DI framework at all. No reflection, no extra configuration, no nothing, just composing classes manually using the normal structure of the language.
At some point this stops working, I agree — this isn't necessarily an infinitely scalable solution. At that point, switching to (1) is usually a fairly simple endeavour because you're already using DI, you just need to wire it together differently. But I've been surprised at the number of cases where just going without a framework altogether has been completely sufficient, and has been successful for far longer than I'd have originally expected.
If you construct an object once, then what is the difference between that and a singleton?
The answer is scope. Singletons exist explicitly for the purpose of violating scoping for the convenience of not having to thread that dependency through constructors.
https://testing.googleblog.com/2008/08/root-cause-of-singlet...
Calling these "singletons" maybe created confusion here, I'm more talking about singleton in the sense of a value that is created once and shared (i.e. injecting a value as a "Singleton" via something like Guice). I'm not arguing that you need to have values in global scope, I'm arguing that parameterless constructors prevent you from using shared values (actually _unless_ you have singletons in global scope).
> Dependency Injection has a fancy name that makes some developers uncomfortable, but it's really just all about making the code easier to test.
It's not just a fancy name. I'd argue it's a confusing name. The "$25 name for a 5c concept" quote is brilliant. The name makes it sound like it's some super complicated thing that is difficult to learn, which makes it harder to understand. I would say "dynamic programming" suffers the same problem. Maybe "monads".
How about we rename it? "Generic dependencies" or "Non hard-coded dependencies" or even "dependency parameters"?
The name is confusing because originally [0] it was exclusively about dynamic injection by frameworks. Somehow it morphed into just meaning suitable constructor parameters, at least in some language communities.
I like “dependency parameters”. Dependencies in that sense are usually what is called service objects, so “service parameters” might be even clearer.
[0] https://www.martinfowler.com/articles/injection.html#FormsOf...
I third "dependency parameters". It concisely describes what the actual thing is, and isn't intimidating.
And yes, even though some languages/frameworks allow deps to be "passed in" via mechanisms that aren't technically parameters (like member variables that are just somehow magically initialised due to annotations), doing that only obfuscates control flow and should be avoided IMHO.
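For anyone who hasn't seen the difference side by side, a small sketch of the two styles. Spring's @Autowired stands in for "magic member initialisation"; the service classes are made up:

import java.time.Clock;
import org.springframework.beans.factory.annotation.Autowired;

// Field injection: the dependency appears "by magic" and nothing in the
// class's surface tells callers it is required.
class FieldInjectedReportService {
    @Autowired
    private Clock clock;
}

// Constructor injection: the dependency is an ordinary parameter, visible to
// callers, to the compiler, and to plain find-usages.
class ConstructorInjectedReportService {
    private final Clock clock;

    ConstructorInjectedReportService(Clock clock) {
        this.clock = clock;
    }
}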
I think the "injection" is referring to a particular style of frameworks which rely on breaking language abstractions and modifying otherwise private fields to provide dependencies at runtime. I really, really, REALLY dislike that approach and therefore also dislike the name "dependency injection".
Why not call it "dependency resolution"? The only problem frameworks solve is to connect a provider of thing X with all users of thing X, possibly repeating this process until all dependencies have been resolved. It also makes it more clear that this process is not about initialization and lifecycling, it is only about connecting interfaces to implementations and instantiating them.
Edit: The only DI framework I have used and actually kind of like is Dagger 2, which resolves all dependencies at compile time. It also allows a style of use where implementation code does not have to know about the framework at all - all dependency resolution modules can be written separately.
All other runtime DI frameworks I have used I have hated with a passion because they add so much cognitive overhead by imposing complex lifecycle constraints. Your objects are not initialized when the constructor has finished running because you have to somehow wait for the DI framework to diddle the bits, and good luck debugging when this doesn't work as expected.
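For reference, this is roughly what the Dagger 2 style looks like - a minimal sketch, not taken from any real project:

import dagger.Component;
import dagger.Module;
import dagger.Provides;
import java.time.Clock;

@Module
class ClockModule {
    @Provides
    static Clock clock() {
        return Clock.systemUTC();
    }
}

@Component(modules = ClockModule.class)
interface AppComponent {
    Clock clock();
}

class Main {
    public static void main(String[] args) {
        // DaggerAppComponent is generated at compile time; a missing binding
        // fails the build instead of surprising you at runtime.
        AppComponent app = DaggerAppComponent.create();
        System.out.println(app.clock().instant());
    }
}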
“Global variables that can pass code review”
Or don’t use a DI framework, and DI just becomes a fancy name for "creating instances" and "passing parameters". That’s what we do in Go and there’s no way I would EVER use a DI framework again. I’d rather be unemployed than work with Spring.
I'm a small brained primate but when I get down to what dependency injection is doing it's like my firmware written in C setting a couple of function pointers in a radio handler's struct to tell it which SPI bus and DIO's to use. Which seems trivially okay.
Maybe you could even use a different handler in tests!? Shocking that you can do this without using an IoC container and a thousand lines of YAML!
/sarcasm (in case anyone doubted it).
[flagged]
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Please don't fulminate. Please don't sneer, including at the rest of the community.
https://news.ycombinator.com/newsguidelines.html
No need to be aggressive, I just disagree that DI frameworks streamline anything, they just make things more opaque and hard to trace.
> will be a very annoying thing to maintain as soon as you change the constructor of something widely used in your project to accept a new parameter.
That for example is just not true. You add a new parameter to inject and it breaks the injection points? Yeah that’s expected, and suitable. I want to know where my changes have any impact, that’s the point of typing things.
A lot of things deemed "more maintainable" really aren’t. Never has a DI framework made anything simpler.
> That for example is just not true. You add a new parameter to inject and it breaks the injection points?
Perhaps you never worked in a sufficiently large codebase?
It is very annoying when you need to add a dependency and suddenly you have to touch 50+ injection points because that thing is widely used. Been there, done that, and by God I wished I had Dagger or Spring or anything really to lend me a hand.
DI frameworks are a tool like any other. When properly used in the correct context they can be helpful.
This is the only reason why DI frameworks exist. However, this issue can be largely avoided by working with configuration objects (parameter objects [0]) from the start, that get passed around. Then you only need to apply the change in a small number of parameter-object classes, if not just a single one.
[0] https://wiki.c2.com/?ParameterObject
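A sketch of what that can look like in Java - every name below is invented for illustration:

import java.net.http.HttpClient;
import java.time.Clock;

interface MetricsRegistry {}

// The parameter object: widely shared dependencies travel together, so adding
// a new one touches this class and the composition root, not 50 call sites.
final class AppServices {
    final Clock clock;
    final MetricsRegistry metrics;
    final HttpClient http;

    AppServices(Clock clock, MetricsRegistry metrics, HttpClient http) {
        this.clock = clock;
        this.metrics = metrics;
        this.http = http;
    }
}

class InvoiceMailer {
    private final AppServices services;

    InvoiceMailer(AppServices services) {
        this.services = services;
    }
}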
Eh, having built a whole codebase around these configuration objects, I really regret not going for a more traditional DI IoC container. It's thousands upon thousands of additional parameters passed all over the place when creating objects, just for the sake of saving five minutes of explanation to newcomers.
"It is very annoying when you need to add a dependency and suddenly you have to touch 50+ injection points because that thing is widely used"
You don't have to update the injection points, because the injection points don't know the concrete details of what's being injected. That's literally the whole point of dependency injection.
Edited to add: Say you have a class A, and this is a dependency of classes B, C, etc. Using dependency injection, classes B and C are passed instances of A, they don't construct it themselves. So if you add a dependency to A, you have to change the place that constructs A, of course, but you don't have to change B and C, because they have nothing to do with the construction of A.
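A tiny sketch of that point (names invented):

import java.time.Clock;

interface Database {}  // stand-in for illustration

class A {
    A(Database db, Clock clock) { /* the new Clock parameter lands here... */ }
}

class B {
    B(A a) { /* ...but B's constructor is untouched... */ }
}

class C {
    C(A a) { /* ...and so is C's. */ }
}

class CompositionRoot {
    static B wire(Database db) {
        A a = new A(db, Clock.systemUTC()); // the only call site that changes
        return new B(a);
    }
}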
Ya very confused by this. Either the change is to the constructor of the object being injected, in which case there is no difference either way, or the change is to the constructor receiving the injection, in which case there’s no difference either way.
I think you're being downvoted because you're agreeing with the post you're quoting, but arguing as if they're wrong: the example in question was there to show how DI can be useful, so there's nothing to argue against.
The one thing DI frameworks unarguably and decisively solve by design (if accidentally, but it doesn’t matter) is control over static initialization. I’d say you haven’t truly lived if your C++ didn’t crash before calling main(), but it helps in large JS and Python projects just the same.
How do they solve that? If constructors require certain parameters, someone has to pass them. If it’s not a top-level object, the instantiating code will pass it and have a corresponding constructor or method parameter itself. At the top level, main() or equivalent will pass the parameter. Where is the problem?
Exactly, there is no problem when you do it this way and DI frameworks force you to.
The problem when you don't do it this way is when you depend on order of initialization in a way you are not aware of until it breaks, and it breaks in all kinds of interesting ways.
Chill dawg
If it becomes a chore to instantiate your dependency tree with `new` or whatever in the root of your app, it's a good indication that the dependency tree _is too complex_. You _should_ feel the pain of that, to align incentives for simplification.
Using an IoC container is endemic in the Java ecosystem, but unheard of in the Go ecosystem - and it's not hard to see which of them favours simplicity!
> instantiate your dependency tree with `new` or whatever in the root of your app
Then you are doing dependency injection, just replacing the benefits of a proper framework by instantiating everything at once at the root of your app.
Whatever floats your boat, I guess. Thankfully I don't need to work on your code.
Indeed, you get the benefits of dependency injection, without the complexity and shit show of a truth-obscuring framework.
I think we can both agree that it’s good we don’t need to work with each other though.
> truth-obscuring framework.
That you think that a DI framework obscures anything is just a developer smell.
Having worked with multiple of those, it was always pretty clear what was being instantiated and how.
Well in that case, every Java DI framework I’ve encountered has that smell. Spring, Guice, all of them. Same in C#. Perhaps it’s an enterprise (in the pejorative) thing.
It is interesting to note that these frameworks are effectively non-existent in Go, Rust, Python - even Typescript, and no one complains you can’t build “real things” in any of those languages.
It's not just about testing. When any code constructs its own object, and that object is actually an abstraction of which we have many implementations, that code becomes stupidly inflexible.
For instance, some code which prints stuff, but doesn't take the output stream as a parameter, instead hard-coding to a standard output stream.
That leaves fewer options for testing also, as a secondary problem.
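In Java terms, something as small as this captures the idea (the class name is invented):

import java.io.PrintStream;

class ReportPrinter {
    private final PrintStream out;

    // Callers choose the destination: System.out in production, an in-memory
    // stream in tests, a file somewhere else.
    ReportPrinter(PrintStream out) {
        this.out = out;
    }

    void printLine(String line) {
        out.println(line);
    }
}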
That’s just parameterization, though. It’s overkill to call every parameter (or even most parameters) a “dependency”.
So even "dependency parametrization" would be better than "injection".
Since what we want to parametrize are objects held by composition, maybe "composition parametrization".
"To promote better flexibility and code reuse and to make testing easier, parametrize object composition."
> Frameworks typically just streamline this process, and offers some flexibility at times - for example, when you happen to have different implementations of the same thing.
The very purpose of DI is to allow using a different implementation of the same thing in the first place. You shouldn’t need a framework to achieve that. And my personal experience happens to match that.
You're talking from the perspective of Java, which has been designed from the ground up with dependency injection in mind.
Dependency injection is the inversion of control pattern at the heart, which is something like oxygen to a Java dev.
In other languages, these issues are solved differently. From my perspective as someone whose day job has been roughly 60+% Java for over 10 years now... I think I agree with the central message of the article. Unless you're currently in the Java world, you're probably better off without it.
These patterns work and will on paper reduce complexity - but it comes at the cost of massively increased mental overhead if you actually need to address a bug that touches more than a minuscule amount of code.
/Edit: and I'd like to mention that the article actually only dislikes the frameworks, not the pattern itself
DI wasn't around when Java (or .Net) came out. DI is a fairly new thing too, relatively speaking, like after ORMs and anonymous methods. Like after Java 7 I think. Or maybe 8? Not a Java person myself.
I know in .net, it was only really the switch to .net core where it became an integral part of the frameworks. In MVC 5 you had to add a third party DI container.
So how can it have been designed for it from the ground up?
In fact, if you're saying 10 years, that's roughly when DI became popular.
You're wrong about other languages not needing it. Yes, statically typed languages need it for unit testing, but you don't seem to realize that from a practical perspective DI also solves a lot of the problems around request lifetimes. And from an architectural context it solves a lot of the problem of how to stop bad developers overly coupling their services.
Before DI people often used static methods, so you'd have a real mess of heavily interdependent services. It can still happen now, but it's nowhere near as bad as the mess of programming in the 2000s.
DI helped reduce coupling and spaghetti code.
DI also forces you to 'declare' your dependencies, so it's easy to see when a class has got out of control.
Edit: I could keep on adding, but one final thing. Java and .Net are actually quite cumbersome for DI, and Go is actually easier, because Go has implicit interfaces and the older languages don't - that would really help reduce boilerplate DI code.
A lot of interfaces in Java/C# only exist to allow DI to work, and are otherwise a pointless waste of time/code.
The Spring framework was released in 2003 when Java was at v1.4. From memory it wasn't the first DI framework either.
The term was coined in 2004: https://www.martinfowler.com/articles/injection.html
It’s not correct that Java was designed for it, unless you want to call class loading dependency injection. It’s merely that Java’s reflection mechanism happened to enable DI frameworks. The earlier Java concept was service locators (also discussed in the article linked above).
> but it's really just all about making the code easier to test. Basically everything that the class depends upon has to be informed during construction.
It is useful for more than testing (although, depending on the kind of tests being made, it might not always be useful for all kind of tests). It also allows you to avoid a program having too many dependencies that you might not need (although this can also cause a problem, it could perhaps be avoided by providing optional dependencies, and macros (or whatever other facility is appropriate in the programming language you are using) to use them), and allows more easily for the caller to specify customized methods for some things (which is useful in many programs, e.g. if you want customized X.509 certificate validation in a program, or customized handling of displaying/requesting/manipulation of text, or use of a virtual file system).
In a C program, you can use a FILE object for I/O. Instead of using fopen or standard I/O, a library could accept a FILE object that you had previously provided, which might or might not be an actual file, so it does not need to deal with file names.
> This will be a pain in the ass to test, you might need to test, for example a date when there's a DST change, or February 28th in a leap year, etc.
I think that better operating system design with capability-based security would help with this and other problems, although having dependency injection can also be helpful for other purposes too.
Capability-based security is useful for many things. Not only does it help with testing, it also helps to work around a problem if a program does not work properly on a leap year - you can tell that specific program that the current date is actually a different date - and it can also be used for security, etc. (With my idea, it also allows a program to execute in a deterministic way, which also helps with testing and other things, including resisting fingerprinting.)
I frequently find DI pattern to show up in Java... But I also frequently find that Java gives me all the handcuffs of systems languages with few of the benefits of more flexible duck-typing languages.
If you can't monkey-patch the getDate function with a mock in a testing context because your language won't allow it, that's a language smell, not a pattern smell.
> If you can't monkey-patch the getDate function with a mock in a testing context because your language won't allow it, that's a language smell, not a pattern smell.
Of course you can do it in Java. But it is widely considered poor practice, for good reason, and is generally avoided.
Interesting. Why is it considered poor practice to monkey-patch getDate for testing purposes in Java?
> If you can't monkey-patch the getDate function with a mock in a testing context because your language won't allow it, that's a language smell, not a pattern smell.
Not so fast. Constraints like "no monkeypatching allowed" are part of what make it possible to reason about code at an abstract level, i.e., without having to understand in detail every control path that could possibly have run before. Allowing monkeypatching at the language level means discarding that useful reasoning tool.
I'm not saying that "no monkeypatching allowed" is always ideal, but it is a tradeoff.
(Consider why so many languages have something like a "const" modifier, which purely restricts what you can do with the object in question. The restriction reduces what you can do with it, but increases what you know about it.)
In practice, we accomplish that by not monkeypatching the production code.
But for unit testing? Go nuts.
Sure, a policy like that helps, but relying on programmer discipline like this only scales so far. In a large enough code base, if something can be done, someone will have done it somewhere.
(If you have linter rules or static analysis that can detect monkeypatching and runs on every commit (or push to main, or whatever), you're good.)
You CAN in fact monkeypatch getDate - look at a Mockito add-on known as PowerMockito! While it's impossible to mock it out in the normal JVM "happy path," the JVM is powerful enough to let you mess with classloading and edit the bytecode at load-time to mock out even system classes.
(Disclaimer: have not used PowerMockito in ages, am not confident it works with the new module system.)
Regarding 'frameworks'. Golang already ships with a framework natively because of the design of the language. Therefore the point is moot in that specific context. Hence the post.
Another downside of DI is how it breaks code navigation in IDEs. Without DI, I can easily navigate from an instance to where it's constructed, but with DI this becomes detached. This variable implements Foo, but which implementation is it?
Yeah, debuggability and grepability are terrible.
DI seems like some sort of job security by obscurity.
If your IDE starts to decide how you code and what kind of architecture/design you can use, I kind of feel like the IDE is becoming something more than just an IDE and probably you should try to find something else. But I mainly program in vim/nvim so maybe it's just par for the course with IDEs and I don't know what I'm talking about.
Are you not using an LSP with your text editor? If you are then you'll run into the same issue because it's the underlying technology. If you aren't using an LSP then you're probably leaving some workflow efficiency on the table.
I think probably when I write Rust, it does use LSP somehow, but most of the time I use Conjure (by Olical), and I don't think it uses LSP, as far as I know at least, but haven't dug around in the internals much.
> then you're probably leaving some workflow efficiency on the table
Typical HN to assume what the best workflow efficiency is, and that it mostly hinges on a specific technology usage :)
Imagine I'd claim that since you're not using nrepl and a repl connected to your editor, you're leaving some workflow efficiency on the table, even though I know nothing about your environment, context or even what language you program in usually.
> Imagine I'd claim that since you're not using {X}
Usually on the third time someone recommends {X} I would have looked into it and formed my own conclusions with first hand experience.
Ok, don't forget looking into nrepl, so you can be as productive as me and others. That's the second time, now we just wait for the third for you to seriously consider it.
In IntelliJ at least this is a non-issue.
How?
The IDE understands the DI frameworks and can show you which class or classes will be injected.
ctrl shift b = show implementations
This is what I hate most about DI as well, and when I told some other devs about this pet peeve of mine they looked at me like I had 2 heads or something.
Which language? Android Studio, for example, allows you to navigate to Hilt injection points.
Not an issue in C#
Definitely still an issue in C#. C# devs are just comfortable with the way it is because they don't know better and are held hostage. Everything in C# world after a certain size will involve IOC/DI and the entire ecosystem of frameworks that has co-evolved with it.
The issues are still there. You can't just "go to definition" of the class being injected into yours, even if there is only one. You get the Interface you expect (because hey you have to depend on Interfaces because of something something unit-testing), and then see what implements that interface. And no, it will not just point to your single implementation, it'll find the test implementation too.
But where that "thing" gets instantiated is still a mystery and depends on config-file configured life-cycles, the bootstrapping of your application, whether the dependency gets loaded from a DLL, etc. It's black-box elephants all the way to the start of your application. And all that you see at the start is something vague like: var myApp = MyDIFramework.getInstance(MyAppClass); Your constructors, and where they get called from is in a never-ending abyss of thick and unreadable framework code that is miles away from your actual app. Sacrificed at the alter of job-creation, unit-testing and evangelist's talk-resume padding.
> You can't just "go to definition" of the class being injected into yours, even if there is only one.
Yes, I can? At least Rider can jump to the only implementation, no questions asked.
> And no, it will not just point to your single implementation, it'll find the test implementation too.
It will, but is it a problem to click the correct class from a list of two options?
If you only have one implementation, why do you even have an interface?
So I can mock it out for unit tests.
Then you have two implementations…
You don't necessarily need to implement interfaces [in your code] to stub them in unit tests:
If you’re doing that, you may as well just mock your concrete class. Mockito supports this in Java. Perhaps this is needed in C#?
Java is virtual by default. C# is not. You could mark every single method virtual and mock it like Java. But it’s easier to define a contract and mock that.
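For the Java case, a small Mockito sketch of mocking a concrete class directly - the class and test names are invented for illustration:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

// A concrete class with no interface behind it.
class PriceService {
    int priceFor(String sku) {
        return 100;
    }
}

class CheckoutTest {
    void appliesOverriddenPrice() {
        PriceService prices = mock(PriceService.class); // mock the concrete class directly
        when(prices.priceFor("ABC")).thenReturn(250);

        int price = prices.priceFor("ABC"); // stand-in for the real code under test

        verify(prices).priceFor("ABC");
    }
}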
Not one that shows up when I hit "go to implementation".
LOL ok so you’ve never used C#
I'm glad someone knows how I feel.
Yes, the comments about "$25 name for a 5c concept" ring true when you're looking at a toy example with constructor(logger) { .. }.
Then you look at an enterprise app with 10 years of history, with tests requiring 30 mocks, using a custom DI framework that only 2 people understand, with multiple versions of the same service, and it feels like you've entered another world where it's straight up impossible to debug code.
> You can't just "go to definition" of the class being injected into yours, even if there is only one.
This situation isn't unique when using DI (although admittedly DI does make using interfaces more common). However, that's what the "go to implementation" menu option is for.
For a console app, you're right that a DI framework adds a lot of complexity. But for a web app, you've already got all that framework code managing controller construction. If you've got the black box anyways, might as well embrace it.
Make those dependency interfaces dynamic enough to be practically untyped, introduce arbitrary implicit ordering requirements, and we have now invented Middleware.
I haven't really done any c# for 5+ years. What has changed?
I remember trying to effectively reverse-engineer a codebase (code available but nobody knew how it worked) with a lot of DI and it was fairly painful.
Maybe it was possible back then and I just didn't know how ¯\_(ツ)_/¯
If the rules of the dependency injection framework are well understood, the IDE can build a model in the background and make it navigable. I can't speak for C#, but Spring is navigable in IntelliJ. It will tell you which implementation is used, or if one is missing.
In a Spring application there are a lot of (effective) singletons, the "which implementation of the variable that implements Foo is it" becomes also less of a question.
In any case, we use Spring on a daily basis, and what you describe is not a real issue for us.
Is it ctrl+click takes you to the main implementation directly? If not it is reaaaaaallly annoying
I think so.
Also, I think it's important to differentiate between dependency injection and programming against interfaces.
Interfaces are good, and there was a while where infant DI and mocking frameworks didn't work without them, so folks created an interface for every class and only ever used the interface in the dependent classes. But the need for interfaces has been heavily misunderstood and overstated. Most dependencies can just be classes, and that means you can in fact click right into the implementation - not because the IDE understands DI, but because it understands the language (Java).
Don't hate DI for the gotten-out-of-control "programming against interfaces".
In every language/IDE I've ever used ctrl-click would take you to the interface definition, then you have a second "Show implementations" step that lists the implementations (which is usually really slow) and finally you can have to select the right implementation from the list.
It's technically a flaw of using generic interfaces, rather than DI. But the latter basically always implies the former.
Maybe you should read the manual then. Or change to a better Ide. Both Rider and IntelliJ can do this with no frills.
I’m not sure why you’re being down voted despite being correct.
If there are multiple implementations it gives a list to navigate to. If there’s 1 it goes straight to it. Don’t know about IntelliJ but rider and vs do this. And if the solution is indexed this is fast.
This is the point, you need an IDE with advanced features while a text editor should be all you need to understand what the code is doing..
Why, as a professional, would you not use professional tooling? Not just for DI - there are many benefits to using an IDE. If you want to hone your skills in your own time by using a text editor, why not. But as a professional, denying the use of an IDE is a disservice to your team. (But hey, everyone's entitled to their opinion!)
Edit: upon rereading I realize your point was about reading code, not writing it, so I guess that could be a different use case...
Being able to understand a system under fire with minimal tooling available is a property one must design for. If you get woken up at 3am with a production outage, the last thing you want to do is start digging through some smart-ass framework's idea of what is even running to figure out where the bug is.
There's nothing wrong with using an IDE most of the time, but building dependence on one such that you can't do anything without it is absolute folly.
C# code bases are all about ruining code navigation with autofac and mediatr
"Dependency injection is too complicated, look at this one straight-line implementation" is not exactly a fair argument.
The whole point of DI is that when you can't just write that straight-line implementation it becomes easier, not harder. What if I've got 20 different handlers, each of which need 5-10 dependencies out of 30+, and which run in 4 different environments, using 2 different implementations. Now I've got hundreds of conditionals all of which need to line up perfectly across my codebase (and which also don't get compile time checks for branch coverage).
I work with DI at work and it's pretty much a necessity. I work without it on a medium sized hobby project at home and it's probably one of the things I'd most like to add there.
But can’t you do your DI by hand? The frameworks can really become absurd in their complexity for a lot of tasks (I’m sure they make sense in many situations). DI is a concept that can be orchestrated with normal code at the top of the execution just fine, eliminating a lot of cruft.
To be fair, the numbers you throw out sound like a point where a framework becomes valuable, but most places I've seen DI frameworks, they could be replaced with manual DI and it would be much simpler.
When you say "do your DI by hand", do you mean wiring up each case, or do you mean writing a DI system that treats the cases generically? The former is what I'm suggesting becomes untenable at some point. The latter is just writing your own DI framework for, which I think would be fine.
DI frameworks are complicated, but they're a constant level of complicated, they don't get more complicated as the codebase grows. Not using a DI framework is simple at the beginning, but it grows, possibly exponentially, and at some point crosses the line of constant complexity from a DI framework.
Finding where those lines intersect is just good engineering. Ignoring the fact that they do intersect is not.
> Not using a DI framework is simple at the beginning, but it grows, possibly exponentially, and at some point crosses the line of constant complexity from a DI framework.
How so?
Look at my example numbers, and think through how you would write the setup process for that piece of software. You'd need a ton of conditionals checking all sorts of different things, woven throughout the codebase, in many different classes, etc.
With a DI framework of some kind, those conditionals would likely not exist, instead you'd be able to specify all the options, and the DI framework takes over stitching them together and finding the dependencies between them.
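As a rough sketch of what "specify the options, let the container stitch them together" can look like with something like Guice (all names invented), the environment check lives in one module instead of at every construction site:

import com.google.inject.AbstractModule;

interface PaymentGateway {}
class StripeGateway implements PaymentGateway {}
class FakeGateway implements PaymentGateway {}

class PaymentsModule extends AbstractModule {
    private final boolean production;

    PaymentsModule(boolean production) {
        this.production = production;
    }

    @Override
    protected void configure() {
        // The one place that decides which implementation the rest of the
        // graph receives; consumers just declare a PaymentGateway parameter.
        if (production) {
            bind(PaymentGateway.class).to(StripeGateway.class);
        } else {
            bind(PaymentGateway.class).to(FakeGateway.class);
        }
    }
}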
> What if I've got 20 different handlers, each of which need 5-10 dependencies out of 30+, and which run in 4 different environments, using 2 different implementations.
Listen to your code. If it’s hard to write all that, it probably means you have too many dependencies. Making it easier to jam more dependencies in your modules is just going to make things worse. “But I need all those…” Do you really? They rarely ever are all necessary, in my experience. Usually there’s a better way to untangle the dependency tree.
This was a hypothetical, but I've seen plenty of codebases like this. There's going to be some cruft in there, but even just the baseline for a service capable of releasing, with feature flags, safely, with canaries etc, to XXk-Xm requests per second and <XXms per request, with no downtime, is quite a lot.
Meh, I've worked on services with in languages with a DI framework (Java+Guice or Spring Boot) and not (C++, Go) and the latter is much nicer. And yeah some had Xm requests.
You just dispense with all the frippery that hides the fact that you are depending on global variables if you really want the Guice/Spring Boot experience.
The C++ code was much, much easier to trace by hand. It was easier to test. It started much much faster, speeding iteration. Meanwhile the Java service was a PITA to trace, took 30+ seconds to boot, and couldn't be AOT compiled to address that because of all the reflection.
Dan, are you referring to handling/implementing DI principle in Golang projects or not. I am curious.
I am referring to DI as a general practice. Go does not seem particularly well suited to DI, but I'm not a fan of it as a language in general because I don't think it lets you build the right abstractions.
I've done DI in Java (Guice), Python (pytest), Go (internal), and a little in C++ (internal). The best was Pytest, very simple but obviously quite specific. Guice is a bit of a hassle but actually fine when you get the hang of it, I found it very productive. The internal Go framework I've used is ok but limited by Go, and I don't have enough experience to draw conclusions from the C++ framework I've used.
You are missing the point the post is about Golang specifically.
Every so often a developer challenges the status quo.
Why should we do it like this, why is the D in SOLID so important when it causes pain?
This is lack of experience showing.
DI is absolutely not needed for small projects, but once you start building out larger projects the reason quickly becomes apparent.
Containers...
- Create proxies wrapping the objects, if you don't centralise construction management it becomes difficult.
- Cross cutting concerns will be missed and need to be wired everywhere manually.
- Manage objects life cycles, not just construction
It also ensures you code to the interface. Concrete classes are bad, just watch what happens when a team mate decides they want to change your implementation to suit their own use cases, rather than a new implementation of the interface. Multiply that by 10x when in a stack.
Once you realise the DI pain is for managing this (and not just allowing you to swap implementations, as is often the poster boy), automating areas prone to manual bugs, and enforcing good practices, the reasons for using it should hopefully be obvious. :)
The D in SOLID is for dependency INVERSION not injection.
Most dependency injection that I see in the wild completely misses this distinction. Inversion can promote good engineering practices, injection can be used to help with the inversion, but you don’t need to use it.
https://blog.ploeh.dk/2025/01/27/dependency-inversion-withou...
Moreover, dependency inversion is explicitly not about construction, which conversely is exactly what dependency injection is about.
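For what it's worth, a small sketch of the distinction (names invented): inversion is about which module owns the abstraction; how BillingPolicy ends up holding an instance - constructor, factory, container - is the separate injection/construction question.

// The high-level billing code owns the abstraction it needs...
interface InvoiceRepository {
    void save(String invoice);
}

class BillingPolicy {
    private final InvoiceRepository invoices;

    BillingPolicy(InvoiceRepository invoices) {
        this.invoices = invoices;
    }
}

// ...and the low-level persistence detail implements it, so the source-level
// dependency arrow points towards the policy, not towards the database code.
class SqlInvoiceRepository implements InvoiceRepository {
    public void save(String invoice) {
        // SQL details would live here
    }
}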
Agreed, and I conflated the two since I've been describing SOLID in ways other devs in my team would understand for years.
Liskov substitution for example is an overkill way of saying don't create an implementation that throws an UnsupportedOperationException, instead break the interfaces up (Interface Segregation "I" in SOLID) and use the interface you need.
Quoting the theory to junior devs instead just makes their eyes roll :D
LSP is about much more than not throwing UnsupportedOperationException, that’s a complete mischaracterization.
ISP isn’t about avoiding UnsupportedOperationException as well, it’s about reducing dependencies.
In Java land this is really the closest analogy I could create an example for. Do you have better example I could use with Java pls?
Honestly inversion kinda sucks because everybody does it wrong. Inversion only makes sense if you also create adapters, and it only makes sense to create adapters if you want to abstract away some code you don’t own. If you own all the code (ie layered code), dependency inversion is nonsensical. Dependency injection is great in this case but not inversion.
It's not just not needed for small projects it is actively harmful.
It's also actively unhelpful for large projects which have relatively more simple logic but complex interfaces with other services (usually databases).
DI multiplies the amount of code you need - a high cost for which there must be a benefit. It only pays off in proportion to the ratio of complexity of domain logic to integration logic.
Once you have enough experience on a variety of different projects you should hopefully start to pick up on the trade-offs inherent in using it, to see when it is a good idea and when it has a net negative cost.
While I agree this is largely a "skill issue", I'm not so sure it's in the direction you seem to think it is.
Almost nothing written using Go uses an IoC container (which is what I assume you're meaning by DI here). It's hard to argue that "larger projects" cannot or indeed are not built using Go, so your argument is simply invalid.
Agreed. DI Containers / Injectors are so fundamental to writing software that will be testable, and makes it much easier to review code.
I've written a couple large apps using Uber's FX and it was great. The reason why it worked so well was that it forced me to organize my code in such a way as to make it super easy to test. It also had a few features around startup/shutdown and the concept of "services" and "logging" that are extremely convenient in an app that runs from systemd.
All of the complexity boils down to the fact that you have to remember to register your services before you can use them. If you forget, the stack trace is pretty hard to debug. Given that you're already deep into FX, it becomes pretty natural to remember this.
That said, I'd say that if you don't care about unit tests or you are good enough about always writing code that already takes things in constructors, you probably don't need this.
Separating your glue code from your business logic is a good idea for several reasons. That's all dependency injection, or inversion of control is. It's more of a design pattern than a framework thing. And structuring your code right means that things are a bit easier to test and understand as well (those two things go hand in hand). Works in C, Rust, Kotlin, Javascript, Java, Ruby, Python, Scala, Php, etc. The language doesn't really matter. Glue code needs to be separate from whatever the code does.
Some languages seem to naturally invite people to do the wrong thing. Javascript is a great example of this that seems to bring out the worst in people. Many of the people wielding that aren't very experienced and when they routinely initialize random crap in the middle of their business logic executed asynchronously via some event as a side effect of a butterfly stirring its wings on the other side of the planet, you end up with the typical flaky untestable, and unholy mess that is the typical Javascript code base. Exaggerating a bit here of course but I've seen some epically bad code and much of that was junior Javascript developers being more than a little bit clueless on this front.
Doing DI isn't that hard. Just don't initialize stuff in places that do useful things. No exceptions. Unfortunately, it's hard to fix in a code base that violates that rule. Because you first have to untangle the whole spaghetti ball before you can begin beating some sense into it. The bigger the code base, the more likely it is that it's just easier to just burn it down to the ground and starting from scratch. Do it right and your code might still be actively maintained a decade or more in the future. Do it wrong and your code will probably be unceremoniously deleted by the next person that inherits your mess.
What you’re describing is generally good coding practice. But not, I don’t think, what people associate with DI.
I think people's associations might be wrong then. In general, people seem to have a lot of misconceptions about DI. Like needing frameworks. Basically by inverting control of what initializes code, you create a hard separation between glue code and logic. Any logic that initializes code would violate that principle. You inject your dependencies because you are not allowed to create them yourself.
And yes, that is good coding practice. That kind of was my point.
> I think people's associations might be wrong then
I don't disagree, just that when talking about any given subject having an understanding of how the audience already thinks about that subject is somewhat important.
The bigger, better goal here is probably to get folks to internally separate DI as a pattern from DI/IoC frameworks.
It always blew my mind that "dependency injection" is this big brouhaha and warrants making frameworks, when dynamic vars in Lisp basically accomplish the same task without any fanfare or glory.
Because "big brouhaha" is what people really want.
They don't want simple and easy-to-read code, they want to seem smart.
Because in statically typed languages they require a bit more scaffolding to get working.
And it is a bit magic, and then when you need something a bit odd, it suddenly becomes fiddly to get working.
An example is when you need a delayed job server to have the user context of different users depending on who triggered the job.
They're pretty good in 95% of cases when you understand them, but a bit confusing magic when you don't.
> when you need a delayed job server to have the user context of different users depending who triggered the job
I feel this is just a facet of the same confusion that leads to creating beautiful declarative systems, which end up being used purely imperatively because it's the only way to use them to do something useful in the real world; or, the "config file format lifecycle" phenomenon, where config files naturally tend to become ugly, half-assed Turing-complete programming languages.
People design systems too simple and constrained for the job, then notice too late and have to hack around it, and then you get stuff like this.
Yeah, I get where you're coming from.
For the standard web page lifecycle it's fine, but for instances like this it really does become fiddly.
Often it is possible, but an ideological stance the framework team has taken leads to poor documentation for anything outside it.
The asp.net core team have some weird hills they die on, and some incredibly poor designs that stem from an over-adherence to trendy patterns. It often feels like they don't understand why those patterns exist.
This results in them hardly documenting how to use the DI outside of their 'ideal' flow.
They also try to push devs to use DI for injecting config, which no other language does and which is just unnecessarily complicated. It's ended up as a system no one really understands, while the old System.Configuration, clunky as it was, at least automatically rebooted the app when you edited the config - which is the 95% use case most devs would want.
FWIW, when I thought about it in the larger enterprise context, I realized that I also hold a seemingly opposite view. I presented that elsewhere in this thread:
https://news.ycombinator.com/item?id=44087215
TL;DR: the goal of enterprise frameworks isn't to make Perfect Software Framework or to make code beautiful, devoid of bloat, or even easy. Their goal is to make programming consistent and predictable, to make programmers exchangeable. It's to allow an average developer to churn around working results at a predictable pace, as long as the project is just standard stuff, and they don't bring their own opinions into it. Large businesses want things this way, because that's how they think about everything (see also: Seeing Like a State).
Of course, this doesn't mean the framework authors succeed at that goal either :). Some decisions are plain stupid. But less than one would think.
Golang is statically typed.
There is absolutely fanfare and glory, even more than about dependency injection.
And "dynamic scope" is also a lofty-sounding term, on par with "dependency injection".
DI is a very religious concept; people either love it or hate it.
I myself am in the dislike camp. I have found that mocking modules (like you can with NodeJS testing frameworks) for tests gives most of the benefits with way less development hell. However, you do need to be careful with the module boundaries (basically structure them as you would with DI), otherwise you can end up with a very messy testing system.
The value of DI is also directly proportional to the size of the service being tested, DI went into decline as things became more micro-servicy with network-enforced module boundaries. People are just mocking external services in these kind of codebases instead of internal modules, which makes the boundaries easier.
I can see strict DI still being useful in large monolith codebases worked on by a lot of hands, if only to force people to structure their modules properly.
This will vary from firm to firm depending on what you're writing, but I generally find DI to be more complexity than needed. Granted,
- I'm willing to rewrite some code if we decide that a core library needs to get swapped out
- I'm always using languages that allow monkey-patching, so I'm not struggling to test my code because, for example, it's hard to substitute a mock implementation for `Date.now()`.
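For instance, in Python (the function here is made up, purely to illustrate), monkey-patching the clock in a test looks like this:

```python
import time
from unittest import mock


def is_happy_hour():
    # Production code reads the wall clock directly; nothing is injected.
    hour = time.localtime(time.time()).tm_hour
    return 17 <= hour < 19


def test_is_happy_hour():
    # Freeze the clock at 18:00 local time by monkey-patching time.time.
    six_pm = time.mktime((2024, 1, 1, 18, 0, 0, 0, 1, -1))
    with mock.patch("time.time", return_value=six_pm):
        assert is_happy_hour()


test_is_happy_hour()
```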
DI makes more sense if you're not in that position. But in that position, DI adds "you need these three files in your brain at the same time to really understand what's going on" complexity that I seek to avoid.
(Also, DI overlaps with generics, and to the extent that you can make things generic and it makes sense to do so, you should).
Love this article. Spring is a cancer in Java, it's one of the reasons the language isn't fashionable.
Cancer? It's poisoned blood and slayer of puppies, hopes and dreams. It's the lord of hell.
It's still miles better than what was there before in the Java ecosystem.
I think you might be living in a bubble of some sort? Spring, specifically Spring Boot, is extremely popular. Calling it unfashionable is simply wrong.
I agree that it is a cancer. Monocultures are rarely a good idea. And I strongly prefer explicit dependencies and/or compile time magic over runtime magic. But it is "convenient" and very much en vogue.
I think the problem I'm trying to describe is: yes, Spring is popular among the people that work on Java applications, but the Java/Spring platform is the reason most developers do not want to use Java. Java/Quarkus, Java/Micronaut or even Java/Vert.x would be more popular if they became the default Java framework instead of Java/Spring.
Strangely I seem to have built all of my software without dependency injection. I must be a terrible programmer.
>Strangely I seem to have built all of my software without dependency injection
I'm going to guess that you've most likely used dependency injection without even thinking about it. It's one of those things you naturally do because it makes sense, even if you don't know it has an actual name, frameworks, and all that other stuff that often only makes it more confusing.
You must not work in an object-oriented language, then? (Which is very possible.) Or did you mean that you have never built software with a dependency injection framework?
It just means testing can become a lot harder. I wouldn't say you are necessarily a bad programmer because you don't write a gazillion tests.
I would say you are a bad programmer for implying that DI is useless though.
Can you expand on that?
Yeah, I once got a job, and after I got the job, when they found out I'd never done dependency injection, they said "we'd never have hired you if we knew that." Mind you, that same manager also believed no code should ever be written if it doesn't have a test written first - real code is only ever an outcome of writing something to match what a test expects - poof - all the fun and creativity went out of programming there in an instant.
My philosophy of programming is "there's no right way to do it, do what works for you and makes you happy, and if someone tells you you're doing it wrong, pay no attention - they're a bully and a fool".
This isn't about bullying someone into writing tests, it is about creating value that lasts over an extended period of time.
The value of tests doesn't generally come from when you first write them. It comes from when you're working on a codebase written by someone else (who has long ago quit, or been fired).
It helps me understand and be able to refactor their code. It gives me the confidence to routinely ship something to production and know that it won't break.
What confidence would you have in tests written by a person who left a long time ago? Having tests somebody else wrote could mislead you into believing they've got your back.
I’m not sure what point you’re trying to make. This is clearly better than having no tests at all. It’s not like I’m flying blind, the tests are right there, and I can read through them. If the coverage isn’t good enough, I can always add more. And let’s not ignore the fact that the presence of tests in the first place means someone gave a damn.
That only works if what you're doing actually works - not just in terms of producing code that works once, but in terms of producing code that's maintainable. I don't know for sure that you're a "terrible programmer", but you're saying all the things that the terrible programmers I've worked with tended to say.
I think I can understand the boat you're in, bro. Both of the things that you don't do, I also didn't do for quite a long time, and I didn't particularly see the value in doing them (once upon a time); but I've been on a journey to make them part of how I code, and I'm pretty sure that I'm a better coder now than I was back then.
Writing tests for nearly all my code, in particular, is these days the only way I roll - and as for TDD (i.e. write the test and let it fail first, then write the actual code and make the test pass), I do it quite often, and I guarantee you that - contrary to your opinion - it makes coding a whole new kind of fun and creative. Dependency injection I still consider myself less of a ninja at, but I've done it (and seen it done) enough times now that I get it and I see the value in it.
I think it's a bit stupid for an employer to say "we'd never have hired you if we knew you had no experience in X" (sure, this doesn't apply to all skills, but I'd say it applies to quite a few). If you're worth hiring, then you'll pick up X within a few months on the job. I'm grateful to several past employers of mine, for showing me the ropes of TDD and DI (among many other things).
Anyway, I'm not saying that the above things are "the (only) right way to do it", and please don't take my above ramblings as making a judgement on your coding prowess. I agree, do what works for you. I'm just saying that there's always more to learn, and that you should always strive to be open-minded to new skills and new approaches.
What is there to be a "ninja" about when it comes to DI? As the article explains in the beginning it just means that you initialize and pass something into whatever depends on it instead of initializing it inside that thing.
It's too complicated of a term for what it is because we generally don't say we inject arguments into a function when we call a function.
But maybe you mean patterns building on that, e.g. repository/adapter patterns.
Which is as ridiculous as a taxi driver not getting the job because they have never taken a passenger with a trombone.
More like a carpenter not getting the job because he doesn't know how to frame a house.
Which, of course, is fine if their job is building fine furniture...
Not really. Framing a house would be a core requirement if the job needs it. Having used DI is not a core requirement; it is something you can learn in 2 hours if you are experienced. It might be like a carpenter not having used a specific tool but having used a similar one, where there's a 4-hour training at the local college on how to use the new tool.
Or like a pilot doesn't get a job because they flew a slightly older Airbus model and need to do some sim time.
Mark Seemann has written extensively about the subject.
He's a tremendous source of knowledge in that regard.
https://blog.ploeh.dk/2017/01/27/from-dependency-injection-t...
His AutoFixture NuGet package for C# takes away so much pain from unit test maintenance. It does have a learning curve.
https://github.com/AutoFixture/AutoFixture
I've noticed that DI is usually not necessary at runtime, but rather at compile (or boot) time.
In practice I've noticed I'm OK with a direct dependency as long as I can change the implementation with a compile-time variable. For the tests I use an alternative implementation, for development another. I don't swap one implementation for another within the same code. It is an option, but it happens so rarely that it seems absurd to optimize for it.
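In Python terms (the module and class names here are invented for the sketch), that boot-time switch might look something like this:

```python
# clients.py: the implementation is chosen once, at import ("boot") time.
import os


class RealPaymentClient:
    def charge(self, amount):
        raise NotImplementedError("would call the real gateway here")


class FakePaymentClient:
    def __init__(self):
        self.charges = []

    def charge(self, amount):
        self.charges.append(amount)
        return "ok"


# The "compile time variable": an environment flag read exactly once.
if os.environ.get("APP_ENV") == "test":
    payment_client = FakePaymentClient()
else:
    payment_client = RealPaymentClient()
```

Call sites just import payment_client; the choice is made once when the process starts and never changes at runtime.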
So, I like dependency injection as a concept, but I avoid it to reduce complexity. The advantage is that you can get by with a lot more "global" code. In Go this is particularly nice since DI is really nasty (struct methods have various limitations)
I highly agree. I especially believe that manual DI should always be the starting point. Eventually one can evaluate if there really is a need for a framework. It's already dangerous if I have to change the code significantly just to satisfy the framework.
Isn't that true for every framework/library out there to some extent?
As someone who was raised in the religion of Java and Spring and SpringBoot (many years over many companies). It was a revelation to work on micro-services that didn’t use a DI framework. I’m now thoroughly against them.
Rules are different with microservices, like globals can be ok. But even in large projects with big services, idk what problem these frameworks are trying to solve. I've never felt the slightest need to introduce some metaprogramming to get dependencies where they need to be.
Though my daily work involves plenty of DI, and I see the need for it, I see some unfortunate side-effects in the behaviours it 'causes':
- the 'autopilot GPS' problem: Colleagues who basically have no idea how things fit together, because DI connects the dots for them. So they end up with either a vague mental model, or none at all, of what happens below the surface.
- the same, but for cost and scope. Cost: because they don't touch what is 'built behind the scenes', they get no sense of how costly it is ('every time you use that thing, it instantiates and throws away a million things'). Scope: often business logic dictates that the construction hierarchy consists of finely tuned parts: you don't just need an instance of 'Foo', you need a Foo instance that originates from a specific request. And if you then use two Bars together, where Bar 1 is tied to Foo 1 but Bar 2 is tied to Foo 2, you will get strange spurious errors (think, for example, ORMs and database transactions - the two Foos or Bars may relate to different connections, transactions or cursors).
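To make the scope point concrete, here's a toy sketch (all of the types are invented) of two repositories that silently belong to different units of work:

```python
import itertools


class UnitOfWork:
    """Stand-in for a request-scoped session/transaction."""

    _ids = itertools.count(1)

    def __init__(self):
        self.id = next(self._ids)


class Repository:
    def __init__(self, uow):
        self.uow = uow


def transfer(accounts, ledger):
    # Both repositories must share one unit of work, or the write isn't atomic.
    if accounts.uow is not ledger.uow:
        raise RuntimeError("repositories belong to different units of work")


# A container that hands out a fresh UnitOfWork per injection point produces
# exactly this situation, and it only shows up at runtime:
try:
    transfer(Repository(UnitOfWork()), Repository(UnitOfWork()))
except RuntimeError as e:
    print(e)
```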
One antipattern I have seen (which may actually be an argument FOR DI..), is the 'query everything' service, which incorporates 117 other sub-services. And some of the junior developers just love that service, because "then I can query everything, from a single place!" (yes.. but you just connected to 4 databases with 7 connections, and you are only trying to select a single row from one of them. And again, code with the everything-service becomes quite untestable).
The best part of the article is its advice for surfacing broken dependencies at compile time; I really hate having to go through complicated flow #127 to learn that a dependency is broken.
Language and/or library issue. DI helps code be easier to follow, more decoupled, and readable with less boilerplate, AND makes testing much easier.
If you are on node/ts look at effect-ts.
Can you show a project that effectively uses effect-ts? The docs are a tsunami of information that just seems to try to make a whole new language out of TS. If someone else had to review my code, I doubt they'd know what was going on.
The article can be summarized by: "I'm using a language that is stuck in the 80s".
Java and Dagger 2 solved DI years ago. Fast, compile-time safe and easy to use.
You probably don't need functional programming. Here is how to do it with a for-loop.
You don't see many articles written like that because it kinda would be obvious that the author hasn't bothered to understand the approach that he is criticizing.
Yet when it comes to OO concepts people from "superior" platforms like Go or the FP crowd just cannot let go of airing their ignorance.
Just leave OO alone unless you are genuinely interested in the approach.
What is the OO approach?
Erlang.
I don't like DI as a concept because it typically obscures the path/source of the file where relevant code is located. DI trades off long-term readability for short-term implementation convenience. Maintainability requires strict adherence to and awareness of conventions, which is something that many developers are terrible with.
> But that reflection-driven magic is also where the pain starts. As your graph grows, it gets harder to tell which constructor feeds which one. Some constructor take one parameter, some take three. There’s no single place you can glance at to understand the wiring. It’s all figured out inside the container at runtime.
That's the whole point. Dependency inversion allows you to write part of the code in isolation, without worrying about all the dependencies of each component you create, or what creates what where.
If your code is small enough that you can keep all the dependencies in your head at the same time and it doesn't slow you down much to pass them all around all the time - DI isn't worth it.
If it becomes an issue - DI starts to shine. There are other solutions as well, obviously (mostly in the form of Object-Orientified global variables - for example you keep everything in GameWorld object and pass it everywhere).
> doesn't slow you down much to pass them all around all the time - DI isn't worth it.
You are confusing DI principles and using a "DI framework". Re-read the article.
As a (mainly) Python dev, I'm aware that there are DI frameworks out there, but personally I haven't to date used any of them.
My favourite little hack for simple framework-less DI in Python these days looks something like this:
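Something along these lines, roughly (the function and names below are purely illustrative, not the actual snippet):

```python
import time


def make_report(events, *, clock=None):
    # Sketch of the idea: dependencies are keyword-only arguments,
    # and None means "fall back to the real implementation".
    clock = clock if clock is not None else time.time
    return {"generated_at": clock(), "count": len(events)}


# Production call sites never mention the dependency:
make_report(["signup", "login"])

# Tests pin it to something predictable:
assert make_report([], clock=lambda: 0)["generated_at"] == 0
```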
I’ve used this exact pattern.
However, in Python I prefer to use true DI. I mostly like Injector[0] because it’s lightweight and more like a set of primitives than an actual framework. Very easy to build functionality on top of and reuse - I have one set of modules that can be loaded for an API server, CLI, integration tests, offline workers, etc.
That said, I have a few problems with it - 2 features which I feel are bare minimum required, and one that isn’t there but could be powerful. You can’t provide async dependencies natively, which is not usable in 2025 - and it’s keyed purely on type, so if you want to return different string instances with some additional key you need a wrapper type.
Between these problems and missing features (Pydantic-like eval time validation of wiring graphs) I really want to write my own library.
However, as a testament to the flexibility of Injector, I could implement all 3 of these things as a layer on top without modifying its code directly.
[0]: https://pypi.org/project/injector/
Yeah, I use this sometimes too (even though Python makes "monkey patching" easy). However, note that it's simpler and clearer to use a default value for the argument:
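For example (again, the names are purely illustrative):

```python
import time


def make_report(events, *, clock=time.time):
    # The real dependency is simply the default; a test passes its own clock.
    return {"generated_at": clock(), "count": len(events)}


assert make_report([], clock=lambda: 0)["generated_at"] == 0
```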
The basic pattern of not having one object construct some other object that has external references to it is... kind of obvious. I didn't know there was a name for it, but sure, I agree, fine.
But the way DI is usually implemented is with this bag of global variables from which you just reach in and grab the first object of the desired type. I call this the Little Jack Horner pattern. Stick in your thumb and pull out a plum. That is stupid. You've reinvented global variables, but actually worse. Congratulations.
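What's being described is roughly this (an illustrative sketch, not any particular framework):

```python
_registry = {}  # the global "bag", keyed by type


def register(instance):
    _registry[type(instance)] = instance


def resolve(cls):
    # Stick in your thumb and pull out a plum.
    return _registry[cls]


class Clock: ...


register(Clock())
clock = resolve(Clock)  # any code, anywhere, can reach in and grab it
```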
The system is composed of classes which are nicely encapsulated, independent and obey Liskov substitution and all that. You can connect them in different arrangements and they play along nicely.
But then some classes which use other classes hard code those classes in their constructor. They then work with those specific hard-coded classes. It's like if someone crazy-glued some of our Lego blocks together.
We recognize this problem and allow the sister objects to be configurable.
Then some opinionated numbnut comes along and says, "hey, we should call this simple correction 'dependency injection'". And somehow, everyone listens.
Common sense is in short supply these days. It's a shame we need blog posts like these to outline how much you lose when you go with the "magic" approach. Devs just seem to be allergic to simple but verbose code.
Looking forward to someone writing the Spring equivalent of this on the JVM.
Why? It would be nearly identical, just changing the names of the frameworks.
Or maybe they just don't have oversized egos.
The key to using a framework effectively, whether it's Spring in Java or SAP for your business, is to accept that the framework knows better than you - especially when it objectively does not - and when there's a difference between how you or your business think of things, vs. how the framework frames them, it's your thoughts and your business that must change. Otherwise, you're fighting the framework, and that's worse than just not using it.
Do you not think I've heard that line before? The framework knows nothing. It's made by a bunch of children that make CRUD apps wrapping an SQL query behind an HTTP server over and over again. They don't make any applications that do anything commercially or technically interesting, resorting themselves to infinitely copying data structures with increasingly complex annotations, a practice they call "business logic", to trick themselves into feeling like they're doing something.
I've been there. I've seen it. It doesn't lead anywhere. The abstraction that Spring (and other heavyweight JavaEE-style frameworks) provides is razor thin, and usually implemented in the most trivial possible way. The frameworks, like the applications often built on them, do nothing interesting.
EDIT: I realize this is a pretty unkind way to put it. I hope readers can understand the argument along with the indignation I express. I do believe very strongly in these points, but wish I could express them without quite as much anger. I can't though. Parse out what useful stuff you can glean, and leave the rest along with the knowledge that you don't have to impress me.
I'm sorry too, I realize I didn't make my point clear. Yes, frameworks are stupid. Their designs are likely suboptimal from the start, and only get worse over time, accumulating hacks to address the biggest issues while they double down on going in the wrong direction. A competent engineer will easily come up with better ways of doing any individual thing a framework offers, and they have a good shot at designing a framework much better suited to the needs of their team and their business.
Which is why I brought up SAP. It's well-known that adopting SAP usually ends up burning untold millions of dollars on getting expensive consultants to customize it for specific needs of the company, until it either gets written off and abandoned, or results in a "successful" adoption that everyone hates.
It's less well-known that the key to adopting SAP effectively and getting the promised value out of it is to change your business process to fit SAP. Yes, whatever processes a business had originally were likely much smarter and better for their specific situation and domain, but if they really want the benefits that come with SAP, abandoning wisdom of your old ways is the price of admission.
I say the same is true of software frameworks. Most businesses don't do anything interesting or deep either; you don't integrate with SAP to handle novel challenges, you do it to scale and streamline your business, which actually involves making most work more mindless and boring, so it can be reliably and consistently done by average employees. Software frameworks, too, aren't there to help you with novel technical challenges; they're there to allow businesses to be more efficient at doing the same boring shit every other business is doing.
I personally hate those frameworks, but that's because they're not meant for people like me; it doesn't mean they don't work. They're just another form of bureaucracy - they suck all life and fun and individuality from work, but by doing that, they enable it to scale.
Of the countless pros of Dependency Injection, near-perfect test isolation is chief among them. Test the code under test, mock the rest. Not to mention limitless composition instead of inheritance.
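As a rough sketch (types invented for illustration) of what that isolation looks like:

```python
class FakeMailer:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))


class SignupService:
    def __init__(self, mailer):
        self.mailer = mailer  # injected, so the test controls it completely

    def register(self, email):
        self.mailer.send(email, "Welcome!")
        return True


def test_register_sends_welcome_mail():
    mailer = FakeMailer()
    assert SignupService(mailer).register("a@example.com")
    assert mailer.sent == [("a@example.com", "Welcome!")]


test_register_sends_welcome_mail()
```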
The real con of Dependency Injection = {Developer Egos + (Boredom | Lax Deadlines) + Lack of senior oversight}, which inevitably yields needless overengineering.
DI is fine if it is fully typed, objects are explicitly instantiated by the user, and the DI only does thread-safe dependency resolution.
I'm ok with "dependency injection" being confused with "dependency injection framework," cause it's silly to have a name for the first thing. Might as well call it "parameter injection" when I call a function, and "memory carburation" when I instantiate a variable.
I agree. I had to do what the article says in Node for a project for $reasons, but secretly I loved not using a framework and having the construction explicit. I've also seen bugs because tests may set up DI differently from prod.
Don't hate a paradigm because you only experienced one bad implementation of it.
In IntelliJ, with the Spring Framework, you can have thorough tooling: You can inspect beans, their dependencies, you even get a visual bean graph, you can write mocks and test dependencies and don't even need interfaces anymore and if a dependency is missing, you will receive an IDE warning before runtime.
I do not understand why people are so excited about a language and its frameworks where the wheel is still actively being reinvented in a worse way.
One thing that can motivate a dependency container is a complex chain of constructors.
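For example, a hand-wired object graph (all names invented) ends up as a block like this, and growing it by hand is exactly what a container automates away:

```python
class Config: ...

class Database:
    def __init__(self, config: Config): ...

class Cache:
    def __init__(self, config: Config): ...

class UserRepo:
    def __init__(self, db: Database, cache: Cache): ...

class AuthService:
    def __init__(self, users: UserRepo): ...

class Api:
    def __init__(self, auth: AuthService, users: UserRepo): ...


# Manual wiring: every constructor change ripples through this block.
config = Config()
db = Database(config)
cache = Cache(config)
users = UserRepo(db, cache)
api = Api(AuthService(users), users)
```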
IoC is nice (or DI as a concept in particular), but DI frameworks/libraries sometimes are a mess.
I've had my fair share of Java and Spring Boot projects and it breaks in all sorts of stupid ways there, even things like the same exact code and runtime environment working in a container that's built locally, but not working when the "same" container is built on a CI server: https://blog.kronis.dev/blog/it-works-on-my-docker
Literally a case where Spring Boot DI just throws a hissy fit that you cannot easily track down. I had to mess around with the @Lazy annotation (despite the configuration to permit that being explicitly turned on too) in over 100 places to resolve the issue. And then, when you try to inject a list of all classes that implement an interface with @Lazy, it doesn't seem like their order is guaranteed either, so your DefaultValidator needs to be tacked onto that list manually at the end.
Sorry about the Java/Spring rant.
It very much feels like the proper place for most DI is at compile time (like Dagger does for Java; seems closer to wire), not at runtime, or else just keep IoC without a DI framework/library and have your code look a bit more like this:
Just a snippet of code from a Java Dropwizard example project, not all of its contents either, but it should show that it's nothing impossibly difficult. The same principles apply to other languages and tech stacks, plus the above is unequivocally easier to put a breakpoint in and debug, vs. some dynamic annotation or convention based mess. Overall, I agree with the article, even across multiple languages.
DI is a confusing fancy name for "global variables".
Or "function arguments". Either way it's a stupid name.
DI frameworks add confusion and require unnecessary memory up front.