janalsncm 4 days ago

There are security vulnerabilities due to expediency and there are others due to ignorance. Expediency is an organizational issue. Ignorance could be considered organizational or personal.

Part of the issue with software engineering as a field is we keep telling ourselves that university isn’t vocational training, even though we act as if it is. So it’s entirely possible (even likely) that a new grad hasn’t heard of any of the OWASP top 10 vulnerabilities, but they will know how to reverse a linked list. Organizations can harden themselves by making sure everyone has a minimum understanding of the most common threats. If you save a password in plaintext it’s malpractice at this point.
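
A minimal sketch of the non-negotiable baseline, using only Python's standard library (the hash parameters here are illustrative, not a recommendation):

```python
import hashlib, hmac, os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Return (salt, key); store both instead of the password itself."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes, iterations: int = 600_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, key)  # constant-time comparison
```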

Second thing is you need to have someone in the organization whose job isn’t beholden to expediency. Studies have to pass an ethics review before they’re greenlit. Maybe a product should at least get some eyeballs before it’s implemented.

  • atoav 3 days ago

    The problem is that studying CS does not automatically a good programmer make. And the truth is that many graduates would be glad if their software works at all. Designing in security from the start means you are experienced enough that you no longer have to worry about whether it works; you worry about resources, security, extensibility, dependencies. You try to compose the software from many individual choices that together yield the needed functionality while being computationally efficient, scalable, safe, secure, reliable, extensible, modular, well documented, etc.

    You won't be able to write a masterful haiku without first becoming fluent in the language you want to write it in.

    • x0x0 3 days ago

      we're still an industry where many companies can't reliably ship basic crud apps. getting from there to solid security in the absence of extraordinary investment... not sure how that happens.

      Rachel of rachelbythebay related a story from a faang where some genius replaced code that created a directory with a shell call to `mkdir -p` in a core lib. Because it was convenient to get a mkdir call that would create parents. This got through code review! "Surprise, we're dumping the args you hand us straight to sh/bash/zsh/whatever. Hope every caller has a deep understanding of shell escaping rules and can correctly clean these args! tada!"
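
      Roughly the kind of swap being described, as a sketch in Python (the actual library presumably wasn't Python, and the hostile path here is hypothetical):

      ```python
      import os, subprocess

      path = "reports; echo INJECTED"  # caller-controlled input

      # The convenient-but-dangerous version: the argument goes straight through a
      # shell, so the ';' splits it into a second command that also gets executed.
      subprocess.run(f"mkdir -p {path}", shell=True)

      # The boring version: no shell, no escaping rules to get wrong, and parent
      # directories are still created.
      os.makedirs(path, exist_ok=True)
      ```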

      You hear things like that and I'm not sure what hope most of us have...

      edit: https://rachelbythebay.com/w/2021/12/24/mkdir/

      • ozim 2 hours ago

        Sorry, but that is not true; we have tools and processes that allow us to reliably deliver new versions of CRUD apps.

        I have been doing that with multiple teams for over 10 years now.

        Yes, I have seen teams that cannot get a basic app to prod over the course of a year because they mess around with tooling instead of doing the job.

        If you expect that there will be no bugs, go on an inspection with a building inspector and check how many buildings have not even a single flaw.

      • Log_out_ 2 days ago

        Contain and recover from it, instead of fixing the complex cheese layers that are software? One huge try/catch, and an OS that fires an exception in case of the unexceptional? Is sandboxing all of security?

        • x0x0 2 days ago

          Oh, it definitely seems like, by default, a random process should not be allowed to fork a shell, and doing so should kill the process tree...

  • infamouscow 4 days ago

    There's always going to be a new person who has to learn from experience -- I'm way less concerned about that problem. In fact, I will argue it is a complete waste of time to be concerned about it. Mistakes happen, security-minded folks get sick, and teams get so busy they lose track of the daily news. The real problem stems from organizational issues. We didn't get the OPM breach, the Equifax breach, the $MASSIVE_ENTITY breach from 0-days that went unpatched for a few days. We got those breaches from systemic incompetence of management for years and years.

  • harimau777 2 days ago

    I think that even ignorance is organizational. As the article points out, we don't reward people for becoming less ignorant about security so why should they put in the effort to do so?

    • abbadadda 2 days ago

      We can punish those who make egregious errors, like saving passwords in plain text, either literally or figuratively during code reviews (e.g., a senior developer rejecting a PR because it is too insecure).

    • water-your-self 2 days ago

      Becoming less ignorant takes time that could have been spent on this sprint, and is often punished.

  • myworkinisgood 3 days ago

    > If you save a password in plaintext it’s malpractice at this point.

    And yet there is no standardized way for applications like git, docker, etc. to interface with an encrypted password store.

    • paulddraper 3 days ago

      Why would git need passwords?

      • mjlee 3 days ago

        To authenticate with remote repositories.

        • paulddraper 3 days ago

          SSH keys my guy

          better than passwords

          • myworkinisgood 2 days ago

            right, and what happens when you want to use that stuff in CIs?

tgma 4 days ago

Interestingly, very often security is also not the real goal of entities that are supposedly in charge of security, whether specialized security vendors or in house security teams; compliance and cover-your-ass is. I have seen more than once that the CYA security theater in fact directly causes security problems.

Quite often, the rare secure systems are built by great product teams who understand and are passionate about user data and security of the product themselves and architect the system accordingly.

  • BrandoElFollito 3 days ago

    Working in cybersecurity in an average large company is usually miserable when you want to do it right. You are trapped between business metrics, laziness, and lack of interest in modern technologies.

    On the one hand, you have businesses that claim that they will lose millions in revenues if their shitty site/service is down.

    Then you have users who do not give a crap about security: they have goals, and security is not one of them.

    Finally, you have IT, where you either have under-the-radar operations by people interested in something, or an average-to-low understanding of technology.

    And there you are: you are "responsible" for making sure that the company is secure (per its ads) but cannot hurt the business. If there is an incident it is your fault anyway.

    It is not a surprise that burnout in cybersecurity is high. Or that security experts' goal is to have all the checkmarks no matter the reality (the ass-covered approach).

    Or you have the rare CISO who will be pissed off by such things and will force their way to bring in real security (which is not rocket science: patch, configure correctly, use MFA and occasionally think before you do something). Some fail and leave, some become the enterprise assholes you do not want to deal with and, alllll right, you implement what they want because they piss you off. The unsung hero.

    • tiltowait 3 days ago

      I work in security, and we just got the kind of CISO you mentioned. It’s glorious. People listen to me now.

      • BrandoElFollito 2 days ago

        I work in security as well and hope to be that CISO for my team :)

        It's kinda sad that one needs to become that annoying rock you cannot just push away in order to put in place sensible security measures that benefit everyone. It does not help at all that there are so many, so fucking many shitty security requirements|standards|under-the-shower-visions around, and this security theater makes us lose credibility.

        As an example, it has been, what, 10 years that I have been fighting with external auditors over password complexity. I was pushing what is now the NIST standard, and every year I was confronted with the checkmark "small, caps, digits, special characters (but not <>'), changed every 30 days", and was just saying that I would not sign off because we have something that simply makes sense. As in common sense, backed by 4th grade maths.
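
        The 4th grade maths, roughly (a back-of-the-envelope sketch in Python; the charset sizes are approximations):

        ```python
        import math

        def entropy_bits(charset_size: int, length: int) -> float:
            # Each independently chosen character adds log2(charset_size) bits.
            return length * math.log2(charset_size)

        # Classic "complexity" checkmark: 8 chars from ~80 symbols, rotated every 30 days
        print(round(entropy_bits(80, 8)))    # ~51 bits
        # NIST-style: a long lowercase passphrase (~26 symbols), never force-rotated
        print(round(entropy_bits(26, 20)))   # ~94 bits
        ```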

  • ang_cire 3 days ago

    To be frank, if I could choose which of those to achieve, and it would be achieved perfectly, I'd choose actual security. But the reality is that it's impossible to actually meet most SLAs. I have met exactly 0 infosec folks who can honestly claim their company meets patching SLAs, much less vuln SLAs in general. And as the saying goes, if one employee is failing, that's a personnel issue. If many employees are failing, that's a management issue.

    The entities in charge of security are always the boardroom Executives of the company, ultimately, because they set the strategic priorities and silo budgets. And their main goal is never actually security.

Buttons840 3 days ago

Red teams should be legally allowed to attack companies (within certain parameters), even without the company's permission. As long as the red team attackers report what they find responsibly.

I'm jaded and think our best hope is that the good guys can find the vulnerabilities before the bad guys. We can't depend on the companies to do the right thing when they have no incentive. There are no negative consequences for companies with bad security currently.

The bad guys get to attack-at-will and the good guys have to beg permission, "please Equifax, you have all my data, may I please test the effectiveness of your security for myself?" They say no, and I'll risk going to prison if I try.

I'm jaded. I think we don't do this because it would embarrass many powerful people. We might also realize many of our institutions are incapable of building secure systems, which would be a national embarrassment.

This is important though. It's no exaggeration to say this is a matter of national security.

Currently we sacrifice national security so companies can avoid being embarrassed.

  • karmajunkie 3 days ago

    i get the cynicism but all you’re talking about is legalized extortion.

    • ErikBjare 2 days ago

      What part of it is extortion?

      • r2_pilot 2 days ago

        Presumably the part where the red team gets its funding, or the legally mandated remediations that the company may not feel are necessary.

        • Buttons840 18 hours ago

          In this scenario, the "things the company may not feel are necessary" are good security practices and security fixes, right?

          I feel this is a good place to reiterate my "currently we sacrifice national security for the convenience of companies" argument.

skybrian 4 days ago

Maybe this is better viewed through a cultural lens?

Culture is "here's how we do things." I suspect that incentives work a lot better if they're reinforcing culture than if they're working against it. Going against cultural practices requires strong incentives, and even then it's hard.

I'm reminded of businesses trying to keep employees from holding doors open for strangers who look the part. This goes against ingrained cultural practices about common courtesy, and incentives alone are likely not enough. If it's important enough, you might need a doorman.

moody__ 3 days ago

What is said in this blog is, I think, true, but it is only one piece of the perverse-incentive puzzle. Folks up in the C-suite have realized that they can just say they care about security and reap the benefits. In my experience, average Joe is not going to inconvenience himself on account of there being some security breach, and if the company is at least _saying_ it cares, then Joe can write the breach off as incompetence and go about his day.

That makes security spending like entertainment spending: when you have extra money, you spend it to make yourself, and potentially your customers, feel good. If the economy is bad, you lie about your security posture just like you lie about how much you care about the customer in general.

g-b-r 3 days ago

For the rare company seriously interested in security, a way to reward it might be occasional audits of the work of some randomly picked employee.

The audits would grade the work against the company's accepted level of security. For those who fall below it, measures are taken; for those above it, there's an unbounded, significant bonus: the better the employee's security practices, the higher the bonus.

A company might also set an upper limit for unapproved, revenue-affecting practices, of course.

GrumpyYoungMan 4 days ago

Sure, but what does "rewarding security", as the author suggests, look like when it is genuinely meaningful? The direct metric would be a low number of security holes or bugs in the product, but then you run straight into the problem that many holes/bugs are not found until much, much later, if ever. Perhaps code review failed to notice it, perhaps QA didn't cover that case, perhaps security scanning tools missed it, perhaps no black- or white-hat hacker ever bothered to try to break it, etc. Without a meaningful metric, what will likely happen is that people get rewarded for some kind of security theater.

  • nocsi 4 days ago

    Then go the opposite route. South Korea fines companies thousands of dollars every day a vulnerability isn't fixed. Security is one of those areas where negative reinforcement works better than positive reinforcement.

    • GrumpyYoungMan 4 days ago

      Sure, I'd be fine with that, but it's going to have knock-on effects on developers, because they're the ones writing the code and therefore the vulnerabilities / bugs. Software engineering would turn into something like civil or aerospace engineering or medicine, where practitioners are required to be certified in various ways, either they or their employers carry liability insurance for the bugs they write, and they endure onerous processes / audits that their employers and insurers demand of them to reduce the risk of bugs. That I'm fine with too, since there's so much crap code being churned out, but most software developers probably wouldn't be.

    • Manouchehri 4 days ago

      Mind providing a source? (Tried to Google it but didn’t find any relevant info.)

      I can think of multiple situations where a vendor from SK has left things unpatched for months, and sometimes years..

    • felixhammerl 4 days ago

      "thousands of dollars every day" does not a negative reinforcement make. That us not even a rounding error for even mid sized companies.

      • paulryanrogers 4 days ago

        Then use 1% of revenue or 2K per day, whichever is greater.

        • Manouchehri 4 days ago

          So after 4 months (roughly 120 days at 1% of annual revenue per day), the company would lose more than their entire annual revenue?

          • BobbyTables2 4 days ago

            Why not?

            A $20k car can do far more than $200k in damage.

            We don’t limit liability to the price of the vehicle.

            • Manouchehri 2 days ago

              The equivalent would be a $20k Ford resulting in a $1,762,000k fine (one day at 1% of Ford's annual revenue).

          • paulryanrogers 4 days ago

            Yeap, should deter building vulnerability-riddled solutions.

  • harimau777 2 days ago

    Treat fixing a security issue or implementing a security component the same as implementing a feature for the purpose of raises and promotion.

felixhammerl 4 days ago

Even the most massive hacks or breaches or cyber attacks barely put a dent in any reasonable business. One or two news cycles and a management rotation, that's it. Okta? Target? Equifax? Capital One? Uber? Even SolarWinds, for crying out loud.

Everyone does enough to not be accused of gross negligence, but really I have not seen anyone pay more than lip service. And I don't blame them. No matter how much this hurts to say as a security professional.

  • StressedDev 4 days ago

    The biggest groups of people paying lip service to security are software engineers and ops people. Both groups regularly choose implementation speed and reduced work over sound security practices.

    A good example of this is in C/C++. Most C code bases I have seen spread buffer use and allocation code over hundreds or thousands of files. Any one of these files could have a security bug because some code does not check the buffer size before writing data into a buffer. There is no way this pattern will ever be secure, because it requires software engineers to get every check right, which is impossible.

    Even worse, many software engineers do not care about security, or even correctness. They will happily write dangerous code because it takes less time.

    Another example of both operations and software engineers having a blind spot is cloud computing. When you write software in the cloud, you want to minimize secrets for the following reasons:

    1) They have to be periodically rotated (changed). Rotation takes time, and it is error prone. Making a mistake leads to an outage. Not rotating them can lead to a hack when an employee leaves the team or when a breach occurs and the attacker gets a copy of the secret.

    2) If a breach occurs, secrets have to be rotated very quickly. This is hard to do unless a team has spent a lot of effort on automated secret rotation.

    The solution is to use managed identities (i.e. identities which automatically rotate their credentials every X days). I know Azure provides them, and I bet AWS, GCE, etc. also provide them. It takes a little more work, but then you do not have to worry about secret rotation anymore.
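
    A rough sketch of what that looks like on Azure (assuming the azure-identity and azure-storage-blob Python packages; the storage account URL is hypothetical):

    ```python
    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    # The "secret" way: a connection string with an embedded account key that
    # someone now has to rotate, distribute, and revoke after a breach.
    # blob = BlobServiceClient.from_connection_string("AccountName=...;AccountKey=...")

    # The managed-identity way: no key in config at all. On a VM or App Service
    # with a managed identity assigned, DefaultAzureCredential picks it up and
    # short-lived tokens are issued behind the scenes -- nothing to rotate.
    blob = BlobServiceClient(
        account_url="https://examplestorage.blob.core.windows.net",
        credential=DefaultAzureCredential(),
    )
    ```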

    The problem is, more work means a lot of people just won't do it.

    The final example is the principle of least privilege. Convincing people to give only the appropriate privileges to an account, managed identity, person, etc. is hard. Lots of people just give as much access as possible "in case someone needs it", or because it is easier. This leads to much worse security breaches.
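
    Concretely, the difference is just a few strings in a policy document. A hedged AWS-flavored sketch (bucket name and statements are hypothetical; this is the kind of document you would hand to an IAM policy-creation call):

    ```python
    # "In case someone needs it": every S3 action on every bucket.
    too_broad = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
    }

    # Least privilege: only the actions the app performs, only where it performs them.
    least_privilege = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-uploads/*",
        }],
    }
    ```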

    My basic point is security problems are not just because companies don't care or are not punished enough. They also occur because software engineers, ops, and other technical people don't really care. If the people doing the actual work don't care, the situation is not going to ever improve.

    • chronid 3 days ago

      This is not my experience, working in small shops/enterprise companies (some regulated). What I've seen is a constant, hard resistance from security "departments" to doing anything other than making policies (one company I worked with for a while had a security policy denying usage of managed identities in Azure...) and buying yet another magic solution from a vendor that will fix all our security problems (offloading its maintenance onto... operations teams!), sometimes with configurations that resemble the proverbial "very expensive firewall with ACCEPT ALL policies in all directions".

      The companies with working security in my - limited, sure - experience had security teams owning the tools and making life easier for developers and ops: from something "simple" like certificate rotation automation, to mTLS that is "transparent" to apps, to authn/authz, to secret management, all owned and managed by the security org.

    • IggleSniggle 4 days ago

      The problem with the principle of least privilege is that you don't know how much privilege you need until you need it. And once you need it, you need to define a scope for it. If you wish to bake an apple pie from scratch, you must first invent the universe. But are you done with the universe once the apple pie is baked, or does it still need to be eaten, digested, and excreted? Are you done then? And what specific portions of the universe did you need in order to accomplish this goal? You're not sure? I'll see you in a few years when you're done with the research.

      Sorry to be so cynical, as I do actually believe the principle of least privilege is an appropriate goal; I just think that there's no getting around that the engineers themselves are the ones who really must uphold this virtue, and even then, it can go overboard. At some point, the software should do something.

heelix 3 days ago

In many, many cases, teams don't have enough time to do maintenance. The programs where our developers become short-order cooks, only having cycles to add the features for that sprint, almost always spiral into despair. Dependencies go unpatched, tests become untrustworthy, and the code review cycles from those who should catch things go away. Business gets what it wanted and then starts to shrink the pool of developers to save even more money, either with fewer developers or with less experienced ones. You really have to force teams to give time to polish things, because I don't think the business understands it needs to change the oil on the car.

twelve40 3 days ago

Same as performance, or any other aspect of basic engineering competency. Product, or business, will never tell you, "don't use this query for this feature because it won't scale". As an engineer, you can take a shortcut and query the database in a way that will explode after a year of growth, after you've moved on to greener pastures. Or not.

sentrysapper 3 days ago

Huh, never really thought about it this way. There really is no carrot for implementing secure solutions, only the stick when there is an incident.

nonameiguess 3 days ago

This touches on larger problems of compensation and reward even at the theory level. How do you attribute success at the organizational level to individual contributions? Outside of things like annual contract value for salespeople or win shares for pro athletes, there is often a lot of remove between what any employee does and the outcome(s) the organization cares about.

Stepping back from security, consider where else this kind of problem arises. I'm not going to remember who, but a decade back or so, one of the service branch secretaries said something to the effect that he'd rather reduce the rate at which servicemembers rape each other than win wars. Compliance with anti sexual harassment policies and contribution to positive culture became direct bullet points you had to meet on an officer evaluation report to get promoted. Is this right? I don't claim to know, personally, but consider the arguments. Nominally, what a military cares about is winning wars, but how much does what any individual officer does contribute to that? Throughout the Global War on Terror years, we regularly completely destroyed all of the leadership and fighting apparatus of elements in Iraq and Afghanistan that opposed US strategic goals. But we couldn't forcefully install competent local government with no loyalty to the prior regimes, and we couldn't eliminate sympathizers and supporters and even direct contributors to opposing efforts that took refuge in other countries the US military had no authority to operate in. In that sense, winning or losing was outside of the scope of what the military could even do. It was only one part of a larger national strategy that had many civilian pieces that could also fail.

On the other hand, if they believed that reducing rape rates could promote long-term credibility and public image for the military and it was something that was actually under the military's own full control to do, then it arguably makes sense to place emphasis there.

IT security is kind of like that. Software companies nominally care about profit above all else, or in reality scoring personal profit for their owners, whether or not that is because the company itself ever earns accounting profit. But how on earth do you assess the contribution of any individual developer to that outcome? We're looking here at people being promoted and rewarded based on quickly shipping features that seem to work and address some use case, but is that ultimately even what causes a company's owners to get wealthy? The company with the best software doesn't always win, and the company that wins doesn't even always make its owners the richest.

Like reducing rape in the military, reducing security vulnerabilities in software is more of a social goal than an organizational goal. It's rare that it ultimately matters when companies have very large, very public breaches. It's a cost when it happens, but typically one that is drowned out by other considerations. Many large incumbents effectively can't fail over any short period of time of their own accord. They need to be outcompeted, and simply being more secure is not enough of a differentiator for users to switch. If we want to force companies to give a shit anyway and reward employees based on security, we need to make it matter to the organizations. Or you could better professionalize software development and make attention to security a matter of licensing, the same way you don't leave it up to financial advisors and attorneys to act in the best interests of their clients by organizational reward. You take away their ability to get a job at all if they don't do it, by means outside of the employers themselves.

Ultimately, the military didn't really care about soldiers being raped until public outcry forced them to care. I don't think software companies are going to care, either, until public pressure hits some kind of breaking point, probably involving the force of government.