Ask HN: Translate high lvl AI risk policies to dev tasks and ensure enforcement?

1 point by percfeg 8 hours ago

I am a dev lead in a regulated industry. We're increasingly integrating GenAI apps into our stack, and we love them, but we're running into challenges ensuring these apps are aligned with internal AI risk policies before they get deployed. These policies are often written by the GRC team and hence are very high level and business oriented, which makes them hard to translate into actionable dev items. This ambiguity also makes it hard to effectively test whether the implemented controls actually enforce the intended policy.

I'm interested to know whether others are facing similar hurdles and how you are tackling them. Specifically:

- How do you turn abstract AI policies into specific, testable requirements for your development teams?

- Are you automating the enforcement of these AI specific policies within your CI/CD pipelines or are you primarily relying on post deployment monitoring?

- What specific tools, frameworks, or platforms are you using for this purpose?

- What other challenges are you encountering in operationalising AI risk management/governance in the SDLC?
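To make the first question concrete, here is a minimal sketch of the kind of translation I mean: a GRC policy like "customer PII must not be sent to third-party LLM APIs" turned into a CI check over our prompt templates. Everything here is hypothetical (the template names, the narrow set of PII patterns); a real control would need much broader detection.

```python
import re

# Hypothetical CI gate: the high-level policy "no customer PII may leave
# our boundary" becomes a concrete, testable check on prompt templates.
# Only two illustrative patterns; a real check would cover far more.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def pii_findings(text: str) -> list[str]:
    """Return the names of PII patterns matched in `text`."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def check_prompt_templates(templates: dict[str, str]) -> list[str]:
    """Return policy violations; a CI job would fail the build if non-empty."""
    violations = []
    for name, body in templates.items():
        for finding in pii_findings(body):
            violations.append(f"{name}: contains {finding}")
    return violations

# Illustrative templates, not from a real codebase.
templates = {
    "summarize": "Summarize the ticket below without quoting personal data.",
    "bad_example": "Email jane.doe@example.com about SSN 123-45-6789.",
}
print(check_prompt_templates(templates))
# -> ['bad_example: contains email', 'bad_example: contains ssn']
```

The point isn't the regexes, it's the shape: each policy clause becomes a named, versioned check that runs pre-deployment, so "are we compliant?" becomes "does the pipeline pass?".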

Thanks in advance!