About the role
StackTalk's AI systems are the core of what makes us different. We use large language models to parse regulatory text, identify compliance requirements, generate policy mappings, and surface actionable insights — all with the accuracy and reliability that financial institutions demand.
As a Senior AI Engineer, you'll own the design and development of these systems end-to-end. You'll work on retrieval-augmented generation pipelines, fine-tuning strategies, evaluation frameworks, and the infrastructure that keeps our AI systems fast, accurate, and auditable.
What you'll do
- Design and build production AI pipelines for regulatory document analysis, compliance mapping, and evidence generation
- Develop and maintain RAG systems that ground model outputs in authoritative regulatory sources
- Build evaluation frameworks to measure accuracy, hallucination rates, and compliance-specific quality metrics
- Optimize LLM inference for latency and cost while maintaining output quality
- Collaborate with compliance domain experts to encode regulatory knowledge into our AI systems
- Design prompt engineering strategies and fine-tuning approaches for domain-specific performance
- Build tooling for model versioning, A/B testing, and continuous monitoring in production
What we're looking for
- 5+ years of software engineering experience, with at least 2 years focused on ML/AI systems in production
- Hands-on experience building applications with large language models (fine-tuning, RAG, agents, or similar)
- Strong Python skills and familiarity with the modern AI/ML stack (PyTorch, LangChain, vector databases, etc.)
- Experience building evaluation and monitoring systems for AI applications
- Understanding of information retrieval, NLP fundamentals, and embedding models
- Comfort working in a fast-moving startup where you'll need to balance research rigor with shipping speed
- Experience in regulated industries or with compliance/legal document processing is a strong plus
Why StackTalk
This is a rare opportunity to build AI systems where accuracy truly matters. In compliance, getting it wrong isn't just a bad user experience — it's a regulatory risk. You'll work on hard, meaningful problems at the intersection of AI and financial regulation.
We offer competitive compensation, meaningful equity, and the chance to build a company's AI infrastructure from day one. Our NYC office is where the team collaborates daily on the hardest problems in regtech.