CV
Professional experience, education, and selected projects.
Contact Information
| Name | Sean Sica |
| Professional Title | Research Engineer |
| Email | sean@sica.io |
Professional Summary
Research engineer focused on mechanistic interpretability and AI safety. Leads the MITRE ATT&CK software team while conducting interpretability research using sparse autoencoders and causal interventions. MS in Data Science from UC Berkeley.
Experience
2023 - Present, Bedford, MA
Lead Software Engineer
The MITRE Corporation
Leading the ATT&CK software team and conducting mechanistic interpretability research on MITRE’s Federal AI Sandbox.
- Bootstrapped MITRE’s interpretability research effort; led a sprint developing in-house tooling for SAE training and automated feature analysis (sketched after this list)
- Contributed Kubernetes support and Azure inference integration to Neuronpedia (open-source interpretability platform)
- Building an agent-based system for managing ML experiments and training runs
- Lead development of 10+ open-source ATT&CK tools used by thousands of organizations worldwide
- Authored the ATT&CK Data Model, the first codified expression of the ATT&CK taxonomy as a TypeScript library
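
A minimal sketch of the sparse autoencoder technique behind that tooling, in plain PyTorch. The layer dimensions, L1 coefficient, and random stand-in activations are illustrative assumptions, not the in-house implementation:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Reconstructs model activations through an overcomplete ReLU bottleneck."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_hidden) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_hidden))
        self.W_dec = nn.Parameter(torch.randn(d_hidden, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        feats = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)  # sparse features
        recon = feats @ self.W_dec + self.b_dec                         # reconstruction
        return recon, feats

# One training step: reconstruction loss plus an L1 penalty that drives sparsity.
sae = SparseAutoencoder(d_model=768, d_hidden=768 * 16)  # dimensions are assumptions
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(4096, 768)  # stand-in for cached residual-stream activations

recon, feats = sae(acts)
loss = (recon - acts).pow(2).mean() + 1e-3 * feats.abs().sum(dim=-1).mean()
loss.backward()
optimizer.step()
```

Automated feature analysis then typically amounts to ranking each learned feature by the inputs that most strongly activate it and generating candidate labels from those examples.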
2021 - 2023, Bedford, MA
Senior Software Engineer
The MITRE Corporation
Software engineer on the MITRE ATT&CK team.
- Designed, deployed, and maintained the production ATT&CK TAXII 2.1 server (attack-taxii.mitre.org), sketched below
- Built REST APIs and open-source Python/TypeScript libraries for ATT&CK data distribution
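
A minimal sketch of how a client might consume that server, using only endpoints and media types defined by the TAXII 2.1 specification; the API root is resolved at runtime from the discovery response rather than assumed:

```python
import requests

# TAXII 2.1 clients must send this media type (per the OASIS spec).
HEADERS = {"Accept": "application/taxii+json;version=2.1"}
BASE = "https://attack-taxii.mitre.org"

# Discovery endpoint: fixed at /taxii2/ by the spec; lists available API roots.
discovery = requests.get(f"{BASE}/taxii2/", headers=HEADERS, timeout=30).json()
api_root = (discovery.get("default") or discovery["api_roots"][0]).rstrip("/")

# Each ATT&CK domain (Enterprise, Mobile, ICS) is exposed as a collection.
collections = requests.get(f"{api_root}/collections/", headers=HEADERS, timeout=30).json()
for coll in collections["collections"]:
    print(coll["id"], "-", coll["title"])

# Fetch the first page of STIX 2.1 objects from the first collection.
first = collections["collections"][0]["id"]
objects = requests.get(
    f"{api_root}/collections/{first}/objects/", headers=HEADERS, timeout=30
).json()
print(len(objects.get("objects", [])), "objects in first page")
```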
2018 - 2021, Bedford, MA
Network Engineer
The MITRE Corporation
Infrastructure engineer for MITRE’s Bedford campus datacenter.
- Designed and productionized Cisco ACI network fabric
- Completed BS in Computer Science at Boston University while working full-time
2017 - 2018, Portsmouth, NH
Lead IT Systems Engineer
Neoscope
- Lead integration engineer for a managed services provider
- Delivered end-to-end infrastructure refresh projects, from assessment through delivery
Education
2023 - 2025, Berkeley, CA
Master of Information and Data Science
University of California, Berkeley
- Focus: Generative AI, NLP, and mechanistic interpretability
- Research: Causal effects of fine-tuning on sparse autoencoder features
- Capstone: F1 Safety Car Prediction Engine, selected for the Berkeley Summer 2025 Showcase
2017 - 2021, Boston, MA
Bachelor of Science in Computer Science
Boston University
Publications
2024 Unveiling the Black Box: Causal Inference and Feature Analysis in Fine-Tuned Language Models Using Sparse Autoencoders
UC Berkeley DATASCI 266
Examined the causal effects of fine-tuning on language model interpretability using sparse autoencoders and mechanistic interventions.
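
A minimal sketch of the analysis pattern that title describes: run the same prompts through the base and fine-tuned models, encode both sets of activations with one shared SAE, and rank features by how much fine-tuning shifted them. The shapes and random stand-in activations are assumptions for runnability, not the study's data:

```python
import torch

# Stand-ins for SAE feature activations on identical prompts, one tensor per
# model (shape: [n_tokens, n_features]); in the real pipeline these come from
# encoding cached residual-stream activations through a single shared SAE.
n_tokens, n_features = 8192, 12288
feats_base = torch.relu(torch.randn(n_tokens, n_features))
feats_tuned = torch.relu(torch.randn(n_tokens, n_features) + 0.05)

# Per-feature summary statistics under each model.
mean_base, mean_tuned = feats_base.mean(dim=0), feats_tuned.mean(dim=0)
rate_base = (feats_base > 0).float().mean(dim=0)   # firing rate
rate_tuned = (feats_tuned > 0).float().mean(dim=0)

# Rank features by absolute shift in mean activation; top candidates are then
# tested causally (e.g. ablate the feature and measure the behavior change).
shift = (mean_tuned - mean_base).abs()
for i in torch.topk(shift, k=10).indices.tolist():
    print(f"feature {i:5d}  mean {mean_base[i]:.3f} -> {mean_tuned[i]:.3f}  "
          f"rate {rate_base[i]:.2f} -> {rate_tuned[i]:.2f}")
```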
Skills
ML & Interpretability: PyTorch, TransformerLens, SAELens, NNSight, Sparse Autoencoders, DeepSpeed, LangChain
Languages: Python, TypeScript, JavaScript, Java, SQL
Infrastructure & MLOps: Docker, Kubernetes, Slurm, AWS, MongoDB, PostgreSQL
Frameworks: Express.js, Nest.js, Spring Boot, Next.js, React
Certificates
- CCNA Routing & Switching - Cisco (2017)
- CCNA Wireless - Cisco (2018)
Interests
AI Safety & Alignment: Mechanistic Interpretability, Sparse Autoencoders, Activation Analysis, Representation Engineering
Open Source: Neuronpedia, MITRE ATT&CK, Developer Tools