Data Engineering Lead – Data & Intelligence
We’re seeking an experienced data engineer to drive our strategy across business intelligence, machine learning, and GenAI. As we embed intelligence and automation into trading workflows, building reliable, high-quality data infrastructure is mission-critical—and this role leads that effort.
Unlike organizations that split these functions, we value a hands-on leader who understands both the data and its downstream use cases. You'll work with software engineers, data engineers, data scientists, and quants, and help shape our technical direction.
What You’ll Do
- Architect & build: Design and implement scalable, reliable data pipelines powering everything from BI dashboards to production ML/AI models.
- Own core infrastructure: Lead our cloud data warehouse, ETL/ELT orchestration, and real-time processing stacks; define and execute the technical roadmap.
- Mentor & lead: Coach engineers and set best practices for code quality, testing, CI/CD, and operational excellence.
- Collaborate & deliver: Partner with quant researchers, ML engineers, and business stakeholders to translate requirements into robust, production-grade data solutions.
- Champion data quality: Establish data quality, governance, and observability standards to build trust with all data consumers.
About You
You’re a seasoned data engineering leader who thinks like an owner—strong analytically, pragmatic in problem-solving, and focused on systems that stand the test of time. You’re comfortable setting technical direction, guiding architecture, and raising the bar for delivery. You see the full lifecycle—from raw ingestion to the models and dashboards that depend on it—and know that clean, well-architected data is the foundation for every data-driven initiative.
Qualifications
- 7+ years designing, building, and productionizing data-intensive systems.
- Proven experience mentoring engineers and leading complex technical projects.
- Expert-level SQL and Python; deep familiarity with data manipulation libraries and backend patterns.
- Track record owning mission-critical data platforms (data warehouses/lakes; ETL/ELT pipelines).
- Strong with modern batch processing & orchestration (e.g., Airflow, Dagster, Prefect).
- Experience with real-time processing & messaging (e.g., Apache Kafka, GCP Pub/Sub).
- Bachelor's or advanced degree in Computer Science, Engineering, Mathematics, Economics, or a related field.
Nice to Have
- Financial services or trading technology experience.
- ML/AI infrastructure exposure (feature engineering, model deployment, monitoring).
- Familiarity with the GenAI/LLM ecosystem (vector DBs, RAG).
- Containerization & orchestration (e.g., Docker, Kubernetes).
