We're looking for Data Engineers to design, build, and maintain robust data infrastructure on Azure. You'll develop scalable pipelines, ensure data quality, and enable downstream analytics and AI workloads.
You'll work closely with AI engineers, analysts, and business stakeholders to keep data clean, accessible, and reliable. We value engineers who think beyond moving data: people who understand business context and proactively improve systems.

What You’ll Work On

  • Design and optimize data pipelines using Azure Data Factory, Synapse Analytics, and Databricks.
  • Architect data lakes and warehouses on Azure Data Lake Storage and Synapse.
  • Implement real-time and batch processing with Stream Analytics, Event Hubs, and Spark Structured Streaming on Databricks.
  • Ensure data quality, governance, and lineage through validation and monitoring.
  • Build and maintain reliable, scalable ETL/ELT processes.
  • Manage Azure Cosmos DB, Azure SQL Database, and other data stores.
  • Prepare and serve data for ML models and GenAI applications.
  • Support BI/reporting needs via Power BI-ready data models.
  • Own data infrastructure, identify bottlenecks, and drive improvements.
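
For a flavor of the day-to-day, here's a minimal sketch of the kind of batch transform you might write on Databricks. It's illustrative only: the storage account, container, path, and table names are placeholders, and a real pipeline would add configuration, schema enforcement, and error handling.

    # Illustrative PySpark batch job: raw ADLS Gen2 JSON -> curated Delta table.
    # All names and paths below are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-orders-batch").getOrCreate()

    # Read raw order events landed in Azure Data Lake Storage Gen2.
    raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")

    # Basic quality gate: drop records missing a key or timestamp, then dedupe.
    clean = (
        raw.filter(F.col("order_id").isNotNull() & F.col("event_time").isNotNull())
           .withColumn("event_date", F.to_date("event_time"))
           .dropDuplicates(["order_id"])
    )

    # Publish a partitioned Delta table for Synapse and Power BI consumers.
    (clean.write.format("delta")
          .mode("overwrite")
          .partitionBy("event_date")
          .saveAsTable("curated.orders_daily"))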

What We Value

  • Strong hands-on Azure data stack experience: Data Factory, Synapse, Databricks, Data Lake Storage.
  • Proficiency in SQL, plus Python or Scala, for data transformation.
  • Experience building production-grade ETL/ELT pipelines at scale.
  • Familiarity with real-time processing: Stream Analytics, Event Hubs, or Kafka on Azure.
  • Understanding of data modeling, warehousing, and dimensional modeling.
  • Experience with Cosmos DB or other NoSQL databases.
  • Knowledge of data governance and cataloging (Microsoft Purview).
  • Familiarity with CI/CD for data pipelines (Azure DevOps or GitHub Actions).
  • Basic Power BI proficiency (reports, dashboards, visualizations).
  • Active use of AI-assisted development tools (GitHub Copilot, Cursor, Claude Code, etc.) is a must.
  • Strong problem-solving and communication skills.

Preferred:

  • Experience with large-scale data systems in enterprise or financial projects.
  • Experience with Delta Lake, Apache Spark, or other big data technologies.
  • Curiosity to explore new tools, frameworks, and problem domains.