Data Engineer

About Trantor:

Trantor is a technology services company focused on outsourced product development and digital re-engineering. Leveraging our CaptiveCoE™ engagement model, we operate as a seamless extension of our clients' teams to provide rapid scalability with predictable budgets. Founded in 2012, Trantor has worked with customers across the Tech, FinTech, Media, and Cybersecurity industries. We have centers in the US, India, Canada, and Costa Rica. We are consistently rated the #1 employer in the region for our ability to attract and retain technical talent. Our commitment to excellence and impactful results has translated into long-term relationships and value for our clients and solution partners.


Job Description:

We are seeking a Data Engineer to design, implement, and optimize cloud-based data pipelines using Microsoft Azure services, including Azure Data Factory (ADF), Azure Synapse Analytics, and Azure Data Lake Storage (ADLS).


Job Role & Responsibilities

  • Develop and maintain ETL/ELT pipelines using Azure Data Factory to ingest, transform, and load data from diverse sources (databases, APIs, flat files).
  • Design and manage data storage solutions using Azure Blob Storage and ADLS Gen2, ensuring proper partitioning, compression, and lifecycle policies for performance and cost efficiency.
  • Build and optimize data models and analytical queries in Azure Synapse Analytics, collaborating with data architects to support reporting and BI needs.
  • Ensure data quality, consistency, and reliability through validation, reconciliation, auditing, and monitoring frameworks.
  • Collaborate with data architects, BI developers, and business teams to define architecture, integration patterns, and performance tuning strategies.
  • Implement data security best practices, including encryption, network access controls, and role-based access control (RBAC).
  • Create and maintain documentation of data workflows, pipelines, and architecture to support knowledge transfer, compliance, and audits.


Skills Required

  • 5+ years of hands-on experience in data engineering with a strong focus on Azure Data Factory, Azure Synapse Analytics, and ADLS Gen2.
  • Strong expertise in SQL, performance tuning, and query optimization for large-scale datasets.
  • Experience designing and managing data pipelines for structured and semi-structured data (CSV, JSON, Parquet, etc.).
  • Proficiency in data modeling (star schema, snowflake, normalized models) for analytics and BI use cases.
  • Practical knowledge of data validation, reconciliation frameworks, and monitoring pipelines to ensure data reliability.
  • Solid understanding of data security best practices (encryption, RBAC, compliance standards like GDPR).
  • Strong collaboration skills, with the ability to work closely with architects, BI teams, and business stakeholders.
  • Excellent skills in documentation and process standardization.


Good-to-Have Skills

  • Experience with Python/Scala scripting for automation of ETL and data quality checks.
  • Exposure to Power BI or other BI tools (Tableau, Qlik) to understand downstream analytics requirements.
  • Familiarity with CI/CD pipelines for data projects using Azure DevOps or Git-based workflows.
  • Knowledge of big data frameworks (Databricks, Spark) for large-scale transformations.
  • Hands-on experience with metadata management, data lineage tools, or governance frameworks.
  • Exposure to cloud cost optimization practices in Azure environments.
  • Understanding of API-based ingestion and event-driven architectures (Kafka, Azure Event Hubs).
Job Category: Data Engineer
Job Type: Full Time
Job Location: Chandigarh/Gurgaon/Noida/Remote
Shift Timing: General

Apply for this position

Allowed Type(s): .pdf