Title:  Group Head - Data Engineering

Date:  3 Sept 2025
State:  Maharashtra

Job description


The position is based in Mumbai and is part of D&IT (Digitalization & Information Technology), tasked with improving the data infrastructure, architecture, governance, and delivery excellence that enable data- and insight-driven decision-making across Clusters (Generation, Renewables, Transmission & Distribution, NBS, etc.) and Corporate Functions (Finance, HR, etc.). You will lead and mentor a team of data engineers while partnering closely with peer data scientists and web/Power BI developers to create scalable, reusable, data-driven solutions to business problems. The ability to assess data requirements, design scalable architecture, build reusable data assets, and drive data governance and best practices is critical to success in this role. Our business is evolving quickly, so we need you to think long term but deliver incrementally and drive business impact.


Tata Power, with its competitive edge of resources, is playing a key role in the transformation process and aims to emerge as the most admired integrated Power and Energy company. We at Tata Power believe that investment in people and their potential is one of the greatest investments we can make. For this, we are constantly in search of talent that is curious, creative, communicative, and passionate, and that performs with excellence. Does that sound like a compelling place to work?


You’ll spend time on the following:

  • Apply 7 to 12+ years of hands-on experience in SQL database design, data architecture, ETL, data warehousing, data marts, data lakes, Big Data, Cloud (AWS), and data governance.
  • Take ownership of the technical aspects of implementing data pipeline and migration requirements, ensuring that the platform is used to its fullest potential by designing and building applications around business stakeholder needs.
  • Interface directly with stakeholders to gather requirements and own the automated end-to-end data engineering solutions.
  • Implement data pipelines to automate the ingestion, transformation, and augmentation of structured, unstructured, and real-time data, and establish best practices for pipeline operations.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
  • Troubleshoot and remediate data quality issues raised by pipeline alerts or downstream consumers, and implement data governance best practices.
  • Perform functional and technical impact analysis; provide advice and ideas for technical solutions and improvements to data systems.
  • Create and maintain clear documentation on data models/schemas as well as transformation/validation rules.
  • Implement tools that help data consumers extract, analyze, and visualize data faster through data pipelines.
  • Lead the entire software lifecycle, including hands-on development, code reviews, testing, deployment, and documentation, for batch ETL jobs.
  • Work directly with our internal product/technical teams to ensure that our technology infrastructure is seamlessly and effectively integrated.
  • Migrate current data applications and pipelines to the Cloud (AWS), leveraging PaaS technologies.


We’re excited if you have:

  • Graduate with an Engineering degree (CS/Electronics/IT) / MCA / MCS or equivalent, with substantial data engineering experience.
  • 8+ years of recent hands-on experience with a modern programming language (Scala, Python, Java) is required; Spark/PySpark is preferred.
  • Experience with configuration management and version control tools (e.g., Git) and experience working within a CI/CD framework is a plus; an even bigger plus if you have experience building frameworks.
  • 8+ years of recent hands-on SQL programming experience in a Big Data environment is required; Hadoop/Hive experience is preferred.
  • Working knowledge of relational (e.g., PostgreSQL), NoSQL, and columnar databases.
  • Hands-on experience with AWS Cloud data engineering components, including API Gateway, Glue, IoT Core, EKS, ECS, S3, RDS, Redshift, EMR, etc.
  • Experience developing and maintaining ETL applications and data pipelines using big data technologies is required; Apache Kafka, Spark, and Airflow experience is a must.
  • Knowledge of API and microservice integration with applications.
  • Experience building data solutions for Power BI and web visualization applications.
  • Experience with additional cloud platforms is a plus.
  • Experience managing multiple projects and stakeholders, with excellent communication and interpersonal skills.
  • Ability to develop and organize high-quality documentation.
  • Superior analytical skills and a strong sense of ownership of your work.
  • Ability to collaborate with data scientists across projects and contribute to the development and support of analytics, including AI/ML.
  • Ability to thrive in a fast-paced environment and to manage multiple, competing priorities simultaneously.
  • Prior Energy & Utilities industry experience is a big plus.


WL: ME01 (SME) / MD02 (GH)

Number of positions: 1

Experience (Min. – Max. in yrs.): 7-12 years

Location: Mumbai (Onsite)