AWS Glue Developer

Work Experience: 6 to 8 Years

Work Location: Chennai & Hyderabad

Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, Data Integration, and DataOps

Job Reference ID: BT/F21/IND

Job Description:

Design, build, and configure applications to meet business process and application requirements.


  • 7 years of work experience with ETL, Data Modelling, and Data Architecture.
  • Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark.
  • Extensive experience building data platforms on AWS using core AWS services such as Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, and RDS, and designing/developing data engineering solutions.
  • Orchestration using Airflow.

Technical Experience:

Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines. Experience building data pipelines and applications to stream and process large datasets at low latency.

  • Enhancements, new development, defect resolution, and production support of big data ETL development using AWS native services.
  • Create data pipeline architecture by designing and implementing data ingestion solutions.
  • Integrate data sets using AWS services such as Glue, Lambda, and Airflow.
  • Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, and Athena.
  • Author ETL processes using Python and PySpark.
  • Build Redshift Spectrum direct transformations and data modelling using data in S3.
  • Monitor ETL processes using CloudWatch Events.
  • Work in collaboration with other teams; good communication skills are a must.
  • Must have experience using AWS service APIs, the AWS CLI, and SDKs.
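The extract-transform-load responsibilities above can be sketched in plain Python. This is a minimal, hypothetical illustration using only the standard library; the file contents, schema, and filter rule are invented for the example, and a production AWS Glue job would express the same steps with PySpark and the Glue job context instead:

```python
# Minimal ETL sketch (stdlib only). Data, schema, and cleaning rule are
# hypothetical; a real Glue job would use PySpark DynamicFrames/DataFrames.
import csv
import io
import sqlite3

RAW_CSV = """order_id,amount,currency
1001,250.00,USD
1002,,USD
1003,99.50,EUR
"""

def extract(text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with a missing amount and cast types."""
    return [
        (int(r["order_id"]), float(r["amount"]), r["currency"])
        for r in rows
        if r["amount"]  # skip incomplete records
    ]

def load(records, conn):
    """Load: write cleaned records into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE orders (order_id INTEGER, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", records)
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW_CSV)), conn)
print(loaded)  # 2 rows survive the cleaning step
```

The same three-stage shape (ingest, clean/model, persist) carries over to Glue, with S3 as the source, PySpark as the transform layer, and Redshift or S3/Athena as the target.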

Professional Attributes:

  • Experience operating very large data warehouses or data lakes.
  • Expert-level skills in writing and optimizing SQL.
  • Extensive, real-world experience designing technology components for enterprise solutions and defining solution and reference architectures with a focus on cloud technology.
  • Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.
  • Expertise in S3, RDS, Redshift, Kinesis, and EC2 clusters is highly desired.


  • Degree in Computer Science, Computer Engineering, or equivalent.

Salary: Commensurate with experience and demonstrated competence.


Contact Information:

Drop us a message