Data Engineer - Python & PySpark with AWS
Job Description
5+ years of experience in data engineering, data warehousing, and big data processing.
Strong expertise in Python and SQL for data manipulation, automation, and pipeline development.
Hands-on experience with Kafka for real-time data ingestion and processing.
Deep understanding of AWS data services (Redshift, Glue, S3, Lambda, Kinesis, Athena, DynamoDB, etc.).
Experience in data modeling, schema design, and ETL best practices.
Familiarity with infrastructure as code (IaC) tools like Terraform or CloudFormation is a plus.
Strong problem-solving skills with the ability to debug complex data workflows.
Mandatory Skills - PySpark + Python + AWS (Glue, Lambda, EMR, etc.) + SQL.
Role: Data Engineer
Industry Type: Analytics / KPO / Research
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education
UG: Any Graduate
PG: Any Postgraduate