Vacancy expired!
They need a Software Developer (Data Engineer) with experience in Hadoop, Apache Spark, AWS services (Lambda, Batch), SQL, and Python (Flask).
- 4+ years with EMR clusters (MapReduce frameworks) and PySpark.
- 4+ years of Python, SQL, Spark SQL & PySpark
- 4+ years of recent experience building and deploying applications on AWS (S3, Hive, Glue, AWS Batch, DynamoDB, Redshift, EMR, CloudWatch, RDS, Lambda, SNS, SQS, etc.)
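For a sense of the day-to-day work this stack implies, here is a minimal sketch of a PySpark job using Spark SQL over S3 data, the kind of thing typically run on EMR. The bucket paths, table name, and column names are hypothetical placeholders, not details from the posting.

# Illustrative sketch only: aggregate raw events from S3 with Spark SQL and write results back to S3.
# Paths and column names (event_date, event_type) are assumed for the example.
from pyspark.sql import SparkSession

def main():
    spark = SparkSession.builder.appName("daily-events-aggregation").getOrCreate()

    # Read raw event data from S3 (Parquet format assumed).
    events = spark.read.parquet("s3://example-bucket/raw/events/")

    # Register a temp view so the aggregation can be written in Spark SQL.
    events.createOrReplaceTempView("events")
    daily_counts = spark.sql("""
        SELECT event_date, event_type, COUNT(*) AS event_count
        FROM events
        GROUP BY event_date, event_type
    """)

    # Write the aggregated output back to S3, partitioned by date.
    daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/curated/daily_event_counts/"
    )

    spark.stop()

if __name__ == "__main__":
    main()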