Vacancy expired!
- You think outside the box and are passionate about chasing your dreams in the Big Data space
- You’re self-motivated and want to make an impact by helping customers explore their data to uncover hidden insights
- You like to innovate and help customers find simple solutions to complex problems that have not been solved with legacy technologies
- You like to work where teams collaborate to craft transformational strategies
- Bachelor’s/Master’s degree in Computer Science or a related field
- Strong Java/Scala/Python experience.
- Strong prior professional experience building distributed solutions that handle high volumes of data
- Hands-on experience with HDFS, Hive, Pig, Sqoop, and NoSQL databases
- Experience with or knowledge of batch-processing and real-time systems built on open-source technologies such as Solr, Spark, Storm, and Kafka
- At least 6 months of experience with Apache Spark and/or Spark Streaming
- Good understanding of algorithms, data structures, and performance optimization techniques, plus exposure to the complete SDLC and PDLC
- Familiarity with architectural concepts (multi-tenancy, SOA, SCA, etc.) and NFRs (performance, scalability, monitoring, etc.)