NAVA Software Solutions is looking for a Data Engineer.

Details:
Position: Data Engineer
Location: 100% Remote (on-site attendance required when needed for important meetings; expenses paid)
Duration: Full time / Direct Hire

Responsibilities
- Work with large and complex data sets that meet functional and non-functional business requirements
- Implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Build analytical tools that use the data pipeline to provide actionable insight into key business performance metrics, including operational efficiency and customer acquisition
- Work with stakeholders to assist them with data-related technical issues
- Create and maintain optimal data pipeline architecture
- Implement data governance to maintain data integrity
- Work with data and analytics experts to strive for greater functionality in our data systems
Requirements
- Minimum of 3-4 years of experience with solution consulting
- Working knowledge of and hands-on experience with data warehouses, data lakes, and data lakehouses
- Experience building and optimizing data pipelines, architectures, and data sets
- Experience with data transformation, data structures, metadata, and automation of data workflows
- Experience with manipulating, processing, and extracting value from large datasets
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
- Experience with cloud data warehouse services: Redshift (AWS), Synapse (Azure), BigQuery (GCP), etc.
- Analytic skills related to working with structured and unstructured datasets
- Excellent communication skills, including writing white papers and position papers, giving presentations, and other forms of knowledge transfer
- Experience supporting and working with cross-functional teams in a dynamic environment
- Experience with NVIDIA DGX systems and/or NVIDIA DGX installation
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented and functional scripting languages: Python, Java, C, Scala, JavaScript/TypeScript
- Experience with multiple data integration, visualization, analytics, and cataloging tools
- Current training/certification in multiple data intelligence tools/systems
- Cloud Data Certifications from AWS, Azure, and/or Google Cloud Platform
- Position requires up to 50% domestic travel