- Experience building and optimizing "big data" data pipelines, architectures and data sets.
- Working knowledge of message queuing, stream processing and highly scalable "big data" data stores.
- Advanced SQL knowledge and experience working with cloud and relational databases.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large, disconnected datasets.
- Experience using the following software/tools:
- Relational SQL and NoSQL databases, including Postgres.
- Data pipeline and workflow management tools.
- Azure cloud services.
- Object-oriented/functional scripting languages: Python, PySpark, Java, C, R/RStudio/RSpark.
- CI/CD systems.
- Strong understanding of cloud and infrastructure components (server, storage, network, data, and applications) and the ability to deliver end-to-end cloud infrastructure, architectures, and designs.
- Knowledge and hands-on implementation of enterprise-scale cloud security platforms and tooling.
- Experience with enterprise applications, solutions, and data center infrastructures.
- Bachelor's degree in computer science or similar field; master's degree a plus.
- Exceptional product, project and client management skills.
- Azure, AWS or any other cloud/data engineering certifications are preferred.