This role offers a hybrid work schedule, with the flexibility to work remotely two days a week while preserving opportunities for in-person collaboration.
This position is available in Buffalo, NY or Wilmington, DE.

About the Team
Our team is on a mission to unleash the power of data to support decision making. We build enduring data products, provide platforms to access and derive insights from data, enable confidence in decision making through appropriate data governance, deliver actionable insights through data science, and activate value for the business by solving problems with data in innovative ways. We love translating data, insights, and anecdotes into action; we operate with a sense of urgency, have a startup mentality, build data analytics products at scale, and innovate solutions on behalf of our customers. We work hard but value the need for downtime to unplug and recharge. We embrace our differences and view them as a key driver of innovation. We are Data@M&T.

Role Overview:
This position offers an opportunity to build a data ingestion framework using big-data technologies (such as HDFS, Spark, and Kafka) for batch and real-time streaming use cases. Features include data ingestion, standardization, metadata management, business rule curation, data enrichment (lookups, calculations, etc.), and statistical computations. Persistence targets include datastores (relational, NoSQL, HDFS, file-based, and cloud) and serialization formats such as XML and JSON. Additional use cases include streaming analytics and REST APIs.

Primary Responsibilities:
- Understand, prepare, process, and analyze data to drive operational, analytical, and strategic business decisions
- Create, modify, and maintain both Sqoop and BDM code and complex SQL for BI/DW data flows. Program in Spark with Python, Scala, or Java where BDM Spark code generation is not adequate
- Build end-to-end data flows from sources to fully curated and enhanced data sets. This can include locating and analyzing source data; creating data flows to extract, profile, and store ingested data; defining and building data cleansing and imputation; mapping to a common data model; transforming to satisfy business rules and statistical computations; and validating data content. A minimal sketch of such a flow follows this list.
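To illustrate the kind of flow described above, here is a minimal PySpark sketch of a batch curation pipeline: extract raw source data, cleanse and impute, map to a common model, apply a business rule, validate, and persist a curated data set. All paths, column names, and rules are hypothetical placeholders, not part of any actual M&T codebase.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curated-ingest-sketch").getOrCreate()

# Extract: load raw source data (hypothetical path; header row assumed).
raw = spark.read.option("header", "true").csv("/data/raw/accounts.csv")

# Cleanse and impute: trim identifiers and default missing balances to zero.
cleansed = (raw
    .withColumn("account_id", F.trim(F.col("account_id")))
    .withColumn("balance", F.coalesce(F.col("balance").cast("double"), F.lit(0.0))))

# Map to a common data model and apply a simple business rule (hypothetical).
curated = (cleansed
    .withColumnRenamed("acct_type", "account_type")
    .withColumn("is_high_value", F.col("balance") > 100000))

# Validate: drop rows missing the primary key before persisting.
valid = curated.filter(F.col("account_id").isNotNull())

# Persist the curated set as Parquet on HDFS, one of the stores mentioned above.
valid.write.mode("overwrite").parquet("/data/curated/accounts")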
Qualifications:
- Minimum of an Associate degree and 6 years' systems analysis/application development experience, or in lieu of a degree, a combined minimum of 8 years' higher education and/or work experience
- Experience with Agile Methodology
- An ability to build out data products & product enhancements from idea through to launch
- Strong collaboration with technology partners and customers on feature requirements and prioritization
- A team player mindset with an ability to thrive and effectively communicate in a fast-paced, constantly evolving environment
- In-depth knowledge of SQL and other database solutions, preferably Teradata and Oracle
- Experience with big-data technologies such as Hive, Impala, and Kudu
- Informatica (or equivalent) tools: PowerCenter, MDM, Big Data Management (BDM), Data Quality
- Experience working with multiple file structures: mainframe, flat files, JSON, XML
- Experience in Unix/Linux operating systems, with scripting expertise
- Programming skillsets in at least one of these languages: Java, Scala, Python
- Batch processing tools such as Sqoop and Spark
- Experience with CA Automic or an equivalent job scheduling tool
- Real-time processing tools such as Spark Streaming and Kafka (see the streaming sketch after this list)
- Cloud computing (Azure, AWS)
- NoSQL databases such as MongoDB, Cassandra, and HBase
- Agile tooling (such as Jira), source code repositories (Git, Bitbucket, etc.), and CI/CD tools (Jenkins, TeamCity, etc.)
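For the real-time side, here is a minimal sketch of streaming ingestion with Spark Structured Streaming and Kafka, two of the technologies listed above. The broker address, topic, schema, and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("streaming-ingest-sketch").getOrCreate()

# Hypothetical event schema for JSON messages on the topic.
schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read events from a Kafka topic (hypothetical broker and topic names).
events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*"))

# Standardize and persist the stream to HDFS as Parquet with checkpointing.
query = (events.writeStream
    .format("parquet")
    .option("path", "/data/streams/transactions")
    .option("checkpointLocation", "/checkpoints/transactions")
    .start())
query.awaitTermination()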
Benefits:
- Competitive compensation
- Health, welfare, and retirement benefits
- 401(k) match at 5%
- Work-life balance and flexible work arrangements
- Banking Officers start with 25 days PTO plus 12 paid holidays