The Data Engineer supports the DM&BI team in organizing, leveraging, expanding, optimizing, and distributing Crown Media’s select data across different sources and destinations for internal and external vendors, using existing technologies and/or introducing new ones, with a strong awareness of industry best practices. The Data Engineer will support software developers, database architects, data analysts, and data scientists on data initiatives and will ensure that an optimal data-delivery architecture is maintained consistently across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
- Develop high-quality, reliable, and fault-tolerant data solutions based on internal and external customers’/users’ requirements, applying best practices and new trends/technologies throughout the solution lifecycle. The ETL tool of preference is Pentaho Data Integration (PDI).
- Integrate external data into data warehouses using REST APIs and web services, among other interfaces.
- Deliver data to internal customers and external vendors in a variety of data formats (CSV, Excel, XML, JSON) and via a variety of delivery methods (e.g., email, SFTP, S3), including REST APIs.
- Provide daily monitoring, management, troubleshooting, and issue resolution for existing and new data solutions and for the systems’ interfaces affected by them.
- Follow and implement Agile best practices for data warehousing.
- Support the Business Intelligence team through the delivery of reliable and timely data.
- University degree in computer science, college diploma, technical certification, equivalent relevant academic qualifications, or a minimum of 3 years of professional experience.
- Proficiency as an advanced SQL programmer/analyst/data warehouse practitioner, with experience in analysis, programming, technical documentation, unit testing, and training.
- Proficiency with SQL DML (Data Manipulation Language) with special focus on query tuning/performance.
- Experience with the AWS ecosystem (Lake Formation, Glue, Data Pipeline, EC2, Redshift, S3, Glacier, DynamoDB, Lambda, etc.).
- Background in data modeling and data warehouse design (methodologies: Kimball, OLAP, EDW).
- Proficiency with ETL programming tools (preferably Pentaho PDI), including job and scheduling management.
- Experience with reporting, visualization, and dashboarding solutions using Tableau or similar.
- Experience with version control systems. The tool of preference is Git.
- Experience with JSON, XML, CSV and other data formats. Experience with Python is a plus.
- Experience with project management, SDLC, and CI/CD methodologies.
- Ability to lead a team of data engineers/ETL developers, as well as to work as a team member and with minimal supervision.
- Experience with NoSQL or other non-relational DBMSs is a plus.
- Curious, self-motivated, and self-taught, always keeping up to date with the latest ETL/BI tools and technologies.
- Organized, meticulous, and quality-oriented.