Design, develop, and optimize data pipelines and ETL processes to support data ingestion, transformation, and storage.
Analyze data provided by external systems to understand its business use.
Develop and maintain data models, schemas, and metadata to support efficient storage and retrieval of data from external systems.
Implement data quality checks and monitoring processes to ensure the accuracy, completeness, and consistency of data.
Apply best practices to continuously improve data processes and systems.
Provide support for other members of the project team on data-related tasks.
Job Requirements:
Bachelor’s or master’s degree in computer science, engineering, or a related field.
8-10 years of experience in data engineering: consuming, wrangling, and validating data, and developing data pipelines.
5+ years of experience working with Python and Pandas.
5+ years of experience working with SQL.
Familiarity with the basic principles of distributed computing and data modeling.
Excellent problem-solving and analytical skills, with the ability to troubleshoot complex data issues and optimize data processes.
Experience with object-oriented design, coding, and testing patterns, including experience with engineering software platforms and data infrastructure.
Working experience with Dimensional Modeling.
Working experience with SQL Server is a must.
Working experience with TypeScript/JavaScript is a plus.
Working experience with Snowflake / Alteryx is a plus.
Web app development experience is a plus.
Strong written and verbal communication skills.
Openness to receiving constructive feedback.
Ability to work in a fast-paced, rapidly growing company and handle a wide variety of challenges, deadlines, and a diverse array of contacts.