Job Title: Senior Data Engineer
Overview
We are seeking a dedicated and experienced Senior Data Engineer to join our dynamic team. At [Company Name], we believe in fostering an inclusive environment where everyone feels empowered to bring their authentic selves to work. In this role, you will apply your technical skills and creativity to design, develop, and deliver robust data pipelines, analytics solutions, and scalable architectures that support strategic business goals. Collaboration with cross-functional teams will be key as you navigate a fast-paced Agile environment to drive success.
Responsibilities
- Design, implement, and optimize scalable data pipelines and workflows using Python (with Apache Spark) and SQL.
- Develop and maintain data models and datasets using expression languages such as DAX.
- Work with cloud-native platforms, specifically Microsoft Azure, to create and maintain highly available, scalable, and secure data architectures.
- Collaborate with Product Owners, Data Analysts, and other stakeholders in an Agile framework (e.g., Scrum and Kanban) to meet evolving business needs.
- Ensure data integrity, confidentiality, and security through best practices in database design and management.
- Monitor, troubleshoot, and improve the performance of data systems and applications.
- Stay current with emerging technologies and techniques in the field of Data Engineering to ensure industry-leading practices.
- Mentor and guide junior engineers in the team, fostering a culture of learning, collaboration, and innovation.
Qualifications
Required:
- Degree in Computer Engineering, Data Science, Statistics, Physics, or a related IT field.
- Advanced proficiency in Python (with Apache Spark), SQL, and DAX.
- Strong experience with cloud computing platforms, especially Azure Data Services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Knowledge and experience with Agile methodologies such as Scrum and/or Kanban.
- Proven ability to design and build ETL/ELT solutions and handle data integration, including API-based solutions.
- Expertise in data modeling, data pipelines, data wrangling, and database schema design.
- Strong understanding of data quality and data governance best practices.
- Analytical and problem-solving mindset, with attention to detail and commitment to delivering high-quality solutions.
Preferred:
- Experience with tools/systems like GitHub, AWS, Power BI, and Linux.
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- Familiarity with common data file formats such as Parquet, CSV, XLSX, JSON, TXT, and SQL scripts.
- Knowledge of DevOps practices and tools (e.g., CI/CD pipelines, Git, Docker).
Nice to Have:
- Experience with LS Central, Dynamics 365 (Business Central), Microsoft SQL Server, Tableau, JavaScript, HTML, CSS, and AL.
- Experience with statistics, code/performance optimization, and breaking down complex data engineering problems.
Day-to-day
- Engage daily with the team in Agile ceremonies such as stand-ups, sprint planning, and retrospectives.
- Write and refine code for scalable data pipelines using Python (with Apache Spark) and SQL.
- Design datasets and develop robust models for business reporting and analytics using DAX.
- Manage, monitor, and optimize data workflows in Azure cloud environments to ensure reliability and performance.
- Troubleshoot and resolve issues in existing workflows while identifying opportunities for improvement.
- Partner with stakeholders to understand data requirements and propose tailored engineering solutions.
- Stay up to date with industry trends through research, knowledge sharing, and attending relevant training or conferences.