About Fusemachines
Fusemachines is a leading AI strategy, talent, and education services provider. Founded by Sameer Maskey, Ph.D., Adjunct Associate Professor at Columbia University, Fusemachines has a core mission of democratizing AI. With a presence in four countries (Nepal, the United States, Canada, and the Dominican Republic) and more than 450 full-time employees, Fusemachines seeks to bring its global expertise in AI to transform companies around the world.
About the role:
This is a remote, 1-year contract position responsible for leading, designing, building, and maintaining the infrastructure required for data integration, storage, processing, and analytics (BI, visualization, and Advanced Analytics) using Microsoft Azure in the Media domain.
We are seeking a Lead Data Engineer with hands-on Python and Spark experience and a proven ability to lead software development on Data and Analytics products using Agile methodology. The ideal candidate is a well-rounded senior data engineer who can lead a cloud-based Big Data product team across a variety of technologies and who possesses strong technical, analytical, and interpersonal skills. In addition, the candidate will lead the data engineers and data scientists on the team to achieve the architecture and design objectives agreed with stakeholders.
Qualification & Experience
Must have a full-time Bachelor's degree in Computer Science or similar from a top-tier school.
4+ years of experience with Azure DevOps, Azure Cloud Platform, or other hyperscalers.
At least 4 years of experience as a data engineer with strong expertise in Azure, generating large datasets from diverse data sources, in the Media industry.
Proven experience delivering projects and products for Data and Analytics as a data engineer.
Following certifications:
Microsoft Certified: Azure Fundamentals
Microsoft Certified: Azure Data Engineer Associate
Microsoft Certified: Azure Solutions Architect Expert (nice to have)
Databricks Certified Associate Developer for Apache Spark
Databricks Certified Data Engineer Associate (nice to have)
Required skills/Competencies
Strong programming skills in one or more languages such as Python (must have) and Scala, and proficiency in writing efficient, optimized code for data integration, storage, processing, and manipulation.
Strong experience using Markdown to document code, or using automated documentation tools (e.g., pydoc).
Strong experience with scalable and distributed data processing technologies such as Spark/PySpark (must have; experience with Azure Databricks is a plus), dbt, and Kafka, to handle large volumes of data.
Expert in designing and implementing efficient ELT/ETL processes in Azure (experience with Azure Data Factory is a plus), using open-source solutions, and developing custom integration solutions as needed.
Skilled in data integration from different sources such as APIs, databases, flat files, and event streaming, using technologies such as Azure Data Factory.
Expertise in data cleansing, transformation, and validation.
Hands-on experience with Jupyter Notebooks and Python packaging and dependency management tools such as Poetry and Pipenv.
Proficiency with relational databases (Oracle, SQL Server, MySQL, Postgres, or similar) and NoSQL databases (MongoDB, Azure Table Storage, or similar).
Good understanding of data modeling and database design principles, with the ability to design and implement efficient database schemas that meet the requirements of the data architecture and support data solutions.
Strong understanding and experience with SQL and writing advanced SQL queries.
Strong experience in designing and implementing Data Warehousing solutions in Azure with Azure Synapse Analytics and/or Snowflake.
Familiarity with migration of code from one or more of SAS, R, Julia, or SPSS to Python.
Proven technical leadership on prior Big Data projects.
Strong understanding of the software development lifecycle (SDLC), especially Agile methodologies.
Strong knowledge of SDLC tools and technologies such as Azure DevOps, including project management software (Jira, Azure Boards, or similar), source code management (GitHub, Azure Repos, Bitbucket, or similar), CI/CD systems (GitHub Actions, Azure Pipelines, Jenkins, or similar), and binary repository managers (Azure Artifacts or similar).
Strong understanding of DevOps principles, including continuous integration and continuous delivery (CI/CD), infrastructure as code (IaC), configuration management, automated testing, and cost management.
Strong knowledge in cloud computing specifically in Microsoft Azure services related to data and analytics, such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics (formerly SQL Data Warehouse), Azure Stream Analytics, SQL Server, Azure Blob Storage, Azure Data Lake Storage, Azure SQL Database, etc.
Experience in orchestration using technologies such as Apache Airflow.
Strong analytical skills to identify and address technical issues, performance bottlenecks, and system failures.
Proficiency in debugging and troubleshooting issues in complex data and analytics environments and pipelines.
Good understanding of Data Quality and Governance, including implementation of data quality checks and monitoring processes to ensure that data is accurate, complete, and consistent.
Good understanding of BI solutions, including Power BI and Tableau.
Knowledge of containers and their environments (Docker, Podman, Docker Compose, Kubernetes, Minikube, Kind, etc.) is a plus.
Strong written and verbal communication skills to collaborate with cross-functional teams (including data architects, DevOps engineers, data analysts, data scientists, developers, and operations teams), and the ability to think strategically and work cross-functionally with multiple stakeholders and audiences.
Ability to document processes, procedures, and deployment configurations.
Strong leadership skills with a willingness to lead, generate ideas, and be assertive.
Understanding of Azure security practices, including network security groups, Azure Active Directory, encryption, and compliance standards.
Ability to implement security controls and best practices within data and analytics solutions, including proficient knowledge of and working experience with common cloud security vulnerabilities and ways to mitigate them.
A willingness to stay updated with the latest Azure services, Data Engineering trends, and best practices in the field.
Must be well organized, comfortable working in a fast-paced environment, and able to prioritize effectively.
Team player and self-motivated.
Ability to work and learn independently to answer questions, while maintaining attention to detail and quality.
Commitment to agility, continuous learning, and ability to adapt to changing business needs.
Responsibilities
Lead engineers on the team to meet product deliverables.
Architect, design, develop, test, optimize, and maintain high-performance, large-scale data architectures, prioritizing best practices for data intake, validation, mining, and engineering to deliver data products.
Support data integration (batch and real-time), storage, processing, and infrastructure and ensure scalability, reliability, and performance of data systems.
Provide mentorship, coaching, and guidance to junior data engineers, fostering their professional growth and development.
Collaborate with cross-functional teams (Product, Engineering, Data Scientists, Analysts, Cloud Architects, DevOps engineers) to drive discovery and requirements gathering for data management and business analytics efforts to meet product deliverables, supporting the implementation of code and procedures for data products.
Work with the product management team to understand roadmap commitments and communicate design and implementation milestones effectively. Understand data requirements, interact with a multi-disciplined team, and translate requirements into effective solutions.
Identify and implement code and design optimizations.
Learn and integrate with a variety of systems, APIs, and platforms.
Establish a foolproof QA process for data validation and overall quality control of the product.
Continuously evaluate and implement new technologies and tools. Promote the development of reusable components.
Design, implement, and maintain data governance solutions. Manage cataloging, lineage, data quality (enhancing data accuracy), and governance frameworks. Implement data validation and quality assurance processes. Lead data security and privacy efforts.
Work independently and collaboratively on a multi-disciplined project team in an Agile development environment, actively participating in Agile ceremonies.
Work on a Scrum team to deliver on projects and initiatives that impact business development, client support, business management, investment management, and all functional areas of the organization.
Contribute to continuous improvement activities.
Stay updated on market trends and emerging technologies.
Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.