Description
Amazon Music is awash in data! To help make sense of it all, the DISCO (Data, Insights, Science & Optimization) team: (i) enables the Consumer Product Tech org to make data-driven decisions that improve customer retention, engagement, and experience on Amazon Music. We build and maintain automated self-service data solutions and data science models, and deep-dive difficult questions to provide actionable insights. We also enable measurement, personalization, and experimentation by operating key data programs ranging from attribution pipelines and north-star weblab metrics to causal frameworks. (ii) delivers exceptional Analytics & Science infrastructure for DISCO teams, fostering a data-driven approach to insights and decision making. As platform builders, we are committed to constructing flexible, reliable, and scalable solutions that empower our customers. (iii) accelerates and facilitates content analytics, providing the independence to generate valuable insights in a fast, agile, and accurate way.
This domain provides analytical support for the Consumer Product Tech org to make data-driven decisions while launching new features and evaluating existing ones, with the end goal of improving the customer experience.
The DISCO team enables repeatable, easy, in-depth analysis of music customer behavior. We reduce the time and effort required for analysis, dataset building, model building, and user segmentation. Our goal is to empower all teams at Amazon Music to make data-driven decisions and effectively measure their results by providing high-quality, highly available data and democratized data access through self-service tools.
If you love the challenges that come with big data, then this role is for you. We collect billions of events a day, manage petabyte-scale data on Redshift and S3, and develop data pipelines using Spark/Scala on EMR, SQL-based ETL, and Airflow.
We are looking for a talented, enthusiastic, and detail-oriented Data Engineer who knows how to take on big data challenges in an agile way. Duties include big data design and analysis, data modeling, and the development, deployment, and operation of big data pipelines. You'll help build Amazon Music's most important data pipelines and datasets, and expand self-service data knowledge and capabilities through an Amazon Music data university.
The DISCO team develops data specifically for key business domains such as personalization and marketing, and provides and protects a robust self-service core data experience for all internal customers. We work with AWS technologies including Redshift, S3, EMR, EC2, DynamoDB, Kinesis Firehose, and Lambda. Your team will manage the data exchange store (data lake) and the EMR/Spark processing layer, using Airflow as the orchestrator. You'll build our data university and partner with Product, Marketing, BI, and ML teams to create new behavioral events, pipelines, datasets, models, and reporting that support their initiatives, and you'll continue to develop big data pipelines.
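To give a rough sense of how that orchestration layer fits together, the sketch below shows a minimal Airflow DAG that submits a daily Spark job over data-lake partitions and then triggers a placeholder Redshift load. The DAG name, S3 paths, and job script are hypothetical, not the team's actual pipeline.

# A minimal sketch, assuming Airflow 2.x and a hypothetical PySpark job on S3.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="music_events_daily",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit the Spark transformation for the execution date's partition.
    transform_events = BashOperator(
        task_id="transform_events",
        bash_command=(
            "spark-submit s3://example-bucket/jobs/transform_events.py "
            "--date {{ ds }}"           # Airflow-templated execution date
        ),
    )

    # Placeholder for a downstream Redshift load once the Spark step succeeds.
    load_redshift = BashOperator(
        task_id="load_redshift",
        bash_command="echo 'COPY into Redshift would run here'",
    )

    transform_events >> load_redshift

In practice the Spark step would typically run on an EMR cluster via Airflow's EMR operators or a job-submission service; the BashOperator here just keeps the sketch self-contained.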
Key job responsibilities
You will work with Product Managers, Data Scientists, and other Data Engineers to design, develop, and deliver scalable data analytics platform and data pipeline solutions that support Science and ML initiatives at the scale and speed of Amazon Music. In addition, you will help design, develop, and deliver components of the analytics platform at the broader org level and streamline and automate workflows for the wider DISCO organization.
What You’ll Do:
-Collaborate with cross-functional teams, including data scientists and business intelligence engineers, to design and architect a modern data analytics platform on AWS, utilizing the AWS Cloud Development Kit (CDK).
-Develop robust and scalable data pipelines using SQL/PySpark/Airflow to efficiently ingest, process, and transform large volumes of data from various sources into a structured format, ensuring data quality and integrity (see the PySpark sketch after this list).
-Design and implement an efficient and scalable data warehousing solution on AWS, using appropriate NoSQL/SQL storage and database technologies for structured and unstructured data.
-Automate ETL/ELT processes to streamline data integration from diverse data sources and ensure the platform's reliability and efficiency.
-Create data models to support business intelligence, providing actionable insights and interactive reports to end-users.
-Enable advanced analytics and machine learning capabilities within the platform to derive predictive and prescriptive insights from the data through tools like EMR/SageMaker Notebooks.
-Continuously monitor and optimize the performance of data pipelines, databases, and applications, ensuring low-latency data access for analytics and machine learning tasks.
-Implement robust security measures and ensure data compliance with internal requirements, industry standards, and regulations to safeguard sensitive information.
-Work closely with data scientists and business intelligence engineers to understand their requirements and collaborate on data-related projects.
-Create comprehensive technical documentation for the platform's architecture, data models, and APIs to facilitate knowledge sharing and maintainability.
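As a hedged illustration of the pipeline work described above (not the team's actual code), the sketch below reads hypothetical raw event JSON from S3 with PySpark, applies basic quality filters, and writes a partitioned Parquet dataset for downstream analytics; the bucket, paths, and column names are illustrative assumptions.

# A minimal PySpark sketch; paths, columns, and quality rules are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest_listening_events").getOrCreate()

# Ingest one day of raw JSON events (hypothetical location and schema).
raw = spark.read.json("s3://example-bucket/raw/listening_events/dt=2024-01-01/")

# Basic data-quality checks: drop records missing keys and de-duplicate.
clean = (
    raw.filter(F.col("customer_id").isNotNull() & F.col("event_ts").isNotNull())
       .dropDuplicates(["customer_id", "event_ts", "track_id"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write a structured, partitioned dataset for BI and ML consumers.
(
    clean.write.mode("overwrite")
         .partitionBy("event_date")
         .parquet("s3://example-bucket/curated/listening_events/")
)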
About the team
Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, Amazon Music is innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all the music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Amazon Music Unlimited for unlimited, on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.
We are open to hiring candidates to work out of one of the following locations:
Bangalore, KA, IND
Basic Qualifications
Experience with data modeling, warehousing and building ETL pipelines
3+ years of data engineering experience
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL databases, etc.
Bachelor's degree in Computer Science, Data Science, or a related field.
Experience working with Amazon Web Services (AWS) and proficiency in leveraging various AWS services for data storage, processing, and analytics.
Solid programming skills in SQL for ETL/ELT jobs.
Strong programming skills in Python for data processing, ETL, and scripting tasks.
Familiarity with big data technologies such as Apache Spark, Apache Hadoop, or AWS Elastic MapReduce (EMR).
Solid understanding of database management systems, both relational and NoSQL, and expertise in query optimization and database performance tuning.
Excellent problem-solving and analytical skills with the ability to resolve complex data engineering challenges.
Strong communication and collaboration skills, with a demonstrated ability to work effectively in a team-oriented environment.
Experience in Unix
Experience troubleshooting data and infrastructure issues.
Preferred Qualifications
Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
Knowledge of batch and streaming data architectures and technologies such as Kafka, Kinesis, Flink, Storm, and Beam
Knowledge of distributed systems as it pertains to data storage and computing
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with big data technologies such as: Hadoop, Hive, Spark, EMR
Experience in building or administering reporting/analytics platforms
Master's degree in Computer Science, Data Science, or a related field.
5+ years of professional experience in data engineering roles, with a strong track record of building data analytics platforms.
Extensive experience working with Amazon Web Services (AWS) and proficiency in leveraging various AWS services for data storage, processing, and analytics.
Hands-on experience with the AWS Cloud Development Kit (CDK) and TypeScript to build infrastructure as code (IaC) for AWS resources.
Proficiency in data modeling and designing efficient data structures for analytical workloads.
Experience with CI/CD pipelines and a strong DevOps mindset to ensure continuous integration and delivery.