Job Description
Requirements:
• Computer Science/Software Engineering (or related) degree
• 4+ years' experience with end-to-end application development on Teradata data warehouse and analytics platforms
• 2+ years' experience with end-to-end application development on Big Data technologies such as Hadoop, Hive, and PySpark
• Extensive experience developing complex Teradata SQL-based ETL and analytics workflows using native utilities (BTEQ, TPT, FastExport)
• Strong knowledge of Unix/Linux shell scripting and job scheduling tools (e.g., Autosys)
• Experience working with Big Data technologies and toolsets such as Hadoop, Hive, Sqoop, Impala, Kafka, and Python/Spark/PySpark workloads
• Working knowledge of CI/CD-based development and deployment using tools such as JIRA and Bitbucket
• Excellent written communication and diagramming skills
• Strong analytical and problem-solving abilities
• Effective speaking and presentation skills in a professional setting
• Excellent interpersonal skills; a team player able to work with global teams and business partners
• Positive attitude and flexibility
• Willingness to learn new skills and adapt to change
• Industry certifications (e.g., Teradata, Hadoop, Big Data) are a plus
Good to have: conducting discussions with business owners to streamline requirements; performing requirements analysis, design, and development; preparing unit test plans and executing tests; performing code reviews; build management; coordinating with offshore teams; release documentation and coordination; and application support and maintenance.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal employment opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment without regard to race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to [email protected].