We are now looking for an Architecture Energy Modeling Engineer - New College Grad!
At NVIDIA, we pride ourselves on building energy-efficient products. We believe that maintaining our products' energy-efficiency advantage over the competition is key to our continued success. Our team is responsible for researching, developing, and deploying methodologies that make NVIDIA's products more energy efficient, and for building energy models that integrate into architectural simulators, RTL simulation, emulation, and silicon platforms. Key responsibilities include developing Machine Learning-based power models to analyze and reduce the power consumption of NVIDIA GPUs. As a member of the Power Modeling, Methodology and Analysis Team, you will collaborate with Architects, ASIC Design Engineers, Low Power Engineers, Performance Engineers, Software Engineers, and Physical Design teams to study and implement energy modeling techniques for NVIDIA's next-generation GPUs, CPUs, and Tegra SoCs. Your contributions will give us early insight into the energy consumption of graphics and artificial intelligence workloads and will allow us to influence architectural, design, and power management improvements.
What you'll be doing:
Work with architects, designers, and performance engineers to develop an energy-efficient GPU.
Identify key design features and workloads for building Machine Learning-based unit power/energy models.
Develop and own methodologies and workflows to train models using ML and/or statistical techniques.
Improve the accuracy of trained models by using different model representations, objective functions, and learning algorithms.
Develop methodologies to estimate data movement power/energy accurately.
Correlate the predicted energy from models built at different stages of the design cycle, with the goal of bridging early estimates to silicon.
Work with performance infrastructure teams to integrate power/energy models into their platforms to enable combined reporting of performance and power for various workloads.
Develop tools to debug energy inefficiencies observed in workloads run on silicon, RTL, and architectural simulators, and identify and suggest fixes for those inefficiencies.
Prototype new architectural features, build an energy model for those new features, and analyze the system impact.
Identify, suggest, and/or participate in studies for improving GPU perf/watt.
What we need to see:
Pursuing a BS in Electrical Engineering, Computer Engineering, or a related field, or equivalent experience. Advanced degrees (MS, PhD) are a plus.
Strong coding skills, preferably in Python and C++.
Background in machine learning, AI, and/or statistical modeling.
Background in computer architecture and interest in energy-efficient GPU designs.
Ways to stand out from the crowd:
Familiarity with Verilog and ASIC design principles.
Ability to formulate and analyze algorithms, and comment on their runtime and memory complexities.
Basic understanding of fundamental concepts of energy consumption, estimation, and low power design.
The base salary range is 92,000 USD to 172,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits (https://www.nvidia.com/en-us/benefits/). NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.
NVIDIA is a Learning Machine
NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and the metaverse is transforming the world's largest industries and profoundly impacting society.
Learn more about NVIDIA.