Summary:
The Meta Reality Labs Research Team brings together a world-class team of researchers, developers, and engineers to create the future of virtual and augmented reality, which together will become as universal and essential as smartphones and personal computers are today. And just as personal computers have done over the past 45 years, AR and VR will ultimately change everything about how we work, play, and connect. We are developing all the technologies needed to enable breakthrough AR glasses and VR headsets, including optics and displays, computer vision, audio, graphics, brain-computer interfaces, haptic interaction, eye/hand/face/body tracking, perception science, and true telepresence. Some of those will advance much faster than others, but they all need to happen to enable AR and VR that are so compelling that they become an integral part of our lives.

In particular, the Meta Reality Labs Research audio team is focused on two goals: creating virtual sounds that are perceptually indistinguishable from reality, and redefining human hearing. See more about our work here: https://tech.fb.com/inside-facebook-reality-labs.../. These two initiatives will allow us to connect people by letting them feel together despite being physically apart, and to converse in even the most difficult listening environments.

Meta Reality Labs Research is looking for experienced interns who are passionate about groundbreaking research in audio signal processing, machine learning, and audio-visual learning to solve important audio-driven problems for AR/VR applications. We currently have multiple open positions for a range of projects in multimodal representation learning, audio-visual scene analysis, egocentric audio-visual learning, multi-sensory speech enhancement, and acoustic activity localization. Our internships are twelve (12) to twenty-four (24) weeks long, and we have various start dates throughout the year.
Required Skills:
Research Scientist Intern, Audio, Machine Learning and Computer Vision (PhD) Responsibilities:
Research, model, design, develop, and test novel audio and speech processing algorithms using machine learning, signal processing, and computer vision.
Collaborate with researchers and engineers across diverse disciplines.
Design and implement novel algorithms to solve audio research problems.
Design, implement, and execute experiments to evaluate new audio technologies.
Collaborate with other researchers across audio and acoustic engineering disciplines.
Communicate research agenda, progress, and results.
Minimum Qualifications:
Currently has, or is in the process of obtaining, a PhD degree in the field of Computer Science, Artificial Intelligence, Signal Processing, Machine Learning, Computer Vision, Electrical Engineering, Applied Math, Acoustic Engineering, or a related STEM field.
Experience with Python, Matlab, or similar.
Experience with machine learning software platforms such as PyTorch, TensorFlow, etc.
Experience building novel computational models in audio, audio-visual, or speech application domains using machine learning or signal processing.
Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.
Preferred Qualifications:
Demonstrated software engineering experience via an internship, work experience, coding competitions, or widely used contributions to open-source repositories (e.g., GitHub).
Strong background in statistical modeling techniques and/or signal processing.
Proven track record of achieving results, as demonstrated by accepted papers at top computer vision, machine learning, and speech conferences such as CVPR, ECCV, NeurIPS, ICASSP, Interspeech, etc.
Experience working and communicating cross-functionally in a team environment.
Intent to return to a degree program after the completion of the internship/co-op.
Industry: Internet