Research Scientist, GenAI Safety Evaluations - Computer Vision

23 Nov 2024
Apply

Summary: The GenAI Evaluations Foundations team is looking to hire a Research Scientist with Vision experience for the Safety Evaluations Dev team. Safety Evaluations are key to help us both internally align as well as externally share on how the LLM is being safe in its responses to adversarial or unsafe prompts. When our models are safe, we can feel confident and comfortable open-sourcing our models for other developers and our internal product teams to use.Evaluations & benchmarks are what steer AI progress, as we can inject them in all stages of the model training. The sooner we can catch issues, the faster we can fix things resulting in us saving millions of dollars and compute cycles while avoiding harming Meta’s reputation in the world. As Llama further builds on its image understanding and image generation modalities, we are looking to hire a Research Scientist who is passionate about Safety and has experience designing evaluations and dataset for Computer vision models.Required Skills: Research Scientist, GenAI Safety Evaluations - Computer Vision Responsibilities:

Design and implement datasets to evaluate our LLMs on safety, with a focus on Vision.

Adapt standard machine learning methods to best exploit modern parallel environments (e.g. distributed clusters, multicore SMP, and GPU).

Work with a large and globally distributed team across multiple functions to understand the needs and align on goals and outcomes.

Play a significant role in healthy cross-functional collaboration.

Minimum Qualifications: Minimum Qualifications:

Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.

PhD in Computer Science, Computer Engineering, relevant technical field

3+ year(s) of work experience in a university, industry, or government lab with emphasis on AI Research in machine learning, deep learning, and computer vision.

Programming experience in Python and experience with frameworks such as PyTorch.

Exposure to architectural patterns of large scale software applications.

Domain-Relevant Research Publications accepted at peer-reviewed AI conferences (e.g., NeurIPS, CVPR, ICML, ICLR, ICCV, and ACL)

Preferred Qualifications: Preferred Qualifications:

Experience working with Safety or related areas.

Direct experience in building evals for generative AI and LLM research

First author publications at peer-reviewed AI conferences (e.g., NeurIPS, CVPR, ICML, ICLR, ICCV, and ACL).

Public Compensation: $177,000/year to $251,000/year + bonus + equity + benefitsIndustry: InternetEqual Opportunity: Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.

Full-time
  • ID: #52942743
  • State: California Menlopark 94025 Menlopark USA
  • City: Menlopark
  • Salary: USD TBD TBD
  • Showed: 2024-11-23
  • Deadline: 2025-01-22
  • Category: Et cetera
Apply