The Machine Intelligence Research Institute (MIRI) is a nonprofit based in Berkeley, California, focused on reducing existential risks from the transition to smarter-than-human AI. We have historically focused on technical alignment research. Since summer 2023, we have shifted our focus toward increasing the likelihood of good AI regulation. See our strategy update post for more details.
We are looking to build a dynamic and versatile team that can quickly produce a wide range of research outputs for the technical governance space. We are not currently hiring actively, but we are open to expressions of interest to join our team. Please feel free to fill out this form, or contact us at techgov@intelligence.org.
We focus on researching and designing technical aspects of regulations and policy that could lead to safe AI. The team works on:
Limitations of current AI safety proposals and policies
Inputs into regulations and responses to requests for comment from policy bodies (e.g., NIST/US AISI, EU, UN)
Technical research to improve international coordination
Communicating with and consulting for policymakers and governance organizations
In this role, you will report to the Research Manager (Lisa Thiergart), and work on the Technical Governance Team. You would have the chance to work on all of the above areas. The work will be a mixture of researching, writing (for internal and external use), engaging with collaborators and policymakers, as well as possibly building some evaluations. Some example tasks could include:
Reading a government or AI developer’s AI policy document, and writing a report on its limitations
Threat modeling: working out how AI systems could cause large-scale harm, and identifying what actions could be taken to prevent this
Responding to a US government agency’s Request for Comment related to the AI Executive Order
Learning about risk management practices in other industries, and applying these to AI
Designing and implementing evaluations of AI models, for example to demonstrate failure modes of current policy
Preparing and presenting informative briefings to policymakers, such as explaining the basics and current state of AI evaluations
Designing new AI policies and standards which address the limitations of current approaches
Maintaining, in all of the above work, a particular focus on what is needed for solutions to scale to smarter-than-human intelligence, and researching which new challenges may emerge at that stage
We are not currently hiring for this role, but we may pursue candidates who are a particularly good fit.
There are no formal degree requirements to work on the team; however, we are especially excited about applicants who have a strong background in AI Safety and previous experience or familiarity working in (or as) one or more of the following:
AI evaluations and benchmarks. The role may involve building some evaluations, though these would be closer to demonstrations than full benchmarking systems. There is the possibility of doing more in-depth work here in the future.
Policy (including AI policy). Experience here could involve writing legislation or white papers, engaging with policymakers, or conducting other research in AI policy and governance.
Strong AI Safety generalist. For example, you have produced good AI safety research and have an overview-level understanding of empirical, theoretical, and conceptual approaches, or you have otherwise demonstrated an ability to think clearly and carefully about AI safety.
Bonus: technical knowledge of hardware/chip manufacturing and/or compute governance
We are also excited about candidates who are particularly strong in the following areas:
Conscientiousness – You are diligent and hard-working, and complete your work reliably. You care about doing tasks well and effectively. You pay attention to details, stay organized, and can manage many small tasks and projects.
Comfort learning on the job – You enjoy acquiring new skills and knowledge, and can do so quickly as needed. You are comfortable working on underspecified tasks where part of your job is to further develop the research questions appropriately.
Agency – You get things done without someone constantly looking over your shoulder. You notice problems and are motivated to fix them. You focus on solving the problem, not waiting to be told what to do. You know when to defer to another’s decision, and when to ask for guidance. You are an active member of the team, not a mindless cog in the machine.
Generative thinking – You enjoy coming up with and iterating on new ideas. You can generate original work as well as extend others’ thoughts. You aren’t afraid to suggest things, or to point out flaws in your own or others’ thinking.
Communication (Internal) – You are a team player who is excited to work with others and willing to attend several weekly meetings. You proactively keep teammates and your manager in the loop about the status of projects you manage, flag when things are falling behind, and say when you need more information. You voice your confusions.
Communication (External) – You are able to communicate effectively with external stakeholders who have a range of technical expertise, including policymakers. You can produce concise, clear, and compelling writing, and deliver presentations on the team’s research and ideas.
In addition, we are looking for candidates who:
Are broadly aligned with MIRI's values.
Are passionate about MIRI’s mission and excited to support our work in reducing existential risks from AI.
Application deadline – This expression of interest does not have a deadline.
Location – In-office preferred (Berkeley, CA).
Compensation – $120–200k. The range reflects the wide variety of experience and skills that candidates may bring.
We strive to ensure that all staff are paid an appropriate and comfortable living wage, such that they feel fairly compensated and are able to focus on doing great work.
Benefits – MIRI offers a variety of benefits including:
Health insurance (the best available plans from Kaiser and Blue Shield) as well as dental and vision coverage. (We cannot always offer comparable benefits to international staff.)
“No vacation policy” – staff are encouraged to take vacation when they want/need to in coordination with their manager.
Visas – We can potentially sponsor visas for particularly promising candidates.