OpenAI – Superalignment Fast Grants

Superhuman AI systems will be capable of complex and creative behaviors that humans cannot fully understand. For example, if a superhuman model generates a million lines of extremely complicated code, humans will not be able to reliably evaluate whether that code is safe or dangerous to execute. Existing alignment techniques such as RLHF, which rely on human supervision, may no longer be sufficient. This leads to the fundamental challenge: how can humans steer and trust AI systems much smarter than they are?

OFR Contact: Catherine Cotter
Amount: $100K–$2 million for researchers; $150K graduate fellowships
Deadline: February 18, 2024