Scott Aaronson to study how to prevent AI from spiraling out of control at OpenAI

Well-known quantum computing expert Scott Aaronson has announced that he will take a year's leave from UT Austin to pursue theoretical research at the AI startup OpenAI (mostly remotely). His work will focus on the theoretical foundations of preventing AI from spiraling out of control, and on what computational complexity theory can contribute to that goal. He admits he currently has no clue, which is why he needs a whole year to think about it. OpenAI's stated mission is to ensure that AI benefits all of humanity, but it is also a for-profit entity. Aaronson said that even though he did not sign a non-disclosure agreement, he is unlikely to disclose any proprietary information, though he will share general thoughts on AI safety. The short-term concern about AI safety, he said, is the misuse of AI for spam, surveillance, and propaganda; the long-term concern is what happens when AI outsmarts humans in all areas. One proposed solution is to find ways to align AI with human values.

This article is reprinted from: https://www.solidot.org/story?sid=71870
This site hosts this article for aggregation only; the copyright belongs to the original author.
