{"id":7948,"date":"2021-11-02T15:13:58","date_gmt":"2021-11-02T15:13:58","guid":{"rendered":"https:\/\/theoreti.ca\/?p=7948"},"modified":"2021-11-02T15:13:58","modified_gmt":"2021-11-02T15:13:58","slug":"ask-delphi","status":"publish","type":"post","link":"https:\/\/theoreti.ca\/?p=7948","title":{"rendered":"Ask Delphi"},"content":{"rendered":"<p><a href=\"http:\/\/theoreti.ca\/wp-content\/uploads\/2021\/11\/DelphiAI.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-7950\" src=\"http:\/\/theoreti.ca\/wp-content\/uploads\/2021\/11\/DelphiAI-300x95.jpg\" alt=\"Delphi Screen Shot\" width=\"517\" height=\"211\" \/><\/a><\/p>\n<p><a href=\"https:\/\/delphi.allenai.org\/\">Ask Delphi<\/a> is an intriguing AI that you can use to ponder ethical questions. You type in a situation and it will tell you whether it is morally acceptable or not. It is apparently built not on Reddit data, but on crowdsourced data, so it shouldn&#8217;t be as easy to provoke into giving toxic answers.<\/p>\n<p>In their paper, <a href=\"https:\/\/arxiv.org\/abs\/2110.07574#\">Delphi: Towards Machine Ethics and Norms<\/a>, they say that they have created a Commonsense Norm Bank, &#8220;a collection of 1.7M ethical judgments on diverse real-life situations.&#8221; This bank underlies Delphi&#8217;s sound pronouncements, but it doesn&#8217;t seem to be available to others yet.<\/p>\n<p><a href=\"https:\/\/www.aiweirdness.com\/stealing-a-giraffe-from-the-zoo-only-if-its-a-really-cool-giraffe\/\">AI Weirdness has a nice story<\/a> on how its author, Janelle Shane, fooled Delphi.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Ask Delphi is an intriguing AI that you can use to ponder ethical questions. You type in a situation and it will tell you whether it is morally acceptable or not. 
It is apparently built not on Reddit data, but on crowdsourced data, so it shouldn&#8217;t be as easy to provoke into giving toxic answers. &hellip; <a href=\"https:\/\/theoreti.ca\/?p=7948\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Ask Delphi<\/span><\/a><\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[74,54,6],"tags":[],"class_list":["post-7948","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-big-data","category-playful-or-cool"],"_links":{"self":[{"href":"https:\/\/theoreti.ca\/index.php?rest_route=\/wp\/v2\/posts\/7948","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/theoreti.ca\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/theoreti.ca\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/theoreti.ca\/index.php?rest_route=\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/theoreti.ca\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7948"}],"version-history":[{"count":2,"href":"https:\/\/theoreti.ca\/index.php?rest_route=\/wp\/v2\/posts\/7948\/revisions"}],"predecessor-version":[{"id":7951,"href":"https:\/\/theoreti.ca\/index.php?rest_route=\/wp\/v2\/posts\/7948\/revisions\/7951"}],"wp:attachment":[{"href":"https:\/\/theoreti.ca\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7948"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/theoreti.ca\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7948"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/theoreti.ca\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7948"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}