↳ Staff Picks

September 17th, 2018

Machine Ethics, Part One: An Introduction and a Case Study

The past few years have made abundantly clear that the artificially intelligent systems that organizations increasingly rely on to make important decisions can exhibit morally problematic behavior if not properly designed. Facebook, for instance, uses artificial intelligence to screen targeted advertisements for violations of applicable laws or its community standards. While offloading the sales process to automated systems allows Facebook to cut costs dramatically, design flaws in these systems have facilitated the spread of political misinformation, malware, hate speech, and discriminatory housing and employment ads. How can the designers of artificially intelligent systems ensure that these systems behave in ways that are morally acceptable--ways that show appropriate respect for the rights and interests of the humans they interact with?

The nascent field of machine ethics seeks to answer this question by conducting interdisciplinary research at the intersection of ethics and artificial intelligence. This series of posts will provide a gentle introduction to this new field, beginning with an illustrative case study taken from research I conducted last year at the Center for Artificial Intelligence in Society (CAIS). CAIS is a joint effort between the Suzanne Dworak-Peck School of Social Work and the Viterbi School of Engineering at the University of Southern California, and is devoted to “conducting research in Artificial Intelligence to help solve the most difficult social problems facing our world.” This makes the center’s efforts part of a broader movement in applied artificial intelligence commonly known as “AI for Social Good,” the goal of which is to address pressing and hitherto intractable social problems through the application of cutting-edge techniques from the field of artificial intelligence.

⤷ Full Article

September 25th, 2018

Cash and Income Studies: A Literature Review of Theory and Evidence

What happens when you give people cash? How do they use the money, and how does it change their lives? Every cash study on this list is different: the studies vary in intervention type, research design, location, size, disbursement amount, and effects measured. The interventions listed here include basic income and its proxies--earned income tax credits, negative income taxes, conditional cash transfers, and unconditional cash transfers. This variety prevents us from making broad claims about the effects of universal basic income. But it also gives the review its breadth: it provides a sense of the scope of research in the field, capturing which research designs have been used and which effects have been estimated, measured, and reported. The review also allows us to draw some revealing distinctions across experimental designs.

If you’re interested in creating a UBI policy, there are roughly three levels of effects (after ODI) that you can examine.

⤷ Full Article

October 2nd, 2018

Who cares about stopping rules?

Can you bias a coin?

Challenge: Take a coin out of your pocket. Unless you own some exotic currency, your coin is fair: it's equally likely to land heads as tails when flipped. Your challenge is to modify the coin somehow—by sticking putty on one side, say, or bending it—so that the coin becomes biased, one way or the other. Try it!

How should you check whether you managed to bias your coin? Well, it will surely involve flipping it repeatedly and observing the outcome, a sequence of h's and t's. That much is obvious. But what's not obvious is where to go from there. For one thing, any outcome whatsoever is consistent both with the coin's being fair and with its being biased. (After all, it's possible, even if not probable, for a fair coin to land heads every time you flip it, or a biased coin to land heads just as often as tails.) So no outcome is decisive. Worse than that, on the assumption that the coin is fair any two sequences of h's and t's (of the same length) are equally likely. So how could one sequence tell against the coin's being fair and another not?
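
To make the second worry concrete: under the fairness hypothesis, every particular sequence of $n$ flips has the same probability, $(1/2)^n$. For six flips, for example,

$$P(\text{hhhhhh}) = P(\text{hthtth}) = \left(\tfrac{1}{2}\right)^{6} = \tfrac{1}{64},$$

so the all-heads outcome, which intuitively tells against fairness, is exactly as probable under fairness as an unremarkable mixed sequence.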

We face problems like these whenever we need to evaluate a probabilistic hypothesis. Since probabilistic hypotheses come up everywhere—from polling to genetics, from climate change to drug testing, from sports analytics to statistical mechanics—the problems are pressing.

Enter significance testing, an extremely popular method of evaluating probabilistic hypotheses. Scientific journals are littered with reports of significance tests; almost any introductory statistics course will teach the method. It's so popular that the jargon of significance testing—null hypothesis, $p$-value, statistical significance—has entered common parlance.
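
As a rough sketch of how such a test might go for the coin challenge (with made-up numbers: say 60 heads in 100 flips), the $p$-value can be computed directly from the binomial distribution:

```python
from math import comb

def two_sided_p_value(heads: int, flips: int) -> float:
    """p-value for the null hypothesis that the coin is fair (P(heads) = 1/2).

    Sums the probability, under fairness, of every outcome at least as far
    from the expected count (flips / 2) as the observed number of heads.
    """
    observed_deviation = abs(heads - flips / 2)
    return sum(
        comb(flips, k) * 0.5 ** flips
        for k in range(flips + 1)
        if abs(k - flips / 2) >= observed_deviation
    )

# Hypothetical data: 60 heads in 100 flips of the modified coin.
print(two_sided_p_value(60, 100))  # ~0.057: not significant at the 0.05 level
```

A small $p$-value means that outcomes at least this lopsided would be rare if the coin were fair; by convention, results below 0.05 are called statistically significant.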

⤷ Full Article

October 10th, 2018

The "Next Big Thing" is a Room

If you don’t look up, Dynamicland seems like a normal room on the second floor of an ordinary building in downtown Oakland. There are tables and chairs, couches and carpets, scattered office supplies, and pictures taped up on the walls. It’s a homey space that feels more like a lower school classroom than a coworking environment. But Dynamicland is not a normal room. Dynamicland was designed to be anything but normal.

Led by the famous interface designer Bret Victor, Dynamicland is an offshoot of HARC (Human Advancement Research Community), most recently part of Y Combinator Research. It seems like the unlikeliest vision for the future of computers anyone could have expected.

Let’s take a look. Grab one of the scattered pieces of paper in the space. Any will do as long as it has those big colorful dots in the corners. Don’t pay too much attention to those dots. You may recognize the writing on the paper as computer code. It’s a strange juxtaposition: virtual computer code on physical paper. But there it is, in your hands. Go ahead and put the paper down on one of the tables. Any surface will do.

⤷ Full Article