# ↳ Letters

## FEED FEEDBACK

### Sociologist Zeynep Tufekci engages with Adam Mosseri, who runs the Facebook News Feed

Tufekci: “…Facebook does not ask people what they want, in the moment or any other way. It sets up structures, incentives, metrics & runs with it.”

Mosseri: “We actually ask 10s of thousands of people a day how much they want to see specific stories in the News Feed, in addition to other things.”

Tufekci: “That’s not asking your users, that’s research on your product. Imagine a Facebook whose customers are users—you’d do so much differently. I mean asking all people, in deliberate fashion, with sensible defaults—there are always defaults—even giving them choices they can change…Think of the targeting offered to advertisers—with support to make them more effective—and flip the possibilities, with users as customers. The users are offered very little in comparison. The metrics are mostly momentary and implicit. That’s a recipe to play to impulse.”

The tweets are originally from Zeynep Tufekci in response to Benedict Evans (link), but the conversation is much easier to read in Hamza Shaban’s screenshots here.

See the end of this newsletter for an extended comment from Jay.

• On looping effects (paywall): “This chapter argues that today's understanding of causal processes in human affairs relies crucially on concepts of ‘human kinds’ which are a product of the modern social sciences, with their concern for classification, quantification, and intervention. Child abuse, homosexuality, teenage pregnancy, and multiple personality are examples of such recently established human kinds. What distinguishes human kinds from ‘natural kinds’, is that they have specific ‘looping effects’. By coming into existence through social scientists' classifications, human kinds change the people thus classified.” Link. ht Jay

## THE MESO-LEVEL

### Mechanisms and causes between micro and macro

Daniel Little, the philosopher of social science behind Understanding Society, has written numerous posts on the topic. Begin with this one from 2014:

“It is fairly well accepted that there are social mechanisms underlying various patterns of the social world — free-rider problems, communications networks, etc. But the examples that come readily to mind are generally specified at the level of individuals. The new institutionalists, for example, describe numerous social mechanisms that explain social outcomes; but these mechanisms typically have to do with the actions that purposive individuals take within a given set of rules and incentives.

“The question here is whether we can also make sense of the notion of a mechanism that takes place at the social level. Are there meso-level social mechanisms? (As always, it is acknowledged that social stuff depends on the actions of the actors.)”

In the post, Little defines a causal mechanism and a meso-level mechanism, then offers example research.

“…It is possible to identify a raft of social explanations in sociology that represent causal assertions of social mechanisms linking one meso-level condition to another. Here are a few examples:

• Al Young: decreasing social isolation causes rising inter-group hostility (link)
• Michael Mann: the presence of paramilitary organizations makes fascist mobilization more likely (link)
• Robert Sampson: features of neighborhoods influence crime rates (link)
• Chuck Tilly: the availability of trust networks makes political mobilization more likely (link)
• Robert Brenner: the divided sovereignty system of French feudalism impeded agricultural modernization (link)
• Charles Perrow: legislative control of regulatory agencies causes poor enforcement performance (link)”

More of Little’s posts on the topic are here. ht Steve Randy Waldman

## “A DOLL POSSESSED BY A DEMON”

### Recommender systems power YouTube's controversial kids' videos

Familiar cartoon characters are placed in bizarre scenarios, sometimes by human content creators, sometimes by automated systems, for the purpose of attracting views and ad money. First, from the New York Times:

“But the app [YouTube Kids] contains dark corners, too, as videos that are disturbing for children slip past its filters, either by mistake or because bad actors have found ways to fool the YouTube Kids algorithms.

“In recent months, parents like Ms. Burns have complained that their children have been shown videos with well-known characters in violent or lewd situations and other clips with disturbing imagery, sometimes set to nursery rhymes. Many have taken to Facebook to warn others, and share video screenshots showing moments ranging from a Claymation Spider-Man urinating on Elsa of ‘Frozen’ to Nick Jr. characters in a strip club.”

Full piece by SAPNA MAHESHWARI in the Times here.

On Medium, JAMES BRIDLE expands on the topic, and criticizes the structure of YouTube itself for incentivizing these kinds of videos, many of which have millions of views.

“These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place.

“While it is tempting to dismiss the wilder examples as trolling, of which a significant number certainly are, that fails to account for the sheer volume of content weighted in a particularly grotesque direction. It presents many and complexly entangled dangers, including that, just as with the increasing focus on alleged Russian interference in social media, such events will be used as justification for increased control over the internet, increasing censorship, and so on.”

## PREDICTIVE JUSTICE

### How to build justice into algorithmic actuarial tools

Key notions of fairness contradict each other—something of an Arrow’s Theorem for criminal justice applications of machine learning.

"Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them."

Full paper from JON KLEINBERG, SENDHIL MULLAINATHAN and MANISH RAGHAVAN here. h/t research fellow Sara, who recently presented on bias in humans, courts, and machine learning algorithms, and who was the source for all the papers in this section.
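As a toy illustration of the tension (our own sketch with invented base rates, not the paper's formalism): assign everyone in a group a score equal to that group's base rate. The score is perfectly calibrated within each group, yet thresholding it yields maximally unequal error rates whenever the base rates differ.

```python
# A minimal numeric sketch, not the paper's proof: a trivially
# calibrated score (everyone gets their group's base rate) cannot
# also balance error rates across groups with different base rates.

def group_error_rates(base_rate, threshold=0.5):
    """Everyone in the group scores exactly base_rate, so the score is
    calibrated; the thresholded decision is the same for the whole group."""
    predicted_positive = base_rate >= threshold
    # False-positive rate: share of true negatives flagged positive.
    fpr = 1.0 if predicted_positive else 0.0
    # False-negative rate: share of true positives flagged negative.
    fnr = 0.0 if predicted_positive else 1.0
    return fpr, fnr

fpr_a, fnr_a = group_error_rates(base_rate=0.3)  # group A: all flagged negative
fpr_b, fnr_b = group_error_rates(base_rate=0.6)  # group B: all flagged positive
print(fpr_a, fpr_b)  # 0.0 1.0: calibrated, yet maximally unbalanced FPRs
```

The example is degenerate by design; the Kleinberg et al. result shows the conflict persists for any non-trivial scoring rule outside their special cases.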

In a Twitter thread, ARVIND NARAYANAN describes the issue in more casual terms.

"Today in Fairness in Machine Learning class: a comparison of 21 (!) definitions of bias and fairness [...] In CS we're used to the idea that to make progress on a research problem as a community, we should first all agree on a definition. So 21 definitions feels like a sign of failure. Perhaps most of them are trivial variants? Surely there's one that's 'better' than the rest? The answer is no! Each defn (stat. parity, FPR balance, contextual fairness in RL...) captures something about our fairness intuitions."

Jay comments: Kleinberg et al. describe their result as choosing between conceptions of fairness. It’s not obvious, though, that this is the correct description. The criteria (calibration and balance) discussed aren’t really conceptions of fairness; rather, they’re (putative) tests of fairness. Particular questions about these tests aside, we might have a broader worry: if fairness is not an extensional property that depends upon, and only upon, the eventual judgments rendered by a predictive process, exclusive of the procedures that led to those judgments, then no extensional test will capture fairness, even if this notion is entirely unambiguous and determinate. It’s worth considering Nozick’s objection to “pattern theories” of justice for comparison, and (procedural) due process requirements in US law.

## ARTIFICIAL AGENCY AND EXPLANATION

### The gray box of XAI

A recent longform piece in the New York Times identifies the problem of explaining artificial intelligence. The stakes are high because of the European Union’s controversial and unclear “right-to-explanation” law, which will become active in May 2018.

“Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand. But that goal, of course, raises the fundamental question of whether the world a machine sees can be made to match our own.”

An interdisciplinary group addresses the problem:

"Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should often be technically feasible but may sometimes be practically onerous—there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard."

Full article by FINALE DOSHI-VELEZ et al. here. ht Margarita. For the layperson, the most interesting part of the article may be its general overview of societal norms around explanation and of explanation in the law.

Michael comments: Human cognitive systems have generated similar questions in vastly different contexts. The problem of chick-sexing (see Part 3) gave rise to a mini-literature within epistemology.

From Michael S. Moore’s book Law and Psychiatry: Rethinking the Relationship: “A full explanation in terms of reasons for action requires two premises: the major premise, specifying the agent’s desires (goals, objectives, moral beliefs, purposes, aims, wants, etc.), and the minor premise, specifying the agent’s factual beliefs about the situation he is in and his ability to achieve, through some particular action, the object of his desires.” Link. ht Margarita

• A Medium post with an illustrated summary of some XAI techniques. Link.
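One widely used family of XAI techniques can be sketched in a few lines: perturb one input at a time and measure how much a black-box model's output moves (permutation importance). The model and its weights below are invented stand-ins, not any system discussed in the article.

```python
import random

# Hypothetical sketch of permutation importance: shuffle one feature's
# column at a time and average how far the black box's output moves.

random.seed(0)

def black_box(x):
    # Stand-in model with assumed weights: input 0 matters most,
    # input 2 not at all.
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def permutation_importance(model, samples, n_features):
    baseline = [model(x) for x in samples]
    importances = []
    for j in range(n_features):
        column = [x[j] for x in samples]
        random.shuffle(column)  # break feature j's link to the output
        deltas = []
        for x, new_val, base in zip(samples, column, baseline):
            perturbed = list(x)
            perturbed[j] = new_val
            deltas.append(abs(model(perturbed) - base))
        importances.append(sum(deltas) / len(deltas))
    return importances

samples = [[random.random() for _ in range(3)] for _ in range(200)]
imp = permutation_importance(black_box, samples, 3)
# imp[0] should dominate, and imp[2] should be exactly zero
```

The appeal of this kind of method is that it needs only query access to the model, which is why it recurs across the XAI literature; its limitation, per the debate above, is that it describes correlations in the model's behavior rather than reasons.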

## THE FUTURE OF UNDERGRADUATE EDUCATION

### A new report argues that quality, not access, is the pivotal challenge for colleges and universities

From the American Academy of Arts and Sciences, a 112-page report with "practical and actionable recommendations to improve the undergraduate experience":

"Progress toward universal education has expanded most recently to colleges and universities. Today, almost 90 percent of high school graduates can expect to enroll in an undergraduate institution at some point during young adulthood and they are joined by millions of adults seeking to improve their lives. What was once a challenge of quantity in American undergraduate education, of enrolling as many students as possible, is now a challenge of quality—of making sure that all students receive the rigorous education they need to succeed, that they are able to complete the studies they begin, and that they can do this affordably, without mortgaging the very future they seek to improve."

Link to the full report. Co-authors include Gail Mellow, Sherry Lansing, Mitch Daniels, and Shirley Tilghman. ht Will, who highlights a few of the report's recommendations that stand out:

• From page 40: "Both public and private colleges and universities as well as state policy-makers [should] work collaboratively to align learning programs and expectations across institutions and sectors, including implementing a transferable general education core, defined transfer pathway maps within popular disciplines, and transfer-focused advising systems that help students anticipate what it will take for them to transfer without losing momentum in their chosen field."
• From page 65: "Many students, whether coming straight out of high school or adults returning later to college, face multiple social and personal challenges that can range from homelessness and food insecurity to childcare, psychological challenges, and even imprisonment. The best solutions can often emerge from building cooperation between a college and relevant social support agencies."
• From page 72: "Experiment with and carefully assess alternatives for students to manage the financing of their college education. For example, income-share agreements allow college students to borrow from colleges or investors, which then receive a percentage of the student’s after-graduation income."
• On a related note, see this 2016 paper from the Miller Center at the University of Virginia: "Although interest in the ISA as a concept has ebbed and flowed since Milton Friedman first proposed it in the 1950s, today it is experiencing a renaissance of sorts as new private sector partners and institutions look to make the ISA a feasible option for students. ISAs offer a novel way to inject private capital into higher education systems while striking a balance between consumer preferences and state needs for economic skill sets. The different ways ISAs can be structured make them highly suitable as potential solutions for many states’ education system financing problems." Link.
• Meanwhile, Congress is working on the reauthorization of the Higher Education Act: "Much of the proposal that House Republicans released last week is controversial and likely won’t make it into the final law, but the plan provides an indication of Congressional Republicans’ priorities for the nation’s higher education system. Those priorities include limiting the federal government’s role in regulating colleges, capping graduate student borrowing, making it easier for schools to limit undergraduate borrowing — and overhauling the student loan repayment system. Many of those moves have the potential to create a larger role for private industry." Link.
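The income-share mechanism described on page 72 of the report can be sketched with invented contract terms (real ISA contracts vary widely in share, term, and income floor):

```python
# Hypothetical ISA, all terms invented for illustration: funding in
# exchange for a fixed share of income over a fixed term, with
# payments waived in years below an income floor.

def isa_total_repaid(annual_incomes, share=0.04, term_years=10,
                     income_floor=20_000):
    """Total repaid over the contract, given projected annual incomes."""
    total = 0.0
    for income in annual_incomes[:term_years]:
        if income >= income_floor:  # below the floor, no payment owed
            total += share * income
    return total

# A graduate earning 40k, growing 3% a year, over a 10-year term:
incomes = [40_000 * 1.03 ** t for t in range(10)]
repaid = isa_total_repaid(incomes)
```

The income floor is what distinguishes the instrument from a loan: repayment scales with realized earnings rather than a fixed schedule, shifting downside risk from the student to the funder.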

## HOW TO HANDLE BAD CONTENT

### Two articles illustrate the state of thought on moderating user-generated content

Ben Thompson of Stratechery rounds up recent news on content moderation on Twitter/Facebook/Youtube and makes a recommendation:

“Taking political sides always sounds good to those who presume the platforms will adopt positions consistent with their own views; it turns out, though, that while most of us may agree that child exploitation is wrong, a great many other questions are unsettled.

“That is why I think the line is clearer than it might otherwise appear: these platform companies should actively seek out and remove content that is widely considered objectionable, and they should take a strict hands-off policy to everything that isn’t (while — and I’m looking at you, Twitter — making it much easier to avoid unwanted abuse from people you don’t want to hear from). Moreover, this approach should be accompanied by far more transparency than currently exists: YouTube, Facebook, and Twitter should make explicitly clear what sort of content they are actively policing, and what they are not; I know this is complicated, and policies will change, but that is fine — those changes can be transparent too.”

Full blog post here.

The second piece, from Social Capital:

“… If we want to really make progress towards solving these issues we need to recognize there’s not one single type of bad behavior that the internet has empowered, but rather a few dimensions of them.”

The piece goes on to describe four types of bad content. Link.

Michael comments: The discussion of content moderation—and digital curation more broadly—conspicuously ignores the possibility of algorithmic methods for analyzing and disseminating (ethically or evidentiarily) valid information. Thompson and Social Capital default to traditional and cumbersome forms of outright censorship, rather than methods to “push” better content.

We'll be sharing more thoughts on this research area in future letters.

## THE YEAR IN REVIEW

### Income share agreements: Purdue, BFF, and the national conversation

“Long discussed in college policy and financing circles, income share agreements, or ISAs, are poised to become more mainstream.” That’s from a September Wall Street Journal article.

2017 saw new pilots and the introduction of legislation in Congress, as well as the continued growth of Purdue’s Back a Boiler program, which was covered in PBS Newshour with JFI staff featured. Better Future Forward (incubated by JFI) was founded to originate ISAs and structure ISA pilots. Launched in February 2017 with support from the Arnold Foundation and the Lumina Foundation, BFF has formed partnerships with (and funded students through) Opportunity@Work, College Possible, and the Thurgood Marshall College Fund.

Various research organizations are tracking ISAs closely, among them the American Academy of Arts and Sciences, whose report on undergraduate education is quoted above (see page 72 for mention of ISAs), and the Miller Center at the University of Virginia, whose 2016 paper is also quoted above.

## THE YEAR IN ECONOMICS

### Nominations from top economists, including selections by Raj Chetty, Sendhil Mullainathan, and Angus Deaton

One favorite from this excellent round-up is by Hulten and Nakamura on metrics, selected by Diane Coyle (we previously sent her Indigo Prize paper):

Accounting for Growth in the Age of the Internet: The Importance of Output-Saving Technical Change by Charles Hulten and Leonard Nakamura

Main finding: Living standards may be growing faster than GDP growth.
Nominating economist: Diane Coyle, University of Manchester
Specialization: Economic statistics and the digital economy
Why?: “This paper tries to formalize the intuition that there is a growing gap between the standard measure of GDP, capturing economic activity, and true economic welfare and to draw out some of the implications.”

Robert Allen's "Absolute Poverty: When Necessity Displaces Desire" is another metrics-related piece on the list.

Also noteworthy, on the future of work:

Valuing Alternative Work Arrangements by Alexandre Mas and Amanda Pallais

Main finding: The average worker does not value an Uber-like ability to set their own schedule.
Nominating economist: Emily Oster, Brown University
Specialization: Health economics and research methodology
Why? “This paper looks at a question increasingly important in the current labor market: How do people value flexible work arrangements? The authors have an incredibly neat approach, using actual worker hiring to generate causal estimates of how workers value various employment setups.”

Full piece by DAN KOPF here.

## THE WAGE EFFECT

### Higher minimum wages and the EITC may reduce recidivism

“Using administrative prison release records from nearly six million offenders released between 2000 and 2014, we use a difference-in-differences strategy to identify the effect of over two hundred state and federal minimum wage increases, as well as 21 state EITC programs, on recidivism. We find that the average minimum wage increase of 8% reduces the probability that men and women return to prison within 1 year by 2%. This implies that on average the wage effect, drawing at least some ex-offenders into the legal labor market, dominates any reduced employment in this population due to the minimum wage.”

Full paper by AMANDA Y. AGAN and MICHAEL D. MAKOWSKY here.
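The paper's difference-in-differences strategy can be illustrated in its minimal 2x2 form; the rates below are invented for illustration, not the authors' data.

```python
# Minimal 2x2 difference-in-differences sketch: a "treated" state
# raises its minimum wage, a "control" state does not, and we compare
# the two states' changes in a recidivism rate.

def diff_in_diff(treated_before, treated_after,
                 control_before, control_after):
    """DiD estimate: treated group's change minus control group's
    change, netting out any trend common to both groups."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical 1-year return-to-prison rates: the treated state falls
# 2% relative to a 0.30 baseline (0.30 -> 0.294); the control is flat.
effect = diff_in_diff(0.30, 0.294, 0.30, 0.30)
# effect is about -0.006, i.e. a 0.6 percentage-point decline
```

The actual paper runs this logic across more than two hundred minimum wage changes with controls, but the identifying comparison is the same: changes in treated states against contemporaneous changes in untreated ones.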

• Jennifer Doleac responds, “The results in this new paper…definitely surprised me—my prior was that raising the min wage would increase recidivism.” She explains: “Those coming out of prison are very likely to be on the margin of employment (last hired, first fired). Given some disemployment effects, marginal workers are the ones who are going to be hurt. Amanda and Mike find that the positive effects of pulling some (higher-skilled?) offenders into the legal labor market outweigh those negative effects.” Link to Doleac’s Twitter thread.
• One of the co-authors, Makowsky, adds, “The EITC, dollar for household dollar, generates larger effects [than minimum wage increases], but is hampered by its contingency on dependent children. This is one more reason to remove the contingency, extending it to everyone.” Link to Makowsky’s Twitter thread.
• Another piece sourced from Makowsky’s thread: “The Impact of Living-Wage Ordinances on Urban Crime.” “Using data on annual crime rates for large cities in the United States, we find that living-wage ordinances are associated with notable reductions in property-related crime and no discernable impact on nonproperty crimes.” Link.
• Noah Smith rounds up recent studies on increasing the minimum wage, many of which come to contradictory conclusions. "At this point, anyone following the research debate will be tempted to throw up their hands. What can we learn from a bunch of contradictory studies, each with its own potential weaknesses and flaws?" Link.

## PERVERSE CONSEQUENCES

### Does banning the box increase hiring discrimination?

“Our results support the concern that BTB [Ban the Box] policies encourage racial discrimination: the black-white gap in callbacks grew dramatically at companies that removed the box after the policy went into effect. Before BTB, white applicants to employers with the box received 7% more callbacks than similar black applicants, but BTB increased this gap to 43%. We believe that the best interpretation of these results is that employers are relying on exaggerated impressions of real-world racial differences in felony conviction rates.”

Newly published by AMANDA AGAN and SONJA STARR, in line with their previous work on the same topic, available here.

• These results bolster longstanding concerns about perverse consequences arising from ban the box legislation. (Similar studies include this one from 2006, and this one from 2016.) A 2008 paper provides a theoretical accompaniment to these worries, arguing that a privacy tradeoff is required to ensure race is not being used as a proxy for criminal history: “By increasing the availability of information about individuals, we can reduce decisionmakers’ reliance on information about groups.… reducing privacy protections will reduce the prevalence of statistical discrimination.” Link.
• In a three part series from 2016, Noah Zatz at On Labor took on the perverse consequences argument and its policy implications, levelling three broad criticisms: “it places blame in the wrong place, it relies upon the wrong definition of racial equality, and it ignores cumulative effects.” Link to part one.
• A 2017 study of ban the box that focused on the public sector—where anti-discrimination enforcement is more robust—found an increase in the probability of hiring for individuals with convictions and “no evidence of statistical discrimination against young low-skilled minority males.” Link.
• California’s Fair Chance Act went into effect January 1, 2018, joining a growing list of fair hiring regulations in many other states and counties by extending ban the box reforms to the private sector. The law provides that employers can only conduct criminal background checks after a conditional offer of employment has been made. More on the bill can be found here.
• Two posts on the California case, again by Zatz at On Labor, discuss several rich policy design questions raised by the “bright line” standards included in this legislation, and how they may interact with the prima facie standard of disparate impact discrimination: “Advocates fear, however, that bright lines would validate the exclusion of people on the wrong side of the line, despite individualized circumstances that vindicate them. But of course, the opposite could also be the case.” Link.
• Tangentially related, Ben Casselman reports in the New York Times that a tightening labor market may be encouraging some employers to hire beyond the box—without legislative guidance. Link.

## A HA?

### A flurry of articles in December and January assess the state of artificial intelligence

From Erik Brynjolfsson et al, optimism about productivity growth:

“To be clear, we are optimistic about the ultimate productivity growth fueled by AI and complementary technologies. The real issue is that it takes time to implement changes in processes, skills and organizational structure to fully harness AI’s potential as a general-purpose technology (GPT). Previous GPTs include the steam engine, electricity, the internal combustion engine and computers.

“In other words, as important as specific applications of AI may be, the broader economic effects of AI, machine learning and associated new technologies stem from their characteristics as GPTs: They are pervasive, improved over time and able to spawn complementary innovations.”

Full post at The Hill here.

## MASS PRIVACY

This week, an Australian college student noticed how data from Strava, a fitness-tracking app, can be used to discover the locations of military bases. Many outlets covered the news and its implications, including Wired and the Guardian. In the New York Times, Zeynep Tufekci’s editorial was characteristically insightful:

“Data privacy is not like a consumer good, where you click ‘I accept’ and all is well. Data privacy is more like air quality or safe drinking water, a public good that cannot be effectively regulated by trusting in the wisdom of millions of individual choices. A more collective response is needed.”

Samson Esayas considers the collective nature of data privacy from a legal perspective:

"This article applies lessons from the concept of ‘emergent properties’ in systems thinking to data privacy law. This concept, rooted in the Aristotelian dictum ‘the whole is more than the sum of its parts’, where the ‘whole’ represents the ‘emergent property’, allows systems engineers to look beyond the properties of individual components of a system and understand the system as a single complex... Informed by the discussion about emergent property, the article calls for a holistic approach with enhanced responsibility for certain actors based on the totality of the processing activities and data aggregation practices."

• A Twitter note on Strava from Sean Brooks: “So who at Strava was supposed to foresee this? Whose job was it to prevent this? Answer is almost certainly no one…I’ve always hated the ‘data is the new oil’ metaphor, but here it seems disturbingly accurate. And ironically, organizations with something to hide (military, IC, corporate R&D) have the resource curse. They want to benefit from the extraction, but they also have the most to lose.” Link.
• We mentioned Glen Weyl last week as a noteworthy economist engaging with ethical issues (see Beatrice Cherrier’s Twitter thread). A speculative paper he co-wrote on "data as labor" imagines a world in which companies paid users for their data. (We find the framing "data as labor" slightly misleading—Weyl's larger point seems to be about data as assets—the product of labor.) Link. ht Chris Kanich for bringing the two threads together on Twitter.
• This Economist article also covers Weyl's paper: “Still, the paper contains essential insights which should frame discussion of data’s role in the economy. One concerns the imbalance of power in the market for data. That stems partly from concentration among big internet firms. But it is also because, though data may be extremely valuable in aggregate, an individual’s personal data typically are not.” Link.
• A potentially exciting aspect of the GDPR is the right to data portability: "A free portability of personal data from one controller to another can be a strong tool for data subjects in order to foster competition of digital services and interoperability of platforms and in order to enhance controllership of individuals on their own data." Link.

## AUTOMATIC PRECISION

### Translating randomized controlled trials into policy action

"A randomized experiment is performed, a statistically significant comparison is found, and then story time begins, and continues and continues—as if the rigor from the randomized experiment somehow suffuses through the entire analysis."

From a short paper by ANDREW GELMAN, who adds his analysis to the debate on the use of RCTs in policy development. Link.

The paper that Gelman is commenting on, by ANGUS DEATON and NANCY CARTWRIGHT, tackles misunderstandings and misuses of the form across disciplines:

"RCTs are both under- and over-sold. Oversold because extrapolating or generalizing RCT results requires a great deal of additional information that cannot come from RCTs; under-sold, because RCTs can serve many more purposes than predicting that results obtained in a trial population will hold elsewhere…

"The gold standard or 'truth' view does harm when it undermines the obligation of science to reconcile RCTs results with other evidence in a process of cumulative understanding."

## BASIC OPPORTUNITY

### Considerations on funding UBI in Britain

The RSA (Royal Society for the encouragement of Arts, Manufactures and Commerce) published a discussion paper on UBI. ANTHONY PAINTER outlines some key points here, including some thoughts on funding:

“To fund the ‘Universal Basic Opportunity Fund’ (UBOF), the Government would finance an endowment to cover the fund for 14 years from a public debt issue (at current low interest rates). This endowment would be invested to both fund asset growth and public benefit. The fund could be invested in housing, transport, energy and digital infrastructure and invested for high growth in global assets such as equity and real estate. This seems radical but actually, similar mechanisms have been established in Norway, Singapore and Alaska. In the latter case, Basic Income style dividends are paid to all Alaskans. Essentially, the UBOF is a low-interest mortgage to invest in infrastructure and human growth that brings forward the benefits of a sovereign wealth fund to the present rather than waiting for it to accumulate over time.”

Full paper is available here. And here is the longer section on “The technicalities of a Universal Basic Opportunity Fund,” including building and administering the fund. ht Lauren
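The endowment arithmetic can be sketched with invented figures (the RSA paper's own numbers differ): borrow a principal at a low rate, invest it at a higher expected return, and draw annual payments over the fund's 14-year horizon.

```python
# Back-of-envelope sketch of the UBOF mechanism, all numbers assumed
# for illustration: debt-financed endowment, invested returns, annual
# basic-opportunity payouts, and interest on the original issue.

def endowment_remaining(principal, borrow_rate, return_rate,
                        annual_payout, years):
    fund = principal
    for _ in range(years):
        fund *= 1 + return_rate          # investment growth
        fund -= annual_payout            # basic-opportunity payments
        fund -= principal * borrow_rate  # interest on the debt issue
    return fund

# e.g. borrow 200 (say, bn GBP) at 1.5%, earn 4%, pay out 15 per year:
remaining = endowment_remaining(200, 0.015, 0.04, 15, 14)
# a positive remainder means the assumed return covers payouts and
# interest over the horizon; at a lower return the sign flips
```

The sketch makes the paper's point concrete: the scheme's viability hinges on the spread between the borrowing rate and the realized return, which is exactly the bet sovereign wealth funds like Norway's make.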

• A new working paper on the Alaska Permanent Fund: "Overall, our results suggest that a universal and permanent cash transfer does not significantly decrease aggregate employment." Link.

## DEFERRED ACTION

### On the effects of DACA

Last week we linked to a paper that outlines the effects of DACA status on educational attainment and productivity:

"High school graduation rates increased by 15 percent while teenage births declined by 45 percent.… College attendance increased by 25 percent among women, suggesting that DACA raised aspirations for education above and beyond qualifying for legal status."

Given reader interest in that paper, we've compiled an overview, inspired by current events, of DACA-related studies across a range of domains.

• On the economic effects of legal status for DREAMers, including the modeled impact of the DREAM Act: “We estimate DACA increased GDP by almost 0.02% (about $3.5 billion), or $7,454 per legalized worker. Passing the DREAM Act would increase GDP by around 0.08% (or $15.2 billion), which amounts to an average of $15,371 for each legalized worker.” Link.
• The Cato Institute estimates the fiscal impact of the elimination of DACA, inclusive of projected productivity declines and enforcement costs: “The United States economy would be poorer by more than a quarter of a trillion dollars.” Link.
• A study finds DACA moved 50 to 75 thousand unauthorized immigrants into the labor force while increasing incomes for immigrants at the bottom of the income distribution. Using these estimates, the author contends that the (now defunct) DAPA, which targeted unauthorized parents of US citizens and lawful permanent residents (LPRs) for legalization, would move over 250 thousand unauthorized individuals into employment. Link. Another finds a 38% reduction in the likelihood of poverty for DACA-eligible immigrants. Link.
• As a complement to the above linked paper on education investment, more fine-grained results on education outcomes for DACA recipients: “the effect of DACA on educational investments depends on how easily colleges accommodate working students.” Link.
• On the mental health outcomes of children of DACA recipients. Link. On the health outcomes for DACA recipients versus their unqualified DREAMer counterparts. Link. On Medicaid use in mixed-status families, and the effects of deportation risk thereon. Link.
• Again from Cato, a report on the IRCA (alias "Reagan amnesty") reviews several studies of the economic effects of that 1986 law, which paired legalization for close to three million unauthorized immigrants with increased border security and employer verification. Alongside specific takeaways regarding wages and tax revenues for/from the population that gained legal status (increases in both), a larger claim emerges: legalization programs are most sensible "within the context of comprehensive immigration reform." Link. For more on the Reagan Amnesty and its legacy, see this report from the DHS and this post from the Migration Policy Institute.
• Vox’s Dara Lind, one of the few reliably accurate mainstream reporters on immigration law and policy, gives an overview of the DREAMer generation: “It’s the combination of settledness and the difficulty of getting legal that make DREAMers generationally unique in the history of US immigration policy.” Link. An idea discussed in that post—that increased border enforcement paradoxically kept migrants in the U.S.—is given depth by Princeton sociologist Douglas Massey here and here. For more on the relationship between immigration law, increased enforcement, and the growth of the unauthorized population, see this paper, this book, and this article.

## IVORY MECHANICS

### Regional parochialism and the production of knowledge in universities

"Scholarly understanding of how universities transform money and intellect into knowledge remains limited. At present we have only rudimentary measures of knowledge production's inputs: tuition and fees, government subsidies, philanthropic gifts, and the academic credentials of students and faculty. Output measures are equally coarse: counts of degrees conferred; dissertations, articles and books completed; patents secured; dollars returned on particular inventions. As for the black box of knowledge production in between: very little."

From the introduction to a new book on American social science research that aims to uncover the institutional pathways that produce (and favor) certain areas of research.

It continues:

"The rise of 'global' discourse in the US academy has coevolved with fundamental changes in academic patronage, university prestige systems, and the international political economy. America's great research institutions are now only partly servants of the US nation-state. This fact has very large implications for those who make their careers producing scholarly knowledge."

• A short interview with co-authors Mitchell L. Stevens and Cynthia Miller-Idriss. "Sociology department chairs said frankly that they deliberately steer graduate students away from international study because such projects on non-U.S. topics are less likely to have purchase on the tenure-line job market.… The tenure process is largely mediated by disciplines, and because those disciplines prioritize their own theoretical abstractions, contextual knowledge loses out." Link.
• A paper examines previous attempts to map the parochialism of a discipline, finding that “conventional measures based on nation-state affiliation capture only part of the spatial structures of inequality.” Employed therein: novel visualizations and mapping the social network structures of authorship and citation. Link. Relatedly, a September 2017 post by Samuel Moyn on parochialism in international law. Link.
• And a link we sent last fall, by Michael Kennedy, on interdisciplinarity and global knowledge cultures. Link.

## BRUTAL ATTACHMENTS

### A new report on the criminalization of debt

Last week, the ACLU published a report entitled "A Pound of Flesh: The Criminalization of Private Debt." It details the widespread use of the criminal justice system in the collection of debts—including medical, credit card, auto, education and household—in many cases resulting in de facto debtor's jails.

"In 44 states, judges—including district court civil judges, small-claims court judges, clerk-magistrates, and justices of the peace—are allowed to issue arrest warrants for failure to appear at post-judgment proceedings or for failure to provide information about finances. These warrants, usually called 'body attachments' or 'capias warrants,' are issued on the charge of contempt of court.

At the request of a collection company, a court can enter a judgment against a debtor, authorize a sheriff to seize a debtor's property, and order an employer to garnish the debtor's wages.… In most of the country, an unpaid car loan or a utility bill that's in arrears can result in incarceration."

• The report was given a lengthy write-up at The Intercept. "Federal law outlawed debt prisons in 1833, but lenders, landlords and even gyms and other businesses have found a way to resurrect the Dickensian practice. With the aid of private collection agencies, they file millions of lawsuits in state and local courts each year, winning 95 percent of the time." Link.
• A brief overview of the history of debtors' prisons, leading to the upward trend of collectors' leveraging criminal consequences against debtors. Link.
• A 2011 paper titled "Creditor's Contempt" describes the procedural and doctrinal mechanisms linking collectors and courts, and the "difficult balance between the state's and creditors' interest in rigorous judgment enforcement and debtors' interest in imposing reasonable limitations on the coerciveness of debt collection." Link. And documentation of a Duke Law conference covers the criminalization of debt alongside discussions of credit scoring and consumer bankruptcy. Link.
• The criminalization of private debt dovetails with the more widely discussed issue of criminal justice debt resulting from fines and fees, which also leads to de facto debtor incarceration. Often called "legal financial obligations" (LFOs), these revenue-raising fees are levied for everything from warrants and case processing to parole check-ins and electronic monitoring devices. For more, see this 2010 report from the Brennan Center for Justice, this 2015 investigation from NPR, and this 2016 reform guide from Harvard's Criminal Justice Policy Program. (Also from CJPP, an interactive criminal justice debt policy mapping tool. Link.)
• In a 2014 post on their now-defunct blog House of Debt, Atif Mian and Amir Sufi (authors of a book by the same name) on the history of debt forgiveness. Link. (For attempts at exploiting the imperfections of debt markets to cancel various kinds of debt, see the Rolling Jubilee project, and its relative the Debt Collective.)

## DEPENDENCE EXTERIOR

### State universities' reliance on out-of-state enrollment

Research on enrollment patterns finds that shrinking state funding leads admissions departments to seek out-of-state tuition revenue.

"Fixed effects panel models revealed a strong negative relationship between state appropriations and nonresident freshman enrollment. This negative relationship was stronger at research universities than master’s or baccalaureate institutions. These results provide empirical support for assertions by scholars that state disinvestment in public higher education compels public universities to behave like private universities by focusing on attracting paying customers.

We contribute to a growing body of evidence showing that university revenue-seeking behaviors are associated with a strong Matthew Effect. Cheslock and Gianneschi showed that only flagship research universities could generate substantial revenues from voluntary support. Therefore, increasing reliance on voluntary support increases the differences between ‘have’ and ‘have-not’ universities. Similarly, our results suggest that relying on nonresident enrollment growth to compensate for declines in state appropriations also increases the difference between the haves and the have-nots. Many public universities may desire tuition revenue from nonresident students. However, descriptive statistics suggest that only research universities are capable of generating substantial nonresident enrollment."

• An NBER working paper, from 2016, produces similar findings in the case of international student enrollment: "Our analysis focuses on the interaction between the type of university experience demanded by students from abroad and the supply-side of the U.S. market. For the period between 1996 and 2012, we estimate that a 10% reduction in state appropriations is associated with an increase in foreign enrollment of 12% at public research universities and about 17% at the most resource-intensive public universities." Link to the paper, link to a summary.
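For readers unfamiliar with the method both studies use, here is a minimal fixed-effects (within) estimator on synthetic data. The data-generating numbers are invented, not drawn from either paper; the point is only to show how demeaning within each university sweeps out its fixed baseline before estimating the appropriations effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n_univ, n_years = 100, 10
univ = np.repeat(np.arange(n_univ), n_years)  # panel: 100 universities x 10 years

# Synthetic data: each university has its own baseline (the "fixed effect"),
# and enrollment falls by 0.5 per unit of appropriations (true beta = -0.5).
univ_effect = rng.normal(0.0, 1.0, n_univ)
approp = rng.normal(10.0, 1.0, n_univ * n_years) + 0.5 * univ_effect[univ]
enroll = univ_effect[univ] - 0.5 * approp + rng.normal(0.0, 0.1, n_univ * n_years)

def demean_within(groups, x):
    """Subtract each group's mean: the 'within' transformation that sweeps
    out university fixed effects."""
    totals = np.zeros(groups.max() + 1)
    np.add.at(totals, groups, x)
    return x - (totals / np.bincount(groups))[groups]

y = demean_within(univ, enroll)
x = demean_within(univ, approp)
beta_hat = (x @ y) / (x @ x)  # within estimator; recovers roughly -0.5
```

Because appropriations here are correlated with each university's baseline, pooled OLS would be biased; the within transformation removes that confound, which is the appeal of fixed-effects panel models in this literature.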

## DISTINCT FUSION

### Tracking the convergence of terms across disciplines

In a new paper, CHRISTIAN VINCENOT looks at the process by which two synonymous concepts developed independently in separate disciplines, and how they were brought together.

“I analyzed research citations between the two communities devoted to ACS research, namely agent-based (ABM) and individual-based modelling (IBM). Both terms refer to the same approach, yet the former is preferred in engineering and social sciences, while the latter prevails in natural sciences. This situation provided a unique case study for grasping how a new concept evolves distinctly across scientific domains and how to foster convergence into a universal scientific approach. The present analysis based on novel hetero-citation metrics revealed the historical development of ABM and IBM, confirmed their past disjointedness, and detected their progressive merger. The separation between these synonymous disciplines had silently opposed the free flow of knowledge among ACS practitioners and thereby hindered the transfer of methodological advances and the emergence of general systems theories. A surprisingly small number of key publications sparked the ongoing fusion between ABM and IBM research.”

Link to a summary and context. Link to the abstract. ht Margarita
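Vincenot's exact hetero-citation metrics aren't reproduced here, but a crude version, the share of citations that cross the ABM/IBM community boundary, can be sketched. The papers and labels below are made up for illustration.

```python
def hetero_citation_rate(citations, field_of):
    """Share of citations that cross the ABM/IBM community boundary: a crude
    stand-in for Vincenot's hetero-citation metrics. Papers and labels are
    made up for illustration."""
    crossing = sum(1 for citing, cited in citations
                   if field_of[citing] != field_of[cited])
    return crossing / len(citations)

field_of = {"p1": "ABM", "p2": "ABM", "p3": "IBM", "p4": "IBM"}
citations = [("p1", "p2"), ("p1", "p3"), ("p3", "p4"), ("p4", "p2")]
rate = hetero_citation_rate(citations, field_of)  # 2 of 4 citations cross over
```

A rate near zero would indicate the disjointedness the paper describes; tracking the rate over publication years is one way to detect the "progressive merger" of the two communities.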

• Elsewhere in metaresearch, a new paper from James Evans’s Knowledge Lab examines influence by other means than citations: “Using a computational method known as topic modeling—invented by co-author David Blei of Columbia University—the model tracks ‘discursive influence,’ or recurring words and phrases through historical texts that measure how scholars actually talk about a field, instead of just their attributions. To determine a given paper’s influence, the researchers could statistically remove it from history and see how scientific discourse would have unfolded without its contribution.” Link to a summary. Link to the full paper.

## URBAN WEALTH FUNDS

### Social wealth funds on the municipal level

Matt Bruenig, Roger Farmer and Miles Kimball, and Sam Altman have all pushed for versions of a US sovereign wealth fund for social good. Their work focuses on funds at the national level. But another version of the idea comes from Dag Detter and Stefan Fölster, whose 2017 book advocates for “urban wealth funds,” funded via better management of government land and other nonfinancial assets. A few such funds have already had success.

Using Boston as an example of a city that could profit from an urban wealth fund, Detter writes for the World Economic Forum in February:

“…Like many other cities, Boston does not assess the market value of its economic assets. Unlocking the public value of poorly utilized real estate or monetizing its transportation and utility assets – smarter asset management, in other words – would yield a return that would enable it to more than double its infrastructure investments. Through smarter asset management, Boston could improve its public transport system and other services without needing to opt for privatization, raise taxes or cut spending elsewhere.

“What’s the catch? Actually, there isn’t one.”

Link to the full post. A 2017 Brookings report showed how Copenhagen successfully implemented urban wealth fund policy:

“This paper explores how the Copenhagen model can revitalize cities and finance large-scale infrastructure by increasing the commercial yield of publicly owned land and buildings without raising taxes. The approach deploys an innovative institutional vehicle—a publicly owned, privately run corporation—to achieve the high-level management and value appreciation of assets more commonly found in the private sector while retaining development profits for public use.”

• Another successful version of urban value capture: Hong Kong’s metro (the MTR). “Hong Kong is one of the world’s densest cities, and businesses depend on the metro to ferry customers from one side of the territory to another. As a result, the MTR strikes a bargain with shop owners: In exchange for transporting customers, the transit agency receives a cut of the mall’s profit, signs a co-ownership agreement, or accepts a percentage of property development fees. In many cases, the MTR owns the entire mall itself.” Link.
• Detter and Fölster’s previous book envisions better management of government assets on the national level.

## TARGET VARIABLE

### Big data's effect on the credit-scoring industry

A lengthy 2016 article from the Yale Journal of Law and Technology delves into credit scoring and then suggests a new legislative framework:

“Since 2008, lenders have only intensified their use of big-data profiling techniques. With increased use of smartphones, social media, and electronic means of payment, every consumer leaves behind a digital trail of data that companies—including lenders and credit scorers—are eagerly scooping up and analyzing as a means to better predict consumer behavior. The credit-scoring industry has experienced a recent explosion of start-ups that take an 'all data is credit data' approach that combines conventional credit information with thousands of data points mined from consumers' offline and online activities. Many companies also use complex algorithms to detect patterns and signals within a vast sea of information about consumers' daily lives. Forecasting credit risk on the basis of a consumer's retail preferences is just the tip of the iceberg; many alternative credit-assessment tools now claim to analyze everything from consumer browsing habits and social media activities to geolocation data.”

### Tallying the gains of migration

We recently linked to a paper by LANT PRITCHETT that challenged development orthodoxy by pointing out that the income gains for the subjects of best practice direct development interventions are about 40 times smaller than those from allowing the same people to work in a rich country like the United States.

That argument was built upon previous scholarship that attempted to put rigorous numbers to the obvious intuition that migration is beneficial for those drawn to wealthy countries by labor markets. From a 2016 paper by Pritchett and co-authors MICHAEL CLEMENS and CLAUDIO MONTENEGRO:

"We use migrant selection theory and evidence to place lower bounds on the ad valorem equivalent of labor mobility barriers to the United States, with unique nationally-representative microdata on both US immigrant workers and workers in their 42 home countries. The average price equivalent of migration barriers in this setting, for low-skill males, is greater than $13,700 per worker per year."

### April 14th, 2018

## Inventions that Changed the World

### R&D HISTORY | ML AND ECONOMICS | SCANLON ON INEQUALITY

## METARESEARCH AND DEVELOPMENT

### Changes in R&D funding and allocation

In a new report on workforce training and technological competitiveness, a task force led by former Commerce Secretary Penny Pritzker describes recent trends in research and development investment. Despite the fact that “total U.S. R&D funding reached an all-time high of nearly $500 billion in 2015, nearly three percent of U.S. gross domestic product,” the balance in funding has shifted dramatically to the private sector: “federal funding for R&D, which goes overwhelmingly to basic scientific research, has declined steadily and is now at the lowest level since the early 1950s.” One section of the report contains this striking chart:

Link to the full report. ht Will

• A deeper dive into the report's sourcing leads to a fascinating repository of data from the American Association for the Advancement of Science on the U.S. government's investments in research since the 1950s. Alongside the shift from majority federal to majority private R&D funding, the proportion of investments across different academic disciplines has also changed significantly. One table shows that the share of federal R&D funding for environmental science, engineering, and math/computer science has grown the most, from a combined 43.2% in 1970 to 54.8% in 2017. Meanwhile, funding for social science research has decreased the most. In 1970, the social sciences received 4.3% of the government's R&D funding; but in 2017, that share had fallen to 1.8%. Much more data on public sector R&D investments is available from the AAAS here.
• A March 2017 article in Science explains some of these shifts.
• A section of a 1995 report commissioned by the U.S. Senate Committee on Appropriations charts and contextualizes the explosion of federal research and development funding in the immediate aftermath of the Second World War.
• A study from the Brookings Institution finds that federal funding for research and development accounts for up to 2.8 percent of GDP in some of the largest metropolitan areas in America. The authors have fifty ideas for how municipalities can capture more of the economic impact generated by that R&D.
• Michael comments: "With the diminishing share (4.3% to 1.8% of total government research) of halved expenditures—and business not naturally inclined to conduct this kind of research (except in, as one would expect, instances of direct business application like surge pricing and Uber)—social science research appears to no longer have a natural home."

## NON-ZERO PRICE

### "Digital goods have created large gains in well-being that are missed by conventional measures of GDP and productivity"

A new paper by ERIK BRYNJOLFSSON et al. suggests using massive online choice experiments as a method to find the true impact of digital goods on well-being. The background section gives an example of the impact that is currently unmeasured:

“... [in] a number of sectors ... physical goods and services are being substituted with digital goods and services. An apropos example of such a transition good is an encyclopedia. Since the 2000s, people have increasingly flocked to Wikipedia to get information about a wide variety of topics updated in real time by volunteers. In 2012, Encyclopedia Britannica, which had been one of the most popular encyclopedias, ceased printing books after 244 years (Pepitone 2012). Wikipedia has over 60 times as many articles as Britannica had, and its accuracy has been found to be on par with Britannica (Giles 2005). Far more people use Wikipedia than ever used Britannica—demand and well-being have presumably increased substantially. But while the revenues from Britannica sales were counted in GDP statistics, Wikipedia has virtually no revenues and therefore doesn’t contribute anything to GDP other than a few minimal costs for running servers and related activities and some voluntary contributions to cover these costs…For such transition goods, consumer surplus increases as free access spurs demand, but revenue decreases as prices become zero. Hence GDP and consumer welfare actually move in opposite directions.”
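A stripped-down version of the choice-experiment logic: offer respondents money to forgo the good for a month, and read the median valuation off the acceptance curve. The response numbers below are invented, though the shape echoes the paper's Facebook finding.

```python
# Each respondent is offered a payment to give up the service for a month.
# Acceptance rates by offer price trace out a demand curve for a free good.
# The response numbers here are invented for illustration.
offers = [10, 25, 50, 100, 200]
accept_share = [0.10, 0.25, 0.50, 0.72, 0.90]

# Median valuation: the smallest offer at which at least half would accept.
median_wtp = next(p for p, a in zip(offers, accept_share) if a >= 0.5)
```

By this (invented) curve the median user values a month of access at about $50, a consumer-surplus statistic that, as the quote notes, never shows up in GDP because no money changes hands.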

One finding of note: “50% of the Facebook users in our sample would give up all access to Facebook for one month if we paid them about $50 or more.” Link to the paper on NBER here. A free draft is available here.

### April 28th, 2018

## The Inaccessible Rock

### BASIC INCOME IN CANADA | PERCEPTIONS OF INEQUALITY | HARDWARE AND AI

## ONTARIO FOR ALL

### Canada calculates expanding Ontario's guaranteed income to the entire nation

Canada’s Parliamentary Budget Office looks at the cost of expanding the Ontario pilot nationwide. Full report here. ht Lauren

ANDREW COYNE of the NATIONAL POST summarizes the findings (all figures are in Canadian dollars):

“The results, speculative as they are, are intriguing. The PBO puts the cost of a nationwide rollout of the Ontario program, guaranteeing every adult of working age a minimum of 16,989 CAD annually (24,027 CAD for couples), less 50 per cent of earned income—there’d also be a supplement of up to 6,000 CAD for those with a disability—at 76.0 billion CAD.

“Even that number, eye-watering as it is (the entire federal budget, for reference, is 312 billion CAD), is a long way from the 500 billion CAD estimates bandied about in some quarters.

“But that’s just the gross figure. The PBO estimates the cost of current federal support programs for people on low income (not counting children and the elderly, who already have their own guaranteed income programs) at $33 billion annually. Assuming a federal basic income replaced these leaves a net cost of 43 billion CAD. That’s still a lot—one seventh of current federal spending.”

• A few features of the Ontario model differentiate it from a prototypical (universal) basic income: 1) the Ontario pilot is not universal: only those “living on low income (under 34,000 per year if you’re single or under 48,000 per year if a couple)” are eligible, according to the Ontario government. 2) It functions as a negative income tax—50% of any earned income above a set threshold is subtracted from the benefit. 3) If implemented, this guaranteed basic income would "replace Ontario Works and the Ontario Disability Support Program." The PBO report uses similar parameters expanded to the federal level.
• In an article in Fast Company from February on the pilot, Ontario premier Kathleen Wynne explains the thinking behind using basic income to replace other social assistance: “‘I’ve met lots of people on social assistance who give a lot to the community, and I have often thought: why aren’t we paying you to do this?’ Wynne says. ‘I envision a world where we help people to do the work that they can do. By work, I mean involvement in human society. I hope that, as we go through this project, we will see how that will work better.’” Link.
• A HuffPost article imagines other ways the guaranteed income might cost even less over time: "This raw cost estimate is a very simplified snapshot. It just models what the government would have to spend to deliver the basic income, if nothing else changed. But with a basic income, plenty would change. First, we could expect a steep drop in the poverty rate. And that, in turn, could mean big savings for governments, because poverty is a major expense—particularly when it comes to health care." Link.
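The negative-income-tax design described above, a maximum benefit reduced by 50 cents per dollar of earned income, fits in a few lines. The parameters come from the PBO figures quoted earlier; the function name is ours.

```python
def guaranteed_income(earned, max_benefit=16989.0, clawback=0.50):
    """Annual benefit for a single adult under the Ontario pilot's design:
    a maximum benefit reduced by 50 cents per dollar of earned income
    (figures in CAD; the up-to-6,000 CAD disability supplement is omitted)."""
    return max(0.0, max_benefit - clawback * earned)

full = guaranteed_income(0)          # no earnings: the full 16,989 CAD
partial = guaranteed_income(20000)   # benefit shrinks as earnings rise
none = guaranteed_income(40000)      # fully phased out
break_even = 16989.0 / 0.50          # 33,978 CAD, near the ~34,000 cutoff
```

Note that the break-even earnings level implied by the clawback, 16,989 / 0.50 ≈ 33,978 CAD, sits right at the pilot's roughly 34,000 CAD eligibility threshold, so the benefit phases out exactly where eligibility ends.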

## POSTAL OPTION

### Renewed interest in an old model

Last week we linked to the widely publicized news that SENATOR KIRSTEN GILLIBRAND would be pushing legislation to reintroduce government-run commercial banking through the United States Postal Service.

In a 2014 article for the HARVARD LAW REVIEW, law professor and postal banking advocate MEHRSA BARADARAN describes the context that makes postal banking an appealing solution:

“Credit unions, S&Ls, and Morris Banks were alternatives to mainstream banks, but they were all supported and subsidized by the federal government through targeted regulation and deposit insurance protection.

Banking forms homogenized in the 1970s and 1980s, leaving little room for variation in institutional or regulatory design. Eventually, each of these institutions drifted from their initial mission of serving the poor and began to look more like commercial banks, even competing with them for ever-shrinking profit margins.

The result now is essentially two forms of banks: regulated mainstream banks that seek maximum profit for their shareholders by serving the needs of the wealthy and middle class, and unregulated fringe banks that seek maximum profits for their shareholders by serving the banking and credit needs of the poor. What is missing from the American banking landscape for the first time in almost a century is a government-sponsored bank whose main purpose is to meet the needs of the poor."

## Lay of the Land

### A new paper on the labor effects of cash transfers

SARAH BAIRD, DAVID MCKENZIE, and BERK OZLER of the WORLD BANK review a variety of cash transfer studies, both governmental and non-governmental, in low- and middle-income countries. Cash transfers aren’t shown to have the negative effects on work that some fear:

"The basic economic model of labor supply has a very clear prediction of what should be expected when an adult receives an unexpected cash windfall: they should work less and earn less. This intuition underlies concerns that many types of cash transfers, ranging from government benefits to migrant remittances, will undermine work ethics and make recipients lazy.

Overall, cash transfers that are made without an explicit employment focus (such as conditional and unconditional cash transfers and remittances) tend to result in little to no change in adult labor. The main exceptions are transfers to the elderly and some refugees, who reduce work. In contrast, transfers made for job search assistance or business start-up tend to increase adult labor supply and earnings, with the likely main channels being the alleviation of liquidity and risk constraints."

Link to the working paper. Table 2, which covers the channels through which cash impacts labor, is especially worth a read, as many studies on cash transfers don’t go into this level of detail.
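The "basic economic model of labor supply" in the quote's first paragraph is the textbook income effect. A Cobb-Douglas sketch (parameters invented for illustration, not the paper's empirical specification) shows how a cash windfall reduces optimal hours:

```python
def optimal_hours(wage, unearned_income, alpha=0.5, time_budget=100.0):
    """Hours worked by a consumer with Cobb-Douglas utility c**a * leisure**(1-a),
    facing wage w and unearned income y. Interior solution:
        h* = a*T - (1 - a) * y / w
    A stylized textbook model, not the paper's empirical specification."""
    hours = alpha * time_budget - (1 - alpha) * unearned_income / wage
    return max(0.0, hours)

baseline = optimal_hours(wage=10, unearned_income=0)      # 50.0 hours
with_cash = optimal_hours(wage=10, unearned_income=100)   # 45.0: the income effect
```

The review's empirical finding is that this predicted reduction mostly fails to materialize for ordinary cash transfers, which is what makes the theoretical prior worth stating explicitly.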

• A study on a large-scale unconditional cash transfer in Iran: "With the exception of youth, who have weak ties to the labor market, we find no evidence that cash transfers reduced labor supply, while service sector workers appear to have increased their hours of work, perhaps because some used transfers to expand their business." Link.
• Continuing the analysis of Haushofer and Shapiro’s controversial results from a cash transfer study in Kenya, Josh Rosenberg at GiveDirectly has, at the end of his overview, some thoughtful questions for continuing research: "Is our cost-effectiveness model using a reasonable framework for estimating recipients’ standard of living over time?… GiveDirectly provides large, one-time transfers whereas many government cash transfers provide smaller ongoing support to poor families. How should we apply new literature on other kinds of cash programs to our estimates of the effects of GiveDirectly?" Link.

## EACH POINT ON THE CHAIN

### Arguments for Value-Added Tax in the US, and using VAT to fund basic income

#### VAT

The Wall Street Journal lays out the basics: “Unlike a traditional sales tax, a VAT is a levy on consumption that taxes the value added to a product or service by businesses at each point in the chain of production.”
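The Journal's definition can be made concrete with a toy production chain; the rate and figures below are invented for illustration.

```python
def vat_by_stage(value_added, rate=0.20):
    """Each business in the chain remits tax only on the value it adds.
    Rate and figures are invented for illustration."""
    return [rate * v for v in value_added]

# Farmer adds 100, miller adds 50, baker adds 80, retailer adds 70;
# the final retail price is therefore 300.
stages = [100, 50, 80, 70]
total_vat = sum(vat_by_stage(stages))   # collected piecewise along the chain
sales_tax = 0.20 * sum(stages)          # collected once, at the register
```

The two totals are identical; the VAT's advantage is that collection is spread across the chain, with each firm's invoices documenting the next firm's liability.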

VATs are ubiquitous—except in the United States. According to a 2013 Hamilton Project report, “In recent years, the VAT has raised about 20 percent of the world’s tax revenue (Keen and Lockwood 2007). This experience suggests that the VAT can raise substantial revenue, is administrable, and is minimally harmful to economic growth.” The Tax Policy Center (TPC) notes that “every economically advanced nation except the United States” has a VAT. Countries adopted VATs over time: the EU first unified all its VATs in the 1970s, China adopted a VAT in 1984, Canada in 1991, and so on. Now the US is the only country in the OECD without one.

#### Why is there no VAT in the US?

"Back in 1988, Harvard economist Larry Summers [...] explained that the reason the U.S. doesn't have a VAT is because liberals think it's regressive and conservatives think it's a money machine. We'll get a VAT, he said, when they reverse their positions." (Forbes.)

A VAT could certainly be a revenue-raising powerhouse. According to the CBO, a 5% VAT could raise $2.7 trillion over 2017–2026 with a broad base, or $1.8 trillion with a narrow base, the largest of any revenue option in their 2016 report.

And as for concerns about regressivity, VAT proposals usually suggest adjusting other taxes or credits commensurately. A 2010 Tax Policy Center report considers a VAT in the context of lowering payroll or corporate taxes, and the Hamilton Project suggests adding tax credits or straightforward cash to low-income households.

VATs are appealing beyond their ability to raise a lot of money. They’re also easier to administer and document than other tax forms. A 2014 study by Dina Pomeranz examines the way the VAT is documented in Chile, and finds that "forms of taxation such as the VAT, which leave a stronger paper trail and thereby generate more information for the tax authority, provide an advantage for tax collection over other forms of taxation, such as a retail sales tax." Beyond that, Michael Graetz argues in the Wall Street Journal, "shifting taxes from production to consumption would stimulate jobs and investments and induce companies to base headquarters here rather than abroad." The Tax Foundation has advocated for a VAT to replace the Corporate Income Tax for similar reasons.

## SHOCK-LEVEL-ZERO

### Jobs guarantees vs. basic income

In a characteristically lengthy and thorough post, SCOTT ALEXANDER of SLATE STAR CODEX argues for a basic income over a jobs guarantee, in dialogue with a post by SIMON SARRIS.

Here's how Alexander addresses the claim that “studies of UBI haven’t been very good, so we can’t know if it works”:

“If we can’t 100% believe the results of small studies – and I agree that we can’t – our two options are to give up and never do anything that hasn’t already been done, or to occasionally take the leap towards larger studies. I think basic income is promising enough that we need to pursue the second. Sarris has already suggested he won’t trust anything that’s less than permanent and widespread, so let’s do an experiment that’s permanent and widespread.”

Link to the full piece on Slate Star.

For another angle on the same question, MARTIN RAVALLION recently published a paper at the CENTER FOR GLOBAL DEVELOPMENT looking at employment guarantees and income guarantees primarily in India:

“The paper has pointed to evidence for India suggesting that the country’s Employment Guarantee Schemes have been less cost effective in reducing current poverty through the earnings gains to workers than one would expect from even untargeted transfers, as in a UBI. This calculation could switch in favor of workfare schemes if they can produce assets of value (directly or indirectly) to poor people, though the evidence is mixed on this aspect of the schemes so far in India.”

Ravallion takes a nuanced view of arguments for the right to work and the right to income, as well as the constraints of implementation, and concludes, "The key point is that, in some settings, less effort at fine targeting may well prove to be more cost-effective in assuring economic freedom from material deprivation." Full study available here. ht Sidhya

## ARTIFICIAL INFERENCE

### Causal reasoning and machine learning

In a recent paper titled "The Seven Pillars of Causal Reasoning with Reflections on Machine Learning", JUDEA PEARL, professor of computer science at UCLA and author of Causality, writes:

“Current machine learning systems operate, almost exclusively, in a statistical or model-free mode, which entails severe theoretical limits on their power and performance. Such systems cannot reason about interventions and retrospection and, therefore, cannot serve as the basis for strong AI. To achieve human level intelligence, learning machines need the guidance of a model of reality, similar to the ones used in causal inference tasks. To demonstrate the essential role of such models, I will present a summary of seven tasks which are beyond reach of current machine learning systems and which have been accomplished using the tools of causal modeling."

The tasks include work on counterfactuals, and new approaches to handling incomplete data. Link to the paper. A vivid expression of the issue: "Unlike the rules of geometry, mechanics, optics or probabilities, the rules of cause and effect have been denied the benefits of mathematical analysis. To appreciate the extent of this denial, readers would be stunned to know that only a few decades ago scientists were unable to write down a mathematical equation for the obvious fact that 'mud does not cause rain.' Even today, only the top echelon of the scientific community can write such an equation and formally distinguish 'mud causes rain' from 'rain causes mud.'”
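Pearl's distinction between seeing and doing can be made concrete in a few lines of simulation. Below is a toy structural causal model (a hypothetical illustration, not from the paper) in which rain causes mud: conditioning on observed mud raises our belief that it rained, but intervening to create mud leaves the probability of rain unchanged.

```python
import random

random.seed(0)

def sample(do_mud=None):
    """One draw from a toy structural causal model: rain -> mud."""
    rain = random.random() < 0.3                    # exogenous weather
    if do_mud is None:
        mud = rain or (random.random() < 0.1)       # rain causes mud (plus other causes)
    else:
        mud = do_mud                                # intervention: set mud by fiat
    return rain, mud

N = 100_000
obs = [sample() for _ in range(N)]

# Observationally, mud is strong evidence of rain...
p_rain_given_mud = sum(r for r, m in obs if m) / sum(m for _, m in obs)

# ...but forcing mud into existence, do(mud=1), leaves rain untouched.
do = [sample(do_mud=True) for _ in range(N)]
p_rain_given_do_mud = sum(r for r, _ in do) / N

print(round(p_rain_given_mud, 2))     # ≈ 0.81: seeing mud raises belief in rain
print(round(p_rain_given_do_mud, 2))  # ≈ 0.30: making mud does not make rain
```

The asymmetry lives entirely in the model's structure (which variable is assigned from which), not in the joint distribution of the observed data, which is exactly why purely associational, "curve-fitting" learners cannot recover it.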

Pearl also has a new book out, co-authored by DANA MACKENZIE, in which he argues for the importance of determining cause and effect in the machine learning context. From an interview in Quanta magazine about his work and the new book:

"As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. If we want machines to reason about interventions ('What if we ban cigarettes?') and introspection ('What if I had finished high school?'), we must invoke causal models. Associations are not enough—and this is a mathematical fact, not opinion.

We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans."

## PAVEMENT, NURSING, MISSILES

### Algorithm Tips, a compilation of "potentially newsworthy algorithms" for journalists and researchers

DANIEL TRIELLI, JENNIFER STARK, and NICK DIAKOPOULOS of Northwestern’s Computational Journalism Lab created this searchable, non-comprehensive list of algorithms in use at the federal, state, and local levels. The “Methodology” page explains the data-scraping process, then the criteria for inclusion:

“We formulated questions to evaluate the potential newsworthiness of each algorithm:

• Can this algorithm have a negative impact if used inappropriately?
• Can this algorithm raise controversy if adopted?
• Is the application of this algorithm surprising?
• Does this algorithm privilege or harm a specific subset of people?
• Does the algorithm have the potential of affecting a large population or section of the economy?

If the answers for any of these questions were 'yes', the algorithm could be included on the list."

Link. The list includes a huge range of applications, from a Forest Service algorithmic ranking of invasive plants, to an intelligence project meant to discover “significant societal events” from public data—and pavement, nursing, and missiles too.

• Nick Diakopoulos also wrote a guide for journalists on investigating algorithms: “Auditing algorithms is not for the faint of heart. Information deficits limit an auditor’s ability to sometimes even know where to start, what to ask for, how to interpret results, and how to explain the patterns they’re seeing in an algorithm’s behavior. There is also the challenge of knowing and defining what’s expected of an algorithm, and how those expectations may vary across contexts.” Link.
• The guide is a chapter from the upcoming Data Journalism Handbook. One of the partner organizations behind the guide has a website of advice and stories from the data-reporting trenches, such as this on trying to figure out prescription drug deaths: “The FDA literally found three different ways to spell ASCII. This was a sign of future surprises.”

## PHANTOM PERSPECTIVE

### A new report from Fordham CLIP sheds light on the market for student list data from higher education institutions

From the paper authored by N. CAMERON RUSSELL, JOEL R. REIDENBERG, ELIZABETH MARTIN, and THOMAS NORTON of the FORDHAM CENTER ON LAW AND INFORMATION POLICY:

“Student lists are commercially available for purchase on the basis of ethnicity, affluence, religion, lifestyle, awkwardness, and even a perceived or predicted need for family planning services.

This information is being collected, marketed, and sold about individuals because they are students."

Drawing from publicly-available sources, public records requests from educational institutions, and marketing materials sent to high school students gathered over several years, the study paints an unsettling portrait of the murky market for student list data, and makes recommendations for regulatory response:

1. The commercial marketplace for student information should not be a subterranean market. Parents, students, and the general public should be able to reasonably know (i) the identities of student data brokers, (ii) what lists and selects they are selling, and (iii) where the data for student lists and selects derives. A model like the Fair Credit Reporting Act (FCRA) should apply to compilation, sale, and use of student data once outside of schools and FERPA protections. If data brokers are selling information on students based on stereotypes, this should be transparent and subject to parental and public scrutiny.
2. Brokers of student data should be required to follow reasonable procedures to assure maximum possible accuracy of student data. Parents and emancipated students should be able to gain access to their student data and correct inaccuracies. Student data brokers should be obligated to notify purchasers and other downstream users when previously-transferred data is proven inaccurate and these data recipients should be required to correct the inaccuracy.
3. Parents and emancipated students should be able to opt out of uses of student data for commercial purposes unrelated to education or military recruitment.
4. When surveys are administered to students through schools, data practices should be transparent, students and families should be informed as to any commercial purposes of surveys before they are administered, and there should be compliance with other obligations under the Protection of Pupil Rights Amendment (PPRA).

## VISIBLE CONSTRAINT

### Including protected variables can make algorithmic decision-making more fair

A recent paper co-authored by JON KLEINBERG, JENS LUDWIG, SENDHIL MULLAINATHAN, and ASHESH RAMBACHAN addresses algorithmic bias, countering the "large literature that tries to 'blind' the algorithm to race to avoid exacerbating existing unfairness in society":

"This perspective about how to promote algorithmic fairness, while intuitive, is misleading and in fact may do more harm than good. We develop a simple conceptual framework that models how a social planner who cares about equity should form predictions from data that may have potential racial biases. Our primary result is exceedingly simple, yet often overlooked: a preference for fairness should not change the choice of estimator. Equity preferences can change how the estimated prediction function is used (such as setting a different threshold for different groups) but the estimated prediction function itself should not change. Absent legal constraints, one should include variables such as gender and race for fairness reasons.

Our argument collects together and builds on existing insights to contribute to how we should think about algorithmic fairness.… We empirically illustrate this point for the case of using predictions of college success to make admissions decisions. Using nationally representative data on college students, we underline how the inclusion of a protected variable—race in our application—not only improves predicted GPAs of admitted students (efficiency), but also can improve outcomes such as the fraction of admitted students who are black (equity).

Across a wide range of estimation approaches, objective functions, and definitions of fairness, the strategy of blinding the algorithm to race inadvertently detracts from fairness."
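The authors' central point, that equity should change how a prediction is used rather than the prediction function itself, can be sketched schematically. The following is a hypothetical illustration (invented numbers, not the authors' data or code): a single prediction function that includes the group variable can undo a known bias in a test score, while equity preferences enter only through group-specific admission thresholds.

```python
import random

random.seed(1)

# Toy data: a test score predicts college GPA, but the score is biased
# downward for group B (e.g. unequal test prep). All numbers hypothetical.
def student(group):
    ability = random.gauss(3.0, 0.4)                                   # true GPA potential
    score = ability + (-0.3 if group == "B" else 0.0) + random.gauss(0, 0.2)
    return group, score, ability

students = [student(random.choice("AB")) for _ in range(10_000)]

# One prediction function for everyone -- but it *includes* the group
# variable, so it can correct the known bias instead of being blinded to it.
def predict_gpa(group, score):
    return score + (0.3 if group == "B" else 0.0)

# Equity preferences change how predictions are *used* (here, a per-group
# admission threshold), not the prediction function itself.
thresholds = {"A": 3.2, "B": 3.1}
admitted = [(g, predict_gpa(g, s), a) for g, s, a in students
            if predict_gpa(g, s) >= thresholds[g]]

frac_b = sum(g == "B" for g, _, _ in admitted) / len(admitted)
mean_gpa = sum(a for _, _, a in admitted) / len(admitted)
print(f"share of group B among admits: {frac_b:.2f}")
print(f"mean true GPA of admits: {mean_gpa:.2f}")
```

Blinding the predictor to group membership here would force it to treat biased scores at face value; keeping the variable improves both the accuracy of predicted GPAs and the share of group B among admits.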

## CLIMATE PREDICTION MARKET

A 2011 paper by SHI-LING HSU suggests a way of using a carbon tax to generate more accurate predictions of future climate conditions:

“The market for tradable permits to emit in the future is essentially a prediction market for climate outcomes. And yet, unlike prediction markets that have been operated or proposed thus far, this prediction market for climate outcomes operates against the backdrop of an actual and substantial tax liability. Whereas prediction markets have heretofore largely involved only recreational trading, this prediction market will operate against a regulatory backdrop and thus will provide much stronger incentives for traders to acquire and trade on information.”

A 2018 paper by GARY LUCAS and FELIX MORMANN suggests using similar predictions for climate policies beyond carbon taxes:

“We explain how both the federal and state governments could use prediction markets to help resolve high-profile controversies, such as how best to allocate subsidies to promote clean technology innovation and which policy strategy promises the greatest reduction in carbon emissions.”

• In 2016, a group of researchers modeled the way that information would converge in a climate prediction market, and found “market participation causes most traders to converge quickly toward believing the ‘true’ climate model, suggesting that a climate market could be useful for building public consensus.” Link.
• Tyler Cowen wrote about Hsu’s paper in 2011: “I think of such fine-tuning as a misguided approach. Is there such a good ‘basket’ measure of climate outcomes with sufficiently low short-term volatility?” Link.
• A 2017 paper by Michael Thicke makes a similar point about prediction models for science generally: “Prediction markets for science could be uninformative or deceptive because scientific predictions are often long-term, while prediction markets perform best for short-term questions.” Link.

## EVIDENCE PUZZLES

### The history and politics of RCTs

In a 2016 working paper, JUDITH GUERON recounts and evaluates the history of randomized controlled trials (RCTs) in the US, through her own experience in the development of welfare experiments through the MDRC and the HHS:

“To varying degrees, the proponents of welfare experiments at MDRC and HHS shared three mutually reinforcing goals. The first was to obtain reliable and—given the long and heated controversy about welfare reform—defensible evidence of what worked and, just as importantly, what did not. Over a pivotal ten years from 1975 to 1985, these individuals became convinced that high-quality RCTs were uniquely able to produce such evidence and that there was simply no adequate alternative. Thus, their first challenge was to demonstrate feasibility: that it was ethical, legal, and possible to implement this untried—and at first blush to some people immoral—approach in diverse conditions. The other two goals sprang from their reasons for seeking rigorous evidence. They were not motivated by an abstract interest in methodology or theory; they wanted to inform policy and make government more effective and efficient. As a result, they sought to make the body of studies useful, by assuring that it addressed the most significant questions about policy and practice, and to structure the research and communicate the findings in ways that would increase the potential that they might actually be used."

## DATA IS NONRIVAL

### Considerations on data sharing and data markets

CHARLES I. JONES and CHRISTOPHER TONETTI contribute to the “new but rapidly-growing field” known as the economics of data:

“We are particularly interested in how different property rights for data determine its use in the economy, and thus affect output, privacy, and consumer welfare. The starting point for our analysis is the observation that data is nonrival. That is, at a technological level, data is not depleted through use. Most goods in economics are rival: if a person consumes a kilogram of rice or an hour of an accountant’s time, some resource with a positive opportunity cost is used up. In contrast, existing data can be used by any number of firms or people simultaneously, without being diminished. Consider a collection of a million labeled images, the human genome, the U.S. Census, or the data generated by 10,000 cars driving 10,000 miles. Any number of firms, people, or machine learning algorithms can use this data simultaneously without reducing the amount of data available to anyone else. The key finding in our paper is that policies related to data have important economic consequences.”

After modeling a few different data-ownership possibilities, the authors conclude, “Our analysis suggests that giving the data property rights to consumers can lead to allocations that are close to optimal.” Link to the paper.

• Jones and Tonetti cite an influential 2015 paper by Alessandro Acquisti, Curtis R. Taylor, and Liad Wagman on “The Economics of Privacy”: “In digital economies, consumers' ability to make informed decisions about their privacy is severely hindered, because consumers are often in a position of imperfect or asymmetric information regarding when their data is collected, for what purposes, and with what consequences.” Link.
• For more on data populi, Ben Tarnoff has a general-interest overview in Logic Magazine, including mention of the data dividend and a comparison to the Alaska Permanent Fund. Tarnoff uses the oil industry as an analogy throughout: “In the oil industry, companies often sign ‘production sharing agreements’ (PSAs) with governments. The government hires the company as a contractor to explore, develop, and produce the oil, but retains ownership of the oil itself. The company bears the cost and risk of the venture, and in exchange receives a portion of the revenue. The rest goes to the government. Production sharing agreements are particularly useful for governments that don’t have the machinery or expertise to exploit a resource themselves.” Link.

## ALTERNATIVE ACTUARY

### History of risk assessment, and some proposed alternate methods

A 2002 paper by ERIC SILVER and LISA L. MILLER on actuarial risk assessment tools provides a history of statistical prediction in the criminal justice context, and issues cautions now central to contemporary conversations about algorithmic fairness:

"Much as automobile insurance policies determine risk levels based on the shared characteristics of drivers of similar age, sex, and driving history, actuarial risk assessment tools for predicting violence or recidivism use aggregate data to estimate the likelihood that certain strata of the population will commit a violent or criminal act.

To the extent that actuarial risk assessment helps reduce violence and recidivism, it does so not by altering offenders and the environments that produced them but by separating them from the perceived law-abiding populations. Actuarial risk assessment facilitates the development of policies that intervene in the lives of citizens with little or no narrative of purpose beyond incapacitation. The adoption of risk assessment tools may signal the abandonment of a centuries-long project of using rationality, science, and the state to improve upon the social and economic progress of individuals and society."

A more recent paper presented at FAT* in 2018 and co-authored by CHELSEA BARABAS, KARTHIK DINAKAR, JOICHI ITO, MADARS VIRZA, and JONATHAN ZITTRAIN makes several arguments reminiscent of Silver and Miller's work. They argue in favor of a causal inference framework for risk assessment, aimed at answering the question "what interventions work":

"We argue that a core ethical debate surrounding the use of regression in risk assessments is not simply one of bias or accuracy. Rather, it's one of purpose.… Data-driven tools provide an immense opportunity for us to pursue goals of fair punishment and future crime prevention. But this requires us to move away from merely tacking on intervenable variables to risk covariates for predictive models, and towards the use of empirically-grounded tools to help understand and respond to the underlying drivers of crime, both individually and systemically."

• In his 2007 book Against Prediction, lawyer and theorist Bernard Harcourt provided detailed accounts and critiques of the use of actuarial methods throughout the criminal legal system. In place of prediction, Harcourt proposes a conceptual and practical alternative: randomization. From a 2005 paper on the same topic: "Instead of embracing the actuarial turn in criminal law, we should rather celebrate the virtues of the random: randomization, it turns out, is the only way to achieve a carceral population that reflects the offending population. As a form of random sampling, randomization in policing has significant positive value: it reinforces the central moral intuition in the criminal law that similarly situated individuals should have the same likelihood of being apprehended if they offend—regardless of race, ethnicity, gender or class." Link to the paper. (And link to another paper of Harcourt's in the Federal Sentencing Reporter, "Risk as a Proxy for Race.")
• A recent paper by Megan Stevenson assesses risk assessment tools: "Despite extensive and heated rhetoric, there is virtually no evidence on how use of this 'evidence-based' tool affects key outcomes such as incarceration rates, crime, or racial disparities. The research discussing what 'should' happen as a result of risk assessment is hypothetical and largely ignores the complexities of implementation. This Article is one of the first studies to document the impacts of risk assessment in practice." Link.
• A compelling piece of esoterica cited in Harcourt's book: a doctoral thesis by Deborah Rachel Coen on the "probabilistic turn" in 19th century imperial Austria. Link.
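Harcourt's claim that randomization is the only way to make the apprehended population mirror the offending population can be checked with a small simulation. This is a hypothetical sketch (invented population and rates, not Harcourt's model): two groups offend at the same rate, but a biased predictive score flags one group more often, and we compare who gets caught under score-targeted versus randomized checks.

```python
import random

random.seed(2)

# Toy population of 10,000: two equal-size groups with the same 10%
# offending rate. Each entry is (group, offends?). All numbers hypothetical.
population = [("A" if i < 5000 else "B", random.random() < 0.10)
              for i in range(10_000)]

def share_b(caught):
    """Fraction of those apprehended who belong to group B."""
    return sum(g == "B" for g, _ in caught) / len(caught)

# Targeted: a biased score flags everyone in group B but only a fifth of
# group A, so only flagged offenders are caught.
targeted = [(g, o) for g, o in population
            if o and (g == "B" or random.random() < 0.2)]

# Randomized: a uniform 50% sample, so every offender faces the same
# probability of apprehension regardless of group.
randomized = [(g, o) for g, o in population if o and random.random() < 0.5]

print(f"group B share of those caught, targeted:   {share_b(targeted):.2f}")
print(f"group B share of those caught, randomized: {share_b(randomized):.2f}")
```

Under targeting, the carceral population reflects the flagging bias (group B is heavily overrepresented even though both groups offend identically); under random sampling, it converges to the offending population's actual composition.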

## BANKING AS ART

### On the history of economists in central banks

A recent paper by FRANÇOIS CLAVEAU and JÉRÉMIE DION applies quantitative methods to the historical study of central banks, demonstrating the transition of central banking from an "esoteric art" to a science, the growth of economics research within central banking institutions, and the corresponding rise in the dominance of central banks in the field of monetary economics. From the paper:

"We study one type of organization, central banks, and its changing relationship with economic science. Our results point unambiguously toward a growing dominance of central banks in the specialized field of monetary economics. Central banks have swelling research armies, they publish a growing share of the articles in specialized scholarly journals, and these articles tend to have more impact today than the articles produced outside central banks."

Link to the paper, which contains a vivid 1929 dialogue between Keynes and Sir Ernest Musgrave Harvey of the Bank of England, who asserts, "It is a dangerous thing to start giving reasons."

h/t to the always-excellent Beatrice Cherrier, who highlighted this work in a brief thread that includes some visualizations, among them one showing the publishing rate of central banking researchers.

• Via both Cherrier and the paper, a brief Economist article on the crucial significance of the central banking conference in Jackson Hole, hosted by the Federal Reserve Bank of Kansas City: "Davos for central bankers." Link. (And link to an official history of the conference.)
• Another paper co-authored by Claveau looks at the history of specialties in economics, using quantitative methods to map the importance of sets of ideas through time. "Among our results, especially noteworthy are (1) the clear-cut existence of ten families of specialties, (2) the disappearance in the late 1970s of a specialty focused on general economic theory, (3) the dispersal of the econometrics-centered specialty in the early 1990s and the ensuing importance of specific econometric methods for the identity of many specialties since the 1990s, and (4) the low level of specialization of individual economists throughout the period in contrast to physicists as early as the late 1960s." Link.

## ENERGY BOOM

Representative Carlos Curbelo (R-FL) introduced a carbon tax bill to the House last week (though it is “sure to fail” with the current government, it's unusual to see a carbon tax proposed by a Republican). According to Reuters, “Curbelo said the tax would generate $700 billion in revenue over a decade for infrastructure investments.” A deep analysis is available from the Center on Global Energy Policy at Columbia SIPA, which started up a Carbon Tax Initiative this year. For a broader look at carbon taxes, earlier this month the Columbia initiative published a significant four-part series on the “economic, energy, and environmental implications of federal carbon taxes” (press release here). The overview covers impacts on energy sources: “The effects of a carbon tax on prices are largest for energy produced by coal, followed by oil, then natural gas, due to the difference in carbon intensity of each fuel. Every additional dollar per ton of the carbon tax increases prices at the pump by slightly more than one cent per gallon for gasoline and slightly less than one cent per gallon for diesel.” And examines a few possible revenue uses: “How the carbon tax revenue is used is the major differentiating factor in distributional outcomes. A carbon tax policy can be progressive, regressive, or neither.” Overview here. Link to report on energy and environmental implications; link to report on distributional implications; link to report on implications for the economy and household welfare.

### August 11th, 2018

## Constellation

### AUTOMATION AND CAPITAL STOCK | METARESEARCH

## ALCHEMIST STOCK

### Automation, employment, and capital investment

At his blog STUMBLING AND MUMBLING, CHRIS DILLOW discusses recent reporting on rapid-automation fears in the United Kingdom: "'More than six million workers are worried their jobs could be replaced by machines over the next decade,' says the Guardian.
This raises a longstanding paradox – that, especially in the UK, the robot economy is much more discussed than observed. What I mean is that the last few years have seen pretty much the exact opposite of this. Employment has grown nicely whilst capital spending has been sluggish. The ONS says that 'annual growth in gross fixed capital formation has been slowing consistently since 2014.' And the OECD reports that the UK has one of the lowest usages of industrial robots in the western world. My chart, taken from the Bank of England and ONS, puts this into historic context. It shows that the gap between the growth of the non-dwellings capital stock and employment growth has been lower in recent years than at any time since 1945. The time to worry about machines taking people’s jobs was the 60s and 70s, not today.… If we looked only at the macro data, we’d fear that people are taking robots' jobs – not vice versa." Link to the post.

### August 18th, 2018

## House Fronts

### CASH TRANSFERS IN IRAN | PERCEPTIONS OF WELFARE

## COMPENSATION TREATMENT

### In Iran, cash transfers don't reduce labor supply

A new study examines the effects of Iran's changeover from energy subsidies to cash transfers. From the abstract, by DJAVAD SALEHI-ISFAHANI and MOHAMMED H. MOSTAFAVI-DEHZOOEI of the ECONOMIC RESEARCH FORUM: “This paper examines the impact of a national cash transfer program on labor supply in Iran. [...] We find no evidence that cash transfers reduced labor supply, in terms of hours worked or labor force participation. To the contrary, we find positive effects on the labor supply of women and self-employed men.” Most recent version here. The ungated working paper is available here.

• Another paper co-authored by Salehi-Isfahani further details the energy subsidies program and the role that cash transfers played in the reforms, with a specific focus on differences in take-up. Link.
• We’ve previously shared work from Damon Jones and Ioana Marinescu on the Alaska Permanent Fund dividend, which found that “a universal and permanent cash transfer does not significantly decrease aggregate employment.” Link.
• In other basic income news, petitions and protests are being organized in response to the cancellation of the Ontario pilot.

### August 25th, 2018

## A Ship So Big, A Bridge Cringes

### PLACE-BASED POLICIES | CODETERMINATION

## SPATIAL PARAMETERS

### On place-based and adaptable public policy

A recent report published by the BROOKINGS INSTITUTION discusses the potential effectiveness of place-based policies for strengthening the economies of depressed areas. Co-authored by Harvard’s BENJAMIN AUSTIN, EDWARD GLAESER, and LAWRENCE H. SUMMERS, the report emphasizes that region-specific, flexible policies may best foster a nation-wide equilibrium:

"Traditionally, economists have been skeptical towards [place-based] policies because of a conviction that relief is best targeted towards poor people not poor places, because incomes in poor areas were converging towards incomes in rich areas anyway, and because of fears that favoring one location would impoverish another. This paper argues for reconsidering place-based policies ... Indeed, even the most diehard opponent of place-based redistribution should see the logic of tailoring Federal policies to local labor market conditions. Standard social policy rules, like the Bailey (1976)—Chetty (2006) formula for unemployment insurance, depend on parameters that differ across space. If non-employment is particularly harmful in one location and particularly sensitive to public policies, then that diehard could still support a place-based revenue-neutral twist that reallocates funds from benefits that subsidize not working to benefits that encourage employment, without encouraging migration or raising housing prices.” Link to full paper.
The two main policy recommendations are an expanded EITC and subsidies for employment.

### September 1st, 2018

## The Braid

### ACADEMIC INFLUENCE ON POLICY | GLOBALIZATION WAVES

## CONSTRAINED POSSIBILITIES

### On the relationship between academic economics and public policy

In a recent working paper, ELIZABETH POPP BERMAN discusses the interconnected fields of academic economics and public policy. The paper conceptualizes the translation of certain academic ideas into public policy, clarifying the relation by describing different economic theories as having certain “affordances”:

"I borrow the concept of affordances, which has been used widely to describe how particular technologies provide the potential for some kinds of action but not others. I suggest that knowledge, like technologies, may afford some possibilities but not others. In particular, some theories produce knowledge that, simply because of the kind of knowledge it is, is useful and usable for particular actors in the policy field, while others, regardless of their truth or the accuracy with which they describe the world, do not.”

The paper also examines the gap between academic theory and policy application and includes takeaways for those interested in the role of academic experts in the process of policy creation:

"It is important to recognize the relative autonomy of the academic field from the policy field. While outside groups may support one school of thought or another, the development of academic disciplines is not determined solely by who has the most money, but also by stakes—including intellectual stakes—specific to the academic field. Similarly, while the academic and policy fields may be linked in ways that facilitate the transmission of people and ideas, the academic dominance of a particular approach does not translate to policy dominance, even given influential champions.” Link to full paper.
ht Michael

• This work builds off a 2014 paper Berman co-authored with David Hirschman, which also explores the degree to which economists, their tools and ideas, influence and create policy. Similar to the concept of “affordances”, Berman and Hirschman argue that “economic style can shape how policymakers approach problems, even if they ignore the specific recommendations of trained economists.” Link.
• A 2010 paper offers a new framework for properly assessing research impact, which includes quantifying conventional citation data as well as other qualitative outputs. Link.

## PRESCIENT HEGEMON

### Branko Milanovic with a speculative paper on globalization from the turn of the millennium

Back in 1999, economist Branko Milanovic wrote a ("several times rejected") paper proposing three periods of globalization—the third being the present one—and the countervailing ideologies that sprang up to contest the first two. From the paper:

“We are currently standing at the threshold of the Third Globalization: the Roman-led one of the 2nd-4th century, the British-led one of the 19th century, and the current one led by the United States. Each of them not only had a hegemon country but was associated with a specific ideology. However, in reaction to the dominant ideology and the effects of globalization (cultural domination, increasing awareness of economic inequities) an alternative ideology (in the first case, Christianity, in the second, Communism) sprang up. The alternative ideology uses the technological means supplied by the globalizers to subvert or attack the dominant ideological paradigm."

Read the full paper here.

• For more Milanovic on the politics of globalization, slides from a recent presentation of his on global inequality and its political consequences feature much of relevance to this vintage paper. Some of its broader questions: "Does global equality of opportunity matter? Is 'citizenship rent' morally acceptable?
What is the 'optimal' global income distribution? Can something 'good' (global middle class) be the result of something 'bad' (shrinking of national middle classes and rising income inequality)? Are we back to Mandeville?" Link.

### September 8th, 2018

## Very First Stone

### SOCIAL WEALTH FUNDS | CONGRESSIONAL TECH ASSESSMENT

## WEALTH BEGETS WEALTH

### Matt Bruenig's Social Wealth Fund proposal, and responses

Last week, MATT BRUENIG of the PEOPLE’S POLICY PROJECT published the most detailed version of a bold policy he’s been writing about for a long time: a Social Wealth Fund for America.

“If we want to get serious about reducing wealth and income inequality, then we have to get serious about breaking up this extreme concentration of wealth. A dividend-paying social wealth fund provides a natural solution to this problem. It reduces wealth inequality by moving wealth out of the hands of the rich who currently own it and into a collective fund that everyone in the country owns an equal part of. It then reduces income inequality by redirecting capital income away from the affluent and parceling it out as a universal basic dividend that goes out to everyone in society.”

The full report contains history on Sweden and Norway, information on the Alaska Permanent Fund, and then a sketch of the “American Solidarity Fund,” including funding and governance. The report stakes conceptual ground, and doesn’t offer new macroeconomic analysis. Link.

• Matt Yglesias summarizes and adds context in Vox, noting that Bruenig’s political angle is not imperative for the SWF idea. Other proposals for government stock ownership “invariably conceptualize the government as a silent partner in the enterprises it would partially own, trying to find a way for the government to reap the fiscal or economic benefits of government stock ownership without the socialistic implications of government officials running private firms.
Bruenig’s proposal is the opposite of that, a way to put real meat on the bones of ‘democratic socialism’ at a time when the phrase is gaining momentum as a slogan and an organizing project but also, to an extent, lacks clear definition.” Link.

• In an illustration of Yglesias’s point, Roger Farmer, who has suggested funding a SWF through countercyclical asset purchases, makes his ideological differences clear on Twitter: “You don’t have to be a Democratic Socialist to see value in a scheme whereby government borrows and invests in the stock market…unlike Matt Bruenig I do not see this as a tool for political control of corporate agendas and I have advocated that the Treasury purchase an index fund of non-voting shares.” Link.

• Mike Konczal criticizes the SWF idea along multiple lines: “We want shareholders to ruthlessly extract profits, but here for the public, yet we also want the viciousness of market relations subjected to the broader good. Approaching this as shareholders is probably the worst point of contact to try and fix this essential conflict.” Link.

• Bruenig responds to Konczal. Link.

• Peter Gowan expands on the idea in Jacobin: “Following [Rudolf] Meidner, I think it is worth considering multiple social wealth funds to be established along sectoral lines.” Link.

• Rachel Cohen gets responses from Peter Barnes and others in the Intercept. [Link](https://theintercept.com/2018/08/28/social-wealth-fund-united-stat

### September 15th, 2018

## The Marshes

### UNIVERSITIES AND LOCAL GROWTH | INTERFLUIDITY ON SOCIAL WEALTH FUNDS

## THE JANUS FACE

### The paradoxical outcomes of university-centered economic growth

A recent paper by RICHARD FLORIDA and RUBEN GAETANI takes an empirical look at the role of research universities in anchoring local economies and driving economic growth.
The paper examines the density of patenting and financial investment within the internal geographies of specific American cities, and argues that knowledge agglomeration exacerbates economic, occupational, and spatial segregation.

“Although universities certainly affect national levels of innovation and growth, research has shown that they tend to affect innovation and growth by operating through more localized channels. The roles played by Stanford University in the rise and economic performance of Silicon Valley and of MIT in the Boston-Cambridge ecosystem are cases in point. Universities constitute a rare, irreproducible asset at the local level. At the same time, it is increasingly clear that the knowledge-economy metros and so-called college towns suffer from relatively high levels of inequality and segregation.”

Set to be released in the October issue of MANAGERIAL & DECISION ECONOMICS, the paper presents a nuanced exploration of agglomeration economies and complicates the use of universities as levers for economic revitalization, job creation, and shared prosperity. Link to the working paper.

• As spotlighted in a November newsletter, Lyman Stone discusses national problems with the role of the US higher education system: “The problems we face are: (1) the regional returns to higher education are too localized, (2) the price of higher education is bid up very high, (3) the traditional entrepreneurial player, state governments, is financially strained or unwilling, (4) private entrance is systematically suppressed by unavoidable market features.” Link.

• At CityLab, Richard Florida examined venture-capital-backed start-ups and found they disproportionately cluster in metropolitan regions with high-performing universities. Link.

• For a deep dive into the role universities play in economic and spatial development, see Margaret O’Mara’s book on Cold War era “Cities of Knowledge.” Link.
### September 22nd, 2018

## Red Wall

### AI LABOR CHAIN | BIOETHICS

## MATERIAL UNDERSTANDING

### The full resource stack needed for Amazon's Echo to "turn on the lights"

In a new project, KATE CRAWFORD and VLADAN JOLER present an "anatomical case study" of the human labor, data, and planetary resources necessary for the functioning of an Amazon Echo. A 21-part essay accompanies an anatomical map (pictured above), making the case for the importance of understanding the complex resource networks that make up the "technical infrastructures" threaded through daily life:

"At this moment in the 21st century, we see a new form of extractivism that is well underway: one that reaches into the furthest corners of the biosphere and the deepest layers of human cognitive and affective being. Many of the assumptions about human life made by machine learning systems are narrow, normative and laden with error. Yet they are inscribing and building those assumptions into a new world, and will increasingly play a role in how opportunities, wealth, and knowledge are distributed.

The stack that is required to interact with an Amazon Echo goes well beyond the multi-layered 'technical stack' of data modeling, hardware, servers and networks. The full stack reaches much further into capital, labor and nature, and demands an enormous amount of each. Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch."

Link to the full essay and map.
• More on the nuanced ethical dilemmas of digital technology: "Instead of being passive victims of (digital) technology, we create technology and the material, conceptual, or ethical environments, possibilities, or affordances for its production or use; this makes us also responsible for the space of possibilities that we create." Link.

• As shared in our April newsletter, Tim Hwang discusses how hardware influences the progress and development of AI. Link.

### September 29th, 2018

## Catastrophe Theory

### FALSE MIDDLE CLASS | ETHICAL DATA-SHARING

## MIDDLE WAGE

### Questioning the great transition into a "global middle class"

Economist STEVE KNAUSS, in a new paper published by the CANADIAN JOURNAL OF DEVELOPMENT STUDIES, examines the "myth" of the global middle class and questions whether the $2/day measurement tells us anything substantive about poverty and inequality around the world.

"On the defensive in recent years, advocates of globalization have taken to highlighting achievements in developing countries, where globalization has supposedly pulled the majority out of poverty and catapulted them into the swelling "global middle class" remaking our world. This article provides a critical look at this interpretation. Carefully reviewing the global income distribution data behind such claims, it presents original calculations that generate new stylized facts for the globalization era.

The global income distribution approach does potentially have much to offer in terms of revealing the complexity of these changes, but in order to do so, greater attention and resources should be devoted to deepening our knowledge of the socio-historical changes underpinning the new realities of class formation and how they relate to the observed changes in global incomes. Instead of, or in addition to, constructing groups according to income thresholds, or national/global based deciles, ventiles or percentiles, more research should start from the other end, identifying national and global groups based on similarities in class formation and then attempting to trace such trajectories through the global income distribution."

"The question is: does their new petty income from the informal sector compensate for their loss of rural land, livestock, etc? It is not clear that it does. Therefore, we cannot say that this is a straightforward narrative of 'progress'—at least not in all regions."

• Development economist Morten Jerven with a 2010 paper diving into the metrics question in the context of poverty in Africa: "The article therefore concludes that it is futile to use GDP estimates to prove a link between income today and existence of pro-growth institutions in the past, and recommends a searching reconsideration of the almost exclusive use of GDP as a measure of relative development." Link.

## HARD CAPS

### Economic growth vs. natural resources

A recent Foreign Policy op-ed by JASON HICKEL examines “green growth,” a policy program that calls for the absolute decoupling of GDP from the total use of natural resources. Hickel synthesizes three studies and argues that even in high-efficiency scenarios, continued economic growth makes it impossible to keep the use of natural resources (fossil fuels, minerals, livestock, forests, and so on) within sustainable limits.

“Study after study shows the same thing. Scientists are beginning to realize that there are physical limits to how efficiently we can use resources. Sure, we might be able to produce cars and iPhones and skyscrapers more efficiently, but we can’t produce them out of thin air. We might shift the economy to services such as education and yoga, but even universities and workout studios require material inputs. Once we reach the limits of efficiency, pursuing any degree of economic growth drives resource use back up.”
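
The arithmetic behind this claim can be sketched with a toy simulation (my own illustration, not from the op-ed; the growth rates and efficiency floor are assumed values): absolute resource use is GDP times resource intensity, so use keeps rising whenever GDP grows faster than intensity falls, and once efficiency gains hit a physical floor, use grows at the full GDP rate.

```python
# Toy model of "relative decoupling": GDP grows 3%/yr while resource
# intensity (resource use per unit of GDP) falls 2%/yr, down to an
# assumed physical efficiency floor. All numbers are illustrative.
gdp, intensity = 100.0, 1.0
floor = 0.5  # hypothetical limit on how efficient production can get

use_path = []
for year in range(50):
    gdp *= 1.03                              # economy grows 3% a year
    intensity = max(floor, intensity * 0.98) # efficiency gains, capped
    use_path.append(gdp * intensity)         # absolute resource use

# Because 1.03 * 0.98 > 1, resource use rises every year even before
# the floor binds; afterward it tracks GDP growth exactly.
print(f"year 1 use: {use_path[0]:.1f}, year 50 use: {use_path[-1]:.1f}")
```

Absolute decoupling would require intensity to fall at least as fast as GDP grows, indefinitely, which is the scenario the studies Hickel cites find implausible.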

The op-ed sparked debate about the state of capitalism in the current climate crisis, most notably in a Bloomberg op-ed by NOAH SMITH, who claims that Hickel is a member of “a small but vocal group of environmentalists telling us that growth is no longer possible—that unless growth ends, climate change and other environmental impacts will destroy civilization.” Though Smith’s op-ed doesn’t directly engage with many of Hickel’s points, his general position prompted a clarifying (and heated) response from Hickel:

“Noah is concerned that if we were to stop global growth, poor countries would be ‘stuck’ at their present level of poverty. But I have never said that poor countries shouldn’t grow—nor has anyone in this field of study (which Noah would know had he read any of the relevant literature). I have simply said that we can’t continue with aggregate global growth.

...
While poor countries may need some GDP growth, that should never—for any nation, rich or poor—be the objective as such. The objective should be to improve human well-being: better health, better education, better housing, happiness, etc. The strategy should be to target these things directly. To the extent that achieving these goals entails some growth, so be it. But that’s quite different from saying that GDP needs to grow forever.”

• From a study on the limits of green growth: “GDP cannot be decoupled from growth in material and energy use. It is therefore misleading to develop growth-oriented policy around the expectation that decoupling is possible. GDP is increasingly seen as a poor proxy for societal wellbeing. Society can sustainably improve wellbeing, including the wellbeing of its natural assets, but only by discarding GDP growth as the goal in favor of more comprehensive measures of societal wellbeing.” Link.
• In a recent article, Juan Moreno-Cruz, Katharine L. Ricke, and Gernot Wagner discuss ways to approach the climate crisis and argue that “mitigation (the reduction of carbon dioxide and other greenhouse gas emissions at the source) is the only prudent response.” Link.

## CLAIMS THAT CAN'T BE TESTED

### What policy lessons can we derive from UBI experiments?

Political philosopher KARL WIDERQUIST of Georgetown has published a 92-page book examining historical and current basic income pilots, the difficulties of extrapolating from policy research to policy, and “the practical impossibility of testing UBI.”

In his introduction, Widerquist notes that the challenges of translating research into policy stem not only from the science itself, but also from the audience’s moral preferences and judgments, which are particularly heightened in the basic income discourse:

“Except in the rare case where research definitively proves a policy fails to achieve its supporters’ goals, reasonable people can disagree whether the evidence indicates the policy works and should be introduced or whether that same evidence indicates the policy does not work and should be rejected. This problem greatly affects the UBI discussion because supporters and opponents tend to take very different moral positions. Many people, including many specialists, are less than fully aware of the extent to which their beliefs on policy issues are driven by empirical evidence about a policy’s effects or by controversial moral evaluation of those effects. For example, mainstream economic methodology incorporates a money-based version of utilitarianism. Non-money-based utilitarianism was the prevailing ethical framework when basic mainstream economic techniques were developed but it lost prominence decades ago.”

Widerquist also writes lucidly on how to communicate scientific caveats and takeaways. The full book is available here. ht Lauren, who comments: "It’s incredibly difficult to test every aspect of many, many policies (including most that are currently at full national scale). Testing a given welfare policy arguably only has to get decision makers to a point where it can be determined that the policy substantially helps those who need it and doesn’t hurt anyone as a result."

• Activist Stanislas Jourdan spoke at the European Parliament in September about a basic income for Europe. Video of the presentation is here; slides are here. On the financing question, Jourdan proposes VAT ("already the most harmonized tax at EU level, large and reliable tax base"), as well as a European Corporation Tax, carbon taxes, and "quantitative easing for the people."