
We Must Redraw the Map

My keynote at the 2018 New Profit Gathering of Leaders conference in Boston on May 17, 2018. I talk about the lessons from technology platforms, how they teach us what is wrong with our economy, and the possibilities of AI for creating better, fairer, more effective decisions about "who gets what and why" in the economy.



  1. We Must Redraw The Map! Tim O’Reilly @timoreilly oreilly.com wtfeconomy.com New Profit Gathering of Leaders May 26, 2018
  2. How is the economy changing? What are the implications for business? What does technology now make possible that was previously impossible? What work needs doing? Why aren’t we doing it? wtfeconomy.com
  3. “…47 percent of jobs are “at risk” of being automated in the next 20 years.” Carl Frey and Michael Osborne, Oxford University “The Future of Employment: How Susceptible Are Jobs to Computerisation?”
  4. Dealing with climate change Rebuilding our infrastructure Feeding the world Ending disease Resettling refugees Caring for each other Educating the next generation Enjoying the fruits of shared prosperity
  5. Gradually, then suddenly
  6. Gradually, then suddenly Artificial Intelligence and algorithmic systems are everywhere, in new kinds of partnerships with humans
  7. Gradually, then suddenly Large segments of the economy are governed not by free markets but by centrally managed platform monopolies
  8. We are all living and working inside a machine
  9. Does that machine have human interests at heart?
  10. The algorithms decide “who gets what – and why.” Markets are outcomes. A better designed marketplace can have better outcomes. The choices made by the marketplace designer have enormous consequences for the participants and for society. Are they the right choices?
  11. The algorithms do what we ask them to do
  12. But not necessarily what we want!
  13. The Equinix NY4 data center, where trillions of dollars change hands
  14. What is the objective function of our financial markets? “The Social Responsibility of Business Is to Increase Its Profits” Milton Friedman, 1970
  15. We have to let go of this map that is steering us wrong. In 1625, we thought California was an island
  16. The master algorithm asks for growth to go on forever. It should be doing a better job of solving for
  17. Divergence of productivity and real median family income in the US
  18. Fitness Landscapes The way in which genes contribute to the survival of an organism can be viewed as a landscape of peaks and valleys. Through a series of experiments, organisms evolve towards fitness peaks, adapted to a particular environment, or they die out. Image source: http://evolution.berkeley.edu/evolibrary/article/side_0_0/complexnovelties_02
  19. Technology also has a fitness landscape. In my career, I’ve watched a number of migrations to new peaks: Apple, Personal Computer, Big Data and AI, Smartphones
  20. Generosity takes us to the next peak. Tim Berners-Lee, 1990: The World Wide Web. Linus Torvalds, 1991: Linux. Big Data and AI
  21. The same dynamics play out at the national level. Inclusive economies prosper. Extractive economies falter. Why do we incentivize extractive behavior?
  22. “Doughnut Economics” Kate Raworth
  23. Oikonomia vs Chrematistike
  24. 1. We must rewrite the rules King George III George Washington
  25. Another view of “Who Gets What – and Why”: Profit = Revenue – Expenses. Profit = Revenue – (Cost of materials + cost of labor + cost of capital). Return to capital = Revenue – (Cost of materials + cost of labor). Shouldn’t the return be proportional to the contribution of all of the inputs?
  26. “The opportunity for AI is to help humans model and manage complex interacting systems.” Paul R. Cohen
  27. “Computational Sustainability is a new interdisciplinary research field, with the overarching goal of studying and providing solutions to computational problems for balancing environmental, economic, and societal needs for a sustainable future. Such problems are unique in scale, impact, complexity, and richness, often involving combinatorial decisions, in highly dynamic and uncertain environments, offering challenges but also opportunities for the advancement of the state-of-the-art of computer and information science. Work in Computational Sustainability integrates in a unique way various areas within computer science and applied mathematics, such as constraint reasoning, optimization, machine learning, and dynamical systems.” Carla Gomes
  28. “What good governance and the good society look like is now inextricably linked to an understanding of the digital.” Tom Steinberg, MySociety 2. Leaders must become digitally literate!
  29. How we apply these ideas at O’Reilly Media
  30. Government is a platform. Its policies shape who gets what and why. What lessons should we be taking from the success and failure of tech platforms? wtfeconomy.com
  31. $470B spent on government safety net programs. $42B in charitable contributions toward the safety net. Technology and government are the two most powerful ways to get to scale. At Code for America, we bring them together.
  32. People, Not Data
  33. GetCalFresh
  34. ClearMyRecord
  35. This is what technology wants “Prosperity in human societies is best understood as the accumulation of solutions to human problems. We won’t run out of work until we run out of problems.” Nick Hanauer
  36. What would it take for us to: Put people to work tackling the world’s greatest problems? Treat humans as assets, not liabilities? Create an economy based on caring and creativity, while machines focus on repetitive tasks? Apply on-demand marketplace models to healthcare, augmenting community health workers with telemedicine and AI? Give everyone access to knowledge on demand, whenever we need it? Have fresh approaches to public policy based on what is possible now, and by learning what works, rather than picking from set political menus?
  37. Dealing with climate change Rebuilding our infrastructure Feeding the world Ending disease Resettling refugees Caring for each other Educating the next generation Enjoying the fruits of shared prosperity
  38. “Biophilic work” Natasha Iskander
  39. Let the machines do as much of the work as they can. Let humans get on with the real work of the 21st century.

Editor’s Notes

  • My book WTF? is a reflection on many of the technological changes you’ve been hearing about this morning. It talks about what the great technology platforms have to tell us about the future of business and the economy.
    How is work changing?
    What are the implications for business? What does technology now make possible that was previously impossible? What work needs doing? Why aren’t we doing it?
  • The book starts out with the fearful projections that AI is going to automate more and more human work, leaving us all with nothing to do.

    Frey and Osborne’s projection that up to 47% of human tasks, including many white collar jobs, could be eliminated by automation within the next 20 years
    seems to have been taken as gospel.
  • There’s no work left for humans? Seriously. WTF?
  • There’s so much work to be done!

    Dealing with climate change Rebuilding our infrastructure Feeding the world Ending disease Resettling refugees Caring for each other Educating the next generation Enjoying the fruits of shared prosperity
  • My worry is very different. The world is changing and we, and our institutions, must adapt to the new world we are living in.

    There’s a great story in Ernest Hemingway’s The Sun Also Rises, in which the narrator asks a character named Mike how he went bankrupt. “Two ways,” he replied. “First gradually, then suddenly.” Technological change happens like that too, first gradually, then suddenly.

  • Gradually, then suddenly, artificial intelligence and algorithmic systems are everywhere, in new kinds of partnerships with humans
  • Gradually, then suddenly, large segments of the economy are governed not by free markets but by centrally managed platforms
  • These networks rule our lives more deeply than we think. We are all living and working inside a machine. It isn’t just this worker in a Google data center.

    Our modern systems are massive hybrid AIs. These AIs are not external to us. We are part of them. We are inside them. They shape what we think and how we act.

    When you look at a company like Google, you see that humans are working alongside automation in very new ways. Even in a company as driven by computer technology as Google, there are humans who keep things running. There are other humans who write code and AI models, and manage and train the algorithms of search, advertising, and the Google Brain. There are other humans – all of us - who contribute new knowledge and seek it out, reinforcing neural pathways by what we link to, and what we pass on.

    This has implications that we are only now starting to become aware of.

    https://www.google.com/about/datacenters/gallery/#/people/14
  • The big question is whether that machine has human interests at heart.
  • A simple example of the invisible hand of a free market information economy is the supermarket checkout line. Everyone can see how long the lines are, and how much stuff the other people have in their baskets, and as a result, they choose what they think is the fastest line with some success, and all the lines even out.


    Image: Getty Images
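    The checkout-line dynamic can be sketched as a toy simulation (lane count, basket sizes, and seed are all invented for illustration): each arriving customer sees every lane’s workload and joins the shortest one, and the lanes stay balanced with no central coordinator.

```python
import random

def simulate_checkout(num_lanes=4, num_customers=200, seed=42):
    """Each arriving customer sees every lane's total workload (the visible
    baskets) and joins the shortest lane -- no central coordinator."""
    rng = random.Random(seed)
    lanes = [0] * num_lanes          # total items queued per lane
    for _ in range(num_customers):
        items = rng.randint(1, 30)   # invented basket size
        shortest = min(range(num_lanes), key=lambda i: lanes[i])
        lanes[shortest] += items
    return lanes

lanes = simulate_checkout()
# The lanes end up nearly even: the gap can never exceed one basket.
print(lanes, "spread:", max(lanes) - min(lanes))
```

    The evening-out is a property of the decentralized rule itself, not of any scheduler: because each customer always joins the current minimum, no lane can get more than one basket ahead of the others.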
  • I’m not talking about some kind of killer robot. I’m talking about the economic machines that rule our society.

    One of the things that platforms teach us that has enormous impact for the design of policy and business is that markets are outcomes. In our economic policy, we assume a free market of rational actors making decisions with perfect information. Say that again in an age woken up by the problems of Facebook and Cambridge Analytica.

    It isn’t the free market that decides who gets what and why. It is the folks designing and managing the marketplace at these internet platforms. The choices made by the designers of those algorithms have enormous consequences for the participants and for society.

    The platform’s algorithms decide “who gets what and why” – the fundamental question outlined by Nobel prize-winning economist Al Roth in his book about marketplace design.

    (Roth got his Nobel prize in economics for exploring how to design better marketplaces. He worked on kidney transplant marketplaces, and he showed that if you can increase trust, you can create a “thicker marketplace,” in which it is easier to match up those who have something and those who need it.

    Hal Varian’s former protege Jonathan Hall, now chief economist at Uber, pointed me to this book, and said it was really shaping how he approached his job at Uber. That gave me a language for thinking and talking about platforms that I hadn’t had before. )


  • Uber and Lyft teach us a lot about the future world we are entering. The idea that humans are working inside the machine is no longer something that happens just in the digital realm, safely separated from “the real world.” Uber drivers and passengers are all part of a vast digital machine. And the algorithms were designed for too long to treat drivers as a disposable commodity. Growth in users is the Silicon Valley gospel, and so the algorithms were designed with low prices to attract more users, and for them to be picked up as quickly as possible. Too many drivers drove down wages, and new drivers had to constantly be recruited with wild incentives, which were easy to game.

    Doing a better job of balancing the value allocated to passengers vs the value allocated to drivers, and to the owners and investors of platform itself, is central to the success of Uber and Lyft.

    Does that sound at all like it might have a lesson for our wider economy?

    Uber and Lyft are now seeking a more sustainable path, where drivers are incented to stick around, and the algorithm makes a better allocation. Additional factors of impact on society, such as road congestion, are also starting to be taken into account. But there’s still a long way to go.
  • But there’s also the problem of unintended consequences. Facebook was designed to help people connect with their friends, as in this example. But in my book, I spend some time on Facebook and fake news as an illustration of how algorithmic systems can go wrong.
  • Facebook’s struggle with fake news is a great example of what AI researchers have warned about as “the runaway objective function.” The algorithms do exactly what we ask them to do.

    Facebook’s engineers are in a bit of the same situation as Mickey Mouse in Walt Disney’s retelling of Goethe’s story The Sorcerer’s Apprentice. Mickey borrows his master’s spellbook, and compels the broom to help him fetch water. Unfortunately, he doesn’t know how to stop the broom, and before long
  • He is desperately trying to find a way to stop the power he has unleashed. This is what Mark Zuckerberg and team look like right now. That’s a runaway objective function at work.

    Facebook told their systems to optimize for engagement – to show people more of what they liked, commented on, and shared, and content that people like them engaged with. Their idea was that this would lead to more human connection. It turned out instead to increase hyperpartisanship and to drive people apart, and now they are trying to stop it.

    “People like you” turned out to be a very powerful tool, one that got out of control, driving polarization and radicalization. Researcher Renee DiResta found that when she began researching the anti-vaxxer movement, the algorithms at YouTube and Facebook concluded she liked conspiracy theories, and before long, she was down the rabbit hole of “chem trails,” all the way to the Flat Earth. Is it any wonder that these systems became a recruiting tool for Islamic fundamentalists and a weapon for destabilizing American democracy?
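    A minimal sketch of a runaway objective function (the numbers, the “outrage” attribute, and the correlation are all invented; this is not Facebook’s actual ranking code): a ranker told only to maximize engagement ends up surfacing the divisive content that happens to correlate with it.

```python
import random

def rank_feed(posts, k=5):
    """The stated objective: show the k posts with the highest engagement."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)[:k]

rng = random.Random(0)
posts = []
for i in range(100):
    outrage = rng.random()  # hidden attribute nobody asked the ranker about
    posts.append({
        "id": i,
        "outrage": outrage,
        # assumed correlation: divisive posts draw more clicks and comments
        "engagement": 0.3 * rng.random() + 0.7 * outrage,
    })

feed = rank_feed(posts)
feed_outrage = sum(p["outrage"] for p in feed) / len(feed)
baseline_outrage = sum(p["outrage"] for p in posts) / len(posts)
# The feed is far more divisive than the pool it was drawn from,
# even though "outrage" never appears in the objective.
print(f"feed: {feed_outrage:.2f}  baseline: {baseline_outrage:.2f}")
```

    The algorithm does exactly what it was asked: maximize engagement. The divisiveness is not in the objective anywhere; it rides in on the correlation.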
  • But there’s one other place that our algorithms have gone wrong, and that’s in our financial markets. Those markets, just like Uber, Facebook, and Google, are vast, algorithmic marketplaces. And the algorithm has a runaway objective function.

    And that’s where we should be worrying about Skynet, that fabled AI gone wrong, hostile to humans.
  • What is the objective function of our financial markets? When, in 1970, Milton Friedman said that the social responsibility of business is to increase its profits, and when, a few years later, Michael Jensen began to preach the gospel of shareholder value maximization and the need to align executive compensation with rising stock prices, they didn’t mean to wreak devastation on the economy, but it’s time to recognize that they did.

    (Milton Friedman penned an op-ed in the New York Times arguing that the social responsibility of business was to increase its profits. Anything else was, in effect, taking money from its shareholders. Then in 1976, William Meckling and Michael Jensen wrote a paper outlining the reasoning behind aligning the interests of management with shareholders, which was eventually accomplished with executive pay via stock options. So called “shareholder value” thinking was soon taught in business schools, and that’s when the great divergence between productivity and wages began.)
  • One of the key ideas in the book is that our maps of the world are steering us wrong. In 1625, Henry Briggs published a map stolen from the Spanish by the Dutch, which showed California as an island. This map guided exploration for the next hundred years.

    Henry Briggs, 1625: “California sometymes supposed to be a part of ye westerne continent, but since by a Spanish Charte taken by ye Hollanders it is found to be a goodly land…” In 1705, a Jesuit priest, Eusebio Kino, led an overland expedition across the top of the Sea of Cortez, and showed that what came to be called Baja California was in fact a peninsula, not an island. My question is why it took an overland expedition, rather than just sailing up the Sea of Cortez!
  • One of the bad maps that we have in economics and business is that growth goes on forever. In Silicon Valley, there’s often talk of “exponential growth,” but it’s pretty clear that even Facebook is showing only linear growth. But even linear growth runs up against limits. Facebook is at 2 billion users, a third of the world’s population. Instead of telling these platforms they must grow at all costs, we should be asking them to do a better job of solving for who gets what and why.
  • Because this has been the result of our current bad map. Take a look at the divergence of productivity and real median family income! Despite the continuing growth of productivity, family incomes have stagnated, and as Raj Chetty’s research has shown, most children in developed countries can no longer expect to do better economically than their parents. Inequality has skyrocketed.

    What if, instead of having a tax code that incentivized companies to increase their profits at the expense of people, we had one that incentivized them to put people to work on solving the hardest problems?

    It’s rather disheartening when you hear a Goldman Sachs executive say “there’s no money to be made keeping people healthy.”


    Source http://stateofworkingamerica.org/charts/productivity-and-real-median-family-income-growth-1947-2009/ via https://en.wikipedia.org/wiki/Income_inequality_in_the_United_States
  • I think the idea from evolutionary biology, about fitness landscapes with peaks and valleys, is actually a better metaphor for how the future unfolds than the graph that goes always up and to the right.

    A fitness landscape is a way of visualizing how genes contribute to the survival of an organism and a species. External conditions can be viewed as a landscape of peaks and valleys. Through a series of experiments, organisms evolve towards fitness peaks, adapted to a particular environment, or they die out.

    Because of the hypercompetitive nature of Silicon Valley, and the speed with which new technologies are introduced, it is like a petri dish where you can watch evolution in action. It is also a great way to get perspective on decline and fall in the slower evolutionary landscape of nations.
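    The fitness-landscape metaphor can be made concrete with a toy hill-climber (the landscape shape, peak positions, and step size are all invented): greedy local adaptation reaches whichever peak is nearest, which is why incumbents camped on an old peak can’t reach the higher one.

```python
def fitness(x):
    """An invented 1-D landscape: a low peak at x=2 and a higher one at x=8."""
    return max(0.0, 1.0 - abs(x - 2.0), 3.0 - abs(x - 8.0))

def hill_climb(x, step=0.1, iters=200):
    """Greedy local adaptation: take a small step only if fitness improves."""
    for _ in range(iters):
        for dx in (step, -step):
            if fitness(x + dx) > fitness(x):
                x += dx
                break
    return x

stuck = hill_climb(1.5)  # starts near the low peak and stays on it
free = hill_climb(6.0)   # starts on the slope of the higher peak and climbs it
print(round(fitness(stuck), 2), round(fitness(free), 2))
```

    The climber that begins near the low peak tops out at fitness 1.0 and can never cross the valley; only a start within reach of the higher peak attains fitness 3.0. That is the sense in which it’s easier to get to the new peak from the valley than from the old summit.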
  • In my career, I’ve watched a number of migrations to new peaks, and I’d like to share with you some observations about what happened, and why. And then we’ll talk about some lessons for digitalization of the overall economy.

    When a new wave of technology hits, a new company almost always becomes dominant. The dominant company of one technology wave sometimes manages to survive, but it loses its privileged position as the technology marketplace migrates to a new peak. The path to the top of each new peak requires new competencies – a new fitness function – and the old competency actually holds back the previously dominant company.
  • One of the things that I’ve learned is that the surest way to drive entrepreneurs to seek the fitness peak of a new technology and a new business model is for dominant players to take too much of the value for themselves. And just as in biology, it’s easier to get to the new peak from the valley. I watched this happen with Microsoft in the 1990s. The company had used its dominance over the operating system to lock out competitors. But the innovators just went elsewhere, where there was an opportunity for open innovation, and invented the future on the way up a new fitness peak. Tim Berners-Lee introduced the World Wide Web in 1990, and Linus Torvalds introduced Linux in 1991. Between the two of them, the paradigm changed. Software was now a commodity. Big data was the new source of competitive advantage, with Google at the latest peak in the fitness landscape.

    Net lesson: You lose when you try to capture too much of the value for yourself. And you lose again if you hang on to the old rules of business when faced with the resulting change in the fitness landscape.
  • This is also the conclusion of Oxford developmental economist Kate Raworth, author of the book Doughnut Economics, who talks about the job of economics not being how to keep growth going up and to the right, but instead about how to keep the world in “the doughnut,” the narrow band between human undershoot, in which much of humanity is left out of the bounty that is possible, and economic overshoot, where humanity takes an unsustainable proportion of the world’s resources, leading to instability.

    This model also applies to companies. You are actually managing a complex adaptive system. Your job is to stay in balance with your ecosystem and the business environment. And sometimes that means you need to adapt to new conditions.
    Image: Kate Raworth and Christian Guthier/The Lancet Planetary Health
  • Business is starting to recognize this as well. A recent study by the BCG Henderson Institute used machine learning to look at financial reports from tens of thousands of companies, and based on the language they used, put them into two buckets, characterized by Aristotle’s two branches of economics: oikonomia, the management of the household, and chrematistike, the pursuit of wealth. They discovered that those who pursued oikonomia – thinking of their stakeholders more broadly – actually outperformed those practicing chrematistike.
  • King George III, Portrait by Joaquin Zoffany, 1771
    George Washington, Portrait by Gilbert Stuart

    So what do we need to do? First, our world view must change. The world once believed in the divine right of kings, that some people were more equal than others, and naturally inherited wealth and power. After the American Revolution, King George III of England expected that George Washington would be crowned king in America. When Washington instead went back to his farm, King George is reported to have said, “He is the greatest man in the world.”

    Today, we believe in the divine right of capital. Those who have it may be generous and give back, but they accept their privilege. If we talk about “triple bottom line,” it is a pale thing, a shadow of what it ought to be, a true understanding of “who gets what and why,” using our newfound intelligence to make a more robust distribution of the fruits of machine productivity. Or as I put it in my book, echoing Joseph Stiglitz, we must rewrite the rules.
  • In my book, I made a throwaway reference to the idea that future economic historians might well look back on this period, when we believed in the divine right of capital, much as we look back on our ancestors who believed in the divine right of kings. A reader pointed me to a remarkable book from 2001 by Marjorie Kelly called The Divine Right of Capital.

    She makes the point that our values have become embedded in our profit and loss statements. If you look at the idealized P&L, you realize that profit is what’s left over after deducting the costs of inputs such as materials, labor, and capital. Yet somehow, we rewrote the P&L such that all the surplus belongs to only one of the inputs.

    Consider Apple today. It is a hugely profitable company. If their employees left, they’d be toast. If their suppliers left, they’d be toast. If their customers left, they’d be toast. If the rule of law failed, they’d be toast. If no one bought their stock, no one would care but stockholders. Yet Carl Icahn bought $6 billion in Apple stock and greenmailed the company into using its cash for $100 billion in stock buybacks. Apple could instead have lowered prices, paid more to the clerks in the Apple stores or to the workers in their Chinese factories (their corporate aristocracy is already well paid), and paid their taxes. They did none of these things, because of this crazy system that accepts the rise in stock price as the goal to be satisfied above all others.
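    Kelly’s point about the P&L can be put in simple arithmetic (the figures are hypothetical, and the pro-rata split is one illustrative alternative, not her specific proposal): the surplus left after paying every input is, by accounting convention, assigned entirely to capital.

```python
revenue, materials, labor = 100.0, 40.0, 35.0  # hypothetical firm, in $

# The idealized P&L: the surplus is what's left after paying the inputs...
surplus = revenue - (materials + labor)        # 25.0

# ...yet conventional accounting assigns that entire surplus to capital:
return_to_capital = surplus

# One illustrative alternative: share the surplus pro-rata with the
# other inputs' contributions (an assumption for the sake of example).
total_inputs = materials + labor
labor_share = surplus * labor / total_inputs          # ~11.67
materials_share = surplus * materials / total_inputs  # ~13.33
print(return_to_capital, round(labor_share, 2), round(materials_share, 2))
```

    Nothing in the arithmetic forces the surplus to go to any one input; that allocation is a rule we wrote, and could rewrite.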

  • Paul R. Cohen, a former DARPA program manager, now dean of a new school of information sciences at the University of Pittsburgh, put it beautifully at a recent meeting of the National Academies, where we were both speaking about the future of AI. He said, “The opportunity for AI is to help humans model and manage complex interacting systems.”

    On the positive side, these vast algorithmic tools let us do things that were previously impossible. Google gives searchable access to trillions of documents – it’s not quite “access to all the world’s information,” but it’s the closest thing we’ve seen. Facebook connects billions of people. Uber and Lyft have put millions of people to work providing on-demand transportation.

  • But you can also see the enormous power for algorithmic systems to do good in the new field that Cornell professor Carla Gomes calls Computational Sustainability. She’s working with the Brazilian national grid to build data models that determine which Amazon tributary to dam, solving simultaneously for the need for power generation, the fewest number of people that need to be displaced, and the impact on endangered species. In California, she’s helping the water management districts time the release of water into California rice fields to coordinate with the migrations of waterfowl. Both farmers and waterfowl benefit. The possibilities are enormous.

    We must use these tools to confront the challenges of the 21st century!
  • Second, leaders MUST become digitally literate. Tom Steinberg, the founder of UK non-profit MySociety, a pioneer in government innovation, once wrote, “You [can] no longer run a country properly if the elites don’t understand technology in the same way they grasp economics or ideology.”

    And more importantly, “what good governance and the good society look like is now inextricably linked to an understanding of the digital.”

    If you watched the recent congressional hearings with Mark Zuckerberg, you watched the consequences of that ignorance on display. But it is also on display in each of our own organizations. We are struggling to keep up with an understanding of fast-moving technology. But we must give technologists a seat at the leadership table.

    https://www.mysociety.org/2012/02/11/5-years-on-why-understanding-chris-lightfoot-matters-now-more-than-ever/
  • Some of you may be wondering how you apply these ideas if you’re not an Amazon or a Google or a Facebook. I want to give illustrations from my own efforts, both at my company, O’Reilly Media, and at Code for America, the non-profit started and run by my wife, Jennifer Pahlka.

    O’Reilly Media is a technology information provider. We’re a publisher and a conference producer, and we run an online learning platform called Safari. Our mission is “Changing the world by spreading the knowledge of innovators,” but our motto is “Create more value than you capture.”

    We launched Safari in 2001 as an ebook aggregation platform. We realized (network effects, duh) that if ebooks were ever to really take off, we had to bring together the entire industry around them, not just provide a service for ourselves and our own customers. (Remember that this was six years before the Kindle launched in 2007!) So we invited Pearson, the company that was our biggest competitor at the time, to join us in the business, which eventually came to offer tens of thousands of ebooks from hundreds of publishers.

    This was originally a joint venture with Pearson, with the other publishers as marketplace participants but not owners. We bought Pearson out in 2014, and it is now a wholly owned subsidiary of O’Reilly.
  • As people’s learning needs extended beyond books, we added video learning, including video courses with an interactive coding environment, and even synchronous online training with a live instructor. Whenever we launch new features, we’re careful to try to bring our partners along with us, rather than using those new features to take more of the pie for ourselves, as so many other platform companies seem to do. We realized early on that if you want to create a sustainable network marketplace, you have to balance the value allocated to the members of the network, not just to the core.

    Here’s a really interesting story about how deeply we think about balancing the value to all parts of the ecosystem. Last year, when we launched live online training as a new feature, O’Reilly Media President Laura Baldwin called an emergency meeting of our exec team. Her message: not all of our partners got on board as quickly as we would have liked, so, for example, O’Reilly started out offering 100 courses, while Pearson had only ten. The feature was a HUGE success, and so O’Reilly had taken a way larger share of the provider payments, and Pearson’s income had dropped by nearly half in that first month. The emergency was that we had to work harder to bring them on board, NOT to extend our lead and take more of the pie!
  • We’ve even extended our network thinking to turning our customers into content providers, with features like case studies, where companies share the lessons learned as they’ve implemented various technologies.

    Every company needs to think of itself as a networked marketplace, and to have its goal as managing the benefit to all of the participants.
  • The other area where I spend a lot of my time is with government. I believe that government has a lot to learn from the great technology platforms. Like them, its algorithms shape who gets what and why. But unlike the tech platforms, it is stuck in the past century.

    That’s why a big part of my time is spent with Code for America, the non-profit started by my wife Jennifer Pahlka.
  • We believe that technology and government are the two most important ways to get to scale. We put them together.

    We help to “debug” government programs so that they become more effective. To give you a sense of the leverage we’re trying to harness, consider that all charitable spending on the social safety net in America totals about $42 billion annually. The government spends nearly half a trillion.

    If we can make the $470B work even 10% better, we can drive impact the size of the entire charitable sector. There is enormous opportunity to improve how government services are administered and improve the outcomes we get for those dollars.
    Source:
    Gov spending on welfare (safety net) $470B: http://www.usgovernmentspending.com/year_spending_2014USrn_17rs2n_4041#usgs302
    Amount of charitable donations going to poverty alleviation (human services) $42B ($115B goes to religious charities, $55B to education):
    http://www.charitynavigator.org/index.cfm?bay=content.view&cpid=42#.Vxlw6BMrLBI
    470/42 = 11
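    The leverage arithmetic from the sources above, spelled out:

```python
government_safety_net = 470e9  # annual US government safety-net spending
charitable_safety_net = 42e9   # annual charitable giving toward the safety net

# Government spends roughly 11x what the charitable sector does.
leverage = government_safety_net / charitable_safety_net

# So making the government dollars work just 10% better matches the
# scale of the entire charitable safety-net sector.
ten_percent_gain = 0.10 * government_safety_net  # $47B
print(round(leverage, 1), ten_percent_gain / 1e9)
```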
  • We ask ourselves what would happen if government programs worked as well as the best consumer internet applications. How might they work differently if we put people at the center?
  • One of our signature projects started out with a redesign of the online application for SNAP (Food Stamps) in California. We started on this project in 2013 when the city and county of San Francisco asked us to help them understand why their participation rate in SNAP was so low. What we found was that the online application that they were sending people to through all their outreach efforts contained 200 questions, and took an hour to complete. It didn’t work on a mobile phone. Many of the people applying don’t own computers, so they have to use ones at the library…where the timeout is 30 minutes…and the application gave them no way to save their work. We replaced it with a mobile app that can be completed in seven minutes … and then began to follow the users via text messaging, to find other problems in the process, and give feedback to our government partners.

    We’ve now expanded the project to all 58 California counties, with funding from the state, and are now working on a pilot for an Integrated benefit application (SNAP, WIC, and Medicaid) in five other states, with charitable funding. (Our model is to build pilots with charitable funding, and then get to sustainability with government funding, once we’ve proved that our intervention works.)
  • What ties both of these efforts together is the idea of technology as a force for good, which is a theme throughout my book.

    As my friend Nick Hanauer put it, “Technology is the solution to human problems. We won’t run out of work till we run out of problems.” Are we done yet? Are we done yet?
  • There are graphs that we want to go up and to the right.

    I highly recommend to doubters Our World in Data, the site Max Roser runs at Oxford showing progress over the past centuries. Here’s one showing change in human life expectancy. You can see that for hundreds of years, life expectancy was flat, unless it went down due to wars and plague. Then in the mid 1800s, it suddenly goes up and to the right. And as you add new countries to this interactive visualization, you see that as they too join the industrial revolution, they follow the same path.
  • What would it take for us to

    Put people to work tackling the world’s greatest problems?
    Treat humans as assets, not liabilities?
    Create an economy based on caring and creativity, while machines focus on repetitive tasks?
    Apply on-demand marketplace models to healthcare, augmenting community health workers with telemedicine and AI?
    Give everyone access to knowledge on demand, whenever we need it?
    Have fresh approaches to public policy based on what is possible now, and by learning what works, rather than picking from set political menus?


  • There’s another important point: When you look at that list of unsolved problems that I showed earlier, they are all examples of what
  • Natasha Iskander of NYU calls “biophilic work,” the work of improving and remediating life. This will always be with us.

    We are a social species. Doing things for each other is in our nature. As long as there’s a reasonably fair distribution of the fruits of productivity, we will make an economy for each other. As Clay Christensen noted in his “Law of Conservation of Attractive Profits,” whenever one thing becomes a commodity, something else becomes valuable.

    But meanwhile, we have vast problems to solve, the result of the extractive economy of the past.
  • Why can we only see AI and the other WTF? technologies of the 21st century as engines of disruption and destruction, rather than as engines of creativity and prosperity? Why aren’t we talking about universal basic income as our birthright – the result of human ingenuity?

    Let the machines do as much of the work as they can. Let humans get on with the real work of the 21st century.

    Thank you very much.
  • This is just an author slide for when it gets uploaded. It won’t be part of the talk.
