The Future of Smart Disclosure

  1. Some Context for Thinking About the Future of Smart Disclosure. Tim O'Reilly, O'Reilly Media. Smart Disclosure Summit, March 30, 2012
  2. “The skill of writing is to create a context in which other people can think.” -Edwin Schlossberg
  3. What technology trends tell us about where smart disclosure will ultimately take us
  4. Moore's Law: First 10 Years (chart: computing power doubling, January 1970 through May 1980, after Gordon Moore)
  5. Moore's Law with Gov Drag (chart: the same curve with a lagging "Gov" line trailing "Society," after Clay Johnson)
  6. Gov vs. Moore, 2011 (chart: the compounding gap between the two curves, 1970 through 2011)
  7. 
  8. Government data driving the mapping revolution  USGS and other survey maps  Street maps  Address databases  ...
  9. GPS: A 21st century platform launched in 1973  Massive investment for uncertain return  Policy decisions can have enormous impact  Marketplaces take time to develop, and go in unexpected directions
  10. Lesson 1 Government is a platform
  11. MapQuest - the counterexample
  12. Lesson 2 It’s still really early. Choice engines haven’t yet had their “Google Maps moment”
  13.  Siri
  14. Lesson 3 Seek out commercial partners, don’t just wait for them to come to you
  15.  Health datapalooza
  16. Lesson 4 Keep data formats simple
  17. Lesson 5 Open data policies matter, because private parties will try to hoard data and claim it as their own
  18. A device that knows where I am better than I do, a knowing assistant telling me where to go and how to get there.
  19. An Internet Operating System that Controls Access to Data An application that depends on cooperating cloud data services operating in real time: - Location - Search - Speech recognition - Live Traffic - Imagery
  20. Lesson 6 Real-time data will become the norm. Plan for that future!
  21. “Would you be willing to cross the street with information that was five minutes old?” -Jeff Jonas
  22. Returning to maps... It’s easy to take the blue dot for granted, but it’s a wonder of real-time data coordination
  23.  Combining data from multiple sources is critical – GPS – Cell tower triangulation – WiFi signals – But that’s only the beginning of the sensor revolution
  24.  Uses the accelerometer to note when you’re walking, running, driving, or stationary  Wakes up the location sensors every time you’re stationary for a while  Logs the location and the length of time you were there  Private, encrypted data store on your phone  A platform enabling private, high quality location and movement data for location and “quantified self” fitness apps  Completely automatic (except to correct locations if wrong) and “always on”
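The slide above describes, in effect, a duty-cycled logging loop: a cheap motion signal decides when it is worth paying the battery cost of a location fix. A minimal sketch of that idea, with an invented sample stream and a stand-in `locate` function in place of real motion and location APIs:

```python
def log_visits(samples, locate, min_still=3):
    """samples: list of (state, minute) pairs from a motion sensor.
    locate(minute): stand-in for waking the location sensors.
    Log a visit once the user has been still for min_still minutes."""
    visits, run_start = [], None
    for state, minute in samples:
        if state == "stationary":
            if run_start is None:
                run_start = minute
            if minute - run_start + 1 == min_still:
                # Only now do we pay the battery cost of a location fix.
                visits.append((locate(minute), run_start))
        else:
            run_start = None  # movement resets the stationary run
    return visits

# Invented sample data: walk, then seven minutes still, then drive off.
samples = (
    [("walking", m) for m in range(0, 3)]
    + [("stationary", m) for m in range(3, 10)]
    + [("driving", m) for m in range(10, 12)]
)
print(log_visits(samples, lambda m: "Cafe on Main St"))
# → [('Cafe on Main St', 3)]
```

The point of the design is the one the slide makes: the expensive sensors stay asleep until the cheap accelerometer says there is something worth recording.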
  25. Lesson 7 Getting privacy rules right is going to be a matter of thoughtful tradeoffs
  26. The Google Autonomous Vehicle
  27. 2005: Seven Miles in Seven Hours
  28. 2011: Hundreds of thousands of miles in ordinary traffic
  29. Artificial Intelligence “the science and engineering of making intelligent machines” -John McCarthy, 1956
  30. But it isn’t just better AI “We don’t have better algorithms. We just have more data.” - Peter Norvig, Chief Scientist, Google
  31. Human-Computer Symbiosis “The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” – Licklider, J.C.R., "Man-Computer Symbiosis," IRE Transactions on Human Factors in Electronics, vol. HFE-1, pp. 4-11, March 1960.
  32. Lesson 8: Look at intent and outcomes, not just acquisition of data
  33. It is precisely because of the overlap between computers and human activity that all this magic becomes possible
  34. Lesson 9: Beyond Smart Disclosure: Feedback Loops and “Algorithmic Regulation”
  35.  Credit card fraud detection as an example
  36.  Image: Google search for “Smart Disclosure.” If the ads here are good, the same feedback loop works for both search quality and advertising
  37. “Half the money I spend on advertising is wasted; the trouble is I don't know which half.” - John Wanamaker (1838-1922)
  38. “Only 1% of healthcare spend now goes to diagnosis. We need to shift from the idea that you do diagnosis at the start, followed by treatment, to a cycle of diagnosis, treatment, we explore what works.” -Pascale Witz, GE Medical Diagnostics
  39. For-profit colleges
  40. Lesson 10: Bad actors will always try to game the system.
  41. Lesson 11: The secret of algorithmic data systems is to focus on real time measurement of outcomes
  42. “The legitimate object of government is to do for the people what needs to be done, but which they cannot, by individual effort, do at all, or do so well, for themselves.” -Abraham Lincoln

Editor's Notes

  1.
  2.
  3.
  4. And the reason I'm looking to the future is because of Moore's Law. As you recall, this law, named after Intel co-founder Gordon Moore, predicts that computing power will double every two years. As you can see, that leads to accelerating increases in power. In a recent talk at Code for America, Clay Johnson pointed out
  5. that the slow pace of government action, and slow procurement processes, put government behind on the Moore's Law curve.
  6. Over time, this compounds, putting government technology further and further behind the private-sector curve. As a result, it behooves government to try to shoot further ahead of the target. And that's why I want to provide some context for thinking about the future.
  7. Now, we're all very excited about the potential to turn *this*
  8. into *this*: new services like Billshrink, which helps people compare credit cards or wireless plans,
  9. and to make "choice engines" that work like Kayak
  10. or OpenTable.
  11. The new insurance finder is a good example of a government site that does this.
  12.
  13.
  14.
  15. But I want to start somewhere more prosaic, with maps. Most of us remember when these things were on paper, right? Interestingly, it was open government data that drove the transition to Geographic Information Systems, and ultimately to the electronic maps and directions we enjoy today.
  16.
  17. There are a lot of lessons from GPS. Ronald Reagan was, in a sense, the father of Foursquare.
  18. But the really big lesson I want to take from GPS is that government, at its best, is a platform. It does things that are hard and big, and that enable the private sector. National highways, space travel, and satellites are good examples. All the innovation that has come from the private sector in the location arena was only possible because government built the platform. I believe data is the platform for the 21st century.
  19. Remember when online mapping services looked like this? This is MapQuest, circa 2005, just before the arrival of Google Maps.
  20. There comes a time when someone cracks the code and things really start to hum along.
  21. That's also something you see in Apple's Siri, which bills itself as a "decision engine." Humans give high-level direction; algorithms figure out the best answer and try to take you there. It's kind of a black-box choice engine.
  22. Google Maps was not only much more interactive, it integrated many other sources of data and turned itself into a data and mapping platform for other services.
  23. One of the most interesting additions to Google Maps was transit data: again, something else that came from government.
  24. Many people don't realize that transit data in Google Maps (and subsequently in other mapping services and smartphone apps) actually began with an initiative from the city of Portland's TriMet transit agency.
  25. TriMet didn't just release their data; they actively reached out to Microsoft and Google to partner on their new data idea. Google took them up on their proposal. Other services, and later other cities, joined in.
  26. This same kind of "developer outreach" characterized Todd Park's work on open data at HHS. Rather than just opening the data, he proactively sought out partners. The HHS open data initiative now features a thriving developer conference, hundreds of apps, and several funded startups. I know that's what you're also trying to do with smart disclosure.
  27. Another lesson from what TriMet and Google did with transit data was the development of a dirt-simple data format that was open and easy for other cities to copy, and for any application to read.
  28. It's simply a small collection of text files listing the agency, the location of stops, the routes they fall on, and the scheduled times for each bus or train route at each stop.
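The format described here became known as GTFS, the General Transit Feed Specification. A toy sketch of what consuming such a feed looks like; the file names (`stops.txt`, `stop_times.txt`) and column headers follow the real spec, but the rows are invented sample data:

```python
import csv
import io

# A toy GTFS-style feed: plain comma-separated text files.
# File layout follows the GTFS spec; the rows are invented.
stops_txt = """stop_id,stop_name,stop_lat,stop_lon
1001,Main St & 5th Ave,45.5231,-122.6765
1002,Main St & 7th Ave,45.5240,-122.6790
"""

stop_times_txt = """trip_id,arrival_time,departure_time,stop_id,stop_sequence
T1,08:00:00,08:00:30,1001,1
T1,08:04:00,08:04:30,1002,2
"""

def load(table_text):
    """Parse one GTFS text file into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(table_text)))

# Join the two files to print the schedule for trip T1 by stop name.
stops = {row["stop_id"]: row["stop_name"] for row in load(stops_txt)}
for row in load(stop_times_txt):
    print(f'{row["arrival_time"]}  {stops[row["stop_id"]]}')
```

The simplicity is the point: any city can publish these files from a spreadsheet, and any application with a CSV parser can read them.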
  29. But there’s another lesson in transit data. Back in 2009, there was a legal controversy in San Francisco when \nNextBus Information Systems sued a small iphone developer for creating an app based on the real time transit data\ncollected by the NextBus GPS system in the SF Muni buses. Nextbus lost the claim; Muni had made sure the\ncontract allowed for open re-use. But be on the lookout for vendors trying to lock down data paid for with\npublic money.\n
  30.
  31. Returning to the evolving saga of mapping data, let's consider how mobile phones are transforming mapping. A phone is ...
  32. More than that, a smartphone depends on what you might call ... This has been a key framing metaphor for my thinking for most of the past decade. I urge you to adopt that same frame, to understand how data is becoming a new operating system, a new platform, and to think about the appropriate role of government as part of that platform.
  33. This is the next lesson of the way mapping data is evolving on the mobile platform!
  34. Jeff Jonas of IBM did a commercial a couple of years ago that asked this provocative question.... It's becoming quite clear that real-time data is going to be the norm.
  35. And so, while I'm excited about smart disclosure applications like BillGuard, which warn me of suspicious transactions, I'll be even more excited as these systems drive smart alerts and give me more control: for example, warning me when I'm about to exceed my family budget with my next credit card purchase, not just looking for fraud. Of course, that assumes credit card companies would have your best interests at heart, which is why this kind of warning is more likely to come from third-party apps than from the card vendors themselves.
  36. Returning to maps, we see the role of real time in "the blue dot" that tells you where you are on your route.
  37. In order to keep track of location, you really need access to multiple data sources. In cities, for instance, tall buildings cut off the view of GPS satellites. Cell tower triangulation and mapping of known Wi-Fi signals provide redundancy and greater accuracy.
  38. I was introduced just the other day to a new location platform called PlaceMe, which uses the sensors in the phone to do even better real-time location detection, mapping your location to venues and addresses without any effort on your part.
  39.
  40. It's kind of eerie just how accurate it is.
  41. And of course, while this is a private app, not a social sharing app, the implications for privacy are enormous. We now carry around a sensor platform in our pocket, and it makes possible all kinds of new data services. And that leads me to Lesson 7...
  42. And that leads me to the latest revolution in mapping: the Google Autonomous Vehicle. I want to talk about this for several reasons. One of them is to remind you just how far the future of smart disclosure might take us. This used to be a map! Then we had smarter interfaces to show humans how to get where they are going. But ultimately, the data disappears into a device or service that just knows how to do the right thing.
  43. But there's another point I want to emphasize about the development of autonomous vehicles. You see, back in 2005, when DARPA issued a Grand Challenge for autonomous vehicles, the winner went seven miles in seven hours.
  44. Yet only six years later, Google has announced a vehicle that has gone ...
  45. Was it a huge advance in AI, akin to what we saw when IBM's Watson beat human champions at the game of Jeopardy?
  46. Peter Norvig says that the AI isn't any better. Google just has more data. What kind of data?
  47. It turns out that Google had human drivers drive all those streets in cars that were taking pictures and making very precise measurements of distances to everything. The autonomous vehicle is actually remembering the route that was driven by human drivers at some previous time. That "memory," as recorded by the car's electronic sensors, is stored in the cloud and helps guide the car. As Peter pointed out, "picking a traffic light out of the field of view of a video camera is a hard AI problem. Figuring out if it's red or green when you already know it's there is trivial."
  48. This is an example of what J.C.R. Licklider, the DARPA program manager who originally funded the work that brought us the Internet, wrote about in his 1960 paper "Man-Computer Symbiosis"....
  49. So when Google got "busted" for collecting Wi-Fi data, and policy makers didn't understand why the company might want to do that, except for nefarious purposes, it was the policy makers who weren't seeing far enough into the future.
  50. So here's a piece of advice to policy makers: we're increasingly going to need a privacy regime that doesn't focus on what data you collect or have, but on how you use it, and that regulates misuse, not possession.
  51. And it's also important to note that choice engines are increasingly algorithmic, operating as a kind of black box.
  52. I want to move on and talk a bit about where all this is taking us: towards systems that are algorithmically driven and therefore must be "algorithmically regulated." I'm told that "regulation" has become a dirty word in Washington, and that we should just talk about making markets work better. Well, I'm not going to back down. One of the things that makes markets work better is the right kind of regulation. Your car's carburetor or fuel injection system is a regulatory system. The autopilot of an airplane is a regulatory system, and the Google self-driving car is a regulatory system, using algorithms (i.e., rules) and feedback loops to keep on course.
  53.
  54. Credit card fraud detection is a great commercial example of algorithmic regulation. All kinds of data are mined and monitored to detect abnormal patterns. Government regulation needs to move in this same direction. This requires a new sense of what "regulation" means. It's not the articulation of fixed rules of behavior, which are then monitored by periodic inspection, but a set of rules (i.e., algorithms) that is constantly evolving in response to new data and new attacks, in order to achieve desired outcomes.
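In miniature, "detect abnormal patterns" means comparing a new event against this account's own history. A deliberately tiny sketch of that idea; the z-score rule and threshold are illustrative assumptions, not how any real issuer's models work:

```python
import statistics

def flag_suspicious(history, new_amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates sharply from this
    card's own history (a toy stand-in for real fraud models)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return new_amount != mean
    z = (new_amount - mean) / stdev  # how many spreads from normal?
    return z > z_threshold

past = [12.50, 40.00, 25.75, 18.20, 33.10]  # typical purchases
print(flag_suspicious(past, 29.99))   # in line with history → False
print(flag_suspicious(past, 2400.0))  # wildly out of pattern → True
```

The regulatory point is that nothing here is a fixed rule about a dollar amount; the definition of "abnormal" moves as the data moves.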
  55. I'm not an expert on credit card fraud detection systems, so I'm going to explain the concept more thoroughly by looking at another, similar system: the algorithmic regulation by which Google ensures search quality, and by which it seeks out the most relevant ads. A lot of people don't realize how this works. Essentially, Google "tests" search quality by sending out a set of sample queries to thousands of testers with a simple question: are these good results? If the answer is "no," they tweak the algorithm. They don't fix individual problems.
  56. The first thing to understand is that algorithmic regulation depends on feedback loops that manage for outcomes. Increasingly, technology is solving what we can call "the Wanamaker problem."
  57. When Google revolutionized the ad world by paying for clicks rather than page impressions, it moved from a model where you pay for some set of activities (we showed your ad 100,000 times) to one where you pay for outcomes (5,000 people clicked on it). There's a continuous measurement loop, and Google's ability to outperform the competition depends on a huge amount of data mining to predict what people are most likely to click on. Until recently, its competitors sold to the highest bidder, but Google realized that if you could predict the likelihood of a click, you could actually make more money...
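The insight in that last note can be written in one line of arithmetic: rank ads by bid times predicted click probability, not by bid alone. A sketch with invented numbers:

```python
# Rank ads by expected revenue per impression: bid × predicted
# click-through rate, rather than by raw bid. All numbers invented.
ads = [
    {"name": "A", "bid": 2.00, "predicted_ctr": 0.01},  # EV 0.02
    {"name": "B", "bid": 0.50, "predicted_ctr": 0.08},  # EV 0.04
    {"name": "C", "bid": 1.00, "predicted_ctr": 0.03},  # EV 0.03
]

def expected_value(ad):
    return ad["bid"] * ad["predicted_ctr"]

# A highest-bid auction would show A first; expected-value ordering
# shows B first, because B is far more likely to be clicked.
ranked = sorted(ads, key=expected_value, reverse=True)
print([ad["name"] for ad in ranked])  # → ['B', 'C', 'A']
```

This is why the prediction problem, and hence the data, is where the competitive advantage lives: better CTR estimates directly change which ad wins.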
  58. We're now seeing this same idea spread to other areas of the economy. For example, in healthcare, personalized medicine requires new kinds of diagnostic feedback loops.
  59. That's also one focus of the Affordable Care Act: to pay for outcomes, not for procedures.
  60. In the city of San Francisco, you're seeing something similar: all the parking meters are equipped with sensors, and pricing varies by time of day, and ultimately by demand.
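Demand-responsive parking of this kind is another feedback loop in miniature: measure occupancy, nudge the price, repeat. A minimal sketch; the target occupancy band and the step size here are illustrative assumptions, not San Francisco's actual parameters:

```python
def adjust_price(price, occupancy, low=0.60, high=0.80, step=0.25):
    """Nudge the hourly rate toward a target occupancy band:
    raise it when spaces are scarce, lower it when they sit empty.
    Band and step are invented; a floor of $0.25 keeps rates positive."""
    if occupancy > high:
        return round(price + step, 2)
    if occupancy < low:
        return round(max(price - step, 0.25), 2)
    return price

print(adjust_price(2.00, 0.95))  # block nearly full → 2.25
print(adjust_price(2.00, 0.40))  # block mostly empty → 1.75
print(adjust_price(2.00, 0.70))  # inside the band → 2.0
```

Run periodically against sensor data, a rule like this regulates toward an outcome (one open space per block) rather than enforcing a fixed price.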
  61. At first glance, the Education Department's new regulations on for-profit colleges' eligibility for federal student loans seem like a great attempt at algorithmic regulation, until you look at the details. Only 35% of students have to be able to repay their loans?
  62. We really have to watch out for bad actors lobbying the system.
  63. As a technologist, I was struck by the comparison with Google's "Panda" search algorithm update, which penalized content farms and other sites that were gaming the system to get higher search rankings. Imagine if Matt Cutts and Amit Singhal sat down with the content farms and agreed to water down the update so that only 35% of the results were useful, to protect the business model of the content farms?
  64. So get some cojones and don't be afraid of regulation.
  65.
  66. This means regulatory independence. The Fed is probably the best example.
  67. I want to return to Billshrink. Is it really frequent-flyer miles that make for the best credit card value? The real smart disclosure we need here is which of these companies are charging the most in fees, and which banks are clearing checks proactively in such a way as to generate overdraft fees. So be very pointed in figuring out what data needs to be disclosed to really serve the consumer.
  68. And you need to think hard about what data will really support those outcomes. It may not be data you have now. You have to be hungry for new data and new algorithms that give better results. Just like Google is. Just like hedge funds are. Just like the private sector.
  69. This shift requires new competencies of companies. The field has increasingly come to be called "data science": extracting meaning and services from data. And as you can see, the set of skills that makes up this job description is in high demand according to LinkedIn; they are literally going asymptotic.
  70. In closing, I want to return to the notion of government as a platform. When I first articulated that notion, I argued that government is, at bottom, a mechanism for collective action, a means for doing things that are best done together. So I was delighted recently to discover that Abraham Lincoln had said much the same thing 150 years ago. But this notion also suggests a level of restraint. The best government programs enable the private sector; they don't compete with it. I hope that smart disclosure follows this lead: that it enables, and, to use Richard Thaler's notion, *nudges* the market in the right direction to produce socially beneficial outcomes, but that it does so with a light hand. As the Chinese philosopher Lao Tzu said some 2,500 years ago, "When the best leader leads, the people say 'We did it ourselves.'"