DRT Metrics: New Paradigms in DRT and Shared MOD Metrics

Will Rodman
Vice President of Business Development, 5M Paratransit System
International Conference on Demand Responsive and Innovative Transportation Services
Baltimore, MD, April 15-17, 2019
Good afternoon, and thank you, Lindsey. So, most of you know me from my 40+ years of paratransit design, planning, and evaluation, and quite often, I would be asked to prepare a peer review as part of a paratransit service evaluation. I will tell you that my colleagues and I amassed our own databases for such evaluations because…
…the National Transit Database of DRT systems is not particularly useful.
Here we have two systems: one that provides just ADA paratransit service and one that provides a coordinated service. All else being equal, which service will likely have the higher productivity?
My guess is that the coordinated service would have the higher productivity, because there will likely be more many-to-few and many-to-one trips, and the trips will tend to be shorter.
And with a higher productivity, there should be a lower unit cost.
And yet, organizations “in the know” compare such systems all the time, relying on NTD data.
Pioneer Institute: Philadelphia vs. Boston example.
Here are two more systems. One with a 100% dedicated service and the other with a service mix of dedicated and non-dedicated service. With everything else equal, which will likely have the higher productivity and lower unit cost?
The second one. One of the reasons is that NDSP (non-dedicated service provider) labor will tend to be less expensive (no benefits, for openers), while driver labor costs with benefits have been shown to account for up to 70% of the operating cost structure of dedicated paratransit services. But whether or not a system uses NDSPs is not particularly evident from the NTD data.
Moreover, in cases where they do use NDSPs, the productivity is going to be inflated anyway, because the FTA has said it's okay to report NDSP revenue hours as just the live passenger time. The fewer the hours, the higher the productivity. Thus, comparing the productivity of a 100% dedicated service with a system using NDSPs is an apples-to-oranges comparison.
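A quick back-of-the-envelope sketch (with purely hypothetical numbers, not drawn from any real system) shows how the hour-reporting convention alone inflates the productivity metric:

```python
# Hypothetical illustration: two providers carry the same 100 trips,
# but report revenue vehicle hours (RVH) under different conventions.

def productivity(trips, revenue_hours):
    """Trips per revenue vehicle hour (trips/RVH)."""
    return trips / revenue_hours

trips = 100

# Dedicated provider: reports full in-service hours, including
# slack time between trips.
dedicated_hours = 50.0

# NDSP: per FTA guidance, may report only "live" passenger time,
# so fewer hours are reported for the same work.
ndsp_live_hours = 30.0

print(productivity(trips, dedicated_hours))  # 2.0 trips/RVH
print(productivity(trips, ndsp_live_hours))  # ~3.33 trips/RVH
```

Same trips, same service on the street, yet the NDSP appears two-thirds more productive purely because of how its hours are counted.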
Another apples-to-oranges comparison is between two systems with very different service mixes, such as an 85%/15% dedicated/non-dedicated mix and a 50%/50% mix. Which would you expect to have the higher productivity and lower unit cost between these two? Right, but this information is not in the NTD data either.
And here are three more comparisons:
The first compares two systems with very different average trip lengths: which do you think will have the higher productivity and lower unit cost? Now, in this case, NTD does provide some mileage clues, but you have to know to look for them, and because the ridership figures include PCAs and companions, you don't wind up with an accurate statistic for average miles per trip.
The second compares two systems serving areas with very different traffic congestion. Which will have the higher productivity and lower unit cost? Does the NTD data account for this? No.
The third compares two systems with different levels of demand from customers who use wheelchairs. Which will have the higher productivity and lower unit cost? Does the NTD data account for this? Again, no.
And here’s another comparison:
An ADA paratransit system where customers really do not have any options.
Vs.
An ADA paratransit system where the transit agency is also providing an alternative service, like a taxi-subsidy program, or even a microtransit service, which would be included in the DRT stats as well.
My guess would be that the latter system probably has a lower unit cost, never mind how the productivity is being calculated, but here again, this significant difference is not being tracked in the NTD data.
So, you get the idea. During my years at Multisystems and then at Nelson\Nygaard, I would have loved to have this kind of information readily available so that I could pick true peers when doing a peer evaluation. Because of these shortcomings, the only way to do it was to amass the data yourself, by surveying systems, asking the right questions, and maintaining your own database, which I did. But it used to drive me nuts when I would see somebody else's peer comparison that just used NTD data and ended up comparing apples and oranges.
So, what can we do about it? We can start to seriously push for more disaggregated and far more useful data to be collected.
So, I have finally gotten to the “so what” slide and for those who know me well, one slide won’t do it, so I have two. A Part 1 and a Part 2.
Some have suggested that we split off ADA paratransit from the other DRT services, and that alone would be helpful. Yes, it would, but that alone doesn't do it for me, because there are so many other non-ADA paratransit/DRT services offered by transit agencies.
So, my first suggestion, kind of as a point of departure, is to report DRT stats by mode/program:
Stats and costs for dedicated service, splitting out ADA and non-ADA paratransit trips
Ditto for non-dedicated service
Stats and costs for alternative services
Stats and costs for microtransit service
Many may think this would be an imposition, but virtually all software systems in use today can generate this breakout of stats and costs.
And my pet peeve, in case you hadn't noticed, is productivity. For all the reasons I laid out, trips per RVH, which all of us have used for years, is fine for trend analysis, but it is not particularly useful as a metric of comparison among peers unless you truly have similar peers, and even then, differences in trip length, traffic congestion, and the percentage of wheelchair trips can sink you.
So, the last one is easy: just ask for the percentage or number of trips that require a wheelchair-accessible vehicle.
But for the other two, I recommend two new productivity metrics which our software product 5M already tracks and which are also being used in Europe:
Direct miles per RVH, which helps correct for differences in average trip length; and
Direct travel time per RVH, which helps correct for differences in local traffic congestion.
And for these two metrics, paratransit software systems that use Google Maps or an equivalent in their dispatch functions can get 100% reporting for both, and in cases where you don't have that kind of system, you can go onto Google with a statistically valid sample and compute your own.
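As a sketch of how these two metrics fall out of trip-level data (the trip records and field names here are hypothetical, invented for illustration, not 5M's actual schema):

```python
# Hypothetical trip records: direct (shortest-path) miles and minutes
# for each completed trip, as Google Maps or an equivalent would return.
trips = [
    {"direct_miles": 6.2, "direct_minutes": 18.0},
    {"direct_miles": 3.8, "direct_minutes": 12.5},
    {"direct_miles": 9.1, "direct_minutes": 26.0},
]

revenue_hours = 2.5  # total RVH over the same reporting period

# Direct miles per RVH: corrects for differences in average trip length.
direct_miles_per_rvh = sum(t["direct_miles"] for t in trips) / revenue_hours

# Direct travel time per RVH: corrects for differences in local congestion
# (convert minutes to hours before dividing).
direct_time_per_rvh = sum(t["direct_minutes"] for t in trips) / 60 / revenue_hours

print(round(direct_miles_per_rvh, 2))  # 7.64 direct miles per RVH
print(round(direct_time_per_rvh, 2))   # 0.38 direct travel hours per RVH
```

Two systems with identical trips/RVH but very different trip lengths will separate cleanly on these two numbers, which is the whole point.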
Here are some examples of what I am talking about.
The top table splits out stats for the NDSPs, and you’ll notice the much higher productivity as previously explained.
The bottom table reports stats for five different providers in five different areas, each with a different average trip length, and you see that the trips per RVH are fairly similar, but then you go over to the direct miles per RVH and you are able to see who the better performers truly are.
So, what I want to do and what I hope happens, if we collectively push for this, is to reach an industry-wide consensus on more useful data to track for DRT NTD reporting and help the FTA put this new requirement in place.
And maybe the best way to reach that consensus is to fast-track a TCRP research effort. So, if anybody from the FTA or TRB is here, please think about this. It's important.
I have written about this in my blog, which is on our company website. It's called Will's Pub, as in Public House, where all are welcome. And there have already been some very thoughtful comments from some industry stalwarts in response to my blog. I didn't even have to beg them. But I invite you all to come on by the pub and have a pint. And if you think I'm on the right track, let me know, and if you have any similar or new ideas, let's hear 'em.
Thank you.