Perspective of the Passengers: The Taxi Rider

By way of introduction, my name is Ghery, and I am a computational intelligence lawyer. Basically, it’s my job to evaluate the due diligence done by persons or corporations in creating an artificial intelligence designed to be used in robotics. I imagine this is the real reason I was asked to write an essay about transportation in the year 2043, because my commute is about as boring as it can get. I am aware that everyone else chosen to write for this series chose to do something exciting, such as take a train or a bus or ride a bike. Me, I go by car, and I am not ashamed.

I am in good company - just less than half of all commuters still get to work riding alone in cars. This is down significantly from the 90% or more of people who used to drive themselves to work alone before cars began driving themselves, and that is generally viewed as a good thing. But single-occupancy vehicles are not really the great evil they were once considered to be. For one thing...

Perhaps I had better tell the story of my commute properly; this may be more interesting to you than I thought.

I live in Holladay, which is a nicely upscale community in the southeast corner of the Salt Lake Valley (in the state of Utah, USA). I do pretty well for myself and have a nice detached house where I live with my family. We have all the usual suburban things: a back yard with a tree house, a swing set, and a trampoline; a patio with a place for barbecues and parties; and a shed for the lawnmower and yard things. We keep those there because at my house, the garage is just for cars. We own two of them.

Private Vehicles

My wife has a medium-size SUV to get her and the kids around to school and their activities. I say SUV, but this is 2043 so I don’t have to mention that it is a plug-in hybrid electric SUV. With energy so cheap, gas so expensive, and carbon taxes always increasing, we’d be losing money to buy a straight internal-combustion car. Most places she takes the car, it travels using its batteries alone; it will often go many months without needing to go to a gas station. When it does need gas we send it off to a station in self-driving mode. All cars are required to have a self-driving mode, and many roads (such as interstate highways) are complete self-driving zones, where no manual driving is allowed. When we first got the car my wife and I firmly agreed that we still wanted the car to have the optional driver’s package, which means that our SUV still has a steering wheel and pedals that fold out when activated, but my wife recently told me it’s been months since she’s used them. She says she likes to spend her time in the car facing the kids and talking with them, but I suspect she’s found a new show to watch that she enjoys more than driving.

I will never find anything on a screen that I like more than driving. Call me old-fashioned, call me a control freak, call me whatever you want, but it won’t change my mind. I like to be in control of my vehicle, and I love being behind the wheel of a car that responds well to my commands. I’m not so fanatical as to think that self-driving cars are the end of humanity like some of my buddies are – I never liked sitting in traffic, even when I was driving – but I do think that something is being lost from society now that a majority of adults no longer hold valid driver’s licenses.

That being said, I do not drive myself to work. I don’t even let my car drive me to work. My car is far too expensive for that. I’ve got a – well, I won’t bore you with the details, but you should know that I’ve got my dream sports car sitting in the garage, waiting for me whenever I need it. It too is a hybrid battery-electric, and it also has a self-driving mode, but I only use those features for getting to the recreational roads I drive on weekends. Utah is a fantastic state with many high-quality mountain roads and passes that are still signed and striped for manual driving. In fact, it’s become something of a tourist industry; many people from out of state will ride in their cars all the way to Utah so that they can drive over Guardsman Pass in Salt Lake County, the Alpine and Nebo Loops in Utah County, or Highway 12 near the national parks in South/Central Utah. Every other weekend, I’m out there driving with them.

Commuting

But during the morning commute, I don’t bother trying to drive. In fact, I couldn’t do it if I tried. Almost all of the busy roads in Salt Lake County have been reconfigured for autonomous vehicles only, meaning the road looks less like a street with lanes and stoplights and more like a flat, open river of asphalt with nothing to guide the stray human driver who accidentally ventures into robot-land. These roads are obviously illegal for humans to drive on, but not all roads have been designed to exclude us. Many connecting streets still get painted every few years and retain their stoplights and street signs so that, during the off-peak hours, residents of these neighborhoods can disengage their cars’ self-driving features and take control for themselves. These roads are slowly becoming something of a rarity as public opinion switches from considering driving a car a ‘lost art’ to an ‘expensive and dangerous hobby,’ and most of these roads require that the car be equipped with self-driving nanny software that is ready to take over if you try to steer your car off the road or break the speed limit by an unacceptable amount – so it’s hardly the same thing as hitting the open road. I find these types of streets more of a frustration than anything, so during the week I go completely cold-turkey. Either I am a driver and I am in complete control, or I am a passenger with a busy work schedule to stick to; I won’t be humored into playing these little half-and-half games. Besides, my car is too expensive for the insurance company to let me take it out more than a few times a week.
So, when it is time for me to go to work, I hail a taxi. I subscribe to a taxi service for a couple hundred dollars per month, and in return I get the guarantee that a taxi will pick me up within 5 minutes of my summoning it, every time. I mainly use it for work, but I also regularly use it to go out for lunches and other work-related trips. I rarely use it outside of work hours, but it’s nice knowing that I could use it any time I need to.

The Taxis

The 5-minute waits aren’t usually that bad. As I eat my breakfast at the table I simply tap the summon button on my screen before I get to the news or mail or whatever else I want to read over breakfast. The car shows up almost exactly when I finish eating, and it’s there waiting for me when I step out the front door – all warmed, cooled, scraped, washed, dried, or whatever. Being a car guy, I usually give it more of a look than most do, just to take stock of what I’m riding in. Nowadays all the taxis that come to Holladay are battery-electric, and hardly any come with a range-extender internal combustion engine anymore. I’ve always paid the premium to be sure I get the car all to myself, but sometimes the cars that show up at my doorstep are designed for more people. Sometimes these take the shape of two benches facing each other, and sometimes they are individual seats all facing towards the front. I see fewer and fewer of these types of cars these days as the taxi companies continue to optimize their fleets, which is kind of sad for me. I liked the extra room to stretch out my legs. I still ride in these larger cars fairly regularly, such as when I take a client out to lunch or to a meeting or something, but then I have to share the car with someone else. Instead, I’ve been getting more and more one-seater cars, which are cool on their own but often take a little getting used to.

For one thing, these cars usually have an odd number of wheels. Odd as in unusual, because I realize that two is an even number – don’t get all funny on me now. And three is an odd number no matter how you cut it. Once you make a car short and thin enough for just one person to fit inside, it’s amazing how many variations there are in wheel arrangements. Two wheels in front for steering and one in back for driving force. Two in back and one in front like a tricycle. Two in front that slide together into one as the car speeds up. And of course, two wheels like a motorcycle, only it is a car that is gyroscopically balanced so that it doesn’t tip over.

Lit Motors' C-1 prototype vehicle uses gyroscopes to balance the car on 2 wheels

Fewer wheels means less friction, meaning less energy needed to move. Fewer wheels also means less weight to lug around, also resulting in less energy needed to move. One-seater cars are approaching the ‘motorcycle’ point, where they are just as energy-efficient as a motorized bike. The internal combustion versions of these vehicles would have been getting 70, 80, or even 90 miles per gallon, but of course none of these types of cars comes with an internal combustion engine anymore.
The inside of a one-seater autonomous taxi is a lesson in kingly luxury. The cars arriving at my house have access to my account profile, and so they know my preference settings. The cabin temperature, seat temperature, lighting levels, and screen displays are already exactly as I like them the moment I enter, and since the car knows that I’m headed to work, I don’t even need to speak a word. The door closes after me the moment I put both feet on the floor, and the massage motors in the seat back begin their work as the car silently glides out of the driveway. No seatbelt; corporately owned autonomous car fleets are held to different safety standards than private vehicles, and seatbelts are viewed by the large corporations as annoyances that their customers shouldn’t need to deal with. They remind passengers that an accident could happen, which really takes them out of the moment and ruins the whole experience. Besides, autonomous cars crash so little, and when they do the crashes are so benign, that the airbags, the soft interior, and the lack of a windshield do a sufficient job of ensuring an acceptable level of safety.

Interior

Yes, you did just read that right – ‘the lack of a windshield.’ Autonomous taxis rarely get to travel anyplace where there isn’t any traffic, so all a windshield would do is display the rear ends of the preceding vehicles to passengers who really don’t care all that much for the view (autonomous cars have much shorter following distances than human drivers do, often leaving a gap of single-digit inches between cars, leaving very little room to see anything else from a windshield). It’s the same phenomenon as glass elevators: when the elevator is on the edge of a building and there is something to see, glass walls on the elevators are a really neat gimmick; but when the elevator is on the inside of a building, and the only thing to see is the inside of the elevator shaft, no one would ever bother to look out a window.

The taxis I ride in have curved immersive screens that curl up and around my head, as if I were in a miniature planetarium. Like glass elevators, it’s more of a gimmick than anything, since most programs don’t make use of the full wrap-around screen; most of the time I’m in the car the screens around me are all black while the movie I’m watching is contained on the one screen directly in front of me. I’m sort of old fashioned about the ways I watch my movies too, I guess. The only time I can think of when the curved immersive screens would be remotely useful is for gaming, which just isn’t my cup of tea. There’s just something perverse about playing a racing game while in a car that’s driving you around without your control. If that sort of thing interests you, why not just pull up the external camera feeds and watch the real thing?

Huh. I guess I really am more old-fashioned than I thought I was. This is starting to get depressing.

Anyway, there’s a practically infinite number of entertainment options available to passengers of autonomous taxis, and this endless on-demand entertainment was one of the largest contributing factors to the public’s lightning-fast shift in positions on autonomous-only roads. Within ten years we went from a majority of drivers saying that they would be alright sharing the roads with autonomous cars but would never approve any plan banning human drivers, to a supermajority of voters approving just such a ban in exchange for greatly relaxed car designs (designs that would allow for the immersive screens and the elimination of seat belts). Never mind talk of stranded assets, of gradual incremental improvements, of the slow but steady march of progress – no, there was an entertainment revolution at hand, and once that industry saw a way to exploit autonomous vehicles, there was never any contest about which rules would eventually prevail.

Here’s an example of just how prevalent the entertainment industry is in the transportation world these days.

Entertainment's influence

My commute to work takes me about 25 minutes to complete, which isn’t much different from what the same trip used to take when everybody drove their own cars. Yes, we do have much better traffic management systems that make traffic jams a very rare occurrence. My one-seater taxis are able to drive two or sometimes three abreast within their lane, and since they are half as long as some of the old SUVs used to be, it is possible to fit four or six cars into the roadspace previously occupied by just one car (lanes are a more nebulous concept on autonomous-only roads, and vary in width based on the demand from different types of vehicles). And, on top of all of that, speed limits have been abolished, meaning that the cars could travel at fantastical three-digit speeds if they felt so inclined. But instead, after all these innovations, my commute time hasn’t changed all that much.
Why? Because 25 minutes is the amount of time required to view a single episode of most shows. The taxi company I subscribe to goes to great lengths to ensure that the experience of its riders is a complete one, and so the taxis will do whatever it takes – going slower or faster, taking an alternative route, or even circling the block – to drag out the length of a journey just long enough for the passenger to finish their program. Nothing is more unsatisfying than having your program switch off just at the moment of suspense, or reveal, or whatever – and most people would rate the taxi ride lower because of it, subconsciously or not.

In my day and age, innovation has switched from exploiting physics in order to create new physical possibilities to exploiting human psychology to create new economic possibilities. It is what it is.
All of this is within reason, of course. Circling the block is almost universally outlawed, and where it isn’t outright illegal it is usually prohibitively expensive; roads are pay-by-the-foot-by-demand, meaning that roadspace that is in high demand costs more to use than if no one needed it. Alternative routes also have their problems, from erratic traffic demands and unpredictable trip times to surge pricing designed to discourage through-traffic on less busy streets, which makes sudden detours very expensive for the taxi company. And of course vehicle speed is not something that is determined by just the single vehicle anymore, but by the complete ‘swarm’ of vehicles using the road together in that instant. A vehicle going purposefully slower (or faster) than the swarm around it will be subject to hefty fines from the private companies that are under contract to operate the busiest roads, and if the fine weren’t enough, there is some anecdotal evidence that road operators tend to give less-preferential treatment to companies and individuals who have caused offense in the past. All of this is to say that while the taxi company can do some things to make the commute last long enough for the average entertainment to run to completion, there is a good reason I said it takes ‘about’ 25 minutes to complete my commute. Uncertainty is the price you pay when you use an autonomous taxi.

As other essayists in this series have said, you have a choice between convenience and certainty. Pick the convenience of a taxi at your doorstep and you lose any certainty of exactly when you will get to your destination; pick the certainty and scheduling of riding in a train (or even a bus) and you face the inconvenience of getting to and from the stations on your own. It’s an accepted fact of life for commuters in the year 2043.

Salt Lake Central Station

My commute ends at Salt Lake Central Station. If I were to pull up the view from the external cameras onto my immersive screens, I would see my car take an exit off of the freeway that enters directly into the Salt Lake Central Station compound. Salt Lake Central Station – from here on out referred to as SLCS, pronounced ‘slicks,’ by us locals – is a massive public-private transit-oriented development, conceived of by Salt Lake City as a way to lure in the tax dollars of businesses which otherwise would have sprawled out to new business parks on the fringes of the valley. Here, multiple office towers rise up above and over the railroad tracks and a series of below-grade parking structures that are tied directly into the interstate highway, so as to never clutter up the local surface streets with extra car traffic. Elevators and escalators bring workers up from the rail, bus, and car platforms to a false ground level on the second or third story, where we mingle and mix on a cascading series of pedestrian plazas and malls as we make our way to our home buildings. It is an amazing complex spanning multiple blocks and completely filling in the area between 6th West and I-15, which had been nothing more than abandoned warehouses and rail yards when humans drove their own cars.

The underground parking garages would not have been possible back then. These are short, tight, unmarked passageways navigable only by cars in self-driving mode getting constant updates from SLCS’s traffic-management computers. There is ample room dedicated to parking and storage, but for the most part these parking garages are only called ‘parking garages’ because we have no other convenient word for them. Around the periphery of these mostly unlit garages is a brightly lit wall with a cheery-colored sidewalk running along beside it. This is the pickup/drop-off (P/d) area where cars, vans, and occasionally buses disgorge and devour their loads of passengers. Time and space along these sidewalks is at a premium, so you are given only a few seconds to gather your things and get out of your car before warning alerts begin to sound from within your taxi. Taxis that dawdle too long in the P/d zone are happily given extra charges and fines by SLCS’s traffic-management software, and these fines are almost always passed directly on to the customer’s bill. This is not a place for hanging around, as many thousands of people use SLCS as the main entry-point into downtown Salt Lake City, thanks to the expansive bus-rapid transit and streetcar networks that cut through the city starting here at SLCS. Imagine the drop-off area of the busiest airport you can think of, and multiply the amount of traffic by about six; it really is that busy.

Once I’m out of the car and on the sidewalk, it pulls away behind me and slinks back into the dark sections of the garage to charge its batteries on an inductive charging pad and wait for another customer to summon it. There are no aisles and parking spaces like there used to be in garages made for humans to navigate through; instead the entire space is open except for the structural columns, leaving the self-driving vehicles to figure out on their own the most ideal parking configurations. Many times the cars fit themselves together like blocks in a Tetris game, allowing themselves to become completely surrounded by other vehicles as they wait and charge; when they need to get out, the other vehicles will open up an escape route, because none of the vehicles ever really ‘parks’ or ‘turns off.’ An electric vehicle is always on, the old saying goes, and so these cars don’t so much ‘park’ as they do ‘hibernate.’ As the vehicles shift around, great waves move throughout the garage like ripples in a pond. Humans are not allowed in this area at all; it isn’t particularly dangerous in the way that walking into a train yard is dangerous, as each of these cars is still equipped with sensors and is programmed not to run into a human – but it would severely disrupt the way the garage operates, and would cost you a hefty fine. SLCS’s computers are equipped with facial recognition software to identify rule-breakers, meaning there is an almost complete certainty that if you decide to break a rule anywhere within the complex, you will get a fine for it.

I don’t have to walk far along the cheery sidewalk before I get to a sliding glass door leading to a bank of escalators and elevators. Each door is crowned with a bank of screens identifying where you are in relation to important landmarks, and often giving specific directions to individuals as they walk under. Each door is identified by a code name – 1A, 5G, 10V, and so on – so that when you summon your car you’ll have a better idea of where to pick it up. Once your car is given access to a certain section of P/d curb at a specific time, you’ll receive a notification on your mobile screen telling you to meet your car at a certain door at a certain time – usually within 5 minutes, though on especially crowded days it can be longer. It’s sort of like catching a flight at an airport. This sort of thing stresses some people out, so many apps exist to break this information into simpler instructions that the user can follow without getting overwhelmed. ‘Turn left,’ the screen will say (or a voice will say it directly into your ear, if you’ve got the right implants), ‘then go through the green door,’ without ever a mention of that door being T12 or whatever. I don’t use those apps – because I’m too old fashioned.

And that is how I start my day. From the garage I go up the escalator, find my way through the plazas to my favorite drinks shop, get myself the carbonated/caffeinated beverage of the day, and then head up to my office for a full day’s work.

***

Autonomous vehicles and artificial intelligence

As I mentioned before, I’m a computational intelligence lawyer, meaning I evaluate the safety of artificial intelligence systems designed to operate robotically in public spaces – autonomous cars, buses, trucks, and trains, for example. And passenger drones too, but since those are so new I’ll let someone else write up an essay about those. What is important is that, if it is a robotic vehicle and it interacts with the public, I evaluate whether or not it represents a significant harm to the average reasonable member of the public. It is not an easy job, and it requires some knowledge of how autonomous systems work before I can adequately explain myself.

The first problem is that autonomous vehicles have been presented to the public in exactly the wrong way. When touting a new vehicle, the manufacturer will almost always spend all their time listing and explaining the various forms of detection a vehicle has been outfitted with. Uber, for example, debuted a car with two forward-facing radar sensors, one on each corner, and thus declared that its cars were safer than those of Tesla, who continued to use only one front radar. Google (formerly known separately as Waymo) decided a short time later to debut an updated version of its sensor suite with a third, center-mounted radar, and then made a big deal of declaring its hardware the safest of all three.

This vehicle must be the safest.

The trouble is, hardware is cheap. Software is expensive. Anyone can load up a vehicle with sensors of every kind, but unless those sensors are connected to competent software, that car isn’t going to drive itself.

Autonomy is not a detection problem. It is an artificial intelligence problem.

A common example used to explain this is a bat. When humans move through their environment they rely almost entirely on sight, which is supplemented by the accelerometers of our inner ears. This adds up to two means of detection. Bats, when they navigate through their environment, use the same sight and balance as humans do but also use echolocation – a sort of biological sonar. This adds up to three means of detection, or 150% that of humans. So, the question goes, who should be the better drivers? Humans or bats?

The answer is obviously humans. Neither humans nor bats are born with a natural ability to drive cars, but the human’s greater intelligence allows us to learn a new, highly complex behavior. Even though our means of detection number fewer than the bat’s, we are able to process and react to the inputs in ways that a bat’s small intelligence simply cannot.

The same holds true for autonomous vehicles. It would be theoretically possible to build an autonomous car that is just as safe as a human driver using nothing more than a single swiveling camera for its detection system, so long as the processing power of its on-board computer were roughly equivalent to what a human uses while driving. But because this hypothetical scenario would require a very large amount of processing power (the human brain is capable of some 38 thousand trillion operations per second – more than all but a small handful of the world’s largest supercomputers, even in my day), designers of autonomous vehicle intelligence systems take shortcuts – and the most effective of these shortcuts is a broad array of detection sensors.

It seems counterintuitive, but the more detection sensors outfitted on a car, the less processing power the car needs. There are diminishing returns, of course, but in general, for every new type of sensor you put on your car, you decrease the required amount of processing power by about 10%. The reason is that each type of sensor eliminates one of the variables the computer would otherwise need to solve for when processing the raw data. Say an autonomous car uses optics as its primary method of detection – meaning the car relies on processing the images recorded by its cameras in order to navigate. For every frame the cameras record, the computer must run an image-recognition program – identifying the lines in the road, the edges of the road, the vehicles in front, the signs on the roadside – and do this under a variety of challenging conditions, such as low light, direct sunlight, imagery blurred by rain or weather, or even simple things like the glare of an oncoming car’s headlights. Then a higher-level program must analyze these results and try to estimate the car’s position in the world: how far away is it from the cars beside it? How far away is the car ahead of it? Is that large square shape on the side of the road really a sign, or is it a truck on the road or some other obstacle? And then, once the physical measurements have been derived from the image, an even higher-level program must analyze the trends developing between images – is that car coming closer? How quickly is it closing the distance between the two cars?
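To make that division of labor concrete, here is a toy Python sketch of the three stages just described. Every function name, number, and data structure is invented for illustration; no real autonomous system looks like this.

```python
# Toy sketch of a camera-only processing pipeline: recognize, locate, track.
# All names and numbers here are invented for illustration.

def detect_objects(frame):
    """Stage 1: image recognition -- find lane lines, signs, vehicles.
    In a real system this is the expensive step (a deep neural network)."""
    return [{"label": "car", "bbox": (410, 220, 520, 300)}]

def estimate_positions(detections):
    """Stage 2: derive physical distances from pixel geometry.
    Distance must be *inferred* from apparent size -- costly and noisy."""
    return [{"label": d["label"], "distance_m": 18.5} for d in detections]

def track_trends(history, positions):
    """Stage 3: compare successive frames to get closing speeds."""
    history.append(positions)
    if len(history) < 2:
        return []
    prev, curr = history[-2], history[-1]
    dt = 1.0 / 30.0  # assume the camera records 30 frames per second
    return [
        {"label": c["label"],
         "closing_speed_mps": (p["distance_m"] - c["distance_m"]) / dt}
        for p, c in zip(prev, curr)
    ]
```

Every one of these stages must run again for every frame, which is where the processing budget disappears.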

And since most cameras operate at a rate of many frames per second, this process must be repeated multiple times every second while the vehicle is active. It becomes easy to see how most computers would very quickly run out of processing power in such conditions.

But what if the car were to also feature a radar system? Radars are fantastic devices for measuring distances and relative speeds between objects, meaning that the onboard computer wouldn’t need to analyze each image quite so much; instead of spending processing power deriving the distances between it and other cars using image recognition software, it could rely on the radar for such information and use its processing power on other tasks. This is why, in addition to the obligatory cameras and optics-based systems, most cars are equipped with radars, ultrasonic sensors, and LIDAR devices – both active and solid state. Each of these devices reduces the amount of processing power the onboard computer requires to determine its own position, speed, and direction, as well as the particulars of the environment around it.
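A minimal sketch of the trade-off, with invented numbers: the camera-only approach must infer range from pixel geometry (here, a simple pinhole-camera model with an assumed object height), while the radar simply reports it.

```python
def distance_from_camera(bbox_height_px, assumed_height_m=1.5, focal_px=900):
    """Camera-only: infer range from apparent size via a pinhole-camera model.
    Requires an assumption about the object's real height, plus the per-frame
    image processing needed to measure bbox_height_px in the first place."""
    return assumed_height_m * focal_px / bbox_height_px

def distance_from_radar(radar_return):
    """Camera+radar: range is measured directly -- almost no processing."""
    return radar_return["range_m"]

# With radar the computer skips the geometric inference entirely:
cam_est = distance_from_camera(bbox_height_px=75)   # 18.0 m, derived
radar_est = distance_from_radar({"range_m": 18.2})  # 18.2 m, measured
```

The focal length, object height, and radar reading are all made-up values; the point is only that one number is computed and the other is read.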

But still, none of this will be any good unless there is a fantastic piece of software at the end able to make sense of this information and use it to drive.

For a long time, programmers and computer scientists had thought that such a problem could be coded out line by line by human beings as more and more sub-tasks and scenarios were identified and solved. Driving, they surmised, could be broken down into a series of small tasks and likely situations that humans would be able to program the solutions to. For example, one task might be following a painted line, which is simple enough to write code for. Another task might be spotting a deer in the road and stopping for it, which is also a reasonable enough task for a human to tackle.

The problem, though, was knowing when to use each of these sub-tasks and pre-programmed responses. When should the car follow paint stripes, and when should it follow something else? What if the paint stripes were difficult to make out, or what if the road had been newly repainted and multiple new and old striping patterns existed? Should the car constantly be on the lookout for deer? How much processing power would it take to evaluate every image for a deer, just in case one happens to jump into the frame?

Why not simply generalize? Why not say ‘IF there is an obstacle, THEN stop?’ What makes an obstacle? Is the obstacle a leaf on the road or a fallen tree? Is stopping always the right choice? What about swerving?

The point is, in a world of infinite complexity, your computer would require an infinite number of pre-programmed responses. An early idea was to have the car stop every time it got overwhelmed and call its headquarters, where a licensed driver would remotely operate the vehicle until it got back into its comfort zone. This ran into the obvious problems of limited connectivity along the entirety of the world’s road networks, as well as some very serious concerns about security, hacking, and passenger safety. More fundamentally, it highlighted what makes humans different from computers – that humans can see and understand an entire situation and react to it without needing a pre-programmed response. Humans are problem solvers – and so, in order to make a vehicle totally autonomous, it would need to develop a human-like ability to detect, understand, and then solve any type of problem presented to it.
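To see why the enumerate-every-case approach breaks down, here is a deliberately brittle rule-based controller in Python. The scene flags and responses are invented; the point is how quickly an open world falls through to the catch-all.

```python
# A deliberately brittle IF/THEN controller in the style the early
# programmers imagined. Every flag and rule here is invented to show
# why enumerating pre-programmed responses cannot cover an open world.

def rule_based_drive(scene):
    if scene.get("deer_ahead"):
        return "brake"
    if scene.get("obstacle_ahead"):
        # But is the obstacle a leaf or a fallen tree? The rule can't tell.
        return "brake"
    if scene.get("lane_lines_visible"):
        return "follow_lane_lines"
    # Anything the programmers did not anticipate falls through here:
    return "stop_and_ask_headquarters"

print(rule_based_drive({"obstacle_ahead": True}))  # brake -- even for a plastic bag
print(rule_based_drive({"fog": True}))             # stop_and_ask_headquarters
```

A freshly repainted road with two sets of stripes, glare, or a plastic bag blowing across the lane either triggers the wrong rule or hits the catch-all, and the list of rules only ever grows.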

Games

Fortunately, this type of problem had already been confronted by programmers of a different sort. Long before engineers faced the problem of getting a computer to drive a car, they set themselves the challenge of getting a computer to play chess and other games. One of the earliest and most effective ways of teaching a computer to play chess was the brute force method, which relied upon the computer being able to predict every possible set of moves and then evaluating which would be most advantageous. Games were perfectly suited to this method of calculation, since the rules of chess and other games were fixed, finite, and ‘closed.’ No outside variables needed to be accounted for. The computer simply needed to calculate all the possible moves for the current and subsequent rounds and then evaluate which positions gave it the better advantage. Oftentimes the number of possibilities reached into the hundreds of thousands or even millions, depending on how far ahead the computer was programmed to calculate. Such calculations took time, but that was alright, because even human players are allotted an amount of time to perform similar calculations.
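Tic-tac-toe is small enough to show the brute-force idea in a few lines of Python: enumerate every legal continuation, score the terminal positions, and back the scores up the tree. Chess works the same way in principle; it just took decades of hardware growth to search deep enough.

```python
# Brute-force game search (minimax) on tic-tac-toe. The board is a
# 9-character string; the whole game tree is small enough to enumerate.

def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Best achievable score for `player`: +1 win, 0 draw, -1 loss."""
    w = winner(b)
    if w:
        return 1 if w == player else -1
    if " " not in b:
        return 0  # board full, no winner: draw
    opponent = "O" if player == "X" else "X"
    scores = []
    for i, cell in enumerate(b):
        if cell == " ":
            child = b[:i] + player + b[i+1:]
            # The opponent's best outcome is our worst, hence the negation.
            scores.append(-minimax(child, opponent))
    return max(scores)

# From an empty board, perfect play by both sides is a draw:
print(minimax(" " * 9, "X"))  # 0
```

Replace the 9-square board with a chess position and the search explodes into the millions of possibilities the essay mentions, which is why the approach waited on hardware.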

The rise of chess-playing computers was slow and closely mirrored the development of faster and more sophisticated hardware. Though many projects were started in the 1960s, it wasn’t until the 1970s that chess computers became good enough to compete with strong human players – and even then it wasn’t until the late 1990s that computers began to beat world champion chess players at their own game. This is because it wasn’t the software that enabled the computer to win; it was the hardware – the ability of a computer to process massive numbers of possible moves and outcomes and evaluate each one. The tipping point – when computers became undeniably better at chess than humans – came not because of an innovation in software, but rather through the slow and steady development of processing capacity.


But with the growing capacity of hardware, it was only a matter of time before better analysis and processing techniques became feasible and practical. The first new development was a technique called ‘reinforcement learning,’ in which a computer would slowly ‘learn’ to complete its task better through brute-force trial and error. This was the beginning of what we now call ‘machine learning,’ in which a computer is able to ‘teach’ itself new skills based on inputs. In reinforcement learning, the input was a simple ‘right’ or ‘wrong’ – either what the computer did was correct or it was not. Slowly, by performing every possible action and receiving feedback, the computer would begin to favor actions that it knew were likely to produce positive feedback.
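A minimal sketch of that loop – trial, feedback, and a growing preference for rewarded actions. The two-action ‘world’ and its hidden reward rule are invented here for illustration; the learner only ever sees the right/wrong signal:

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

# Two possible actions; the environment secretly rewards "brake" in
# this toy situation. The learner never sees that rule - it only
# receives 'right' (+1) or 'wrong' (-1) feedback after each action.
ACTIONS = ["brake", "accelerate"]

def feedback(action):
    return 1 if action == "brake" else -1  # hidden reward rule

values = {a: 0.0 for a in ACTIONS}  # learned preference per action

for trial in range(200):
    # Mostly exploit what currently looks best, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    # Nudge the stored estimate toward the feedback just received.
    values[action] += 0.1 * (feedback(action) - values[action])

print(max(values, key=values.get))  # the learner now favors "brake"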


The most impressive example of machine learning was the sudden rise of machine vision – the ability of a computer to recognize pieces of an image, even though to the computer the image is simply a smattering of data across an assembly of pixels. Extreme pattern recognition is required to make any sense of the data the computers receive as inputs, and no human programmer could ever be up to the task of writing ‘if-then’ statements describing every type of pattern that any type of object could ever create on a grid of pixels. But after being shown hundreds, thousands, and often millions of pictures of a specific item, the computers themselves could distill the patterns created by the item and come up with a more succinct and fundamental definition of those patterns than human programmers ever could. And, at a pace that surprised nearly everyone, computers learned to see and recognize objects.
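The idea of distilling a pattern from labeled examples can be sketched far more simply than any real vision system would allow – here as a nearest-centroid classifier over invented 3×3 ‘images’ (real machine vision uses neural networks, not this; the data and labels are made up for illustration). Training averages the examples of each class; a new image is labeled by whichever average it sits closest to:

```python
# Nearest-centroid "vision": each image is a flat list of pixel values.
# Training distills each class into the average of its examples - a
# crude stand-in for the "succinct definition" learned from data.

def centroid(images):
    """Average a list of equal-length images pixel by pixel."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

def classify(image, centroids):
    """Label an image by its nearest class centroid (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(image, centroids[label]))

# Invented 3x3 training data: vertical bars vs. horizontal bars.
vertical = [[0,1,0, 0,1,0, 0,1,0], [1,1,0, 0,1,0, 0,1,1]]
horizontal = [[0,0,0, 1,1,1, 0,0,0], [1,0,0, 1,1,1, 0,0,1]]
centroids = {"vertical": centroid(vertical),
             "horizontal": centroid(horizontal)}

print(classify([0,1,0, 1,1,0, 0,1,0], centroids))  # -> vertical
```

No rule for ‘vertical’ was ever written down; the definition lives entirely in the averaged examples.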

Another amazing possibility opened by machine learning was the ability to compete at games where even the brute-force method of computing could never be enough. The best example of such an achievement came when Google’s ‘AlphaGo’ software beat Lee Sedol in March of 2016, proving that computers could beat world champions of games that were ‘open,’ incalculable, and (practically) infinite.


This marked a significant turning point in the development of intelligent software. Using only the brute-force method of analyzing millions of possibilities, the computer was still following a program written entirely by humans. Though the outcomes could astonish even the people who had programmed the machines, if a human were to follow on paper the same process the computer had performed electronically, the same result would be reached. Computers could not ‘learn’ or be ‘taught,’ but could only be programmed to perform certain actions in certain situations. No result was entirely unexpected, because the software ran under the total control of the humans who wrote it.

With machine learning – whether by means of reinforcement learning or the more complex neural-network techniques that were to follow – control of the software is either partially or totally ceded to the computer. Computers could ‘learn’ on their own, because the point of the software was to adapt to the inputs the computer was given. It was therefore possible to ‘teach’ the computer by controlling the inputs it received. Results could be genuinely surprising because, once the software had been given a sufficient amount of input, it was no longer something that had been written by a human. Instead of certainty, the results were described by probability – much in the same way that subatomic particles, or even people, can be described.

Here is a universal truth: In certainty there is no opportunity; opportunity exists only in uncertainty.

Programmers could not be certain that their computers would behave dependably in any specific regard, because they had not actively programmed the details of their computers’ software; the computers themselves had done that. But they could show, based on previous testing, that a computer would act in a specified fashion with some measurable probability. And in this probabilistic approach, the opportunity for autonomy was opened.
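That probabilistic claim can be made concrete with nothing more than elementary statistics. A minimal sketch, assuming a test record of N trials with n correct responses (the counts below are invented), reporting the observed rate with a simple normal-approximation 95% confidence interval:

```python
import math

def behavior_estimate(correct, trials, z=1.96):
    """Observed probability of the specified behavior, with a 95%
    normal-approximation confidence interval, clipped to [0, 1]."""
    p = correct / trials
    margin = z * math.sqrt(p * (1 - p) / trials)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical test record: 9,987 correct responses in 10,000 trials.
p, lo, hi = behavior_estimate(9987, 10000)
print(f"acts as specified with probability {p:.4f} "
      f"(95% CI {lo:.4f}-{hi:.4f})")
```

Nothing in this certifies a single future action – it only bounds how often the behavior was observed, which is precisely the kind of statement the narrator says replaced certainty.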

Autonomous vehicles

Programmers could not be certain that their autonomous vehicle would stop, pull over, swerve, or even react to the presence of a deer on or near a road. They did not program a specific response for how their vehicle should react to a deer. Instead, after being shown hundreds, thousands, and eventually millions of miles driven by human-operated vehicles in all types of situations, the computers learned how humans drive and react, and learned to do it consistently. It wouldn’t matter whether the obstacle were a deer, a coconut, or a truckload of cows spilled into the street; the car would recognize that the situation was out of the ordinary and that emergency actions should be taken. But what would those emergency actions be – pulling over? Stopping? Swerving? Speeding up?

How could a reaction to an event like this possibly be pre-programmed into a computer?

This is the point where I can finally introduce my job to you. My job, if you remember, is to assess the risk that autonomous robots pose to the general public – and just like the computer programmers who have traded their certainty for probabilities, I too work extensively with the most probable outcomes.

I’ve explained all this programming theory to you because, even in my time, there are far too many people who treat autonomous robots and vehicles as though they were dumb machines that can only follow explicit instructions. These people do not understand that, in order to perform tasks at a human level, these machines must possess at the very least a human-level intelligence, if only in the narrow range of a specific task. Driverless cars are not just cars that have been pre-programmed how to drive; they are autonomous intelligent robots that are ‘smarter’ with respect to driving and navigation than at least most humans, if not all humans. So how are – or more relevantly, were – human drivers regulated? Did we investigate each crash by demanding of the driver, “You found yourself in situation 432.89, subsection 45, rule six, in which you were required to apply the brakes within 0.2 seconds and then swerve 13 degrees out of line in order to avoid potential lateral motion of the obstacle ahead of you”? No, we did not. Instead we asked ourselves whether the driver did what any average, reasonable, and responsible person would do. We judged the vehicles against ourselves. And so, in the age of autonomous vehicles, we still do the same thing. In so doing, our transportation industry is setting a precedent for how all artificially intelligent systems – narrow, broad, and presumably someday super-human – will be handled under the law; but more on that some other time.

In humans, we would not consider someone to be an average, reasonable, or responsible driver unless they held a license. A license holder has presumably fulfilled two qualifications: 1) becoming educated in the rules of driving, and 2) accumulating many hours of practice behind the wheel under the observation of a licensed driver – usually somewhere between 40 and 80 hours. As far as autonomous intelligent computer algorithms are concerned, the same pattern applies. The two questions I am to ask regarding any autonomous vehicle license are: Has the computer been given access to a database of local driving laws? And has the vehicle fulfilled its basic requirement for miles driven without a significant breach of safety?

Of course, these questions do not apply to individual vehicles, but rather to new versions of the software that runs the vehicles. And, rather than measuring the amount of practice time in two-digit numbers of hours, we look to see how many tens of millions of miles the software has driven without incident.
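In code terms, the licensing check I describe reduces to something like the sketch below. The threshold and fleet numbers are hypothetical – this is not any real regulation, just the shape of the test:

```python
# Hypothetical fleet-licensing check: a software version qualifies only
# if it has accumulated enough practice miles AND its rate of
# significant incidents per million miles stays under a threshold.

REQUIRED_MILES = 50_000_000          # assumed minimum practice miles
MAX_INCIDENTS_PER_MILLION = 0.1      # assumed safety threshold

def qualifies(miles_driven, significant_incidents):
    """True if this software version meets both licensing criteria."""
    if miles_driven < REQUIRED_MILES:
        return False
    rate = significant_incidents / (miles_driven / 1_000_000)
    return rate <= MAX_INCIDENTS_PER_MILLION

print(qualifies(80_000_000, 5))   # 0.0625 incidents/million -> True
print(qualifies(80_000_000, 20))  # 0.25 incidents/million -> False
```

Note that both criteria are statistical, not behavioral: nothing here inspects what the software would do in any particular situation, only how it has performed in aggregate.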
In the early days, when autonomous vehicles were receiving their first fleet-wide licenses, the requirements were significantly higher. Elon Musk, then CEO of Tesla Motors (as it was called then), famously said that his software would require data from at least 6 billion miles before it could be considered reliably safe. His company had an advantage in this regard, since Tesla vehicles could already be in direct communication with the company’s main computers even though they were privately owned. Other car manufacturers were bound by dealership laws, which prohibited updates and data sharing between the manufacturer and the customer – that would be unfair to the car dealers – so instead they relied upon artificial intelligence developed by third parties. Google was a leader in machine learning and complex algorithms, but until it was able to accumulate real-world miles at the rate of Tesla (whose totals grew from tens of millions of miles per day to hundreds of millions as the fleet grew larger and larger), the best it could do was to limit its vehicles to locations that had been 3-D mapped down to the centimeter. Other companies with experience in algorithms, such as Uber, were also able to carve out market shares for themselves, while companies that tried simply to buy software – or the companies that produced it – struggled mightily. The issue was not how good the software was at performing, but how good the software was at learning. How many hours or miles of driving had the software analyzed and ‘learned’ from? There is no way around it; it’s exactly like studying for a test – if you haven’t prepared your vehicle with real-world miles, it won’t have learned enough to pass a driving test.

Still, despite all this history and all the procedure in place, there is a large fraction of the public – and even many regulators – who do not understand that it is artificial intelligence driving their vehicles, not a blind series of pre-programmed if-then statements. These people continually bring up that old red-herring argument called the Trolley Problem, and try to insist that morals need to be pre-programmed into vehicles. These people are fools, because they do not realize that pre-programming morals into vehicles makes them less able to abide by those morals. It limits the intelligence that the computer is able to gain and use when confronting unfamiliar situations, and it decreases the number of situations that an autonomous vehicle can deal with.

But what, they ask, would a car do if there is a moral dilemma – a choice between running down an old lady or a child with a ball? That is an important decision that we must not allow computers to make! It leads to the complete erosion of morals from society, to a complete reliance on computers and statistics to determine who is allowed to live and die, and it makes us all less human…

These are empty arguments, and I can prove it just by asking the same question back at them: who should the car hit, the old lady or the child? Some will argue the old lady, because she’s near the end of her life, and others will argue the child, because the child is more likely to recover. But most cannot answer the question, or keep demanding details to try to justify one decision or another until they dig themselves into ridiculous holes – how many grandchildren does the old lady have? Is the child an only child, and can the parents have another one? None of these details are relevant, because the car would have no way of discerning these things from a few frames as it speeds toward them.

The whole point of the Trolley Problem is to be unsolvable, and so it can be nothing more than a dead end – a delay tactic for those who are scared of automation, robots, and artificial intelligences and are unable or unwilling to publicly articulate their concerns. And since the problem cannot be solved, it is logical to simply avoid such situations. A car that must run over anything has already failed, and it is my job to make sure that the artificial intelligences that operate our vehicles are smart enough to preemptively avoid such situations.

Statistics and probabilities are my tools. Has this company done its due diligence in developing a new version of artificial intelligence by feeding it enough real-world miles? Or are simulated miles sufficient in this case? What is the driving record of the previous intelligence system – were there many near-misses, or even accidents? Under what circumstances did the system perform the weakest, and how is that weakness being addressed? Are there reasonable expectations that such weaknesses can even be resolved? Would a judge agree with any of these answers?

As you can see, I am not shy about ambiguity or uncertainty – it is precisely that gray area that allows me to still have a decent job. But the uncertainty I deal with is the solvable and manageable kind, not the unsolvable kind like the Trolley Problem. Perhaps I am so defensive about this because most people in my time associate lawyers with delaying and dragging out the full adoption of autonomous vehicles and artificial intelligence in our society. In the mind of the general public, computers and artificial intelligences are so much more capable than the duties we give them and offer so many more advances than we are willing to accept – much in the same way that in your era people assumed that surveillance videos could be ‘zoomed and enhanced’ until any small detail was laid bare and obvious. They – the public of my time – assume that all lawyers are incentivized to argue unsolvable problems like the Trolley Problem to delay a greater rollout of artificial intelligence, because the longer we can delay it, the more money we can earn for ourselves. They are not wrong; there are many lawyers who are just as bad as, or even worse than, the stereotypical lawyer from comedy shows. But they are also not right, because the issue of artificial intelligence is vastly more complex than any one person – or perhaps any one society – can fully grasp.

We all want robotic butlers to do our chores for us. We all want flying cars. We all want jobs tailor-fit to our highest skillsets, and we all want schools that match the curriculum to the student in ways that human teachers with 20 other students in the class never could. We all want personalized health diagnostics to prescribe just the right medicines, diets, and exercise regimes that work just for us. We want our world to know us and understand us as individuals, and now that our cars can drive us around with boring regularity, it seems like these sorts of things ought to be ours as well. What most people do not realize is that each and every action our man-made machine intelligences learn to perform is not a feat of electronic puppetry, but one of trust. Trust that an unconscious machine has become fluent in the complexities of a task and capable of understanding them in ways that a conscious human cannot.

The statistics prove that this is possible to achieve – intelligences with enough learning time have outperformed humans at every task that has been contested – but they also show that intelligences that have been prematurely released into ‘the wild’ can cause a disproportionate amount of chaos. Like the designer people with simulated personalities from Douglas Adams’ ‘Young Zaphod Plays It Safe,’ these unqualified intelligences are able to move about in society undetected by the general population – not because the population cannot recognize that an intelligence system is incomplete, but because there are no recognizable signs that an intelligence system is incomplete. It is only after tragedy strikes – a car killing its occupants by driving under a truck, or head-on into an oncoming car because it got its left-hand/right-hand driving boundaries confused, for example – that the lack of proper testing is revealed.

New failure modes are discovered all the time – and will be discovered for as long as people continue to move about in vehicles. This should be reasonably expected, and a rational, reasonable person would know that there is always the probability of failure, even if such a probability is very small. The moral thing to do is to actively investigate precisely how probable any type of failure actually is, and then take the proper precautions against them.

Artificial intelligences do not make us less intelligent – by comparison or otherwise. Ceding some (or even a lot) of control to machines, computers, and statistics does not also mean ceding our moral authority to the robots. Even in the age of advanced technology we remain human beings – infinitely complex general intelligences that fit their containers, no matter how large or small these containers are. Whatever the choices we are given by our circumstances, our societies, and our technologies, the moral dilemmas will always be the same – only the scale is different. It is our choices that give us and our lives any sort of meaning at all, and in this regard – with machines there to cater to more and more of our mundane needs than ever before – we are perhaps more human than ever before.

So choose well, you readers of the past. I’ll know if you did.

