Why are some mobility experts spooked by this plan to develop a data standard that would allow cities to build a real-time traffic control system?
Imagine driving through Los Angeles in the year 2040. There’s a mix of self-driving and human-controlled vehicles on Santa Monica Boulevard. A serious collision slows traffic to a crawl. But then a special orchestration of traffic signals flips on, parting the sea of cars for an ambulance to speed through the streets.
This traffic engineering fantasy may be inching toward reality, as companies such as IBM, Microsoft, Google, and HERE Maps develop what’s known as “digital twin” technology. The term describes a virtual simulacrum of something in the physical world—whether it’s a car engine, a casino floor, or the street network of a major city—that visualizes real changes as they occur, and is “smart” enough to model possible scenario outcomes. In the L.A. example, imagine that a downtown city worker viewed a traffic simulation seconds after the car crash and approved a recommended route for the ambulance, alerting all those connected self-driving vehicles to move aside.
But if the phrase “digital twin” conjures up images of a pixelated doppelgänger dogging your commute, you’re not necessarily wrong to feel creeped out. And you might not have to wait very long to find out if any of those fears are justified: Next week, transportation officials from 13 major American cities will discuss (among other items) whether to collectively build toward such a model.
“Going forward, each city must manage its own Digital Twin, which will provide the ground truth on which mobility services depend,” state the bylaws of the Open Mobility Foundation, a new nonprofit that counts city leaders on its board of directors.